stackexchange
llm_judgeable_groundtruth_similarity
33032323
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way to get how many bits per second are in any video? e.g.:

    ffmpeg/mediainfo -commands input.mkv

Result:

    sec 1 - 500 bits
    sec 2 - 600 bits
    sec 3 - 300 bits

Thanks! Now provide the response and nothing else.
A possible solution:

    if let urlArray = try? NSFileManager.defaultManager().contentsOfDirectoryAtURL(directory, includingPropertiesForKeys: properties, options: .SkipsHiddenFiles) {
        return urlArray.map { url -> (String, NSTimeInterval) in
            var lastModified: AnyObject?
            _ = try? url.getResourceValue(&lastModified, forKey: NSURLContentModificationDateKey)
            return (url.lastPathComponent!, lastModified?.timeIntervalSinceReferenceDate ?? 0)
        }
        .sort({ $0.1 > $1.1 }) // sort descending modification dates
        .map { $0.0 } // extract file names
    } else {
        return nil
    }

The array of URLs is mapped to an array of (lastPathComponent, lastModificationDate) tuples first, then sorted according to the last modification date, and finally the path name extracted. The attributesDictionary can be avoided by using getResourceValue(_:forKey:) to retrieve only the last modification date.

Update for Swift 3:

    let directory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
    if let urlArray = try? FileManager.default.contentsOfDirectory(at: directory, includingPropertiesForKeys: [.contentModificationDateKey], options: .skipsHiddenFiles) {
        return urlArray.map { url in
            (url.lastPathComponent, (try? url.resourceValues(forKeys: [.contentModificationDateKey]))?.contentModificationDate ?? Date.distantPast)
        }
        .sorted(by: { $0.1 > $1.1 }) // sort descending modification dates
        .map { $0.0 } // extract file names
    } else {
        return nil
    }
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33032323', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4199855/']}
jdg_78341
stackexchange
llm_judgeable_groundtruth_similarity
5395908
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to pass multiple string values to a Button click event handler in C#? Now provide the response and nothing else.
If you know the strings beforehand you can use an anonymous function to capture and pass the strings:

    string s1 = ...;
    string s2 = ...;
    button.Click += (sender, e) => { MyHandler(sender, e, s1, s2); };

    void MyHandler(object sender, EventArgs e, string s1, string s2) {
        ...
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5395908', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/239991/']}
jdg_78342
stackexchange
llm_judgeable_groundtruth_similarity
33398211
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Problem: I have a screenshot app that uses a floating overlay service for controls, and screen cast API Media Project Manager to get access to the screen. Sometimes when a device is low on memory Android restarts the service, and I lose my media projection. The only way I know of to reacquire a new media projection is to re-open an Activity that requests the permissions, and that would be the end of it, except for one problem. Certain apps, particularly games, seem to listen for when they lose the foreground process, and pause, or otherwise reset. This is annoying. Here's my ideal scenerio. Service opens, if the user has selected " checked don't ask me again" in permission request dialog, it gets the media projection in a way that does not disturb the current foreground activity. How do I get a media projection manager without disturbing the current foreground process? Is there either a way to get media projection from a straight service, or a way to open activity in the background from a service? Currently I use this code in Activity to get the MediaProjectionManager @TargetApi(Build.VERSION_CODES.LOLLIPOP)protected void getScreenShotPermission() { if (isLollipopOrNewer) { mediaProjectionManager = (MediaProjectionManager) getContext().getSystemService(MEDIA_PROJECTION_SERVICE); startActivityForResult(mediaProjectionManager.createScreenCaptureIntent(), 1); }}@TargetApi(Build.VERSION_CODES.LOLLIPOP)@Overridepublic void onActivityResult(int requestCode, int resultCode, Intent data) { if (requestCode == 1) { if (resultCode == Activity.RESULT_OK) { mediaProjection = mediaProjectionManager.getMediaProjection(resultCode, data); this.finish(); } }} Now provide the response and nothing else.
So I came back to this because it was dumb and it was bugging me, and I figured it out! In another class (in mine it's the application class) put this code:

    private static Intent screenshotPermission = null;

    protected static void getScreenshotPermission() {
        try {
            if (hasScreenshotPermission()) {
                if (null != mediaProjection) {
                    mediaProjection.stop();
                    mediaProjection = null;
                }
                mediaProjection = mediaProjectionManager.getMediaProjection(Activity.RESULT_OK, (Intent) screenshotPermission.clone());
            } else {
                openScreenshotPermissionRequester();
            }
        } catch (final RuntimeException ignored) {
            openScreenshotPermissionRequester();
        }
    }

    protected static void openScreenshotPermissionRequester() {
        final Intent intent = new Intent(context, AcquireScreenshotPermissionIntent.class);
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        context.startActivity(intent);
    }

    protected static void setScreenshotPermission(final Intent permissionIntent) {
        screenshotPermission = permissionIntent;
    }

In your activity class handling the initial request (in my case: AcquireScreenshotPermissionIntent) put this code in your onActivityResult:

    @Override
    public void onActivityResult(final int requestCode, final int resultCode, final Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (1 == requestCode) {
            if (Activity.RESULT_OK == resultCode) {
                setScreenshotPermission((Intent) data.clone());
            } else if (Activity.RESULT_CANCELED == resultCode) {
                setScreenshotPermission(null);
                log("no access");
            }
        }
        finish();
    }

Simply call getScreenshotPermission() whenever you need permission, then use the resulting mediaProjection object. Here's how it works: the magic token is some data included in the Intent. What I tried initially was putting the result intent in a global variable and using it to create the media projection from a non-activity class. The problem is it would fail. What I eventually figured out is that the token gets consumed when you create a media projection with it. Passing it as an argument or assigning it to a new variable just passes a pointer to it, and it still gets consumed. What you need to do instead is use object.clone(). This makes a new copy of the token; the new copy gets consumed, and you can create additional copies as needed, as long as you don't consume the original. As a bonus your app only has to ask for screenshot permission once per launch. If something else takes over the screencast, or the Android memory manager gets you, you're covered. You can create a new virtual screen without sending onPause or onStop events to other apps.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33398211', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/539651/']}
jdg_78343
stackexchange
llm_judgeable_groundtruth_similarity
1650574
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the $n\times n$ matrices with elements in $\mathbb{Z}_{29}$. How many of these are invertible? In total there are $29^{n^2}$ matrices of of dimension $n\times n$. Now I need to find how many of these are invertible modulo $29$. My plan of attack is to find how many are not invertible and then subtract this amount. A matrix $A$ is not invertible modulo 29 if and only if $$\det{(A)}\in\mathbb{Z}_{29}^\times\iff \gcd{(\det{A},29)}=1.$$ Now I'm not really sure how to advance any further. Clearly if $\det{A}<29$ then the gcd is $1$ since $29$ is prime, but how do we analyse larger determinants? Now provide the response and nothing else.
Since from differentiation we know that $\frac{d}{dx}(\ln x)= \frac{1}{x}$ for $x>0$, we can conclude that $\int \frac{1}{x}\, dx = \ln x + c$ where $x>0$ and $c$ is an integration constant. So your problem reduces to $$\int_n^{2n} \frac{1}{x}\, dx = \left[\ln x\right]_n^{2n}= \ln 2n - \ln n = \ln 2 + \ln n - \ln n = \ln 2$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1650574', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/117438/']}
jdg_78344
stackexchange
llm_judgeable_groundtruth_similarity
20587354
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been trying to call the Entity Manager in a constructor:

    function __construct()
    {
        $this->getDoctrine()->getEntityManager();
        ...

but, as I've seen in this answer: Stackoverflow question, it can't be done. So I wonder if there is a way to achieve it, as I have to call it often, and want to do some stuff in the constructor after getting the repository. Edit: I've tried with @MKhalidJunaid's answer:

    //src/MSD/HomeBundle/Resources/config/services.yml
    services:
        imageTransController.custom.service:
            class: MSD\HomeBundle\Controller\ImageTransController
            arguments:
                EntityManager: "@doctrine.orm.entity_manager"

    //app/config/config.php
    imports:
        - { resource: parameters.yml }
        - { resource: security.yml }
        - { resource: doctrine_extensions.yml }
        - { resource: "@MSDHomeBundle/Resources/config/services.yml" }

    //src/MSD/HomeBundle/Controller/ImageTransController.php
    namespace MSD\HomeBundle\Controller;

    use Doctrine\ORM\EntityManager;
    use MSD\HomeBundle\Entity\Imagen as Imagen;
    use MSD\HomeBundle\Controller\HomeController as HomeController;

    class ImageTransController extends HomeController
    {
        protected $em;

        function __construct(EntityManager $entityManager)
        {
            ...

but I'm getting this error:

    Catchable Fatal Error: Argument 1 passed to MSD\HomeBundle\Controller\ImageTransController::__construct() must be an instance of Doctrine\ORM\EntityManager, none given, called in /home/manolo/MiServer/itransformer/app/cache/dev/jms_diextra/controller_injectors/MSDHomeBundleControllerImageTransController.php on line 13 and defined in /home/manolo/MiServer/itransformer/src/MSD/HomeBundle/Controller/ImageTransController.php line 38 (500 Internal Server Error)

New attempt: I've also tried with @praxmatig's answer:

    //services.yml
    parameters:
        msd.controller.imagetrans.class: MSD\HomeBundle\Controller\ImageTransController
    services:
        msd.imagetrans.controller:
            class: "%msd.controller.imagetrans.class%"
            arguments: [ @doctrine.orm.entity_manager ]

    //ImageTransController.php
    namespace MSD\HomeBundle\Controller;

    use Doctrine\ORM\EntityManager;

    class ImageTransController
    {
        protected $em;

        function __construct(EntityManager $em)
        {
            $this->em = $em;
        }
        ...
    }

    //routing.yml
    msd_home_cambiardimensiones:
        pattern: /cambiardimensiones
        defaults: { _controller: MSDHomeBundle:msd.imagetrans.controller:cambiardimensionesAction }

but I get this error:

    Unable to find controller "MSDHomeBundle:msd.imagetrans.controller" - class "MSD\HomeBundle\Controller\msd.imagetrans.controllerController" does not exist. (500 Internal Server Error)

Now provide the response and nothing else.
You need to make a service for your class and pass the Doctrine entity manager as the argument doctrine.orm.entity_manager, like in services.yml:

    services:
        test.custom.service:
            class: Test\YourBundleName\Yourfoldernameinbundle\Test
            arguments: [ "@doctrine.orm.entity_manager" ]

You must import your services.yml in config.yml:

    imports:
        - { resource: "@TestYourBundleName/Resources/config/services.yml" }

Then in your class's constructor, get the entity manager as an argument:

    use Doctrine\ORM\EntityManager;

    class Test
    {
        protected $em;

        public function __construct(EntityManager $entityManager)
        {
            $this->em = $entityManager;
        }
    }

Hope this makes sense.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20587354', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2571840/']}
jdg_78345
stackexchange
llm_judgeable_groundtruth_similarity
645053
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I, by analog means, use a single-pole multi-position rotary switch to both mimic a 100 kΩ logarithmic pot AND progressively / evenly light a circle of LED position indicators from 1 to X and then back to 1 (rotary switch goes back to 1 after X)? Pretend the diodes surround the rotary switch and will indicate movement of the switch until all are lit. To complicate matters, I'd like to use some Edgelec common cathode tri-color LEDs which are R-G-B constant lighting (Red: DC 2.0 - 2.2 V; Blue & Green: DC 3.0 - 3.2 V (IF = 20 mA) / 0.06 Watts / 2-pin / DIP LEDs) so it is a cathode with 3 anodes in one component. I would like to have the first third light up the blue LED, then switch over all bulbs to the green LED for the 2nd third also switching the first positions from blue to green, and similarly for the final third of the rotation all LED lights switch to red. As a resource: https://learn.sparkfun.com/tutorials/rotary-switch-potentiometer-hookup-guide/all and: https://cdn.sparkfun.com/datasheets/BreakoutBoards/Rotary_Switch_Potentiometer_v10.pdf By using resistors to step gain up in 6db increments, I believe that using the rotary switch I can replicate the behavior of an analog logarithmic pot. My goal is to create a stepped 100 kΩ logarithmic rotary pot for use as a master volume control on a 4-channel, powered, miniature summing mixer which has a preamp, a 4 in to 1 out mixer, and a power amp section - each driven by one of three 4580 op amp chips. Each of the 4 inputs has its own volume control, but there is no master volume. I already have some single-pole, 8-position rotary switches https://www.adafruit.com/product/2925 but I am considering replacing them with 12-position switches to have a finer degree of gain adjustment. The mixer is powered by 12 Vdc so that will be my source. 
https://www.amplifiedparts.com/products/switch-rotary-1-pole-12-positions-14-shaft Since in theory, however many positions are on the switch, it should be the same logic which supports or disqualifies the possibility of achieving the desired result. I suppose using the 8-position as frame of reference would be best - but that is to say I am undecided on which switch will ultimately be used, so more a general understanding of "is there a way to do both with one switch" is the gist of the question. Unfortunately, I am overambitious and under-educated so please be gentle. As for lighting the LEDs with a rotary switch in the simplest manner possible, I am referencing this Stack Exchange post: wire rotary switch with leds that turn one at a time I do realize that my questions have very little to do with that answer, it's simply a frame of reference. Thanks for your patience. Please explain any answer as though Homer Simpson was your target audience. Now provide the response and nothing else.
You're not paying the big bucks for the engineering so much as the calibration certificates and the traceability back to standards. The reason you're having trouble finding BNC standards is that BNC is not a repeatable connector. It cannot be used for precision work. You might just as well use DIY standards, because you are not going to get high repeatability. N-type is far better; it's similar to BNC internally, but with a screw thread to hold it together. APC-7 is best. SMA is very common on VNAs.

With a short, the only thing you need to worry about is distance to the reflection plane. You can measure this physically, or calibrate it against a better standard. Dielectric will of course increase the distance. A simple open radiates, making its |S11| < 1.000. You can fix this by making a shielded open. It also suffers from a capacitive fringing field, which you can't fix. There are tables to estimate how much this shifts the reflection plane from the physical end of the coax. You would certainly get more repeatable results, more accurate results, and likely more usable results, if you did all your R&D in SMA, even if the final implementation was in BNC.

If you do want to make your own BNC standards, then probably the best route is to use connectors to 0.141" or 0.25" semi-rigid. As a hack, I've generally bought a ready-made cable and then cut it in half. Cut your short and open cables to the same length. Solder a short across the end of the cable for the short. This is your good length reference. Shield the open by soldering a copper foil thimble round the end. For shielding, it doesn't matter how far the shield is from the cable open end; however, it will affect the estimate of end capacitance. Keep it three or more cable diameters away for negligible impact on the end capacitance, and then look up tables for the end correction.
You could make a load, ideally using two 100 Ω resistors in parallel, but it would be easier and more reliable to buy a well-specified load (which wouldn't have much general use) or an attenuator (which will be useful elsewhere in the lab). An unterminated attenuator will have a return loss of twice its attenuation. A loaded attenuator will improve the performance of any rough load it's connected to by twice its attenuation. Generally you'll find the S11 of low value attenuators specified better than high value ones. I've sometimes used a 6dB + 20dB pair of pads, in that order, in a ghetto calibration set. Before you put in a lot of work, try this experiment with your VNA (does it have N, SMA, or APC connectors?) Using a high reflection device (so short or open), break and make a connection several times, to observe the repeatability. Now put a BNC adaptor on it, and do the same with a BNC device. Compare. Can you live with the levels of repeatability you observe?
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/645053', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/327648/']}
jdg_78346
stackexchange
llm_judgeable_groundtruth_similarity
598028
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am running Red Hat Enterprise Linux Server release 7.8 (Maipo) and when I try to run. yum install epel-release No package epel-release available. Error: Nothing to do I need to install this for DKMS Now provide the response and nothing else.
Run:

    yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

You have to do this because the epel-release package is directly available in the CentOS base repository, but not in the RHEL repository. According to the documentation, it is also recommended to enable some optional repositories:

    subscription-manager repos --enable "rhel-*-optional-rpms" --enable "rhel-*-extras-rpms" --enable "rhel-ha-for-rhel-*-server-rpms"
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/598028', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/145040/']}
jdg_78347
stackexchange
llm_judgeable_groundtruth_similarity
951990
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for a way to alter the class of an ActionLink in the controller based on specific criteria (not found in the model so I can't write a conditional in the view itself). But i can't seem to find the ViewData("name") that allows me to work w/ this element (I assume this is possible, but I'm missing something). I have an html helper like so in my view <%=Html.ActionLink("View", "Index", "Home")%> But in my controller I'm not sure how to reference this, like the below to add an attribute like class or onclick. ViewData("View").attributes.add("class", "active") Now provide the response and nothing else.
You don't set CSS attributes from the controller, since that's a concern of the view. You can add HTML attributes to the ActionLink like this:

    <%=Html.ActionLink("View Cases", "Index", "Home", new { @class="active" })%>

Alternatively, you can build your anchors "manually":

    <a href="<%=Url.Action("Index", "Home")%>" class="active">View Cases</a>

Or if you need to conditionally set the active class:

    <% var activeClass = someCondition ? "active" : ""; %>
    <a href="<%=Url.Action("Index", "Home")%>" class="<%=activeClass%>">View Cases</a>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/951990', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2701/']}
jdg_78348
stackexchange
llm_judgeable_groundtruth_similarity
10351312
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was reading about python functionsand saw this code: def happyBirthday(person): print("Happy Birthday to you!") print("Happy Birthday to you!") print("Happy Birthday, dear " + person + ".") print("Happy Birthday to you!")happyBirthday('Emily')happyBirthday('Andre') I couldn't understand why these brackets were being used for the print commands and so I removed them. def happyBirthday(person): print "Happy Birthday to you!" print "Happy Birthday to you!" print "Happy Birthday, dear " + person + "." print "Happy Birthday to you!")happyBirthday('Emily')happyBirthday('Andre') Even after removing those brackets I am gettingthe exact same results, so I am not sure which one is corrector whether I should use those brackets at all.Is it really necessary to use those brackets? One more thing.when I use the brackets then the +person+ gives the result as Happy Birthday, dear Andre.but when I use ,person, then it givesthe result as <'Happy Birthday,dear ',' 'Andre','.'> I am unable to understand these differences in the results.Could you shed some light on this? Now provide the response and nothing else.
Is it really necessary to use those brackets? In Python 2.x, print is a statement, and the brackets are optional. In Python 3.x, print() is a function, and the brackets are mandatory. It is considered good practice to use brackets even in Python 2.x, to ease eventual transition to Python 3.x.

I am unable to understand these differences in the results. Could you shed some light on this? Here is what happens when you print several comma-separated things in Python 2.x:

    In [1]: print(1,2,3)
    (1, 2, 3)

The above is interpreted as the print statement followed by a single argument, which is a tuple. The tuple is rendered with parentheses and commas.

    In [2]: print 1,2,3
    1 2 3

The above is interpreted as the print statement followed by three arguments. Each argument is printed out separately, with spaces between them. Neither version is great as far as compatibility with Python 3 is concerned: the first version is rendered differently, and the second is simply not valid Python 3 code. With this in mind, I recommend that you stick with:

    print("Happy Birthday, dear " + person + ".")

This produces exactly the same results in both Python 2.x and Python 3.x.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10351312', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1297440/']}
jdg_78349
stackexchange
llm_judgeable_groundtruth_similarity
1773875
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a callback for the html() function or a way to check that it is finished. For example: $("#some_div").html('something here');if($("#some_div").html() == ''){//do something here} I am setting the html of an element from a post. Sometimes it doesnt load so when it doesnt load I would like for it to do something, is this possible. The way I have it now it always does the other thing because it hasnt set the html by the time it checks that it has html. Now provide the response and nothing else.
html() is a synchronous operation. The actual updating of the DOM depends on what your html content is. If you have <img> or <iframe> tags, they will take time to load, but the next statement following html() should immediately see the new html contents. Did you mean load() instead?

[Edit] Based on your comment, it is probably your $.ajax call that is failing, not html(). Try attaching a failure handler to the ajax call and retrying from there. Although, if your server-side code silently fails with a 200 code, the failure event doesn't get called. In that case, you can validate the html value before you set it to the element.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1773875', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/160463/']}
jdg_78350
stackexchange
llm_judgeable_groundtruth_similarity
10757565
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've done the tutorial at my blog, so I know how to make a stretchable button that can display the bottom (stack) view controller's title. But what I was hoping to do is have icons (like a house for HOME) and no text, and not resize. Using my custom image and this code below, I get a stretched version (not wanted) with title over top (not wanted), and it does tint/highlight when clicked (is good):

    UIImage *backButtonImage = [UIImage imageNamed:@"backButton_30.png"];
    [[UIBarButtonItem appearance] setBackButtonBackgroundImage:backButtonImage forState:UIControlStateNormal barMetrics:UIBarMetricsDefault];

(screen shot 1) Now, I've searched on here and read all the similar questions, which return old answers and have strange results for me. Here is the code I tried:

    UIImage *backButtonImage = [UIImage imageNamed:@"backButton_30.png"];
    UIBarButtonItem *backButton = [[UIBarButtonItem alloc] initWithImage:backButtonImage style:UIBarButtonItemStyleBordered target:nil action:nil];
    self.navigationItem.backBarButtonItem = backButton;

This method doesn't stretch out my custom image button (is good), nor does it show text (what I want); however, there is still the original blue button under it (WTF), and my custom button doesn't tint when clicked; only the blue button under it does! (screen shot 2) Please help, what am I missing?

UPDATE: I've fixed it up a bit by using a resizable image. This forces it not to 'stretch':

    UIImage *backButtonHomeImage = [[UIImage imageNamed:@"backButtonHomeWhite_30.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(0, 0, 0, 0)];
    [[UIBarButtonItem appearance] setBackButtonBackgroundImage:backButtonHomeImage forState:UIControlStateNormal barMetrics:UIBarMetricsDefault];

To fix the title showing up on the button I had to do:

    self.title = @" ";

Now this is a bit of a dirty fix, but it seems to be working. The only problem left now is that I want a different back button on different views, and this method is causing some trouble: the last view that sets the button overrides all other views. So in the end, depending on how you navigate through the app, returning to a previous view has the wrong back button, and it never resets to the correct one.

UPDATE 2: POTENTIAL IDEA: Would the following be a reasonable solution, or is it a hack that is liable to break something? Hiding the default back button, like so:

    [self.navigationItem setHidesBackButton:YES animated:NO];

...and then using a custom UIBarButtonItem, with a button in the style I actually want placed in the location of the back button, that sends a popViewControllerAnimated: message to the UINavigationController when tapped. If you know of a more robust solution please do share, thank you. Now provide the response and nothing else.
Assuming that your current solution

    UIImage *backButtonHomeImage = [[UIImage imageNamed:@"backButtonHomeWhite_30.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(0, 0, 0, 0)];
    [[UIBarButtonItem appearance] setBackButtonBackgroundImage:backButtonHomeImage forState:UIControlStateNormal barMetrics:UIBarMetricsDefault];

is acceptable to you, and the only problem left is how to update that button's appearance when you go back and forth between your views, an approach that could work is executing the code above in each of your controllers' viewWillAppear: method:

    - (void)viewWillAppear:(BOOL)animated {
        UIImage *backButtonHomeImage = [[UIImage imageNamed:@"backButtonHomeWhite_30.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(0, 0, 0, 0)];
        [[UIBarButtonItem appearance] setBackButtonBackgroundImage:backButtonHomeImage forState:UIControlStateNormal barMetrics:UIBarMetricsDefault];
    }

If you are not satisfied with your current approach to having a custom UIBarButtonItem, the way to go is initializing your bar button item with initWithCustomView:. In this case, you can specify, e.g., a UIImageView with the image you like and it should work:

    UIBarButtonItem *backButton = [[UIBarButtonItem alloc] initWithCustomView:[UIImageView ...]];

Hope this helps.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10757565', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1385780/']}
jdg_78351
stackexchange
llm_judgeable_groundtruth_similarity
15771232
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the GO tutorial, we have this slide: Goroutines package mainimport ( "fmt" "time")func say(s string) { for i := 0; i < 5; i++ { time.Sleep(100 * time.Millisecond) fmt.Println(s) }}func main() { go say("world") say("hello")} Running this code produces expected results ("world" and "hello" written to the screen interchangeably 5 times). However, if we comment out time.Sleep (and consequently, the "time" line of the import) and run the program again, we are left with only "hello" written to the screen five times. What is so important about time.Sleep that saves the goroutine from dying? Now provide the response and nothing else.
If you remove the time.Sleep you don't give the say("world") goroutine a chance to run. The goroutine scheduler is not preemptive. Your goroutines have to give up control before another goroutine will run. One way to give up control is to run time.Sleep . If you take out the time.Sleep from the say function then the primary goroutine runs 5 times without giving up control to the secondary goroutine and then when the primary goroutine returns from say the program exits because there is nothing to keep the program alive.
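Not part of the original answer, but for illustration: if you want the program to wait for the second goroutine deterministically instead of relying on time.Sleep, a common pattern is to block on a sync.WaitGroup. The sketch below (sayBoth and the out channel are hypothetical names introduced for the demo) collects the printed lines on a channel so the ordering guarantee is easy to see:

```go
package main

import (
	"fmt"
	"sync"
)

// sayBoth mimics the tutorial's main: it produces "world" five times from a
// second goroutine and "hello" five times from the calling goroutine, but
// instead of sleeping it blocks on a sync.WaitGroup until the goroutine
// finishes. Lines go onto a channel so they can be inspected afterwards.
func sayBoth() []string {
	out := make(chan string, 10) // buffered: holds all 10 lines, so no send blocks
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done() // signal completion even if the loop panics
		for i := 0; i < 5; i++ {
			out <- "world"
		}
	}()
	for i := 0; i < 5; i++ {
		out <- "hello"
	}
	wg.Wait() // park the main goroutine until Done is called
	close(out)

	var lines []string
	for s := range out {
		lines = append(lines, s)
	}
	return lines
}

func main() {
	for _, s := range sayBoth() {
		fmt.Println(s)
	}
}
```

This guarantees all ten lines are produced before the program exits; the interleaving of "hello" and "world" is still up to the scheduler.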
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15771232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/962674/']}
jdg_78352
stackexchange
llm_judgeable_groundtruth_similarity
1207958
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I like a lot of what I've read about D:

- Unified documentation. (That would make my job a lot easier.)
- Testing capability built in to the language.
- Debug code support in the language.
- Forward declarations. (I always thought it was stupid to declare the same function twice.)
- Built-in features to replace the preprocessor.
- Modules.
- Typedef used for proper type checking instead of aliasing.
- Nested functions. (Cough PASCAL Cough)
- In and out parameters. (How obvious is that!)
- Supports low-level programming - embedded systems, oh yeah!

However:

- Can D support an embedded system that's not going to be running an OS?
- Does the outright declaration that it doesn't support 16-bit processors preclude it entirely from embedded applications running on such machines?
- Sometimes you don't need a hammer to solve your problem. Garbage collection is great on Windows or Linux, but unfortunately embedded applications sometimes must do explicit memory management.
- Array bounds checking: you love it, you hate it. Great for design assurance, but not always permissible for performance reasons.
- What are the implications for multithreading support on an embedded system not running an OS? We have a customer that doesn't even like interrupts, much less an OS or multithreading.
- Is there a D-Lite for embedded systems?

So basically, is D suitable for embedded systems with only a few megabytes (sometimes less than a megabyte), not running an OS, where max memory usage must be known at compile time (per requirements) and possibly on something smaller than a 32-bit processor? I'm very interested in some of the features, but I get the impression it's aimed at desktop application developers. What is it specifically that makes it unsuitable for a 16-bit implementation?
(Assuming the 16 bit architecture could address sufficient amounts of memory to hold the runtimes, either in flash memory or RAM.) 32 bit values could still be calculated, albeit slower than 16 bit and requiring more operations, using library code. Now provide the response and nothing else.
I have to say that the short answer to this question is "No". If your machines are 16 bit, you'll have big problems fitting D into it - it is explicitly not designed for it. D is not a light language in itself; it generates a lot of runtime type info that normally is linked into your app, and that also is needed for typesafe variadics (and thus the standard formatting features, be it Tango or Phobos). This means that even the smallest applications are surprisingly large in size, and may thus disqualify D from the systems with low RAM. Also, D with a runtime as a shared lib (which could alleviate some of these issues) has been little tested. All current D libraries require a C standard library below them, and thus typically also an OS, so even that works against using D. However, there do exist experimental kernels in D, so it is not impossible per se. There just wouldn't be any libraries for it, as of today. I would personally like to see you succeed, but doubt that it will be easy work.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1207958', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/134597/']}
jdg_78353
stackexchange
llm_judgeable_groundtruth_similarity
23001515
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Please do not copy and paste this as your own work This is what I got so far which is asking the user for the number of exam scores and depending on the score placing them into categories. After the run, I added how much scores there is for each category. Now I am trying to find the average score for each category after the run. { int score = 0; int repeat = 0; cout << "How many exam scores?" << endl; cin >> repeat; int outstanding = 0; int satisfactory = 0; int unsatisfactory = 0; int sum = 0; cout << "Please enter an exam score:Don't Copy and Paste for Homework " << endl; cin >> score; for(int count = 0; count < repeat; count++) { sum += score if ( (score <= 100) && (score >= 90) ) outstanding++; else if ( (score <= 89) && (score >=60) ) satisfactory++; else if ( (score >= 0) && (score <= 59) ) unsatisfactory++; else if (score > 100) cout << "ERROR EXAM THEY CHEATED GRADE INVALID" << endl; cout << endl<< "Enter the next score: "; cin >> score; } cout << "__________________________________________" << endl; cout << "The number of Outstanding scores is: " << outstanding << endl; cout << "The average of Outstanding scores is: " << outstanding/sum << endl; cout << "__________________________________________" << endl; cout << "The number of Satisfactory scores is: " << satisfactory << endl; cout << "The number of Satisfactory scores is: " << satisfactory/sum << endl; cout << "__________________________________________" << endl; cout << "The number of Unsatisfactory scores is: " << unsatisfactory << endl; cout << "The number of Unsatisfactory scores is: " << unsatisfactory/sum << endl; cout << "__________________________________________" << endl; system("PAUSE"); return EXIT_SUCCESS;} Now provide the response and nothing else.
You did everything except copying the new pixel value back to the image. This line takes a copy of the pixel into a local variable: Vec3b color = image.at<Vec3b>(Point(x,y)); So, after changing color as you require, just set it back like this: image.at<Vec3b>(Point(x,y)) = color; So, in full, something like this: Mat image = img;for(int y=0;y<img.rows;y++){ for(int x=0;x<img.cols;x++) { // get pixel Vec3b & color = image.at<Vec3b>(y,x); // ... do something to the color .... color[0] = 13; color[1] = 13; color[2] = 13; // set pixel //image.at<Vec3b>(Point(x,y)) = color; //if you copy value }}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/23001515', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3521657/']}
jdg_78354
stackexchange
llm_judgeable_groundtruth_similarity
644468
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I am using /run/shm/myfolder for nginx micro caching, also keeping ngx_pagespeed files there. I am wondering if there will be a speed increase if I go from relatime to noatime ? It's for HDD, but on a tmpfs what will it change? tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=9929500k) Now provide the response and nothing else.
The reason for using noatime or relatime on an actual disk is that without it reads will also involve writing to disk. This additional writing can slow down the system, cause disks to spin up which would otherwise have remained spun down for much longer, as well as cause wear on flash media. But with tmpfs no data ever has to be written to disk. The data may be written to swap, but only if needed to free up RAM for other data. This means frequent updates on tmpfs are no more expensive than frequent reads because in both cases the effect will be that the data is kept in memory. Moreover tmpfs only swaps out the file contents. The meta data (including timestamps) simply stays in RAM. Thus we are really just talking about whether to update a time stamp in memory, which is very cheap compared to the actual read operation triggering it in the first place. For those reasons it is unlikely that there will be a measurable performance difference between using noatime , relatime , or neither on a tmpfs.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/644468', 'https://serverfault.com', 'https://serverfault.com/users/199585/']}
jdg_78355
stackexchange
llm_judgeable_groundtruth_similarity
1591144
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $\alpha>1$ , $K>1$ and $n \in \mathbb{N}^+$ . What is a closed form solution to a tough integral? $$I(\alpha,K,n)=-\int_{-\infty }^{\infty } \frac{2 i e^{-i K u} (K u-i) \Bigl[\alpha \, (-i u)^{\alpha } \,\Gamma (-\alpha ,-i u)\Bigr]^n}{u^2} \, du,$$ where $\Gamma(.,.)$ is the incomplete Gamma function: $\Gamma (a,z)=\int _z^{\infty }d t\, t^{a-1} e^{-t}$ . I tried all manner of substitutions and various combinations of integration by parts. Now provide the response and nothing else.
We may use the following integral representation for the incomplete gamma function ( DLMF ref. see identity 8.6.4 ): $$\Gamma{\left(\alpha,z\right)}=\frac{z^{\alpha}e^{-z}}{\Gamma{\left(1-\alpha\right)}}\int_{0}^{\infty}\frac{t^{-\alpha}e^{-t}}{z+t}\,\mathrm{d}t;~~~\small{\left|\arg{\left(z\right)}\right|<\pi,~\Re{\left(\alpha\right)}<1}.$$ Given $n\in\mathbb{N}^{+}\land a,b,\epsilon\in\mathbb{R}\land1<a\land1\le b\le n\land0<\epsilon$, define $I_{n}{\left(a,b;\epsilon\right)}$ via the integral $$I_{n}{\left(a,b;\epsilon\right)}=\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{2ie^{-ibx}\left(bx-i\right)\left[a\left(-ix\right)^{a}\,\Gamma{\left(-a,-ix\right)}\right]^{n}}{x^{2}+\epsilon^{2}}.$$ Then, $$\begin{align}I_{n}{\left(a,b;\epsilon\right)}&=\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{2ie^{-ibx}\left(bx-i\right)\left[a\left(-ix\right)^{a}\,\Gamma{\left(-a,-ix\right)}\right]^{n}}{x^{2}+\epsilon^{2}}\\&=\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{2ie^{-ibx}\left(bx-i\right)}{x^{2}+\epsilon^{2}}\left[\frac{e^{ix}}{\Gamma{\left(a\right)}}\int_{0}^{\infty}\frac{t^{a}e^{-t}}{t-ix}\,\mathrm{d}t\right]^{n}\\&=\frac{1}{\left[\Gamma{\left(a\right)}\right]^{n}}\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{2ie^{-ibx}\left(bx-i\right)}{x^{2}+\epsilon^{2}}e^{inx}\left[\int_{0}^{\infty}\frac{t^{a}e^{-t}}{t-ix}\,\mathrm{d}t\right]^{n}\\&=\frac{2i}{\left[\Gamma{\left(a\right)}\right]^{n}}\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{\left(bx-i\right)e^{i\left(n-b\right)x}}{x^{2}+\epsilon^{2}}\left[\int_{0}^{\infty}\frac{t^{a}e^{-t}}{t-ix}\,\mathrm{d}t\right]^{n}\\&=\frac{2i}{\left[\Gamma{\left(a\right)}\right]^{n}}\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{\left(bx-i\right)e^{i\left(n-b\right)x}}{x^{2}+\epsilon^{2}}\prod_{k=1}^{n}\int_{0}^{\infty}\mathrm{d}t_{k}\,\frac{t_{k}^{a}e^{-t_{k}}}{t_{k}-ix}\\&=\small{\frac{2i}{\left[\Gamma{\left(a\right)}\right]^{n}}\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{\left(bx-i\right)e^{i\left(n-b\right)x}}{x^{2}+\epsilon^{2}}\int_{[0,\infty)^{n}}\mathrm{d}^{n}\mathbf{t}\,e^{-\sum_{k=1}^{n}t_{k}}\prod_{k=1}^{n}\left(\frac{t_{k}^{a}}{t_{k}-ix}\right)}\\&=\small{\frac{2i}{\left[\Gamma{\left(a\right)}\right]^{n}}\int_{[0,\infty)^{n}}\mathrm{d}^{n}\mathbf{t}\,\left(\prod_{k=1}^{n}t_{k}^{a}\right)e^{-\sum_{k=1}^{n}t_{k}}\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{\left(bx-i\right)e^{i\left(n-b\right)x}}{\left(x^{2}+\epsilon^{2}\right)\prod_{k=1}^{n}\left(t_{k}-ix\right)}}\\&=\small{\frac{2i}{\left[\Gamma{\left(a\right)}\right]^{n}}\int_{[0,\infty)^{n}}\mathrm{d}^{n}\mathbf{t}\,\frac{\left(\prod_{k=1}^{n}t_{k}^{a}\right)}{\exp{\left(\sum_{k=1}^{n}t_{k}\right)}}\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{\left(-bx-i\right)e^{-i\left(n-b\right)x}}{\left(x^{2}+\epsilon^{2}\right)\prod_{k=1}^{n}\left(t_{k}+ix\right)}}\\&=:\frac{2i}{\left[\Gamma{\left(a\right)}\right]^{n}}\int_{[0,\infty)^{n}}\mathrm{d}^{n}\mathbf{t}\,\frac{\left(\prod_{k=1}^{n}t_{k}^{a}\right)}{\exp{\left(\sum_{k=1}^{n}t_{k}\right)}}\,f_{n}{\left(b;\epsilon\right)},\\\end{align}$$ where for $n\in\mathbb{N}^{+}\land b,\epsilon\in\mathbb{R}\land1\le b\le n\land0<\epsilon$ we've defined the auxiliary function denoting the innermost integration, $$f_{n}{\left(b;\epsilon\right)}:=\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{\left(-bx-i\right)e^{-i\left(n-b\right)x}}{\left(x^{2}+\epsilon^{2}\right)\prod_{k=1}^{n}\left(t_{k}+ix\right)}.$$ For the integration over $x$, I will appeal to a well-suited proposition from Gradshteyn's Tables . Now, I usually try to make my posts as self-contained as possible by proving any non-trivial lemmas I plan on using, rather than taking the lazy way out and simply citing the result from an outside source. But I can't resist doing so, partly out of laziness, but mostly just to show off how Gradshteyn can on occasion be eerily clairvoyant. 
Gradshteyn 3.386: Given the conditions $$-1<\Re{\left(\nu_{0}\right)}\land0<\Re{\left(\beta_{k}\right)}\land\sum_{k=0}^{n}\Re{\left(\nu_{k}\right)}<1\land0<p,$$ we have the following two results: $$\int_{-\infty}^{\infty}\frac{\left(ix\right)^{\nu_{0}}e^{-ipx}\prod_{k=1}^{n}\left(\beta_{k}+ix\right)^{\nu_{k}}}{\beta_{0}-ix}\mathrm{d}x=2\pi e^{-\beta_{0}p}\beta_{0}^{\nu_{0}}\prod_{k=1}^{n}\left(\beta_{0}+\beta_{k}\right)^{\nu_{k}},$$ and $$\int_{-\infty}^{\infty}\frac{\left(ix\right)^{\nu_{0}}e^{-ipx}\prod_{k=1}^{n}\left(\beta_{k}+ix\right)^{\nu_{k}}}{\beta_{0}+ix}\mathrm{d}x=0.$$ The following partial fraction decomposition is easily verified: $$\frac{-bx-i}{x^{2}+\epsilon^{2}}=\frac{1}{2i\epsilon}\left[\frac{\left(1-b\epsilon\right)}{\epsilon-ix}+\frac{\left(1+b\epsilon\right)}{\epsilon+ix}\right].$$ Then, assuming $0<\Re{\left(t_{k}\right)}\land b<n$, $$\begin{align}f_{n}{\left(b;\epsilon\right)}&=\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{\left(-bx-i\right)e^{-i\left(n-b\right)x}}{\left(x^{2}+\epsilon^{2}\right)\prod_{k=1}^{n}\left(t_{k}+ix\right)}\\&=\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{1}{2i\epsilon}\left[\frac{\left(1-b\epsilon\right)}{\epsilon-ix}+\frac{\left(1+b\epsilon\right)}{\epsilon+ix}\right]\frac{e^{-i\left(n-b\right)x}}{\prod_{k=1}^{n}\left(t_{k}+ix\right)}\\&=\frac{1-b\epsilon}{2i\epsilon}\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{e^{-i\left(n-b\right)x}}{\left(\epsilon-ix\right)\prod_{k=1}^{n}\left(t_{k}+ix\right)}\\&~~~~~+\frac{1+b\epsilon}{2i\epsilon}\int_{-\infty}^{\infty}\mathrm{d}x\,\frac{e^{-i\left(n-b\right)x}}{\left(\epsilon+ix\right)\prod_{k=1}^{n}\left(t_{k}+ix\right)}\\&=\frac{1-b\epsilon}{2i\epsilon}\cdot\frac{2\pi e^{-\left(n-b\right)\epsilon}}{\prod_{k=1}^{n}\left(\epsilon+t_{k}\right)}.\\\end{align}$$ Thus, 
$$\begin{align}I_{n}{\left(a,b;\epsilon\right)}&=\frac{2i}{\left[\Gamma{\left(a\right)}\right]^{n}}\int_{[0,\infty)^{n}}\mathrm{d}^{n}\mathbf{t}\,\frac{\left(\prod_{k=1}^{n}t_{k}^{a}\right)}{\exp{\left(\sum_{k=1}^{n}t_{k}\right)}}\,f_{n}{\left(b;\epsilon\right)}\\&=\frac{2\pi\left(1-b\epsilon\right)e^{-\left(n-b\right)\epsilon}}{\left[\Gamma{\left(a\right)}\right]^{n}\epsilon}\int_{[0,\infty)^{n}}\mathrm{d}^{n}\mathbf{t}\,\frac{\left(\prod_{k=1}^{n}t_{k}^{a}\right)}{\exp{\left(\sum_{k=1}^{n}t_{k}\right)}}\cdot\frac{1}{\prod_{k=1}^{n}\left(\epsilon+t_{k}\right)}\\&=\frac{2\pi\left(1-b\epsilon\right)e^{-\left(n-b\right)\epsilon}}{\left[\Gamma{\left(a\right)}\right]^{n}\epsilon}\int_{[0,\infty)^{n}}\mathrm{d}^{n}\mathbf{t}\,\prod_{k=1}^{n}\left(\frac{t_{k}^{a}e^{-t_{k}}}{\epsilon+t_{k}}\right)\\&=\frac{2\pi\left(1-b\epsilon\right)e^{-\left(n-b\right)\epsilon}}{\left[\Gamma{\left(a\right)}\right]^{n}\epsilon}\left[\int_{0}^{\infty}\mathrm{d}t\,\left(\frac{t^{a}e^{-t}}{\epsilon+t}\right)\right]^{n}\\&=\frac{2\pi\left(1-b\epsilon\right)e^{-\left(n-b\right)\epsilon}}{\left[\Gamma{\left(a\right)}\right]^{n}\epsilon}\left[\epsilon^{a}e^{\epsilon}\,\Gamma{\left(1+a\right)}\,\Gamma{\left(-a,\epsilon\right)}\right]^{n}\\&=2\pi a^{n}\left(1-b\epsilon\right)\epsilon^{na-1}e^{b\epsilon}\left[\Gamma{\left(-a,\epsilon\right)}\right]^{n}.\\\end{align}$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1591144', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/88078/']}
jdg_78356
stackexchange
llm_judgeable_groundtruth_similarity
10364017
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to change the background color? I selected Profile > Background > solid color , but where can I select the color? Now provide the response and nothing else.
There are two well-supported ways to change the background color of an xterm (remember xterm != Terminal). A. Set the color when you create the xterm, e.g.: % xterm -bg yellow & OR B. Edit .Xdefaults to include a color specification, such as the following: XTerm*background:yellow % vi ~/.Xdefaults (pick your favorite editor) This can also be done by typing the following into an xterm: % cat >> ~/.Xdefaults <RETURN> XTerm*background:yellow <RETURN> <CTRL-D> Whichever way you change .Xdefaults, the change will not take effect until either: you run xrdb ~/.Xdefaults , or your X server is restarted (probably the easiest way to do that is to reboot your machine, or log out). Then it will be set once and for all.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10364017', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/248959/']}
jdg_78357
stackexchange
llm_judgeable_groundtruth_similarity
35730615
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have created a hash map and an iterator using the below lines: std::map<const char*,vaLueClass *> myCache;std::map<const char*,vaLueClass *>::iterator myCacheIterator; Then I insert into this map using the below line: myCache[anotherObject->getStringKey()] = new vaLueClass(anotherObj1->getIntAttr(), anotherObj1-->getIntAttr()); Then whenever I tried to search if an ENTRY for a particular string exist in this map or nut using below lines, it always enters the IF block which in other words it does not find any entries inside this map. myCacheIterator= myCache.find(sampleObject->getstringKey()); NOTE: here sampleObject->getstringKey() returns the same key which has been inserted earlier. if (myCacheIterator.operator ==(myCache.end())){ // this block means that no matched entry is found inside the map myCache} Also, is this the proper way to create and use std::map in C++ ? If not then please suggest one. Also, I have not used the keyword new to create the std::map object. Now provide the response and nothing else.
In a std::map , the keys are compared using the less-than operator < when performing a search. Since you're storing const char* 's as keys, this means that the lookups will compare the pointers themselves rather than the strings they point to, so if you don't pass in the exact pointer used to insert into the map, the lookup won't find anything. I think the easiest fix here is to use std::strings as your keys, since the < operator on std::string actually compares the underlying text. That should fix your problem pretty quickly.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35730615', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3243499/']}
jdg_78358
stackexchange
llm_judgeable_groundtruth_similarity
46458487
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am replacing my old JSON parsing code with Swift's Codable and am running into a bit of a snag. I guess it isn't as much a Codable question as it is a DateFormatter question. Start with a struct struct JustADate: Codable { var date: Date } and a json string let json = """ { "date": "2017-06-19T18:43:19Z" }""" now lets decode let decoder = JSONDecoder()decoder.dateDecodingStrategy = .iso8601let data = json.data(using: .utf8)!let justADate = try! decoder.decode(JustADate.self, from: data) //all good But if we change the date so that it has fractional seconds, for example: let json = """ { "date": "2017-06-19T18:43:19.532Z" }""" Now it breaks. The dates sometimes come back with fractional seconds and sometimes do not. The way I used to solve it was in my mapping code I had a transform function that tried both dateFormats with and without the fractional seconds. I am not quite sure how to approach it using Codable however. Any suggestions? Now provide the response and nothing else.
You can use two different date formatters (with and without fraction seconds) and create a custom DateDecodingStrategy. In case of failure when parsing the date returned by the API you can throw a DecodingError as suggested by @PauloMattos in comments: iOS 9, macOS 10.9, tvOS 9, watchOS 2, Xcode 9 or later The custom ISO8601 DateFormatter: extension Formatter { static let iso8601withFractionalSeconds: DateFormatter = { let formatter = DateFormatter() formatter.calendar = Calendar(identifier: .iso8601) formatter.locale = Locale(identifier: "en_US_POSIX") formatter.timeZone = TimeZone(secondsFromGMT: 0) formatter.dateFormat = "yyyy-MM-dd'T'HH:mm:ss.SSSXXXXX" return formatter }() static let iso8601: DateFormatter = { let formatter = DateFormatter() formatter.calendar = Calendar(identifier: .iso8601) formatter.locale = Locale(identifier: "en_US_POSIX") formatter.timeZone = TimeZone(secondsFromGMT: 0) formatter.dateFormat = "yyyy-MM-dd'T'HH:mm:ssXXXXX" return formatter }()} The custom DateDecodingStrategy : extension JSONDecoder.DateDecodingStrategy { static let customISO8601 = custom { let container = try $0.singleValueContainer() let string = try container.decode(String.self) if let date = Formatter.iso8601withFractionalSeconds.date(from: string) ?? 
Formatter.iso8601.date(from: string) { return date } throw DecodingError.dataCorruptedError(in: container, debugDescription: "Invalid date: \(string)") }} The custom DateEncodingStrategy : extension JSONEncoder.DateEncodingStrategy { static let customISO8601 = custom { var container = $1.singleValueContainer() try container.encode(Formatter.iso8601withFractionalSeconds.string(from: $0)) }} edit/update : Xcode 10 • Swift 4.2 or later • iOS 11.2.1 or later ISO8601DateFormatter now supports formatOptions .withFractionalSeconds : extension Formatter { static let iso8601withFractionalSeconds: ISO8601DateFormatter = { let formatter = ISO8601DateFormatter() formatter.formatOptions = [.withInternetDateTime, .withFractionalSeconds] return formatter }() static let iso8601: ISO8601DateFormatter = { let formatter = ISO8601DateFormatter() formatter.formatOptions = [.withInternetDateTime] return formatter }()} The customs DateDecodingStrategy and DateEncodingStrategy would be the same as shown above. // Playground testingstruct ISODates: Codable { let dateWith9FS: Date let dateWith3FS: Date let dateWith2FS: Date let dateWithoutFS: Date} let isoDatesJSON = """{"dateWith9FS": "2017-06-19T18:43:19.532123456Z","dateWith3FS": "2017-06-19T18:43:19.532Z","dateWith2FS": "2017-06-19T18:43:19.53Z","dateWithoutFS": "2017-06-19T18:43:19Z",}""" let isoDatesData = Data(isoDatesJSON.utf8)let decoder = JSONDecoder()decoder.dateDecodingStrategy = .customISO8601do { let isoDates = try decoder.decode(ISODates.self, from: isoDatesData) print(Formatter.iso8601withFractionalSeconds.string(from: isoDates.dateWith9FS)) // 2017-06-19T18:43:19.532Z print(Formatter.iso8601withFractionalSeconds.string(from: isoDates.dateWith3FS)) // 2017-06-19T18:43:19.532Z print(Formatter.iso8601withFractionalSeconds.string(from: isoDates.dateWith2FS)) // 2017-06-19T18:43:19.530Z print(Formatter.iso8601withFractionalSeconds.string(from: isoDates.dateWithoutFS)) // 2017-06-19T18:43:19.000Z} catch { print(error)}
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/46458487', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/744413/']}
jdg_78359
stackexchange
llm_judgeable_groundtruth_similarity
2799
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to reports, there were 4 helicopters used in the raid on Osama Bin Laden. Is it conceivable that even one helicopter could be flown so close to the capital of another country without raising alarm bells? Considering these were military helicopters and there were four of them, I am skeptical that this would be allowed. Surely the airspace in Pakistan is monitored just as well as other countries? Now provide the response and nothing else.
EDIT TO ADD INFORMATION REQUESTED. If a helicopter is flying low, and not "squawking", there is no way that any civilian radar will pick it up. Especially in a mountainous region such as Pakistan. Simple mechanics of radar as illustrated by this picture from Answers.com . Note that mountains between the radar site and aircraft will further block the ability of the radar to detect an aircraft. Keep in mind that if they can't be seen, they can't get shot at either. And these are some of the best helicopter pilots on the planet. Most radars are unable to detect anything flying contours, but I would wager that for part of this they were flying nap of the earth . The below image is taken from the Global Security Website ( the exact image is from Figure 28 on this page ) where they discuss many modes of flight for helicopter safety from enemy fire. Even larger aircraft like the FB-111 would use this technique to avoid detection without the need for stealth technology. So even though the airspace is monitored, if they can't be seen, it doesn't matter. Also, I don't think this was "allowed" or "disallowed" by the Pakistani government. Some covert operations are carried out, and then back-briefed if the target (such as Osama bin Laden) is important enough. ADDED INFO: Now, several folks have asked about the helicopters used, and some of their performance characteristics. The most likely aircraft (as reported in a couple of other answers as well) is the MH-60 Pave Hawk (a Blackhawk variant modified for special operations). Again, to quote Global Security, the performance characteristics are: Primary Function Infiltration, exfiltration and resupply of special operations forces in day, night or marginal weather conditions. 
Power Plant Two General Electric T700-GE-01C engines Thrust 1,630 shaft horsepower, each engine Length 64 feet, 8 inches (17.1 meters) Height 16 feet, 8 inches (4.4 meters) Rotary Diameter 53 feet, 7 inches (14.1 meters) Speed 184 mph (294.4 kph) Maximum Takeoff Weight 22,000 pounds (9,900 kilograms) Range 445 nautical miles; 504 statute miles ( unlimited with air refueling ) Armament Two 7.62mm mini-guns Crew Two pilots, one flight engineer and one gunner In particular, note that these aircraft are mid-air refuelable from a KC-130 (NOT KC-135), thus they have a nearly unlimited range. I highly doubt that they took off from anywhere inside Pakistan (i.e. Ghazi) but rather originated in Afghanistan. I cannot say where exactly though, but no matter where they took off from, if they received refueling prior to entering Pakistan, the range is more than adequate to get to Abbottabad and back on one tank of gas (looking at google maps, it appears that the distance is less than 350 KM from Kabul, or about 200 miles). So a little less than 400 mile round trip, at about 200 MPH would be about 2 hours total (add in the actual assault and there you have your timeline). These figures are approximate though because things change with load-out and other configurations.
And thanks to Kit Sunde, we have further info: Here's Pakistan denying having known about the raid, "Bin Laden: Pakistan intelligence agency admits failures" , BBC which also states: "US helicopters entered Pakistani airspace making use of blind spots in the radar coverage due to hilly terrain." Feel free to leave more questions if you have them.
{}
{'log_upvote_score': 9, 'links': ['https://skeptics.stackexchange.com/questions/2799', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/374/']}
jdg_78360
stackexchange
llm_judgeable_groundtruth_similarity
552
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How do I substitute z^2->x in the following polynomial z^4+z^2+4 ? z^4+z^2+4 /. z^2->x gives z^4+x+4 Now provide the response and nothing else.
The reason why the replacement doesn't work is that replacement rules are not mathematical replacements, but pure structural replacements. Therefore the replacement z^2->x just looks for occurrences of the pattern z^2 and replaces that with x . Now z^4 doesn't match that pattern. Also note that rules operate on the internal form, which doesn't always match the displayed form. For example, one would expect a-2b /. 2b->c to result in a-c , but it actually results in a-2b again, because internally the expression reads Plus[a, Times[-2, b]] (you can see that by applying FullForm ), while 2b is Times[2,b] . To do the replacement wanted, one has to use a method which is aware of the mathematics instead of just the structure. One possibility is Solve[p==z^4+z^2+4 && x==z^2, {p}, {z}] which means "Solve the equations given for p while eliminating z". The result then is {{p->4+x+x^2}} Note that the curly braces around z are mandatory because otherwise Mathematica interprets it as domain, resulting in an error message because z is of course no valid domain. Also note that the documentation page of Solve omits the possibility of giving a list of variables to eliminate as third argument (at least I didn't find it). However, you'll find it in a Mathematica tutorial on eliminating variables (but there they use the third argument without braces, which at least for me results in an error message, as written above).
{}
{'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/552', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/82/']}
jdg_78361
stackexchange
llm_judgeable_groundtruth_similarity
23777423
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: is there any way to use .aar library (with resources) in eclipse ant project? I know that there is no simply way to use .aar format like in gradle, but maybe there is some workaround to unzip it and import it manually? Now provide the response and nothing else.
This solution was helpful for me. EDIT: A .aar file is simply a zip file with the following contents: AndroidManifest.xml (required file) classes.jar (required file) res/ (required folder) R.txt (required file) assets/ (optional folder) libs/*.jar (optional folder) jni//*.so (optional folder) proguard.txt (optional file) lint.jar (optional file) You see, within the .aar file you have everything you need to use the library in Eclipse. To create the library: CREATE a new project (hereafter, the library project ) in your workspace. Do not forget to mark it as a library. Leave the src folder of the library project empty. Decompress the .aar file. You can rename it to .zip and unzip it, or use any tool. Copy the classes.jar file into the libs folder of the library project . Replace the res folder of the library project with the res folder from the .aar file. You've now created a project that contains almost everything you need. Next, let's see how to configure your project to reference the library project. In the project where you want to use the library (henceforth, the target project ), add the library project as a dependency. Open the AndroidManifest.xml inside the .aar file and make sure to copy everything it needs (permissions, activities, services, receivers ...) into the AndroidManifest.xml of the target project . If there is one, copy the entire contents of the assets folder of the .aar file into the assets folder of the target project . If there is one, copy the entire contents of the libs folder of the .aar file into the libs folder of the target project . Clean the workspace and recompile. http://www.iphonedroid.com/blog/utilizar-ficheros-aar-en-eclipse/#.Vh3krye1Gko
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23777423', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/733270/']}
jdg_78362
stackexchange
llm_judgeable_groundtruth_similarity
110478
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Most of the time RS-232 and UART come together in serial communication theories. Are they both the same? From my readings I concluded UART is a hardware form of the RS-232 protocol. Am I correct? Now provide the response and nothing else.
No, UART and RS-232 are not the same. UART is responsible for sending and receiving a sequence of bits. At the output of a UART these bits are usually represented by logic level voltages. These bits can become RS-232, RS-422, RS-485, or perhaps some proprietary spec. RS-232 specifies voltage levels . Notice that some of these voltage levels are negative, and they can also reach ±15V. The larger voltage swing makes RS-232 more resistant to interference (albeit only to some extent). A microcontroller UART cannot generate such voltage levels by itself. This is done with the help of an additional component: an RS-232 line driver. A classic example of an RS-232 line driver is MAX232 . If you go through the datasheet, you'll notice that this IC has a charge pump, which generates ±10V from +5V. ( source )
{}
{'log_upvote_score': 8, 'links': ['https://electronics.stackexchange.com/questions/110478', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/37488/']}
jdg_78363
stackexchange
llm_judgeable_groundtruth_similarity
13440
Below is a question asked on the forum quantumcomputing.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When trying to import qiskit I obtain following error: /opt/anaconda3/lib/python3.8/site-packages/qiskit/__init__.py:63: RuntimeWarning: Could not import the Aer provider from the qiskit-aer package. Install qiskit-aer or check your installation. warnings.warn('Could not import the Aer provider from the qiskit-aer '/opt/anaconda3/lib/python3.8/site-packages/qiskit/__init__.py:73: RuntimeWarning: Could not import the IBMQ provider from the qiskit-ibmq-provider package. Install qiskit-ibmq-provider or check your installation. warnings.warn('Could not import the IBMQ provider from the '---------------------------------------------------------------------------ModuleNotFoundError Traceback (most recent call last)<ipython-input-3-578b7f7e9727> in <module>----> 1 import qiskit/opt/anaconda3/lib/python3.8/site-packages/qiskit/__init__.py in <module> 78 # Moved to after IBMQ and Aer imports due to import issues 79 # with other modules that check for IBMQ (tools)---> 80 from qiskit.execute import execute # noqa 81 from qiskit.compiler import transpile, assemble, schedule # noqa 82 /opt/anaconda3/lib/python3.8/site-packages/qiskit/execute.py in <module> 24 import logging 25 from time import time---> 26 from qiskit.compiler import transpile, assemble, schedule 27 from qiskit.qobj.utils import MeasLevel, MeasReturnType 28 from qiskit.pulse import Schedule/opt/anaconda3/lib/python3.8/site-packages/qiskit/compiler/__init__.py in <module> 33 34 from .assemble import assemble---> 35 from .transpile import transpile 36 from .schedule import schedule/opt/anaconda3/lib/python3.8/site-packages/qiskit/compiler/transpile.py in <module> 22 from qiskit.providers.models import BackendProperties 23 from qiskit.providers.models.backendproperties import Gate---> 24 from qiskit.transpiler import Layout, CouplingMap, PropertySet, 
PassManager 25 from qiskit.transpiler.basepasses import BasePass 26 from qiskit.dagcircuit import DAGCircuit/opt/anaconda3/lib/python3.8/site-packages/qiskit/transpiler/__init__.py in <module> 412 """ 413 --> 414 from .runningpassmanager import FlowController 415 from .passmanager import PassManager 416 from .passmanager_config import PassManagerConfig/opt/anaconda3/lib/python3.8/site-packages/qiskit/transpiler/runningpassmanager.py in <module> 21 from time import time 22 ---> 23 from qiskit.dagcircuit import DAGCircuit 24 from qiskit.converters import circuit_to_dag, dag_to_circuit 25 from .propertyset import PropertySet/opt/anaconda3/lib/python3.8/site-packages/qiskit/dagcircuit/__init__.py in <module> 39 DAGCircuitError 40 """---> 41 from .dagcircuit import DAGCircuit 42 from .dagnode import DAGNode 43 from .dagdepnode import DAGDepNode/opt/anaconda3/lib/python3.8/site-packages/qiskit/dagcircuit/dagcircuit.py in <module> 29 import math 30 ---> 31 import retworkx as rx 32 import networkx as nx 33 /opt/anaconda3/lib/python3.8/site-packages/retworkx/__init__.py in <module> 7 # that they have been altered from the originals. 8 ----> 9 from .retworkx import * 10 11 ModuleNotFoundError: No module named 'retworkx.retworkx' Have I installed Qiskit to the target directory incorrectly?I attempted to install Qiskit in Terminal using: python3 -m pip install qiskit -t /opt/anaconda3/lib/python3.8/site-packages . I navigated to the path manually and I could find all the qiskit modules, including the retworkx modules (whatever it may be). If anybody has experienced this sort of error before and happens to know a fix, I would appreciate it very much. Now provide the response and nothing else.
Note that your current definitions of the projection matrices $\{P_{1},P_{2},...,P_{n}\}$ are actually not projection matrices, since $P_{i}^{2} = I \not= P_{i} \,\, \forall i$ . What works 'better' is if you have something like: \begin{equation}\begin{split}P_{1}^{+1} =& |0\rangle\langle 0 | \otimes I \otimes I....\otimes I \\P_{1}^{-1} =& |1\rangle\langle 1 | \otimes I \otimes I....\otimes I \\P_{2}^{+1} =& I \otimes |0\rangle\langle 0 | \otimes I....\otimes I \\P_{2}^{-1} =& I \otimes |1\rangle\langle 1 | \otimes I....\otimes I \\& \vdots \\P_{n}^{+1} =& I \otimes I....\otimes I \otimes |0\rangle\langle 0|\\P_{n}^{-1} =& I \otimes I.... \otimes I \otimes |1\rangle\langle 1 |\\\end{split}\end{equation} However, a PVM must have that $\sum_{i = 0}^{2n-1} P_{i} = I$ , which is clearly not the case here! One could solve this by renormalizing, but there is another thing missing here: these projectors actually don't account for any correlations that the measurements might have. A better 'choice' is therefore the measurement operators $Z_{n} = Z \otimes Z \otimes Z ... \otimes Z$ . This operator has $2^{n}$ eigenvectors: $$Z_{n} = \sum_{i \in \{0,1\}^{n}} m_{i} |i\rangle\langle i|,$$ where $m_{i} = 1 - 2p_{m} = \pm 1$ with $p_{m}$ the parity of the bitstring $i$ (i.e. $m_{i} = +1$ if the parity is even and $m_{i} = -1$ when the parity is odd). The problem here is that the measurement might only return the measurement outcome (the eigenvalue $\pm 1$ ), so that you only learn the parity of the measured state. If you want to tell all the different states apart, it is better to associate each state $|i\rangle$ with its own measurement outcome. If $i_{d}$ is the decimal representation of the bitstring $i$ , a clear choice is to associate this decimal representation with the state $|i\rangle$ as the measurement outcome. The observable $D$ for this measurement is then: $$D = \sum_{i} i_{d} |i\rangle\langle i|.$$ This is also the content of the answer by DaftWullie .
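As a side note (my own illustrative sketch, not part of the answer above): since both $Z_{n}$ and $D$ are diagonal in the computational basis, their diagonals can be tabulated in plain Python, which makes the difference — parity-only outcomes versus one outcome per basis state — explicit:

```python
from itertools import product

def z_n_diagonal(n):
    """Diagonal of Z (x) ... (x) Z (n factors) in the computational basis:
    the entry for bitstring i is +1 for even parity, -1 for odd parity."""
    return [1 - 2 * (sum(bits) % 2) for bits in product([0, 1], repeat=n)]

def d_diagonal(n):
    """Diagonal of D = sum_i i_d |i><i|: each basis state |i> is tagged
    with the decimal value i_d of its bitstring."""
    return [int("".join(map(str, bits)), 2) for bits in product([0, 1], repeat=n)]

# Z (x) Z on two qubits distinguishes only parity...
print(z_n_diagonal(2))  # [1, -1, -1, 1]
# ...while D assigns every basis state its own outcome.
print(d_diagonal(2))    # [0, 1, 2, 3]
```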
{}
{'log_upvote_score': 4, 'links': ['https://quantumcomputing.stackexchange.com/questions/13440', 'https://quantumcomputing.stackexchange.com', 'https://quantumcomputing.stackexchange.com/users/12809/']}
jdg_78364
stackexchange
llm_judgeable_groundtruth_similarity
17601615
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have created a JavaScript variable and when I click on the button it should increment by 1, but its not happening. Here's manifest.json . { "name":"Facebook", "version":"1.0", "description":"My Facebook Profile", "manifest_version":2, "browser_action":{ "default_icon":"google-plus-red-128.png", "default_popup":"hello.html" }} Here is the code for the html page <!DOCTYPE html><html><head><script>var a=0;function count(){ a++; document.getElementById("demo").innerHTML=a; return a;}</script></head><body><p id="demo">=a</p><button type="button" onclick="count()">Count</button></body></html> I want the extension to show me the value of a and increment it by one each time I click on the extension or the button Now provide the response and nothing else.
Your code is not working because it violates the default Content Security Policy . I've created a screencast of one minute to show what's wrong: First, I've shown how to debug the problem. Right-click on your popup button, and click on "Inspect popup" . After doing that, you will see the following error message: Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self' chrome-extension-resource:". This explains that your code is not working, because it violates the default CSP: Inline JavaScript will not be executed . To solve the problem, you have to remove all inline JavaScript from your HTML file, and put it in a separate JS file. The result is shown below: hello.html (popup page) <!DOCTYPE html><html><head></head><body><p id="demo">=a</p><button type="button" id="do-count" >Count</button> <script src="popup.js"></script> </body></html> popup.js var a=0;function count () { a++; document.getElementById('demo'). textContent = a;} document.getElementById('do-count').onclick = count; Note that I've replaced innerHTML with textContent . Learn to use textContent instead of innerHTML when you intend to change the text. In this simple example it does not matter, but in more complex applications, it might become a security issue in the form of XSS.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/17601615', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2293974/']}
jdg_78365
stackexchange
llm_judgeable_groundtruth_similarity
23188888
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i'm currently finishing my website and all the stuff works just fine but i keep getting a error. It don't break anything but it's annoying to see.I have been trying alot of stuff but i need some new eyes on this problem :/ Error PHP Fatal error: Cannot use object of type PDOException as array in /home/thecodin/public_html/memberlist.php on line 114 Code Line 113: foreach($MemberList as $MemberListEach) {Line 114: $MemberGroup = $MemberListEach['Group'];Line 115: $MemberListGroup = $Class_Users->group_info($MemberGroup); Database Structure http://gyazo.com/785f780e6b62df6136087070d7c69c65 Member List Class public function Member_List($offset, $max){ try { // Run Query - Member List $member_list = $this->db->prepare("SELECT * FROM `Users` ORDER BY `Group` DESC LIMIT ".$offset.",".$max.""); $member_list->execute(); $member_list_fetch = $member_list->fetchAll(); return $member_list_fetch; } catch(PDOException $e) { return array("ERROR", $e); } } Group Info Class public function group_info($id){ try { // Run Query - Group Info $group_info = $this->db->prepare("SELECT * FROM `Groups` WHERE `GID`=:id"); $group_info->bindParam(':id', $id); $group_info->execute(); $group_info_rows = $group_info->fetch(); return $group_info_rows; } catch(PDOException $e) { return array("ERROR", $e); } } Now provide the response and nothing else.
Few problems, you created your indexes on the foo collection of the foo database, but are querying the bar collection . You need to be on the correct collection. Reading the document you have inserted you need to add a "2dsphere" index to support the geoJson objects . This index needs to be on the "point" element of your documents, so try db.bar.createIndex({point:"2dsphere"}); You can then query as follows by providing a geoJson obj for the query: db.bar.find( { point : { $near : { $geometry : { type : "Point" , coordinates : [-84.27326978424058, 30.443902444762696] }, $maxDistance : 1 } } })
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/23188888', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3554929/']}
jdg_78366
stackexchange
llm_judgeable_groundtruth_similarity
20794832
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm teaching myself C++ and I'm a bit confused about pointers (specifically in the following source code). But first, I proceed with showing you what I know (and then contrasting the code against this because I feel as if there are some contradictions going on). What I know: int Age = 30;int* pointer = &Age;cout << "The location of variable Age is: " << pointer << endl;cout << "The value stored in this location is: " << *pointer << endl; Pointers hold memory addresses. Using the indirection (dereference) operator (the *), you can access what is stored in memory location of the pointer. Onto the code in this book I'm having trouble understanding... cout << "Enter your name: ";string name;getline(cin, name); //gets full line up to NULL terminating characterint CharsToAllocate = name.length() + 1; //calculates length of string input //adds one onto it to adjust for NULL characterchar* CopyOfName = new char[CharsToAllocate];// pointer to char's called CopyOfName, is given the memory address of the //beginning of a block//of memory enough to fit CharsToAllocate. Why we added 1? Because char's need a //NULL terminating character (\0)strcpy(CopyOfName, name.c_str()); //copies the string name, into a pointer?cout << "Dynamically allocated buffer contains: " << CopyOfName << endl;delete[] CopyOfName; //always delete a pointer assigned by new to prevent memory leaks Output: Enter your name: AdamDynamically allocated buffer contains: Adam The comments in the above code are my comments. My problem begins with strcpy . Why is name.c_str() copied into a pointer CopyOfName ? Does this mean that all strings are essential pointers? So like string testing = "Hello world";Is actually a pointer pointing to the memory location where "H" is stored? Next, why is it in the print out statement using CopyOfName and not *CopyOfName ? 
Pointers hold memory addresses? Using *CopyOfName would print out the contents of the memory location. I tried this in Code::Blocks and if the input text was "Hello World." Using *CopyOfName in the print out statement would just give an "H". This makes sense since when I declared that I needed a memory block with the 'new' thing, this actually returns a pointer to the first part of the dynamically allocated memory block. The only way I can reconcile this is if a string is actually a pointer. string testing = "Confused";cout << testing << endl; would print out the word "Confused" However, if I try to compile string testing = "Confused";cout << *testing; I get an error message. Basically, to summarize my question, I'm trying to understand the code with strcpy and the cout statement. Now provide the response and nothing else.
It seems like you understand what C-style strings are, but to summarize, they are just arrays of characters in memory, by convention terminated by a nul character \0 . Usually they are referenced via a char* pointing to the first letter in the string. When they are printed, typically the characters of the string are printed starting from the first, and printing (or copying, etc.) stops when the \0 terminator is reached. An std::string is a class that (typically) wraps a C-style string. This means that a std::string object (typically) has a private C-style string that is used to implement its functionality. The function std::string::c_str() returns a pointer to this underlying C-style string. Let's suppose that char *str; points to a C-style string. If you attempt to run cout << *str << endl; , you noticed that only the first character is printed. That is because of C++'s function overloading. The data type of *str is char , so the char version of cout is called and faithfully prints the single character *str . For compatibility with C-style strings, the version of cout that takes a char* as an argument treats the pointer as a C-style string for printing purposes. If you cout an int* , for example, the underlying int will not be printed. Edit: Another comment: The reason that your attempt to dereference an std::string object failed is that, indeed, it is not a pointer. You could dereference the return value of std::string::c_str() , and you would get back the first char of the string. Related: How is std::string implemented? .
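To illustrate the convention outside of C++ (a Python model of my own — treat it purely as a picture of the memory layout, not as how either language is implemented): a C-style string is just bytes in memory, and "printing a char*" means walking from the pointed-to byte until the NUL terminator:

```python
def read_c_string(buf, start=0):
    """Model of printing a char*: walk memory from `start`
    until the NUL terminator and collect the characters."""
    out = bytearray()
    i = start
    while buf[i] != 0:
        out.append(buf[i])
        i += 1
    return out.decode("ascii")

memory = b"Hello\x00World\x00"   # two C strings packed into one buffer
print(read_c_string(memory))     # 'Hello' - stops at the first NUL
print(read_c_string(memory, 6))  # 'World' - a "pointer" 6 bytes in
print(chr(memory[0]))            # 'H'     - "dereferencing" yields one char
```

The last line is the Python analogue of `*str` in the question: it yields a single character, which is why `cout << *CopyOfName` printed only "H".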
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20794832', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3133639/']}
jdg_78367
stackexchange
llm_judgeable_groundtruth_similarity
5869539
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: There seems to be a difference between levels and labels of a factor in R.Up to now, I always thought that levels were the 'real' name of factor levels, and labels were the names used for output (such as tables and plots). Obviously, this is not the case, as the following example shows: df <- data.frame(v=c(1,2,3),f=c('a','b','c'))str(df)'data.frame': 3 obs. of 2 variables: $ v: num 1 2 3 $ f: Factor w/ 3 levels "a","b","c": 1 2 3df$f <- factor(df$f, levels=c('a','b','c'), labels=c('Treatment A: XYZ','Treatment B: YZX','Treatment C: ZYX'))levels(df$f)[1] "Treatment A: XYZ" "Treatment B: YZX" "Treatment C: ZYX" I thought that the levels ('a','b','c') could somehow still be accessed when scripting, but this doesn't work: > df$f=='a'[1] FALSE FALSE FALSE But this does: > df$f=='Treatment A: XYZ' [1] TRUE FALSE FALSE So, my question consists of two parts: What's the difference between levels and labels? Is it possible to have different names for factor levels for scripting and output? Background: For longer scripts, scripting with short factor levels seems to be much easier. However, for reports and plots, this short factor levels may not be adequate and should be replaced with preciser names. Now provide the response and nothing else.
Very short : levels are the input, labels are the output in the factor() function. A factor has only a levels attribute, which is set by the labels argument in the factor() function. This is different from the concept of labels in statistical packages like SPSS, and can be confusing in the beginning. What you do in this line of code df$f <- factor(df$f, levels=c('a','b','c'), labels=c('Treatment A: XYZ','Treatment B: YZX','Treatment C: ZYX')) is telling R that there is a vector df$f which you want to transform into a factor, in which the different levels are coded as a, b, and c and for which you want the levels to be labeled as Treatment A etc. The factor function will look for the values a, b and c, convert them to numerical factor classes, and add the label values to the levels attribute of the factor. This attribute is used to convert the internal numerical values to the correct labels. But as you see, there is no label attribute. > df <- data.frame(v=c(1,2,3),f=c('a','b','c')) > attributes(df$f)$levels[1] "a" "b" "c"$class[1] "factor"> df$f <- factor(df$f, levels=c('a','b','c'),+ labels=c('Treatment A: XYZ','Treatment B: YZX','Treatment C: ZYX')) > attributes(df$f)$levels[1] "Treatment A: XYZ" "Treatment B: YZX" "Treatment C: ZYX"$class[1] "factor"
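The mechanics can be mimicked in plain Python (a loose analogy of my own, not how R is actually implemented): a factor is just integer codes plus one levels attribute, and the labels argument merely decides what gets stored in that attribute:

```python
raw = ["a", "b", "c", "a"]
levels = ["a", "b", "c"]          # what factor() looks for in the input
labels = ["Treatment A: XYZ", "Treatment B: YZX", "Treatment C: ZYX"]

# factor() matches each value against `levels` to get integer codes...
codes = [levels.index(v) + 1 for v in raw]   # 1-based, like R
# ...and then stores `labels` as the factor's single levels attribute.
attr_levels = labels

decoded = [attr_levels[c - 1] for c in codes]
print(codes)       # [1, 2, 3, 1]
print(decoded[0])  # 'Treatment A: XYZ'
```

This is why `df$f=='a'` fails afterwards: the original level names 'a', 'b', 'c' are gone, and only the stored attribute survives for comparisons and output.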
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/5869539', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/415697/']}
jdg_78368
stackexchange
llm_judgeable_groundtruth_similarity
8976086
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I created some code to differentiate between two usergroups in Django admin, resulting in showing all fields readonly or only some of them, which are set directly in the ModelAdmin class. At first here is the code: class PersonAdmin(admin.ModelAdmin): readonly_fields = ('created_at','created_by',)def get_form(self, request, obj=None, **kwargs): if obj: # we are in edit mode if request.user.is_superuser: self.readonly_fields = () else: for group in request.user.groups.all(): if str(group) == 'readonlyuser': allfields = tuple(obj._meta.get_all_field_names()) self.readonly_fields = allfields return super(PersonAdmin, self).get_form(request, obj, **kwargs) I divide between the groups and set the fields accordingly. Everything works fine if users from the two groups are not logged in at the same time! After a 'readonly' user logged in, the adminuser will get all fields readonly too. My inspections provided a solution also: If I put an additional if statement for the adminuser within the for block everything works as expected. if str(group) == 'adminuser': self.readonly_fields = PersonAdmin.readonly_fields Why is that and what's happening there? I have no special cache settings made and it happens on the dev server as well as on an Apache with WSGI. From my understanding request.user.groups.all() should return all groups the currently logged in user belongs to. Where does Django get the allfields (readonly) from, if another user on a different IP and session match this if block? Now provide the response and nothing else.
The ModelAdmin is only instantiated once for all requests that it receives. So when you define the readonly fields like that, you're setting it across the board permanently. As long as you're running Django 1.2+, there's a get_readonly_fields method you can use instead for exactly this purpose: class MyModelAdmin(admin.ModelAdmin): ... def get_readonly_fields(self, request, obj=None): if request.user.is_superuser: return super(MyModelAdmin, self).get_readonly_fields(request, obj) else: return ('created_at', 'created_by') Remove the readonly_fields attribute from your ModelAdmin or set it to the fields that should be readonly for everyone . Then, in the else block specify all the fields that should be readonly only for non-superusers.
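The failure mode can be reproduced without Django at all. Below is a minimal pure-Python sketch of my own (the class and its simplified boolean arguments are stand-ins, not Django's API) showing why mutating attributes on the single shared ModelAdmin-like instance leaks state between requests, while a per-request get_readonly_fields computation does not:

```python
class SharedAdmin:
    """Stand-in for a ModelAdmin: ONE instance serves every request,
    so attributes set on `self` are shared state across requests."""
    readonly_fields = ()

    def get_form(self, is_readonly_user, editing):
        # Anti-pattern from the question: mutate shared state.
        if editing and is_readonly_user:
            self.readonly_fields = ("created_at", "created_by")
        return self.readonly_fields

    def get_readonly_fields(self, is_readonly_user):
        # The fix: compute the answer per request, touch nothing shared.
        return ("created_at", "created_by") if is_readonly_user else ()

admin = SharedAdmin()                        # instantiated once, like Django does
admin.get_form(is_readonly_user=True, editing=True)   # readonly user edits...
# Next "request": an admin opens a view where the reset branch never runs
# (e.g. the add view, where obj is None) and inherits the leaked state:
print(admin.get_form(is_readonly_user=False, editing=False))
# ('created_at', 'created_by')  <- leaked from the previous request
print(admin.get_readonly_fields(is_readonly_user=False))  # ()
```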
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8976086', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/427942/']}
jdg_78369
stackexchange
llm_judgeable_groundtruth_similarity
429619
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: A function $f:X\to X$ on a group $X$ is called a polynomial if there exist $n\in\mathbb N=\{1,2,3,\dots\}$ and elements $a_0,a_1,\dots,a_n\in X$ such that $f(x)=a_0xa_1x\cdots xa_n$ for all $x\in X$ . The smallest possible number $n$ in this representation is called the degree of the polynomial $f$ and is denoted by $\deg(f)$ . Let $\mathrm{Poly}(X)$ be the set of all polynomials on a group $X$ . In fact, $\mathrm{Poly}(X)$ is a submonoid of the monoid $X^X$ of all self-maps of $X$ , endowed with the operation of composition of functions. So, $|\mathrm{Poly}(X)|\le|X^X|=|X|^{|X|}$ . If the group $X$ is commutative, then each polynomial is of the form $f(x)=ax^n$ for some $a\in X$ and $n\in\mathbb N$ . This implies that the number of semigroup polynomials on a finite Abelian group $X$ is equal to $|X|\cdot\exp(X)\le |X|^2$ where $\exp(X)=\min\{n\in\mathbb N:\forall x\in X\; (x^n=1)\}$ . Question 1. Is any reasonable upper bound on the number of polynomials on a finite group $X$ ?For example, is $|\mathrm{Poly}(X)|=o(|X|^{|X|})$ ? Each polynomial $f:X\to X$ on a finite Abelian group $X$ has degree $\deg(f)\le\exp(X)$ . Question 2. Is $\deg(f)\le\exp(X)$ for any polynomial $f:X\to X$ on a finite group $X$ ? Remark 2. The affirmative answer to Question 2 would imply that $$|\mathrm{Poly}(X)|\le \sum_{n=1}^{\exp(X)}|X|^{k+1}=\frac{|X|^{\exp(X)+2}-|X|^2}{|X|-1}.$$ Remark 3. Finite groups $X$ with $|\mathrm{Poly}(X)|=|X|\cdot\exp(X)$ are characterized in the following theorem. Theorem. A finite group $X$ has $|\mathrm{Poly}(X)|=|X|\cdot\exp(X)$ if and only if $X$ is either commutative or is isomorphic to $Q_8\times A$ for some nontrivial commutative group $A$ of odd order. Proof. 
To prove the ``if'' part, assume that $X$ is either commutative or $X$ is isomorphic to $Q_8\times A$ for some nontrivial commutative group $A$ of odd order. If $X$ is commutative, then the equality $|\mathrm{Poly}(X)|=|X|\cdot\exp(X)$ is clear. Now assume that $X=Q_8\times A$ for some nontrivial commutative group $A$ of odd order. GAP-calculations of Peter Taylor show that the group $Q_8$ has exactly 32 polynomials of each degree $k\in\{1,2,3,4\}$ . This implies that $$|\mathrm{Poly}(Q_8\times A)|=32\cdot|\mathrm{Poly}(A)|=32\cdot |A|\cdot\exp(A)=4\cdot|X|\cdot\exp(A)=|X|\cdot\exp(X).$$ To prove the ``only if'' part, assume that $X$ is a finite non-commutative group with $|\mathrm{Poly}(X)|=|X|\cdot\exp(X)$ .For every $a\in X$ and $n\in\mathbb N$ , consider the polynomial $p_{a,n}(x)=ax^n$ . The definition of $\exp(X)$ implies that the set $\mathrm{Pol}(X):=\{p_{a,n}:a\in X,\;1\le n\le \exp(X)\}$ has cardinality $|X|\cdot\exp(X)$ and hence coincides with the set $\mathrm{Poly}(X)$ . So, for any $a\in X$ there exists $n\le\exp(X)$ such that $axa^{-1}=x^n$ for all $x\in X$ . This implies that every subgroup of $X$ is normal, so $X$ is a Dedekind group . By the classical Dedekind result , $X$ is isomorphic to the product $Q_8\times A\times B$ where $A$ is a Abelian group of odd order and $B$ is a Boolean group, i.e., a group of exponent $\exp(B)\le 2$ . If the group $A$ and $B$ is trivial, then $|\mathrm{Poly}(X)|=|\mathrm{Poly}(Q_8)|=128\ne |X|\cdot\exp(X)=32$ . Next, assume that the group $A$ is trivial and $B$ is not trivial. Then $|\mathrm{Poly}(B)|=|\{a,ax:a\in B\}|=2|B|$ . GAP-calculations of Peter Taylor show that the group $Q_8$ has exactly 32 polynomials of each degree $k\in\{1,2,3,4\}$ . In particular, $Q_8$ has exactly 64 polynomials of even degree and 64 polynomials of odd degree.This implies that $|\mathrm{Poly}(X)|=64\cdot 2|B|=16|Q_8\times B|=16|X|\ne 4|X|=|X|\cdot\exp(X)=|\mathrm{Poly}(X)|$ . This contradiction shows that the group $A$ is nontrivial. 
Taking into account that the group $Q_8$ has exactly 32 polynomials of each degree $k\in\{1,2,3,4\}$ , we conclude that $$|X|\cdot\exp(X)=|\mathrm{Poly}(X)|=|\mathrm{Poly}(Q_8\times A\times B|=32\times|\mathrm{Poly}(A\times B)|=32\times |A\times B|\times \exp(A\times B)=4\times|Q_8\times A\times B|\times \exp(A\times B)=4\cdot |X|\cdot\exp(A\times B)$$ and hence $\exp(Q_8\times A\times B)=\exp(X)=4\exp(A\times B)$ . Since $\exp(Q_8\times A\times B)=4\exp(A),$ this implies that the Boolean group $B$ is trivial and hence $X=Q_8\times A$ . $\square$ Now provide the response and nothing else.
$\DeclareMathOperator\Poly{Poly}$ Proposition. If $G$ is a simple non-abelian finite group, then $\Poly(G)=G^G$ . (Edit: this observation appears as the main theorem in this paper by Maurer and Rhodes, Proc. AMS 1965. See also Theorem 2 here by Schneider-Thom. Thanks to Benjamin Steinberg for the reference.) Here is the proof. It uses no machinery. Lemma. There exists $f\in\Poly(G)$ whose support is a singleton. [Here the support of $f$ means $f^{-1}(G\smallsetminus\{1\})$ .] Indeed, let $f$ have support $\{g\}$ . Considering $x\mapsto hf(x)h^{-1}$ we see that all values in a single nontrivial conjugacy class are achieved by polynomials supported by $\{g\}$ . By simplicity and taking products, we see that all maps supported by $g$ are definable as polynomials. Moreover, after considering $x\mapsto f(gh^{-1}x)$ we obtain all functions supported by $\{h\}$ . Since an arbitrary map is a product of maps supported by singletons, we obtain the proposition. Now let us prove the lemma. Let $X$ be a minimal subset among nonempty supports of elements of $\Poly(G)$ ( $X$ exists because there exists a polynomial not constant $=1$ ). Say $X$ is the support of $f$ . We have to show that $X$ is a singleton. Fix $g\in X$ . So $u(x)=g^{-1}x$ is a polynomial. Also for each $h\in G$ , the self-map $v$ defined by $v(x)=hf(x)h^{-1}$ is a polynomial. Then $w_h:x\mapsto [u(x),v(x)]$ is a polynomial as well. Its support is contained in $X\smallsetminus\{g\}$ . So we obtain a contradiction (a strictly smaller nonempty support), unless $w_h$ is constant equal to $1$ for each choice of $h$ . The latter means that for each $x\in X\smallsetminus\{g\}$ , the element $g^{-1}x$ commutes with $hf(x)h^{-1}$ . That is, the nontrivial element $g^{-1}x$ commutes with a whole nontrivial conjugacy class. But the centralizer of a nontrivial conjugacy class is trivial (it is a normal subgroup, and can't be the whole group because the center is trivial).
This is a contradiction unless $X\smallsetminus\{g\}$ is empty, which is precisely what we want. The proof is complete. Remark (after Taras' comment, and also in the above Maurer-Rhodes reference): conversely, for a finite group $G$ , the property $\Poly(G)=G^G$ implies that $G$ is simple non-abelian or $|G|\le 2$ . Indeed if $G$ is non-trivial and non-simple, then it has a non-trivial proper normal subgroup $N$ , and polynomials have the nontrivial constraint $f(N)\subset f(1)N$ . Otherwise $G=\mathbf{Z}/p\mathbf{Z}$ for $p$ prime or $1$ . For such a group, a "polynomial" has the form (using additive notation) $x\mapsto a+bx$ for some $a,b\in\mathbf{Z}/p\mathbf{Z}$ (i.e. is an affine self-map in this ring). There are thus $p^2$ such functions. And $p^2<p^p$ iff $p>2$ .
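The abelian count in the closing remark — $p^2$ affine self-maps of $\mathbf{Z}/p\mathbf{Z}$, matching the question's $|\mathrm{Poly}(X)|=|X|\cdot\exp(X)$ for cyclic groups — is small enough to brute-force. The following pure-Python check (my own sketch, not part of the answer) enumerates $a_0 x a_1 \cdots x a_n$ in additive notation, where the product collapses to $(\sum_i a_i) + n\cdot x$:

```python
from itertools import product

def poly_count_cyclic(p, max_degree):
    """Brute-force |Poly(Z/p)|: enumerate a0*x*a1*...*x*an up to the
    given degree (additively: sum(a_i) + n*x mod p) and collect the
    distinct self-maps of Z/p as value tables."""
    maps = set()
    for n in range(1, max_degree + 1):
        for coeffs in product(range(p), repeat=n + 1):
            c = sum(coeffs) % p
            maps.add(tuple((c + n * x) % p for x in range(p)))
    return len(maps)

# Degrees up to exp(Z/p) = p suffice, and the count is p^2 as claimed:
print(poly_count_cyclic(3, 3))  # 9
print(poly_count_cyclic(5, 5))  # 25
```

Note the $p=2$ edge case from the remark: $p^2 = p^p = 4$ there, so $\Poly(G)=G^G$ also holds for $|G|\le 2$.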
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/429619', 'https://mathoverflow.net', 'https://mathoverflow.net/users/61536/']}
jdg_78370
stackexchange
llm_judgeable_groundtruth_similarity
53658208
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Im trying to achieve something like the following, I'm very new to flutter so I couldn't figure it out.I need a custom AppBar with drawer and actions but arranged like the image. I tried a StackView in the title widget appBar: AppBar( title: Stack( children: <Widget>[ Container( width: double.infinity, color: CustomColors.accentColor, ), Text( 'Title', style: TextStyle(fontSize: 22.0, color: CustomColors.primaryDark), ), ], ), ), But I get something like this Can someone help me out? Thank you. Now provide the response and nothing else.
As I mentioned in the comment , you can create a Custom widget like your Image attached, there are many ways to do it, this is just an example : class CustomBarWidget extends StatelessWidget { GlobalKey<ScaffoldState> _scaffoldKey = GlobalKey(); @override Widget build(BuildContext context) { return Scaffold( key: _scaffoldKey, body: Container( height: 160.0, child: Stack( children: <Widget>[ Container( color: Colors.red, width: MediaQuery.of(context).size.width, height: 100.0, child: Center( child: Text( "Home", style: TextStyle(color: Colors.white, fontSize: 18.0), ), ), ), Positioned( top: 80.0, left: 0.0, right: 0.0, child: Container( padding: EdgeInsets.symmetric(horizontal: 20.0), child: DecoratedBox( decoration: BoxDecoration( borderRadius: BorderRadius.circular(1.0), border: Border.all( color: Colors.grey.withOpacity(0.5), width: 1.0), color: Colors.white), child: Row( children: [ IconButton( icon: Icon( Icons.menu, color: Colors.red, ), onPressed: () { print("your menu action here"); _scaffoldKey.currentState.openDrawer(); }, ), Expanded( child: TextField( decoration: InputDecoration( hintText: "Search", ), ), ), IconButton( icon: Icon( Icons.search, color: Colors.red, ), onPressed: () { print("your menu action here"); }, ), IconButton( icon: Icon( Icons.notifications, color: Colors.red, ), onPressed: () { print("your menu action here"); }, ), ], ), ), ), ) ], ), ), ); } } For more information, I wrote an article about how we can customize the AppBar : https://medium.com/flutter-community/flutter-increase-the-power-of-your-appbar-sliverappbar-c4f67c4e076f
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/53658208', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3831319/']}
jdg_78371
stackexchange
llm_judgeable_groundtruth_similarity
28829
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Question: Why do we insert a numeric multiplier after group positions when naming an isomer? Take 2,3 dimethylbutane for example. Why do we add the numerical multiplier di when we've already explicitly mentioned two locations? Why don't we just use 2,3 methylbutane ? This just doesn't feel intuitive. Now provide the response and nothing else.
take 2,3-dimethylbutane In the case of different substituents on different or the same atom, we use a combination of the position and the name of the substituent, such as in 2-chloro-3-methoxy... 3-cyano-4-bromo..., etc. Following this rule, 1,2-dimethoxyethane would be 1-methoxy-2-methoxyethane. Using the multiplier di , together with the positions, just shortens and simplifies the name and still is unambiguous. The multiplier di explicitly denotes that there are two identical substituents at the given positions and not just one that bridges two centres. An example for such a situation would be 1,4-methanonaphthalene ( 1 ). Note that even in this context, di can be used, as in 1,4:5,8-dimethanonaphthalene ( 2 ).
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/28829', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/14911/']}
jdg_78372
stackexchange
llm_judgeable_groundtruth_similarity
5778719
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've read that a HashSet in .net4 will ignore all the duplicates. So what I do is:

HashSet<medbaseid> medbaseidlist = new HashSet<medbaseid>();
for (int i = 2; i <= rowCount; i++)
{
    medbaseid medbaseid = new medbaseid()
    {
        mainClass = xlRange.Cells[i, 1].Value2.ToString(),
        genName = xlRange.Cells[i, 2].Value2.ToString(),
        speciality = xlRange.Cells[i, 3].Value2.ToString(),
        med_type_id = getId(xlRange.Cells[i, 4].Value2.ToString()),
        id = i - 1
    };
    medbaseidlist.Add(medbaseid);
}

medbaseid can have the same values as the previous object. But if I check the HashSet in the end, there are duplicate items. The Equals and GetHashCode methods I added didn't help. I also added an id to the class, so 2 objects can have the same content but a different id:

public override bool Equals(object obj)
{
    medbaseid medb = (medbaseid)obj;
    return ((medb.id == this.id)
        && (medb.genName == this.genName)
        && (medb.mainClass == this.mainClass)
        && (medb.med_type_id == this.med_type_id)
        && (medb.speciality == this.speciality)) ? true : false;
}

public override int GetHashCode()
{
    return id;
}

So my question now is: What am I doing wrong, or is this not the right way to use a HashSet? Thanks in advance for any help. Now provide the response and nothing else.
It will depend on the implementations of GetHashCode() and Equals() on the medbaseid class. See http://msdn.microsoft.com/en-us/library/system.object.gethashcode.aspx for more info. By default objects will only compare as equal if they are literally the same object. Having the same "content" is not sufficient to make them equal. If you want two different objects with the same "content" to be equal, you must override Equals() to implement that logic. Whenever you override Equals() you must also override GetHashCode() for them to work correctly inside a hashing data structure like HashSet<> .
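The same contract shows up in most languages with hash-based sets. As a rough parallel in Python (an analogy I'm adding, not your C# code; the class and field names are made up), a set only collapses two objects into one entry when both __eq__ and __hash__ agree on them - and note that including a unique per-row id in the comparison, as your Equals override does, makes every object distinct and defeats the deduplication:

```python
class MedBase:
    """Value object: equality is defined by content, not identity."""
    def __init__(self, main_class, gen_name):
        self.main_class = main_class
        self.gen_name = gen_name

    def __eq__(self, other):
        return (isinstance(other, MedBase)
                and self.main_class == other.main_class
                and self.gen_name == other.gen_name)

    def __hash__(self):
        # Must agree with __eq__: equal objects must have equal hashes.
        return hash((self.main_class, self.gen_name))


dedup = {MedBase("analgesic", "ibuprofen"),
         MedBase("analgesic", "ibuprofen"),   # duplicate content, collapsed
         MedBase("antibiotic", "amoxicillin")}
```

Without both methods, Python (like .NET) falls back to identity comparison, and all three objects would survive in the set.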
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5778719', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/329829/']}
jdg_78373
stackexchange
llm_judgeable_groundtruth_similarity
18439054
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using postgreSQL 9.1 and I want to delete duplicates from my table using this tip: https://stackoverflow.com/a/3822833/2239537 So, my query looks like that:

WITH cte AS
(
  SELECT ROW_NUMBER() OVER (PARTITION BY code, card_id, parent_id ORDER BY id DESC) RN
  FROM card
)
DELETE FROM cte
WHERE RN > 1

But it shows me

ERROR: relation "cte" does not exist
SQL state: 42P01
Character: 157

However this statement works fine:

WITH cte AS
(
  SELECT ROW_NUMBER() OVER (PARTITION BY code, card_id, parent_id ORDER BY id DESC) RN
  FROM merchantcard
)
SELECT * FROM cte
WHERE RN > 1

Any ideas how to get it to work? Thanks! Now provide the response and nothing else.
That's because CTE in PostgreSQL works differently than CTE in SQL Server. In SQL Server, CTEs are like updatable views, so you can delete from them or update them; in PostgreSQL you cannot. You can join the CTE and delete, like:

with cte as (
    select id,
           row_number() over (partition by code, card_id, parent_id order by id desc) as rn
    from card
)
delete
from card
where id in (select id from cte where rn > 1)

On the other hand, you can write data-modifying (DML) statements inside a CTE in PostgreSQL (see documentation) and this could be very handy. For example, you can delete all rows from card and then insert only those having row_number = 1:

with cte1 as (
    delete from card
    returning *
), cte2 as (
    select row_number() over (partition by code, card_id, parent_id order by id desc) as rn, *
    from cte1
)
insert into card
select <columns here>
from cte2
where rn = 1
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18439054', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2239537/']}
jdg_78374
stackexchange
llm_judgeable_groundtruth_similarity
31239831
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: After searching a lot for the difference between @function and @mixin I ended up here. Is there any advantage of using @mixin over @function, or vice versa? In what context are they different, and how can they be used interchangeably? Please come up with examples. Now provide the response and nothing else.
Functions are useful specifically because they return values. Mixins are nothing like functions--they usually just provide valuable blocks of code. Usually, there are cases where you might have to use both. For example, if I wanted to create a long shadow with SASS, I would call a function like so:

@function makelongshadow($color) {
  $val: 0px 0px $color;
  @for $i from 1 through 200 {
    $val: #{$val}, #{$i}px #{$i}px #{$color};
  }
  @return $val;
}

Which would then be called with this mixin:

@mixin longshadow($color) {
  text-shadow: makelongshadow($color);
}

Which provides us with the actual code. That gets included in the element:

h1 {
  @include longshadow(darken($color, 5%));
}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/31239831', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1696621/']}
jdg_78375
stackexchange
llm_judgeable_groundtruth_similarity
10316418
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was having a discussion with a colleague recently about the value of Dispose and types that implement IDisposable. I think there is value in implementing IDisposable for types that should clean up as soon as possible, even if there are no unmanaged resources to clean up. My colleague thinks differently; implementing IDisposable if you don't have any unmanaged resources isn't necessary, as your type will eventually be garbage collected. My argument was that if you had an ADO.NET connection that you wanted to close as soon as possible, then implementing IDisposable and using new MyThingWithAConnection() would make sense. My colleague replied that, under the covers, an ADO.NET connection is an unmanaged resource. My reply to his reply was that everything ultimately is an unmanaged resource. I am aware of the recommended disposable pattern where you free managed and unmanaged resources if Dispose is called but only free unmanaged resources if called via the finalizer/destructor (and blogged a while ago about how to alert consumers of improper use of your IDisposable types). So, my question is, if you've got a type that doesn't contain unmanaged resources, is it worth implementing IDisposable? Now provide the response and nothing else.
There are different valid uses for IDisposable. A simple example is holding an open file, which you need to be closed at a certain moment, as soon as you don't need it any more. Of course, you could provide a method Close, but having it in Dispose and using a pattern like using (var f = new MyFile(path)) { /*process it*/ } would be more exception-safe. A more popular example would be holding some other IDisposable resources, which usually means that you need to provide your own Dispose in order to dispose of them as well. In general, as soon as you want to have deterministic destruction of anything, you need to implement IDisposable. The difference between my opinion and yours is that I implement IDisposable as soon as some resource needs deterministic destruction/freeing, not necessarily as soon as possible. Relying on garbage collection is not an option in this case (contrary to your colleague's claim), because it happens at an unpredictable moment in time, and actually may not happen at all! The fact that any resource is unmanaged under the cover really doesn't mean anything: the developer should think in terms of "when and how is it right to dispose of this object" rather than "how does it work under the cover". The underlying implementation may change with time anyway. In fact, one of the main differences between C# and C++ is the absence of default deterministic destruction. IDisposable comes to close the gap: you can order the deterministic destruction (although you cannot ensure the clients are calling it; the same way, in C++ you cannot be sure that the clients call delete on the object). Small addition: what actually is the difference between deterministically freeing resources and freeing them as soon as possible? Actually, those are different (though not completely orthogonal) notions. If the resources are to be freed deterministically, this means that the client code should have a possibility to say "Now, I want this resource freed".
This may actually not be the earliest possible moment when the resource could be freed: the object holding the resource might have got everything it needs from the resource, so potentially it could free the resource already. On the other hand, the object might choose to keep the (usually unmanaged) resource even after the object's Dispose has run through, cleaning it up only in the finalizer (if holding the resource for too long doesn't cause any problem). So, for freeing the resource as soon as possible, strictly speaking, Dispose is not necessary: the object may free the resource as soon as it realizes itself that the resource is not needed any more. Dispose however serves as a useful hint that the object itself is not needed any more, so perhaps the resources may be freed at that point if appropriate.

One more necessary addition: it's not only unmanaged resources that need deterministic deallocation! This seems to be one of the key points of the difference in opinions among the answers to this question. One can have a purely imaginary construct which may need to be freed deterministically. Examples are: a right to access some shared structure (think RW-lock), a huge memory chunk (imagine that you are managing some of the program's memory manually), a license for using some other program (imagine that you are not allowed to run more than X copies of some program simultaneously), etc. Here the object to be freed is not an unmanaged resource, but a right to do or to use something, which is a purely internal construct of your program logic.

Small addition: here is a small list of neat examples of [ab]using IDisposable: http://www.introtorx.com/Content/v1.0.10621.0/03_LifetimeManagement.html#IDisposable .
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10316418', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/28901/']}
jdg_78376
stackexchange
llm_judgeable_groundtruth_similarity
3474
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: Negation of nested quantifiers The problem is: ∃x∀y(x ≥ y) With a domain of all real positive integers. The negation is: ∀x∃y(x < y) so, if y = x + 1 the negation is true. That means the negation of the negation (or, the original problem) is false. My question is, that if the original problem is ∃x∀y(x ≥ y), why can't x = y and prove the problem true? Now provide the response and nothing else.
No, the constants $c_1,c_2$ are the same for all values of $n$. If they could depend on $n$ then the definition would be meaningless since $\Theta(g(n))$ would contain all positive functions. Indeed, suppose $f,g$ are two positive functions. Then for all $n$, $$c_1g(n) \leq f(n) \leq c_2g(n) \text{ for } c_1 = c_2 = \frac{f(n)}{g(n)}.$$
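To make that concrete with a small numeric check (illustrative functions of my choosing): for $f(n) = 3n + 5$ and $g(n) = n$, the single fixed pair $c_1 = 3$, $c_2 = 4$ works for every $n \geq 5$, with no per-$n$ constants needed:

```python
def f(n):
    return 3 * n + 5

def g(n):
    return n

c1, c2, n0 = 3, 4, 5  # one fixed pair of constants for ALL n >= n0
theta_holds = all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))

# Below n0 the upper bound fails (f(4) = 17 > 4*4 = 16), which is exactly
# why the usual definition only requires the inequality beyond some n0.
below_n0 = c1 * g(4) <= f(4) <= c2 * g(4)
```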
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/3474', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/2756/']}
jdg_78377
stackexchange
llm_judgeable_groundtruth_similarity
4974237
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I declare a managed method in C++/CLI that has an optional parameter when used from C#? I've decorated the parameter with both an Optional and a DefaultParameterValue attribute (see: How default parameter values are encoded), but only the Optional attribute seems to be honored. C++/CLI:

public ref class MyClass1
{
public:
    MyClass1([System::Runtime::InteropServices::Optional]
             [System::Runtime::InteropServices::DefaultParameterValue(2)]
             int myParam1)
    {
        System::Console::WriteLine(myParam1);
    }
};

C#:

var myInstance1 = new MyClass1(); // compiles and runs

Output: 0
Expected Output: 2

Visual C# IntelliSense:

MyClass1.MyClass1([int myParam1 = 0]); // wrong default value

Edit: A closer look with a disassembler reveals that the C++/CLI compiler does indeed not generate the required .param [1] = int32(2) directive. The IL code shown by Reflector is wrong. Reflector:

.method public hidebysig specialname rtspecialname instance void .ctor([opt] int32 myParam1) cil managed
{
    .param [1] = int32(2) // bug
    ...

ILDASM:

.method public hidebysig specialname rtspecialname instance void .ctor([opt] int32 myParam1) cil managed
{
    .param [1]
    .custom instance void [System]System.Runtime.InteropServices.DefaultParameterValueAttribute::.ctor(object) = ( 01 00 08 02 00 00 00 00 00 )
    ...

Now provide the response and nothing else.
The C# compiler doesn't use the [DefaultParameterValue] attribute to set the default value; it uses the .param directive to get the value embedded in the metadata. This is barely documented in the CLI spec, by the way: only Partition II, chapter 15.4.1 mentions that it can have a FieldInit value, and 15.4.1.4 is silent about it. That's where the buck stops: the C++/CLI compiler doesn't know how to generate the directive. You cannot make this work.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4974237', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/76217/']}
jdg_78378
stackexchange
llm_judgeable_groundtruth_similarity
3105296
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm in need of some clarification. I've been reading about REST, and building RESTful applications. According to wikipedia, REST itself is defined to be Representational State Transfer . I therefore don't understand all this stateless gobbledeygook that everyone keeps spewing. From wikipedia: At any particular time, a client can either be in transition betweenapplication states or "at rest". A client in a rest state is able tointeract with its user, but creates no load and consumes no per-clientstorage on the set of servers or on the network. Are they just saying don't use session/application level data store??? I get that one goal of REST is to make URI access consistent and available, for instance, instead of hiding paging requests inside posts, making the page number of a request a part of the GET URI. Makes sense to me. But it seems like it is just going overboard saying that no per client data (session data) should ever be stored server side. What if I had a queue of messages, and my user wanted to read the messages, but as he read them, wanted to block certain senders messages coming through for the duration of his session? Wouldn't it make sense to store this in a place on the server side, and have the server only send messages (or message ID's) that were not blocked by the user? Do I really have to send the entire list of message senders to block each time I request the new message list? The message list pertinent to me wouldn't/shouldn't even be a publicly available resource in the first place.. Again, just trying to understand this. Someone please clarify. Update: I have found a stack overflow question that has an answer that doesn't quite get me all the way there: How to manage state in REST which says that the client state that is important should all be transferred on every request.... Ugg.. 
seems like a lot of overhead... Is this right?? Now provide the response and nothing else.
The fundamental explanation is: No client session state on the server. By stateless it means that the server does not store any state about the client session on the server side. The client session is stored on the client. A stateless server means that every server can service any client at any time; there is no session affinity or sticky sessions. The relevant session information is stored on the client and passed to the server as needed. That does not preclude other services that the web server talks to from maintaining state about business objects such as shopping carts, just not about the client's current application/session state. The client's application state should never be stored on the server, but passed around from the client to every place that needs it. That is where the ST in REST comes from: State Transfer. You transfer the state around instead of having the server store it. This is the only way to scale to millions of concurrent users, if for no other reason than because millions of sessions is millions of sessions. The load of session management is amortized across all the clients: the clients store their session state, and the servers can service many orders of magnitude more clients in a stateless fashion. Even for a service that you think will only need tens of thousands of concurrent users, you should still make your service stateless. Tens of thousands is still tens of thousands, and there will be a time and space cost associated with it. Stateless is how the HTTP protocol and the web in general were designed to operate, and it is an overall simpler implementation: you have a single code path instead of a bunch of server-side logic to maintain a bunch of session state. There are some very basic implementation principles. These are principles, not implementations; how you meet these principles may vary.
In summary, the five key principles are:

1. Give every “thing” an ID
2. Link things together
3. Use standard methods
4. Resources with multiple representations
5. Communicate statelessly

There is nothing about authentication or authorization in the REST dissertation, because there is nothing different about authenticating a request that is RESTful from one that is not. Authentication is irrelevant to the RESTful discussion. Explaining how to create a stateless application for your particular requirements is too broad for StackOverflow. Implementing authentication and authorization as it pertains to REST is even more so, and the various approaches to implementing them are explained in great detail on the internet in general. Comments asking for help/info on this will/should just be flagged as No Longer Needed.
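As a toy illustration of "communicate statelessly" (hypothetical names, not a real framework): the handler below derives everything from the request itself, including your blocked-senders example, so any of N identical servers can answer any request and there is no server-side session dictionary to consult:

```python
def handle(request):
    """A stateless handler: all per-client state arrives with the request."""
    user = request["token"]["user"]                   # identity transferred by the client
    page = request.get("page", 1)                     # paging state explicit, not remembered
    blocked = set(request.get("blocked_senders", []))  # per-session filter, sent each time
    messages = [m for m in MESSAGE_STORE
                if m["to"] == user and m["from"] not in blocked]
    return {"user": user, "page": page, "messages": messages}


# Shared business data (fine to keep server-side); per-client state is NOT here.
MESSAGE_STORE = [
    {"to": "alice", "from": "bob", "text": "hi"},
    {"to": "alice", "from": "spammer", "text": "buy now"},
]

resp = handle({"token": {"user": "alice"}, "blocked_senders": ["spammer"]})
```

The blocked-senders list travels with each request instead of living in a server session, which is exactly the overhead trade-off the question is about.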
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/3105296', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2112692/']}
jdg_78379
stackexchange
llm_judgeable_groundtruth_similarity
41959
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The following article Why is Europe going through a heatwave? as well as other articles on the current heatwave in Europe and the British Isles tend to suggest that: The heatwaves in the northern hemisphere are undoubtedly linked to global warming, scientists say. “There’s no question human influence on climate is playing a huge role in this heatwave,” said Prof Myles Allen, a climate scientist at the University of Oxford. Is there really scientific support to the “global warming” assumption, or are there enough previous “exceptional heatwave” to suggest that we are still within the range of historical statistical data? Now provide the response and nothing else.
The trouble with this kind of claim is that " the range of historical statistical data " does not provide clear boundaries. A number of record high temperatures have been set this year. By definition a record is outside of "the range of historical statistical data". “The first six months of the year have made it the hottest La Niña year to date on record,” said Clare Nullis of the WMO. Taiwan is the most recent place to report a new high with a temperature of 40.3C in Tianxiang on Monday. This followed a flurry of other anomalies. Last week, a weather station at Ouargla in Algeria’s Sahara Desert, reported a maximum temperature of 51.3C on 5 July, the highest temperature reliably recorded in Africa. In Europe specifically this year has not quite set a record overall, but individual records have been set: In Europe, the WMO has warned of droughts, wildfires and harvest losses after the second hottest June on record. Over the past two weeks, records have been set in Tbilisi (40.5C), Shannon (32C), and Belfast (29.5C) A better way of thinking about exceptional weather events and climate change is to imagine rolling a loaded dice that comes up six half the time. If you roll it once and it comes up six, is that because it was loaded, or would it have rolled a six that time even if it wasn't loaded? The most you can say for that one event is that the six was probably due to the loading. Now suppose you roll it 100 times and get 47 sixes. That might happen by chance even with an unloaded dice, but it is so fantastically improbable that you can say the result is "no question" due to the loading and ignore the tiny probability that an unloaded dice would have done the same. In the case of 2018 weather, I would interpret Prof. Allen's statement in the same light. In his expert opinion the extreme weather events around the world at present would be so unlikely without global warming that the possibility can be discounted. 
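The dice arithmetic is easy to check exactly (a sketch of the analogy above, not climate data): for a fair die, 47 or more sixes in 100 rolls sits in an astronomically small tail, while for a die loaded to roll six half the time it is entirely unremarkable:

```python
from math import comb

def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

fair = p_at_least(47, 100, 1 / 6)    # fair die: roughly 8 standard deviations out
loaded = p_at_least(47, 100, 1 / 2)  # loaded die: better than even odds
```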
If global warming is affecting weather globally, then of course it is also having an effect in Europe because Europe is part of the globe. So yes, there is scientific support for the proposition that global warming is largely responsible for the exceptionally high temperatures seen around the northern hemisphere this summer, including in Europe.
{}
{'log_upvote_score': 5, 'links': ['https://skeptics.stackexchange.com/questions/41959', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/45700/']}
jdg_78380
stackexchange
llm_judgeable_groundtruth_similarity
1460256
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For one problem, we were supposed to know that: $$x^2 + \frac{1}{x^2}\geq 2.$$ How do you deduce this instantly when looking at the expression above? Now provide the response and nothing else.
One way is using the arithmetic-geometric mean inequality: $(A+B)/2\ge \sqrt{AB}$for $A\ge 0, B\ge 0$. Take $A=x^2$ and $B=1/x^2$.
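Spelling that substitution out (valid for any real $x \neq 0$, so that both terms are positive):

```latex
\frac{x^2 + \frac{1}{x^2}}{2} \;\ge\; \sqrt{x^2 \cdot \frac{1}{x^2}} = \sqrt{1} = 1
\quad\Longrightarrow\quad x^2 + \frac{1}{x^2} \ge 2 .
```

Equality holds exactly when $x^2 = 1/x^2$, i.e. $x = \pm 1$.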
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1460256', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/43633/']}
jdg_78381
stackexchange
llm_judgeable_groundtruth_similarity
12065774
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the following blog there is a statement about the advantage of arrays over linked lists: Arrays have better cache locality that can make a pretty big difference in performance. What does that mean? I don't understand how cache locality can provide a huge performance benefit. Now provide the response and nothing else.
See my answer about spatial and temporal locality.

In particular, arrays are contiguous memory blocks, so large chunks of them will be loaded into the cache upon first access. This makes it comparatively quick to access future elements of the array. Linked lists, on the other hand, aren't necessarily in contiguous blocks of memory, and could lead to more cache misses, which increases the time it takes to access them.

Consider the following possible memory layouts for an array data and a linked list l_data of large structs:

Address     Contents     |  Address     Contents
ffff 0000   data[0]      |  ffff 1000   l_data
ffff 0040   data[1]      |  ....
ffff 0080   data[2]      |  ffff 3460   l_data->next
ffff 00c0   data[3]      |  ....
ffff 0100   data[4]      |  ffff 8dc0   l_data->next->next
                         |  ffff 8e00   l_data->next->next->next
                         |  ....
                         |  ffff 8f00   l_data->next->next->next->next

If we wanted to loop through this array, the first access to ffff 0000 would require us to go to memory to retrieve it (a very slow operation in CPU cycles). However, after the first access the rest of the array would be in the cache, and subsequent accesses would be much quicker.

With the linked list, the first access to ffff 1000 would also require us to go to memory. Unfortunately, the processor will cache the memory directly surrounding this location, say all the way up to ffff 2000. As you can see, this doesn't actually capture any of the other elements of the list, which means that when we go to access l_data->next, we will again have to go to memory.
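The effect can be sketched with a toy cache model (a simplification I'm adding for illustration: only compulsory misses against 64-byte lines, no eviction or prefetching). A contiguous array of small elements shares cache lines between neighbours, while scattered list nodes each pay their own miss:

```python
LINE = 64  # bytes per cache line, typical for x86

def cold_misses(addresses):
    """Count accesses whose 64-byte line wasn't loaded by an earlier access."""
    fetched, misses = set(), 0
    for addr in addresses:
        line = addr // LINE
        if line not in fetched:
            fetched.add(line)
            misses += 1
    return misses

N, ELEM = 32, 16                                      # 32 elements of 16 bytes
array_addrs = [0x1000 + i * ELEM for i in range(N)]   # one contiguous block
list_addrs = [0x1000 + i * 4096 for i in range(N)]    # nodes scattered across pages

array_misses = cold_misses(array_addrs)  # 32*16/64 = 8 lines -> 8 misses
list_misses = cold_misses(list_addrs)    # every node on its own line -> 32 misses
```

Real hardware prefetches sequential lines too, which widens the gap further in the array's favour.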
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/12065774', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/209123/']}
jdg_78382
stackexchange
llm_judgeable_groundtruth_similarity
61858
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a decent way to measure inductance accurately using an oscope and a function generator? The best method I can find is to build a tank circuit and sweep the frequency until the highest voltage appears. Then use the formula below to solve: $$f = \dfrac{1}{2\pi\sqrt{LC}}$$ It seems like there must be an easier way! Now provide the response and nothing else.
I've used a two-terminal oscillator, with the inductor in parallel with a suitable capacitor, and a scope or counter to measure the oscillation frequency. I once checked an inductor on a very expensive inductance meter at work, and the values were identical. The source-coupled oscillator using two FETs is ideal for this application, or one based on the LM311 (schematic omitted).
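Once the oscillation (or resonance) frequency is known, the formula from the question rearranges to $L = 1/((2\pi f)^2 C)$. A quick sanity check, with example values I've picked arbitrarily:

```python
from math import pi, sqrt

def inductance(f_res, c):
    """Solve f = 1 / (2*pi*sqrt(L*C)) for L, given resonant frequency and C."""
    return 1.0 / ((2 * pi * f_res) ** 2 * c)

def resonant_freq(l, c):
    """The question's formula, for a round-trip check."""
    return 1.0 / (2 * pi * sqrt(l * c))

# e.g. a known 100 nF capacitor resonating at 10 kHz -> roughly 2.5 mH
L = inductance(10e3, 100e-9)
```

The accuracy of the result is limited mainly by how well you know C, so use a tight-tolerance capacitor.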
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/61858', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/11077/']}
jdg_78383
stackexchange
llm_judgeable_groundtruth_similarity
31225972
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to figure out if ConfigureAwait(false) should be used on top level requests. Reading this post from a somewhat authority on the subject: http://blog.stephencleary.com/2012/07/dont-block-on-async-code.html ...he recommends something like this:

public async Task<JsonResult> MyControllerAction(...)
{
    try
    {
        var report = await _adapter.GetReportAsync();
        return Json(report, JsonRequestBehavior.AllowGet);
    }
    catch (Exception ex)
    {
        return Json("myerror", JsonRequestBehavior.AllowGet); // really slow without configure await
    }
}

public async Task<TodaysActivityRawSummary> GetReportAsync()
{
    var data = await GetData().ConfigureAwait(false);
    return data;
}

...it says to use ConfigureAwait(false) on every await except the top level call. However when doing this my exception takes several seconds to return to the caller vs. using it and having it come back right away. What is the best practice for MVC controller actions that call async methods? Should I use ConfigureAwait in the controller itself or just in the service calls that use awaits to request data, etc.? If I don't use it on the top level call, waiting several seconds for the exception seems problematic. I don't need the HttpContext and I've seen other posts that said always use ConfigureAwait(false) if you don't need the context. Update: I was missing ConfigureAwait(false) somewhere in my chain of calls which was causing the exception to not be returned right away. However the question still remains as posted as to whether or not ConfigureAwait(false) should be used at the top level. Now provide the response and nothing else.
Is it a high traffic website? One possible explanation might be that you're experiencing ThreadPool starvation when you are not using ConfigureAwait(false). Without ConfigureAwait(false), the await continuation is queued via AspNetSynchronizationContext.Post, whose implementation boils down to this:

Task newTask = _lastScheduledTask.ContinueWith(_ => SafeWrapCallback(action));
_lastScheduledTask = newTask; // the newly-created task is now the last one

Here, ContinueWith is used without TaskContinuationOptions.ExecuteSynchronously (I'd speculate, to make continuations truly asynchronous and reduce a chance for low stack conditions). Thus, it acquires a vacant thread from the ThreadPool to execute the continuation on. In theory, it might happen to be the same thread where the antecedent task for await has finished, but most likely it'd be a different thread. At this point, if the ASP.NET thread pool is starving (or has to grow to accommodate a new thread request), you might be experiencing a delay. It's worth mentioning that the thread pool consists of two sub-pools: IOCP threads and worker threads (check this and this for some extra details). Your GetReportAsync operation is likely to complete on the IOCP thread sub-pool, which doesn't seem to be starving. OTOH, the ContinueWith continuation runs on the worker thread sub-pool, which appears to be starving in your case. This is not going to happen in case ConfigureAwait(false) is used all the way through. In that case, all await continuations will run synchronously on the same threads the corresponding antecedent tasks have ended on, be it either IOCP or worker threads. You can compare the thread usage for both scenarios, with and without ConfigureAwait(false).
I'd expect this number to be larger when ConfigureAwait(false) isn't used:

catch (Exception ex)
{
    Log("Total number of threads in use={0}", Process.GetCurrentProcess().Threads.Count);
    return Json("myerror", JsonRequestBehavior.AllowGet); // really slow without configure await
}

You can also try increasing the size of the ASP.NET thread pool (for diagnostics purposes, rather than as an ultimate solution), to see if the described scenario is indeed the case here:

<configuration>
  <system.web>
    <applicationPool
        maxConcurrentRequestsPerCPU="6000"
        maxConcurrentThreadsPerCPU="0"
        requestQueueLimit="6000" />
  </system.web>
</configuration>

Updated to address the comments:

I realized I was missing a ContinueAwait somewhere in my chain. Now it works fine when throwing an exception even when the top level doesn't use ConfigureAwait(false).

This suggests that your code or a 3rd party library in use might be using blocking constructs (Task.Result, Task.Wait, WaitHandle.WaitOne, perhaps with some added timeout logic). Have you looked for those? Try the Task.Run suggestion from the bottom of this update. Besides, I'd still do the thread count diagnostics to rule out thread pool starvation/stuttering.

So are you saying that if I DO use ContinueAwait even at the top level I lose the whole benefit of the async?

No, I'm not saying that. The whole point of async is to avoid blocking threads while waiting for something, and that goal is achieved regardless of the added value of ConfigureAwait(false). What I'm saying is that not using ConfigureAwait(false) might introduce redundant context switching (which usually means thread switching), and that might be a problem in ASP.NET if the thread pool is working at its capacity. Nevertheless, a redundant thread switch is still better than a blocked thread, in terms of server scalability. In all fairness, using ConfigureAwait(false) might also cause redundant context switching, especially if it's used inconsistently across the chain of calls.
That said, ConfigureAwait(false) is also often misused as a remedy against deadlocks caused by blocking on asynchronous code . That's why I suggested above to look for those blocking constructs across the whole code base. However the question still remains as posted as to whether or not ConfigureAwait(false) should be used at the top level. I hope Stephen Cleary could elaborate better on this, but here are my thoughts. There's always some "super-top level" code that invokes your top-level code. E.g., in case of a UI app, it's the framework code which invokes an async void event handler. In case of ASP.NET, it's the asynchronous controller's BeginExecute . It is the responsibility of that super-top level code to make sure that, once your async task has completed, the continuations (if any) run on the correct synchronization context. It is not the responsibility of the code of your task. E.g., there might be no continuations at all, like with a fire-and-forget async void event handler; why would you care to restore the context inside such a handler? Thus, inside your top-level methods, if you don't care about the context for await continuations, do use ConfigureAwait(false) as soon as you can. Moreover, if you're using a 3rd party library which is known to be context agnostic but still might be using ConfigureAwait(false) inconsistently, you may want to wrap the call with Task.Run or something like WithNoContext . You'd do that to get the chain of the async calls off the context, in advance: var report = await Task.Run(() => _adapter.GetReportAsync()).ConfigureAwait(false);return Json(report, JsonRequestBehavior.AllowGet); This would introduce one extra thread switch, but might save you a lot more of those if ConfigureAwait(false) is used inconsistently inside GetReportAsync or any of its child calls. It'd also serve as a workaround for potential deadlocks caused by those blocking constructs inside the call chain (if any).
Note however, in ASP.NET HttpContext.Current is not the only static property which is flowed with AspNetSynchronizationContext . E.g., there's also Thread.CurrentThread.CurrentCulture . Make sure you really don't care about losing the context. Updated to address the comment: For brownie points, maybe you can explain the effects of ConfigureAwait(false)... What context isn't preserved.. Is it just the HttpContext or the local variables of the class object, etc.? All local variables of an async method are preserved across await , as well as the implicit this reference - by design. They actually get captured into a compiler-generated async state machine structure, so technically they don't reside on the current thread's stack. In a way, it's similar to how a C# delegate captures local variables. In fact, an await continuation callback is itself a delegate passed to ICriticalNotifyCompletion.UnsafeOnCompleted (implemented by the object being awaited; for Task , it's TaskAwaiter ; with ConfigureAwait , it's ConfiguredTaskAwaitable ). OTOH, most of the global state (static/TLS variables, static class properties) is not automatically flowed across awaits. What does get flowed depends on a particular synchronization context. In the absence of one (or when ConfigureAwait(false) is used), the only global state preserved is what gets flowed by ExecutionContext . Microsoft's Stephen Toub has a great post on that: "ExecutionContext vs SynchronizationContext" . He mentions SecurityContext and Thread.CurrentPrincipal , which is crucial for security. Other than that, I'm not aware of any officially documented and complete list of global state properties flowed by ExecutionContext . You could peek into the ExecutionContext.Capture source to learn more about what exactly gets flowed, but you shouldn't depend on this specific implementation.
Instead, you can always create your own global state flow logic, using something like Stephen Cleary's AsyncLocal (or .NET 4.6 AsyncLocal<T> ). Or, to take it to the extreme, you could also ditch ConfigureAwait altogether and create a custom awaiter, e.g. like this ContinueOnScope . That would allow precise control over what thread/context to continue on and what state to flow.
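The context-capturing behavior described above has a loose analogue in Python's asyncio, where the event loop plays a role similar to a synchronization context: awaited work may run on a pool thread, but the continuation is marshaled back to the loop's thread. A sketch (an analogy only, not ASP.NET semantics):

```python
import asyncio
import threading

async def handler():
    loop_thread = threading.get_ident()
    # The blocking call runs on a worker thread from the default executor.
    worker_thread = await asyncio.to_thread(threading.get_ident)
    # After the await, the coroutine resumes on the event loop's thread,
    # the asyncio counterpart of "posting back to the context".
    resumed_thread = threading.get_ident()
    return loop_thread, worker_thread, resumed_thread

loop_thread, worker_thread, resumed_thread = asyncio.run(handler())
assert worker_thread != loop_thread   # the awaited work ran elsewhere
assert resumed_thread == loop_thread  # the continuation came back to the "context"
```

There is no asyncio equivalent of ConfigureAwait(false); the comparison only illustrates why continuations that must return to a single context can queue up behind one another.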
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31225972', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/659858/']}
jdg_78384
stackexchange
llm_judgeable_groundtruth_similarity
870226
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm having a bit of trouble doing this proof. If $a\in\mathbb{Z}$, then $a^3 \equiv a \pmod 3$. I know how to do proofs when there are conditional statements, but I'm not sure how to prove this with nothing else. Now provide the response and nothing else.
Any two maximal ideals $M_1$, $M_2$ are comaximal , i.e. $M_1+M_2=R$. This implies that $m_1 + m_2 = 1$ for some $m_1 \in M_1$, $m_2 \in M_2$. If $m \in M_1 \cap M_2$, then $mm_1+mm_2 = m \in M_1M_2$. So $M_1\cap M_2 \subseteq M_1M_2$, and the other inclusion is always true. This generalizes to any number of maximal ideals.
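The argument can be sanity-checked in the concrete ring $\mathbb{Z}$, where $(2)$ and $(3)$ are comaximal maximal ideals (the choice of ring and ideals here is mine, for illustration):

```python
# In Z, M1 = (2) and M2 = (3) are maximal and comaximal: m1 + m2 = 1 below.
m1, m2 = -2, 3              # m1 in (2), m2 in (3)
assert m1 + m2 == 1

# The proof writes any m in M1 ∩ M2 as m*m1 + m*m2; each summand is a product
# with one factor from each ideal, so m lies in M1*M2 = (6).
for m in range(-60, 61):
    if m % 2 == 0 and m % 3 == 0:   # m in (2) ∩ (3)
        assert m * m1 + m * m2 == m
        assert m % 6 == 0           # hence m in (2)(3)
```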
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/870226', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/109765/']}
jdg_78385
stackexchange
llm_judgeable_groundtruth_similarity
307987
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: There's a folklore problem: Let $x_1, \cdots, x_{23} \in \mathbb{Z}$ be the weights of $23$ soccer players. Now Master Yoda wants to form two soccer teams with $11$ players each. It turns out that for any $1 \leq i \leq 23$, one can partition $\{1, \cdots, 23 \} - \{ i \}$ into two disjoint sets $A, B$ with $|A| = |B| = 11$ such that $\displaystyle \sum_{k \in A} x_k = \sum_{k \in B} x_k$. Prove that all numbers must be equal. The solution is well known and is not very hard for $\mathbb{Z}$. I'm wondering for which commutative rings with unit $R$ the problem becomes false when $\mathbb{Z}$ is replaced by $R$. If $R = \mathbb{Q}$, then it's the same as $\mathbb{Z}$ (and the answer is affirmative); just multiply everything by the LCM of the denominators to reduce it to the case $R = \mathbb{Z}$. If $R = \mathbb{R}$, then the problem is also true, but you need a lemma by Dirichlet (which is proven by the pigeonhole principle) to reduce it to the case $R = \mathbb{Z}$. If $R = \mathbb{C}$, then the problem is also true. This is because $\displaystyle \sum_{k \in A} z_k = \sum_{k \in B} z_k \Rightarrow \sum_{k \in A} \text{Re}(z_k) = \sum_{k \in B} \text{Re}(z_k)$, and by the previous case $R = \mathbb{R}$ applied to the real components, you get $\text{Re}(z_i) = \text{Re}(z_j)$ for all $i, j$. Similarly you prove the imaginary components are the same, so all numbers are the same. If $R = \mathbb{Q}[x], \mathbb{C}[x], \mathbb{R}[x], M_{m,n}(\mathbb{Q}), M_{m,n}(\mathbb{Z}), M_{m,n}(\mathbb{C}), M_{m,n}(\mathbb{R}) $, even then the problem is true since you can look at the problem "component wise" and reduce it to the above cases. However I have no idea whether the problem is true when $R = \mathbb{Z}_p$ for some prime $p$ or in some other rings (ring of rational functions etc). Is it true for all rings, or are there some rings for which this problem doesn't hold ?
If it's false for some rings, are there any characterizations for such rings ? Now provide the response and nothing else.
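The integer case can also be sanity-checked by brute force on a scaled-down analogue, 5 players and teams of 2 (the shrunken parameters and the search range below are my own choices for illustration):

```python
from itertools import product

def balanced(vals):
    # Can these 4 weights be split into two teams of 2 with equal sums?
    a, b, c, d = vals
    return a + b == c + d or a + c == b + d or a + d == b + c

def has_property(x):
    # Removing any one player, the rest split into two equal-sum teams.
    return all(balanced(x[:i] + x[i + 1:]) for i in range(5))

# Over a small integer range, only constant weight vectors have the property,
# matching the parity-and-descent proof over Z.
for x in product(range(5), repeat=5):
    if has_property(x):
        assert len(set(x)) == 1
```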
Observation . If the result is true for some abelian groups $G_1$ and $G_2$ then it's also true for any extension $0 \to G_1 \to G \to G_2 \to 0$. Moreover if the result holds for $G$ then the same holds for any subgroup $H$ of $G$. Proposition . Let $G$ be an abelian group, and let $P$ be the set of primes $p$ such that $G$ contains an element of order $p$. Then the result is true for $G$ if and only if it is true for $\mathbf{Z}/p\mathbf{Z}$ for all $p \in P$. I'm grateful to darij grinberg for simplifying my initial proof of this proposition. Proof. The direct implication follows from the fact that $G$ contains $\mathbf{Z}/p\mathbf{Z}$ for every $p \in P$. Conversely, let $x_1,\ldots,x_{23} \in G$ satisfy the assumption of the problem. Let $H$ be the subgroup of $G$ generated by the $x_i$. The structure theorem of finitely generated abelian groups tells us that $H$ is a direct sum of copies of $\mathbf{Z}$ and $\mathbf{Z}/p^k\mathbf{Z}$ with $p \in P$. Since $\mathbf{Z}/p^k\mathbf{Z}$ is an iterated extension of $\mathbf{Z}/p\mathbf{Z}$, the initial observation shows that the result is true for all $\mathbf{Z}/p^k\mathbf{Z}$ with $p$ in $P$, and thus for $H$, so that all $x_i$ are equal. QED So it remains to study the case of $\mathbf{Z}/p\mathbf{Z}$ where $p$ is prime. Following darij grinberg's comments at this link : let $M$ be a matrix of the form $M = (\pm \delta_{i \neq j})_{1 \leq i,j \leq 23}$ where $\delta$ is the Kronecker symbol and the number of $+$ signs is 11 in each row. The upper-left $22 \times 22$ minor is odd (in particular nonzero), therefore the rank of $M$ is $22$ and its kernel is generated by the vector $(1,\ldots,1)$. This solves the original problem over $\mathbf{Z}$, and over $\mathbf{Z}/2\mathbf{Z}$. Now over $\mathbf{Z}/p\mathbf{Z}$, the result is false if and only if there exists a matrix $M$ of this form with rank $<22$, which amounts to saying that the second-to-last elementary divisor of $M$ is divisible by $p$.
There are only finitely many such matrices, so the result is true if $p$ is large enough. In fact, by Hadamard's inequality, the absolute value of the $22 \times 22$ minor is bounded by $21^{11}$, so the result is true for every $p>21^{11}$. Using Magma (I can share the code if you're interested) I generated $\approx 5 \cdot 10^7$ random matrices of this form and found that the result is false for at least 1471 values of $p$, in particular all $2<p<3529$. The largest value I got is $p=36285031$. For a given value of $p$, it seems challenging to decide whether the result holds over $\mathbf{Z}/p\mathbf{Z}$, since one apparently needs to check all ${{22}\choose{11}}^{23}$ matrices of this form. It would be interesting to devise a better method.
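The parity claim about the $22 \times 22$ minor is easy to check by machine: mod 2 the $\pm$ signs disappear, so every matrix of this form reduces to the same 0/1 matrix, and one representative suffices. A sketch over GF(2):

```python
def det_mod2(rows):
    # Determinant over GF(2) via Gaussian elimination (subtraction = xor).
    a = [row[:] for row in rows]
    n = len(a)
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col]), None)
        if pivot is None:
            return 0          # singular mod 2, i.e. the minor is even
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(col + 1, n):
            if a[r][col]:
                a[r] = [x ^ y for x, y in zip(a[r], a[col])]
    return 1                  # full rank mod 2, i.e. the minor is odd

# One representative matrix: 0 on the diagonal, eleven +1 and eleven -1
# per row (which entries carry which sign is irrelevant mod 2).
n = 23
M = []
for i in range(n):
    others = [j for j in range(n) if j != i]
    plus = set(others[:11])
    M.append([0 if j == i else (1 if j in plus else -1) for j in range(n)])

minor = [[M[i][j] % 2 for j in range(n - 1)] for i in range(n - 1)]
assert det_mod2(minor) == 1   # the upper-left 22x22 minor is odd
```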
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/307987', 'https://mathoverflow.net', 'https://mathoverflow.net/users/115227/']}
jdg_78386
stackexchange
llm_judgeable_groundtruth_similarity
374238
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the groups of nonzero complex numbers and the positive real numbers and the homomorphism $f: \Bbb{C}^{*} \to \Bbb{R}_+$ such that $f(z)= \lvert z \rvert$. I need to find the kernel of f. Now... $\ker(f)=\{f(z)=0\}$. I know that if $z=a+bi$, then $|z|= \sqrt{a^2+b^2}$ which is $0$ only if $a=b=0$, but then $z$ will also be $0$. What am I missing? [EDIT] Yes, silly me... my kernel was wrong; it should be $\ker(f)=\{z \in \Bbb{C}^* \mid f(z)=e=1\}$. Thank you all. Delete the question, maybe? Now provide the response and nothing else.
Note that $\sin(n\pi)=0$, so your coefficients are simply$$C_n = \frac{i}{n}(-1)^n,\quad n\neq 0$$ which results in $$f(x) = -2\sum_{n=1}^{\infty}\frac{(-1)^n}{n}\sin(nx)$$ Just a hint concerning the coefficients $D_n$ of $h(x)$: First, note that $D_0\neq 0$ because $h(x)\ge 0$. Second, note that $h(x)$ is even, so its series will only have cosine terms. The cosine coefficients will turn out to be $$a_n = 4 \frac{(-1)^n}{n^2},\quad n=1,2,\ldots$$Try to verify this result yourself.
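The coefficients above are exactly those of $f(x)=x$ on $(-\pi,\pi)$; since the original function isn't restated here, that identification is my assumption. Under it, a quick numerical convergence check of the sine series:

```python
import math

def partial_sum(x, N=100_000):
    # S_N(x) = -2 * sum_{n=1}^{N} (-1)^n / n * sin(n x)
    return -2.0 * sum((-1) ** n / n * math.sin(n * x) for n in range(1, N + 1))

# On (-pi, pi) the series should converge to f(x) = x.
for x in (0.5, 1.0, 2.0):
    assert abs(partial_sum(x) - x) < 1e-3
```

Convergence is slow (the terms decay like $1/n$), which is why a large $N$ is used; the cosine series for $h(x)$ with $1/n^2$ coefficients would converge much faster.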
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/374238', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/73359/']}
jdg_78387
stackexchange
llm_judgeable_groundtruth_similarity
147741
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm exploring some ideas and I think I'm making some mistakes. I'm picking up some ideas of the barycentre or center of mass of an area. I thought of something simple. ClearAll["Global`*"]y = Sqrt[x + 9] I made the integral of an equation to find the area under the curve: Integrate[Sqrt[x + 9], {x, -9, 0}] 18 Plot[y, {x, -9, 0}, AspectRatio -> 1, PlotRange -> {{-9, 0}, {0, 9}}, Filling -> Bottom] Here I tried to find the value of $x1$ so that the result of the integral is half of the value obtained previously, which was $18$. N[Solve[Integrate[Sqrt[x + 9], {x, x1, 0}] == Integrate[Sqrt[x + 9], {x, -9, 0}]/2, {x1}] /. Rule -> Set] (-3.33036) With this value $x1$ I plotted again to see the result: Plot[y, {x, x1, 0}, AspectRatio -> 1, PlotRange -> {{-9, 0}, {0, 9}}, Filling -> Bottom] The value of the barycentre of the area between $-9<x<0$ obtained by other software was: $x=-3.6002$ and $y=1.1250$ So my $x1$ value should be $x=-3.6002$, but it was not what I got. Has anyone discovered where I made the mistake? Now provide the response and nothing else.
You're not computing the centroid correctly. Use to discover $X$: $\frac{\int_{-9}^0 x \sqrt{x+9} \, dx}{\int_{-9}^0 \sqrt{x+9} \, dx}$ N[Integrate[x*Sqrt[x + 9], {x, -9, 0}]/ Integrate[Sqrt[x + 9], {x, -9, 0}]] -3.6 Use to discover $Y$: $\frac{\int_0^3 y \left(y^2-9\right) \, dy}{\int_0^3 \left(y^2-9\right) \, dy}$ N[Integrate[y*(y^2 - 9), {y, 0, 3}]/Integrate[y^2 - 9, {y, 0, 3}]] 1.125 You can also do: Integrate[{x,y}, {x,-9,0}, {y,0,Sqrt[9+x]}] / Integrate[1, {x,-9,0}, {y,0,Sqrt[9+x]}] {-18/5, 9/8}
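The same centroid can be cross-checked numerically without a CAS. A sketch using composite Simpson integration in pure Python, against the exact values $X=-18/5$ and $Y=9/8$ from the closed forms above:

```python
import math

def simpson(f, a, b, n=20_000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

area = simpson(lambda x: math.sqrt(x + 9), -9, 0)
X = simpson(lambda x: x * math.sqrt(x + 9), -9, 0) / area
Y = simpson(lambda y: y * (y * y - 9), 0, 3) / simpson(lambda y: y * y - 9, 0, 3)

assert abs(area - 18) < 1e-3
assert abs(X + 3.6) < 1e-3      # X = -18/5
assert abs(Y - 1.125) < 1e-6    # Y = 9/8 (polynomial integrands: Simpson is exact)
```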
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/147741', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/37895/']}
jdg_78388
stackexchange
llm_judgeable_groundtruth_similarity
15141762
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have date time in a particular timezone as a string and I want to convert this to the local time. But, I don't know how to set the timezone in the Date object. For example, I have Feb 28 2013 7:00 PM ET, then I can var mydate = new Date();mydate.setFullYear(2013);mydate.setMonth(02);mydate.setDate(28);mydate.setHours(7);mydate.setMinutes(00); As far as I know, I can either set the UTC time or local time. But, how do I set time in another timezone? I tried to use the add/subtract the offset from UTC but I don't know how to counter daylight savings. Am not sure if I am heading the right direction. How can I go about converting time from a different timezone to local time in javascript? Now provide the response and nothing else.
Background JavaScript's Date object tracks time in UTC internally, but typically accepts input and produces output in the local time of the computer it's running on. It has very few facilities for working with time in other time zones. The internal representation of a Date object is a single number, representing the number of milliseconds that have elapsed since 1970-01-01 00:00:00 UTC , without regard to leap seconds. There is no time zone or string format stored in the Date object itself. When various functions of the Date object are used, the computer's local time zone is applied to the internal representation. If the function produces a string, then the computer's locale information may be taken into consideration to determine how to produce that string. The details vary per function, and some are implementation-specific. The only operations the Date object can do with non-local time zones are: It can parse a string containing a numeric UTC offset from any time zone. It uses this to adjust the value being parsed, and stores the UTC equivalent. The original local time and offset are not retained in the resulting Date object. For example: var d = new Date("2020-04-13T00:00:00.000+08:00"); d.toISOString() //=> "2020-04-12T16:00:00.000Z" d.valueOf() //=> 1586707200000 (this is what is actually stored in the object) In environments that have implemented the ECMASCript Internationalization API (aka "Intl"), a Date object can produce a locale-specific string adjusted to a given time zone identifier. This is accomplished via the timeZone option to toLocaleString and its variations. Most implementations will support IANA time zone identifiers, such as 'America/New_York' . 
For example: var d = new Date("2020-04-13T00:00:00.000+08:00"); d.toLocaleString('en-US', { timeZone: 'America/New_York' }) //=> "4/12/2020, 12:00:00 PM" // (midnight in China on April 13th is noon in New York on April 12th) Most modern environments support the full set of IANA time zone identifiers ( see the compatibility table here ). However, keep in mind that the only identifier required to be supported by Intl is 'UTC' , thus you should check carefully if you need to support older browsers or atypical environments (for example, lightweight IoT devices). Libraries There are several libraries that can be used to work with time zones. Though they still cannot make the Date object behave any differently, they typically implement the standard IANA timezone database and provide functions for using it in JavaScript. Modern libraries use the time zone data supplied by the Intl API, but older libraries typically have overhead, especially if you are running in a web browser, as the database can get a bit large. Some of these libraries also allow you to selectively reduce the data set, either by which time zones are supported and/or by the range of dates you can work with. Here are the libraries to consider: Intl-based Libraries New development should choose from one of these implementations, which rely on the Intl API for their time zone data: Luxon (successor of Moment.js ) date-fns-tz (extension for date-fns ) Non-Intl Libraries These libraries are maintained, but carry the burden of packaging their own time zone data, which can be quite large. js-joda/timezone (extension for js-joda ) moment-timezone * (extension for Moment.js ) date-fns-timezone (extension for older 1.x of date-fns ) BigEasy/TimeZone tz.js * While Moment and Moment-Timezone were previously recommended, the Moment team now prefers that users choose Luxon for new development. Discontinued Libraries These libraries have been officially discontinued and should no longer be used.
WallTime-js TimeZoneJS Future Proposals The TC39 Temporal Proposal aims to provide a new set of standard objects for working with dates and times in the JavaScript language itself. This will include support for a time zone aware object. Common Errors There are several approaches that are often tried, which are in error and should usually be avoided. Re-Parsing new Date(new Date().toLocaleString('en', {timeZone: 'America/New_York'})) The above approach correctly uses the Intl API to create a string in a specific time zone, but then it incorrectly passes that string back into the Date constructor. In this case, parsing will be implementation-specific, and may fail entirely. If successful, it is likely that the resulting Date object now represents the wrong instant in time, as the computer's local time zone would be applied during parsing. Epoch Shifting var d = new Date();d.setTime(d.getTime() + someOffset * 60000); The above approach attempts to manipulate the Date object's time zone by shifting the Unix timestamp by some other time zone offset. However, since the Date object only tracks time in UTC, it actually just makes the Date object represent a different point in time. The same approach is sometimes used directly on the constructor, and is also invalid. Epoch Shifting is sometimes used internally in date libraries as a shortcut to avoid writing calendar arithmetic. When doing so, any access to non-UTC properties must be avoided. For example, once shifted, a call to getUTCHours would be acceptable, but a call to getHours would be invalid because it uses the local time zone. It is called "epoch shifting", because when used correctly, the Unix Epoch ( 1970-01-01T00:00:00.000Z ) is now no longer correlated to a timestamp of 0 but has shifted to a different timestamp by the amount of the offset. If you're not authoring a date library, you should not be epoch shifting. For more details about epoch shifting, watch this video clip from Greg Miller at CppCon 2015. 
The video is about time_t in C++, but the explanation and problems are identical. (For JavaScript folks, every time you hear Greg mention time_t , just think " Date object".) Trying to make a "UTC Date" var d = new Date();var utcDate = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate(), d.getUTCHours(), d.getUTCMinutes(), d.getUTCSeconds(), d.getUTCMilliseconds())); In this example, both d and utcDate are identical. The work to construct utcDate was redundant, because d is already in terms of UTC. Examining the output of toISOString , getTime , or valueOf functions will show identical values for both variables. A similar approach seen is: var d = new Date();var utcDate = new Date(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate(), d.getUTCHours(), d.getUTCMinutes(), d.getUTCSeconds(), d.getUTCMilliseconds()); This approach passes UTC values into the Date constructor where local time values are expected. The resulting Date object now represents a completely different point in time. It is essentially the same result as epoch shifting described earlier, and thus should be avoided. The correct way to get a UTC-based Date object is simply new Date() . If you need a string representation that is in UTC, then use new Date().toISOString() .
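The same model, one stored instant with zones applied only when rendering, exists in Python's datetime, which makes the epoch-shifting mistake easy to demonstrate (fixed offsets are used below so the sketch doesn't depend on an IANA database being installed):

```python
from datetime import datetime, timedelta, timezone

d = datetime(2020, 4, 12, 16, 0, tzinfo=timezone.utc)  # = 2020-04-13T00:00+08:00
cst = timezone(timedelta(hours=8))    # fixed-offset stand-in for China Standard Time
edt = timezone(timedelta(hours=-4))   # fixed-offset stand-in for New York in April

# Correct conversion: the instant is unchanged, only the rendering differs.
assert d.astimezone(cst).day == 13 and d.astimezone(cst).hour == 0
assert d.astimezone(edt).day == 12 and d.astimezone(edt).hour == 12
assert d.astimezone(edt).timestamp() == d.timestamp()

# "Epoch shifting": adding the offset yields a *different* instant.
shifted = d + timedelta(hours=8)
assert shifted.timestamp() != d.timestamp()
```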
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/15141762', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/227809/']}
jdg_78389
stackexchange
llm_judgeable_groundtruth_similarity
216791
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Besides the obvious answer of a black hole, is there anything else? Could a nearby supernova blow it away/apart? Or any sort of (theoretically intense enough) field? I guess an electric field is out of the question no matter how strong. Sorry if this is a lazy/amateurish question, but I'd probably have to study degenerate matter a lot before I can answer this myself comprehensively. I suppose the question asking What is the binding energy of a neutron star? is related, but I don't know how to interpret it myself in layman's terms. Now provide the response and nothing else.
Do you mean anything in the real universe or just theoretically? If the latter, then I can think of a few phenomena: Heat: Just heat it up until the thermal velocity at the surface is greater than the escape velocity. Then neutrons will just fly off and it will evaporate (sublimate?). Spin: Wind it up until the tangential velocity at the equator reaches escape velocity. This might be self-limiting as the spun-off mass takes away angular momentum. You'd have to keep winding it. Tides: Drive it past a particularly dense Black Hole and let the tidal forces pull it apart. In all these cases, I'm not sure how the nuclear Strong Force might modify the calculations (it might not be only gravity holding the neutrons down). To expand on those answers, I did a few back-of-the-envelope calculations assuming only gravity, to see what kind of values we'd need for each of these cases. We start with a typical neutron star: mass, $M = 1.4M_\odot$ (so, $2.8 \times 10^{30}kg$) density, $\rho = 5 \times 10^{17}kg/m^3$. This gives: radius, $r = \sqrt[3]{{3M}/{4\pi\rho}} = 11km$. escape velocity, $v_e = \sqrt{{2GM}/r} = 1.8 \times 10^8 ms^{-1}$ (or $0.6c$) Heat: $v= \sqrt {{3kT}\over m}$, so $T = {{mv^2}\over{3k}}$, where $m$ is the neutron mass and $k$ is Boltzmann's constant. This comes out at around $10^{12}K$, which is rather hot (interestingly, it's within a factor of 10 of the core temperature in a supernova, where neutron stars are made). Spin: $v_e=r\omega$ and $\omega=2\pi f$ so $f = {v_e \over {2\pi r}}$. Plugging in the numbers gives a rotational frequency of 2.6 kHz or 150,000 rpm. Quite a rate for a thing the size of a mountain. Tides: The equation for the rigid-body Roche limit is $d = 1.26\, R_m (M_M / M_m)^{1/3}$, where $R_m$ is the satellite's radius. If we find a black hole of $3M_{\odot}$ (Schwarzschild radius about 9 km), we just have to steer our neutron star to within about 18 km of it and it'll just fall apart.
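Those back-of-the-envelope numbers are easy to reproduce (constants in SI units; the Roche line uses the standard rigid-body form with its cube root, which gives roughly 18 km for this mass ratio):

```python
import math

G, k, c = 6.674e-11, 1.381e-23, 2.998e8   # gravitational const, Boltzmann, light speed
m_n = 1.675e-27                           # neutron mass, kg

M = 2.8e30                                # 1.4 solar masses
rho = 5e17                                # kg / m^3
r = (3 * M / (4 * math.pi * rho)) ** (1 / 3)
v_e = math.sqrt(2 * G * M / r)

T = m_n * v_e**2 / (3 * k)                # "evaporation" temperature
f = v_e / (2 * math.pi * r)               # break-up spin frequency
d = 1.26 * r * (3 / 1.4) ** (1 / 3)       # rigid-body Roche limit near a 3-solar-mass hole

assert 10e3 < r < 12e3          # ~11 km
assert 0.55 < v_e / c < 0.65    # ~0.6 c
assert 1e12 < T < 2e12          # ~10^12 K
assert 2500 < f < 2800          # ~2.6 kHz
assert 17e3 < d < 19e3          # ~18 km
```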
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/216791', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/70429/']}
jdg_78390
stackexchange
llm_judgeable_groundtruth_similarity
1834971
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using the Apache POI HSSF library to import info into my application. The problem is that the files have some extra/empty rows that need to be removed first before parsing. There's no HSSFSheet.removeRow( int rowNum ) method. Only removeRow( HSSFRow row ) . The problem with this is that empty rows can't be removed. For example: sheet.removeRow( sheet.getRow(rowNum) ); gives a NullPointerException on empty rows because getRow() returns null. Also, as I read on forums, removeRow() only erases the cell contents but the row is still there as an empty row. Is there a way of removing rows (empty or not) without creating a whole new sheet without the rows that I want to remove? Now provide the response and nothing else.
/** * Remove a row by its index * @param sheet an Excel sheet * @param rowIndex a 0-based index of the row to remove */public static void removeRow(HSSFSheet sheet, int rowIndex) { int lastRowNum=sheet.getLastRowNum(); if(rowIndex>=0&&rowIndex<lastRowNum){ sheet.shiftRows(rowIndex+1,lastRowNum, -1); } if(rowIndex==lastRowNum){ HSSFRow removingRow=sheet.getRow(rowIndex); if(removingRow!=null){ sheet.removeRow(removingRow); } }}
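The two branches, shift later rows up versus drop a trailing row, can be modeled on a plain list to make the control flow concrete (a Python stand-in for the sheet; no POI object is involved):

```python
def remove_row(rows, row_index):
    # Mirrors the POI logic: shiftRows(rowIndex+1, last, -1), or drop the final row.
    last = len(rows) - 1
    if 0 <= row_index < last:
        rows[row_index:last] = rows[row_index + 1:last + 1]  # shift up by one
        rows.pop()                                           # the final slot is now stale
    elif row_index == last and last >= 0:
        rows.pop()                                           # removing the last row

rows = ["r0", "r1", "r2", "r3"]
remove_row(rows, 1)
assert rows == ["r0", "r2", "r3"]
remove_row(rows, 2)            # last-row branch
assert rows == ["r0", "r2"]
```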
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1834971', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/162444/']}
jdg_78391
stackexchange
llm_judgeable_groundtruth_similarity
30701006
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've done this before with MVC5 using User.Identity.GetUserId() but that doesn't seem to work here. The User.Identity doesn't have the GetUserId() method. I am using Microsoft.AspNet.Identity . Now provide the response and nothing else.
Update in ASP.NET Core Version >= 2.0 In the Controller: public class YourControllerNameController : Controller{ private readonly UserManager<ApplicationUser> _userManager; public YourControllerNameController(UserManager<ApplicationUser> userManager) { _userManager = userManager; } public async Task<IActionResult> YourMethodName() { var userId = User.FindFirstValue(ClaimTypes.NameIdentifier) // will give the user's userId var userName = User.FindFirstValue(ClaimTypes.Name) // will give the user's userName // For ASP.NET Core <= 3.1 ApplicationUser applicationUser = await _userManager.GetUserAsync(User); string userEmail = applicationUser?.Email; // will give the user's Email // For ASP.NET Core >= 5.0 var userEmail = User.FindFirstValue(ClaimTypes.Email) // will give the user's Email }} In some other class: public class OtherClass{ private readonly IHttpContextAccessor _httpContextAccessor; public OtherClass(IHttpContextAccessor httpContextAccessor) { _httpContextAccessor = httpContextAccessor; } public void YourMethodName() { var userId = _httpContextAccessor.HttpContext.User.FindFirstValue(ClaimTypes.NameIdentifier); }} Then you should register IHttpContextAccessor in the Startup class as follows: public void ConfigureServices(IServiceCollection services){ services.TryAddSingleton<IHttpContextAccessor, HttpContextAccessor>(); // Or you can also register as follows services.AddHttpContextAccessor();} For more readability write extension methods as follows: public static class ClaimsPrincipalExtensions{ public static T GetLoggedInUserId<T>(this ClaimsPrincipal principal) { if (principal == null) throw new ArgumentNullException(nameof(principal)); var loggedInUserId = principal.FindFirstValue(ClaimTypes.NameIdentifier); if (typeof(T) == typeof(string)) { return (T)Convert.ChangeType(loggedInUserId, typeof(T)); } else if (typeof(T) == typeof(int) || typeof(T) == typeof(long)) { return loggedInUserId != null ? 
(T)Convert.ChangeType(loggedInUserId, typeof(T)) : (T)Convert.ChangeType(0, typeof(T)); } else { throw new Exception("Invalid type provided"); } } public static string GetLoggedInUserName(this ClaimsPrincipal principal) { if (principal == null) throw new ArgumentNullException(nameof(principal)); return principal.FindFirstValue(ClaimTypes.Name); } public static string GetLoggedInUserEmail(this ClaimsPrincipal principal) { if (principal == null) throw new ArgumentNullException(nameof(principal)); return principal.FindFirstValue(ClaimTypes.Email); }} Then use as follows: public class YourControllerNameController : Controller{ public IActionResult YourMethodName() { var userId = User.GetLoggedInUserId<string>(); // Specify the type of your UserId; var userName = User.GetLoggedInUserName(); var userEmail = User.GetLoggedInUserEmail(); }}public class OtherClass{ private readonly IHttpContextAccessor _httpContextAccessor; public OtherClass(IHttpContextAccessor httpContextAccessor) { _httpContextAccessor = httpContextAccessor; } public void YourMethodName() { var userId = _httpContextAccessor.HttpContext.User.GetLoggedInUserId<string>(); // Specify the type of your UserId; }}
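The behavior of the generic helper, look up one claim, convert it, and reject unsupported target types, can be sketched language-neutrally (the claim key and function names below are hypothetical stand-ins, not the real ClaimsPrincipal API):

```python
NAME_IDENTIFIER = "nameidentifier"   # stand-in for ClaimTypes.NameIdentifier

def get_logged_in_user_id(claims, target_type):
    # Mirrors GetLoggedInUserId<T>: strings pass through, a missing numeric
    # claim defaults to 0, and any other target type is rejected.
    value = claims.get(NAME_IDENTIFIER)
    if target_type is str:
        return value
    if target_type is int:
        return int(value) if value is not None else 0
    raise TypeError("Invalid type provided")

claims = {NAME_IDENTIFIER: "42", "name": "alice"}
assert get_logged_in_user_id(claims, str) == "42"
assert get_logged_in_user_id(claims, int) == 42
assert get_logged_in_user_id({}, int) == 0
```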
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/30701006', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4594495/']}
jdg_78392
stackexchange
llm_judgeable_groundtruth_similarity
172601
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to derive the formula for the radius of the circle inscribed in an equilateral triangle from scratch. Given $2*n$ = length of a side $H$ = the altitude of the triangle = $h + a$ $h$ = the long subdivision (from the center of the triangle to a vertex) $a$ = the short subdivision (from the center of the triangle to a side. Also the radius of the inscribed circle) By first deriving the altitude of the triangle $\displaystyle\begin{align}2 n&=\sqrt{H^2+n^2} \\H&=\sqrt{(2 n)^2-n^2} \\&=\sqrt{3}\;n \\\end{align}$ I have gotten to the reduced equation $n \sqrt{3} - a = \sqrt{a^2+n^2}$ $\displaystyle\begin{align}a+h=\sqrt{3}\;n \\h=\sqrt{3}\;n-a \\\\a^2+n^2=h^2 \\h=\sqrt{a^2+n^2} \\\end{align}$ Trying to solve for $a$, I know in advance that $a$ is $1/3$ and $h$ is $2/3$ of $H$, with $a = n\sqrt{3}/3$ This is of course the answer I wish to derive. In fact, plugging the equation given above into a system such as Mathematica will provide the correct answer. But I can't find out what the steps are, primarily because I know of no way to extract the $a$ term from within the square root term. Please, no trigonometry. I know there is a fast derivation involving tangents, etc, but this is more properly an algebra problem - how to solve the equation for $a$. Now provide the response and nothing else.
I’m going to use the more standard notation $\omega_1$ for the first uncountable ordinal. The most important thing to understand is that the linear order $\le$ on the set $[0,\omega_1]$ is a well-order: if $A$ is any non-empty subset of $[0,\omega_1]$ , $A$ has a smallest element with respect to $\le$ . That is, there is some $\alpha\in A$ such that $\alpha\le\beta$ for every $\beta\in A$ . I’ll use this fact in a bit, but first let’s look a little closer at the elements of $[0,\omega_1]$ . The first countably infinitely many elements are the finite ordinals; you can think of these as being simply the non-negative integers, $0,1,2,3,\ldots$ ; this is, so to speak, the low end of the order $\le$ . Now let $A=\{\alpha\in[0,\omega_1]:\alpha\text{ is not a finite ordinal}\}$ . The set of finite ordinals is countable, and $[0,\omega_1]$ is uncountable, so $A\ne\varnothing$ , and therefore $A$ has a least (or smallest) element; we call this element $\omega$ . The set $\{0,1,2,\dots,\}\cup\{\omega\}$ is still countable, so the set $$[0,\omega_1]\setminus\Big(\{0,1,2,\dots,\}\cup\{\omega\}\Big)$$ is non-empty and therefore has a least element; we call this element $\omega+1$ . This $\omega+1$ is the smallest ordinal after $\omega$ : it comes right after $\omega$ in the order, so it’s the successor of $\omega$ , just as $2$ is the successor of $1$ . At this point we have a low end of $[0,\omega_1]$ that looks like this: $$0,1,2,3,\dots,\omega,\omega+1$$ It should be intuitively clear that we can repeat this argument countably infinitely many times to produce $\omega+2,\omega+3,\dots\,$ , and indeed $\omega+n$ for every finite ordinal $n$ . 
Now we have an initial segment of $[0,\omega_1]$ that looks like this: $$0,1,2,3,\dots,\omega,\omega+1,\omega+2,\omega+3,\dots$$ The only ordinals in this set that are not successors are $0$ , since there’s nothing before it at all, and $\omega$ , since there is nothing immediately before it: no matter what finite ordinal $n$ you consider, $n+1\ne\omega$ . But this set is still countable, so there is a smallest ordinal in $$[0,\omega_1]\setminus\{0,1,2,\dots,\omega,\omega+1,\omega+2\dots\}\;;$$ this ordinal is denoted by $\omega\cdot 2$ , and like $\omega$ , it’s not a successor: it is not $\alpha+1$ for any $\alpha$ . In other words, it’s a limit ordinal , as is $\omega$ . ( $0$ is a bit anomalous: it’s not a successor ordinal, but it’s also not a limit ordinal.) After this the ordinals and their standard notations start getting a bit complicated, and we don’t really have to go into them to understand the topological properties of $[0,\omega_1)$ and $[0,\omega_1]$ . It’s also true that the usual set-theoretic definition of the ordinals makes each ordinal equal to the set of its predecessors: $$1=\{0\},2=\{0,1\},3=\{0,1,2\},\dots,\omega=\{0,1,2,\dots\},\omega+1=\{0,1,2,\dots,\omega\}\,,$$ and so on; this is the boxes within boxes that you mentioned. However, you don’t need to worry about this, either. Now let’s see why every strictly decreasing sequence in $[0,\omega_1]$ is finite. Suppose that we had an infinite sequence $\langle\alpha_n:n\in\Bbb N\rangle$ such that $\alpha_0>\alpha_1>\alpha_2>\ldots\,$ ; then the set $A=\{\alpha_n:n\in\Bbb N\}$ would be a non-empty subset of $[0,\omega_1]$ with no least element, contradicting the fact that $[0,\omega_1]$ is well-ordered. Infinite increasing sequences are no problem at all, however: for each $\alpha\in[0,\omega_1)$ , the set $[0,\alpha]$ is countable, so $[0,\omega_1)\setminus[0,\alpha]\ne\varnothing$ , so there are elements of $[0,\omega_1)$ bigger than $\alpha$ . 
The smallest of these is $\alpha+1$ , the successor of $\alpha$ . Thus, starting at any $\alpha\in[0,\omega_1)$ I can form an infinite increasing sequence $\langle\alpha,\alpha+1,\alpha+2,\dots\rangle$ whose members are all still in $[0,\omega_1)$ . Next, let’s see why $[0,\omega_1)$ is first countable. Let $\alpha\in[0,\omega_1)$ . Suppose first that $\alpha$ is a successor ordinal, say $\alpha=\beta+1$ ; then $(\beta,\alpha+1)=[\beta+1,\alpha+1)=[\alpha,\alpha+1)=\{\alpha\}$ is an open nbhd of $\alpha$ in the order topology, so $\alpha$ is an isolated point, and $\big\{\{\alpha\}\big\}$ is certainly a countable local base at $\alpha$ ! Note that $0$ behaves like a successor ordinal: $[0,1)=\{0\}$ is an open nbhd of $0$ , so $0$ is also an isolated point. Now suppose that $\alpha$ is a limit ordinal. For each $\beta<\alpha$ the set $(\beta,\alpha+1)=(\beta,\alpha]$ is an open nbhd of $\alpha$ . Every open nbhd of $\alpha$ contains an open interval around $\alpha$ , which in turn contains one of these intervals $(\beta,\alpha]$ , so $$\mathscr{B}_\alpha=\Big\{(\beta,\alpha]:\beta<\alpha\Big\}$$ is a local base at $\alpha$ . Finally, $\alpha<\omega_1$ , and $\omega_1$ is the first ordinal with uncountably many predecessors, so there are only countably many $\beta<\alpha$ , and $\mathscr{B}_\alpha$ is therefore countable. Thus, every point of $[0,\omega_1)$ has a countable local base, and $[0,\omega_1)$ is therefore first countable. Note that $[0,\omega_1]$ is not first countable, because there is no countable local base at $\omega_1$ : if $\big\{(\alpha_n,\omega_1]:n\in\Bbb N\big\}$ is any countable family of open intervals containing $\omega_1$ , let $A=\bigcup_{n\in\Bbb N}[0,\alpha_n]$ . Then $A$ , being the union of countably many countable sets, is a countable subset of $[0,\omega_1)$ , so $[0,\omega_1)\setminus A\ne\varnothing$ . 
Pick any $\beta\in[0,\omega_1)\setminus A$ ; then $(\beta,\omega_1]$ is an open nbhd of $\omega_1$ that does not contain any of the sets $(\alpha_n,\omega_1]$ , and therefore the family $\big\{(\alpha_n,\omega_1]:n\in\Bbb N\big\}$ is not a local base at $\omega_1$ . That is, no countable family is a local base at $\omega_1$ , so $[0,\omega_1]$ is not first countable at $\omega_1$ . Finally, let’s look at compactness. Suppose that $\mathscr{U}$ is an open cover of $[0,\omega_1]$ . Then there is some $U_0\in\mathscr{U}$ such that $\omega_1\in U_0$ . This $U_0$ must contain a basic open nbhd of $\omega_1$ , so there must be an $\alpha_1<\omega_1$ such that $(\alpha_1,\omega_1]\subseteq U_0$ . $\mathscr{U}$ covers $[0,\omega_1]$ , so there is some $U_1\in\mathscr{U}$ such that $\alpha_1\in U_1$ . This $U_1$ must contain a basic open nbhd of $\alpha_1$ , so there is some $\alpha_2<\alpha_1$ such that $(\alpha_2,\alpha_1]\subseteq U_1$ . Continuing in this fashion, we can construct a decreasing sequence $\alpha_1>\alpha_2>\alpha_3>\ldots\,$ , which, as we saw before, must be finite. Thus, there must be some $n\in\Bbb Z^+$ such that $\alpha_n=0$ , and at that point $\{U_0,\dots,U_n\}$ is a finite subcover of $\mathscr{U}$ . The space $[0,\omega_1)$ , on the other hand, is not compact. It is countably compact, however. The easiest way to prove this is to show that $[0,\omega_1)$ has no infinite, closed, discrete subset. Suppose that $A$ is a countably infinite subset of $[0,\omega_1)$ . Let $\beta$ be the smallest element of $[0,\omega_1)$ that is bigger than infinitely many elements of $A$ . (You’ll have to explain why $\beta$ exists, using the fact that $A$ is countable and $[0,\omega_1)$ is well-ordered.) Finally, show that $\beta$ is a limit point of $A$ . Then either $\beta\notin A$ , in which case $A$ isn’t closed, or $\beta\in A$ , in which case $A$ isn’t discrete.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/172601', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/36100/']}
jdg_78393
stackexchange
llm_judgeable_groundtruth_similarity
204359
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have an e-commerce site host name ( example.com ) and want to install an Extended Validation TLS certificate for it. But I use cookieless static content served from another host name ( static.example.com ) for images of the website. To be "Google Shopping friendly" and for secure e-commerce reasons, can I use Let's Encrypt or other domain-validated TLS certificates for static.example.com and EV certificates for example.com? I have gone through this kind of question online, but none of those was quite similar to my situation. Now provide the response and nothing else.
It is possible to have multiple certificates from different vendors for different parts of the domain and even to have overlapping certificates, i.e. multiple certificates which could be used to authenticate the same domain. Browsers actually only care that a specific certificate is valid for the specific domain it is used on and don't care if the same certificate could also be used for other domains. They also only care that all content is served over HTTPS (i.e. no mixed content) but don't care if some content is served with an EV certificate while other content is served with a DV certificate only. Apart from that, I recommend reading Extended Validation Certificates are Dead to get a better sense of whether EV certificates are worth their money at all.
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/204359', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/200646/']}
jdg_78394
stackexchange
llm_judgeable_groundtruth_similarity
57612
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I received a strange issue reported by one of my web application users. Apparently they are getting "ads" within the browser window when using my application. I didn't add advertisements at all. It's a straightforward ASP.NET web application. What I believe is happening is that the end user's PC is infected with malware that "hops" onto their browser session to display the ads. This could happen with any website, I guess. I recommended they scan their PC with Malwarebytes . Is this something that can only be controlled by the end-user, or are there steps that web application developers can take to prevent this from happening? In addition, some hints regarding the mechanism employed by this type of malware would be nice. Now provide the response and nothing else.
While I also believe the most likely explanation is definitely that the user's system is infected with adware, I would like to present some alternative explanations. There is still a small possibility that your website is indeed serving advertisement. It isn't unheard of for criminals to hack into insecure web applications and modify them to deliver ads or malware. When they do, they might take precautions to conceal this from the administrator, like serving unmodified output to certain IP ranges. If you receive similar reports from many other users, you might want to check whether any source files on the server got modified somehow. Your server could also serve advertisement because of an unlicensed 3rd party library you are using. There are, for example, DotNetNuke modules which inject advertisement into the website when they aren't properly licensed (thanks to Jasmine for the comment). Another possible explanation could be that the advertisement isn't injected by you or by adware on the user's machine but by a man-in-the-middle. The user might use an anonymous proxy server to improve their privacy. Some of these proxy servers generate revenue by injecting their own advertisement into the websites they relay.
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/57612', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/46281/']}
jdg_78395
stackexchange
llm_judgeable_groundtruth_similarity
5315
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am relating age to a binary outcome in a logistic model (or, more to the point I would like to). However, the distribution of the ages looks like this:
nn <- 1000
age <- c(rpois(nn / 3, lambda = 0.5), rnorm(nn / 3, mean = 10, sd = 2), runif(nn / 3, min = 0, max = 15))
age <- age[which(age > 0)]
How would you approach this problem? Thanks! Now provide the response and nothing else.
Logistic (or other) regression does not make any assumptions about the distribution of the predictors, they are only assumed to be known exactly. So there is nothing stopping you from going ahead and using age in your regression model. The issue you have to actually worry about is whether the effect of age is linear on the logit scale, but that has nothing to do with its distribution.
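To illustrate the point concretely, here is a sketch (plain Python, with a hypothetical data-generating model chosen for the demonstration, not taken from the question): ages are drawn from a lumpy, multi-modal mixture much like the one in the question, yet a maximum-likelihood logistic fit recovers the true slope without trouble:

```python
import math
import random

random.seed(42)

# ages from a lumpy, multi-modal mixture, much like the question's example
ages = ([random.uniform(0.0, 1.0) for _ in range(200)]
        + [random.gauss(10.0, 2.0) for _ in range(200)])

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical true model: logit(p) = -2 + 0.3 * age
b0_true, b1_true = -2.0, 0.3
y = [1 if random.random() < sigmoid(b0_true + b1_true * a) else 0 for a in ages]

# fit by plain gradient ascent on the log-likelihood (age centered for conditioning)
mean_age = sum(ages) / len(ages)
x = [a - mean_age for a in ages]
b0, b1 = 0.0, 0.0
lr = 0.1
for _ in range(4000):
    g0 = g1 = 0.0
    for xi, yi in zip(x, y):
        resid = yi - sigmoid(b0 + b1 * xi)
        g0 += resid
        g1 += resid * xi
    b0 += lr * g0 / len(x)
    b1 += lr * g1 / len(x)

# the slope is unchanged by centering; it lands near the true value
print(round(b1, 2))
```

The skewed, multi-modal predictor distribution does not bias the slope estimate; what would matter is whether the effect of age is actually linear on the logit scale.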
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/5315', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/1991/']}
jdg_78396
stackexchange
llm_judgeable_groundtruth_similarity
1281303
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two questions 1- what is the "11pt average precision metric"? 2- and how to use it in information retrieval? Thanks Now provide the response and nothing else.
This comes mainly from the book " Introduction to Information Retrieval " by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. In ranked retrieval, you have precision and recall. Precision is how accurate the system is in the documents it returns (1.0 means it only returned relevant documents). Recall is what percentage of the relevant documents the system found (1.0 means it found them all). So you can adjust the recall of the system by returning more and more documents, but there is a tradeoff with precision, because you are likely to admit more error. Interpolated precision at a recall level r is defined as the maximum precision observed at any recall level r' >= r; in other words, it is the best precision you can achieve at that recall or beyond. Then in 11-pt interpolated average precision, you are looking at 11 recall levels (0.0, 0.1, 0.2, ..., 1.0) and finding the interpolated precision at each point. You average these scores across all of the different queries or information needs (as the Manning & Schütze book puts it) to find your system's score. The typical usage in information retrieval is for evaluation. It helps you compare one system to another, but where it really helps is in letting you see how your system is changing as you tweak parameters. Does changing this setting improve or damage my 11-pt interpolated average precision? The benefit is that it measures across all information needs in your collection, so it helps to give you a "big picture" view. But be careful about relying too heavily on these metrics, since they are only as good as the evaluation set you are using.
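A minimal sketch of the computation for a single query (the function name and input format are made up for illustration):

```python
def eleven_point_average_precision(ranked_relevance, n_relevant):
    """ranked_relevance: 1/0 relevance flags in ranked order;
    n_relevant: total number of relevant documents in the collection."""
    # (recall, precision) observed after each rank
    points = []
    hits = 0
    for rank, rel in enumerate(ranked_relevance, start=1):
        hits += rel
        points.append((hits / n_relevant, hits / rank))

    # interpolated precision at r = max precision at any recall >= r
    levels = [i / 10 for i in range(11)]
    interpolated = []
    for r in levels:
        candidates = [p for recall, p in points if recall >= r]
        interpolated.append(max(candidates) if candidates else 0.0)
    return sum(interpolated) / len(levels)

# a perfect ranking (all relevant documents first) scores 1.0
print(eleven_point_average_precision([1, 1], 2))  # 1.0
```

To score a whole system, you would average this value over all queries in the evaluation set.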
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1281303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/459737/']}
jdg_78397
stackexchange
llm_judgeable_groundtruth_similarity
12968499
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have defined a couple of lists for capturing registration details of the user. There is a list for holding personal details of the user and there is another list which holds professional details of the user. The two lists are used over two different pages in which name and ID both are required. How do I get the name and ID populated automatically into the professional list from the personal list so that the user doesn't have to fill in the same information twice? Is it possible without doing any coding stuff? Now provide the response and nothing else.
The predicate needs to take two parameters and return bool. As your function is a member function it has the wrong signature. In addition, you may need to be able to compare Value to int, Value to Value, int to Value and int to int using your functor. struct CompareValueAndTime{ bool operator()( const Value& v, int time ) const { return v.time < time; } bool operator()( const Value& v1, const Value& v2 ) const { return v1.time < v2.time; } bool operator()( int time1, int time2 ) const { return time1 < time2; } bool operator()( int time, const Value& v ) const { return time < v.time; }}; That is rather cumbersome, so let's reduce it: struct CompareValueAndTime{ int asTime( const Value& v ) const // or static { return v.time; } int asTime( int t ) const // or static { return t; } template< typename T1, typename T2 > bool operator()( T1 const& t1, T2 const& t2 ) const { return asTime(t1) < asTime(t2); }}; then: std::lower_bound(valueContainer.begin(), valueContainer.end(), time, CompareValueAndTime() ); There are a couple of other errors too, e.g. no semicolon at the end of the class declaration, plus the fact that members of a class are private by default which makes your whole class private in this case. Did you miss a public: before the constructor? Your function GetLocationForTime doesn't return a value. You need to take the result of lower_bound and subtract begin() from it. The function should also be const. If the intention of this call is to insert here, then consider the fact that inserting in the middle of a vector is an O(N) operation and therefore vector may be the wrong collection type here. Note that the lower_bound algorithm only works on pre-sorted collections. If you want to be able to look up on different members without continually resorting, you will want to create indexes on these fields, possibly using boost's multi_index
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12968499', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1758386/']}
jdg_78398
stackexchange
llm_judgeable_groundtruth_similarity
131005
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am self-studying Hoffman and Kunze's book Linear Algebra . This is exercise $4$ from page $115$. It is in the section of The transpose of a Linear Transformation . Let $V$ be a finite-dimensional vector space over the field $\mathbb{F}$ and let $T$ be a linear operator on $V$. Let $c$ be a scalar and suppose there is a non-zero vector $\alpha$ in $V$ such that $T\alpha=c\alpha.$ Prove that there is a non-zero linear functional $f$ on $V$ such that $T^{t}f=cf.$ ($T^{t}$ is the transpose of $T$.) I tried to solve this question by induction on $\dim V$. I was able to show the base case, that is, when $\dim V=1$, but I got stuck in the inductive step. If $\alpha $ is a non-zero vector, then we can find a base $\mathcal{B}=\{\alpha,\alpha_{1},\ldots,\alpha_{m}\}$ of $V$. We can write $V=W_{1}\oplus W_{2}$, where $W_{1}=\langle \alpha \rangle$ and $W_{2}=\langle \alpha_{1},\ldots,\alpha_{m}\rangle.$ I can not show that $T(W_{2})\subset W_{2}.$ Anyway, $\alpha \notin W_{2}.$ EDIT: If $T$ is a linear transformation from $V$ to $W$, then $T^{t}$ is the linear transformation from $W^{\star}$ into $V^{\star}$ such that $$(T^{t}f)(\alpha)=f(T\alpha)$$ for every $f\in W^{\star}$ and $\alpha \in V$. Now provide the response and nothing else.
Here's a basis-free construction. If $T \alpha = c \alpha$, for every $f \in V^*$ we have $(T^t f - c f)(\alpha) = f(T \alpha - c \alpha) = 0$. Since $\alpha \ne 0$, there is some $g \in V^*$ with $g(\alpha) \ne 0$. Thus the range of $T^t - c I$ is not all of $V^*$, which, since $V^*$ is a finite-dimensional space, means the kernel of $T^t - c I$ is not $\{0\}$, i.e. there is $f \in V^*$ with$T^t f - c f = 0$. Another way of putting it: any linear operator between finite-dimensional vector spaces has the same rank as its transpose (e.g. because $\text{Ker}(T) = \left(\text{Ran} T^t\right)^\perp$). So $\text{Ker}(T - c I) \ne \{0\}$ iff $\text{rank}(T-cI) < \dim V$iff $\text{rank}(T^t-cI) < \dim V^*$ iff $\text{Ker}(T^t - cI) \ne \{0\}$.
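A concrete numeric illustration (a hand-picked $2\times 2$ example, separate from the proof): the matrix $T$ below has eigenvalue $c=2$ with eigenvector $\alpha=(1,0)$, and the functional $f=(1,-1)$, found by solving the small linear system by hand, satisfies $T^{t}f=cf$:

```python
def matvec(M, v):
    # multiply a matrix (list of rows) by a vector
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(col) for col in zip(*M)]

T = [[2, 1],
     [0, 3]]
c, alpha = 2, [1, 0]
assert matvec(T, alpha) == [c * t for t in alpha]      # T(alpha) = c * alpha

f = [1, -1]                                            # the promised functional
assert matvec(transpose(T), f) == [c * t for t in f]   # T^t(f) = c * f
print("ok")
```

This is just the matrix form of the general fact used above: a matrix and its transpose have the same rank, hence the same eigenvalues.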
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/131005', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23505/']}
jdg_78399
stackexchange
llm_judgeable_groundtruth_similarity
647277
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Charging while using the device reduces battery lifespan *. The charger gets confused in the Constant Current Phase because of the additional current drawn by the device. Windows OS and some laptop firmwares offer the option to "limit charge to 80%". How does that work to extend battery lifespan? Not tagging laptop as that's only my main, but far from single, concern. Had I been asking "how to charge my laptop" I would have gone to https://superuser.com/ . Not the case. I seek the physics of it. * - this interpretation of mine is wrong , for more information see the comments and answers Now provide the response and nothing else.
Charging a LiIon battery to the highest possible voltage gives it maximum capacity, so that is the preferred setting if you intend to use your laptop as a portable computer, because it will run for a longer time. However, higher voltage will cause the battery to age faster. So, when connected to mains, keeping the battery at 100% charge all the time is not ideal. A lower state of charge puts less stress on the battery and will allow it to keep its capacity for a longer time. The value is a compromise: 50% would be even better for battery preservation, but it is impractical: in case you actually need to take the laptop away from a socket, runtime will be low and the battery will end up in a deeper discharge, which also ages it faster. So, 80% seems like a good choice. Basically the end-of-charge voltage on a LiIon battery is always a compromise between how many years you want the battery to last, and how much runtime you want out of it. There is another factor: at 4.2V the charger needs to proceed with caution and reduce current because this is close to the overcharge limit. So when the laptop draws pulsed current according to CPU load, the battery may go through many tiny charge-discharge cycles depending on how the motherboard is designed. If the battery is at a lower state of charge like 80%, the charger does not have to be "careful" to avoid overcharging at all, so it can provide a lot more current without risk, which means, if it is well designed, the charger will provide for all the current the laptop uses, and the battery will not go through these constant microcycles. That's another reason it will stay in good shape for a much longer time.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/647277', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/9910/']}
jdg_78400
stackexchange
llm_judgeable_groundtruth_similarity
33267820
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm running pure JUnit4 java tests over my pure java files on my project but I can't find a way to use @VisibleForTesting clearly without making the thing manually public. Ex:
@VisibleForTesting
public Address getAddress() { return mAddress; }
The method has to be public to let it be "public" to tests, but in that case the annotation doesn't make sense, right? Why not just use a comment if the annotation will not do anything? Now provide the response and nothing else.
Make the method package-private and the test will be able to see it, if the test is in the corresponding test package (same package name as the production code).
@VisibleForTesting
Address getAddress() { return mAddress; }
Also consider refactoring your code so you don't need to explicitly test a private method; try testing the behaviour of a public interface. Code that is hard to test can be an indication that improvements can be made to the production code. The point of an annotation is that it is a convention and can be used in static code analysis, whereas a comment cannot.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33267820', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/273119/']}
jdg_78401
stackexchange
llm_judgeable_groundtruth_similarity
2255617
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to learn how to do proofs by contradiction. The proof is, "Prove by Contradiction that there are no positive real roots of $x^6 + 2x^3 +4x + 5$" I understand that now I am attempting to prove that there is a positive real root of this equation, so I am able to contradict myself within the proof. I just don't even know where to start. Now provide the response and nothing else.
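For this particular polynomial the contradiction is immediate: if $r>0$ then every term of $r^6+2r^3+4r+5$ is positive, so $p(r)>5$, contradicting the assumption $p(r)=0$. A quick numeric sanity check of that observation (a sketch; the helper name is made up):

```python
def p(x):
    return x**6 + 2 * x**3 + 4 * x + 5

# every term is strictly positive for x > 0, so p(x) > 5 there; spot-check a grid
assert all(p(k / 10) > 5 for k in range(1, 500))

# p does have real roots, but they are negative: p(-1) = 1 - 2 - 4 + 5 = 0
assert p(-1) == 0
print("ok")
```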
This will depend somewhat on your definition of a ring and a ring homomorphism, as there are a few standard ways of defining these notions. Firstly, for rings, there is sometimes a distinction between a ring and a unital ring , which additionally has a multiplicative identity $1$ (whereas a ring need not have one). Secondly, and more importantly, even when ring is taken to mean unital ring, there is sometimes a distinction between a ring homomorphism (which simply preserves addition and multiplication) and a unital ring homomorphism (which must additionally preserve the multiplicative identity). If a ring homomorphism need not preserve the multiplicative identities (assuming these are present in our rings), then between any two rings $R,S$ there is the zero map $\mathbf{0}: R \to S$ defined by $\mathbf{0}(r) = 0$. As such, if your ring homomorphisms need not be unital, then you will always have such trivial homomorphisms. Note that a non-trivial homomorphism between unital rings will usually preserve $1$, so whether you instead want to consider "non-trivial" or "unital" ring homomorphisms, the answers given by Hurkyl or Lord Shark give examples of (unital) rings which don't admit unital/non-trivial ring homomorphisms in both directions. For example, there is the obvious inclusion of $\mathbb{Z}$ into $\mathbb{Q}$ (and this is a unital ring homomorphism), but a non-trivial homomorphism $\varphi: \mathbb{Q} \to \mathbb{Z}$ must have $2\varphi(1/2) = \varphi(1)=1$, so $\varphi(1/2)$ is some integer which, when multiplied by $2$, gives $1$. Of course, this can't happen, so we get a contradiction. (I said above that a non-trivial ring homomorphism between unital rings will usually preserve $1$. 
A quick and easy example of a non-trivial ring homomorphism not preserving $1$ between unital rings is given by the following: Take $\mathbb{F}_2 = \mathbb{Z}/2\mathbb{Z}$ and consider the inclusion of $\mathbb{F}_2$ into $\mathbb{F}_2^2$ by sending $0\mapsto (0,0)$ and $1\mapsto (1,0)$. Now, $(1,0)$ is not the multiplicative identity of $\mathbb{F}_2^2$, as $(1,1)$ is. However, it is a multiplicative identity of the image of the inclusion. In general, a ring homomorphism sends a multiplicative identity to a multiplicative identity of the image. In many cases, especially rings which have some sort of multiplicative cancellation property, that will be enough to mean that it actually sends $1$ to $1$. This is why I was able to justify the fact that $\varphi(1)=1$ above. For more about this idea, see here .)
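That $\mathbb{F}_2\hookrightarrow\mathbb{F}_2\times\mathbb{F}_2$ example can be checked mechanically (a small sketch; arithmetic is componentwise mod $2$):

```python
def phi(a):
    # the inclusion described above: 0 -> (0, 0), 1 -> (1, 0)
    return (a % 2, 0)

def add(u, v):
    return tuple((x + y) % 2 for x, y in zip(u, v))

def mul(u, v):
    return tuple((x * y) % 2 for x, y in zip(u, v))

for a in (0, 1):
    for b in (0, 1):
        assert phi((a + b) % 2) == add(phi(a), phi(b))   # preserves addition
        assert phi((a * b) % 2) == mul(phi(a), phi(b))   # preserves multiplication

# yet phi(1) = (1, 0) is not the identity (1, 1) of F_2 x F_2 ...
assert phi(1) != (1, 1)
# ... although it does act as an identity on the image of phi
assert mul(phi(1), phi(1)) == phi(1)
print("ok")
```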
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2255617', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/440896/']}
jdg_78402
stackexchange
llm_judgeable_groundtruth_similarity
50639
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Does anyone know which sebool it is to allow httpd write access to /home/user/html? When I disable SELinux ( echo 0 > /selinux/enforce ) I can write, so my problem is definitely related to SELinux. I just don't know which one is the right one without opening a big hole and Google isn't being much help.
#[/home] ls -Z
drwxr-x---. user apache unconfined_u:object_r:user_home_dir_t:s0 user
# sestatus -b
Policy booleans:
abrt_anon_write off
abrt_handle_event off
allow_console_login on
allow_cvs_read_shadow off
allow_daemons_dump_core on
allow_daemons_use_tcp_wrapper off
allow_daemons_use_tty on
allow_domain_fd_use on
allow_execheap off
allow_execmem on
allow_execmod on
allow_execstack on
allow_ftpd_anon_write off
allow_ftpd_full_access off
allow_ftpd_use_cifs off
allow_ftpd_use_nfs off
allow_gssd_read_tmp on
allow_guest_exec_content off
allow_httpd_anon_write off
allow_httpd_mod_auth_ntlm_winbind off
allow_httpd_mod_auth_pam off
allow_httpd_sys_script_anon_write off
allow_java_execstack off
allow_kerberos on
allow_mount_anyfile on
allow_mplayer_execstack off
allow_nsplugin_execmem on
allow_polyinstantiation off
allow_postfix_local_write_mail_spool on
allow_ptrace off
allow_rsync_anon_write off
allow_saslauthd_read_shadow off
allow_smbd_anon_write off
allow_ssh_keysign off
allow_staff_exec_content on
allow_sysadm_exec_content on
allow_unconfined_nsplugin_transition off
allow_user_exec_content on
allow_user_mysql_connect off
allow_user_postgresql_connect off
allow_write_xshm off
allow_xguest_exec_content off
allow_xserver_execmem off
allow_ypbind off
allow_zebra_write_config on
authlogin_radius off
cdrecord_read_content off
clamd_use_jit off
cobbler_anon_write off
cobbler_can_network_connect off
cobbler_use_cifs off
cobbler_use_nfs off
condor_domain_can_network_connect off
cron_can_relabel off
dhcpc_exec_iptables off
domain_kernel_load_modules off
exim_can_connect_db off
exim_manage_user_files off
exim_read_user_files off
fcron_crond off
fenced_can_network_connect off
fenced_can_ssh off
ftp_home_dir on
ftpd_connect_db off
ftpd_use_passive_mode off
git_cgit_read_gitosis_content off
git_session_bind_all_unreserved_ports off
git_system_enable_homedirs off
git_system_use_cifs off
git_system_use_nfs off
global_ssp off
gpg_agent_env_file off
gpg_web_anon_write off
httpd_builtin_scripting on
httpd_can_check_spam off
httpd_can_network_connect off
httpd_can_network_connect_cobbler off
httpd_can_network_connect_db on
httpd_can_network_memcache off
httpd_can_network_relay off
httpd_can_sendmail on
httpd_dbus_avahi on
httpd_enable_cgi on
httpd_enable_ftp_server off
httpd_enable_homedirs on
httpd_execmem off
httpd_manage_ipa off
httpd_read_user_content off
httpd_setrlimit off
httpd_ssi_exec off
httpd_tmp_exec off
httpd_tty_comm on
httpd_unified on
httpd_use_cifs off
httpd_use_gpg off
httpd_use_nfs off
httpd_use_openstack off
icecast_connect_any off
init_upstart on
irssi_use_full_network off
logging_syslogd_can_sendmail off
mmap_low_allowed off
mozilla_read_content off
mysql_connect_any off
named_write_master_zones off
ncftool_read_user_content off
nscd_use_shm on
nsplugin_can_network on
openvpn_enable_homedirs on
piranha_lvs_can_network_connect off
pppd_can_insmod off
pppd_for_user off
privoxy_connect_any on
puppet_manage_all_files off
puppetmaster_use_db off
qemu_full_network on
qemu_use_cifs on
qemu_use_comm off
qemu_use_nfs on
qemu_use_usb on
racoon_read_shadow off
rgmanager_can_network_connect off
rsync_client off
rsync_export_all_ro off
rsync_use_cifs off
rsync_use_nfs off
samba_create_home_dirs off
samba_domain_controller off
samba_enable_home_dirs off
samba_export_all_ro off
samba_export_all_rw off
samba_run_unconfined off
samba_share_fusefs off
samba_share_nfs off
sanlock_use_nfs off
sanlock_use_samba off
secure_mode off
secure_mode_insmod off
secure_mode_policyload off
sepgsql_enable_users_ddl on
sepgsql_unconfined_dbadm on
sge_domain_can_network_connect off
sge_use_nfs off
smartmon_3ware off
spamassassin_can_network off
spamd_enable_home_dirs on
squid_connect_any on
squid_use_tproxy off
ssh_chroot_rw_homedirs off
ssh_sysadm_login off
telepathy_tcp_connect_generic_network_ports off
tftp_anon_write off
tor_bind_all_unreserved_ports off
unconfined_login on
unconfined_mmap_zero_ignore off
unconfined_mozilla_plugin_transition off
use_fusefs_home_dirs off
use_lpd_server off
use_nfs_home_dirs on
use_samba_home_dirs off
user_direct_dri on
user_direct_mouse off
user_ping on
user_rw_noexattrfile on
user_setrlimit on
user_tcp_server off
user_ttyfile_stat off
varnishd_connect_any off
vbetool_mmap_zero_ignore off
virt_use_comm off
virt_use_fusefs off
virt_use_nfs off
virt_use_samba off
virt_use_sanlock off
virt_use_sysfs on
virt_use_usb on
virt_use_xserver off
webadm_manage_user_files off
webadm_read_user_files off
wine_mmap_zero_ignore off
xdm_exec_bootloader off
xdm_sysadm_login off
xen_use_nfs off
xguest_connect_network on
xguest_mount_media on
xguest_use_bluetooth on
xserver_object_manager off
Now provide the response and nothing else.
None of them, at least not by itself. You must either give the directory structure a context of httpd_sys_rw_content_t , or give them a context of public_content_rw_t and enable allow_httpd_anon_write and/or allow_httpd_sys_script_anon_write as follows: chcon -R -t httpd_sys_rw_content_t /path See the httpd_selinux(8) man page for details.
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/50639', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/15401/']}
jdg_78403
stackexchange
llm_judgeable_groundtruth_similarity
19527284
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a very strange error I have no explanation for. So my setup is very simple: I have a solution with 2 projects, let's name them ProjectA and ProjectB . ProjectA references some NuGet packages and if I build ProjectA , I can see all assemblies in the output directory, the bin folder. ProjectB now references ProjectA , but if I build ProjectB , I have the ProjectA assembly in the output directory, but not the NuGet packages referenced from ProjectA . The reference from ProjectB to ProjectA is added with References -> Add Reference... -> Solution -> ProjectA . I also created a little test project covering this case, but it works just fine in my test project. Any ideas? Now provide the response and nothing else.
If the dependency graph is more complex than what you described you might have some versioning issues. Set the MSBuild output verbosity to Diagnostic to see what's happening: VS -> Options -> Projects and Solutions -> Build and Run -> MSBuild project build output verbosity -> Diagnostic. Compile the solution. If you find some issue similar to: There was a conflict between "X, Version=2, Culture=neutral, PublicKeyToken=null" and "X, Version=1, Culture=neutral, PublicKeyToken=null". "X, Version=2, Culture=neutral, PublicKeyToken=null" was chosen because it was primary and "X, Version=1, Culture=neutral, PublicKeyToken=null" was not. Try fixing that issue by using the same dependency version everywhere and compile again.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19527284', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1385688/']}
jdg_78404
stackexchange
llm_judgeable_groundtruth_similarity
3876040
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i'm looking to understand virtual override overload reintroduce when applied to object constructors. Every time i randomly add keywords until the compiler shuts up - and (after 12 years of developing with Delphi) i'd rather know what i'm doing, rather than trying things randomly. Given a hypothetical set of objects: TComputer = class(TObject)public constructor Create(Cup: Integer); virtual;end;TCellPhone = class(TComputer)public constructor Create(Cup: Integer; Teapot: string); virtual;end;TiPhone = class(TCellPhone)public constructor Create(Cup: Integer); override; constructor Create(Cup: Integer; Teapot: string); override;end; The way i want them to behave is probably obvious from the declarations, but: TComputer has the simple constructor, and descendants can override it TCellPhone has an alternate constructor, and descendants can override it TiPhone overrides both constructors, calling the inherited version of each Now that code doesn't compile. i want to understand why it doesn't work. i also want to understand the proper way to override constructors. Or perhaps you could never override constructors? Or perhaps it is perfectly acceptable to override constructors? Perhaps you should never have multiple constructors, perhaps it is perfectly acceptable to have multiple constructors. i want to understand the why . Fixing it would then be obvious. See also Delphi: How to hide ancestor constructors? Reintroducing functions in Delphi Delphi: How to add a different constructor to a descendant? Edit: i'm also looking to get some reasoning on the order of virtual , override , overload , reintroduce . 
Because when trying all combinations of keywords, the number of combinations explodes: virtual; overload; virtual; override; override; overload; override; virtual; virtual; override; overload; virtual; overload; override; overload; virtual; override; override; virtual; overload; override; overload; virtual; overload; override; virtual; etc Edit 2: i guess we should begin with " is the object hierarchy given even possible? " If not, why not? For example, is it fundamentally incorrect to have a constructor from an ancestor? TComputer = class(TObject)public constructor Create(Cup: Integer); virtual;end;TCellPhone = class(TComputer)public constructor Create(Cup: Integer; Teapot: string); virtual;end; i would expect that TCellPhone now has two constructors. But i can't find the combination of keywords in Delphi to make it think that's a valid thing to do. Am i fundamentally wrong in thinking i can have two constructors here in TCellPhone ? Note: Everything below this line is not strictly needed to answer the question - but it does help to explain my thinking. Perhaps you can see, based on my thought processes, what fundamental piece i'm missing that makes everything clear. Now these declarations don't compile: //Method Create hides virtual method of base type TComputer:TCellPhone = class(TComputer) constructor Create(Cup: Integer; Teapot: string); virtual;//Method Create hides virtual method of base type TCellPhone:TiPhone = class(TCellPhone)public constructor Create(Cup: Integer); override; constructor Create(Cup: Integer; Teapot: string); overload; <--------end; So first i'll trying fixing TCellPhone . i'll start by randomly adding the overload keyword (i know i don't want reintroduce because that would hide the other constructor, which i don't want): TCellPhone = class(TComputer)public constructor Create(Cup: Integer; Teapot: string); virtual; overload;end; But that fails: Field definition not allowed after methods or properties . 
i know from experience that, even though i don't have a field after a method or property, if i reverse the order of the virtual and overload keywords: Delphi will shut up: TCellPhone = class(TComputer)public constructor Create(Cup: Integer; Teapot: string); overload; virtual; end; But i still get the error: Method 'Create' hides virtual method of base type 'TComputer' So i try removing both keywords: TCellPhone = class(TComputer)public constructor Create(Cup: Integer; Teapot: string);end; But i still get the error: Method 'Create' hides virtual method of base type 'TComputer' So i resign myself to now trying reintroduce : TCellPhone = class(TComputer)public constructor Create(Cup: Integer; Teapot: string); reintroduce;end; And now TCellPhone compiles, but it has made things much worse for TiPhone: TiPhone = class(TCellPhone)public constructor Create(Cup: Integer); override; <-----cannot override a static method constructor Create(Cup: Integer; Teapot: string); override; <-----cannot override a static methodend; Both are complaining that i cannot override them, so i remove the override keyword: TiPhone = class(TCellPhone)public constructor Create(Cup: Integer); constructor Create(Cup: Integer; Teapot: string);end; But now the 2nd create says it must be marked with overload, which i do (in fact i'll mark both as overload, since i know what will happen if i don't): TiPhone = class(TCellPhone)public constructor Create(Cup: Integer); overload; constructor Create(Cup: Integer; Teapot: string); overload;end; All all is good in the interface section. Unfortunately my implementations won't work. My single parameter constructor of TiPhone cannot call the inherited constructor: constructor TiPhone.Create(Cup: Integer);begin inherited Create(Cup); <---- Not enough actual parametersend; Now provide the response and nothing else.
I see two reasons your original set of declarations shouldn't compile cleanly: There should be a warning in TCellPhone that its constructor hides the method of the base class. This is because the base-class method is virtual , and the compiler worries that you're introducing a new method with the same name without overriding the base-class method. It doesn't matter that the signatures differ. If your intention is indeed to hide the method of the base class, then you need to use reintroduce on the descendant declaration, as one of your blind guesses showed. The sole purpose of that directive is to quell the warning; it has no effect on run-time behavior. Ignoring what's going to happen with TIPhone later on, the following TCellPhone declaration is what you'd want. It hides the ancestor method, but you want it to be virtual as well. It won't inherit the virtualness of the ancestor method because they're two completely separate methods that just happen to have the same name. Therefore, you need to use virtual on the new declaration as well. TCellPhone = class(TComputer)public constructor Create(Cup: Integer; Teapot: string); reintroduce; virtual;end; The base-class constructor, TComputer.Create , is also hiding a method of its ancestor, TObject.Create , but since the method in TObject is not virtual, the compiler doesn't warn about it. Hiding non-virtual methods happens all the time and is generally unremarkable. You should get an error in TIPhone because there is no longer any one-argument constructor to override. You hid it in TCellPhone . Since you want to have two constructors, reintroduce clearly wasn't the right choice to use earlier. You don't want to hide the base-class constructor; you want to augment it with another constructor. Since you want both constructors to have the same name, you need to use the overload directive. 
That directive needs to be used on the original declarations (the first time each distinct signature is introduced) and on subsequent declarations in descendants. I thought it was required on all declarations (even the base class), and it doesn't hurt to do that, but I guess it's not required. So, your declarations should look like this:

TComputer = class(TObject)
public
  constructor Create(Cup: Integer);
    overload; // Allow descendants to add more constructors named Create.
    virtual; // Allow descendants to re-implement this constructor.
end;

TCellPhone = class(TComputer)
public
  constructor Create(Cup: Integer; Teapot: string);
    overload; // Add another method named Create.
    virtual; // Allow descendants to re-implement this constructor.
end;

TiPhone = class(TCellPhone)
public
  constructor Create(Cup: Integer);
    override; // Re-implement the ancestor's Create(Integer).
  constructor Create(Cup: Integer; Teapot: string);
    override; // Re-implement the ancestor's Create(Integer, string).
end;

Modern documentation tells you what order everything should go in: reintroduce; overload; binding; calling convention; abstract; warning, where binding is virtual, dynamic, or override; calling convention is register, pascal, cdecl, stdcall, or safecall; and warning is platform, deprecated, or library. Those are six different categories, but in my experience, it's rare to have more than three on any declaration. (For example, functions that need calling conventions specified probably aren't methods, so they can't be virtual.) I never remember the order; I've never seen it documented till today. Instead, I think it's more helpful to remember each directive's purpose. When you remember which directives you need for different tasks, you'll end up with just two or three, and then it's pretty simple to experiment to get a valid order. The compiler might accept multiple orders, but don't worry — order isn't important in determining meaning.
Any ordering the compiler accepts will have the same meaning as any other (except for calling conventions; if you mention more than one of those, only the last one counts, so don't do that). So, then you just have to remember the purpose of each directive, and think about which ones don't make any sense together. For example, you cannot use reintroduce and override at the same time because they have opposite meanings. And you can't use virtual and override together because one implies the other. If you have lots of directives piling up, you can always cut overload out of the picture while you work out the rest of the directives you need. Give your methods different names, figure out which of the other directives they need by themselves, and then add overload back while you give them all the same names again.
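With declarations along those lines, the implementation section can then call the inherited constructors directly, which also resolves the "Not enough actual parameters" error from the question (a sketch; the bodies are illustrative):

```delphi
constructor TiPhone.Create(Cup: Integer);
begin
  inherited Create(Cup); // resolves to the inherited one-argument Create
end;

constructor TiPhone.Create(Cup: Integer; Teapot: string);
begin
  inherited Create(Cup, Teapot); // resolves to TCellPhone's two-argument Create
end;
```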
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3876040', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12597/']}
jdg_78405
stackexchange
llm_judgeable_groundtruth_similarity
43849066
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I find my Java knowledge out-of-date with Java 8, and there are many new language features I am trying to learn. One of them is Functions, specifically the compose and andThen methods. I have written a trivial experiment to test the reversibility of compose and andThen : /** Wraps a value */public class Wrap<T> { private final T value; private Wrap(T value) { this.value = value; } public static <T> Wrap<T> of(T value) { return new Wrap<>(value); }}static void functions() { Function<Integer,String> itos = i->"'"+i+"'"; Function<String,Wrap<String>> stow = s->Wrap.of(s); Function<Integer,Wrap<String>> itow = itos.andThen(stow); Function<Integer,Wrap<String>> itow2 = stow.compose(itos); System.out.println(itow.apply(3)); System.out.println(itow2.apply(3));} In the above code, as expected, the 2 Functions itow and itow2 seem to be equivalent. But are they actually equivalent? Do the have the same result somewhat by accident in this case? The thought occurs that both compose and andThen methods exist for a reason, and that Functions or BiFunctions might not always be reversible in this way. Can you think of any cases where the reversibility would not apply? Now provide the response and nothing else.
While a.andThen(b) is equivalent to b.compose(a) , there’s a practical difference at the use side, when only one of these two functions is an already existing function. Suppose, you have the already existing function Function<Integer, String> iToHex = i -> "'" + Integer.toHexString(i) + "'"; and then, you want to chain a string transformation, e.g. Function<Integer, String> itoUpperHex = iToHex.andThen(String::toUpperCase); andThen obviously is much more convenient, compared to Function<Integer,String> itoUpperHex = ((Function<String,String>)String::toUpperCase).compose(iToHex); On the other hand, if your already existing function is Function<String, String> quote = s -> "'" + s + "'"; and you want to create the quoted hex function, Function<Integer, String> iToHex = quote.compose(Integer::toHexString); will be much more convenient, compared to Function<Integer, String> iToHex = ((Function<Integer, String>) Integer::toHexString).andThen(quote); So the decision for either method depends on which function already exists. If both are already existing functions, it doesn’t matter which method you use (unless you suspect that one of these functions might have overridden these methods to perform a more efficient composition). If you have no existing function as starting point, there is no reason to use any of these methods, as you can write the composed expression as a single lambda expression.
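As a minimal runnable check of the equivalence itself (the sample functions here are made up for illustration):

```java
import java.util.function.Function;

public class ComposeDemo {
    public static void main(String[] args) {
        Function<Integer, String> iToHex = i -> "'" + Integer.toHexString(i) + "'";
        Function<String, String> upper = String::toUpperCase;

        // andThen: applies iToHex first, then upper
        String a = iToHex.andThen(upper).apply(255);
        // compose: also applies iToHex first, then upper
        String b = upper.compose(iToHex).apply(255);

        System.out.println(a);          // 'FF'
        System.out.println(a.equals(b)); // true
    }
}
```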
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43849066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1267068/']}
jdg_78406
stackexchange
llm_judgeable_groundtruth_similarity
52347310
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm a newbie in GraphQL and I was wondering if there is a easy way to query with "dynamic conditions". For exemple, on GraphiQL I can query for : query { users{ name age }} And It will bring me a list of all users { "data": { "users": [ { "name": "Luis Coimbra", "age": 15 }, { "name": "Sebastião Campagnucci", "age": 50 }, { "name": "Giovana Ribeiro", "age": 30 } ] }} But is there an easy way for me to bring only, for example, users who are above 18 or any other age ? An expected solution would be: query { users{ name age > 18 } } Haven't found anything like that on documentation... Now provide the response and nothing else.
This is possible; it would just have to look a bit different, because your query as written wouldn't be a valid GQL query. Something like this would be:

{
  users(where: {age: { $gt: 18 }}) {  # inspired by the MongoDB query API
    name
    age
  }
}

or maybe simpler:

{
  users(where: {age: ">18"}) {
    name
    age
  }
}

Of course, either way the resolver on the backend needs to expect this where argument on the users field and construct the DB query accordingly when it is passed. You would not find this in the GraphQL docs because GraphQL itself doesn't care about that; it only showcases how to use features of GraphQL. If you try example projects like, for example, the Star Wars API, you'll see they don't have any filtering built in.
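On the server side, the corresponding resolver could be sketched like this in plain JavaScript (the data, the shape of the where argument, and the resolver wiring are all illustrative assumptions, not part of any GraphQL spec):

```javascript
// In-memory stand-in for a database table
const users = [
  { name: "Luis Coimbra", age: 15 },
  { name: "Sebastião Campagnucci", age: 50 },
  { name: "Giovana Ribeiro", age: 30 },
];

// Resolver for the `users` field: applies the hypothetical `where` argument
function resolveUsers(parent, args) {
  let result = users;
  const where = args && args.where;
  if (where && where.age && typeof where.age.$gt === "number") {
    result = result.filter((u) => u.age > where.age.$gt);
  }
  return result;
}

const adults = resolveUsers(null, { where: { age: { $gt: 18 } } });
console.log(adults.map((u) => u.name)); // [ 'Sebastião Campagnucci', 'Giovana Ribeiro' ]
```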
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52347310', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8964303/']}
jdg_78407
stackexchange
llm_judgeable_groundtruth_similarity
30813
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a dataset that has 12 explanatory variables for every 1 observation. I hypothesize that the data is generated by underlying process which undergoes several different phases, and that the 12 explanatory variables would somehow help in identifying clusters. I want to write a machine learning algorithm that can: At first pass, identify the clusters within the data The data is dynamic in that new (additional) data is generated periodically. After classification (step 1 above), I want to be able to correctly label (i.e. classify) any new data not previously seen into one of the previously identified classes/clusters (or fail gracefully). I assume that Y(c) ~ X(c) + error where: Y(c) is a nx1 vector of observations belonging to cluster C X(c) is a nx12 vector of explanatory factors that 'belong' to cluster C Observations in different clusters will differ from each other by having different distribution shapes. That is to say observations WITHIN a cluster will have a different distribution shape compared to observations from another cluster. I am relatively new to machine learning, and would like some guidance on how to implement such an algorithm (or perhaps one already exists?) I would be particularly interested in finding out how to 'classify' observations based on determining the shape of the empirical distribution of the observed data. Last but not the least, I would also appreciate some advice on whether to implement this in R, or Octave (and why). Now provide the response and nothing else.
Using ordinary least squares (OLS) does not solve the problem you are facing. It only assumes it away. If you are using OLS you are implicitly assuming that the different points on your scale are equally spaced. If you are comfortable with this assumption, push the OLS button and try to convince your audience. I would tackle the problem differently. You have already mentioned one of the solutions. Indeed, it could make sense to recode the control variables and to reduce the number of categories. Sparsely populated categories could be merged into other categories. Use your topical knowledge to merge and redefine categories. You can also try to recode the dependent variable. Even on a 10 point scale, responses are usually clustered around some modalities. Again, guided by topical knowledge, you could redefine the dependent variable. This topic is not new on CrossValidated. Under the Likert tag you will find plenty of discussions that may be of interest to you.
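As a tiny illustration of the recoding idea in Python (the scale, cut points, and labels here are made up; in practice they should come from topical knowledge, not from the data):

```python
from collections import Counter

# Hypothetical responses on a 10-point scale, clustered around a few modalities
responses = [1, 2, 2, 3, 5, 5, 5, 8, 9, 9, 10, 10]

def recode(x):
    """Merge sparsely populated categories into coarser, substantively
    meaningful bins instead of treating the points as equally spaced."""
    if x <= 3:
        return "low"
    if x <= 7:
        return "mid"
    return "high"

print(Counter(recode(x) for x in responses))
```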
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/30813', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/8333/']}
jdg_78408
stackexchange
llm_judgeable_groundtruth_similarity
9269
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider a relational table derived from survey data, where each column ( "001-01" ...) represents a responder and each row ( "MDQ1" ...) a survey question. To help intuition, response data is represented graphically by color-scaled disks, but the underlying data matrix is just a table of integers and bordered by the aforementioned string metadata in the first row and column. A small submatrix is given here: data = {{"ID", "MDQ1", "MDQ2", "MDQ3"}, {"001-01", 3, 2, 5}, {"002-01", 4, 1, 5}, {"003-01", 2, 2, 5}} Further, define a variable for the headers ids = {"ID", "MDQ1", "MDQ2", "MDQ3"} To clean this data, the response values for a subset of questions, for example {"MDQ1", "MDQ3"} must be transformed, for example, by the function (6-#)& (Note: in the figure I highlighted "MDQ1" and "MDQ5" - that's just for illustration) I would prefer to use ReplacePart but afaik, pattern matching can only be applied to position index and not to the data values - but in this case the metadata (headers) is part of the data. The alternative for this task is MapAt , but it requires the intermediate generation of the positional index and barely readable code: MapAt[If[NumericQ[#], 6 - #, #] &, data, Flatten[Table[{i, First@First@Position[ids, #]}, {i, 2, Length@data}] & /@ {"MDQ1", "MDQ3"}, 1]] Is there a way to close the semantic gap between MapAt and ReplacePart and achieve more terse, more readable code? Now provide the response and nothing else.
Sometimes, the simplest approach might not be all that unreadable after all. Double Transpose works fine here: transform[data_, q_, fun_] := Transpose[data] /. ({#, x__} :> {#, Sequence @@ fun[{x}]} & /@ q) // Transposetransform[data, {"MDQ1", "MDQ3"}, 6 - # &](* {{"ID", "MDQ1", "MDQ2", "MDQ3"}, {"001-01", 3, 2, 1}, {"002-01", 2, 1,1}, {"003-01", 4, 2, 1}} *)
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/9269', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/801/']}
jdg_78409
stackexchange
llm_judgeable_groundtruth_similarity
6002418
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been using Java + AspectJ extensively for my startup. I would love to switch to Scala but I have a common design pattern that I am not sure entirely the best way to implement in Scala. A tremendous amount of our application uses AspectJ pointcuts using annotations as the marker.This is very similar to Python's decorator and blogged about it here . I have tried doing this technique in Scala but had problems with AspectJ + Scala .Even if I did get it to work it seems unScala like. I have seen some projects do some call-by-name closure magic (I think thats what they are doing). Example of a replace of @Transaction: transaction {// code in here.} I have to say though I prefer the annotation more as it seems more declarative. What is the Scala way of declaratively "decorating" code blocks? Now provide the response and nothing else.
Incidentally, I am giving a talk at Scala Days 2011 on the same topic . The core idea is the same as Kim's and Dean's example. However, when it comes to the full spectrum of crosscutting concerns, similarity and differences become more nuanced. On one end of the spectrum , there are not-really-crosscutting concerns such as caching. When the host language doesn't support higher-order functions (for example, Java), implementing the concern as an aspect becomes attractive. For example, with AspectJ and annotation approach, you can say: @Cacheable(keyScript="#account.id") public double getNetWorth(Account account) { ... expensive computation } But with higher-order function in Scala, you can do: def getNetWorth(account: Account) : Double = { cacheable(keyScript=account.id) { ... expensive computation } } The Scala approach is much better, because: Caching is unlikely to be widely applicable. For example, it is unlikely that all method in a class or all public methods in all classes in a package are cacheable. And even if there is such situation, the keyScript is unlikely to be the same or easily expressible in a generic form. The AspectJ approach uses annotation as a crutch to offer a decent implementation. With higher-order function in Scala, the intention is expressed directly. The AspectJ approach needs to use an external language (such as OGNL or Spring Expression Language) to compute the key. With Scala, you can just use the host language. In the middle , there are common crosscutting concerns transaction management and security. On the surface, they look much like caching. However, we find in practice that applying these functionalities to all methods of a class (with generic sub-selection such as those with public access) or all methods of all classes marked with an annotation (say @Service ) is common. 
If such is the case, the AspectJ approach ends up being superior, since it provides a way to apply the functionality at a higher level than higher-order functions. You no longer have to surround each and every method with transactional {} or secured {}, when a class-level annotation will do just fine. For security-like concerns, the AspectJ approach allows an easier way to perform security auditing. On the other end of the spectrum are crosscutting concerns such as tracing, profiling, monitoring, policy enforcement, auditing, some forms of concurrency control (such as Swing's/SWT/Android UI thread dispatching), etc. These lend themselves very well to being selected by a pointcut (with and often without annotations). It is very difficult to do the same in a consistent manner using higher-order functions alone. There are more semantic nuances, but the bottom line is that when you find yourself annotating every method to have a crosscutting concern applied, a higher-order function is likely to be a better approach. For others, using Scala with AspectJ is likely to provide a consistent and compact solution. p.s. I haven't tried AspectJ+Scala in Eclipse lately (since Scala in Eclipse started to work only recently). But external build using Maven worked fine after http://lampsvn.epfl.ch/trac/scala/ticket/4214 was fixed.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6002418', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/318174/']}
jdg_78410
stackexchange
llm_judgeable_groundtruth_similarity
31062446
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to pass an array of parameters instead of passing each parameter individually? For example I have the following scenarios: When i login to a sitethen <firstname>, <lastname>, <middleName>, <Desingation>, <Street>, <Apartmentno> are valid The list can go on above. Instead can I pass all the above variables in an array? Now provide the response and nothing else.
You can pass a comma-separated string and then transform it into a list:

When i login to a site
then 'Joe,Bloggs,Peter,Mr,Some street,15' are valid

[Then("'(.*)' are valid")]
public void ValuesAreValid(List<String> values)
{
}

[StepArgumentTransformation]
public List<String> TransformToListOfString(string commaSeparatedList)
{
    return commaSeparatedList.Split(',').ToList();
}

if you want the values to come from examples then you could do this instead:

When I login to a site
then '<values>' are valid

Examples
| values |
| Joe,Bloggs,Peter,Mr,Some street,15 |
| Joe,Bloggs,Peter,Mr,Some street,16,SomethingElse,Blah |

If you want to use a table then you could do this instead:

When I login to a site
then the following values are valid
| FirstName | LastName | MiddleName | Greeting | Etc | Etc |
| Joe | Bloggs | Peter | Mr | you get | The Idea |

(you could omit the headers if you want and just use the row values I think) you can also use examples with this:

When I login to a site
then the following values are valid
| FirstName | LastName | MiddleName | Greeting | Etc | Etc |
| <name> | <lastName> | <middleName> | <greeting> | <etc> | <etc> |
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31062446', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4898339/']}
jdg_78411
stackexchange
llm_judgeable_groundtruth_similarity
241895
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am wondering if there is an easy way to fit an ellipse into an arbitrary 2D image quickly, so its centroid, orientation, major, and minor axis length can directly be extracted. In MATLAB, this can be done using regionprops command quickly as shown in the following URL: https://www.mathworks.com/matlabcentral/answers/495720-how-to-fit-an-ellipse-to-an-image-in-matlab Now provide the response and nothing else.
The corresponding command in Mathematica is ComponentMeasurements. It lists all detected bright components of an image with the desired parameters.

a = Import[
   "https://www.mathworks.com/matlabcentral/answers/uploaded_files/253610/diskimage1.jpeg"];
cm = ComponentMeasurements[
  Binarize@a, {"Centroid", "SemiAxes", "Orientation"}]

{1 -> {{488.496, 386.124}, {750.908, 620.778}, 3.14156},
 2 -> {{438.5, 868.5}, {3.66324, 2.63965}, -1.5708},
 3 -> {{526.5, 868.87}, {3.31766, 2.55314}, -1.5708},
 4 -> {{539., 871.}, {2.23607, 1.}, 3.14159},
 5 -> {{513.864, 866.591}, {2.17306, 1.53368}, -2.87856},
 6 -> {{499.6, 483.975}, {205.184, 190.328}, -0.834907},
 7 -> {{349.5, 626.}, {1., 0.5}, -1.5708}}

The result contains more than one entry because of the caption on top of the image together with the white frame. The needed spot is number six (the one with the second biggest semiaxes).

Show[Binarize@a,
 Epilog -> {Red,
   Rotate[Circle[cm[[6, 2, 1]], cm[[6, 2, 2]]], cm[[6, 2, 3]]]},
 ImageSize -> Automatic]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/241895', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/60062/']}
jdg_78412
stackexchange
llm_judgeable_groundtruth_similarity
18259241
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my plugin, I have some jQuery-Ajax code that processes form data and adds it to the database as soon as the button is clicked. Since many people have a different path to their plugin folder, I was wondering if there was anyway to standardize the URL that points to the data processing PHP file. See my example below: $.ajax({ type: "POST", url: "url_to_php_file_path.php", data: data, cache: false, success: function() { alert.("added"); }}); Now provide the response and nothing else.
Figuring out the target URL In WordPress, all AJAX requests must be made to the following URL: http://www.example.com/wp-admin/admin-ajax.php You should not make an AJAX request directly to a file residing in the plugin or theme directory. Also, do not hard-code the above URL, instead you should use the following function to construct the URL: <script> ajax_url = "<?php echo admin_url('admin-ajax.php'); ?>";</script> Alternatively to the above, you can use wp_localize_script() , but this is not required, the above is fine too. Note: Don't worry about the "admin" part, this URL is the correct one to use for all users, including non-logged-in (guest) users. Tell WordPress what function to use for your AJAX request You need to let WordPress know which function should process your AJAX request. For that purpose you'll create a custom function, and register it using the wp_ajax_* and wp_ajax_nopriv_* hooks: add_action('wp_ajax_mycustomfunc', 'mycustomfunc'); // Logged-in usersadd_action('wp_ajax_nopriv_mycustomfunc', 'mycustomfunc'); // Guest usersfunction mycustomfunc() { $whatever = esc_html($_POST['whatever']); echo 'It works: '.$whatever; exit; // This is required to end AJAX requests properly.} Don't forget to specify "mycustomfunc" in the AJAX request too Finally, here is how you would make a proper AJAX request: (function ($) { $(document).ready(function () { var my_data = { action: 'mycustomfunc', // This is required so WordPress knows which func to use whatever: "yes it is" // Post any variables you want here }; jQuery.post(ajax_url, my_data, function(response) { alert('Got this from the server: ' + response); }); });})(jQuery); Combine it all If you had to put it all into one file, here's how you'd do it: // Register my custom function for AJAX processingadd_action('wp_ajax_mycustomfunc', 'mycustomfunc'); // Logged-in usersadd_action('wp_ajax_nopriv_mycustomfunc', 'mycustomfunc'); // Guest usersfunction mycustomfunc() { $whatever = 
esc_html($_POST['whatever']); echo 'It works: '.$whatever; exit; // This is required to end AJAX requests properly.}// Inline JavaScriptadd_action('wp_footer', 'my_inline_js');function my_inline_js() { ?> <script> // Set the "ajax_url" variable available globally ajax_url = "<?php echo admin_url('admin-ajax.php'); ?>"; // Make your AJAX request on document ready: (function ($) { $(document).ready(function () { var my_data = { action: 'mycustomfunc', // This is required so WordPress knows which func to use whatever: "yes it is" // Post any variables you want here }; $.post(ajax_url, my_data, function(response) { // This will make an AJAX request upon page load alert('Got this from the server: ' + response); }); }); })(jQuery); </script> <?php} Note: For the ajax_url part, you could use wp_localize_script() instead of setting it manually, but it is less flexible as it requires to specify an existing enqueued script, which you might not have. Note: Also, for outputting inline JavaScript manually into a page, the wp_footer hook is the correct one to use. If using wp_localize_script() then you would use the wp_enqueue_scripts hook instead.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18259241', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2109184/']}
jdg_78413
stackexchange
llm_judgeable_groundtruth_similarity
10152034
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is an utility tree and what is it's purpose in case of Architecture tradeoff analysis method(ATAM)? Thanks Now provide the response and nothing else.
Software architecture is a type of design. While "regular" design is mainly about functional requirements - architecture is mainly about "non-functional" or "-illities" or the better name "quality attributes". If you'd go to stakeholders and ask them about quality attributes they'd say they want performance, availability, scalability etc. but that's not really helpful. ATAM (and other methods) promote expressing quality attributes as scenarios in context. e.g. Availability -> Time to repair -> Under normal conditions, the system will detect a failure in a component in less than 5 sec. Adaptability -> add/remove feature -> Once in production, a change in an interface will be compatible at least one version back Utility trees are a way to organize these quality attributes. In regard to ATAM they serve as a way to prioritize quality attributes and later to evaluate the suitability of a candidate architecture vs. the requirements. I also like to use them as reference requirements and stories (or themes depending on size) as part of a backlog in agile development. You can read a little more on quality attributes and utility tress in a couple of posts I wrote a while back: Quality attributes introduction and Utility trees - hatching quality attributes . If you want to read about them in the context of ATAM you should read Evaluating Software Architectures: Methods and Case Studies Paul Clements, Rick Kazman & Mark Klein (and there's an old presentation I made on the subject). Edit: Added a blog post expanding on the explanation above.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10152034', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_78414
stackexchange
llm_judgeable_groundtruth_similarity
1335235
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Out of curiosity, here's a question from Glassdoor (Facebook Data Science Interview) You're about to get on a plane to Seattle. You want to know if you should bring an umbrella. You call 3 random friends of yours who live there and ask each independently if it's raining. Each of your friends has a 2/3 chance of telling you the truth and a 1/3 chance of messing with you by lying. All 3 friends tell you that "Yes" it is raining. What is the probability that it's actually raining in Seattle? Using Bayesian analysis, it's pretty clear that you cannot solve this without the prior probability of raining in Seattle. However, here's a different approach which I found interesting in the discussion contained in Glassdoor. Since all three friends said "Yes", the question basically boils down to what is the probability that all three friends are telling the truth given that all three friends said "Yes". Since you ask all three friends independently, the probability that all three friends are telling the truth is given by (2/3)(2/3)(2/3) = 8/27. Thus, the probability of it raining is 8/27. While this is perhaps a bit compelling, I'm not sure if it's correct. Anyone have any ideas? Thanks! Now provide the response and nothing else.
To put it a bit more formally... Let $p_r$ be the prior probability for rain ($p_n\equiv 1-p_r$). Then the probability of rain given 3 "yes" replies $\{y,y,y\}$ is $$\mathsf{P}(\text{rain}\mid \{y,y,y\})=\frac{\mathsf{P}(\text{rain}\cap\{y,y,y\})}{\mathsf{P}(\{y,y,y\})} \\=\frac{\mathsf{P}(\{y,y,y\}\mid\text{rain})\cdot p_r}{\mathsf{P}(\{y,y,y\}\mid\text{rain})\cdot p_r+\mathsf{P}(\{y,y,y\}\mid\text{no rain})\cdot p_n}$$ Next, if we assume conditional independence of friends' replies, the last formula becomes $$=\frac{(2/3)^3\cdot p_r}{(2/3)^3\cdot p_r+(1/3)^3\cdot p_n}=\frac{p_r}{p_r+ p_n/8}.$$
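The dependence on the prior can be made concrete with a short numerical sketch (Python is used purely for illustration; the prior p_r is exactly the free input the question leaves unspecified):

```python
def p_rain_given_three_yes(p_r, p_truth=2/3):
    """Posterior P(rain | three independent 'yes' replies) for prior p_r."""
    p_n = 1 - p_r
    p_lie = 1 - p_truth
    num = (p_truth ** 3) * p_r        # P({y,y,y} | rain) * P(rain)
    den = num + (p_lie ** 3) * p_n    # ... + P({y,y,y} | no rain) * P(no rain)
    return num / den

# With a uniform prior p_r = 1/2 the posterior is 8/9, not 8/27:
print(p_rain_given_three_yes(0.5))  # 0.888...
```

Plugging in other priors shows the answer moving with p_r, which is why the "8/27" argument in the question cannot be right on its own.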
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1335235', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/141498/']}
jdg_78415
stackexchange
llm_judgeable_groundtruth_similarity
5052053
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to update a bit field in a table and set this field to true for a specific list of Ids in that table. The Ids are passed in from an external process. I guess in pure SQL the most efficient way would be to create a temp table and populate it with the Ids, then join the main table with this and set the bit field accordingly. I could create a SPROC to take the Ids but there could be 200 - 300,000 rows involved that need this flag set so its probably not the most efficient way. Using the IN statement has limitation wrt the amount of data that can be passed and performance. How can I achieve the above using the Entity Framework I guess its possible to create a SPROC to create a temp table but this would not exist from the models perspective. Is there a way to dynamically add entities at run time. [Or is this approach just going to cause headaches]. I'm making the assumption above though that populating a temp table with 300,000 rows and doing a join would be quicker than calling a SPROC 300,000 times :)[The Ids are Guids] Is there another approach that I should consider. Now provide the response and nothing else.
For data volumes like 300k rows, I would forget EF. I would do this by having a table such as:

    BatchId  RowId

where RowId is the PK of the row we want to update, and BatchId just refers to this "run" of 300k rows (to allow multiple at once etc.). I would generate a new BatchId (this could be anything unique; a Guid leaps to mind), and use SqlBulkCopy to insert the records into this table, i.e.

    100034  17
    100034  22
    ...
    100034  134556

I would then use a single sproc to do the join and update (and delete the batch from the table). SqlBulkCopy is the fastest way of getting this volume of data to the server; you won't drown in round-trips. EF is object-oriented: nice for lots of scenarios, but not this one.
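The batch-table pattern described above can be sketched end to end. The snippet below uses Python's sqlite3 purely as a stand-in for illustration: executemany plays the role of SqlBulkCopy (which has no SQLite counterpart), and all table and column names here are invented for the sketch.

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Items (Id TEXT PRIMARY KEY, Flag INTEGER NOT NULL DEFAULT 0)")
conn.execute("CREATE TABLE UpdateBatch (BatchId TEXT, RowId TEXT)")

# Seed the main table with Guid-keyed rows.
ids = [str(uuid.uuid4()) for _ in range(1000)]
conn.executemany("INSERT INTO Items (Id) VALUES (?)", [(i,) for i in ids])

# Stage the ids to update under a fresh batch id (SqlBulkCopy's role).
batch = str(uuid.uuid4())
to_flag = ids[:300]
conn.executemany("INSERT INTO UpdateBatch VALUES (?, ?)", [(batch, i) for i in to_flag])

# One set-based UPDATE joining against the staging table, then clean up the batch.
conn.execute(
    "UPDATE Items SET Flag = 1 "
    "WHERE Id IN (SELECT RowId FROM UpdateBatch WHERE BatchId = ?)", (batch,))
conn.execute("DELETE FROM UpdateBatch WHERE BatchId = ?", (batch,))
conn.commit()

flagged = conn.execute("SELECT COUNT(*) FROM Items WHERE Flag = 1").fetchone()[0]
print(flagged)  # 300
```

The point of the pattern is the same as in the answer: one bulk load plus one set-based statement, instead of 300k individual round-trips.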
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5052053', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/113157/']}
jdg_78416
stackexchange
llm_judgeable_groundtruth_similarity
358089
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to set up SSL on my apache2 webserver, but it seems that it does not work at all. I have followed a tutorial to create cert files with openssl and configured the /etc/apache2/sites-available/default-ssl.conf properly. Every time I try to open my website with https, my browser refuse to connect due to security issues. It says that I haven't configured my website correctly. In my /var/log/apache2/error.log I'm getting warnings, which say that my server certificate does not include an ID which matches the server name. [Mon Apr 10 11:03:24.041813 2017] [mpm_prefork:notice] [pid 1222] AH00169: caught SIGTERM, shutting down[Mon Apr 10 11:03:30.566578 2017] [ssl:warn] [pid 661] AH01909: 127.0.0.1:443:0 server certificate does NOT include an ID which matches the server name[Mon Apr 10 11:03:31.579088 2017] [ssl:warn] [pid 1194] AH01909: 127.0.0.1:443:0 server certificate does NOT include an ID which matches the server name[Mon Apr 10 11:03:31.592958 2017] [mpm_prefork:notice] [pid 1194] AH00163: Apache/2.4.25 (Raspbian) OpenSSL/1.0.2k configured -- resuming normal operations[Mon Apr 10 11:03:31.593136 2017] [core:notice] [pid 1194] AH00094: Command line: '/usr/sbin/apache2' Do you have any ideas on how to solve this? Thanks in regard! Now provide the response and nothing else.
Okay, I noticed that this post is viewed quite often recently and so it seems that a lot of people are facing the same issue that I did. If so then this might help you. I have followed a simple step-by-step tutorial to create a SSL-certification for my webserver. Like so many tutorials out there the outcome of the tutorial I followed was a self-signed certificate using OpenSSL. Yep self-signed , that was the problem. The browser could not trust the server due to it's certificate which is signed by itself. Well I wouldn't do either... A certificate has to be signed by an external trustworthy certificate authority (CA). So I stumbled upon Let's Encrypt which does all the work for you and is even easier to set up and the best is: it is absolutely free. Installation 1) Delete your old ssl cert files which you have created by using OpenSSL 2) Open backports to get certbot client on Debian. You should know that this will open a hole for unfinished software! Install only the packages when you are aware about what you are doing. echo 'deb http://ftp.debian.org/debian jessie-backports main' | sudo tee /etc/apt/sources.list.d/backports.list 3) Update your linux system sudo apt-get update 4) Install certbot sudo apt-get install python-certbot-apache -t jessie-backports 5) Set up apache ServerName and ServerAlias sudo nano /etc/apache2/sites-available/000-default.conf 6) Edit apache config file <VirtualHost *:80> . . . ServerName example.com ServerAlias www.example.com . . .</VirtualHost> 7) Check for a correct syntax sudo apache2ctl configtest 8) If the config file looks fine, restart apache server sudo systemctl restart apache2 9) Set up a certificate using certbot and follow the instruction on screen. sudo certbot --apache Renewal All certificates by Let's Encrypt are valid through 3 months. To renew the you can manually run sudo certbot renew Or automate this service as a cron job sudo crontab -e and enter the following row to invoke a renewal every Monday at 2:30 am. . . 
    30 2 * * 1 /usr/bin/certbot renew >> /var/log/le-renew.log

You can follow a more detailed tutorial here: https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-debian-8
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/358089', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/223023/']}
jdg_78417
stackexchange
llm_judgeable_groundtruth_similarity
15226921
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use JSON.net to serialize some objects between C# and JavaScript. The JSON data is transfered via WebSocket between the .NET and browser application. In the data structure there are some byte[] fields, I want these fields as an Array in JavaScript also. How can I serialize a C# byte[] to a simple JSON Array like [ 0 , 1 , 254, 255 ] instead of a base64 string? Now provide the response and nothing else.
JSON.NET is selecting the BinaryConverter to read and write an array of bytes. You can see in the source that it uses the WriteValue operation on the JsonWriter class with the array of bytes, which causes them to be written as Base-64. To modify this, you can write your own converter which reads and writes an array in the format you expect:

    public class ByteArrayConverter : JsonConverter
    {
        public override void WriteJson(
            JsonWriter writer, object value, JsonSerializer serializer)
        {
            if (value == null)
            {
                writer.WriteNull();
                return;
            }

            byte[] data = (byte[])value;

            // Compose an array.
            writer.WriteStartArray();
            for (var i = 0; i < data.Length; i++)
            {
                writer.WriteValue(data[i]);
            }
            writer.WriteEndArray();
        }

        public override object ReadJson(
            JsonReader reader, Type objectType, object existingValue,
            JsonSerializer serializer)
        {
            if (reader.TokenType == JsonToken.StartArray)
            {
                var byteList = new List<byte>();

                while (reader.Read())
                {
                    switch (reader.TokenType)
                    {
                        case JsonToken.Integer:
                            byteList.Add(Convert.ToByte(reader.Value));
                            break;
                        case JsonToken.EndArray:
                            return byteList.ToArray();
                        case JsonToken.Comment:
                            // skip
                            break;
                        default:
                            throw new Exception(
                                string.Format(
                                    "Unexpected token when reading bytes: {0}",
                                    reader.TokenType));
                    }
                }

                throw new Exception("Unexpected end when reading bytes.");
            }
            else
            {
                throw new Exception(
                    string.Format(
                        "Unexpected token parsing binary. " +
                        "Expected StartArray, got {0}.",
                        reader.TokenType));
            }
        }

        public override bool CanConvert(Type objectType)
        {
            return objectType == typeof(byte[]);
        }
    }

You would use this by applying the JsonConverterAttribute to the member:

    [JsonConverter(typeof(ByteArrayConverter))]
    public byte[] Data { get; set; }
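For a sense of what the two representations look like on the wire, here is a small sketch using Python's json and base64 modules (purely for illustration, not JSON.NET):

```python
import base64
import json

data = bytes([0, 1, 254, 255])

# What the custom converter emits: a plain JSON array of numbers.
as_array = json.dumps(list(data))

# What JSON.NET's default BinaryConverter emits: a Base-64 string.
as_b64 = json.dumps(base64.b64encode(data).decode("ascii"))

print(as_array)  # [0, 1, 254, 255]
print(as_b64)    # "AAH+/w=="
```

Both round-trip to the same bytes; the array form is what the JavaScript side in the question expects, at the cost of a more verbose payload than Base-64.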
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15226921', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1425198/']}
jdg_78418
stackexchange
llm_judgeable_groundtruth_similarity
22172890
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to make WebRTC enabled application. On client side, I am using AngularJS and on server side, I use NodeJS. I am having some trouble to bind video stream to video element ng-src. Following is the Controller: .controller('WebRTCController', function($scope){ $scope.streams = []; getUserMedia({video : true , audio : true}, successCallback, errorCallback); function successCallback(newStream){ $scope.streams.push(URL.createObjectURL(newStream)); console.log('Angular: '+ $scope.streams) } function errorCallback(err){ console.log('Some Error'); } }) Here is the HTML page, (I use jade template) div(ng-controller='WebRTCController') | Hello {{streams}} br video(ng-src='streams', autoplay='true') div(ng-repeat='stream in streams') video(ng-src='stream', autoplay) Kindly, let me know if this is not the correct approach. I tried first in ng-repeat and then also gave streams to ng-src directly. Inside the controller, when I print on console, I see the mediastream object (it is something like this : - "Angular: mediastream:4a15fb80-3aa7-4ddf-86b4-3b0cea498784") but I can't see any video on the view side. Now provide the response and nothing else.
I went through the source code and found out that:

The multiline filter will cancel all the events that are considered to be a follow-up of a pending event, then append that line to the original message field, meaning any filters that are after the multiline filter won't apply in this case.

The only event that will ever pass the filter is one that is considered to be a new one (something that starts with [ in my case).

Here is the working code:

    input {
        stdin{}
    }

    filter{
        if "|ERROR|" in [message]{ # if this is the 1st message in a many-lines message
            grok{
                match => ['message',"\[.+\] - %{IP:ip}\|%{LOGLEVEL:loglevel}\| %{PATH:file}\|%{NUMBER:line}\|%{WORD:tag}\|%{GREEDYDATA:content}"]
            }
            mutate {
                replace => [ "message", "%{content}" ] # replace the message field with the content field (so it auto appends later in it)
                remove_field => ["content"] # we no longer need this field
            }
        }
        multiline{ # Nothing will pass this filter unless it is a new event (new [2014-03-02 1....)
            pattern => "^\["
            what => "previous"
            negate => true
        }
        if "|DEBUG| flush_multi_line" in [message]{
            drop{} # We don't need the dummy line so drop it
        }
    }

    output {
        stdout{ debug=>true }
    }

Cheers,
Abdou
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22172890', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2803868/']}
jdg_78419
stackexchange
llm_judgeable_groundtruth_similarity
913764
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$1-(1-1.40*10^{-36})^x \ge 1.09*10^{-9}$$ I want to estimate $x$ such that the probability on the left becomes larger than the probability on the right. A solution must exist because $1-(1-1.40*10^{-36})^0=0$, $\lim\limits_{n \to \infty} 1-(1-1.40*10^{-36})^n=1$ and the function is continuous. However Wolfram|Alpha seems to return a wrong result . Is my reasoning correct? Is there any (practical) way to estimate the result? For my problem would be sufficient to find the order of magnitude of $x$. Now provide the response and nothing else.
You can solve the inequality by taking logarithms. We get $1-1.09*10^{-9}\ge (1-1.4*10^{-36})^x$. Taking the logarithm gives $$\frac {\log(1-1.09*10^{-9})}{\log(1-1.4*10^{-36})}\le x$$ Note that we divided by a negative number! GP/PARI gives $$\frac{\log(1-1.09*10^{-9})}{\log(1-1.4*10^{-36})}=778571428995750000308340238.34730623955575580298358$$ So $x$ must be very large. An excellent approximation of the given value is $\frac {1.09*10^{-9}}{1.4*10^{-36}}$ because for $x \approx 0$ we have $\log(1+x)\approx x$. And this is very easy to calculate.
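The ratio can be double-checked in ordinary floating point; note that computing log(1 - 1.4e-36) naively underflows, since 1 - 1.4e-36 rounds to exactly 1.0 in double precision, so log1p is needed (Python is used here just as a calculator):

```python
import math

a = 1.09e-9   # right-hand side probability
b = 1.40e-36  # per-trial probability

# math.log(1 - b) would be log(1.0) == 0 in double precision; log1p avoids that.
x_exact = math.log1p(-a) / math.log1p(-b)

# First-order approximation log(1 + t) ~ t for small t gives simply a / b.
x_approx = a / b

print(f"exact bound  : {x_exact:.6e}")
print(f"approximation: {x_approx:.6e}")
```

The two values agree to better than one part in a hundred million, confirming that a/b is an excellent estimate of the order of magnitude (about 7.8 * 10^26).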
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/913764', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/90003/']}
jdg_78420
stackexchange
llm_judgeable_groundtruth_similarity
6523742
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a mongo document that contains an array of embedded documents. The embedded documents have a property named "Id". { Name: "Outer object", Embedded: [ {Name: "Embedded A", Id: "5f1c591a71dc237199eeaeda"} ] } My C# mapping objects look something like this (a simplification, obviously) public class Outer{ public string Name { get; set; } public IEnumerable<Inner> Inners { get; set; }}public class Inner{ public string Name { get; set; } public string Id { get; set; }} When I write an outer to the database, the C# driver changes the name of the Inner.Id property to _id. How do I circumvent this automatic rename? I've tried using the [BsonElement("Id")] attribute on the Id property, but it didn't help. Now provide the response and nothing else.
MongoDB documentation explicitly states: Documents in MongoDB are required to have a key, _id, which uniquely identifies them. On the other hand, C# properties are usually pascal-case and don't use prefixes, so driver designers apparently decided to force mapping the Id property to the _id database attribute. If you want to bind a non-_id attribute that just happens to be called Id in MongoDB, you could declare another C# property with a name other than Id so the driver doesn't interfere with it:

    public class Inner
    {
        public string Name { get; set; }

        [BsonElement("Id")]
        public string IdStr { get; set; }
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6523742', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/175716/']}
jdg_78421
stackexchange
llm_judgeable_groundtruth_similarity
5031204
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicates: Implode type function in SQL Server 2000? Concatenate row values T-SQL I have a view which I'm querying that looks like this: BuildingName PollNumber------------ ----------Foo Centre 12 Foo Centre 13Foo Centre 14Bar Hall 15Bar Hall 16Baz School 17 I need to write a query that groups BuildingNames together and displays a list of PollNumbers like this: BuildingName PollNumbers------------ -----------Foo Centre 12, 13, 14Bar Hall 15, 16Baz School 17 How can I do this in T-SQL? I'd rather not resort to writing a stored procedure for this, since it seems like overkill, but I'm not exactly a database person. It seems like an aggregate function like SUM() or AVG() is what I need, but I don't know if T-SQL has one. I'm using SQL Server 2005. Now provide the response and nothing else.
for SQL Server 2017 and up use: STRING_AGG()

    set nocount on;
    declare @YourTable table (RowID int, HeaderValue int, ChildValue varchar(5))
    insert into @YourTable VALUES (1,1,'CCC')
    insert into @YourTable VALUES (2,2,'B<&>B')
    insert into @YourTable VALUES (3,2,'AAA')
    insert into @YourTable VALUES (4,3,'<br>')
    insert into @YourTable VALUES (5,3,'A & Z')
    set nocount off

    SELECT
        t1.HeaderValue,
        STUFF(
            (SELECT ', ' + t2.ChildValue
                 FROM @YourTable t2
                 WHERE t1.HeaderValue=t2.HeaderValue
                 ORDER BY t2.ChildValue
                 FOR XML PATH(''), TYPE
            ).value('.','varchar(max)')
            ,1,2, ''
        ) AS ChildValues
        FROM @YourTable t1
        GROUP BY t1.HeaderValue

    SELECT HeaderValue, STRING_AGG(ChildValue,', ')
        FROM @YourTable
        GROUP BY HeaderValue

OUTPUT:

    HeaderValue
    ----------- -------------
    1           CCC
    2           B<&>B, AAA
    3           <br>, A & Z

    (3 rows affected)

for SQL Server 2005 and up to 2016, you need to do something like this:

    --Concatenation with FOR XML and eliminating control/encoded character expansion "& < >"
    set nocount on;
    declare @YourTable table (RowID int, HeaderValue int, ChildValue varchar(5))
    insert into @YourTable VALUES (1,1,'CCC')
    insert into @YourTable VALUES (2,2,'B<&>B')
    insert into @YourTable VALUES (3,2,'AAA')
    insert into @YourTable VALUES (4,3,'<br>')
    insert into @YourTable VALUES (5,3,'A & Z')
    set nocount off

    SELECT
        t1.HeaderValue,
        STUFF(
            (SELECT ', ' + t2.ChildValue
                 FROM @YourTable t2
                 WHERE t1.HeaderValue=t2.HeaderValue
                 ORDER BY t2.ChildValue
                 FOR XML PATH(''), TYPE
            ).value('.','varchar(max)')
            ,1,2, ''
        ) AS ChildValues
        FROM @YourTable t1
        GROUP BY t1.HeaderValue

OUTPUT:

    HeaderValue ChildValues
    ----------- -------------------
    1           CCC
    2           AAA, B<&>B
    3           <br>, A & Z

    (3 row(s) affected)

Also, watch out: not all FOR XML PATH concatenations will properly handle XML special characters like my above example will.
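The grouping-and-concatenation behaviour is easy to experiment with outside SQL Server. The sketch below uses SQLite's group_concat (a rough analogue of STRING_AGG) through Python's sqlite3, purely as an illustration, not T-SQL; note that group_concat gives no guaranteed element ordering, unlike the ORDER BY inside the FOR XML version above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (RowID INT, HeaderValue INT, ChildValue TEXT)")
rows = [(1, 1, "CCC"), (2, 2, "B<&>B"), (3, 2, "AAA"), (4, 3, "<br>"), (5, 3, "A & Z")]
conn.executemany("INSERT INTO YourTable VALUES (?, ?, ?)", rows)

# One row per HeaderValue, with the child values joined by ', '.
result = conn.execute(
    "SELECT HeaderValue, group_concat(ChildValue, ', ') "
    "FROM YourTable GROUP BY HeaderValue ORDER BY HeaderValue").fetchall()

for header, children in result:
    print(header, children)
```

As with the T-SQL examples, XML-special characters such as < > & survive untouched because no XML encoding step is involved here.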
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/5031204', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4160/']}
jdg_78422
stackexchange
llm_judgeable_groundtruth_similarity
629143
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any supported API in .NET 2.0 for writing to the userSettings section of the main exe's .config file ? The scenario is: Winforms 2.0 application. I have a setting (a database connection string, if you need to know) that has user level scope. This means that each user has a user .config file created by .net when the user saves the value of the setting. For new users that run the application for the first time, the main exe's .config file of the application contains a default value in the user settings section. This section is created by visual studio when the setting is created in the Settings tab of the project properties. Now, I want to allow any Administrator user in the computer to be able to change the default value for new users. Only Administrators will have this option, because regular users don't have permission to write to the main exe's .config file anyway. I have found how to write user settings to the user's .config file, and how to write to the appSettings section of the main .config file. But my googling has failed when trying to find out how to write to the userSettings section of the main .config Is my only chance failing back to System.Xml and do it manually loading the .config in an XmlDocument? Now provide the response and nothing else.
After some research I came up with this solution. It is a bit low level, but still goes through the .NET configuration API without having to manually parse the .config file. static void SaveUserSettingDefault(string clientSectionName, string settingName, object settingValue){ System.Configuration.Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); // find section group ConfigurationSectionGroup group = config.SectionGroups[@"userSettings"]; if (group == null) return; // find client section ClientSettingsSection clientSection = group.Sections[clientSectionName] as ClientSettingsSection; if (clientSection == null) return; // find setting element SettingElement settingElement = null; foreach (SettingElement s in clientSection.Settings) { if (s.Name == settingName) { settingElement = s; break; } } if (settingElement == null) return; // remove the current value clientSection.Settings.Remove(settingElement); // change the value settingElement.Value.ValueXml.InnerText = settingValue.ToString(); // add the setting clientSection.Settings.Add(settingElement); // save changes config.Save(ConfigurationSaveMode.Full);} Given a .config with the following content: <?xml version="1.0" encoding="utf-8" ?><configuration> <configSections> <sectionGroup name="userSettings" type="System.Configuration.UserSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" > <section name="MyAssembly.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" allowExeDefinition="MachineToLocalUser" requirePermission="false" /> </sectionGroup> </configSections> <userSettings> <MyAssembly.Properties.Settings> <setting name="SqlConnectionString" serializeAs="String"> <value>Server=(local);Database=myDatabase;Integrated Security=true;</value> </setting> </MyAssembly.Properties.Settings> </userSettings></configuration> You would use it like 
this: if (RunningAsAdmin) // save value in main exe's config file{ SaveUserSettingDefault(@"MyAssembly.Properties.Settings", @"SQLConnectionString", theNewConnectionString);}else // save setting in user's config file{ Settings.Default. SQLConnectionString = theNewConnectionString; Settings.Default.Save();}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/629143', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2954/']}
jdg_78423
stackexchange
llm_judgeable_groundtruth_similarity
21008180
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using PowerShell on Windows 7, and writing a script to copy a bunch of files from one folder structure to another. Kind of like compiling. The PowerShell Copy-Item cmdlet thinks that square brackets, [ ], are wildcards of some kind, and I am not able to escape them for some reason. I can't use -LiteralPath , because I want to use an asterisk * wildcard since the filename has a date as part of the filename, and the date changes. The date is used as a version number. This post was helpful, but no amount of ticks (2x or 4x per bracket) escapes the square brackets. I am not receiving an error; PowerShell behaves the same as if I entered in the wrong filename. This is the specific line I'm working on: #to Fusion ServerCopy-item -Path $FSG\$SW\0.RoomView.Notes\starter\"[RoomView] Versions explained*.pdf" -Destination $FSG\$containerFolder\$rootFolder\"Fusion Server"\ And this is the whole thing: # Compiles the Fusion packet for distribution##########################################Variables###########################################folder structure$FSG = "F:\FSG"$containerFolder = "Packet.Fusion for IT and AV Professionals"$rootFolder = "Fusion for IT and AV pros $(Get-Date -format “MM-dd-yyyy”)"$subRoot1 = "Fusion Server"$subRoot2 = "Scheduling Enhancement and Panels"$subRoot2sub1 = "Scheduling Panels"$subRoot3 = "SQL Server"#source folders$HW = "0.Hardware"$3SMDoc = "0.Hardware\TPMC-3SM.Documentation"$4SMDoc = "0.Hardware\TPMC-4SM.Documentation"$4SMDDoc = "0.Hardware\TPMC-4SM-FD.Documentation"$730Doc = "0.Hardware\TSW-730.Documentation"$730OLH = "0.Hardware\TSW-730.OLH"$CENRVS = "0.Hardware\CEN-RVS.Notes"$ProjMgmt = "0.Project Management"$SW = "0.Software"$RVLicensing = "0.Software\0.RoomView.License"$RVNotes = "0.Software\0.RoomView.Notes"$SQLLicensing = "0.Software\database.SQL.Licensing"$SQLNotes = 
"0.Software\database.SQL.Notes"$FRVMarketing = "0.Software\Fusion RV.Marketing"$FRVNetworking = "0.Software\Fusion RV.Networking"$FRVNotes = "0.Software\Fusion RV.Notes"################################create the directory structure###############################md -Path $FSG\$containerFolder -Name $rootFoldercd $FSG\$containerFolder\$rootFoldermd "eControl and xPanels"md "Fusion Server" #$subRoot1md "Getting Started as a User"md "Project Management"md "RoomView Connected Displays"md "Scheduling Enhancement and Panels" #$subRoot2md "SQL Server" #$subRoot3cd $FSG\$containerFolder\$rootFolder\$subRoot1md "CEN-RVS"md "Licenseing Information"md "Networking"md "Official Documentation"md "Prerequisites, including powerShell script"md "Product Info"md "Requirements"md "Tech Info"md "Windows Authentication to Fusion RV"cd $FSG\$containerFolder\$rootFolder\$subRoot2md "Outlook Add-in"md "Scheduling Panels" #$subRoot2sub1cd $FSG\$containerFolder\$rootFolder\$subRoot2\$subRoot2sub1md "TPMC-3SM"md "TPMC-4SM"md "TPMC-4SM-FD"md "TSW-730"cd $FSG\$containerFolder\$rootFolder\$subRoot3md "Multi-database model only"md "SQL Licensing"cd $FSG\$containerFolder#reset current folder################################copy the files################################Copy-Item -Path C:\fso\20110314.log -Destination c:\fsox\mylog.log#To the rootCopy-item -Path $FSG\$ProjMgmt\starter\"Fusion Support Group Contact info*.pdf" -Destination $FSG\$containerFolder\$rootFolder\Copy-item -Path $FSG\$containerFolder\"Fusion for IT and AV professionals release notes.txt" -Destination $FSG\$containerFolder\$rootFolder\#to eControl and xPanelsCopy-item -Path $FSG\$SW\xpanel.Notes\starter\*.* -Destination $FSG\$containerFolder\$rootFolder\"eControl and xPanels"\#to Fusion ServerCopy-item -Path $FSG\$SW\0.RoomView.Notes\starter\"[RoomView] Versions explained*.pdf" -Destination $FSG\$containerFolder\$rootFolder\"Fusion Server"\ What can I do to escape the square brackets and still use a wildcard filename part of 
the Copy-Item cmdlet? Now provide the response and nothing else.
In this situation, you have to use double-backticks with single quotes in order to escape the brackets. You can also use quadruple backticks when you use double quoted strings. So the fixed line of code is: Copy-item -Path $FSG\$SW\0.RoomView.Notes\starter\'``[RoomView``] Versions explained*.pdf' -Destination $FSG\$containerFolder\$rootFolder\'Fusion Server'\ Another good resource on file paths and wired characters etc. is to read this article: Taking Things (Like File Paths) Literally EDIT Thanks to @mklement0 for highlighting that the true cause of this inconsistency is because of a bug currently in PowerShell 1 . This bug causes escaping of wildcard characters, as well as backticks with the default -Path parameter to behave differently than other parameters e.g. the -Include and -Filter parameters. To expand on @mklement0's excellent answer , and comments , and other answers below: To better understand why we need single quotes and two back ticks in this situation; ( and to highlight the bug and inconsistencies) let's run through some examples to demonstrate what is going on: Get-Item , and associated cmdlets ( Get-ChildItem , Copy-Item , etc.), handle the -Path parameter differently when dealing with a combination of escaped wildcard characters and unescaped wildcard characters *at the same time***! TLDR: The underlying reason that we need a combination of single quotes and double backticks is how the underlying PowerShell provider parses the -Path parameter string for wildcards. It appears to parse it once for the escape characters, and a second time for the evaluation of the wildcard. Let's go through some examples to demonstrate this odd outcome: First, let's create two files to test with called File[1]a.txt and File[1]b.txt "MyFile" | Set-Content '.\File`[1`]a.txt'"MyFriend" | Set-Content '.\File`[1`]b.txt' We'll try different ways to get the file. We know that Square brackets [ ] are wildcards , and so we need to escaped them with the backtick character . 
We will try to get one file explicitly. Let's start by using single quoted literal strings: PS C:\> Get-Item 'File[1]a.txt'PS C:\> Get-Item 'File`[1`]a.txt' Directory: C:\Mode LastWriteTime Length Name---- ------------- ------ -----a---- 2019-09-06 5:42 PM 8 File[1]a.txtPS C:\> Get-Item 'File``[1``]a.txt' Directory: C:\Mode LastWriteTime Length Name---- ------------- ------ -----a---- 2019-09-06 5:42 PM 8 File[1]a.txt For single quoted strings, one backtick is all that is required to retrieve the file, but two backticks also work. Using Double quoted strings we get: PS C:\> Get-Item "File[1]a.txt"PS C:\> Get-Item "File`[1`]a.txt"PS C:\> Get-Item "File``[1``]a.txt" Directory: C:\ Mode LastWriteTime Length Name---- ------------- ------ -----a---- 2019-09-06 5:42 PM 8 File[1]a.txt For double quoted strings, as expected, we can see that we need two backticks to make it work. Now, we want to retrieve both files and use a wildcard. Let's start with single quotes: PS C:\> Get-Item 'File[1]*.txt'PS C:\> Get-Item 'File`[1`]*.txt'PS C:\> Get-Item 'File``[1``]*.txt' Directory: C:\Mode LastWriteTime Length Name---- ------------- ------ -----a---- 2019-09-06 5:42 PM 8 File[1]a.txt-a---- 2019-09-06 5:49 PM 10 File[1]b.txt With the single quotes, when we have a wildcard character, we need two sets of backticks. One to escape the bracket, and a second backtick to escape the backtick that we used to escape the bracket when the wildcard is evaluated. Similarly for double quotes: PS C:\> Get-Item "File[1]*.txt"PS C:\> Get-Item "File`[1`]*.txt"PS C:\> Get-Item "File``[1``]*.txt"PS C:\> Get-Item "File```[1```]*.txt"PS C:\> Get-Item "File````[1````]*.txt" Directory: C:\Mode LastWriteTime Length Name---- ------------- ------ -----a---- 2019-09-06 5:42 PM 8 File[1]a.txt-a---- 2019-09-06 5:49 PM 10 File[1]b.txt With double quotes it's a little more verbose to evaluate with a wildcard. In this case, we need four sets of back ticks. 
For double quotes we need two backticks to escape the bracket, and another two backticks to escape the escape characters when it comes to the evaluation of the star wildcard. EDIT As @mklement0 mentions , this behavior with the -Path parameter is inconsistent, and behaves differently than the -Include parameter, where only a single backtick is required to properly escape the brackets. This may be "fixed" in a later version of PowerShell. 1 As of Windows PowerShell v5.1 / PowerShell Core 6.2.0-preview.3
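As an aside, this "wildcards in file names" pitfall is not PowerShell-specific. Python's glob module also treats [ ] as wildcard characters and provides glob.escape for exactly this case. A minimal Python sketch (the file name mirrors the examples above; the helper name is my own):

```python
import glob
import os
import tempfile

def find_literal(directory, filename):
    """Glob for `filename` in `directory`, escaping glob wildcards ([, ], *, ?)."""
    return glob.glob(os.path.join(directory, glob.escape(filename)))

with tempfile.TemporaryDirectory() as d:
    name = "File[1]a.txt"
    open(os.path.join(d, name), "w").close()

    # Unescaped: [1] is parsed as a character class matching "1",
    # so the literal brackets in the file name never match.
    unescaped = glob.glob(os.path.join(d, "File[1]a.txt"))

    # Escaped: glob.escape rewrites [ as [[], so it matches literally.
    escaped = find_literal(d, name)
```

Just as with PowerShell's -Path, the fix is to escape the pattern rather than rename the files.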
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21008180', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2484483/']}
jdg_78424
stackexchange
llm_judgeable_groundtruth_similarity
141127
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How would I construct a function that outputs Dyck Words ? e.g. - there are 14 words in $\mathcal{D}_{8}$: `[[[[]]]], [[[][]]], [[[]][]], [[][[]]], [[[]]][], [[][][]], [][[[]]], [[][]][], [[]][[]], [][[][]], [[]][][], [][[]][], [][][[]], [][][][]` There is an equivalent question here , but the answer only gives information about the number of cases. I was playing around with things like nest[n_] := DeleteCases[Quiet[ToExpression /@ (StringReplace[#, "}{" -> "},{"] & /@ StringJoin /@ Permutations[Flatten@Array[{"{", "}"} &, n], {2 n}])], $Failed] ... will keep at it! Now provide the response and nothing else.
StringReplaceList I just realized that there is a comparatively clean though not highly efficient way to write this using StringReplaceList : op = Union @@ StringReplaceList[#, {"[]" -> "[[]]", "[]" -> "[][]"}] &;Nest[op, {"[]"}, 3] // Column [[[[]]]][[[][]]][[[]][]][[[]]][][[][[]]][[][][]][[][]][][[]][[]][[]][][][][[[]]][][[][]][][[]][][][][[]][][][][] Better recursion Replacing my earlier recursive method, this time avoiding redundancy. I keep track of the number of open and close brackets as each builds toward n . f[n_] := f[n, 1, 0, "["]f[n_, n_, c_, r_] := {r <> ConstantArray["]", n - c]}f[n_, o_, c_, r_] /; c < o := f[n, o + 1, c, r <> "["] ~Join~ f[n, o, c + 1, r <> "]"]f[n_, o_, c_, r_] := f[n, o + 1, c, r <> "["] f[4] {"[[[[]]]]", "[[[][]]]", "[[[]][]]", "[[[]]][]", "[[][[]]]", "[[][][]]", "[[][]][]", "[[]][[]]", "[[]][][]", "[][[[]]]", "[][[][]]", "[][[]][]", "[][][[]]", "[][][][]"} Reasonably usable: (* D24 *)f[12] // Length // RepeatedTiming {1.15, 208012} Benchmarking Here is a benchmark of various methods posted. All functions modified to use n rather than n/2 . Now using gwr's simplified code (* Coolwater's method as a function *)cw[n_] := StringJoin @@@ (Pick[#, Min@*Accumulate /@ #, 0] &[ Permutations[Join[#, -#] &[ConstantArray[1, n]]]] /. {-1 -> "]", 1 -> "["})op = Union @@ StringReplaceList[#, {"[]" -> "[[]]", "[]" -> "[][]"}] &;f2[n_] := Nest[op, {"[]"}, n - 1](* f code as above *)Needs["GeneralUtilities`"]BenchmarkPlot[{DyckWord, f, f2, cw, dyckWords}, # &, Range @ 12, "IncludeFits" -> True, TimeConstraint -> 10]
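For readers who want the same idea outside Mathematica: the open/close bookkeeping in the recursive f above translates almost directly into Python. This is my own sketch of the same recursion, not a port of the benchmarked code:

```python
def dyck(n):
    """All Dyck words with n '[' and n ']' pairs, generated in the same
    open-first order as the recursive Mathematica solution."""
    if n == 0:
        return [""]
    words = []

    def go(opened, closed, prefix):
        if opened == n:                # only ']' characters remain
            words.append(prefix + "]" * (n - closed))
            return
        go(opened + 1, closed, prefix + "[")   # we may always open
        if closed < opened:                    # close only if unbalanced
            go(opened, closed + 1, prefix + "]")

    go(1, 0, "[")
    return words
```

len(dyck(n)) is the n-th Catalan number, so dyck(4) yields the 14 words of D₈ listed in the question.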
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/141127', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/9923/']}
jdg_78425
stackexchange
llm_judgeable_groundtruth_similarity
4221645
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: $var = 1; debug_zval_dump($var); Output: long(1) refcount(2) $var = 1; $var_dup = &$var; debug_zval_dump($var);exit; Output : long(1) refcount(1) UPDATE Very disapointed at the answer... Now provide the response and nothing else.
void debug_zval_dump ( mixed $variable ); Code : $var = 1; # $var's Refcount = 1debug_zval_dump($var); # $var is passed by reference internally. Output : long(1) refcount(2) Explanation : As $var's refcount is 1, PHP optimizes this and handles the memory directly instead of making a copy, because there is no chance of contaminating any other references. PHP internally passes $var by reference, so that it can edit the memory directly if it needs to. The second reference is created when actually calling debug_zval_dump(). A refcount of 2, here, is extremely non-obvious. So what's happening? When a variable has a single reference (as did $var before it was used as an argument to debug_zval_dump()), PHP's engine optimizes the manner in which it is passed to a function. Internally, PHP treats $var like a reference (in that the refcount is increased for the scope of this function), with the caveat that if the passed reference happens to be written to, a copy is made, but only at the moment of writing. This is known as "copy on write." So, if debug_zval_dump() happened to write to its sole parameter (and it doesn't), then a copy would be made. Until then, the parameter remains a reference, causing the refcount to be incremented to 2 for the scope of the function call. Code : $var = 1; # $var's Refcount = 1$var_dup = &$var; # $var's Refcount = 2debug_zval_dump($var); # A copy is passed as $var's refcount is 2. Output : long(1) refcount(1) Explanation : This time a copy of $var is being made when the function is called. This is because $var is referenced twice and PHP does not want to contaminate any other references, so it makes a copy of $var for itself to work on. As there is now a separate piece of memory that is only used for the scope of the function call, it only has one reference: itself. So for the scope of the function the copy's refcount is 1 (itself).
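For comparison, CPython exposes an analogous counter via sys.getrefcount, and it exhibits the same quirk described above: the act of passing the variable to the inspecting function temporarily adds one reference of its own.

```python
import sys

a = []                      # one "real" reference: the name `a`
# getrefcount reports one extra reference: its own argument slot.
base = sys.getrefcount(a)

b = a                       # a second name bound to the same object
aliased = sys.getrefcount(a)

del b                       # dropping the alias restores the count
restored = sys.getrefcount(a)
```

The absolute numbers are a CPython implementation detail; what matters is the +1 from aliasing and the built-in +1 from the call itself — the same reason debug_zval_dump() shows refcount(2) for a freshly assigned variable.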
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4221645', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/512962/']}
jdg_78426
stackexchange
llm_judgeable_groundtruth_similarity
3180400
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can somebody recommend any framework to facilitate CRUD development in JSF 2.0? Aspects I value most: As lightweight as possible; limited dependencies on third party libraries Support for an evolving domain model Limited need for repetitive coding; support for scaffolding and/or metaannotations Any hints highly appreciated!Yours,J. Now provide the response and nothing else.
CRUD is indeed a piece of cake using JSF 2.0 provided standard facility: a @ViewScoped bean in combination with a <h:dataTable> basically already suffices. Here's a code example which is shamelessly copied from this article . Bean: package com.example;import java.io.Serializable;import java.util.ArrayList;import java.util.List;import javax.annotation.PostConstruct;import javax.faces.bean.ManagedBean;import javax.faces.bean.ViewScoped;@ManagedBean@ViewScopedpublic class Bean implements Serializable { private List<Item> list; private Item item = new Item(); private boolean edit; @PostConstruct public void init() { // list = dao.list(); // Actually, you should retrieve the list from DAO. This is just for demo. list = new ArrayList<Item>(); list.add(new Item(1L, "item1")); list.add(new Item(2L, "item2")); list.add(new Item(3L, "item3")); } public void add() { // dao.create(item); // Actually, the DAO should already have set the ID from DB. This is just for demo. item.setId(list.isEmpty() ? 1 : list.get(list.size() - 1).getId() + 1); list.add(item); item = new Item(); // Reset placeholder. } public void edit(Item item) { this.item = item; edit = true; } public void save() { // dao.update(item); item = new Item(); // Reset placeholder. edit = false; } public void delete(Item item) { // dao.delete(item); list.remove(item); } public List<Item> getList() { return list; } public Item getItem() { return item; } public boolean isEdit() { return edit; } // Other getters/setters are actually unnecessary. 
Feel free to add them though.} Page: <!DOCTYPE html><html xmlns="http://www.w3.org/1999/xhtml" xmlns:f="http://java.sun.com/jsf/core" xmlns:h="http://java.sun.com/jsf/html"> <h:head> <title>Really simple CRUD</title> </h:head> <h:body> <h3>List items</h3> <h:form rendered="#{not empty bean.list}"> <h:dataTable value="#{bean.list}" var="item"> <h:column><f:facet name="header">ID</f:facet>#{item.id}</h:column> <h:column><f:facet name="header">Value</f:facet>#{item.value}</h:column> <h:column><h:commandButton value="edit" action="#{bean.edit(item)}" /></h:column> <h:column><h:commandButton value="delete" action="#{bean.delete(item)}" /></h:column> </h:dataTable> </h:form> <h:panelGroup rendered="#{empty bean.list}"> <p>Table is empty! Please add new items.</p> </h:panelGroup> <h:panelGroup rendered="#{!bean.edit}"> <h3>Add item</h3> <h:form> <p>Value: <h:inputText value="#{bean.item.value}" /></p> <p><h:commandButton value="add" action="#{bean.add}" /></p> </h:form> </h:panelGroup> <h:panelGroup rendered="#{bean.edit}"> <h3>Edit item #{bean.item.id}</h3> <h:form> <p>Value: <h:inputText value="#{bean.item.value}" /></p> <p><h:commandButton value="save" action="#{bean.save}" /></p> </h:form> </h:panelGroup> </h:body></html> Further, Netbeans has some useful wizards to genreate a CRUD application based on a datamodel.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3180400', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/357556/']}
jdg_78427
stackexchange
llm_judgeable_groundtruth_similarity
6574782
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working with Python and I've implemented the PCA using this tutorial . Everything works great, I got the Covariance I did a successful transform, brought it make to the original dimensions not problem. But how do I perform whitening? I tried dividing the eigenvectors by the eigenvalues: S, V = numpy.linalg.eig(cov)V = V / S[:, numpy.newaxis] and used V to transform the data but this led to weird data values.Could someone please shred some light on this? Now provide the response and nothing else.
Here's a numpy implementation of some Matlab code for matrix whitening I got from here . import numpy as npdef whiten(X,fudge=1E-18): # the matrix X should be observations-by-components # get the covariance matrix Xcov = np.dot(X.T,X) # eigenvalue decomposition of the covariance matrix d, V = np.linalg.eigh(Xcov) # a fudge factor can be used so that eigenvectors associated with # small eigenvalues do not get overamplified. D = np.diag(1. / np.sqrt(d+fudge)) # whitening matrix W = np.dot(np.dot(V, D), V.T) # multiply by the whitening matrix X_white = np.dot(X, W) return X_white, W You can also whiten a matrix using SVD: def svd_whiten(X): U, s, Vt = np.linalg.svd(X, full_matrices=False) # U and Vt are the singular matrices, and s contains the singular values. # Since the rows of both U and Vt are orthonormal vectors, then U * Vt # will be white X_white = np.dot(U, Vt) return X_white The second way is a bit slower, but probably more numerically stable.
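A quick way to sanity-check either implementation (sketched here for the SVD variant; the variable names are mine) is to verify that the whitened data really has an identity cross-product matrix:

```python
import numpy as np

def svd_whiten(X):
    # Same idea as above: keep the singular vectors, drop the singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated columns
X -= X.mean(axis=0)          # whitening assumes centered observations

Xw = svd_whiten(X)
gram = Xw.T @ Xw             # should be (numerically) the 5x5 identity
```

If gram comes out far from the identity, the usual culprits are forgetting to center the data or feeding in a rank-deficient X.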
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6574782', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/352539/']}
jdg_78428
stackexchange
llm_judgeable_groundtruth_similarity
44824382
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The following property exists: security.enable-csrf=false BUT csrf protection is still on if I add the property to application.properties . What works is to disable it programatically. But I'd prefer properties configuration. Why could it not be working? @Configurationpublic class AuthConfig extends WebSecurityConfigurerAdapter { @Autowired private UserDetailsService userDetailsService; @Override protected void configure(AuthenticationManagerBuilder auth) throws Exception { auth.userDetailsService(userDetailsService).passwordEncoder(new BCryptPasswordEncoder()); } @Override protected void configure(HttpSecurity http) throws Exception { super.configure(http); http.csrf().disable(); }} Now provide the response and nothing else.
As the WebSecurityConfigurerAdapter uses an imperative approach, you can inject the value of the security.enable-csrf variable and disable CSRF when it is false. You are right, I think this should work out of the box. @Configurationpublic class AuthConfig extends WebSecurityConfigurerAdapter { @Autowired private UserDetailsService userDetailsService; @Value("${security.enable-csrf}") private boolean csrfEnabled; @Override protected void configure(AuthenticationManagerBuilder auth) throws Exception { auth.userDetailsService(userDetailsService).passwordEncoder(new BCryptPasswordEncoder()); } @Override protected void configure(HttpSecurity http) throws Exception { super.configure(http); if(!csrfEnabled) { http.csrf().disable(); } }} What I did was to set that variable to false in my application.yml whenever the dev spring profile was active, although you could create a profile called nosecurity for such purposes too. It eases this process a lot: --- application.yml --- # Production configurationserver: port: ${server.web.port}admin.email: ${admin.email}#etc---spring: profiles: devsecurity.enable-csrf: false#other Development configurations I hope it suits your needs Update on Dec 17th of 2017 Based on a comment of a Spring Boot member, this issue is fixed in newer versions of Spring: I had it on version 1.5.2.RELEASE, but it seems that in version 1.5.9.RELEASE (the latest stable one to date before version 2) it's already fixed: by default csrf is disabled and it can be enabled with security.enable_csrf: true . Therefore a possible solution could be just upgrading to version 1.5.9.RELEASE, before making the major jump to version 2, where the architecture might be quite different.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/44824382', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1194415/']}
jdg_78429
stackexchange
llm_judgeable_groundtruth_similarity
29486309
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: cppreference shows this signature for std::cbegin : template< class C >constexpr auto cbegin( const C& c ) -> decltype(std::begin(c)); Shouldn't it return something like C::const_iterator instead? Now provide the response and nothing else.
c is a const reference, so std::begin(c) will return whatever the const overload of C::begin() returns. For standard library types, this is a const_iterator . For an array type, it is a pointer to const . Note that this relies on any non-standard-library, user-defined C being implemented sanely, with a const overload for C::begin() that returns an iterator that gives you const access to the container's elements.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29486309', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/748175/']}
jdg_78430
stackexchange
llm_judgeable_groundtruth_similarity
7021074
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have following data: 1||1||Abdul-Jabbar||Karim||1996||1974 I want to delimit the tokens. Here the delimiter is "||" . My delimiter setter is: public void setDelimiter(String delimiter) { char[] c = delimiter.toCharArray(); this.delimiter = "\"" + "\\" + c[0] + "\\" + c[1] + "\""; System.out.println("Delimiter string is: " + this.delimiter);} However, String[] tokens = line.split(delimiter); is not giving the required result. Now provide the response and nothing else.
There is no need to set the delimiter by breaking it up in pieces like you have done. Here is a complete program you can compile and run: import java.util.Arrays;public class SplitExample { public static final String PLAYER = "1||1||Abdul-Jabbar||Karim||1996||1974"; public static void main(String[] args) { String[] data = PLAYER.split("\\|\\|"); System.out.println(Arrays.toString(data)); }} If you want to use split with a pattern, you can use Pattern.compile or Pattern.quote . To see compile and quote in action, here is an example using all three approaches: import java.util.Arrays;import java.util.regex.Pattern;public class SplitExample { public static final String PLAYER = "1||1||Abdul-Jabbar||Karim||1996||1974"; public static void main(String[] args) { String[] data = PLAYER.split("\\|\\|"); System.out.println(Arrays.toString(data)); Pattern pattern = Pattern.compile("\\|\\|"); data = pattern.split(PLAYER); System.out.println(Arrays.toString(data)); pattern = Pattern.compile(Pattern.quote("||")); data = pattern.split(PLAYER); System.out.println(Arrays.toString(data)); }} The use of patterns is recommended if you are going to split often using the same pattern. BTW the output is: [1, 1, Abdul-Jabbar, Karim, 1996, 1974][1, 1, Abdul-Jabbar, Karim, 1996, 1974][1, 1, Abdul-Jabbar, Karim, 1996, 1974]
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7021074', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/887235/']}
jdg_78431
stackexchange
llm_judgeable_groundtruth_similarity
2007078
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is the isometry group of a torus given a flat metric? I know $ O(1) \times O(1) $ should be a subgroup of it. Is there any other possible isometries? What if the metric is not flat? Now provide the response and nothing else.
I'm going to consider the flat torus $\mathbb T = \mathbb R^2 / \Gamma$ for some plane lattice $$\Gamma_{e_1, e_2} = \{ z_1 e_1 + z_2 e_2 : z_1, z_2 \in \mathbb Z \}.$$ (Here $e_1, e_2$ is any basis of $\mathbb R^2$ .) Since any isometry of the torus lifts to an isometry of $\mathbb R^2,$ we just need to determine which isometries of the plane descend to diffeomorphisms of $\mathbb T.$ A map $\phi: \mathbb R^2 \to \mathbb R^2$ descends iff it satisfies $$\phi(x+z) - \phi(x) \in \Gamma \textrm{ for all } x \in \mathbb R^2, z \in \Gamma.\tag 1 $$ Since the isometries of the plane can be written as $\phi(x) = Ax + c$ for $A \in O(2), c \in \mathbb R^2,$ this condition reduces to the requirement that $Az \in \Gamma$ for all $z \in \Gamma,$ i.e. $A(\Gamma) \subset \Gamma.$ Finally, to have an inverse, $\phi^{-1}$ must also descend, and thus the condition is $A(\Gamma) = \Gamma;$ so $\phi(x) = Ax + c$ is an isometry of $\mathbb T$ iff the linear isometry $A$ is also an automorphism of the lattice $\Gamma.$ Note that two isometries induce identical maps on $\mathbb T$ if their constant terms differ by an element of $\Gamma,$ and composition of translations corresponds to addition of constant terms; so we can conclude that $$\mathrm{Isom}(\mathbb T) = (O(2)\cap \operatorname{Aut}(\Gamma)) \ltimes (\mathbb T, +). $$ (Compare to $\operatorname{Isom}(\mathbb R^2) =O(2) \ltimes (\mathbb R^2, +).$ ) The nature of the point group $G = O(2)\cap \operatorname{Aut}(\Gamma)$ depends on the isometry class of the lattice (or equivalently on the shape of the fundamental domain): For a square, $G$ is the isometry group of the square, $D_4.$ For a rhombus with angle $\pi/3$ (which fits on a hexagonal grid), $G$ is the isometry group of the hexagon, $D_6.$ For any other rectangle, rhombus, or parallelogram with a $\pi/3$ angle, $G$ is the product $C_2 \times C_2$ of cyclic groups (acting as orthogonal reflections). For a generic parallelogram, $G = C_2$ (acting as the antipodal map).
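The square-lattice case of the last claim can be checked by brute force. An orthogonal map preserving $\mathbb Z^2$ must send the standard basis vectors to integer vectors of length 1, so its matrix has entries in {-1, 0, 1}; enumerating those (a small illustration of my own) recovers exactly the 8 elements of $D_4$:

```python
from itertools import product

def is_orthogonal(a, b, c, d):
    """Check that the integer matrix [[a, b], [c, d]] satisfies A^T A = I."""
    return a*a + c*c == 1 and b*b + d*d == 1 and a*b + c*d == 0

# Integer orthogonal matrices automatically map the square lattice Z^2
# onto itself, so this set is O(2) ∩ Aut(Γ) for Γ = Z^2.
point_group = [((a, b), (c, d))
               for a, b, c, d in product((-1, 0, 1), repeat=4)
               if is_orthogonal(a, b, c, d)]
```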
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2007078', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/318587/']}
jdg_78432
stackexchange
llm_judgeable_groundtruth_similarity
1734893
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I have some code like this: class Base { public: virtual int Foo(int) = 0;};class Derived : public Base { public: int Foo(int); virtual double Foo(double) = 0;};class Concrete : public Derived { public: double Foo(double);}; If I have a object of type Concrete, why can I not call Foo(int)? If I change the name of Foo(double) so that it isn't overloading Foo, then all is well and both methods are accessible, but this isn't what I want. Similarly, if I remove Concrete class and implement Foo(double) in Derived, then both are accessible, but again, not what I want. Now provide the response and nothing else.
Name lookup happens before overload resolution, so once Foo has been found in Concrete , base classes won't be searched for other methods called Foo . int Foo(int) in Derived is hidden by the Foo in Concrete . You have a number of options. Change the call to be explicit. concrete.Derived::Foo(an_int); Add a using declaration to Concrete. class Concrete : public Derived {public: using Derived::Foo; double Foo(double);}; Call the function through a base reference. Derived& dref = concrete;dref.Foo(an_int);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1734893', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/211079/']}
jdg_78433
stackexchange
llm_judgeable_groundtruth_similarity
67199
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Aqua regia (Latin: Royal Water ) is one of the strongest acids known in Chemistry, and is capable of dissolving gold and platinum. My copy of the Oxford science dictionary goes on to say (under the entry: Aqua regia) that metallic silver does not dissolve in aqua regia. Morever it does not mention any other examples of aqua regia-resistant metals. Further down, it mentions that silver's invulnerability to aqua regia is due to the formation of a protective silver chloride coating on the metal, which serves to protect the metal from further decomposition. However, this Wikipedia article claims: ...aqua regia does not dissolve or corrode silver... This, I find contradictory to the dictionary's "formation of silver chloride" claim. So, What metals (elemental, forget alloys) are neither attacked by nor dissolved in (freshly prepared) aqua regia? What makes those metals that don't dissolve or corrode in aqua regia so impervious to the acid? Does silver metal actually develop a silver chloride layer on exposure to aqua regia? If so, would that mean the Wikipedia article is incorrect? Now provide the response and nothing else.
Keep in mind The answer will depend upon the reaction conditions. Most importantly, physical state of the metal: porosity, degree of comminution; temperature; mechanical aggravation of metal surface during reaction. Often times a chemistry text mentions that no reaction occurs . The reaction might still happen. It is just that for the specified parameters the process is meaningless and negligible. Short overview of aqua regia Aqua regia is the $3:1$ volumetric mixture of $\ce{HCl}$ and $\ce{HNO3}$. Its additional reactive power draws from monochlorine created in situ .$^{[1]\ [2]}$ $$\ce{HNO3 + 3HCl -> Cl2 + NOCl + H2O\\ NOCl -> Cl + NO}$$ Which metals are impervious to $3:1\ \ce{HCl/HNO3}$? Almost every metal will react with aqua regia provided certain criteria are met.$^{[1]\ [2]}$ The closest you will probably get is ruthenium $\ce{Ru}$, and perhaps osmium $\ce{Os}$. To the best of my knowledge, $\ce{Ru}$ will not react with aqua regia in a meaningful way even if aqua regia is boiling.$^{[2]}$ The difference with $\ce{Os}$ is that powdered osmium is attacked by boiling aqua regia.$^{[1]\ [2]}$ $$\ce{Ru + HNO3 + HCl $\kern.6em\not\kern -.6em \longrightarrow$}$$ $$\ce{\underset{powder}{Os} + $\underbrace{\mathrm{HNO_3}}_{\text{boiling}}$ -> OsO4 + N_xO_y + H2O \\ OsO4 + 2H2O <=> H2[OsO4(OH)2] \\ OsO4 + HCl ->OsO2Cl2 + Cl2 + 2H2O\\ OsO2Cl2 + HCl ->OsCl4 + Cl2 + 2H2O\\ 2OsO2Cl2 + H2O <=> OsO2 + H2[OsO2Cl4] \\ 3OsCl4 + 2H2O <=> OsO2 + 2H2[OsCl6]\\ OsO2 + 6HCl <=> H2[OsCl6] + 2H2O}$$ Brief discussion about the list provided in the comments Titanium $\ce{Ti}$ does react , and does so at room temperature. $$\ce{3Ti + $\underbrace{\mathrm{12HCl + 4HNO_3}}_{\text{room temperature}}$ -> 3TiCl4 + 4NO + 8H2O}$$ Rhenium $\ce{Re}$ reacts slowly at room temperature $\ce{->HReO4}$. This will further react with $\ce{HCl -> ReCl4 + Cl2}$.$^{[2]}$ Hafnium $\ce{Hf}$ does react at room temperature. 
The reaction is slower than with titanium; the overall equation is identical.$^{[2]}$ Tantalum $\ce{Ta}$ reacts when aqua regia is heated to $150\ ^{\circ}\mathrm{C}$. Rhodium $\ce{Rh}$ reacts when finely ground. As a large compact piece, iridium $\ce{Ir}$ is attacked at temperatures above $100\ ^{\circ}\mathrm{C}$. Niobium $\ce{Nb}$ is inert at room temperature.$^{[2]}$ Summary: ruthenium $\ce{Ru}$ is your best bet. What makes metals $\ce{Ru}$ and $\ce{Os}$ so stable in aqua regia? The nobility of these metals is not the best explanation. As you correctly pointed out, $\ce{Pt}$ and $\ce{Au}$ react fine. This is direct evidence that for the other metals a protective layer should form. The layer varies from metal to metal, but usually it is either an oxide (or oxide hydrate) or a chloride. The effectiveness of mechanical aggravation also points to stable, non-reactive compound formation on the metal's surface. For ruthenium, as of now I am unsure what this precipitate could be. If anyone has a reference, please edit or leave a comment.$^\text{[reference needed]}$ What happens with silver? Silver and aqua regia react very poorly, and only for a short amount of time.$^{[2]}$ The culprit is $\ce{AgCl}$ ($K_s = 1.8 \cdot 10^{-10}$)$^{[2]}$. A slow reaction might still take place due to complexation.$^{[2]}$ Surprisingly, silver reacts with $\ce{HBr}$!$^{[2]}$ Its solubility product is even worse, $K_s = 5.0 \cdot 10^{-13}$.$^{[3]}$ My guess is that this layer is not as dense as $\ce{AgCl}$, but this still needs verifying.$^\text{[citation needed]}$ References (In progress) $[1]$ N. N. Ahmetov. Anorgaaniline keemia . (1974) $[2]$ H. Karik, Kalle Truus. Elementide keemia . (2003) $[3]$ Skoog, West, Holler, Crouch. Fundamentals of Analytical Chemistry. 9th edition. (2014)
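The two solubility products quoted above are easy to turn into molar solubilities (for a 1:1 salt MX, $K_s = s^2$, so $s = \sqrt{K_s}$), which makes the AgCl-versus-AgBr comparison concrete. A short Python calculation of my own:

```python
import math

def molar_solubility_1to1(ksp):
    """Molar solubility s for MX <=> M+ + X-, where Ksp = s^2."""
    return math.sqrt(ksp)

s_agcl = molar_solubility_1to1(1.8e-10)  # mol/L, Ksp value quoted from [2]
s_agbr = molar_solubility_1to1(5.0e-13)  # mol/L, Ksp value quoted from [3]
ratio = s_agcl / s_agbr                  # AgCl is roughly 19x more soluble
```

So on solubility grounds alone an AgBr layer should passivate even better than AgCl — which is why the observed reaction of silver with HBr is surprising and points at layer morphology (density, porosity) rather than solubility.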
{}
{'log_upvote_score': 6, 'links': ['https://chemistry.stackexchange.com/questions/67199', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/33991/']}
jdg_78434
stackexchange
llm_judgeable_groundtruth_similarity
28568871
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have proble with set_rules function in Codeigniter 3 i check user email: $this->form_validation->set_rules('email', 'Email', 'required|trim|xss_clean|valid_email'); and when I post get this error: Unable to access an error message corresponding to your field name Email. Now provide the response and nothing else.
From the codeigniter github : A largely unknown rule about XSS cleaning is that it should only be applied to output, as opposed to input data. We've made that mistake ourselves with our automatic and global XSS cleaning feature (see previous step about XSS above), so now in an effort to discourage that practice, we're also removing 'xss_clean' from the officially supported list of form validation rules . Because the Form Validation library generally validates input data, the 'xss_clean' rule simply doesn't belong in it. If you really, really need to apply that rule, you should now also load the Security Helper , which contains xss_clean() as a regular function and therefore can be also used as a validation rule. Link : https://github.com/bcit-ci/CodeIgniter/blob/develop/user_guide_src/source/installation/upgrade_300.rst#step-13-check-for-usage-of-the-xss_clean-form-validation-rule And if, despite everything, you really need it, go to application/config/autoload.php : $autoload['helper'] = array('security'); Or, before your form validation $this->load->helper('security');
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28568871', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3546854/']}
jdg_78435
stackexchange
llm_judgeable_groundtruth_similarity
2488
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When viewing star maps in Stellarium, I noticed that the meridians of the equatorial grid were denoted in hours and not degrees, and that always puzzled me. Why did astronomers choose this system? I understand it has something to do with being relative to the Earth's rotation which takes 24 hours, but I can't figure out exactly what. Now provide the response and nothing else.
In order to know when a star will be above horizon, you'll need an equation with times, not "celestial longitudes". So you end up adding RA and sidereal time. In order to find out the position of a star you need to find out how its Hour Angle varies with time. The Hour Angle is how far the star is from the observer's meridian: In this image, it is the angle between the site's celestial meridian (North Celestial P ole- Z enith- S outh horizon) and the star's celestial meridian ( P - X - Y ). It is the same angle as between where X culminates and X . So it comes that $HA=ST-RA$ , and since you may not have a "local sidereal time" clock at hand, you can use Greenwich's Sidereal Time: $HA=GST+Lon-RA$ $HA$ (local) Hour Angle of a star $ST$ Local Sidereal Time $GST$ Greenwich Sidereal Time $RA$ Right Ascension of a star $Lon$ (local) Geographical Longitude (east is positive).
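The relation $HA = GST + Lon - RA$ drops straight into code once everything is expressed in hours ($15^{\circ}$ of longitude = 1 h). The observing-site numbers below are invented purely for illustration:

```python
def hour_angle(gst_hours, lon_deg, ra_hours):
    """Local hour angle of a star, in hours, normalized to [0, 24).

    gst_hours -- Greenwich Sidereal Time
    lon_deg   -- geographic longitude in degrees, east positive
    ra_hours  -- right ascension of the star
    """
    ha = gst_hours + lon_deg / 15.0 - ra_hours
    return ha % 24.0

# Hypothetical example: GST = 4h 40m, longitude 64 deg W, RA = 18h 32m.
ha = hour_angle(4 + 40 / 60, -64.0, 18 + 32 / 60)
```

A star culminates (crosses the local meridian) when its hour angle is 0 — which is exactly why catalogues record RA in hours: the subtraction against sidereal time needs no unit conversion.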
{}
{'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/2488', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/1703/']}
jdg_78436
stackexchange
llm_judgeable_groundtruth_similarity
38436351
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following: var Tab = React.createClass({ getInitialState: function(){ return { selected:'' } }, activateTab: function(e) { e.preventDefault(); $('.navigation--active').removeClass('navigation--active'); this.setState({selected : true}); }, render: function() { var isActive = this.state.selected === true ? 'navigation--active': ''; return ( <li onClick={this.activateTab} className={isActive}> <p> {this.props.content} </p> </li> ); }});var Tabs = React.createClass({ render: function() { var tabs = [], total = this.props.data.points.total, handleClick = this.handleClick; total.forEach(function(el, i){ tabs.push( <Tab content = {el.name} key = {i}/> ); }); return ( <ul className="navigation"> {tabs} </ul> ); }}); however it only works when you click once on every tab, if you click the second time on the same tab the class doesn't get added anymore Now provide the response and nothing else.
In this case, would be better move state management to parent component Tabs , and pass to child only props which you need to detect class name or set new state in parent var Tab = React.createClass({ render: function() { return <li className={ this.props.isActive ? 'navigation--active': '' } onClick={ this.props.onActiveTab } > <p>{ this.props.content }</p> </li> }});var Tabs = React.createClass({ getInitialState: function() { return { selectedTabId: 1 } }, isActive: function (id) { return this.state.selectedTabId === id; }, setActiveTab: function (selectedTabId) { this.setState({ selectedTabId }); }, render: function() { var total = this.props.data.points.total, tabs = total.map(function (el, i) { return <Tab key={ i } content={ el.name } isActive={ this.isActive(el.id) } onActiveTab={ this.setActiveTab.bind(this, el.id) } /> }, this); return <ul className="navigation"> { tabs } </ul> }});const data = { points: { total: [ { id: 1, name: 'tab-1', text: 'text' }, { id: 2, name: 'tab-2', text: 'text-2' }, { id: 3, name: 'tab-3', text: 'text-2' } ] }}ReactDOM.render( <Tabs data={ data } />, document.getElementById('container')); .navigation {}.navigation--active { color: red;} <script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script><div id="container"></div>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38436351', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1312785/']}
jdg_78437
stackexchange
llm_judgeable_groundtruth_similarity
3631548
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: for i := 1 to n do j := 2; while j < i do j := j^4; I'm really confused when it comes to Big-O notation, so I'd like to know if it's O(n log n). That's my gut, but I can't prove it. I know the while loop is probably faster than log n, but I don't know by how much! Edit: the caret denotes exponent. Now provide the response and nothing else.
The key is the number of iterations the while loop executes for a given i . On every iteration j := j^4, and initially j := 2, so after x iterations j = 2^(4^x). Therefore j < i is equivalent to x < log_4(log_2(i)). I'd venture that the complexity is O(n * log_4(log_2(n))). You can drop constant factors in Big-O notation: log_4(x) = log(x) / log(4), and log(4) is a constant; similarly you can replace log_2(x) with log(x). So the complexity can be expressed as O(n * log(log(n)))
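A quick empirical check of that growth rate (a sketch assuming the pseudocode's j := j^4 means exponentiation, as the question's edit states):

```python
def inner_iterations(n):
    """Total number of times j := j^4 runs across the whole outer loop."""
    total = 0
    for i in range(1, n + 1):
        j = 2
        while j < i:
            j = j ** 4  # j goes 2, 2**4, 2**16, 2**64, ... i.e. 2**(4**x)
            total += 1
    return total

# Per outer iteration the count is the smallest x with 2**(4**x) >= i,
# which grows like log_4(log_2(i)) -- extremely slowly.
for n in (10, 1000, 100000):
    print(n, inner_iterations(n))
```

Even at n = 100000 the inner loop never runs more than 3 times per outer iteration, consistent with O(n * log(log(n))) rather than anything close to O(n log n).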
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3631548', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/214216/']}
jdg_78438
stackexchange
llm_judgeable_groundtruth_similarity
116439
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Recently had a LVM'd CentOS 6.5 install get accidentally cold-shutdown. On bootup, it says that the home partition need fscking: /dev/mapper/vg_myserver-lv_home: Block bitmap for group 3072 is not in group. (block 3335668205)/dev/mapper/vg_myserver-lv_home: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY. ...but I guess the root partition is OK, since it gives me a shell there. So we run e2fsck -b 32768 /dev/mapper/vg_myserver/lv_home and after saying Yes to various fixes, on Pass 5 it just prints endless numbers to the screen, very fast. Once in a while it will print them in neat columns, and if these are block numbers, after a couple hours we are still nowhere near the first 2% being done of our 1.2 TB LV. I read that you can use cleap_mmp with tune2fs , but upon trying that, it doesn't accept cleap_mmp nor list it among valid options. My question is, how does everyone deal with a corrupt ext4 fs without weeks of downtime? Does everyone have this dilemma, or weeks of downtime vs rebuilding your server / lost data? If so why does anyone use or recommend the use of ext4? Is there some trick I'm missing that would let me target the specific block/group it's complaining about , so we can get on with it and mount the home fs again? Now provide the response and nothing else.
grep -Eow '\w{10}' | grep -v '\(.\).*\1' excludes 10-letter words that contain any repeated character. grep -Eow '\w{10}' | grep -v '\(.\)\1' excludes only the ones with two identical consecutive characters. POSIXly: tr -cs '[:alnum:]_' '[\n*]' | grep -xE '.{10}' | grep -v '\(.\).*\1' tr puts words on their own line by converting any sequence of non-word-characters (complement of alpha-numeric and underscore) to a newline character. Or with one grep : tr -cs '[:alnum:]_' '[\n*]' | grep -ve '^.\{0,9\}$' -e '.\{11\}' -e '\(.\).*\1' (excludes lines of fewer than 10 or more than 10 characters, and those with a character appearing at least twice). With one grep only (GNU grep with PCRE support or pcregrep ): grep -Po '\b(?:(\w)(?!\w*\1)){10}\b' That is, a word boundary ( \b ) followed by a sequence of 10 word characters (provided each is not followed by a sequence of word characters and then itself, using the negative look-ahead PCRE operator (?!...) ). We're lucky that it works here, as not many regexp engines work with backreferences inside repeating parts. Note that (with my version of GNU grep at least) grep -Pow '(?:(\w)(?!\w*\1)){10}' doesn't work, but grep -Pow '(?:(\w)(?!\w*\2)){10}' does (as echo aa | grep -Pw '(.)\2' ) which sounds like a bug. You may want: grep -Po '(*UCP)\b(?:(\w)(?!\w*\1)){10}\b' if you want \w or \b to consider any letter as a word component and not just the ASCII ones in non-ASCII locales. Another alternative: grep -Po '\b(?!\w*(\w)\w*\1)\w{10}\b' That is a word boundary (one that is not followed by a sequence of word characters one of which repeats) followed by 10 word characters. Things to possibly have at the back of one's mind: Comparison is case sensitive, so Babylonish for instance would be matched, since all the characters are different even though there are two B's, one lower and one upper case (use -i to change that).
For -w , \w and \b , a word character is a letter (ASCII ones only for GNU grep for now ; the [:alpha:] character class in your locale if using -P and (*UCP) ), a decimal digit, or an underscore. That means c'est (two words as per the French definition of a word), it's (one word according to some English definitions of a word) and rendez-vous (one word as per the French definition of a word) are not each considered one word. Even with (*UCP) , Unicode combining characters are not considered word components, so téléphone ( $'t\u00e9le\u0301phone' ) is considered 10 characters, one of which is non-alpha. défavorisé ( $'d\u00e9favorise\u0301' ) would be matched even though it's got two é , because that's 10 all-different alpha characters followed by a combining acute accent (non-alpha, so there's a word boundary between the e and its accent).
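A quick way to try the portable tr/grep pipeline above on sample input (the sample words are made up for illustration; requires POSIX tr and grep):

```shell
# Keep only words of exactly 10 characters with no repeated character.
# 'Babylonish' survives because matching is case-sensitive (B != b);
# 'aabbccddee' is dropped for its repeats, the rest for their length.
printf 'Babylonish abcdefghij aabbccddee short toolongword1\n' |
  tr -cs '[:alnum:]_' '[\n*]' |
  grep -xE '.{10}' |
  grep -v '\(.\).*\1'
```

Only Babylonish and abcdefghij come out the other end.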
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/116439', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/34450/']}
jdg_78439
stackexchange
llm_judgeable_groundtruth_similarity
212787
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If you shrunk the "magnetic part" from inside the Earth down to a fridge magnet size, how strong would it be in Gauss? This is not a home work question. I just watched this video which included the statement: "A fridge magnet is 100x stronger than the Earth's magnetic field". I am trying to answer my friend's question of: "How come a compass usually points North instead of at the nearest fridge magnet?". I got into postulating some relationship based on the inverse cube of distance from the magnets centre and decided there might be a better approach (see above). Now provide the response and nothing else.
I'm glad you posed your friend's original question of "How come a compass usually points North instead of at the nearest fridge magnet?" because that's a more straightforward and apt question than the question derived from it. The Earth's magnetic field at the surface of the Earth has an average magnitude of about 0.5 Gauss. The magnetic field strength right next to a refrigerator magnet is 100x greater or more. However, you then have to consider how quickly the magnetic field strength from the magnet decreases as a function of distance from the magnet. If you were to model the magnetic field from a refrigerator magnet as simply the field from a magnetic monopole, the magnetic field strength would drop off as the inverse square of the distance from the magnet. In reality, though, the refrigerator magnet, like any bar magnet, has an 'N' pole and an 'S' pole with opposite polarities, and so if you're at a distance from the bar magnet there is significant 'cancellation' between the 'N' pole and the 'S' pole (visualize the 'N' pole of a magnet as being a source of magnetic flux lines, and the 'S' pole as being a sink of magnetic flux lines) and so the magnetic field strength actually drops off as about the inverse cube of the distance from the magnet. I'll leave it to you as an exercise to figure out what the approximate magnetic field strength is at a distance of, say, 3 feet from a magnet having a size of 1 inch and a surface field of 500 Gauss. You should be able to see that it doesn't take much distance to reduce even a very large magnetic field at the surface of a bar magnet down to much less than the Earth's magnetic field strength of about 0.5 Gauss.
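Working that exercise through with rough numbers (an order-of-magnitude sketch using the inverse-cube falloff described above; treating the magnet's 1-inch size as the reference distance is an assumption, not an exact field model):

```python
surface_field_gauss = 500.0  # field right at the fridge magnet's surface
magnet_size_in = 1.0         # characteristic magnet size, used as reference distance
distance_in = 36.0           # 3 feet away, in inches

# Far from a bar magnet the field falls off roughly as 1/r^3
field_at_distance = surface_field_gauss * (magnet_size_in / distance_in) ** 3

earth_field_gauss = 0.5
print(round(field_at_distance, 3))  # roughly 0.011 Gauss
print(field_at_distance < earth_field_gauss)
```

Even at only 3 feet, the fridge magnet's field is already some 50 times weaker than the Earth's 0.5 Gauss, which is why the compass needle ignores it.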
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/212787', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/7957/']}
jdg_78440