stackexchange
llm_judgeable_groundtruth_similarity
160497
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am writing a script which needs to calculate the number of characters in a command's output in a single step . For example, using the command readlink -f /etc/fstab should return 10 because the output of that command is 10 characters long. This is already possible with stored variables using the following code: variable="somestring";echo ${#variable};# 10 Unfortunately, using the same formula with a command-generated string does not work: ${#(readlink -f /etc/fstab)};# bash: ${#(readlink -f /etc/fstab)}: bad substitution I understand it is possible to do this by first saving the output to a variable: variable=$(readlink -f /etc/fstab);echo ${#variable}; But I would like to remove the extra step. Is this possible? Compatibility with the Almquist shell (sh) using only in-built or standard utilities is preferable. Now provide the response and nothing else.
With GNU expr:

    $ expr length + "$(readlink -f /etc/fstab)"
    10

The + there is a special feature of GNU expr to make sure the next argument is treated as a string even if it happens to be an expr operator like match, length, +, ...

The above will strip any trailing newlines from the output. To work around it:

    $ expr length + "$(readlink -f /etc/fstab; printf .)" - 2
    10

We subtract 2 to account for the final newline of readlink and the . character we added.

With a Unicode string, expr does not seem to work, because it returns the length of the string in bytes instead of the number of characters (see line 654 of its source):

    $ LC_ALL=C.UTF-8 expr length ăaa
    4

So you can use instead:

    $ printf "ăaa" | LC_ALL=C.UTF-8 wc -m
    3

POSIXly:

    $ expr " $(readlink -f /etc/fstab; printf .)" : ".*" - 3
    10

The space before the command substitution prevents expr from crashing when the output starts with -, so we need to subtract 3 (the space, the trailing newline, and the .).
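The POSIX variant above can be wrapped in a small sh function (the name is hypothetical) so the script stays a one-liner at the call site; it assumes the command's output ends with exactly one newline, as readlink's does:

```shell
# output_length CMD [ARGS...]: character count of CMD's output,
# excluding its final trailing newline (assumed helper, POSIX sh).
output_length() {
    # Leading space guards against output starting with "-";
    # the trailing "." protects the final newline from $( ) stripping.
    set -- " $("$@"; printf .)"
    # Subtract 3: the added space, the trailing newline, and the "."
    expr "$1" : ".*" - 3
}

output_length readlink -f /etc/fstab   # prints 10 on a typical system
```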
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/160497', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/87443/']}
jdg_73341
stackexchange
llm_judgeable_groundtruth_similarity
2901198
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to ask regular ESS/R users what key bindings do they use frequently and tips on using ESS/R. Now provide the response and nothing else.
I have set several shortcuts in my .emacs file. The most useful are:

C-tab to switch between the R command line and the file (similar to Josh's answer, but much faster):

    (global-set-key [C-tab] 'other-window)

Control and up/down arrow keys to search history, matching what you've already typed:

    (define-key comint-mode-map [C-up] 'comint-previous-matching-input-from-input)
    (define-key comint-mode-map [C-down] 'comint-next-matching-input-from-input)

Comment/uncomment a selected region with C-d or C-S-d:

    (defun uncomment-region (beg end)
      "Like `comment-region' invoked with a C-u prefix arg."
      (interactive "r")
      (comment-region beg end -1))
    (define-key ess-mode-map (kbd "C-d") 'comment-region)
    (define-key ess-mode-map (kbd "C-S-d") 'uncomment-region)

Also, I've enabled CUA mode (from the Options menu) and reconfigured quite a lot of shortcuts to require only two keystrokes (instead of four in standard mode):

    ;; Delete selection when pressing [delete] key
    (delete-selection-mode t)

    ;; ESS Mode (.R file)
    (define-key ess-mode-map "\C-l" 'ess-eval-line-and-step)
    (define-key ess-mode-map "\C-p" 'ess-eval-function-or-paragraph-and-step)
    (define-key ess-mode-map "\C-r" 'ess-eval-region)

    ;; iESS Mode (R console)
    (define-key inferior-ess-mode-map "\C-u" 'comint-kill-input)
    (define-key inferior-ess-mode-map "\C-w" 'backward-kill-word)
    (define-key inferior-ess-mode-map "\C-a" 'comint-bol)
    (define-key inferior-ess-mode-map [home] 'comint-bol)

    ;; Comint Mode (R console as well)
    (define-key comint-mode-map "\C-e" 'comint-show-maximum-output)
    (define-key comint-mode-map "\C-r" 'comint-show-output)
    (define-key comint-mode-map "\C-o" 'comint-kill-output)

    ;; Search with C-f / C-S-f (Control-Shift-F for backward search)
    (global-set-key "\C-f" 'isearch-forward)
    (global-set-key (kbd "C-S-f") 'isearch-backward)
    (define-key isearch-mode-map "\C-f" 'isearch-repeat-forward)
    (define-key isearch-mode-map (kbd "C-S-f") 'isearch-repeat-backward)

    ;; Save with C-s / C-S-s
    (global-set-key (kbd "C-s") 'save-buffer)
    (global-set-key (kbd "C-S-s") 'write-file)
    ;; need to redefine them for isearch mode (don't know why)
    (define-key isearch-mode-map (kbd "C-s") 'save-buffer)
    (define-key isearch-mode-map (kbd "C-S-s") 'write-file)

    ;; Pause = dedicate window
    (defun toggle-current-window-dedication ()
      (interactive)
      (let* ((window (selected-window))
             (dedicated (window-dedicated-p window)))
        (set-window-dedicated-p window (not dedicated))
        (message "Window %sdedicated to %s"
                 (if dedicated "no longer " "")
                 (buffer-name))))
    (global-set-key [pause] 'toggle-current-window-dedication)

    ;; delete = delete
    (global-set-key [delete] 'delete-char)

    ;; C-b = list buffers
    (global-set-key (kbd "C-b") 'bs-show)

You will find many more useful shortcuts in the ESS documentation.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2901198', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/186891/']}
jdg_73342
stackexchange
llm_judgeable_groundtruth_similarity
45300301
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an image, over which I am putting some heading on the right and left side of the image like below <div class="container-fluid no-padding"> <div class="row"> <div class="col-md-12"> <div class="col-sm-3 text-center image-overlapping"> <h1>Hello Suarj</h1> Lorem ipsum dolor sit amet, consectetur adipisicing elit. Ullam omnis, ipsum eveniet minus amet. </div> <img src="http://oi65.tinypic.com/15oi0w9.jpg" class="ls-bg" alt="Slide background" style="width:100%; height:auto;"> <div class="col-sm-3 text-center image-overlapping right"> <h1>Hello Suarj</h1> Lorem ipsum dolor sit amet, consectetur adipisicing elit. Ullam omnis, ipsum eveniet minus amet. </div> </div></div> This works well for me, but it is not responsive for the mobile. How do I go about it. Here's the fiddle for the same. Now provide the response and nothing else.
"what keyword type means when I write the following"

type is a member defined on every object in Scala. Suppose you have a class and its companion object:

    class Foo(a: String)

    object Foo {
      def apply(a: String) = new Foo(a)
    }

Now suppose you want to write a method that accepts the object Foo as an input. What's its type? If you write this:

    def someMethod(fooObj: Foo) = fooObj.apply("x") // doesn't compile

it's not going to compile. Foo refers to the type of an instance of the class (i.e. the one returned by new Foo("x") or Foo("x")). That's why objects have a type member that can refer to their own type:

    def someMethod(fooObj: Foo.type) = fooObj.apply("x") // compiles!

In your specific example, List.type is the type of the companion object of List. Here's a couple of examples that I hope will clarify what it means:

    val listObj: List.type = List
    val anEmptyList: List[Int] = listObj.empty[Int]      // List()
    val aListOfIntegers: List[Int] = listObj.range(1, 4) // List(1, 2, 3)

"And how this type related to type that can be defined as a field in a trait for example."

The type keyword defines a type member. .type is a type member. Conceptually it's like every object in Scala has a type member named type, like this:

    object Foo {
      type type = Foo
    }

Obviously this won't compile, but it gives you an idea of what it may look like.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45300301', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6443675/']}
jdg_73343
stackexchange
llm_judgeable_groundtruth_similarity
43612046
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is my actual array : let mainArray= [ { value: '/AG_TURF-123', label: 'Ag & Turf', checked: false, id:123, children: [ { value: '/AG_TURF-123/TRACTOR-456', label: 'Tractors', checked: false, id:456, children: [ { value: '/AG_TURF-123/TRACTOR-456/Large-7-8-9-series', label: 'Large (7, 8, 9) Series', checked: false, id:789, children: [{ value: '/AG_TURF-123/TRACTOR-456/Large-7-8-9-series/7-family-tractor', label: '7 Family Tractor', checked: false, id:101112 },{ value: '/AG_TURF-123/TRACTOR-456/Large-7-8-9-series/8-family-tractor', label: '8 Family Tractor', checked: false, id:131415 },{ value: '/AG_TURF-123/TRACTOR-456/Large-7-8-9-series/9-family-tractor', label: '9 Family Tractor', checked: false, id:161718 }], }, { value: '/app/Http/routes.js', label: 'routes.js', checked: false, id:181920 }, ], }, { value: '/app/Providers', label: 'Providers', checked: false, id:212223, children: [{ value: '/app/Http/Providers/EventServiceProvider.js', label: 'EventServiceProvider.js', checked: false, id:242526 }], }, ], }, { value: '/config', label: 'config', checked: false, id:272829, children: [ { value: '/config/app.js', label: 'app.js', checked: false, id:303132 }, { value: '/config/database.js', label: 'database.js', checked: false, id:333435 }, ], }, { value: '/public', label: 'public', checked: false, id:353637, children: [ { value: '/public/assets/', label: 'assets', checked: false, id:383940, children: [{ value: '/public/assets/style.css', label: 'style.css',checked: false, id:404142 }], }, { value: '/public/index.html', label: 'index.html', checked: false, id: 434445 }, ], }, { value: '/.env', label: '.env', checked: false, id: 464748 }, { value: '/.gitignore', label: '.gitignore', checked: false, id: 495051 }, { value: '/README.md', label: 'README.md', checked: false, id: 525354 },]; This is my list of value : const 
ids=[525354,123,131415]; I want to set checked to true if id matched.My main array may go to 6 to 7 step deep. What I've done till yet : setCheckedFlagToItems(checkList, items) { return items.map((item) => { const node = item; if (checkList.indexOf(node.id) !== -1) { node.checked = true; } if ((node.children) && (Array.isArray(node.children) && node.children.length > 0)) { this.setCheckedFlagToItems(checkList, node.children); } return node; }, this); } But it not working as usual. Now provide the response and nothing else.
You could use an iterative and recursive approach with a named function as the callback for Array#forEach.

    let mainArray = [{
            value: '/AG_TURF-123', label: 'Ag & Turf', checked: false, id: 123,
            children: [{
                value: '/AG_TURF-123/TRACTOR-456', label: 'Tractors', checked: false, id: 456,
                children: [{
                    value: '/AG_TURF-123/TRACTOR-456/Large-7-8-9-series', label: 'Large (7, 8, 9) Series', checked: false, id: 789,
                    children: [
                        { value: '/AG_TURF-123/TRACTOR-456/Large-7-8-9-series/7-family-tractor', label: '7 Family Tractor', checked: false, id: 101112 },
                        { value: '/AG_TURF-123/TRACTOR-456/Large-7-8-9-series/8-family-tractor', label: '8 Family Tractor', checked: false, id: 131415 },
                        { value: '/AG_TURF-123/TRACTOR-456/Large-7-8-9-series/9-family-tractor', label: '9 Family Tractor', checked: false, id: 161718 }
                    ]
                }, { value: '/app/Http/routes.js', label: 'routes.js', checked: false, id: 181920 }]
            }, {
                value: '/app/Providers', label: 'Providers', checked: false, id: 212223,
                children: [{ value: '/app/Http/Providers/EventServiceProvider.js', label: 'EventServiceProvider.js', checked: false, id: 242526 }]
            }]
        }, {
            value: '/config', label: 'config', checked: false, id: 272829,
            children: [
                { value: '/config/app.js', label: 'app.js', checked: false, id: 303132 },
                { value: '/config/database.js', label: 'database.js', checked: false, id: 333435 }
            ]
        }, {
            value: '/public', label: 'public', checked: false, id: 353637,
            children: [{
                value: '/public/assets/', label: 'assets', checked: false, id: 383940,
                children: [{ value: '/public/assets/style.css', label: 'style.css', checked: false, id: 404142 }]
            }, { value: '/public/index.html', label: 'index.html', checked: false, id: 434445 }]
        },
        { value: '/.env', label: '.env', checked: false, id: 464748 },
        { value: '/.gitignore', label: '.gitignore', checked: false, id: 495051 },
        { value: '/README.md', label: 'README.md', checked: false, id: 525354 }
    ],
    ids = [525354, 123, 131415];

    mainArray.forEach(function iter(a) {
        if (ids.includes(a.id)) {
            a.checked = true;
        }
        Array.isArray(a.children) && a.children.forEach(iter);
    });

    console.log(mainArray);
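As an aside: if the id list grows, a Set makes each membership test O(1) instead of the O(n) of Array#includes. A minimal sketch of the same traversal (the tree here is a tiny stand-in for mainArray, not the real data):

```javascript
const idSet = new Set([525354, 123, 131415]);

// Recursively mark every node whose id is in the set.
function markChecked(nodes) {
  nodes.forEach(function iter(node) {
    if (idSet.has(node.id)) {
      node.checked = true;
    }
    if (Array.isArray(node.children)) {
      node.children.forEach(iter);
    }
  });
}

// Tiny stand-in for mainArray:
const tree = [
  { id: 123, checked: false, children: [{ id: 456, checked: false }] },
  { id: 525354, checked: false },
];
markChecked(tree);
// tree[0] and tree[1] are now checked; the child with id 456 is not.
```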
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43612046', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2833769/']}
jdg_73344
stackexchange
llm_judgeable_groundtruth_similarity
14868989
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a submit button with a onClick view event. This event checks a flag and depending upon the condition it will allow form submission. I'd like the submit action on the controller to be called. What is the best way to do this? Now provide the response and nothing else.
Here is another solution, based on the example by albertjan, for the case where you have to perform some logic in your view and afterwards delegate to your controller. This is the way I understood your question:

HBS:

    <script type="text/x-handlebars" data-template-name="index">
      <button {{action submit target="view"}}>Submit</button>
    </script>

View:

    App.ThingView = Ember.View.extend({
      submit: function() {
        // do the view part of your logic
        var object; // = ... whatever you may need
        // you do not have to send object, if you do not need to
        this.get("controller").send("submitInController", object);
      }
    });

Controller:

    App.ThingController = Em.ObjectController.extend({
      submitInController: function(model) {
        // do the controller part of your logic
      }
    });

Note: the call from your view will also bubble up to your current route. So this is basically the same code that Ember executes when using the action helper.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14868989', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/366782/']}
jdg_73345
stackexchange
llm_judgeable_groundtruth_similarity
15153048
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using django-rest-framework (latest) for REST API, and implemented few test cases in django using built in test client. following django test case was working fine with django version < 1.5 self.client.put('/core/accounts/%s/'% self.account.id, data = prepare_dict(self.account), HTTP_AUTHORIZATION=self.token) upgraded to django 1.5, all tests are passing except tests related to HTTP PUT while looking into the issue found this @ https://docs.djangoproject.com/en/dev/releases/1.5/#options-put-and-delete-requests-in-the-test-client If you were using the data parameter in a PUT request without a content_type, you must encode your data before passing it to the test client and set the content_type argument. So, updated my test to reflect this change and tried following, but still getting http 415 instead of http 200 from django.test.client import MULTIPART_CONTENT, BOUNDARY, encode_multipartself.client.put('/core/accounts/%s/'% self.account.id, data = encode_multipart(BOUNDARY, prepare_dict(self.account)), content_type=MULTIPART_CONTENT, HTTP_AUTHORIZATION=self.token) Any idea what I'm missing?P.S: All functionality is working fine from django-rest-framework built-in web UI Now provide the response and nothing else.
You're absolutely on the right track - the breaking test in that case is certainly due to Django's change in PUT behavior for the test client. Your fix looks right to me, too.

415 is the "Unsupported Media Type" response, which means that the request content type wasn't something that could be handled by any of the parsers configured for the view. Normally in a case like this, that'd be due to forgetting to set the content type of the request, but it looks like you've got that correctly set to multipart/form-data; boundary=...

Things to double-check:

- Exactly what does response.data display as the error details?
- What do you have configured in your DEFAULT_PARSER_CLASSES setting, if you have one, or what do you have set on the view attribute parser_classes, if it has one?
- Make sure there's not a typo in content_type in the test (even though it's correct here).

Edit: Thanks for your comments - that clears everything up. You've only got the JSON parser installed, but you're trying to send form-encoded data. You should either:

- Add FormParser and MultiPartParser to your settings/view, so that it supports form encodings. (Note also that the default DEFAULT_PARSER_CLASSES setting does include them, so if you don't set anything at all it'll work as expected.)

Or

- Encode the request using JSON encoding, not form encoding:

      data=json.dumps(prepare_dict(self.account)), content_type='application/json'

  in your test case.
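For reference, the JSON-encoded variant of the test might look like the sketch below (the URL, payload and token are hypothetical placeholders, not from the original question):

```python
import json

payload = {"name": "example-account", "active": True}
body = json.dumps(payload)  # encode by hand; the test client won't do this for PUT

# Hypothetical test-case call:
# response = self.client.put(
#     '/core/accounts/1/',
#     data=body,
#     content_type='application/json',
#     HTTP_AUTHORIZATION=token,
# )
# self.assertEqual(response.status_code, 200)

# DRF's JSONParser recovers the original structure on the server side:
assert json.loads(body) == payload
```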
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15153048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/374166/']}
jdg_73346
stackexchange
llm_judgeable_groundtruth_similarity
3779605
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to load a random document out of a set of documents stored in a CouchDB database. The method for picking and loading the document should conform to the following requirements: Efficiency: The lookup of the document should be efficient, most importantly the time to load the document must not grow linearly with the total number of documents. This means the skip query argument cannot be used. Uniform distribution: The choice should be truly random (as far as possible, using standard random number generators), every document should have equal chances of being chosen. What is the best way to implement this in CouchDB? Now provide the response and nothing else.
After giving this some more thought, I came up with a solution. For completeness' sake, I will first show two simple approaches and explain why they are flawed. The third solution is the one I'm going with.

Approach 1: Skip

This is the trivial solution: You have a simple view (let's call it random) with a map function that emits all documents you want to choose from, and the built-in _count reduce function. To pick a random document, follow these steps:

1. Find the total number of documents N in the view by calling: http://localhost:5984/db/_design/d/_view/random
2. Pick a random number 0 <= i < N
3. Load the i'th document: http://localhost:5984/db/_design/d/_view/random?reduce=false&skip=i&limit=1

This approach is bad because it doesn't scale well for large numbers of documents. According to this section of "CouchDB - The Definitive Guide", the skip argument should only be used with single-digit values. The solution above would have to loop through i documents before returning the chosen one. In SQL terms, it's the equivalent of a full table scan as opposed to an index lookup.

Approach 2: Random Number in Document

With this approach, a random number is generated for each document at creation time and stored in the document. An example document:

    {
      _id: "4f12782c39474fd0a498126c0400708c",
      rand: 0.4591819887660398,
      // actual data...
    }

The random view then has the following map function:

    function(doc) {
      if (doc.rand) {
        emit(doc.rand, doc);
      }
    }

These are the steps to pick a random document:

1. Pick a random number 0 <= r < 1
2. Load the document: http://localhost:5984/db/_design/d/_view/random?startkey=r&limit=1
3. If no document is returned (because r is larger than the largest random number stored in the database), wrap around and load the first document.

This is very fast and looks great at first sight. However, there's a serious flaw: not all documents have the same chance of being picked.

In the most simple example, there are two documents in the database. When I choose a random document a very large number of times, I want each document to come up half of the time. Let's say the documents were assigned the random numbers 0.2 and 0.9 at creation time. So document A is picked when (r <= 0.2) or (r > 0.9), and document B is chosen when 0.2 < r <= 0.9. The chance of being picked is not 50% for each document, but 30% for A and 70% for B.

You might think the situation improves when there are more documents in the database, but it really doesn't. The intervals between documents get smaller, but the variation in interval size gets even worse: Imagine three documents A, B and C with the random numbers 0.30001057, 0.30002057 and 0.30002058 (no other documents in between). The chances of B being chosen are 1000 times greater than those of C. In the worst case, two documents are assigned the same random number. Then only one of them can be found (the one with the lower document id); the other is essentially invisible.

Approach 3: A combination of 1 and 2

The solution I came up with combines the speed of approach 2 with the fairness of approach 1. Here it is:

As in approach 2, each document is assigned a random number at creation time, and the same map function is used for the view. As in approach 1, I also have a _count reduce function. These are the steps for loading a random document:

1. Find the total number of documents N in the view by calling: http://localhost:5984/db/_design/d/_view/random
2. Pick a random number 0 <= r < 1
3. Calculate the random index: i = floor(r*N)

My goal is to load the i'th document (as in approach 1). Assuming the distribution of random numbers is more or less uniform, I'm guessing the i'th document has a random value of approximately r.

4. Find the number of documents L with a random value lower than r: http://localhost:5984/db/_design/d/_view/random?endkey=r
5. See how far off our guess is: s = i - L
6. Fetch the document, skipping by the amount we missed:

   if (s >= 0): http://localhost:5984/db/_design/d/_view/random?startkey=r&skip=s&limit=1&reduce=false

   if (s < 0): http://localhost:5984/db/_design/d/_view/random?startkey=r&skip=-(s+1)&limit=1&descending=true&reduce=false

So, the trick is to guess the random number assigned to the i'th document, look that up, see how far we're off, and then skip the number of documents by which we missed. The number of documents skipped should remain small even for large databases, since the accuracy of the guess increases with the number of documents. My guess is that s remains constant when the database grows, but I haven't tried it and I don't feel qualified to prove it theoretically. If you have a better solution, I'd be very interested!
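The skip estimate in the steps above can be sanity-checked with a small Python simulation (no CouchDB involved; a sorted list of per-document random keys stands in for the view, and bisect plays the role of the ?endkey=r count query):

```python
import bisect
import random

def pick_random_index(keys):
    """Approach 3 over sorted per-document random keys: returns (index, skip)."""
    n = len(keys)
    r = random.random()
    i = int(r * n)                            # target index, floor(r * N)
    count_below = bisect.bisect_left(keys, r) # the "?endkey=r" count, L
    s = i - count_below                       # how far the guess was off
    # Whether the fetch then goes ascending (s >= 0) or descending (s < 0),
    # the document it lands on is index count_below + s == i.
    return count_below + s, s

random.seed(42)
keys = sorted(random.random() for _ in range(10_000))
picks = [pick_random_index(keys) for _ in range(1_000)]
assert all(0 <= idx < len(keys) for idx, _ in picks)
print(max(abs(s) for _, s in picks))  # |s| stays small relative to N
```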
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3779605', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5035/']}
jdg_73347
stackexchange
llm_judgeable_groundtruth_similarity
6534602
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to add "123" and zeros for any string - but the resulting string must be exactly 12 characters long. For example: 28431 = 123000028431987 = 1230000009872 = 123000000002 How to do this in C#? Now provide the response and nothing else.
Well, you could use: string result = "123" + text.PadLeft(9, '0'); In other words, split the task in half - one part generating the "000028431", "000000987" etc part using string.PadLeft , and the other prefixing the result with "123" using simple string concatenation. There are no doubt more efficient approaches, but this is what I'd do unless I had a good reason to believe that efficiency was really important for this task.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6534602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/569466/']}
jdg_73348
stackexchange
llm_judgeable_groundtruth_similarity
34027711
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My application currently passes data to my form type using the constructor, as recommended in this answer . However the Symfony 2.8 upgrade guide advises that passing a type instance to the createForm function is deprecated: Passing type instances to Form::add(), FormBuilder::add() and the FormFactory::create*() methods is deprecated and will not be supported anymore in Symfony 3.0. Pass the fully-qualified class name of the type instead. Before: $form = $this->createForm(new MyType());After:$form = $this->createForm(MyType::class); Seeing as I can't pass data through with the fully-qualified class name, is there an alternative? Now provide the response and nothing else.
This broke some of our forms as well. I fixed it by passing the custom data through the options resolver.

In your form type:

    public function buildForm(FormBuilderInterface $builder, array $options)
    {
        $this->traitChoices = $options['trait_choices'];

        $builder
            ...
            ->add('figure_type', ChoiceType::class, [
                'choices' => $this->traitChoices,
            ])
            ...
        ;
    }

    public function configureOptions(OptionsResolver $resolver)
    {
        $resolver->setDefaults([
            'trait_choices' => null,
        ]);
    }

Then when you create the form in your controller, pass it in as an option instead of in the constructor:

    $form = $this->createForm(ProfileEditType::class, $profile, [
        'trait_choices' => $traitChoices,
    ]);
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/34027711', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/526495/']}
jdg_73349
stackexchange
llm_judgeable_groundtruth_similarity
11336477
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here is a minimal example outlining my problem test.c: #include <stdio.h>#include <math.h>main (){ fmod ( 3, 2 );} And here is the command I am issuing to compile test.c gcc -lm test.c -o test And here is the output I get when I issue the above command /tmp/ccQmRk99.o: In function `main':test.c:(.text+0x3e): undefined reference to `fmod'collect2: ld returned 1 exit status I get the same output if instead I use cc . I am using the following version of gcc gcc-4.6.real (Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1 Any ideas why my program won't compile? Now provide the response and nothing else.
The problem is coming from the linker, ld, rather than gcc (hence the exit status message). In general, ld requires objects and libraries to be specified in the order user, supplier - where user is an object that uses a library function and supplier is the object which provides it.

When your test.c is compiled to an object, the compiler states that fmod is an undefined reference:

    $ gcc -c test.c
    $ nm test.o
                     U fmod
    0000000000000000 T main

(nm lists all the symbols referred to by an object file.)

The linker changes the undefined references to defined ones, looking them up to see if they are supplied by other files:

    $ gcc -lm test.o
    $ nm a.out
    0000000000600e30 d _DYNAMIC
    0000000000600fe8 d _GLOBAL_OFFSET_TABLE_
    00000000004006a8 R _IO_stdin_used
                     w _Jv_RegisterClasses
    0000000000600e10 d __CTOR_END__
    ...
    0000000000601018 D __dso_handle
                     w __gmon_start__
    ...
                     U __libc_start_main@@GLIBC_2.2.5
    0000000000601020 A _edata
    0000000000601030 A _end
    0000000000400698 T _fini
    0000000000400448 T _init
    0000000000400490 T _start
    00000000004004bc t call_gmon_start
    0000000000601020 b completed.7382
    0000000000601010 W data_start
    0000000000601028 b dtor_idx.7384
                     U fmod@@GLIBC_2.2.5
    0000000000400550 t frame_dummy
    0000000000400574 T main

Most of these refer to libc functions that are run before and after main to set the environment up. You can see that fmod now points to glibc, where it will be resolved by the shared library system. My system is set up to use shared libraries by default. If I instead force static linking, I get the order dependency you see:

    $ gcc -static -lm test.o
    test.o: In function `main':
    test.c:(.text+0x40): undefined reference to `fmod'
    collect2: ld returned 1 exit status

Putting -lm later in the linker command, after test.o, allows it to link successfully. Checking the symbols, fmod should now be resolved to an actual address, and indeed it is:

    $ gcc -static test.o -lm
    $ nm a.out | grep fmod
    0000000000400480 T __fmod
    0000000000402b80 T __ieee754_fmod
    0000000000400480 W fmod
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11336477', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/654789/']}
jdg_73350
stackexchange
llm_judgeable_groundtruth_similarity
47449
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm graphing a Markov process mp = DiscreteMarkovProcess[{1, 0, 0}, ({ {0.6, 0.1, 0.3}, {0.2, 0.7, 0.1}, {0.3, 0.3, 0.4} })]; and would like to have arrows whose thicknesses corresponds to the transition probabilities, with arrowheads of a different color in the exact center of each edge. But all my attempts end up a mess. g = Graph[mp];Scan[(PropertyValue[{g, #}, EdgeLabels] = PropertyValue[{g, #}, "Probability"]) &, EdgeList[g]];Scan[(PropertyValue[{g, #}, EdgeStyle] = Directive[Arrowheads[{{.045, .575}}], GrayLevel[.7], Thickness[PropertyValue[{g, #}, "Probability"]/20]]) &, EdgeList[g]];g The thick edges leave gaps between their ends and the nodes of the graph, and I can't figure out how to change the color of the arrow heads so that they stand out against the color of the edges. How can I change the color of the arrowheads in my figure. How can I avoid the gaps that appear between nodes and the ends of the edges? Now provide the response and nothing else.
Using an EdgeShapeFunction seems to do what you want. Adapting from the examples in the help:

    ef[pts_List, e_] := {Arrowheads[{{0.1, 0.5,
         Graphics@{Red, Arrowheads[0.5], Arrow[{{0, 0}, {0.5, 0}}]}}}],
       Arrow[pts]}

    g = Graph[mp];
    Scan[(PropertyValue[{g, #}, EdgeLabels] =
        PropertyValue[{g, #}, "Probability"]) &, EdgeList[g]];
    Scan[(PropertyValue[{g, #}, EdgeStyle] =
        Directive[GrayLevel[.7],
         Thickness[PropertyValue[{g, #}, "Probability"]/20]]) &, EdgeList[g]];
    Scan[(PropertyValue[{g, #}, EdgeShapeFunction] = ef) &, EdgeList[g]];
    g

It's a bit ugly, with mysterious red dots within the arrowheads. But this only reflects how little time I've put into it. With some competence and patience, I suspect it could do what you want.

Edit: Something nicer:

    ef[pts_List, e_] := {Arrowheads[{{0.02, 0.65,
         Graphics@{Red, EdgeForm[Gray],
           Polygon[{{-1.5, -1}, {1.5, 0}, {-1.5, 1}}]}}}], Arrow[pts]}
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/47449', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/37/']}
jdg_73351
stackexchange
llm_judgeable_groundtruth_similarity
234475
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose that I have a Head A that appears pretty often in my code and, in order to make my outputs look nicer, I define Format[A[x_]]:=Subscript[A,x]; Now if $A_1$ appears in the output and I try to copy and paste it, the result of the paste is Subscript[A,1] rather than A[1] . I would like to find a way such that the data copied to clipboard is not the representation of the expression, but rather its underlying structure. Just like when, for example, I assign it to a variable. Now provide the response and nothing else.
Graphics`Mesh`FindIntersections

    cp = ContourPlot[Evaluate[{eq1, eq2}], {x, -60, 60}, {y, -60, 60}];
    intersections = Graphics`Mesh`FindIntersections[cp,
      Graphics`Mesh`AllPoints -> False]

    {{-14.3143, -11.0858}, {-14.3143, 11.0858}, {14.3143, -11.0858}, {14.3143, 11.0858}}

    Show[cp, Graphics[{Red, PointSize[Large], Point @ intersections}]]

MeshFunctions + Mesh + MeshStyle

    ContourPlot[Evaluate[{eq1, eq2}], {x, -60, 60}, {y, -60, 60},
     MeshFunctions -> Function[{x, y}, Evaluate[eq1[[1]] - eq2[[1]]]],
     Mesh -> {{0}}, MeshStyle -> Directive[Red, PointSize[Large]]]

ImplicitRegion + Region + Show

    {ir1, ir2, ir12} = ImplicitRegion[#, {{x, -60, 60}, {y, -60, 60}}] & /@
      {eq1, eq2, And[eq1, eq2]};

    Show[MapThread[Region[#, BaseStyle -> #2] &][{{ir1, ir2, ir12},
       {Blue, Orange, Directive[Red, PointSize[Large]]}}],
     Frame -> True, AspectRatio -> 1]

ImplicitRegion + RegionPlot

    RegionPlot[{ir1, ir2, ir12}, BaseStyle -> PointSize[Large]] /. p_Point :> {Red, p}
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/234475', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/55861/']}
jdg_73352
stackexchange
llm_judgeable_groundtruth_similarity
16107680
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Python, for a dictionary d , d.setdefault('key', value) sets d['key'] = value if 'key' was not in d , and otherwise leaves it as it is. Is there a clean, idiomatic way to do this on a Javascript object, or does it require an if statement? Now provide the response and nothing else.
It's basically like using an if statement, but shorter:

    d.key || (d.key = value);

Or

    d.key = d.key || value;

Update: as @bobtato noted, if the property is already set to the value false it would overwrite it, so a better way would be:

    !d.key && d.key !== false && (d.key = value);

Or, to do it as he suggested (just the shorthanded version):

    'key' in d || (d.key = value);
    // including overwriting null values:
    ('key' in d && d.key !== null) || (d.key = value);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16107680', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/172999/']}
jdg_73353
stackexchange
llm_judgeable_groundtruth_similarity
11249
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: By Fermat's little theorem we know that $$b^{p-1}=1 \mod p$$ if p is prime and $\gcd(b,p)=1$. On the other hand, I was wondering whether $$b^{n-1}=-1 \mod n$$ can occur at all? Update: sorry, I meant n odd. Please excuse. Now provide the response and nothing else.
There are no solutions to $b^{n-1}\equiv-1\pmod n$ with $n$ odd. Let $n>1$ be odd. Every prime dividing $n$ can be written as $2^km+1$ for some positive $k$ and some odd integer $m$. Among those primes, let $p$ have the minimal value of $k$. Then $n-1=2^kr$ for some integer $r$. If $b^{n-1}\equiv-1\pmod n$ then $b^{n-1}\equiv-1\pmod p$ so $b^{(n-1)m}\equiv(-1)^m\equiv-1\pmod p$ and $\gcd(b,p)=1$. But $b^{(n-1)m}=b^{2^kmr}=b^{(p-1)r}\equiv1\pmod p$ by little Fermat. Contradiction, QED.
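For readers who want to see the theorem in action, here is a small brute-force check (my addition, not part of the original answer). It confirms numerically that no odd modulus admits a base $b$ with $b^{n-1}\equiv-1\pmod n$, while even moduli can:

```python
def has_minus_one_witness(n):
    # True if some base b satisfies b^(n-1) ≡ -1 (mod n); note -1 ≡ n-1 (mod n)
    return any(pow(b, n - 1, n) == n - 1 for b in range(2, n))

# Per the proof above, no odd n > 1 admits such a base...
odd_hits = [n for n in range(3, 500, 2) if has_minus_one_witness(n)]

# ...but even n can, e.g. n = 4 with b = 3: 3^3 = 27 ≡ -1 (mod 4)
even_hits = [n for n in range(4, 100, 2) if has_minus_one_witness(n)]
```

The empty `odd_hits` list is of course only evidence for the range searched; the proof above is what settles it for all odd $n$.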
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/11249', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3032/']}
jdg_73354
stackexchange
llm_judgeable_groundtruth_similarity
90737
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to make a project at differential geometry about the Hairy Ball theorem and its applications. I was thinking of including a proof of the theorem in the project. Using the Poincare-Hopf Theorem seems easy enough, but I was thinking that this proves the desired result using a stronger theorem (just like proving Liouville's Theorem in complex analysis using Picard's theorem). Is there a simple proof of the fact that there is no continuous non-zero vector field on the even dimensional sphere? It is good enough if the proof works only for $S^2$, because that is the case I will be focusing on in the applications. Now provide the response and nothing else.
The simplest I can remember off the top of my head is this: Assume there is such a vector field. Let $v_x$ denote the vector at the point $x$. Now, define the homotopy $H: S^2\times [0, 1] \rightarrow S^2$ by the following: $H(x, t)$ is the point $t\pi$ radians away from $x$ along the great circle defined by $v_x$. This gives a homotopy between the identity and the antipodal map on $S^2$, which is impossible, since the antipodal map has degree $-1$. Hence there can be no such vector field.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/90737', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/7327/']}
jdg_73355
stackexchange
llm_judgeable_groundtruth_similarity
3559883
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hello, I am trying what I thought would be a rather easy regex in JavaScript, but it is giving me lots of trouble. I want the ability to split a date via JavaScript, splitting on any of '-', '.', '/' and ' '.

    var date = "02-25-2010";
    var myregexp2 = new RegExp("-.");
    dateArray = date.split(myregexp2);

What is the correct regex for this? Any and all help would be great. Now provide the response and nothing else.
You need the put the characters you wish to split on in a character class , which tells the regular expression engine "any of these characters is a match". For your purposes, this would look like: date.split(/[.,\/ -]/) Although dashes have special meaning in character classes as a range specifier (ie [a-z] means the same as [abcdefghijklmnopqrstuvwxyz] ), if you put it as the last thing in the class it is taken to mean a literal dash and does not need to be escaped. To explain why your pattern didn't work, /-./ tells the regular expression engine to match a literal dash character followed by any character ( dots are wildcard characters in regular expressions). With "02-25-2010", it would split each time "-2" is encountered, because the dash matches and the dot matches "2".
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/3559883', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/362969/']}
jdg_73356
stackexchange
llm_judgeable_groundtruth_similarity
14723381
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This may be a very basic question for some. I was trying to understand how strcpy actually works behind the scenes. For example, in this code:

    #include <stdio.h>
    #include <string.h>

    int main ()
    {
        char s[6] = "Hello";
        char a[20] = "world isnsadsdas";
        strcpy(s, a);
        printf("%s\n", s);
        printf("%d\n", sizeof(s));
        return 0;
    }

As I am declaring s to be a static array with size less than that of the source, I thought it wouldn't print the whole word, but it did print "world isnsadsdas". So, I thought that this strcpy function might be allocating new space if the destination is smaller than the source. But when I check sizeof(s), it is still 6, yet it is printing out more than that. How is that actually working? Now provide the response and nothing else.
You've just caused undefined behaviour, so anything can happen. In your case, you're getting lucky and it's not crashing, but you shouldn't rely on that happening. Here's a simplified strcpy implementation (but it's not too far off from many real ones):

    char *strcpy(char *d, const char *s)
    {
        char *saved = d;
        while (*s)
        {
            *d++ = *s++;
        }
        *d = 0;
        return saved;
    }

sizeof is just returning you the size of your array from compile time. If you use strlen, I think you'll see what you expect. But as I mentioned above, relying on undefined behaviour is a bad idea.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14723381', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1319819/']}
jdg_73357
stackexchange
llm_judgeable_groundtruth_similarity
182247
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was wondering about this. I know it is possible to visualize the quotient group $\mathbb{R}/\mathbb{Z}$ as a circle, and if you consider these as "topological groups", then this group (not topological) quotient is topologically equivalent to a circle. But then, what does $\mathbb{R}/\mathbb{Q}$ look like? Now provide the response and nothing else.
So, you say that the group (not topological) quotient $\mathbb{R}/\mathbb{Z}$ is topologically equivalent (i.e., homeomorphic) to the circle. However, this doesn't make any sense unless you have a topology on $\mathbb{R}/\mathbb{Z}$! More to the point, a topological group like $\mathbb{R}$ has both a topological structure and a group structure. Now, when you form the group quotient $\mathbb{R}/\mathbb{Z}$, it can be given a topological structure in a natural way, in particular, via the quotient topology. Notice that when we do this we again get a topological group (i.e., the quotient group operations are continuous with respect to the quotient topology). Furthermore, the quotient $\mathbb{R}/\mathbb{Z}$ (as a topological space) is homeomorphic to the circle. Now, in the case of your question, the quotient topology on $\mathbb{R}/\mathbb{Q}$ is the trivial topology. This is not hard to prove, since preimages of open sets must be open and saturated. Thus if such a preimage is nonempty, it contains an open interval, and since it is saturated, it must contain all real numbers which differ by a rational from a point in this interval. It is then easy to see that this set must be all of $\mathbb{R}$. Thus the only saturated open sets of $\mathbb{R}$ are $\emptyset$ and $\mathbb{R}$ itself. Hence the quotient topology is trivial. Furthermore, it is trivial that any map into a space with the trivial topology is continuous, so the quotient group operations on $\mathbb{R}/\mathbb{Q}$ are again continuous. So we again have a topological group, albeit not a very interesting one because it isn't very interesting as a topological space. As far as what this space "looks" like, it is similar to a one point space for the reason Ricky mentioned in the comments. However, it is not really easy to visualize since it is not homeomorphic to any subspace of $\mathbb{R}^n$ equipped with the subspace topology (because it is not Hausdorff, or any one of a number of other reasons).
Edit: I should have added that whenever you have a topological group and form the quotient in the way we did above the result is always a topological group. However, unless the original normal subgroup is closed, the resulting quotient group will not even be $T_0$ as a topological space. Thus it is only really interesting to form the quotient when the set by which you quotient out is closed. This explains why $\mathbb{R}/\mathbb{Z}$ is interesting as a topological group, but $\mathbb{R}/\mathbb{Q}$ is not.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/182247', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/11172/']}
jdg_73358
stackexchange
llm_judgeable_groundtruth_similarity
497317
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using SQL Server 2005, I want to find out what all the grants are on a specific database for all tables. It would also help to find out all tables where the delete grant has been given for a specific user. Note: this may be similar to this question , but I could not get the selected answer's solution working (if someone could provide a better example of how to use that, it would help as well) Now provide the response and nothing else.
The given solution does not cover where the permission is granted against the schema or the database itself, which do grant permissions against the tables as well. This will give you those situations, too. You can use a WHERE clause against permission_name to restrict to just DELETE.

    SELECT class_desc,
           CASE WHEN class = 0 THEN DB_NAME()
                WHEN class = 1 THEN OBJECT_NAME(major_id)
                WHEN class = 3 THEN SCHEMA_NAME(major_id)
           END [Securable],
           USER_NAME(grantee_principal_id) [User],
           permission_name,
           state_desc
    FROM sys.database_permissions

Also, db_datawriter would need to be checked for membership because it gives implicit INSERT, UPDATE, and DELETE rights, meaning you won't see it show up in the permission DMVs or their derivatives.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/497317', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/43472/']}
jdg_73359
stackexchange
llm_judgeable_groundtruth_similarity
600369
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How do I know if my logistic model's numeric variables violate the linearity assumption on the log(target variable)? I am using the code found at THIS LINK . They graphed on the same data but I, personally, only used my training data. The variable age and pedigree is not linear and might need some transformations. I'm ready to do so, but I'd first like to know why it is that they consider DiabetesPedigreeFunction & age as non-linear? What shape would you say qualifies the variable as linear? Now provide the response and nothing else.
As is often the case, the issue is one of defining your features. If you use the digits of the binary expansion of the number as features, your classifiers should have no problem picking up the fact that only one feature perfectly separates the target classes. Here I construct an example where the training data is 500 integers randomly selected from $\{1,2, \dots, 1000\}$ and use linear regression to classify:

    y <- sample(1:1000, 500, replace=FALSE)
    x <- t(matrix(as.integer(intToBits(y)), 32))
    x <- x[, 1:10]  # as the numbers < 1001 don't use any higher bits
    tgt <- y %% 2
    train_data <- data.frame(cbind(tgt, x))
    m1 <- lm(tgt ~ ., data=train_data)

    y_pred <- sample(1:1000, 10, replace=FALSE)
    x_pred <- t(matrix(as.integer(intToBits(y_pred)), 32))[, 1:10]
    test_data <- data.frame(x_pred)
    test_results <- data.frame(actual = y_pred, odd_even = predict(m1, test_data))

    # Clean up the results, as floating point math doesn't always give exact integers
    test_results$cleaned_odd_even <- ifelse(test_results$odd_even < 1e-10, 0, 1)

and the results:

    > test_results
       actual      odd_even cleaned_odd_even
    1     727  1.000000e+00                1
    2     544 -1.311871e-15                0
    3     689  1.000000e+00                1
    4     647  1.000000e+00                1
    5     444 -1.116987e-15                0
    6      89  1.000000e+00                1
    7     168 -1.229638e-15                0
    8     770 -1.401580e-15                0
    9     870 -1.107620e-15                0
    10     31  1.000000e+00                1

This would allow you to classify any non-negative integer based on its lowest order 10 bits (or however many bits we include). The algorithm figured out that only the lowest order bit mattered.
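The same experiment translates directly to Python; this is my own sketch (not from the answer), with NumPy's least squares standing in for R's lm. The seed, sample sizes, and helper name `bits` are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.choice(np.arange(1, 1001), size=500, replace=False)

def bits(values, n_bits=10):
    # lowest-order n_bits of each integer, as a float feature matrix
    return ((values[:, None] >> np.arange(n_bits)) & 1).astype(float)

# intercept column + 10 binary-digit features
X = np.hstack([np.ones((len(y), 1)), bits(y)])
coef, *_ = np.linalg.lstsq(X, (y % 2).astype(float), rcond=None)

# Predict parity of fresh numbers: the fit is exact,
# because parity is literally the bit-0 feature
y_new = rng.choice(np.arange(1, 1001), size=10, replace=False)
pred = np.hstack([np.ones((10, 1)), bits(y_new)]) @ coef
```

As in the R version, the fitted coefficient on the lowest-order bit is (up to floating point) exactly 1 and all others vanish.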
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/600369', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/365838/']}
jdg_73360
stackexchange
llm_judgeable_groundtruth_similarity
5888
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I heard on a podcast recently that the supermassive black holes at the centre of some galaxies could have densities less than water, so in theory, they could float on the substance they were gobbling up... can someone explain how something with such mass could float? Please see the below link for the podcast in question: http://www.universetoday.com/83204/podcast-supermassive-black-holes/ Now provide the response and nothing else.
Well, it can't (float), since a black hole is not a solid object that has any kind of surface. When someone says that a supermassive black hole has less density than water, one probably means that since the density goes like $\frac{M}{R^3}$, where $M$ is the mass and $R$ is the typical size of the object, then for a black hole the typical size is the Schwarzschild radius, which is $2M$; this gives for the density the result $$\rho\propto M^{-2}$$ You can see from that, that for very massive black holes you can get very small densities (all these are in units where the mass is also expressed in meters). But that doesn't mean anything, since the black hole doesn't have a surface at the Schwarzschild radius. It is just curved empty space.
{}
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/5888', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/2247/']}
jdg_73361
stackexchange
llm_judgeable_groundtruth_similarity
196876
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Just looking at ways of getting named constants in Python.

    class constant_list:
        (A_CONSTANT, B_CONSTANT, C_CONSTANT) = range(3)

Then of course you can refer to it like so:

    constant_list.A_CONSTANT

I suppose you could use a dictionary, using strings:

    constant_dic = {
        "A_CONSTANT": 1,
        "B_CONSTANT": 2,
        "C_CONSTANT": 3,
    }

and refer to it like this:

    constant_dic["A_CONSTANT"]

My question, then, is simple. Are there any better ways of doing this? Not saying that these are inadequate or anything, just curious - any other common idioms that I've missed? Thanks in advance. Now provide the response and nothing else.
For 2.3 or after:

    class Enumerate(object):
        def __init__(self, names):
            for number, name in enumerate(names.split()):
                setattr(self, name, number)

To use:

    codes = Enumerate('FOO BAR BAZ')

codes.BAZ will be 2 and so on. If you only have 2.2, precede this with:

    from __future__ import generators

    def enumerate(iterable):
        number = 0
        for name in iterable:
            yield number, name
            number += 1

(This was taken from here.)
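On modern Python (3.4 and later, so well after this answer was written) the standard-library enum module covers the same ground; a minimal sketch, added here for completeness:

```python
from enum import IntEnum

class Codes(IntEnum):
    FOO = 0
    BAR = 1
    BAZ = 2

# Same lookup as the hand-rolled Enumerate class, plus names,
# iteration, and value comparisons for free
value = int(Codes.BAZ)                 # 2
name = Codes.BAZ.name                  # 'BAZ'
members = [m.name for m in Codes]      # ['FOO', 'BAR', 'BAZ']
```

Because IntEnum members compare equal to plain ints, code that previously stored bare integers keeps working.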
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/196876', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/61/']}
jdg_73362
stackexchange
llm_judgeable_groundtruth_similarity
33211988
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to reshape a numpy array as depicted below, from 3D to 2D. Unfortunately, the order is not correct. Assume we have a numpy array of shape (1024, 64, 100) and want to convert it to (1024*100, 64). Does anybody have an idea how to maintain the order? I have some sample data:

    data[0,0,0]=1
    data[0,1,0]=2
    data[0,2,0]=3
    data[0,3,0]=4
    data[1,0,0]=5
    data[1,1,0]=6
    data[1,2,0]=7
    data[1,3,0]=8
    data[2,0,0]=9
    data[2,1,0]=10
    data[2,2,0]=11
    data[2,3,0]=12
    data[0,0,1]=20
    data[0,1,1]=21
    data[0,2,1]=22
    data[0,3,1]=23
    data[1,0,1]=24
    data[1,1,1]=25
    data[1,2,1]=26
    data[1,3,1]=27
    data[2,0,1]=28
    data[2,1,1]=29
    data[2,2,1]=30
    data[2,3,1]=31

and I would like to have an outcome like this:

    array([[  1.,   2.,   3.,   4.],
           [  5.,   6.,   7.,   8.],
           [  9.,  10.,  11.,  12.],
           [ 20.,  21.,  22.,  23.],
           [ 24.,  25.,  26.,  27.],
           [ 28.,  29.,  30.,  31.]])

Moreover, I would also like to have the reshaping in the other way, i.e. from:

    array([[  1.,   2.,   3.,   4.],
           [  5.,   6.,   7.,   8.],
           [  9.,  10.,  11.,  12.],
           [ 20.,  21.,  22.,  23.],
           [ 24.,  25.,  26.,  27.],
           [ 28.,  29.,  30.,  31.]])

to the desired output:

    [[[  1.  20.]
      [  2.  21.]
      [  3.  22.]
      [  4.  23.]]

     [[  5.  24.]
      [  6.  25.]
      [  7.  26.]
      [  8.  27.]]

     [[  9.  28.]
      [ 10.  29.]
      [ 11.  30.]
      [ 12.  31.]]]

Now provide the response and nothing else.
It looks like you can use numpy.transpose and then reshape, like so:

    data.transpose(2,0,1).reshape(-1, data.shape[1])

Sample run:

    In [63]: data
    Out[63]:
    array([[[  1.,  20.],
            [  2.,  21.],
            [  3.,  22.],
            [  4.,  23.]],

           [[  5.,  24.],
            [  6.,  25.],
            [  7.,  26.],
            [  8.,  27.]],

           [[  9.,  28.],
            [ 10.,  29.],
            [ 11.,  30.],
            [ 12.,  31.]]])

    In [64]: data.shape
    Out[64]: (3, 4, 2)

    In [65]: data.transpose(2,0,1).reshape(-1, data.shape[1])
    Out[65]:
    array([[  1.,   2.,   3.,   4.],
           [  5.,   6.,   7.,   8.],
           [  9.,  10.,  11.,  12.],
           [ 20.,  21.,  22.,  23.],
           [ 24.,  25.,  26.,  27.],
           [ 28.,  29.,  30.,  31.]])

    In [66]: data.transpose(2,0,1).reshape(-1, data.shape[1]).shape
    Out[66]: (6, 4)

To get back the original 3D array, use reshape and then numpy.transpose, like so:

    In [70]: data2D.reshape(np.roll(data.shape, 1)).transpose(1,2,0)
    Out[70]:
    array([[[  1.,  20.],
            [  2.,  21.],
            [  3.,  22.],
            [  4.,  23.]],

           [[  5.,  24.],
            [  6.,  25.],
            [  7.,  26.],
            [  8.,  27.]],

           [[  9.,  28.],
            [ 10.,  29.],
            [ 11.,  30.],
            [ 12.,  31.]]])
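A compact round-trip check of the two transformations above (my addition, using random data rather than the example array, so it covers an arbitrary (3, 4, 2) input):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((3, 4, 2))

# 3D -> 2D: stack the last-axis slices vertically
flat = data.transpose(2, 0, 1).reshape(-1, data.shape[1])

# 2D -> 3D: invert the reshape, then invert the (2,0,1) transpose with (1,2,0)
back = flat.reshape(np.roll(data.shape, 1)).transpose(1, 2, 0)
```

The key point is that (1, 2, 0) is the inverse permutation of (2, 0, 1), so the round trip reproduces the array exactly.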
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33211988', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5462312/']}
jdg_73363
stackexchange
llm_judgeable_groundtruth_similarity
35057909
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the difference between calling setAttribute and setAttributeNS with null as the namespace parameter? Also is there an issue with using getAttribute() and then setAttributeNS ? Now provide the response and nothing else.
setAttribute() is a DOM 1 function. setAttributeNS() is a DOM 2 function that resolves the problem of conflicting tag or attribute names by specifying the xmlns namespace that should apply to the tag/attribute in the first argument. If an attribute does not have a defined namespace prefix, the first argument must be null . You could use setAttribute() but for consistency it is advisable to stick to setAttributeNS() . See: https://developer.mozilla.org/en/docs/Web/SVG/Namespaces_Crash_Course#Scripting_in_namespaced_XML "However, note carefully: the Namespaces in XML 1.1 recommendation states that the namespace name for attributes without a prefix does not have a value. In other words, although the attributes belong to the namespace of the tag, you do not use the tag's namespace name. Instead, you must use null as the namespace name for unqualified (prefixless) attributes."
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35057909', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3697264/']}
jdg_73364
stackexchange
llm_judgeable_groundtruth_similarity
299339
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the picture above, it appears that the user has 43 reputation, 10 bronze, and 3005 silver badges. At first sight, I thought, "How is it possible?" Then I noticed that the user actually has 3005 reputation, 10 silver, and 43 bronze badges! The formatting is because the display name has some Arabic characters. RTL characters should not reverse the badge/rep display. Now provide the response and nothing else.
Here's what's going on, for the curious. (No, I'm not an SE developer; I'm a (slightly) bored full-stack web developer on vacation who decided to inspect the source.) Here's the culprit code snippet:

    <div class="user-details">
        <a href="/users/4730/seyyed-ali-%d8%a7%d9%84%d8%b3%db%8c%d8%af-%d8%b9%d9%84%db%8c">SEYYED___ALI السید____علی</a><br>
        <span class="reputation-score" title="reputation score " dir="ltr">3,005</span>
        <span title="10 silver badges">
            <span class="badge2"></span>
            <span class="badgecount">10</span>
        </span>
        <span title="43 bronze badges">
            <span class="badge3"></span>
            <span class="badgecount">43</span>
        </span>
    </div>

The only span to have dir set is the reputation score. All of the others... don't. Firefox seems to be just fine with this, as it renders everything "alright": ...which leads me to believe it's a Chrome (or WebKit) bug. The fix might be as simple as adding the dir attributes to the other two spans, or doing a bit of enforcement on user-supplied text to ensure that the string ends in an LTR marker if it detects an RTL marker. Personally, I think that the former is a simpler/more straightforward approach, but the latter is more fun/challenging to attempt.
{}
{'log_upvote_score': 5, 'links': ['https://meta.stackoverflow.com/questions/299339', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/3555000/']}
jdg_73365
stackexchange
llm_judgeable_groundtruth_similarity
2041905
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Definition: if $M$ is a smooth manifold, define the orientable double cover of $M$ by: $$\widetilde{M}:=\{(p, o_p)\mid p\in M, o_p\in\{\text{orientations on }T_pM\}\}$$ together with the function $\pi:\widetilde{M}\to M$ with $\pi((p,o_p))=p$ . There are three things I'm trying to understand about $\widetilde{M}$ : What is its differentiable structure? Why is $\widetilde{M}$ orientable? Why is the connectedness of $\widetilde{M}$ equivalent to the non-orientability of $M$ ? Here's where I'm at: first, for the topology of $\widetilde{M}$ , one may define $\widetilde{U}\subset\widetilde{M}$ as open $\Leftrightarrow \exists U\subset M$ open with $$\widetilde{U}=\{(p,o_p)\mid p\in U, o_p\in\{\text{orientations on }T_pM\}$$ Now I'm trying to figure out some chart $(\widetilde{U},\widetilde{\phi})$ at $(p,o_p)$ based on $(\phi, U)$ at $p$ . I've tried this: \begin{align*}\widetilde{\phi}:\widetilde{U}&\to\mathbb{R}^n\\(p, o_p)&\mapsto \phi(p)\end{align*} But that obviously doesn't work because it is not even injective. Somehow I have to involve the orientation $o_p$ in the definition, but I really don't know how to do it. About the orientability, I guess it will have something to do with the orientability of the atlas $\{(\widetilde{U}_{\alpha}, \widetilde{\phi}_{\alpha})\}$ , but since I can't figure out the definition of $\widetilde{\phi}$ , I'm stuck. Now for the connectedness of $\widetilde{M}$ and non-orientability of $M$ , that I have no idea. Now provide the response and nothing else.
Almost $2$ years later, I'll give a complete answer to my own question. Step 1 (Topology of $\widetilde{M}$ ): Take an atlas $\{(U_\alpha,\varphi_\alpha)\}_{\alpha\in\Lambda}$ such that $\{U_\alpha\}_{\alpha\in\Lambda}$ is a countable basis for $M$ . Define the following subsets of $\widetilde{M}$ : $$U_\alpha^+:=\left\{(p,o_p)\in\widetilde{M}\mid p\in U_\alpha,\, o_p=\left[\left.\frac{\partial }{\partial\varphi_\alpha^1}\right|_p,...,\left.\frac{\partial }{\partial\varphi_\alpha^n}\right|_p\right]\right\},$$ $$U_\alpha^-:=\left\{(p,o_p)\in\widetilde{M}\mid p\in U_\alpha,\,o_p=-\left[\left.\frac{\partial }{\partial\varphi_\alpha^1}\right|_p,...,\left.\frac{\partial }{\partial\varphi_\alpha^n}\right|_p\right]\right\}.$$ We define the topology of $\widetilde{M}$ as the one generated by the basis $\{U_\alpha^+,U_\alpha^-\}_{\alpha\in\Lambda}$ . This is a countable basis since $\Lambda$ is countable. In order to check that this topology is Hausdorff we only need to use the fact that the topology of $M$ is Hausdorff. We prove, in addition, that this makes $\pi$ a continuous, open map (in fact, a double covering). Indeed, notice that for any $\alpha\in\Lambda$ we have $\pi^{-1}(U_\alpha)=U_\alpha^+\cup U_\alpha^-$ and $\pi(U_\alpha^\pm)=U_\alpha$ . Since $\{U_\alpha\}_{\alpha\in\Lambda}$ is a basis for $M$ and $\{U_\alpha^+,U_\alpha^-\}_{\alpha\in\Lambda}$ is a basis for $\widetilde{M}$ , consequently $\pi$ is continuous and open. Moreover, for an arbitrary $p\in M$ , any open set $U_\alpha$ containing $p$ is such that $\pi^{-1}(U_\alpha)=U_\alpha^+\cup U_\alpha^-$ (disjoint union) and $\pi|_{U_\alpha^\pm}:U_\alpha^\pm\to U_\alpha$ is a homeomorphism, which shows that $\pi$ is a double covering. 
Step 2 (Differentiable Structure of $\widetilde{M}$ ): Define $\varphi_\alpha^+:U^+_\alpha\to \varphi_\alpha(U_\alpha)\subset\mathbb{R}^n$ by $\varphi^+_\alpha=\varphi_\alpha\circ\pi|_{U_\alpha^+}$ and, similarly, $\varphi_\alpha^-:U^-_\alpha\to\varphi_\alpha(U_\alpha)\subset\mathbb{R}^n$ by $\varphi^-_\alpha=\varphi_\alpha\circ\pi|_{U_\alpha^-}$ . Both $\varphi_\alpha^+,\varphi_\alpha^-$ are homeomorphisms, because $\varphi_\alpha$ and $\pi|_{U_\alpha^\pm}$ are homeomorphisms. Moreover: \begin{align*}\varphi_\alpha^\pm\circ(\varphi_\beta^\pm)^{-1}(x_1,...,x_n)&=\varphi_\alpha^\pm\left(\underbrace{\varphi_\beta^{-1}(x_1,...,x_n)}_{=:p},\pm\left[\left.\frac{\partial }{\partial\varphi_\beta^1}\right|_p,...,\left.\frac{\partial }{\partial\varphi_\beta^n}\right|_p\right]\right)\\&=\underbrace{\varphi_\alpha\circ\varphi_\beta^{-1}}_{\text{smooth}}(x_1,...,x_n).\end{align*} (the upper indexes $\pm$ are not relevant to this argument) This shows that the atlas $\{(U_\alpha^+,\varphi_\alpha^+),(U_\alpha^-,\varphi_\alpha^-)\}_{\alpha\in\Lambda}$ is compatible, which makes $\widetilde{M}$ a smooth manifold. This also makes $\pi$ a local diffeomorphism, since $\pi|_{U_\alpha^\pm}=\varphi_\alpha^{-1}\circ\varphi_\alpha^\pm$ and $\varphi_\alpha,\varphi_\alpha^\pm$ are diffeomorphisms. Step 3 (Orientability of $\widetilde{M}$ ) : Let's construct a pointwise orientation $O:(p,o_p)\mapsto O_{(p,o_p)}$ on $\widetilde{M}$ . Take an arbitrary $(p,o_p)\in\widetilde{M}$ . Since $\pi$ is a local diffeomorphism, $(d\pi)_{(p,o_p)}$ is a bijective linear transformation and we may find a unique $O_{(p,o_p)}$ which corresponds to $o_p$ via $d\pi$ . More precisely, define $O_{(p,o_p)}:=[(d\pi)_{(p,o_p)}^{-1}e_1,...,(d\pi)_{(p,o_p)}^{-1}e_n]$ , where $\{e_1,...,e_n\}$ is any basis for $T_pM$ with $o_p=[e_1,...,e_n]$ . We show that $O$ is continuous. 
Notice that for a neighbourhood $U_\alpha$ of $p$ , we either have $(p,o_p)\in U_\alpha^+$ , in which case $O_{(q,o_q)}=\left[\left.\frac{\partial }{\partial(\varphi_\alpha^+)^1}\right|_{(q,o_q)},...,\left.\frac{\partial }{\partial(\varphi_\alpha^+)^n}\right|_{(q,o_q)}\right]$ for all $(q,o_q)\in U_\alpha^+$ , or $(p,o_p)\in U_\alpha^-$ , in which case $O_{(q,o_q)}=\left[\left.\frac{\partial }{\partial(\varphi_\alpha^-)^1}\right|_{(q,o_q)},...,\left.\frac{\partial }{\partial(\varphi_\alpha^-)^n}\right|_{(q,o_q)}\right]$ for all $(q,o_q)\in U_\alpha^-$ . Since $(p,o_p)$ is arbitrary, this means that $O$ is continuous. Thus $\widetilde{M}$ is orientable. Step 4 (Orientability of $M$ vs. Connectedness of $\widetilde{M}$ ) : Suppose $\widetilde{M}$ is disconnected. Since $\pi$ is a double cover, this means that $\widetilde{M}=U\cup V$ , where $U,V$ are disjoint open subsets such that both $\pi|_U:U\to M$ and $\pi|_V:V\to M$ are diffeomorphisms. As $\widetilde{M}$ is orientable, in particular $U$ is orientable, so $M$ inherits an orientation from $U$ via $\pi|_U$ . Conversely, suppose $M$ is orientable and take an oriented atlas $\{U_\alpha,\varphi_\alpha\}_{\alpha\in\Lambda}$ . We show that $\widetilde{M}$ is the disjoint union of the open sets $\bigcup_\alpha U_\alpha^+$ and $\bigcup_\alpha U_\alpha^-$ , which means that $\widetilde{M}$ is disconnected. Assume by contradiction that $U_\alpha^+\cap U_\beta^-\neq \emptyset$ for some $\alpha,\beta\in\Lambda$ . If $(p,o_p)\in U_\alpha^+\cap U_\beta^-$ , this means that $p\in U_\alpha\cap U_\beta$ with $o_p=\left[\left.\frac{\partial}{\partial \varphi_\alpha^1}\right|_p,...,\left.\frac{\partial}{\partial \varphi_\alpha^n}\right|_p\right]=$ $-\left[\left.\frac{\partial}{\partial \varphi_\beta^1}\right|_p,...,\left.\frac{\partial}{\partial \varphi_\beta^n}\right|_p\right]$ , therefore $\det(D(\varphi_\alpha\circ\varphi_\beta^{-1})(\varphi_\beta(p)))<0$ (absurd, since the atlas is oriented). $_\blacksquare$
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2041905', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/200513/']}
jdg_73366
stackexchange
llm_judgeable_groundtruth_similarity
1147
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am interested in calculating the $n$'th power of a $n\times n$ matrix $A$. Suppose we have an algorithm for matrix multiplication which runs in $\mathcal{O}(M(n))$ time. Then, one can easily calculate $A^n$ in $\mathcal{O}(M(n)\log(n))$ time. Is it possible to solve this problem in lesser time complexity? Matrix entries can, in general, be from a semiring but you can assume additional structure if it helps. Note: I understand that in general computing $A^m$ in $o(M(n)\log(m))$ time would give a $o(\log m)$ algorithm for exponentiation. But, a number of interesting problems reduce to the special case of matrix exponentiation where m=$\mathcal O(n)$, and I was not able to prove the same about this simpler problem. Now provide the response and nothing else.
If the matrix is diagonalizable then taking the $n$th power can be done in time $$O(D(n) + n\log n)$$ where $D(n)$ is the time to diagonalize $A$. Just to complete the details, if $A=P^{-1}DP$ with a diagonal $D$, then $$A^n = (P^{-1}DP)^n = P^{-1}D^nP$$ and $D^n$ can be computed by just taking each element of the diagonal (each eigenvalue of $A$) to the $n$th power.
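As a sketch of why this works, here is a minimal pure-Python illustration with a hand-picked diagonalizable $2\times 2$ matrix (this is not a general diagonalization routine; the matrices $P$, $P^{-1}$, $D$ below are chosen by hand so everything stays in integers):

```python
# Sketch: computing A^n via A = P D P^{-1}, so A^n = P D^n P^{-1}.
# The only n-dependent work is raising each eigenvalue to the n-th power.

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [
        [X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
        [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]],
    ]

# Hand-picked decomposition A = P D P^{-1} with eigenvalues 2 and 3.
P     = [[1, 1], [0, 1]]
P_inv = [[1, -1], [0, 1]]
D     = [[2, 0], [0, 3]]
A     = matmul(matmul(P, D), P_inv)   # works out to [[2, 1], [0, 3]]

def power_via_diag(n):
    """A^n = P diag(2^n, 3^n) P^{-1}."""
    Dn = [[2**n, 0], [0, 3**n]]
    return matmul(matmul(P, Dn), P_inv)

def power_naive(n):
    """Reference implementation: repeated multiplication."""
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = matmul(R, A)
    return R

print(power_via_diag(5))  # agrees with power_naive(5)
```

The same idea scales to $n\times n$: once the diagonalization is paid for, each eigenvalue is exponentiated independently.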
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/1147', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/984/']}
jdg_73367
stackexchange
llm_judgeable_groundtruth_similarity
486
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Some planets, such as Mars, have 2 or more moons. Giants, like Jupiter of Saturn have a lot of moons! How likely the orbit of one crosses another one's orbit? Is it possible for two moons to collide? If yes, what would exactly happen? Now provide the response and nothing else.
There is one known pair of moons in the solar system that seemingly swap orbits every couple of years: Saturn's Epimetheus and Janus . Their orbits are so close together that they interact gravitationally every couple of years (when the inner moon catches up to the outer moon), so that the outer moon is slowed down and the inner moon is accelerated. In a frame rotating with the moons, each appears to reverse direction, tracing one of two co-orbital horseshoe orbits; the moons never cross paths, but exchange a bit of momentum when closest, and Saturn's gravitational pull then draws each moon back in the direction it came from, swapping the lower and higher orbits. It's a bit tricky to explain in words: [From: The Rotation of Janus and Epimetheus - Tiscareno, Matthew S. et al. Icarus 204 (2009) .]
{}
{'log_upvote_score': 5, 'links': ['https://astronomy.stackexchange.com/questions/486', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/62/']}
jdg_73368
stackexchange
llm_judgeable_groundtruth_similarity
51722
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Devarda's alloy is used in the detection of nitrates. However, judging by the reaction equation, only aluminium plays the role of reducer: $$\ce{3NaNO3 + 8Al + 5NaOH + 18H2O -> 8Na[Al(OH)4] + 3NH3(g)}$$ Why does it contain copper? Is copper also capable of acting as a reducer here? I know that zinc, the third component, is capable of that. Now provide the response and nothing else.
Virtually every analytical chemistry textbook's chapter on anion determination mentions that Devarda's alloy is very brittle and can be easily pulverized in a mortar, unlike $\ce{Al}$ and $\ce{Zn}$ metals. This is quite a handy property, as for qualitative analysis of nitrates in a heterogeneous medium the reducing phase should be ground to a fine powder in order to: assure micro-scale precision of the added reactant, which is tricky to achieve with aluminium or zinc, as both metals are usually shipped as granules, and the fine metal powders are prone to oxidation, increasing the amount of unwanted oxides (plus, their powders are sticky); increase the available reaction surface to speed up the analysis and increase its completeness (especially important for isotopic nitrate analysis). On top of that, German Wikipedia also supports both comments by Ivan Neretin ("Galvanic couple, I guess.") and aventurin ("Catalyst, I guess. ;-)") and suggests that copper catalytically facilitates the formation of a local electrochemical cell. A reduction with Devarda's alloy can be carried out in weakly alkaline solution or even in neutral solution, whereas reduction with aluminum often requires a strongly alkaline solution. At the same pH, the reaction with Devarda's alloy is overall significantly faster than when using zinc or aluminum. This was experimentally demonstrated by Mertens et al. [ 1 ]: The superiority of Devarda alloy compared to aluminum dust as a reductant in alkaline medium is illustrated best with the following experiment (see Figure 3). Twenty ppm $\ce{NO3-}$ is subjected to reduction, first with aluminum dust and the potential of the ammonia probe is recorded continuously. One observes a slowly rising potential. On addition of Devarda alloy to this reaction mixture, a potential jump is obtained and the potential stabilizes rapidly (after 5 minutes). This shows clearly that the kinetics of the reduction with Devarda alloy are much more rapid than with aluminum dust. Figure 3. Potential observed with the ammonia probe during the reduction of 20 ppm $\ce{NO3-}$ as a function of time. From $t = 0$ to A: aluminum dust; at point A, Devarda alloy was added. References: Mertens, John.; Van den Winkel, Pierre.; Massart, D. L. Determination of Nitrate in Water with an Ammonia Probe. Anal. Chem. 1975, 47 (3), 522–526. DOI: 10.1021/ac60353a009.
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/51722', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/7213/']}
jdg_73369
stackexchange
llm_judgeable_groundtruth_similarity
12012720
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Asp.net MVC to pull my pages when I require them.The structure of my pages are: { Layout = "";} <div data-role="page"> .... <script type="text/javascript"> $(document).one("pageinit", function () { .... </script> </div> Should I leave my page structures as above or should I put my scripts in one global external javascript file (in which case how would I do I associate each "pageinit" with the correct page)? The other problem I am having is debugging the embedded script in chrome. Is this related to the fact it is embedded? Now provide the response and nothing else.
You can do it via reflection like so:

val routeCache = {
  import routes._
  val jsRoutesClass = classOf[routes.javascript]
  val controllers = jsRoutesClass.getFields().map(_.get(null))
  controllers.flatMap { controller =>
    controller.getClass().getDeclaredMethods().map { action =>
      action.invoke(controller).asInstanceOf[play.core.Router.JavascriptReverseRoute]
    }
  }
}

def javascriptRoutes = Action { implicit request =>
  Ok(Routes.javascriptRouter("jsRoutes")(routeCache:_*)).as("text/javascript")
}

This was derived from the generated source files found in target/scala-2.x.x/src_managed. You could actually add your own source generator and parse the routes file yourself, but I find doing it via reflection easier. An additional thing you might want to do is filter out the methods you don't want as this will give you ALL the routes (including the javascriptRouter itself).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12012720', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/126483/']}
jdg_73370
stackexchange
llm_judgeable_groundtruth_similarity
3091451
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f: (0,1) \rightarrow \mathbb{R} $ be uniformly continuous. Prove that $$ \lim_{\epsilon \to 0} \int^{1-\epsilon}_{\epsilon}\!\!f(t)dt \in \mathbb{R}$$ Any ideas?? $f$ can be extended to a continuous function $\hat{f}$ in $[0,1]$ , so an integrable one. My best guess is the integral will equal $\int^{1}_{0}\hat{f}(t)\,dt$ . Am I correct?? A rigorous proof will be very much appreciated. Thanks in advance. Now provide the response and nothing else.
Correct. Since the extension $\hat{f}$ is continuous on the compact interval $[0,1]$, it is bounded, say $|\hat{f}(x)| \leqslant M$ on $[0,1]$. Then, as $\epsilon \to 0$, $$\left|\int_0^1\hat{f} - \int_\epsilon^{1-\epsilon} f \right| = \left|\int_0^1\hat{f} - \int_\epsilon^{1-\epsilon} \hat{f} \right|\leqslant \left|\int_0^\epsilon \hat{f}\right| + \left|\int_{1-\epsilon}^1 \hat{f} \right| \leqslant 2M\epsilon \to 0$$
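A quick numeric illustration of the bound, using the concrete (hypothetical, not from the question) choice $f(t)=t$, for which both integrals have closed forms and $M=1$:

```python
# Illustration of |∫_0^1 f - ∫_ε^{1-ε} f| ≤ 2Mε for the sample choice
# f(t) = t, which extends continuously to [0,1] with |f| ≤ M = 1.

def truncated_integral(eps):
    """Exact value of the truncated integral: ∫_ε^{1-ε} t dt = ((1-ε)² - ε²)/2."""
    return ((1 - eps)**2 - eps**2) / 2

full = 0.5  # ∫_0^1 t dt
M = 1.0

for eps in [0.1, 0.01, 0.001]:
    gap = abs(full - truncated_integral(eps))
    print(eps, gap)            # the gap shrinks linearly with ε
    assert gap <= 2 * M * eps + 1e-12  # the bound from the proof
```

For this $f$ the gap is exactly $\epsilon$, comfortably inside the $2M\epsilon$ bound.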
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3091451', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/639267/']}
jdg_73371
stackexchange
llm_judgeable_groundtruth_similarity
6077357
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a for loop enclosing an ajax call and I'm trying to determine the best method for passing the index from the for loop to the callback function. Here is my code:

var arr = [2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010];
for (var i = 0; i < arr.length; i++) {
    $.ajaxSetup({ cache: false })
    $.getJSON("NatGeo.jsp", {
            ZipCode: arr[i],
            Radius: document.getElementById("radius").value,
            sensor: false
        },
        function(data) {
            DrawZip(data, arr[i]);
        }
    );
}

Currently, only the last value of the arr array is passed due to the asynchronous ajax call. How can I pass each iteration of the arr array to the callback function, aside from running the ajax call synchronously? Now provide the response and nothing else.
You could use a javascript closure:

for (var i = 0; i < arr.length; i++) {
    (function(i) {
        // do your stuff here
    })(i);
}

Or you could just use $.each :

var arr = [2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010];
$.each(arr, function(index, value) {
    $.ajaxSetup({ cache: false });
    $.getJSON("NatGeo.jsp", {
            ZipCode: value,
            Radius: document.getElementById("radius").value,
            sensor: false
        },
        function(data) {
            DrawZip(data, value);
        }
    );
});
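The same late-binding trap exists outside JavaScript; here is a Python sketch of the analogous problem and fix (a default argument plays the role of the IIFE's parameter):

```python
# In Python, a lambda created in a loop captures the *variable* i,
# not its value at creation time, so every callback sees the final
# value -- exactly like the jQuery example's async callback.

callbacks_broken = []
for i in range(3):
    callbacks_broken.append(lambda: i)        # all three share the same i

callbacks_fixed = []
for i in range(3):
    callbacks_fixed.append(lambda i=i: i)     # default arg snapshots i now,
    # the same role the IIFE `(function(i){...})(i)` plays in the answer

print([f() for f in callbacks_broken])  # [2, 2, 2]
print([f() for f in callbacks_fixed])   # [0, 1, 2]
```

Either way, the key is creating a new scope (or binding) per iteration so each callback keeps its own copy of the index.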
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6077357', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/717236/']}
jdg_73372
stackexchange
llm_judgeable_groundtruth_similarity
17063257
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Asp.net MVC to pull my pages when I require them. The structure of my pages is:

{
    Layout = "";
}

<div data-role="page">
    ....
    <script type="text/javascript">
        $(document).one("pageinit", function () {
        ....
    </script>
</div>

Should I leave my page structures as above or should I put my scripts in one global external javascript file (in which case how would I associate each "pageinit" with the correct page)? The other problem I am having is debugging the embedded script in chrome. Is this related to the fact it is embedded? Now provide the response and nothing else.
From what I can see, {"a": "b", "a": "c"} actually is valid JSON according to RFC 4627 . An object structure is represented as a pair of curly brackets surrounding zero or more name/value pairs (or members). A name is a string. A single colon comes after each name, separating the name from the value. A single comma separates a value from a following name. The names within an object SHOULD be unique. ...where SHOULD means: 3. SHOULD. This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course. So yeah, basically you can do that, it is legal, but it's also a bad idea. Different JSON decoders will probably handle this situation differently and/or in undesirable ways. Look at what the spec requires of parsers: A JSON parser transforms a JSON text into another representation. A JSON parser MUST accept all texts that conform to the JSON grammar. A JSON parser MAY accept non-JSON forms or extensions. An implementation may set limits on the size of texts that it accepts. An implementation may set limits on the maximum depth of nesting. An implementation may set limits on the range of numbers. An implementation may set limits on the length and character contents of strings. ...but an implementation doesn't have to handle situations like this sanely. For example:

# Python 2.7
>>> import json
>>> json.JSONDecoder().decode('{"a": "b", "a": "c"}')
{u'a': u'c'}

# Chrome 32
> JSON.parse('{"a": "b", "a": "c"}')
Object {a: "c"}

...and other implementations may legally give you (in Python notation):

{"a": "b"}
[("a", "b"), ("a", "c")]
[("a", ["b", "c"])]
[]
42
"your JSON is bad and you should feel bad"

...or just good old nasal daemons. Literally the only illegal thing for a JSON parser to do here is raise an exception. The last thing you want in production code is to rely on weird side cases, so the last thing you want to do is exercise your right to form nominally legal but practically useless JSON. If you want to do that, you'll have to do it by hand - build your own abstract syntax trees, your own parsers, your own generators, generators for anybody who might want to consume your data...
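To see how one mainstream parser actually behaves today (Python 3's json module, the modern counterpart of the Python 2.7 snippet above), and how to observe the duplicate pairs if you really must:

```python
import json

# Python's json module silently keeps the *last* value for a
# duplicated key...
doc = '{"a": "b", "a": "c"}'
print(json.loads(doc))  # {'a': 'c'}

# ...unless you opt in to seeing every name/value pair yourself via
# object_pairs_hook, which receives the raw list of (key, value) pairs.
pairs = json.loads(doc, object_pairs_hook=list)
print(pairs)            # [('a', 'b'), ('a', 'c')]
```

So even when duplicates survive the wire format, whether your consumer can see them depends entirely on parser-specific hooks like this one.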
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17063257', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1315970/']}
jdg_73373
stackexchange
llm_judgeable_groundtruth_similarity
169402
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a bigfile like this:

denovo1 xxx yyyy oggugu ddddd
denovo11 ggg hhhh bbbb gggg
denovo22 hhhh yyyy kkkk iiii
denovo2 yyyyy rrrr fffff jjjj
denovo33 hhh yyy eeeee fffff

then my pattern file is:

denovo1
denovo3
denovo22

I'm trying to use fgrep in order to extract only the lines exactly matching the pattern in my file (so I want denovo1 but not denovo11). I tried to use -x for the exact match, but then I got an empty file. I tried: fgrep -x --file="pattern" bigfile.txt > clusters.blast.uniq Is there a way to make grep search only in the first column? Now provide the response and nothing else.
You probably want the -w flag - from man grep

-w, --word-regexp
    Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore.

i.e.

grep -wFf patfile file
denovo1 xxx yyyy oggugu ddddd
denovo22 hhhh yyyy kkkk iiii

To enforce matching only in the first column, you would need to modify the entries in the pattern file to add a line anchor; you could also make use of the \b word anchor instead of the command-line -w switch, e.g. in patfile:

^denovo1\b
^denovo3\b
^denovo22\b

then

grep -f patfile file
denovo1 xxx yyyy oggugu ddddd
denovo22 hhhh yyyy kkkk iiii

Note that you must drop the -F switch if the file contains regular expressions instead of simple fixed strings.
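If a regex pattern file ever becomes awkward, the "exact match on the first column only" logic can also be sketched in Python with a set lookup (the inline data below is a stand-in mirroring the question's files):

```python
# Match lines whose first whitespace-separated column is exactly one
# of the wanted IDs -- no risk of denovo1 matching denovo11.

bigfile = """\
denovo1 xxx yyyy oggugu ddddd
denovo11 ggg hhhh bbbb gggg
denovo22 hhhh yyyy kkkk iiii
denovo2 yyyyy rrrr fffff jjjj
denovo33 hhh yyy eeeee fffff
"""

patterns = {"denovo1", "denovo3", "denovo22"}

matches = [line for line in bigfile.splitlines()
           if line.split(None, 1)[0] in patterns]
print("\n".join(matches))
# denovo1 xxx yyyy oggugu ddddd
# denovo22 hhhh yyyy kkkk iiii
```

In real use you would read the two files from disk instead of inline strings; the set membership test keeps it fast even for large pattern lists.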
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/169402', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/91761/']}
jdg_73374
stackexchange
llm_judgeable_groundtruth_similarity
60868067
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

SETUP
I have a windows 10 PC connecting to my linux server (ubuntu 18.04.2). I run/edit code on my powerful server by remotely accessing the server through VS code's ssh ability. It is super convenient to edit my code on the server with VS code...

THE ISSUE
Visual Studio code won't connect to my Linux server. What is weird is that I am able to ssh into the server from the terminal within VScode by just literally typing ssh <username>@<server-hostname> However, when I do the Remote-SSH option within VScode then it does not want to connect to the server...strange... Here is how I remotely SSH into the server with VScode's built-in functionality (marked with red rectangle). Also part of this screenshot is my config file, which I have triple checked as correct (IP address and .ssh/id_rsa_gpu_1_solarpv_ssh location)

HERE IS THE ERROR OUTPUT

MY ATTEMPTS
I have made sure that the config file is setup correctly and that the IP address is correct. I have ensured that the server's IP address is also in the 'known_hosts' file located at /Users/.ssh/ I even generated my own private key for SSH connection as advised by the great documentation here: https://code.visualstudio.com/docs/remote/troubleshooting#_quick-start-using-ssh-keys Still getting the error of failed to create hard link... With all of my back-end stuff done right, I am absolutely clueless regarding why this is an issue. Now provide the response and nothing else.
After spending literally 9 hours on this issue... here is the answer: when you launch VS Code and remotely access the Linux server, the server is instructed to download and install a VS Code server so that you (on the other side) can connect to it remotely. If the server does not have internet access, just make sure the following setting is ticked. This allows the remote server (with or without an internet connection) to get the VS Code server running.

To eliminate the "failed to create hard link" error, go to the directory mentioned by the error, which in this case on the Linux server means:

1. navigate to /home/<username>/.vscode-server/bin/0ba0ca.../
2. remove/delete the hard-link file and the target file, which also has the same name as the directory
3. then just attempt to remotely access your server again through VS Code, and all should be fine

Here is also a discussion regarding the issue, if you still have problems: https://github.com/microsoft/vscode-remote-release/issues/2507
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60868067', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6618417/']}
jdg_73375
stackexchange
llm_judgeable_groundtruth_similarity
4869994
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've recently started an internship. My employer uses ruby on rails, and I frequently encounter new syntax that I need to look up to understand. I've googled around for a good explanation of named_scope, but what I've found so far is mostly blog posts giving high praise for it, rather a straight definition or introduction. What exactly is named_scope (now simply called scope) in ruby on rails? Now provide the response and nothing else.
A scope is a subset of a collection. Sounds complicated? It isn't. Imagine this: You have Users. Now, some of those Users are subscribed to your newsletter. You marked those who receive a newsletter by adding a field to the Users database (user.subscribed_to_newsletter = true). Naturally, you sometimes want to get those Users who are subscribed to your newsletter. You could, of course, always do this:

User.where(subscribed_to_newsletter: true).each do #something

Instead of always writing this you could, however, do something like this.

#File: users.rb
class User < ActiveRecord::Base
  scope :newsletter, where(subscribed_to_newsletter: true)
  #yada yada
end

If you're using Rails 4 or newer , do this instead:

#File: users.rb
class User < ActiveRecord::Base
  scope :newsletter, -> { where(subscribed_to_newsletter: true) }
  #yada yada
end

This allows you to access your subscribers by simply doing this:

User.newsletter.each do #something

This is a very simple example but in general scopes can be very powerful tools to ease your work. Check out this link: API Description
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/4869994', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/29182/']}
jdg_73376
stackexchange
llm_judgeable_groundtruth_similarity
14335988
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my information area of my app I want to show a brief description of my app. For that I need to create a view with a lot of text. What is the best practice for that ? Actually I created a Linear Layout with a Scrollview, in this i will add my text, which also should contain some variable values e.g. my app version. But I do not know if there are better ways to do that? Here is my current approach:

<LinearLayout
    android:id="@+id/information_content"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:layout_above="@id/bottom_footer"
    android:layout_below="@id/top_header"
    android:layout_marginLeft="@dimen/marginLeft"
    android:layout_alignParentRight="true"
    android:layout_weight="1"
    android:orientation="vertical" >

    <ScrollView
        android:id="@+id/scrollView1"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" >

        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:orientation="vertical" >

            <EditText
                android:id="@+id/info_text"
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:ems="10"
                android:inputType="textMultiLine"
                android:text="Hallo das ist ein test!!! dsf" />
        </LinearLayout>
    </ScrollView>
</LinearLayout>

Any recommendations? Thanks Now provide the response and nothing else.
You are probably looking for a typical "About Screen" with simple scrollable text that may be formatted using HTML. Firstly, here is the layout /layout/about_layout.xml . You don't need to wrap your view in an extra LinearLayout.

<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent" >

    <RelativeLayout
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:padding="12dp" >

        <TextView
            android:id="@+id/about_text_view"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content" />
    </RelativeLayout>
</ScrollView>

Afterwards, add the following to your /values/strings.xml :

<string name="about_text">
<![CDATA[
Author: Your name<br/><br/>Your description text, changelog, licences etc...
]]>
</string>

Finally, use the layout in your AboutActivity.java , display the string HTML-formatted and enrich it with some auto-updated information, such as your version number.

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.about_layout);

    String versionName = "";
    try {
        PackageInfo pinfo = getPackageManager().getPackageInfo(getPackageName(), 0);
        versionName = pinfo.versionName;
    } catch (NameNotFoundException e) {
        e.printStackTrace();
    }

    TextView aboutTextView = (TextView) findViewById(R.id.about_text_view);
    Spanned aboutText = Html.fromHtml("<h1>Your App Name, Version " + versionName + "</h1>"
            + getString(R.string.about_text));
    aboutTextView.setText(aboutText);
}

That's how I deal with the About Screen in my app. Sorry if this answer is a bit over the top, but I think it's a commonly requested pattern.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14335988', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1350066/']}
jdg_73377
stackexchange
llm_judgeable_groundtruth_similarity
22414842
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I found code in my codebase that has $(document).ready(function() {...} inside of another $(document).ready(function() {...} e.g.

$(document).ready(function() {
    // 20 lines...
    $(document).ready(function() {
        foo()
    })
    // 200 lines...
})

function foo() {...}

I want to understand the order of execution so I can safely refactor out this nested callback. The outer callback seems to continue executing before the inner callback executes. Is the outer callback guaranteed to finish before the inner callback gets called? Now provide the response and nothing else.
Is the outer callback guaranteed to finish before the inner callback gets called? Yes. The way document.ready works is that it will wait for the readystatechange event to fire as being ready before the callback gets called; however, it also runs setTimeout if the readystatechange event has already fired. This means that code such as:

$(function () {
    a();
    $(b);
    c();
});

where a , b , and c are all functions, will execute in the order: a , c , b . On a related note, people will question why you would want to run a document.ready call inside another document.ready call, and the short answer is that you wouldn't. The only gain is that $(callback) is a bit more convenient to write than: setTimeout(callback, 0);
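The a-then-c-then-b ordering is a general event-loop property, not a jQuery quirk. As a language-neutral analogy (a Python asyncio sketch, not jQuery itself): scheduling a callback on an already-running loop defers it until the currently executing code yields.

```python
import asyncio

# Analogue of the a/c/b ordering: a callback scheduled via call_soon
# (like a nested $(document).ready, or setTimeout(cb, 0), after the
# DOM is already ready) runs only after the current code finishes.

order = []

async def main():
    loop = asyncio.get_running_loop()
    order.append("a")
    loop.call_soon(lambda: order.append("b"))  # deferred, like the inner ready
    order.append("c")
    await asyncio.sleep(0)  # yield to the loop so the deferred callback runs

asyncio.run(main())
print(order)  # ['a', 'c', 'b']
```

The deferred callback cannot interrupt the code that scheduled it; it waits its turn in the loop's queue, which is exactly why the outer ready handler is guaranteed to finish first.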
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22414842', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1333713/']}
jdg_73378
stackexchange
llm_judgeable_groundtruth_similarity
41479
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to find, say, the 10 most common words in a text file. Firstly, the solution should be optimized for keystrokes (in other words - my time). Secondly, for performance. Here is what I have so far to get the top 10:

cat test.txt | tr -c '[:alnum:]' '[\n*]' | uniq -c | sort -nr | head -10
      6 k
      2 g
      2 e
      2 a
      1 r
      1 k22
      1 k
      1 f
      1 eeeeeeeeeeeeeeeeeeeee
      1 d

I could make a java, python etc. program where I store (word, numberOfOccurences) in a dictionary and sort the value or I could use MapReduce, but I optimize for keystrokes. Are there any false positives? Is there a better way? Now provide the response and nothing else.
That's pretty much the most common way of finding "N most common things", except you're missing a sort , and you've got a gratuitous cat :

tr -c '[:alnum:]' '[\n*]' < test.txt | sort | uniq -c | sort -nr | head -10

If you don't put in a sort before the uniq -c you'll probably get a lot of false singleton words. uniq only collapses consecutive runs of identical lines, not overall uniqueness. EDIT: I forgot a trick, "stop words". If you're looking at English text (sorry, monolingual North American here), words like "of", "and", "the" almost always take the top two or three places. You probably want to eliminate them. The GNU Groff distribution has a file named eign in it which contains a pretty decent list of stop words. My Arch distro has /usr/share/groff/current/eign , but I think I've also seen /usr/share/dict/eign or /usr/dict/eign in old Unixes. You can use stop words like this:

tr -c '[:alnum:]' '[\n*]' < test.txt |
fgrep -v -w -f /usr/share/groff/current/eign |
sort | uniq -c | sort -nr | head -10

My guess is that most human languages need similar "stop words" removed from meaningful word frequency counts, but I don't know where to suggest getting other languages' stop word lists. EDIT: fgrep should use the -w option, which enables whole-word matching. This avoids false positives on words that merely contain short stop words, like "a" or "i".
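Since the questioner mentioned a dictionary-based program as the alternative, here is what that looks like in Python (the stop-word set below is a tiny illustrative stand-in for groff's eign file, not its real contents):

```python
import collections
import re

# Same pipeline as the shell version: tokenize on non-alphanumerics,
# drop stop words, count, take the top N.

stop_words = {"the", "of", "and", "a", "to", "in", "i"}  # illustrative subset

def top_words(text, n=10):
    words = re.findall(r"[a-z0-9]+", text.lower())
    counts = collections.Counter(w for w in words if w not in stop_words)
    return counts.most_common(n)

sample = "the cat and the dog and the cat"
print(top_words(sample, 3))  # [('cat', 2), ('dog', 1)]
```

More keystrokes than the shell one-liner, but easier to extend (e.g. Unicode-aware tokenizing, per-language stop-word files).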
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/41479', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/20116/']}
jdg_73379
stackexchange
llm_judgeable_groundtruth_similarity
556322
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need to find a combinatorial proof of this identity $$\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\binom{2n-2k}{n-1}=0.$$ I think inclusion exclusion is the best method here. But I"m having a really hard time coming up with a set to count. Permutations don't work here. A hint would be really nice. Thanks. Now provide the response and nothing else.
You have $n$ different pairs of shoes. How many ways are there to choose $n-1$ shoes so that you get at least one shoe from each pair? Obviously the answer is $0$, but you can also use inclusion-exclusion to count it the hard way as follows. There are $\binom{2n}{n-1}$ ways to choose $n-1$ shoes. From this we want to exclude the ways that don’t include a shoe from the first pair; clearly there are $\binom{2n-2}{n-1}$ of those. We want to do the same for each of the $n$ pairs, so we should subtract $n\binom{2n-2}{n-1}$. But now each $(n-1)$-set that contains no shoe from either of the first two pairs has been counted once in $\binom{2n}{n-1}$ and subtracted twice in $n\binom{2n-2}{n-1}$ and should therefore be added back in. There are $\binom{2n-4}{n-1}$ such $(n-1)$-sets, and the same is true for each pair of the $\binom{n}2$ pairs of pairs, so we should add $\binom{n}2\binom{2n-4}{n-1}$. Continuing in this fashion, we see that there are $$\sum_{k=0}^n(-1)^k\binom{n}k\binom{2n-2k}{n-1}$$ sets of $n-1$ shoes that meet the requirements, so $$\sum_{k=0}^n(-1)^k\binom{n}k\binom{2n-2k}{n-1}=0\;.$$
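The identity is also easy to check numerically; here is a small Python sanity check (using the standard library's math.comb, which returns 0 when the lower index exceeds the upper one — exactly the convention needed for the large-k terms where $2n-2k < n-1$):

```python
import math

# Verify sum_{k=0}^{n} (-1)^k C(n,k) C(2n-2k, n-1) = 0 for small n.

def shoe_sum(n):
    return sum((-1)**k * math.comb(n, k) * math.comb(2*n - 2*k, n - 1)
               for k in range(n + 1))

for n in range(1, 10):
    assert shoe_sum(n) == 0
print("identity holds for n = 1..9")
```

Of course this is only evidence, not a proof; the inclusion-exclusion shoe argument above is what establishes it for all $n$.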
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/556322', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/90401/']}
jdg_73380
stackexchange
llm_judgeable_groundtruth_similarity
173000
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the atomic spectrum (absorption) of hydrogen. The Bohr's model postulates that there are only certain fixed orbits allowed in the atom. An atom will only be excited to a higher orbit, if it is supplied with light that precisely matches the difference in energies between the two orbits. But how precise does 'precisely' mean. Of course, if we need energy $E$ to excite the electron to a higher energy level, and I supply a photon with just $E/2$ I would expect nothing to happen (since the electron cannot occupy an orbit between the allowed ones). But what if I supplied a photon with energy $0.99E$, or $1.0001E$ or some such number. What will happen then? I think that the electron should still undergo excitation precisely because the lines we observe in the line spectrum have some thickness. Which means that for a given transition, the atom absorbs frequencies in a certain range. Is my reasoning correct? If not, why? How does Bohr's model explain this? How about modern theory? If I'm right, what is the range of values that an atom can 'accept' for a given transition? Now provide the response and nothing else.
According to the Bohr model, the absorption and emission lines should be infinitely narrow, because there is only one discrete value for each energy level. There are a few mechanisms that broaden the line width - natural line width, Lorentz pressure broadening, Doppler broadening, Stark and Zeeman broadening, etc. Only the first one isn't described by Bohr theory - it's clearly a quantum effect, a direct consequence of the time-energy uncertainty principle: $$\Delta E\Delta t \ge \frac{\hbar}{2}$$ where $\Delta E$ is the uncertainty in the energy of the state, and $\Delta t$ is its decay time. Most excited states have lifetimes of $10^{-8}-10^{-10}\,\mathrm{s}$, so the uncertainty in the energy slightly broadens the spectral line, by an amount of order $10^{-4}\,Å$.
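A quick order-of-magnitude check of those numbers (the 500 nm wavelength below is an illustrative assumption for a visible line, not a specific hydrogen transition):

```python
# Natural linewidth from the time-energy uncertainty relation,
# ΔE ≈ ħ / (2 Δt), converted to a wavelength broadening via
# E = hc/λ  =>  Δλ = λ² ΔE / (h c).

hbar = 1.054571817e-34  # J*s
h = 6.62607015e-34      # J*s
c = 2.99792458e8        # m/s
eV = 1.602176634e-19    # J

lifetime = 1e-8                        # typical excited-state lifetime, s
dE = hbar / (2 * lifetime)             # energy width, J
print(f"dE ~ {dE / eV:.2e} eV")        # on the order of 1e-8 eV

lam = 500e-9                           # illustrative wavelength, m
d_lam = lam**2 * dE / (h * c)          # broadening in metres
print(f"d_lam ~ {d_lam / 1e-10:.1e} Angstrom")  # order 1e-4 Å
```

The result lands at roughly $10^{-4}\,Å$, consistent with the figure quoted above.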
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/173000', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/26419/']}
jdg_73381
stackexchange
llm_judgeable_groundtruth_similarity
146154
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to change jpg files named with a dot and a space at the beginning and with a missing dot before the end (like this: . Startjpg to Start.jpg ). The dot makes the files hidden from sed and ls -al lists those files so I'm piping ls -al to sed . I've read through many manuals online and I always get this error

sed: -e expression #1, char 6: unknown command: `/'

I've tried

ls -al | sed -r '/^\./*.*/g'
ls -al | sed -r '/^\.//g'
ls -al | sed -r '/.*/[\.]g'
ls -al | sed 's/^\./\\1*./g'

and many more and even many more many more. also I need to change filejpg to file.jpg Now provide the response and nothing else.
With the Perl rename tool (which is called rename on Debian and friends including Ubuntu, it may be prename elsewhere): rename -n 's/(?<!\.)jpg$/.jpg/' * # -n makes it show you what it'll do, # but not actually do it. Remove the -n to # actually rename To break down that pattern: the jpg$ means "ends with 'jpg'". The (?<!\.) means "there isn't a dot before that 'jpg'". That prevents you from changing foo.jpg into foo..jpg , which would be silly. The * is the normal shell wildcard; rename takes a list of files to consider renaming. You can of course do /path/to/dir/* , pass a list of file names, use it in conjunction with find , etc. Removing dots and spaces from the beginning is fairly easy too: rename -n 's/^[. ]+//' * # trying -n first is good practice That will remove all dots and spaces at the beginning. It'll turn . . . foo into foo . Normally, * shell expansion won't yield files that have a name starting with a dot (hidden files). One option is to use .* ; that'll also yield the two special entries . (current directory) and .. (parent directory). That should be harmless in this case; the first command will ignore them (they don't end in jpg ), and the second command will try to rename them, but that should just produce an error. An alternative is find: find -type f -exec rename -n 's/^[. ]+//' '{}' + -type f will limit it to only files. You can of course also use any of find's other options as well.
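If the Perl rename tool isn't available, the same two rules are easy to reproduce in a short script. Here is a Python sketch (the function name and the dry-run flag are my own, mirroring rename's -n behaviour):

```python
import re
from pathlib import Path

def fix_names(directory, dry_run=True):
    """Strip leading dots/spaces and insert the missing dot before a
    trailing 'jpg', mirroring the two rename one-liners above."""
    changes = []
    for path in Path(directory).iterdir():
        if not path.is_file():
            continue
        name = re.sub(r"^[. ]+", "", path.name)       # drop leading '. '
        name = re.sub(r"(?<!\.)jpg$", ".jpg", name)   # filejpg -> file.jpg
        if name and name != path.name:
            changes.append((path.name, name))
            if not dry_run:
                path.rename(path.with_name(name))
    return changes
```

Call it once with dry_run=True to preview the renames, just as you would run rename -n first.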
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/146154', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/78459/']}
jdg_73382
stackexchange
llm_judgeable_groundtruth_similarity
2144057
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The answer to this sample homework problem is "1,000,000", but I do not understand why: What is the output of the following code? int main(void) { float k = 1; while (k != k + 1) { k = k + 1; } printf("%g", k); // %g means output a floating point variable in decimal} If the program runs indefinitely but produces no output, write INFINITE LOOP as the answer to the question. All of the programs compile and run. They may or may not contain serious errors, however. You should assume that int is four bytes. You should assume that float has the equivalent of six decimal digits of precision. You may round your answer off to the nearest power of 10 (e.g., you can say 1,000 instead of 2^10 (i.e., 1024)). I do not understand why the loop would ever terminate. Now provide the response and nothing else.
It doesn't run forever for the simple reason that floating point numbers are not perfect. At some point, k will become big enough so that adding 1 to it will have no effect. At that point, k will be equal to k+1 and your loop will exit. Floating point numbers can be differentiated by a single unit only when they're in a certain range. As an example, let's say you have an integer type with 3 decimal digits of precision for a positive integer and a single-decimal-digit exponent. With this, you can represent the numbers 0 through 999 perfectly as 000x10 0 through 999x10 0 (since 10 0 is 1): What happens when you want to represent 1000? You need to use 100x10 1 . This is still represented perfectly. However, there is no accurate way to represent 1001 with this scheme, the next number you can represent is 101x10 1 which is 1010. So, when you add 1 to 1000, you'll get the closest match which is 1000.
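You can watch this happen directly. Here is a sketch in Python, whose float is a 64-bit double, so the saturation point is $2^{53}$ rather than the $2^{24}$ of a four-byte float:

```python
# The gap between adjacent doubles reaches 2 at 2**53, so "+ 1" rounds
# back to the same value and the condition k != k + 1 becomes false.
k = 2.0 ** 53
print(k + 1 == k)                    # True: adding 1 changes nothing
print(2.0 ** 24 + 1 == 2.0 ** 24)    # False for doubles; a 32-bit float
                                     # would already saturate here
```

The same reasoning, scaled down to six decimal digits of precision, is where the homework's rounded answer comes from.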
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2144057', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/133466/']}
jdg_73383
stackexchange
llm_judgeable_groundtruth_similarity
42564515
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an issue with react when I want to change the selected option.The problem is that the value is an object and I can't pass it in option value attribut. See the following code: class Selector extends React.Component { contructor(props) { super(props) this.state = { obj: null } this.handleChange = this.handleChange.bind(this) } handleChange(e) { this.setState({obj: e.target.value}) } render() { <select onChange={handleChange}> {this.props.listOption.map((option, index) => <option key={index} value={option.obj}> {option.name} </option> )} </select> }} and with <Selector option={[{name: "name", obj:{...}}, ...]}> I need to change the state of the component with the value of the selected option.What I get when the state change is "object Object" . I suppose this happens because react can't embed javascript object in attribut of final view. I am right? Moreover, I set obj in state as null in the constructorIs there a right way to do it? Now provide the response and nothing else.
You can make use of the index of the options: class Selector extends React.Component { constructor(props) { super(props); this.state = { obj: null }; this.handleChange = this.handleChange.bind(this); } handleChange(e) { this.setState({obj: this.props.listOption[e.target.value].obj}); } render() { return ( <select onChange={this.handleChange}> {this.props.listOption.map((option, index) => <option key={index} value={index}> {option.name} </option> )} </select> ); }} Moreover, I set obj in state as null in the constructor. Is there a right way to do it? It depends on your requirements. If you want to show at least one option as selected, you can keep that instead of null
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42564515', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6213017/']}
jdg_73384
stackexchange
llm_judgeable_groundtruth_similarity
1383
Below is a question asked on the forum quantumcomputing.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: One of the common claims about quantum computers is their ability to "break" conventional cryptography. This is because conventional cryptography is based on prime factors, something which is computationally expensive for conventional computers to calculate, but which is a supposedly trivial problem for a quantum computer. What property of quantum computers makes them so capable of this task where conventional computers fail and how are qubits applied to the problem of calculating prime factors? Now provide the response and nothing else.
The short answer $\newcommand{\modN}[1]{#1\,\operatorname{mod}\,N}\newcommand{\on}[1]{\operatorname{#1}}$Quantum computers are able to run the subroutines of an algorithm for factoring exponentially faster than any known classical counterpart. This doesn't mean classical computers CAN'T do it fast too; we just don't know, as of today, a way for classical algorithms to run as efficiently as quantum algorithms. The long answer Quantum computers are good at Discrete Fourier Transforms. There's a lot at play here that isn't captured by just " it's parallel " or " it's quick ", so let's get into the blood of the beast. The factoring problem is the following: Given a number $N = pq$ where $p,q$ are primes, how do you recover $p$ and $q$? One approach is to note the following: If I look at a number $\modN{x}$, then either $x$ shares a common factor with $N$, or it doesn't. If $x$ shares a common factor, and isn't a multiple of $N$ itself, then we can easily find the common factors of $x$ and $N$ (through the Euclidean algorithm for greatest common divisors). Now a not-so-obvious fact: the set of all $x$ that don't share a common factor with $N$ forms a multiplicative group $\on{mod} N$. What does that mean? You can look up the definition of a group on Wikipedia. Let the group operation be multiplication to fill in the details, but all we really care about here is the following consequence of that theory: the sequence $$ \modN{x^0}, \quad\modN{x^1}, \quad\modN{x^2}, ... $$ is periodic when $x,N$ don't share common factors (try $x = 2$, $N = 5$ to see it first hand): $$\newcommand{\mod}[1]{#1\,\operatorname{mod}\,5}\mod1 = 1,\quad\mod2 = 2,\quad\mod4 = 4,\quad\mod8 = 3,\quad\mod{16} = 1.$$ Now how many natural numbers $x$ less than $N$ don't share any common factors with $N$? That is answered by Euler's totient function : it's $(p-1)(q-1)$. 
Lastly, drawing on the group theory, the length of the repeating chains $$ \modN{x^0}, \quad\modN{x^1}, \quad\modN{x^2}, ... $$ divides that number $(p-1)(q-1)$. So if you know the period of sequences of powers of $x \mod N$, then you can start to put together a guess for what $(p-1)(q-1)$ is. Moreover, if you know what $(p-1)(q-1)$ is, and what $pq$ is (that's $N$, don't forget!), then you have 2 equations with 2 unknowns, which can be solved through elementary algebra to separate $p,q$. Where do quantum computers come in? The period finding. There's an operation called a Fourier transform, which takes a function $g$ written as a sum of periodic functions $a_1 e_1 + a_2 e_2 + ... $ where the $a_i$ are numbers and the $e_i$ are periodic functions with period $p_i$, and maps it to a new function $\hat{g}$ such that $ \hat{g}(p_i) = a_i$. Computing the Fourier transform is usually introduced as an integral, but when you want to just apply it to an array of data (the $i$th element of the array is $g(i)$) you can use this tool called the Discrete Fourier Transform, which amounts to multiplying your "array", as if it were a vector, by a very big unitary matrix. Emphasis on the word unitary: it's a rather special property, described here . But the key takeaway is the following: In the world of physics, all operators obey the same general mathematical principle: unitarity . So that means it's not unreasonable to replicate that DFT matrix operation as a quantum operator. Now here is where it gets deep: an $n$-qubit array can represent $2^n$ possible array elements (consult anywhere online for an explanation of that, or drop a comment). And similarly, an $n$-qubit quantum operator can act on that entire $2^n$-dimensional quantum space and produce an answer that we can interpret. See this Wikipedia article for more detail. If we can do this Fourier transform on an exponentially large data set using only $n$ qubits, then we can find the period very quickly. 
If we can find the period very quickly, we can rapidly assemble an estimate for $(p-1)(q-1)$. And once we have that, given our knowledge of $N=pq$, we can solve for $p$ and $q$. That's what's going on here, at a very high level.
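The classical post-processing described above can be sketched in a few lines (Python; the brute-force period() loop stands in for the quantum Fourier-transform step, which is exactly the part a quantum computer does exponentially faster):

```python
from math import gcd

def period(x, N):
    """Smallest r > 0 with x**r = 1 (mod N), found by brute force.
    This is the step Shor's algorithm replaces with a quantum DFT."""
    r, acc = 1, x % N
    while acc != 1:
        acc = (acc * x) % N
        r += 1
    return r

N, x = 15, 2                      # toy semiprime, x coprime to N
r = period(x, N)                  # the chain 2, 4, 8, 1 repeats: r = 4
# For even r, gcd(x**(r//2) +/- 1, N) usually exposes the factors:
p = gcd(x ** (r // 2) - 1, N)
q = gcd(x ** (r // 2) + 1, N)
print(r, p, q)                    # 4 3 5
```

The classical loop takes time proportional to the period, which can be nearly as large as $N$ itself; that is the exponential cost the quantum subroutine removes.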
{}
{'log_upvote_score': 5, 'links': ['https://quantumcomputing.stackexchange.com/questions/1383', 'https://quantumcomputing.stackexchange.com', 'https://quantumcomputing.stackexchange.com/users/842/']}
jdg_73385
stackexchange
llm_judgeable_groundtruth_similarity
84227
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to install 389-ds , And it gives me this warning: WARNING: There are only 1024 file descriptors (hard limit) available, which limit the number of simultaneous connections. I understand about file descriptors, but I don't understand about soft and hard limits. When I run cat /proc/sys/fs/file-max , I get back 590432 . This should imply that I can open up to 590432 files (i.e. have up to 590432 file descriptors. But when I run ulimit , it gives me different results: $ ulimitunlimited$ ulimit -Hn # Hard limit4096$ ulimit -Sn # Soft limit1024 But what are the hard / soft limit from ulimit , and how do they relate to the number stored at /proc/sys/fs/file-max ? Now provide the response and nothing else.
According to the kernel documentation , /proc/sys/fs/file-max is the maximum, total, global number of file handles the kernel will allocate before choking. This is the kernel's limit, not your current user's. So you can open 590432 file handles, provided you're alone on an idle system (single-user mode, no daemons running). File handles ( struct file in the kernel) are different from file descriptors: multiple file descriptors can point to the same file handle, and file handles can also exist without an associated descriptor internally. No system-wide file descriptor limit is set; this can only be mandated per process. Note that the documentation is out of date: the file has been /proc/sys/fs/file-max for a long time. Thanks to Martin Jambon for pointing this out. The difference between soft and hard limits is answered here, on SE . You can raise or lower a soft limit as an ordinary user, provided you don't overstep the hard limit. You can also lower a hard limit (but you can't raise it again for that process). As the superuser, you can raise and lower both hard and soft limits. The dual limit scheme is used to enforce system policies, but also to allow ordinary users to set temporary limits for themselves and later change them. Note that if you try to lower a hard limit below the soft limit (and you're not the superuser), you'll get EINVAL back (Invalid Argument). So, in your particular case, ulimit (which is the same as ulimit -Sf ) says you don't have a soft limit on the size of files written by the shell and its subprocesses (that's probably a good idea in most cases). Your other invocation, ulimit -Hn , reports on the -n limit (maximum number of open file descriptors), not the -f limit, which is why the soft limit seems higher than the hard limit. If you enter ulimit -Hf you'll also get unlimited .
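You can poke at these limits programmatically too; here is a sketch using Python's standard resource module (Unix only):

```python
import resource

# RLIMIT_NOFILE is the per-process "max open file descriptors" pair,
# the same numbers that ulimit -Sn / ulimit -Hn report.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# An unprivileged process may move its soft limit anywhere up to the
# hard limit; re-setting the current values is always legal:
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

# Raising the hard limit, by contrast, needs superuser privileges, and
# asking for soft > hard raises ValueError (EINVAL at the C level).
```

This mirrors exactly the soft/hard rules described above: lowering is free, raising beyond the hard limit is not.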
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/84227', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/11133/']}
jdg_73386
stackexchange
llm_judgeable_groundtruth_similarity
43148767
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We recently migrated from SQL Server 2012 to SQL Server 2014 and all our FOR XML code started throwing errors about non-printable ASCII characters.I wrote this horrible function to remove non-printable ASCII characters as a quick fix. I want to replace it with something cleaner. Is there a way to do this? ALTER FUNCTION [dbo].[remove_non_printable_chars](@input_string nvarchar(max))RETURNS nvarchar(max)BEGIN RETURN REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE( REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE( REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE( REPLACE(REPLACE(@input_string, CHAR(1), ''),CHAR(2), ''),CHAR(3), ''),CHAR(4), ''),CHAR(5), ''),CHAR(6), ''),CHAR(7), ''),CHAR(8), ''),CHAR(9), ''),CHAR(10), ''), CHAR(11), ''),CHAR(12), ''),CHAR(13), ''),CHAR(14), ''),CHAR(15), ''),CHAR(16), ''),CHAR(17), ''),CHAR(18), ''),CHAR(19), ''),CHAR(20), ''), CHAR(21), ''),CHAR(22), ''),CHAR(23), ''),CHAR(24), ''),CHAR(25), ''),CHAR(26), ''),CHAR(27), ''),CHAR(28), ''),CHAR(29), ''),CHAR(30), ''), CHAR(31), ''), NCHAR(0) COLLATE Latin1_General_100_BIN2, '')END Here's the FOR XML code that broke. (I did not write this. It was already in the code base). SELECT @HTMLTableData =( SELECT HTMLRows FROM ( SELECT N'<tr>' + HTMLRow + N'</tr>' AS HTMLRows FROM @HTMLRowData ) mi FOR XML PATH(''), TYPE).value('/', 'NVARCHAR(MAX)') Now provide the response and nothing else.
Another Option. This function will replace control characters and correct any residual repeating spaces. For example Jane Smith{13}was here will not be returned as Jane Smithwas here , but rather Jane Smith was here CREATE FUNCTION [dbo].[udf-Str-Strip-Control](@S varchar(max))Returns varchar(max)Begin ;with cte1(N) As (Select 1 From (Values(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) N(N)), cte2(C) As (Select Top (32) Char(Row_Number() over (Order By (Select NULL))-1) From cte1 a,cte1 b) Select @S = Replace(@S,C,' ') From cte2 Return ltrim(rtrim(replace(replace(replace(@S,' ','†‡'),'‡†',''),'†‡',' ')))End--Select [dbo].[udf-Str-Strip-Control]('Michael '+char(13)+char(10)+'LastName') --Returns: Michael LastName
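For comparison, if you get the chance to clean the data before it reaches SQL Server, the same transformation (map characters 0-31 to spaces, then collapse runs of spaces and trim) is a short sketch in, e.g., Python; this mirrors the T-SQL function above rather than replacing it:

```python
import re

def strip_control(s):
    # Replace ASCII control characters (0x00-0x1F) with spaces, then
    # collapse repeated spaces and trim, like the T-SQL udf above.
    s = re.sub(r"[\x00-\x1f]", " ", s)
    return re.sub(r" {2,}", " ", s).strip()

print(strip_control("Jane Smith\rwas here"))  # Jane Smith was here
```

Note the residual-space handling: the embedded carriage return becomes a single space instead of simply vanishing.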
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43148767', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4320738/']}
jdg_73387
stackexchange
llm_judgeable_groundtruth_similarity
6086836
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Not sure what exactly to google for this question, so I'll post it directly to SO: Variables in Haskell are immutable Pure functions should result in same values for same arguments From these two points it's possible to deduce that if you call somePureFunc somevar1 somevar2 in your code twice, it only makes sense to compute the value during the first call. The resulting value can be stored in some sort of a giant hash table (or something like that) and looked up during subsequent calls to the function. I have two questions: Does GHC actually do this kind of optimization? If it does, what is the behaviour in the case when it's actually cheaper to repeat the computation than to look up the results? Thanks. Now provide the response and nothing else.
GHC doesn't do automatic memoization. See the GHC FAQ on Common Subexpression Elimination (not exactly the same thing, but my guess is that the reasoning is the same) and the answer to this question . If you want to do memoization yourself, then have a look at Data.MemoCombinators . Another way of getting memoization is to use laziness. For example, you can define a list in terms of itself. The definition below is an infinite list of all the Fibonacci numbers (taken from the Haskell Wiki ): fibs = 0 : 1 : zipWith (+) fibs (tail fibs) Because the list is realized lazily, it's similar to having precomputed (memoized) the previous values; e.g., evaluating fibs !! 10 forces the first eleven elements, so that fibs !! 11 is then much faster.
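In other words, memoization is always opt-in. For contrast, here is what the explicit opt-in looks like in a strict language (a Python sketch; the same idea is what Data.MemoCombinators packages up for Haskell):

```python
from functools import lru_cache

@lru_cache(maxsize=None)        # explicit, opt-in memo table
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))   # instant; naive recursion would make an astronomical
                  # number of calls for the same result
```

Without the decorator the compiler/interpreter will not cache anything for you, which is exactly the situation with GHC and pure functions.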
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6086836', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_73388
stackexchange
llm_judgeable_groundtruth_similarity
14948791
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm making an iOS App that needs to show tweets in a List feed that i have set up. The problem is that [i don't want to use the end user's twitter account at all], i just want to show that one list feed in the app, by pulling its data in through the Twitter API. Unfortunately for me, the Twitter API after version 1.1, requires authentication to do anything with it. https://dev.twitter.com/docs/api/1.1/get/lists/statuses#example-request All of the cases i have seen are using the end user's login within iOS to pull feeds in, which works, but here is my issue: Can I authenticate these requests for the list feed, without the end user who is using the app needing to be signed into their own twitter account on iOS. i.e. the requests are authenticated with one specific twitter account [mine]? or if there is a way to get the raw feed data without using the twitter API [i need to raw data to place into a UITableView within my app - i don't want to use a UIWebView]. this would make the feed accessible to end users without a twitter account [the list feed is public anyway, so i don't know why it needs authentication in the first place]. Many thanks! Now provide the response and nothing else.
Update: There seems to be a request-limit issue with this approach; use the following answer instead: https://stackoverflow.com/a/15419005/1228669 It uses STTwitter , which contains a method to check the request limit: - (void)getRateLimitsForResources:(NSArray *)resources // eg. statuses,friends,trends,help successBlock:(void(^)(NSDictionary *rateLimits))successBlock errorBlock:(void(^)(NSError *error))errorBlock { Use this method to check the current API limit; the limit now applies per 15-minute interval. For more details, see Twitter 1.1 API Limit ================================= If you haven't found a solution yet, you can try this approach: Register your app at twitter https://dev.twitter.com/apps At the bottom of the app information page, click the "Create my access token" button; it will create an access token and secret. Download FHSTwitterEngine https://github.com/fhsjaagshs/FHSTwitterEngine Use the following code to set the access token on the engine: self.engine = [[FHSTwitterEngine alloc]initWithConsumerKey:@"yourAppConsKey" andSecret:@"yourAppsecret"];OAToken *token = [[OAToken alloc] init];token.key = @"setaccessToken";token.secret = @"setAccessTokenSecret";[self.engine setAccessToken:token]; You can try this method to get tweets: [self.engine getTimelineForUser:username isID:NO count:1]; Also make sure SystemConfiguration.framework is added to your project
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14948791', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1885857/']}
jdg_73389
stackexchange
llm_judgeable_groundtruth_similarity
9326315
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: does anyone know of way I can change the existing display name publicly as for all users. I want to have firstname lastname as the default because this will reflect in the forum. I have read all the forums and have tried all the hacks, any suggestions would be appreciated. Thanks in advance Now provide the response and nothing else.
Problem with using the admin_head hook is that it doesn't work for users who don't use the admin system. Also, my attempts to implement the solution posted by Marty failed because it doesn't seem that the display_name can be updated by update_user_meta() - you have to use wp_update_user(). My proposal - put this in your functions.php file: function force_pretty_displaynames($user_login, $user) { $outcome = trim(get_user_meta($user->ID, 'first_name', true) . " " . get_user_meta($user->ID, 'last_name', true)); if (!empty($outcome) && ($user->data->display_name!=$outcome)) { wp_update_user( array ('ID' => $user->ID, 'display_name' => $outcome)); }}add_action('wp_login','force_pretty_displaynames',10,2); For me (using WP 3.4.1), that works OK, replacing the display name as soon as they log in.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9326315', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1082888/']}
jdg_73390
stackexchange
llm_judgeable_groundtruth_similarity
1906868
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is a simple inequality problem that I have found in a book (as part of a solution of a larger problem), however I have failed to prove it. The only proof provided is that it is true since $(x-2)^2\ge0$ (I hope this could help you). Can anyone please provide me with a proof or even the start of it. Now provide the response and nothing else.
Note that for every $ x > 1 $, we have\begin{align} \frac{x}{\sqrt{x - 1}}& = \frac{x - 1}{\sqrt{x - 1}} + \frac{1}{\sqrt{x - 1}} \\& = \sqrt{x - 1} + \frac{1}{\sqrt{x - 1}} \\& \geq 2 \sqrt{\sqrt{x - 1} \cdot \frac{1}{\sqrt{x - 1}}} \qquad (\text{By the AM-GM Inequality.}) \\& = 2.\end{align}
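It is worth adding that the hint quoted in the question, $(x-2)^2 \ge 0$, gives the same bound directly (this is no coincidence: expanding a squared difference is exactly how the two-term AM-GM inequality is proved):\begin{align}(x - 2)^2 \ge 0& \implies x^2 - 4x + 4 \ge 0 \\& \implies x^2 \ge 4(x - 1) \\& \implies x \ge 2\sqrt{x - 1} \qquad (\text{taking square roots; } x > 1 > 0) \\& \implies \frac{x}{\sqrt{x - 1}} \ge 2 \qquad (\text{dividing by } \sqrt{x - 1} > 0).\end{align}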
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1906868', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/356331/']}
jdg_73391
stackexchange
llm_judgeable_groundtruth_similarity
42330078
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Consider the following code and output: #include <iostream>int Add(int a, int b) { return a + b;}int Subtract(int a, int b) { return a - b;}int main() { int (*fn1)(int, int); int (*fn2)(int, int); fn1 = &Add; fn2 = &Subtract; std::cout << "fn1 = " << fn1 << "\n"; std::cout << "*fn1 = " << *fn1 << "\n"; std::cout << "fn2 = " << fn2 << "\n"; std::cout << "*fn2 = " << *fn2 << "\n";} Output: fn1 = 1*fn1 = 1fn2 = 1*fn2 = 1 As a pointer, I would expect fn1 and fn2 to be memory addresses, and I don't really know what to expect for *fn1 and *fn2 . According to Wikipedia , "a function pointer points to executable code within memory", but I don't see how "executable code within memory" corresponds with "1". Now provide the response and nothing else.
You can perform interpolation with most CSS functions, including rgba() (see an example here ). In fact, interpolation is one of the main features of custom properties. But you cannot do this with url() , as url(var(--url)) is parsed not as a url( function token followed by var(--url) followed by a ) , but a single url() token that is invalid because the var(--url) is being treated as a URL itself, and unquoted URLs in url() tokens cannot contain parentheses unless they are escaped. This means the substitution never actually occurs, because the parser never sees any var() expressions in the property value — indeed, your background declaration is completely invalid. If you didn't understand any of that, that's fine. Just know that you cannot use var() interpolation with url() due to legacy reasons. Even though the problem depicted in the question is related to the legacy url() token, you cannot do this by building URL tokens out of several var() expressions either, in case you were thinking of trying something like --uo: url(; --uc: ); or --uo: url("; --uc: "); , and background: var(--uo) var(--url) var(--uc); . This is because custom properties cannot contain unmatched string delimiters or parts of url() tokens (called bad URL tokens) . If you want to specify a URL in a custom property, you need to write out the entire url() expression, and substitute that entire expression: :root { --url: url("https://download.unsplash.com/photo-1420708392410-3c593b80d416");}body { background: var(--url);} Or, use JavaScript instead of var() to perform the interpolation.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/42330078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2032413/']}
jdg_73392
stackexchange
llm_judgeable_groundtruth_similarity
393957
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I am setting up an app to be hosted using VMs(probably amazon, but that is not set in stone) which will require both HTTP load balancing and load balancing a large number(50k or so if possible) of persistant TCP connections. The amount of data is not all that high, but updates are frequent. Right now I am evaluating load balancers and am a bit confused about the architecture of HAProxy. If I use HAProxy to balance the TCP connections, will all the resulting traffic have to flow through the load balancer? If so, would another solution(such as LVS or even nginx_tcp_proxy_module) be a better fit? Now provide the response and nothing else.
HAProxy (like many load balancers) generally maintains two conversations. The proxy has a session (tcp in this case) with the client, and another session with the server. Therefore with proxies you end up seeing 2x the connections on the load balancer, and all traffic flows through the load balancer. When it comes to scaling across multiple load balancers, I don't think you need to. But a practical and fairly easy way to do this is to use something like keepalived with two floating IPs and round-robin DNS between those two IPs. With keepalived, if one of the load balancers goes down, the other will hold both IPs, so you get high availability this way. That being said, I think you will be fine with one active haproxy instance with your load. HAProxy scales very well. As an example, the Stack Exchange network uses web sockets, which maintain open TCP connections. While I am posting this we have 143,000 established TCP sockets on a VMware virtual machine with no issues. The CPU usage on the VM is around 7%. With this sort of setup with HAProxy, make sure you set maxconn high enough. Here is some example HAProxy config to get you started: frontend fe_websockets bind 123.123.123.123:80 mode tcp log global option tcplog timeout client 3600s backlog 4096 maxconn 50000 default_backend be_nywebsocketsbackend be_nywebsockets mode tcp option log-health-checks option redispatch option tcplog balance roundrobin server web1 10.0.0.1:1234 server web2 10.0.0.2:1234 timeout connect 1s timeout queue 5s timeout server 3600s
{}
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/393957', 'https://serverfault.com', 'https://serverfault.com/users/122875/']}
jdg_73393
stackexchange
llm_judgeable_groundtruth_similarity
37833561
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Currently, I have an older version of mongo, i.e 2.6 running on my system. I already have my site in production and have a lot of client data. I am planning an upgrade to mongo 3.2. So, my question is whether mongorestore of mongo v3.2 work with data dump of v2.6 ? Or, is it known to create problems? Any answers will be invaluable! Thanks Now provide the response and nothing else.
I asked this same question on the official MongoDB mailing list . They said not to upgrade more than 1 major version at a time. (Major versions being: 2.2, 2.4, 2.6, 3.0, 3.2, 3.4) I didn't want to follow the normal upgrade process of installing every version Just to launch mongod and then shut it down. That feels to me like it would leave cruft behind and I like to have my infrastructure building scripted and version controlled. So, I decided to launch new EC2 instances with the latest Ubuntu (since my Mongo v2.4 servers were also 2 LTS versions behind) and the latest MongoDB. I used docker images of intermediate versions of MongoDB to do the data upgrades. https://gist.github.com/RichardBronosky/2d04c7c2e9a5bea67cd9760a35415a3f#file-uat_mongodb_upgrade_from_prod-sh The bulk of the solution is this: # mongo.conf is using the default dbPath: /var/lib/mongodb# this path is for temporary use by the mongo docker containermkdir -p /data/db/dump# see: https://hub.docker.com/_/mongo/ (search for /data/db)# see: https://github.com/docker-library/mongo/blob/30d09dbd6343d3cbd1bbea2d6afde49f5d9a9295/3.4/Dockerfile#L59cd /data/dbmongodump -h prodmongo.int# Get major versions from https://hub.docker.com/r/library/mongo/tags/step=0for major_version in 2.6.12 3.0.14 3.2.11 3.4.1; do sudo docker stop some-mongo || true sudo docker rm some-mongo || true sudo docker run --name some-mongo -v /data/db:/data/db -d mongo:$major_version false; while [[ $? > 0 ]]; do sleep 0.5 sudo docker exec -it some-mongo mongo --eval 'printjson((new Mongo()).getDBNames())' done if (( $step == 0 )); then sudo docker exec -it some-mongo mongorestore /data/db/dump fi ((step += 1))done# Finish up with dockersudo rm -rf /data/db/dump/*sudo docker exec -it some-mongo bash -c 'cd /data/db; mongodump'sudo docker stop some-mongosudo docker rm some-mongo# Load upgraded data into latest version of MongoDB (WiredTiger storage engine will be used)mongorestore /data/db/dumpsudo rm -rf /data
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37833561', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1163139/']}
jdg_73394
stackexchange
llm_judgeable_groundtruth_similarity
20515679
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using RequireJS for my web application. I'm using EmberJS for the application framework. I've come to a point where, I think, I should start bundling my application into a single js file. That is where I get a little confused: If I finally bundle everything into one file for deployment, then my whole application loads in one shot, instead of on demand. Isn't bundling contradictory to AMD in general and RequireJS in particular? What further confuses me, is what I found on the RequireJS website: Once you are finished doing development and want to deploy your code for your end users, you can use the optimizer to combine the JavaScript files together and minify it. In the example above, it can combine main.js and helper/util.js into one file and minify the result. I found this similar thread but it doesn't answer my question. Now provide the response and nothing else.
If I finally bundle everything into one file for deployment, then my whole application loads in one shot, instead of on demand. Isn't bundling contradictory to AMD in general and RequireJS in particular?

It is not contradictory. Loading modules on demand is only one benefit of RequireJS. A greater benefit in my book is that modularization helps to use a divide-and-conquer approach. We can look at it in this way: even though all the functions and classes we put in a single file do not benefit from loading on demand, we still write multiple functions and multiple classes because it helps break down the problem in a structured way. However, the multiplicity of modules we create in development does not necessarily make sense when running the application in a browser.

The greatest cost of on-demand loading is sending multiple HTTP requests over the wire. Let's say your application has 10 modules and you send 10 requests to load it because you load these modules individually. Your total cost is going to be the cost you have to pay to load the bytes from the 10 files (let's call it Pc, for payload cost), plus an overhead cost for each HTTP request (let's call it Oc, for overhead cost). The overhead has to do with the data and computations that have to occur to initiate and close these requests. They are not insignificant. So you are paying Pc + 10*Oc. If you send everything in one chunk you pay Pc + 1*Oc. You've saved 9*Oc. In fact the savings are probably greater: since compression is often used at both ends to reduce the size of the data transmitted, compression is going to provide greater benefits if the entire data is compressed together than if it is compressed as 10 chunks. (Note: the above analysis omits details that are not useful to cover.)

Someone might object: "But you are comparing loading all the modules separately versus loading all the modules in one chunk. If we load on demand then we won't load all the modules."
As a matter of fact, most applications have a core of modules that will always be loaded, no matter what. These are the modules without which the application won't work at all. For some small applications this means all modules, so it makes sense to bundle all of them together. For bigger applications, this means that a core set of modules will be used every single time the application runs, but a small set will be used only on occasion. In the latter case, the optimization should create multiple bundles. I have an application like this. It is an editor with modes for various editing needs. A good 90% of the modules belong to the core. They are going to be loaded and used anyway so it makes sense to bundle them. The code for the modes themselves is not always going to be used but all the files for a given mode are going to be needed if the mode is loaded at all, so each mode should be its own bundle. So in this case a model with one core bundle and a series of mode bundles makes sense to a) optimize the deployed application but b) keep some of the benefits of loading on demand. That's the beauty of RequireJS: it does not require you to do one or the other exclusively.
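The request-overhead arithmetic above can be sketched numerically (the figures are illustrative placeholders, not measurements):

```python
# Numeric sketch of the cost model above: Pc is payload cost, Oc is
# per-request overhead. The values are made-up illustrative numbers.
def load_cost(payload_cost, overhead_per_request, num_requests):
    """Total cost of fetching the same payload in num_requests chunks."""
    return payload_cost + overhead_per_request * num_requests

Pc, Oc = 100.0, 5.0
separate = load_cost(Pc, Oc, 10)  # ten individual module files
bundled = load_cost(Pc, Oc, 1)    # one optimized bundle
print(separate, bundled, separate - bundled)  # 150.0 105.0 45.0
```

The difference is exactly the 9*Oc saving mentioned above, before even counting the extra compression benefit.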
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20515679', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1046810/']}
jdg_73395
stackexchange
llm_judgeable_groundtruth_similarity
54721441
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using the new React context API as in the code below. The last line will register the context into my component. I wonder how I can use the contextType for a stateless component?

class MyClass extends React.Component {
  render() {
    let value = this.context;
    /* render something based on the value of MyContext */
  }
}
MyClass.contextType = MyContext;

I tried the code below but it doesn't seem to work. The context in the component is an empty object.

const MyClass = (props, context) => {...}
MyClass.contextType = MyContext;

Now provide the response and nothing else.
There is no way of doing it with contextType. You should use a context Consumer with the render-props pattern, or React's useContext hook (introduced in React 16.8). The first one will look like this:

const MyClass = (props) => {
  return (
    <myContext.Consumer>
      {({name}) => <View>{name}</View>}
    </myContext.Consumer>
  )
}

And the second (preferred) way of doing it looks like this:

import React, {useContext} from 'react';

const MyClass = (props) => {
  const {name} = useContext(myContext)
  return <View>{name}</View>
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54721441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5421539/']}
jdg_73396
stackexchange
llm_judgeable_groundtruth_similarity
6482883
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to generate Rijndael KEY and IV using a passphrase? The key length must be in 256 bits. Now provide the response and nothing else.
I think you are looking for password-based key derivation. There is the Rfc2898DeriveBytes class that implements it. Rfc2898DeriveBytes takes a password, a salt, and an iteration count, and then generates keys through calls to the GetBytes method. RFC 2898 includes methods for creating a key and initialization vector (IV) from a password and salt. You can use PBKDF2, a password-based key derivation function, to derive keys using a pseudo-random function that allows keys of virtually unlimited length to be generated. The Rfc2898DeriveBytes class can be used to produce a derived key from a base key and other parameters. In a password-based key derivation function, the base key is a password and the other parameters are a salt value and an iteration count. For more information about PBKDF2, see RFC 2898, "PKCS #5: Password-Based Cryptography Specification Version 2.0". Example:

public static byte[] CreateKey(string password)
{
    var salt = new byte[] { 1, 2, 23, 234, 37, 48, 134, 63, 248, 4 };
    const int Iterations = 9872;
    using (var rfc2898DeriveBytes = new Rfc2898DeriveBytes(password, salt, Iterations))
        return rfc2898DeriveBytes.GetBytes(32);
}

You can use DeriveBytes in any symmetric algorithm, not just Rijndael. Example:

public static SymmetricAlgorithm InitSymmetric(SymmetricAlgorithm algorithm, string password, int keyBitLength)
{
    var salt = new byte[] { 1, 2, 23, 234, 37, 48, 134, 63, 248, 4 };
    const int Iterations = 234;
    using (var rfc2898DeriveBytes = new Rfc2898DeriveBytes(password, salt, Iterations))
    {
        if (!algorithm.ValidKeySize(keyBitLength))
            throw new InvalidOperationException("Invalid size key");
        algorithm.Key = rfc2898DeriveBytes.GetBytes(keyBitLength / 8);
        algorithm.IV = rfc2898DeriveBytes.GetBytes(algorithm.BlockSize / 8);
        return algorithm;
    }
}

private static byte[] Transform(byte[] bytes, Func<ICryptoTransform> selectCryptoTransform)
{
    using (var memoryStream = new MemoryStream())
    {
        using (var cryptoStream = new CryptoStream(memoryStream, selectCryptoTransform(), CryptoStreamMode.Write))
            cryptoStream.Write(bytes, 0, bytes.Length);
        return memoryStream.ToArray();
    }
}

Usage:

public static void Main()
{
    using (var rijndael = InitSymmetric(Rijndael.Create(), "TestPassword", 256))
    {
        var text = "Some text to encrypt";
        var bytes = Encoding.UTF8.GetBytes(text);
        var encryptedBytes = Transform(bytes, rijndael.CreateEncryptor);
        var decryptedBytes = Transform(encryptedBytes, rijndael.CreateDecryptor);
        var decryptedText = Encoding.UTF8.GetString(decryptedBytes);
        Debug.Assert(text == decryptedText);
    }
}

Make sure you change the salt and iteration parameters.
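As an aside, the same PBKDF2 (RFC 2898) derivation can be sketched with Python's standard library; Rfc2898DeriveBytes defaults to HMAC-SHA1, which is what `"sha1"` mirrors here, though I haven't cross-checked the raw bytes against the .NET output. The salt and iteration count are the arbitrary example values from the answer above.

```python
# Illustrative PBKDF2 key derivation via the standard library.
import hashlib

def create_key(password, salt, iterations, length=32):
    # Rfc2898DeriveBytes uses HMAC-SHA1 by default, hence "sha1" here.
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, iterations, dklen=length)

salt = bytes([1, 2, 23, 234, 37, 48, 134, 63, 248, 4])
key = create_key("TestPassword", salt, 9872)
print(len(key))  # 32
```

The derivation is deterministic: the same password, salt, and iteration count always yield the same key, which is what lets two parties re-derive a shared key from a passphrase.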
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6482883', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/670866/']}
jdg_73397
stackexchange
llm_judgeable_groundtruth_similarity
2064636
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a server with a incoming socket from a client.I need the get the IP address of the remote client.Tried searching google for in_addr but it's a bit troublesome.Any suggestions? Now provide the response and nothing else.
You need the getpeername function:

// assume s is a connected socket
socklen_t len;
struct sockaddr_storage addr;
char ipstr[INET6_ADDRSTRLEN];
int port;

len = sizeof addr;
getpeername(s, (struct sockaddr*)&addr, &len);

// deal with both IPv4 and IPv6:
if (addr.ss_family == AF_INET) {
    struct sockaddr_in *s = (struct sockaddr_in *)&addr;
    port = ntohs(s->sin_port);
    inet_ntop(AF_INET, &s->sin_addr, ipstr, sizeof ipstr);
} else { // AF_INET6
    struct sockaddr_in6 *s = (struct sockaddr_in6 *)&addr;
    port = ntohs(s->sin6_port);
    inet_ntop(AF_INET6, &s->sin6_addr, ipstr, sizeof ipstr);
}

printf("Peer IP address: %s\n", ipstr);
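For comparison, the same lookup exists in Python's standard library: `socket.getpeername()` on a connected socket returns the remote end's address. A minimal sketch using a loopback server/client pair:

```python
# Demonstrate getpeername() on a connected loopback socket pair.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()       # conn is the server-side connected socket

peer_ip, peer_port = conn.getpeername()  # address of the remote client
print(peer_ip)  # 127.0.0.1

cli.close()
conn.close()
srv.close()
```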
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2064636', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/250761/']}
jdg_73398
stackexchange
llm_judgeable_groundtruth_similarity
176064
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have something like a "mortgage calculator", and I now want to colour the panel depending on the inputs that the user has made in my input fields. This would be the code:

Panel[Column[{
  InputField[Dynamic[purchaseprice], Number, FieldHint -> "Purchase Price"],
  InputField[Dynamic[EK], Number, FieldHint -> "Equity capital"],
  InputField[Dynamic[mortgagerate], Number, FieldHint -> "Mortgage Rate in %"],
  InputField[Dynamic[supplymentary], Number, FieldHint -> "Supplementary costs per annum"],
  InputField[Dynamic[paybackperiod], Number, FieldHint -> "Payback Period"],
  InputField[Dynamic[grossincome], Number, FieldHint -> "Gross Income per annum"],
  InputField[Dynamic[calcrate], Number, FieldHint -> "Calculatory mortgage rate"]}]]

I tried to include something like:

Which[grossincome*0.33 > (purchaseprice - EK)*mortgagerate/100 + supplymentary +
   Max[{((purchaseprice - EK) - 0.66*purchaseprice)/paybackperiod, 0}],
 Background -> Green, Background -> Red]

so in the case of fulfilling the condition the panel gets coloured green, otherwise red. However I did not manage to integrate it in the panel, given my dynamic inputs. Any help would be much appreciated! Now provide the response and nothing else.
Try wrapping the Option value in Dynamic:

Panel[Column[{
   InputField[Dynamic[purchaseprice], Number, FieldHint -> "Purchase Price"],
   InputField[Dynamic[EK], Number, FieldHint -> "Equity capital"],
   InputField[Dynamic[mortgagerate], Number, FieldHint -> "Mortgage Rate in %"],
   InputField[Dynamic[supplymentary], Number, FieldHint -> "Supplementary costs per annum"],
   InputField[Dynamic[paybackperiod], Number, FieldHint -> "Payback Period"],
   InputField[Dynamic[grossincome], Number, FieldHint -> "Gross Income per annum"],
   InputField[Dynamic[calcrate], Number, FieldHint -> "Calculatory mortgage rate"]}],
 Background -> Dynamic@If[
    grossincome*0.33 > (purchaseprice - EK)*mortgagerate/100 + supplymentary +
      Max[{((purchaseprice - EK) - 0.66*purchaseprice)/paybackperiod, 0}],
    Green, Red]]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/176064', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/57422/']}
jdg_73399
stackexchange
llm_judgeable_groundtruth_similarity
416760
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have been using bash most of the time and has just started playing with other shells. I started with dash and tries to find out its version number but the usual method like -v or $version does not work. I can understand that --version is not going to work as that is GNU specific. I searched the net and found the following answers, all resorting to the package management system. How to tell the version number of dash? (the user is using CentOS, so the answer depends on the Redhat Package Management system) and How to find the version of the dash shell on Ubuntu bin? (This is from the AskUbuntu site so the answer depends on the Advanced Package Tool) So is there a way to find out the version of dash without resorting to the package management system? I will be very surprised if there are no simple way because I have always believed that querying the version of a software is one of the most fundamental functions. If that is really the case, I would be happy to hear some explanation e.g. what is the philosophy behind such design. I will be happy to accept either a simple way to get the dash version without resorting to the package management system or a convincing explanation (historical perspective also OK) on why I cannot do it as the accepted answer. Now provide the response and nothing else.
I have always believed that querying the version of a software is one of the most fundamental functions.

It isn't. It is a good idea that we had to learn. Many years ago, we didn't get a kernel, a package manager, and a package repository. We got an operating system. That had a version, and implicitly all of the operating system's component programs were associated with the operating system's version. This was as true for BSD as it was for PC-DOS.

The AT&T world at the start of the 1980s gave us the what program and the idea of embedded version strings put into binaries by the source code control system. For a while one could use that to find out the versions of things, albeit that often it was the versions of individual source files in a program rather than for the program as a whole. (I myself put this mechanism into all of my version 1 Command-Line Utilities for DOS and for OS/2, alongside a 16-bit WHAT program.) One still can today with a few OpenBSD …

$ what /bin/sh
/bin/sh
        PD KSH v5.2.14 99/07/13.2
$

… and FreeBSD binaries …

% what /bin/tcsh
/bin/tcsh:
        Copyright (c) 1991 The Regents of the University of California.
%

… but this is not the case with even most other programs on OpenBSD and FreeBSD any more, and certainly not with the Almquist shell on FreeBSD …

% what /bin/sh
/bin/sh:
%

… nor with the Debian Almquist shell.

% what /bin/dash
/bin/dash:
%

In 1988, Digital Research gave the world the idea that tools took a /? option to ask for option help, which Microsoft copied from DR-DOS into version 5.0 of its MS-DOS in 1991 and IBM into OS/2 in 1992. This idea, widely touted by word of mouth, on Fidonet, and in computer magazines at the time as a very good thing, found its way into GNU coding conventions as a --help option, to which was added a --version option. But this was not widespread for non-GNU tools in the Unix world, nor indeed widespread at the time that the Almquist shell was written in 1989; as the GNU convention did not even appear until the 1990s.
The Bourne Again shell (first published 1989) nowadays supports --version. This was likewise added to the MirBSD Korn shell (the original Korn shell being first published in 1983, remember), the TENEX C shell (1983), and the Z shell (1990), which nowadays all support --version. This mechanism has not, however, been added to the Almquist shell, even by the Debian people when they made their Debian Almquist shell decades later. From within the shells themselves, many shells (including several of the Korn variants, the Z shell, the Bourne Again shell, and the TENEX C shell) have a <shellname>_VERSION or version shell variable that can be accessed in shell script. Again, the Almquist shell has not.

Further reading:
- Mike Miller (2014-10-06). consider providing a DASH_VERSION variable. Debian bug #764172.
- https://unix.stackexchange.com/a/257764/5132
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/416760', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/259023/']}
jdg_73400
stackexchange
llm_judgeable_groundtruth_similarity
343
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Old versions of Mathematica ($VersionNumber < 6) supported a particular syntax of ParametricPlot3D[] that allowed for graphics specifications to be supplied as the fourth component of a vector-valued function; e.g. ParametricPlot3D[{f[u, v], g[u, v], h[u, v], Hue[w[u, v]]}, {u, umin, umax}, {v, vmin, vmax}]. Nowadays, this functionality has been superseded by the options ColorFunction, PlotStyle, and other related functions. Unfortunately, some flexibility seems to have been lost with these changes. For instance, the old implementation of ParametricPlot3D[] allowed me to use FaceForm[] to color the two faces of a polygon differently, with the stuff within FaceForm[] depending on the parameters. This allowed me to do things like this:

<<Version5`Graphics` (* simulates old-school graphics *)
ParametricPlot3D[{(3 + Cos[u]) Cos[v], (3 + Cos[u]) Sin[v], Sin[u],
  {EdgeForm[],
   FaceForm[ColorData["DarkRainbow"][Rescale[v, {0, Pi/2}]],
    ColorData["BrightBands"][Rescale[v, {0, Pi/2}]]]}},
 {u, -7 Pi/6, 7 Pi/12}, {v, Pi/6, Pi/2},
 Axes -> False, Boxed -> False, Lighting -> False, PlotPoints -> 41]

Unfortunately, as can be seen through close inspection, I miss out on the adaptive plotting capability that is provided by the current implementation of ParametricPlot3D[]. I know that in the current version, the output is a GraphicsComplex[] object, with the colors assigned through the option VertexColors, which doesn't interact at all with directives like FaceForm[]. It has been suggested to me that I do two plots with the different colorings, with one a perturbed version of the other, and then combining these two into one Graphics3D[] object. This seems rather wasteful to me, however. In short: how can I do parameter-dependent coloring of a surface plotted with ParametricPlot3D[], with the faces colored differently?
(added 1/21/2011) Both of the answers given so far rely on the fact that color gradients in Mathematica can be turned into images that can subsequently be used as textures. Unfortunately, this doesn't seem to be applicable to general coloring schemes. Consider for instance the following twisted cylinders:

<<Version6`Graphics` (* restore default graphics *)
With[{a = 1, p = 2/3},
 GraphicsGrid[{
   {ParametricPlot3D[{a Cos[u] Cos[p Pi v] - a Sin[u] Sin[p Pi v],
      a Cos[p Pi v] Sin[u] + a Cos[u] Sin[p Pi v], v},
     {u, 0, 3 Pi/2}, {v, -1, 1},
     Axes -> None, Boxed -> False,
     ColorFunction -> (Hue[(1 - Cos[3 Pi #5] Sin[4 #4])/2] &),
     ColorFunctionScaling -> False, Mesh -> False, PlotPoints -> 85],
    ParametricPlot3D[{a Cos[u] Cos[p Pi v] - a Sin[u] Sin[p Pi v],
      a Cos[p Pi v] Sin[u] + a Cos[u] Sin[p Pi v], v},
     {u, 0, 3 Pi/2}, {v, -1, 1},
     Axes -> None, Boxed -> False,
     ColorFunction -> (Hue[(1 + Cos[3 #4] Cos[4 Pi #5])/2] &),
     ColorFunctionScaling -> False, Mesh -> False, PlotPoints -> 85]}
   }]]

where the cylinder on the left is colored with Hue[(1 - Cos[3 Pi v] Sin[4 u])/2] and the one on the right is colored with Hue[(1 + Cos[3 u] Cos[4 Pi v])/2]. I want to be able to use something like FaceForm[Hue[(1 + Cos[3 u] Cos[4 Pi v])/2], Hue[(1 - Cos[3 Pi v] Sin[4 u])/2]], as with the following image:

<< Version5`Graphics`
With[{a = 1, p = 2/3},
 ParametricPlot3D[{a Cos[u] Cos[p Pi v] - a Sin[u] Sin[p Pi v],
   a Cos[p Pi v] Sin[u] + a Cos[u] Sin[p Pi v], v,
   {EdgeForm[],
    FaceForm[Hue[(1 + Cos[3 u] Cos[4 Pi v])/2],
     Hue[(1 - Cos[3 Pi v] Sin[4 u])/2]]}},
  {u, 0, 3 Pi/2}, {v, -1, 1},
  Axes -> None, Boxed -> False, Lighting -> False, PlotPoints -> 85]]

and it looks to me that the current solutions don't apply to this situation. What can be done here? Now provide the response and nothing else.
My friend C.H. enlightened me that in the current version of Mathematica, ColorFunction defines vertex colors, which in turn define polygon colors. Because vertexes cannot have two different colors for the two sides of a surface, neither can polygons. I will show two solutions.

So here is one solution. We can extract images from ColorData and just use textures - if you want to call ParametricPlot3D only once. You can easily map different textures on different sides of a surface, and use Specularity and Opacity with them.

ParametricPlot3D[{Cos[u] (3 + Cos[v]), Sin[u] (3 + Cos[v]), Sin[v]},
 {u, 0, 1.5}, {v, -3.5, 2},
 TextureCoordinateFunction -> ({#4, #5} &),
 PlotStyle -> Directive[Specularity[White, 50],
   FaceForm[Texture[ColorData["BrightBands", "Image"]],
    Texture[ColorData["DarkRainbow", "Image"]]]],
 Axes -> False, Lighting -> "Neutral", Mesh -> None, Boxed -> False]

Another solution, as you mentioned, is to put 2 surfaces together. I really like it, it's light and zippy. I will mention it here for completeness of example. In Mathematica we can make a surface (its polygons) transparent if you look from one side and colored from the other, with help of FaceForm[None, {}]. Clearly demonstrated below with a Möbius strip:

ParametricPlot3D[{Cos[t] (3 + r Cos[t/2]), Sin[t] (3 + r Cos[t/2]), r Sin[t/2]},
 {r, -1.5, 1.5}, {t, 0, 2 Pi},
 Mesh -> {10, 60}, PlotStyle -> FaceForm[None, Orange], Boxed -> False]

You can also do things like PlotStyle -> FaceForm[None, Directive[Orange, Opacity[.5]]]

We will use this. In each graphic one side of the surface is effectively turned off, so shifting the two surfaces with respect to each other is not needed.

Show[
 ParametricPlot3D[{(2 + Cos[v]) Cos[u], (2 + Cos[v]) Sin[u], Sin[v]},
  {v, -Pi/1, Pi/1.5}, {u, 0, Pi/1.5},
  Boxed -> False, Axes -> False, Mesh -> False,
  ColorFunction -> (ColorData["DarkRainbow"][#5] &),
  PlotPoints -> 30, PlotStyle -> FaceForm[{}, None]],
 ParametricPlot3D[{(2 + Cos[v]) Cos[u], (2 + Cos[v]) Sin[u], Sin[v]},
  {v, -Pi/1, Pi/1.5}, {u, 0, Pi/1.5},
  Boxed -> False, Axes -> False, Mesh -> False,
  ColorFunction -> (ColorData["BrightBands"][#5] &),
  PlotPoints -> 30, PlotStyle -> FaceForm[None, {}]]]

======================== Reply to 1st Comment ========================

Trimming the color maps anyhow will work:

ParametricPlot3D[{Cos[u] (3 + Cos[v]), Sin[u] (3 + Cos[v]), Sin[v]},
 {u, 0, 1.5}, {v, -3.5, 2},
 TextureCoordinateFunction -> ({#4, #5} &),
 PlotStyle -> Directive[Specularity[White, 50],
   FaceForm[
    Texture[ImageTake[ColorData["BrightBands", "Image"], {0, 31}, {85, 250}]],
    Texture[ImageTake[ColorData["DarkRainbow", "Image"], {0, 31}, {85, 250}]]]],
 Axes -> False, Lighting -> "Neutral", Mesh -> None, Boxed -> False]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/343', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/50/']}
jdg_73401
stackexchange
llm_judgeable_groundtruth_similarity
54751007
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Many blogs suggest to switch to Cloud Firestore because it's easy and well secured. Coming from Realtime Database, back when using Functions + RD it was easy to navigate through document triggers, like ref.parent. My setup is like this:

Users
  {userid}
    last_seen: "data"
    {forms}
      {formid}

However, I have added a document trigger with onCreate, and I want to get the value of last_seen:

exports.updateUser = functions.firestore.document('users/{userId}/forms/{formid}')
  .onCreate((snap, context) => {
    const newValue = snap.data();
    console.log("test value : " + newValue.test); // works
    console.log("form id: " + context.params.formid); // works
    console.log("user last seen : " + newValue.last_seen); // doesn't work, can't access the parent collection data
});

Now provide the response and nothing else.
I totally get the confusion with the switch to Firestore, but it's almost the exact same way in this case. In Realtime Database, you have the snapshot:

exports.doStuff = functions.database.ref('/users/{userId}/forms/{formId}')
  .onCreate((snapshot, context) => {
    const ref = snapshot.ref;
    const userRef = ref.parent.parent;
    userRef.once('value').then(parentSnap => {
      const user = parentSnap.val();
      const lastSeen = user.last_seen;
    });
  });

In Firestore (note that the trigger path goes inside document(), not onCreate()):

exports.doStuff = functions.firestore.document('/users/{userId}/forms/{formId}')
  .onCreate((snapshot, context) => {
    const ref = snapshot.ref;
    const userRef = ref.parent.parent;
    userRef.get().then(parentSnap => {
      const user = parentSnap.data();
      const lastSeen = user.last_seen;
    });
  });

Another thing to consider is that you are passing the userId in your params, so you could just build your own DocumentReference (assuming you're also using firebaseAdmin):

functions.firestore.document('/users/{userId}/forms/{formId}')
  .onCreate((snapshot, context) => {
    const userId = context.params.userId;
    const userRef = firebaseAdmin.firestore().collection('users').doc(userId);
    userRef.get().then(parentSnap => {
      const user = parentSnap.data();
      const lastSeen = user.last_seen;
    });
  });

It also allows you to decouple your logic for functions you may use often; consider it as a "helper" method: (NOTE: I switched to async/await by accident, it's a bit cleaner)

functions.firestore.document('/users/{userId}/forms/{formId}')
  .onCreate(async (snapshot, context) => {
    const userId = context.params.userId;
    const lastSeen = await getLastSeen(userId);
  });

// == Helper Functions ==-------------------
export async function getLastSeen(userId) {
  if (!userId) return Promise.reject('no userId');
  // User Ref
  const userSnap = await firebaseAdmin.firestore().collection('users').doc(userId).get();
  return userSnap.data().last_seen;
}

Now you can use getLastSeen() whenever you need it, and if you make a change you only have to adjust that one function.
If it's not something you call often then don't worry about it, but I would consider maybe a getUser() helper...
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/54751007', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_73402
stackexchange
llm_judgeable_groundtruth_similarity
5810468
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have fields hr and min , both integers in my application. For hr field, if the user enters "1" I would like Rails to automatically pad it to "01" before saving it to the database. Also for the min field if the user enter "0" it should put in as "00". How can I do this? Now provide the response and nothing else.
It'd be better to store it as an integer and just display it as you described at runtime. Every language has its own way to pad zeros - for Ruby you can use String#rjust. This method pads a string (right-justified) so that it becomes a given length, using a given padding character.

str.rjust(integer, padstr=' ') → new_str

If integer is greater than the length of str, returns a new String of length integer with str right justified and padded with padstr; otherwise, returns str.

some_int = 5
some_int.to_s.rjust(2, '0') # => '05'
some_int.to_s.rjust(5, '0') # => '00005'

another_int = 150
another_int.to_s.rjust(2, '0') # => '150'
another_int.to_s.rjust(3, '0') # => '150'
another_int.to_s.rjust(5, '0') # => '00150'
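As an aside (not part of the original Ruby answer), the same zero-padding idea exists in Python via str.rjust and str.zfill, for comparison:

```python
# Zero-padding integers rendered as strings, Python flavor.
padded2 = str(5).rjust(2, "0")
padded5 = str(5).zfill(5)
wide = str(150).zfill(3)   # already 3 chars wide, so unchanged
print(padded2, padded5, wide)  # 05 00005 150
```

In both languages the value stays an integer in storage; only the display is padded.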
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/5810468', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/92621/']}
jdg_73403
stackexchange
llm_judgeable_groundtruth_similarity
134508
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The "extension" (or "analytic") form of the theorem of Hahn-Banach has a natural and yet elegant proof. In just any textbook I have ever seen, it is proved first; the "separation" (or "geometric") version of Hahn-Banach's theorem is proved as a kind of corollary of the former. Question: Are the two theorems actually equivalent? If so, is any direct proof of the analytic version known that is instead based on the geometric one ? Now provide the response and nothing else.
Yes, the two theorems are equivalent in the sense that one can easily be deduced from the other and both have direct proofs from scratch. A standard textbook starting with a direct proof of the geometric version is Schaefer's Topological Vector Spaces, Chapter II, Section 3 . The statement Schaefer proves is: Let $F$ be a subspace of a topological vector space $E$ and let $U$ be a nonempty open convex subset, disjoint from $F$. Then there is a closed hyperplane $H$ containing $F$ and disjoint from $U$. The usual reductions via translation and taking the difference of the convex sets then yield the separation theorems of an open convex set from a point and of an open convex set from a compact convex set. The proof starts by a simple geometric observation: Let $U$ be an open and convex subset of a Hausdorff topological vector space of dimension $\geq 2$. If $U$ does not contain $0$, then there is a one-dimensional subspace disjoint from $U$. This is easily reduced to the two-dimensional case, where it is rather clear. To establish the above statement, a straightforward application of Zorn's lemma shows that there is a maximal (hence closed) subspace $M$ containing $F$ and disjoint from $U$. Since $U$ is non-empty, $E/M$ has dimension at least $1$. If the dimension of $E/M$ is $1$, then $M$ is a hyperplane and we're done. Suppose towards a contradiction that the dimension is at least $2$. The image of $U$ in $E/M$ does not contain $0$ and is open since the canonical projection $\pi \colon E \to E/M$ is open. Since $M$ is closed, $E/M$ is Hausdorff. Therefore there is a one-dimensional subspace $L$ of $E/M$ not meeting the image of $U$. The pre-image of $L$ contains $M$, is strictly larger and does not meet $U$, contradicting the maximality of $M$. In order to get the analytic form, identify a linear functional on a subspace with its graph in $E \times \mathbb{R}$. 
Endow $E \times \mathbb{R}$ with the product topology induced by the sublinear functional $p$ and the usual topology on $\mathbb{R}$. The set $U = \lbrace (x,t) \mid p(x) \lt t\rbrace$ is an open convex cone in $E \times \mathbb{R}$, not containing $(0,0)$. A linear functional $f$ is dominated by $p$ iff $\operatorname{graph}(f)$ is disjoint from $U$. A maximal closed hyperplane $M$ containing $\operatorname{graph}(f)$ and disjoint from $U$ is then seen to be the graph of a linear functional $F$ which is obviously an extension of $f$ and dominated by $p$.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/134508', 'https://mathoverflow.net', 'https://mathoverflow.net/users/26039/']}
jdg_73404
stackexchange
llm_judgeable_groundtruth_similarity
36145249
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been using this code:

<html>
<head>
<script>
function validatePfin() {
  var x1 = document.forms["pfin"]["cash"].value;
  var x2 = document.forms["pfin"]["all_assets"].value;
  var x3 = document.forms["pfin"]["all_debt"].value;
  if (x1 == null || x1 == "") {
    alert("Please enter Cash Amount.");
  }
  else {
    if (x2 == null || x2 == "") {
      alert("Please enter Non-Cash Assets.");
    }
    else {
      if (x3 == null || x3 == "") {
        alert("Please enter Total Debt.");
      }
      else {
        document.getElementById('sub1').value='Processing, please wait ... ';
        document.getElementById('sub1').disabled=true;
        form.submit();
      }
    }
  }
  return false;
}
</script>
</head>
<body>
<table>
<form method="POST" name="pfin" autocomplete="on" target="vframe" action="p_financials_s.php" enctype="multipart/form-data" onsubmit="return validatePfin()">
<tr><td>
<select name="cash" id="cash">
<option value="<?php echo $row['cash'];?>" selected><?php echo $row['cash'];?></option>
<option value="">Select</option>
<option value="0" >0</option>
<option value="1000" >1000</option>
</select>
<tr><td>
<select name="all_assets" id="all_assets">
<option value="<?php echo $row['all_assets'];?>" selected><?php echo $row['all_assets'];?></option>
<option value="">Select</option>
<option value="0" >0</option>
<option value="1000" >1000</option>
</select>
<tr><td>
<select name="all_debt" id="all_debt">
<option value="<?php echo $row['all_debt'];?>" selected><?php echo $row['all_debt'];?></option>
<option value="">Select</option>
<option value="0" >0</option>
<option value="1000" >1000</option>
</select>
<tr><td>
<input type="submit" id="sub1" class="back1" value="Save And Go To Next Step >">&nbsp;
</table>
</form>
</body>
</html>

This is a pared down snip from the code. What it does is that upon submission, the form checks to see if the fields have values and if not stops the submission.
What I am trying to figure out is how, using PHP/javascript (or ?), I can 'read' a form field, check to see if it has a value, and make that a variable for a conditional statement, without having to submit and post to the DB. For example, let's say I want the "cash" field to have a value, and if that is true, then I can make that a condition to, let's say, display a link:

<?
$cash1 = form.field.not-yet-submitted.cash
if ($cash1 != "") { echo "http://examplesite.com"; }
?>

I have tried using an onchange event in javascript to somehow bring the "cash" variable into memory, and then once done, somehow have the PHP variable pick it up, but that doesn't fire. Now provide the response and nothing else.
Not every "new" feature makes it immediately into abstraction layers such as spring-mongo. So instead, all you need do is define a class that uses the AggregationOperation interface, which will instead take a BSON object specified directly as its content:

public class CustomAggregationOperation implements AggregationOperation {

    private DBObject operation;

    public CustomAggregationOperation(DBObject operation) {
        this.operation = operation;
    }

    @Override
    public DBObject toDBObject(AggregationOperationContext context) {
        return context.getMappedObject(operation);
    }
}

Then you can use it in your aggregation like this:

Aggregation aggregation = newAggregation(
    match(Criteria.where("username").is("user001")),
    new CustomAggregationOperation(
        new BasicDBObject("$lookup",
            new BasicDBObject("from", "NewFeedContent")
                .append("localField", "content.contentId")
                .append("foreignField", "_id")
                .append("as", "NewFeedContent")
        )
    )
);

Which shows the custom class mixed with the built-in match() pipeline helper. All that happens underneath each helper is that it serializes to a BSON representation such as DBObject anyway. So the constructor here just takes the object directly, and returns it directly from .toDBObject(), which is the standard method on the interface that will be called when serializing the pipeline contents.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36145249', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6096423/']}
jdg_73405
stackexchange
llm_judgeable_groundtruth_similarity
566014
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wrote some code in C and I was surprised to see it take a longer time to execute than I expected. I want to know which operations are costly and how to get rid of them. I use assignment statements, conditional(nested), loops, function calls and call backs. What are some good references to common C performance pitfalls? Is there a good profiler I can use? Thank you all Thanks for all your inputs. You are absolutely right: It's algorithms that can slow things (drastically). Though a little performance gain can be achieved by coding practices - I am 100% convinced that only faulty algorithm can drastically slow down things . As a matter of fact: I was working on RB trees and inserting nodes in ascending order. It took enormous time (as bad as Binary Search Tree (Skewed) ). After seeking your advice I checked the algorithm where I made a mistake in balancing which made the tree leaning (skewed). I corrected it. Thanks once again for the suggestions. Now provide the response and nothing else.
Your performance issues probably have more to do with the algorithms you implement than with the operations you use. Posting the code might be useful. Telling us something about what you're trying to do, and what algorithm you're using would also help. As it is, your question doesn't provide enough information for anyone to give you a helpful answer. Other people recommended gprof - I second that, if you're interested in profiling your code. I've also used VTune before, and liked it. But first make sure you understand your code and what it does, and that the algorithm you're implementing is time efficient when dealing with the size of data you expect it to handle. As an aside, using C doesn't mean that your code would automatically run faster. I/O bound code would see no performance improvement, typically. UI heavy code might not benefit from using a low level language. Typically, C is a better implementation language where you need low level access, when interfacing with hardware or low level operating system services, or if you have very specific and stringent performance requirements that would be difficult to meet in a high level, garbage collected language. Or if you happen to like C, but that's obviously a subjective matter.
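The "algorithms over individual operations" point is easy to demonstrate in any language; a small illustrative sketch (Python here for brevity, but the same effect holds in C) comparing two functionally identical routines with different complexity:

```python
import random
import time

def has_duplicate_quadratic(xs):
    # O(n^2): compare every pair of elements
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicate_linear(xs):
    # O(n): a single pass with a hash set
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

data = random.sample(range(10_000_000), 2000)  # 2000 distinct values

t0 = time.perf_counter()
slow_answer = has_duplicate_quadratic(data)
t1 = time.perf_counter()
fast_answer = has_duplicate_linear(data)
t2 = time.perf_counter()

# identical results, wildly different cost profiles
assert not slow_answer and not fast_answer
print(f"quadratic: {t1 - t0:.3f}s, linear: {t2 - t1:.6f}s")
```

No amount of micro-tuning the inner comparison would close a gap like this; only the algorithm change does.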
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/566014', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/34590/']}
jdg_73406
stackexchange
llm_judgeable_groundtruth_similarity
11456631
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the following block, clicking on a_frame triggers the event handler on_frame_click, but clicking on a_label, which is a child of a_frame, does not. Is there a way to force a_frame to trap and handle events which originated on its children (preferably without having to add handlers to the children directly)? I am using Python 3.2.3.

import tkinter

def on_frame_click(e):
    print("frame clicked")

tk = tkinter.Tk()
a_frame = tkinter.Frame(tk, bg="red", padx=20, pady=20)
a_label = tkinter.Label(a_frame, text="A Label")
a_frame.pack()
a_label.pack()
tk.protocol("WM_DELETE_WINDOW", tk.destroy)
a_frame.bind("<Button>", on_frame_click)
tk.mainloop()

Now provide the response and nothing else.
Yes, you can do what you want, but it requires a bit of work. It's not that it's not supported, it's just that it's actually quite rare to need something like this so it's not the default behavior. TL;DR - research "tkinter bind tags". The Tkinter event model includes the notion of "bind tags". This is a list of tags associated with each widget. When an event is received on a widget, each bind tag is checked to see if it has a binding for the event. If so, the handler is called. If not, it continues on. If a handler returns "break", the chain is broken and no more tags are considered. By default, the bind tags for a widget are the widget itself, the widget class, the tag for the toplevel window the widget is in, and finally the special tag "all". However, you can put any tags you want in there, and you can change the order. The practical upshot of all this? You can add your own unique tag to every widget, then add a single binding to that tag that will be processed by all widgets. Here's an example, using your code as a starting point (I added a button widget, to show this isn't something special just for frames and labels):

import Tkinter as tkinter

def on_frame_click(e):
    print("frame clicked")

def retag(tag, *args):
    '''Add the given tag as the first bindtag for every widget passed in'''
    for widget in args:
        widget.bindtags((tag,) + widget.bindtags())

tk = tkinter.Tk()
a_frame = tkinter.Frame(tk, bg="red", padx=20, pady=20)
a_label = tkinter.Label(a_frame, text="A Label")
a_button = tkinter.Button(a_frame, text="click me!")
a_frame.pack()
a_label.pack()
a_button.pack()
tk.protocol("WM_DELETE_WINDOW", tk.destroy)
retag("special", a_frame, a_label, a_button)
tk.bind_class("special", "<Button>", on_frame_click)
tk.mainloop()

For more on bindtags, you might be interested in my answer to the question How to bind self events in Tkinter Text widget after it will binded by Text widget?.
The answer addresses a different question than the one here, but it shows another example of using bind tags to solve real world problems.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11456631', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1521353/']}
jdg_73407
stackexchange
llm_judgeable_groundtruth_similarity
318608
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: A question was asked with the tags java and multithreading . The asker is creating threads manually and trying to synchronize access between threads by creating his own thread pool. Later, an answer was posted to this question. It said that using ExecutorService (provided by Java) is a good solution to the problem but it wasn't available when the question was asked. There is an existing executorservice tag that was not part of the original question. Should that be added? Is it good practice to update the question with applicable tags, as evidenced by a new answer? One advantage that I see to adding the tag is that the question will get more attention from people who follow the new tag. Those people can provide some more good solutions, or improve the existing answers. I have gone through Tagging a question based on its answers but it does not address the lateness of the answer. What if the user did not accept the answer, and the community up-voted the answer? What if the question is very old (like 7 years) and community is providing answers with latest technical advances? (E.g. pre Java 8, interface does not have implementation. Since Java 8, you can have static and default methods in interfaces.) e.g. Why can't I define a static method in a Java interface? Is there more to an interface than having the correct methods In this case, can I update the question with tag "java-8" by highlighting Java 8 features in the answer? Now provide the response and nothing else.
Tags are meant to help narrow the scope of the question . If the asker is asking about java multithreading, then using the [java] and [multithreading] tags is appropriate. However, if the question doesn't ask about ExecutorService, nor does it use ExecutorService in a provided MCVE, then there's no reason to assume that the asker is interested in that answer. (Though of course that does not and should not prevent people from suggesting it anyways.) In your other example with Java interfaces, I would still argue that the original question isn't asking specifically about Java 8, and therefore that tag should be omitted. If answers want to mention a new interface feature from Java 8, great, but that isn't the focus of the question. The only exception to this rule, as @CodyGray mentioned, is when the asker is trying to ask a question about some particular technology but doesn't know the name for it. Also, if the asker is using that technology already but hasn't added the tag for it, then I think it's okay too. So to finish up with an example, let's say there's a question about the lifecycle of an activity on Android that only has the [java] and [android] tags. I don't see much harm in adding [android-activity]. However, if the answers start talking about fragments and their relation to activities, you should not add the [android-fragments] tag unless the question is already using fragments , it's relevant to the actual problem being asked about, and the asker doesn't realize it.
{}
{'log_upvote_score': 5, 'links': ['https://meta.stackoverflow.com/questions/318608', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/4999394/']}
jdg_73408
stackexchange
llm_judgeable_groundtruth_similarity
1282568
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Does an irrational number $C$ exist such that $C \cdot \sqrt 2 \in \Bbb{Q}$, where $\sqrt2 \not\mid C$? I just thought of this, I'm trying to find answers that aren't of the form $C=a\sqrt2, a\in\Bbb{Q}$. Now provide the response and nothing else.
If $C\cdot\sqrt 2\in\mathbb Q$, then $C\cdot\sqrt 2=q$ for some $q\in\mathbb Q$, hence $C=\frac q{\sqrt 2}=\frac q2\cdot\sqrt 2$.
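Spelling out what the computation above implies for the question as asked:

```latex
C\sqrt{2} = q \in \mathbb{Q}\setminus\{0\}
\quad\Longrightarrow\quad
C = \frac{q}{\sqrt{2}} = \frac{q}{2}\,\sqrt{2},
```

so every irrational $C$ with $C\cdot\sqrt2\in\Bbb Q$ is automatically of the form $a\sqrt2$ with $a=\frac q2\in\Bbb Q$. That is exactly the family the question hoped to avoid, hence no such example exists.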
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1282568', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/193752/']}
jdg_73409
stackexchange
llm_judgeable_groundtruth_similarity
164720
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a function of three variables: $f = f(x, y, z)$. I need to maximize $f$, when $x$, $y$ and $z$ run given sets of values: $x \in (x_1, x_2, x_3)$, $y \in (y_1, y_2, y_3)$ and $z \in (z_1, z_2, z_3)$, as well as find the maximizers. I tried to define the set setx = {x1, x2, x3};sety = {y1, y2, y3};setz = {z1, z2, z3}; and use NMaximize[{f[x, y, z], x\[Element] setx, y\[Element] sety, z\[Element] setz}, {x, y, z}] but it does not work. Any idea is appreciated. Now provide the response and nothing else.
This?

Last[SortBy[Map[{f@@#, #}&, Tuples[{setx, sety, setz}]], First]]

Tuples gives you all combinations, the anonymous function builds {function value, arguments} pairs, and SortBy with Last extracts the desired maximal function value together with its maximizing arguments.
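For readers outside Mathematica, the same exhaustive-enumeration idea is a one-liner in other languages too; a Python sketch (the objective f and the three value sets below are made-up stand-ins, not from the question):

```python
from itertools import product

def f(x, y, z):
    # illustrative objective; replace with the real function
    return x + 2 * y - z * z

setx, sety, setz = [1, 2, 3], [10, 20, 30], [-1, 0, 2]

# enumerate every (x, y, z) combination and keep the maximizer
best = max(product(setx, sety, setz), key=lambda args: f(*args))
assert best == (3, 30, 0)
assert f(*best) == 63
```

Like the Mathematica version, this is brute force over all |setx| x |sety| x |setz| tuples, which is fine for small discrete sets.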
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/164720', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/54962/']}
jdg_73410
stackexchange
llm_judgeable_groundtruth_similarity
1750139
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been struggling with precision nightmare in Java and SQL Server up to the point when I don't know anymore. Personally, I understand the issue and the underlying reason for it, but explaining that to the client half way across the globe is something unfeasible (at least for me). The situation is this. I have two columns in SQL Server - Qty INT and Price FLOAT. The values for these are - 1250 and 10.8601 - so in order to get the total value its Qty * Price and result is 13575.124999999998 (in both Java and SQL Server). That's correct. The issue is this - the client doesn't want to see that, they see that number only as 13575.125 and that's it. On one place they way to see it in 2 decimal precision and another in 4 decimals. When displaying in 4 decimals the number is correct - 13575.125, but when displaying in 2 decimals they believe it is wrong - 13575.12 - should instead be 13575.13! Help. Now provide the response and nothing else.
Your problem is that you are using floats. On the java side, you need to use BigDecimal, not float or double, and on the SQL side you need to use Decimal(19,4) (or Decimal(19,3) if it helps jump to your precision level). Do not use the Money type because math on the Money type in SQL causes truncation, not rounding. The fact that the data is stored as a float type (which you say is unchangeable) doesn't affect this, you just have to convert it at first opportunity before doing math on it. In the specific example you give, you need to first get the 4 decimal precision number and put it in a BigDecimal or Decimal(19,4) as the case may be, and then further round it to 2 decimal precision. Then (if you are rounding up) you will get the result you want.
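A sketch of the same two-stage rounding using the question's numbers, in Python for brevity (its decimal module plays the role of Java's BigDecimal here; ROUND_HALF_UP matches what the client expects):

```python
from decimal import Decimal, ROUND_HALF_UP

qty = 1250
price_float = 10.8601             # what the FLOAT column hands back
total_float = qty * price_float   # 13575.124999999998: binary-float noise
assert round(total_float, 2) == 13575.12   # the "wrong" answer the client complains about

# convert at the first opportunity, do exact decimal math, then round in stages
price = Decimal(str(price_float))
total4 = (qty * price).quantize(Decimal('0.0001'), rounding=ROUND_HALF_UP)
assert total4 == Decimal('13575.1250')     # the 4-decimal view
total2 = total4.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
assert total2 == Decimal('13575.13')       # the 2-decimal view the client wants
```

The key move is identical in Java/SQL: leave binary floating point before doing arithmetic and rounding, then round to 4 decimals and only afterwards to 2.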
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1750139', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/154445/']}
jdg_73411
stackexchange
llm_judgeable_groundtruth_similarity
21970
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In some (historical) papers, chess has been referred to as the drosophila of artificial intelligence. While I suppose that in current research, the mere application of a search algorithm is at best advanced computer science , I believe that there are still area's where can apply (and practice) AI-techniques. A simple example would be opening book learning where one can teach the program whether to use or to not use certain moves in the opening because the program is unsuited to certain types of position. We can use a form of re-inforcement learning and automate this: I suppose I could play the program against itself and increase the probability of winning lines and decrease the probability of losing lines. The more complex example is to use a learning evaluation function (for example, one could tweak the values of piece-square tables ). However, I'm thinking: given all the noise due to there being an enormous amount of realistic positions (as opposed to the amount of realistic opening lines) and with the cost (duration) of a computer chess game, and the need to play loads. How can one do this effectively? (or should I look at other techniques, for example neural networks.) Now provide the response and nothing else.
The whole state space for chess is enormous - it can be roughly estimated as 10^43 (Shannon number (Shannon, 1950), ( Wikipedia )). The idea you present - Reinforcement Learning agents playing against each other to learn the game - was successfully applied to Backgammon: TD-Gammon (Tesauro, 1995), ( Chapter in Reinforcement Learning by Sutton & Barto ). It also used Neural Networks to estimate the game's value function. That problem is however much simpler, as the number of states in Backgammon is significantly smaller than in chess, namely 18,528,584,051,601,162,496 ( Backgammon Forum Archive thread ). If you, however, would end the game after a few initial moves and aim only to learn "good openings", you could succeed with an analogous approach. The main problem would be to evaluate the game after the opening, which seems hard. A mere similarity measure to the established positions after well-known openings is not enough, because the position can be far from them if the opponent makes a stupid move (then it wouldn't be the learning agent's mistake, so the position, even if "incorrect", should be evaluated as a good outcome). References: SHANNON, Claude E. "Programming a computer for playing chess." In Philosophical Magazine, 7th series, 41, no. 314 (March 1950): 256-75. Gerald Tesauro. 1995. Temporal difference learning and TD-Gammon. Commun. ACM 38, 3 (March 1995), 58-68
{}
{'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/21970', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/31197/']}
jdg_73412
stackexchange
llm_judgeable_groundtruth_similarity
12135
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As a continuation of my previous question I will try to explain my problem and how I am trying to convert my algorithm to a problem that can be expressed in CNF form. Problem: Find all stable sets of an argumentation framework according to Dung's proposed framework. Brief theory: Given an argumentation framework AF, with A the set of all arguments and R the set of attack relations, a stable set is a set which attacks all arguments not in the set, and there is no attack relation between arguments in the stable set. Example: Let's say we have an argumentation framework AF, A={1,2,3,4} (arguments of AF) and attack relations R{1,3} and R{2,4}. It's obvious that the set {1,2} is a stable extension of the framework because: a) it attacks all arguments not in the set (3 and 4); b) it's conflict-free (no attacks between arguments in the set). My exhaustive abstract algorithm:

argnum = number of arguments
Ai[argnum-1] = relation "attacks", where 1 <= i <= argnum
P[2^argnum - 1] = all possible sets that can be generated from all the arguments
S[2^argnum - 1] = empty  // S holds all the stable sets
j = 0  // counter for the while loop
k = 1  // counter for counting stable sets

while j < 2^argnum - 1
    if P[j] attacks all arguments not in P[j] (check using Ai[])
        if all arguments in P[j] are conflict-free
            S[k++] = P[j]
        end if
    end if
    j++
end while

I want to solve the above problem either by transforming the above algorithm to CNF or by using a different algorithm, and finally use a SAT solver (or anything similar if it exists): give CNF as input and get stable sets as output. I wonder if someone can give me any feedback on how I can transform an algorithm like the above to CNF so it can be used with a SAT solver. I decided to use precosat. Now provide the response and nothing else.
Finding a stable argument set is equivalent to finding an independent set in the directed graph of argument attacks, with the added restriction that some member of the set must be adjacent to each vertex in the graph not in the independent set. The problem is at least as hard as the independent set decision problem and is thus NP-hard. The decision version of the stable argument set problem is reducible to Boolean SAT as follows: Input: Given a set of $n$ arguments $ARG_{1}$ to $ARG_{n}$, let the SAT propositional variable $ATTACK_{i,j}$ be true if $ARG_{i}$ attacks $ARG_{j}$. Output: Let $INDSET_{1} ... INDSET_{n}$ be a new set of propositional variables. $INDSET_{i}$ will be true in the SAT solution iff $ARG_{i}$ is part of the stable set found. Generating the clauses: For every pair of variables $INDSET_{i}$, $INDSET_{j}$, add clauses that require $\overline{(INDSET_{i} \land INDSET_{j})} \lor (\overline{ATTACK_{i,j}} \land \overline{ATTACK_{j,i}})$. These clauses prohibit any stable set argument from attacking another. Let $NEEDATTACK_{1} ... NEEDATTACK_{n}$ be a new set of propositional variables. For each $INDSET_{i}$ variable, add clauses that require $INDSET_{i} \oplus NEEDATTACK_{i}$. These clauses record which arguments must be attacked by the stable set arguments. Let $GOTATTACK_{1} ... GOTATTACK_{n}$ be a new set of propositional variables. For each $GOTATTACK_{j}$ variable, add clauses that require $GOTATTACK_{j} = (INDSET_{1} \land ATTACK_{1,j})$ $\lor$ ... $\lor$ $(INDSET_{n} \land ATTACK_{n,j})$. These clauses record which arguments have been attacked by the stable set arguments. For each $GOTATTACK_{i}$ variable, add clauses that require $NEEDATTACK_{i} \oplus \overline{GOTATTACK_{i}}$. These clauses require that every argument that needed to be attacked by some stable set argument was in fact attacked. The Boolean expressions can be converted to circuits and from there to CNF using Tseitin transformations.
To obtain all the stable sets, when the SAT solver returns a set of $INDSET$ variables, you must construct a CNF clause that forbids that solution and append it to the CNF formula. Rerun the solver and it will either find a new solution or report that the formula is now unsatisfiable. If "unsatisfiable" is reported, then you know you've found all the stable sets. If a new solution is found, construct another CNF clause to forbid that solution, append it to the formula and run the solver again.
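Independently of the SAT encoding, the semantics can be brute-forced directly for small frameworks, which is handy as an oracle for validating whatever the solver returns. A Python sketch using the example framework from the question:

```python
from itertools import combinations

def stable_sets(args, attacks):
    """Enumerate all stable extensions by brute force (fine for small frameworks).

    A set S is stable iff it is conflict-free and attacks every argument not in S.
    """
    results = []
    for r in range(len(args) + 1):
        for subset in combinations(args, r):
            s = set(subset)
            conflict_free = not any(a in s and b in s for (a, b) in attacks)
            attacked = {b for (a, b) in attacks if a in s}
            if conflict_free and set(args) - s <= attacked:
                results.append(s)
    return results

# the framework from the question: 1 attacks 3, 2 attacks 4
assert stable_sets([1, 2, 3, 4], [(1, 3), (2, 4)]) == [{1, 2}]
```

This confirms {1,2} is the only stable extension of that framework; the SAT-based enumeration loop above should produce the same family of sets.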
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/12135', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/7443/']}
jdg_73413
stackexchange
llm_judgeable_groundtruth_similarity
691022
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to determine the number of days in a month for a given date in SQL Server. Is there a built-in function? If not, what should I use as the user-defined function? Now provide the response and nothing else.
You can use the following with the first day of the specified month:

datediff(day, @date, dateadd(month, 1, @date))

To make it work for every date:

datediff(day, dateadd(day, 1-day(@date), @date), dateadd(month, 1, dateadd(day, 1-day(@date), @date)))
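The trick is the same in any date library: normalize to the first of the month, add one month, and take the day difference. A quick Python cross-check of that arithmetic (not SQL Server, just the same idea):

```python
from datetime import date
import calendar

def days_in_month(d):
    # first day of d's month (mirrors the DATEADD(day, 1-DAY(@date), @date) step)
    first = d.replace(day=1)
    # first day of the next month (mirrors DATEADD(month, 1, ...))
    if first.month == 12:
        nxt = first.replace(year=first.year + 1, month=1)
    else:
        nxt = first.replace(month=first.month + 1)
    # DATEDIFF(day, first, nxt)
    return (nxt - first).days

assert days_in_month(date(2024, 2, 10)) == 29   # leap year
assert days_in_month(date(2023, 2, 10)) == 28
assert days_in_month(date(2023, 12, 5)) == 31   # year boundary
# agrees with the standard library's own answer
assert days_in_month(date(2021, 4, 1)) == calendar.monthrange(2021, 4)[1]
```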
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/691022', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/73794/']}
jdg_73414
stackexchange
llm_judgeable_groundtruth_similarity
1754125
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can we make the text in the label of a table view go to the next line? Now provide the response and nothing else.
//This allows for multiple lines
cell.textLabel.numberOfLines = 0;

//This makes your label wrap words as they reach the end of a line
cell.textLabel.lineBreakMode = UILineBreakModeWordWrap;

Also, if you want your label to have more room for such multiple lines, you probably should allow for table rows with greater height. You can do this either by overriding - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath and returning your custom height for each row there, or by specifying your table's rowHeight property, giving the common height of each row.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1754125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/197985/']}
jdg_73415
stackexchange
llm_judgeable_groundtruth_similarity
243800
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\int_{0}^\infty {\cos(xt)\over 1+t^2}dt $$ I'm supposed to solve this using Laplace Transformations. I've been trying this since this morning but I haven't figured it out. Any pointers to push me in the right direction? Now provide the response and nothing else.
Let $f(x)$ denote the integral, and assume temporarily that $x > 0$. This makes no difference since $f(x)$ is even by definition. Then its Laplace transform $\mathcal{L}f(s)$ defines a continuous function on $s \in (0, \infty)$ (in fact, it defines an analytic function on $\Re(s) > 0$). Thus we may assume further that $s \neq 1$ and then rely on the continuity argument. Then \begin{align*}\mathcal{L}f(s)&= \int_{0}^{\infty} f(x)e^{-sx} \, dx= \int_{0}^{\infty} \int_{0}^{\infty} \frac{\cos(xt)}{1+t^2} e^{-sx} \, dtdx \\&\stackrel{*}{=} \int_{0}^{\infty} \int_{0}^{\infty} \frac{\cos(xt)}{1+t^2} e^{-sx} \, dxdt \\&= \int_{0}^{\infty} \frac{1}{1+t^2} \frac{s}{s^2+t^2} \, dt \\&= \frac{s}{1-s^2}\int_{0}^{\infty} \left( \frac{1}{s^2+t^2} - \frac{1}{1+t^2} \right) \, dt \\&= \frac{s}{1-s^2} \frac{\pi}{2} \left( \frac{1}{s} - 1 \right)= \frac{\pi}{2} \frac{1}{1+s}.\end{align*} Here, the change of order of integration $(*)$ is justified by the dominated convergence theorem. Though we proved this for $s \neq 1$, it remains valid by continuity argument as mentioned above. Then by the uniqueness of the Laplace transform, we find that $$ f(x) = \frac{\pi}{2}e^{-x}. $$ Therefore we have $$ \int_{0}^{\infty} \frac{\cos(xt)}{1+t^2} \, dt = \frac{\pi}{2} e^{-\left|x\right|}. $$
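The closed form can also be sanity-checked numerically; a quick sketch (not part of the derivation) using composite Simpson's rule in Python, truncating the improper integral at a large T, since integration by parts shows the tail beyond T is of order 1/T^2:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

x = 1.0
T = 300.0   # truncation point; the oscillatory tail beyond T is ~ sin(T)/(1+T^2)
numeric = simpson(lambda t: math.cos(x * t) / (1 + t * t), 0.0, T, 60000)
closed_form = (math.pi / 2) * math.exp(-abs(x))
assert abs(numeric - closed_form) < 1e-3
```

At x = 1 both sides come out near 0.5779, matching the Laplace-transform result.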
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/243800', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/50463/']}
jdg_73416
stackexchange
llm_judgeable_groundtruth_similarity
108466
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Different software implementations are available for lasso. I know a lot has been discussed about the Bayesian approach vs the frequentist approach in different forums. My question is very specific to lasso: What are the differences or advantages of Bayesian lasso vs regular lasso? Here are two examples of implementation in the package:

# just example data
set.seed(1233)
X <- scale(matrix(rnorm(30), ncol=3))[,]
set.seed(12333)
Y <- matrix(rnorm(10, X %*% matrix(c(-0.2, 0.5, 1.5), ncol=1), sd=0.8), ncol=1)

require(monomvn)

## Lasso regression
reg.las <- regress(X, Y, method="lasso")

## Bayesian Lasso regression
reg.blas <- blasso(X, Y)

So when should I go for one method or the other? Or are they the same? Now provide the response and nothing else.
The standard lasso uses an L1 regularisation penalty to achieve sparsity in regression. Note that this is also known as Basis Pursuit (Chen & Donoho, 1994). In the Bayesian framework, the choice of regulariser is analogous to the choice of prior over the weights. If a Gaussian prior is used, then the Maximum a Posteriori (MAP) solution will be the same as if an L2 penalty was used. Whilst not directly equivalent, the Laplace prior (which is sharply peaked around zero, unlike the Gaussian which is smooth around zero), produces the same shrinkage effect to the L1 penalty. Park & Casella (2008) describes the Bayesian Lasso. In fact, when you place a Laplace prior over the parameters, the MAP solution should be identical (not merely similar) to regularization with the L1 penalty and the Laplace prior will produce an identical shrinkage effect to the L1 penalty. However, due to either approximations in the Bayesian inference procedure, or other numerical issues, solutions may not actually be identical. In most cases, the results produced by both methods will be very similar. Depending on the optimisation method and whether approximations are used, the standard lasso will probably be more efficient to compute than the Bayesian version. The Bayesian automatically produces interval estimates for all of the parameters, including the error variance, if these are required. Chen, S., & Donoho, D. (1994). Basis pursuit. In Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers (Vol. 1, pp. 41-44). IEEE. https://doi.org/10.1109/ACSSC.1994.471413 Park, T., & Casella, G. (2008). The bayesian lasso. Journal of the American Statistical Association, 103 (482), 681-686. https://doi.org/10.1198/016214508000000337
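The equivalence between the L1 penalty and the Laplace-prior MAP can be seen concretely in the simplest one-parameter case. A small illustrative sketch (made up for this answer, not from the cited papers): the objective 0.5*(y - beta)^2 + lam*|beta| is, up to constants, both the lasso criterion and the negative log-posterior under a Laplace prior, and its minimizer is the soft-thresholding function.

```python
import math

def objective(beta, y, lam):
    # L1-penalized least squares; up to additive constants this is also the
    # negative log-posterior for y ~ N(beta, 1) with a Laplace(0, 1/lam) prior
    return 0.5 * (y - beta) ** 2 + lam * abs(beta)

def soft_threshold(y, lam):
    # closed-form minimizer: the lasso solution == the MAP estimate
    return math.copysign(max(abs(y) - lam, 0.0), y)

y, lam = 0.8, 0.5
grid = [i / 10000 for i in range(-20000, 20001)]      # beta in [-2, 2]
beta_hat = min(grid, key=lambda b: objective(b, y, lam))
assert abs(beta_hat - soft_threshold(y, lam)) < 1e-3  # both give ~0.3

# shrinkage to exactly zero once the penalty dominates the data
assert soft_threshold(0.3, 0.5) == 0.0
```

The sharp peak of the Laplace prior at zero is what produces the exact-zero estimates; a Gaussian prior (L2/ridge) shrinks but never zeroes coefficients.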
{}
{'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/108466', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/19762/']}
jdg_73417
stackexchange
llm_judgeable_groundtruth_similarity
210286
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For example I have a file named 5.jpg . How can I rename it to aaaaa.jpg with char a 5 times. I tried rename -v 's/(\d{1,})/a{$1}/g' * but this renames 5.jpg to a{5}.jpg , which is not what I want. I understand that second part of function isn't a regexp, this was just an attempt. Now provide the response and nothing else.
At least three different utilities imaginatively named rename(1) are floating around in the Linux waters: (1) the one that came with util-linux , (2) an older Perl script by Larry Wall further munged by Tom Christiansen, and (3) a newer Perl script evolved from the former and included with Unicode::Tussle . As far as I can tell, what you want can't be done with the util-linux version of rename(1) . It can be done with either of the Perl scripts though: rename -n 's/(\d+)/"a" x $1/e' 5.jpg Output: rename 5.jpg aaaaa.jpg (drop the -n to actually rename the file).
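The substitution itself (each digit run replaced by that many literal characters) is easy to prototype outside Perl as well; for instance, a hypothetical Python sketch of the same rename expression:

```python
import re

def expand_digits(name):
    # mirror the Perl expression s/(\d+)/"a" x $1/e:
    # each run of digits becomes that many literal 'a' characters
    return re.sub(r'\d+', lambda m: 'a' * int(m.group()), name)

assert expand_digits('5.jpg') == 'aaaaa.jpg'
assert expand_digits('3_2.png') == 'aaa_aa.png'
```

The /e modifier in the Perl version plays the same role as the lambda replacement here: the right-hand side is evaluated as code rather than used as a literal string, which is why the original attempt produced a{5}.jpg.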
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/210286', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/109975/']}
jdg_73418
stackexchange
llm_judgeable_groundtruth_similarity
5195094
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a query: SELECT someFields FROM someTable WHERE cheapLookup=1 AND (CAST(someField as FLOAT)/otherField)<0.9 So, will the CAST and division be performed in the case that cheapLookup is 0 ? If not, how can I avoid the calculation in this case? Now provide the response and nothing else.
It depends on the query plan, which is determined by the estimated cost of each considered alternative plan that would produce correct results. If the predicate 'cheapLookup = 1' can use an index, and it is sufficiently selective, SQL Server would likely choose to seek on that index and apply the second predicate as a residual (that is, only evaluating it on rows that are matched by the seeking operation). On the other hand, if cheapLookup is not the leading key in an index, or if it is not very selective, SQL Server might choose to scan, applying both predicates to every row encountered. The second predicate will not be chosen for a seeking operation, unless there happens to be an indexed computed column on the whole expression, and using that index turns out to be the cheapest way to execute the whole query. If a suitable index exists, SQL Server would seek on 'second predicate result < 0.9', and apply 'cheapLookup=1' as a residual. There is also the possibility that the indexed computed column has cheapLookup as its second key, which would result in a pure seek, with no residual. The other thing about the second predicate is that without a computed column (whether or not indexed), SQL Server will have to guess at the selectivity of the expression. With the computed column, the server might be able to create statistics on the expression-result column, which will help the optimizer. Note that a computed column on 'CAST(someField as FLOAT)/otherField' would have to be persisted before it could be indexed or have statistics created on it, because it contains an imprecise data type. In summary, it's not the complexity of the expression that counts so much as the estimated cost of the whole plan that uses each of the available access methods considered by the optimizer.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5195094', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14357/']}
jdg_73419
stackexchange
llm_judgeable_groundtruth_similarity
26604294
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am not able to render any html pages in Django 1.7. My 'index.html' is in 'project/seatalloc/templates/index.html' and my view.py in project/seatalloc/views.py looks like:

def index(request):
    return render(request, 'index.html', dirs=('templates',))

project/project/settings.py has templates dirs set:

TEMPLATE_DIRS = (
    '/Users/Palak/Desktop/academics/sem3/cs251/lab11/project/seatalloc/templates',
)

urls.py:

urlpatterns = patterns('',
    url(r'^seatalloc/', include('seatalloc.urls')),
    url(r'^admin/', include(admin.site.urls)),
)

I have tried to follow the documentation strictly, yet can't figure out if Django detects the file, why am I getting TemplateDoesNotExist at /seatalloc/ error. I am new to Django, could someone please help. Now provide the response and nothing else.
If - as in your case - you get a TemplateDoesNotExist error and the debug page states "File exists" next to the template in question this usually (always?) means this template refers to another template that can't be found. In your case, index.html contains a statement ( {% extends %}, {% include %}, ... ) referring to another template Django cannot find. Unfortunately, as of Django 1.8.3, the debug page always names the base template, not the one Django can't find.
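A quick way to check whether this is the cause in the question's setup (file names assumed for illustration): if index.html itself is found but refers to a missing template, the debug page still reports against the found file.

```
{# project/seatalloc/templates/index.html -- Django finds this file #}
{% extends "base.html" %}
{# If base.html is not in any template dir, the request still fails with
   TemplateDoesNotExist, and the debug page shows "File exists" for index.html #}
```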
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26604294', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3523557/']}
jdg_73420
stackexchange
llm_judgeable_groundtruth_similarity
27931762
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to learn about Swift and closures. I'm stuck on this example:

numbers.map({ (number: Int) -> Int in
    let result = 3 * number
    return result
})

What is (number: Int) -> Int? Is it a function? Where is it defined? https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/GuidedTour.html#//apple_ref/doc/uid/TP40014097-CH2-ID1 What does the keyword "in" do? The docs say it is used "to separate the arguments and return type from the body". I'm not sure I understand this. Why isn't "in" used to separate "let result = 3 * number" from "return result"? Now provide the response and nothing else.
A closure is just a function with the parameters moved inside the brackets, with the keyword in to separate the parameters from the function body. The two following examples define equivalent functions:

func myFunc(number: Int) -> Int {
    let result = 3 * number
    return result
}

let myClosure = { (number: Int) -> Int in
    let result = 3 * number
    return result
}

You can actually call them both in exactly the same way:

let x = myFunc(2)    // x == 6
let y = myClosure(2) // y == 6

Notice how the second example is exactly the same as the first, only in the first example, the parameters (number: Int) -> Int are outside the brackets, and in the second example the parameters are inside the brackets, followed by the keyword in.

map works by taking an array (numbers, in your example) and creating a new array that is the result of applying the closure function to each element in numbers. So if numbers is [1, 2, 3], the example above will start with 1. It will apply the closure function which will produce a 3 (cuz all it does is multiply the element from the first array by 3). It does that for each element in numbers, until it has produced a new array, [3, 6, 9].

If you wanted to, you could call map using the names of either the above function or the above closure, or by writing it out explicitly inside of map. All of the below examples are totally equivalent:

let numbers = [1, 2, 3]

// Example 1
let times3 = numbers.map(myFunc)        // times3 == [3, 6, 9]

// Example 2
let timesThree = numbers.map(myClosure) // timesThree == [3, 6, 9]

// Example 3
let xThree = numbers.map({ (number: Int) -> Int in
    let result = 3 * number
    return result
})                                      // xThree == [3, 6, 9]

Note that Example 3 is the same as Example 2, only in Example 3 the closure is spelled out explicitly inside of map, whereas in Example 2 the closure has been assigned to a constant called myClosure, and the constant has been supplied to map. Hope this helps - closures are fun, but confusing.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27931762', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/678392/']}
jdg_73421
stackexchange
llm_judgeable_groundtruth_similarity
319038
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I am reading " Some notes on Differentiable stacks " by J. Heinloth. In that paper, the notion of quotient stack is defined as follows. Let $G$ be a Lie group action on a manifold $X$ (left action). We define the quotient stack $[X/G]$ as $[X/G](Y):=\{P\xrightarrow{p} Y, P\xrightarrow{f}X | P\rightarrow Y \text{ is a G-bundle,} ~ f \text{ is } G\text{-equivariant}\}$ . Morphisms of objects are $G$ -equivariant isomorphisms. I am trying to understand the motivation for defining in this way. Given a Lie group action of $G$ on $X$ , if I want to associate a stack, I would start with simpler cases which allows me to guess how to define. Given a Lie group $G$ , I have stack associated to it, denoted by $BG$ , the stack of principal $G$ bundles. Given a manifold $M$ , I have the stack associated to it, denoted by $\underline{M}$ whose objects are maps $Y\rightarrow M$ . Suppose $X$ is trivial and $G$ acts trivially on $X=\{*\}$ then $[X/G]$ should only depend on $G$ . We know what stack to associate for a Lie group $G$ i.e., $BG$ . Thus, $[X/G]$ should just be $BG$ . Suppose $G$ is trivial and $G$ acts on $X$ , $[X/G]$ should only depend on $X$ . We know what stack to associate for a manifold $X$ i.e., $\underline{X}$ . Thus, $[X/G]$ should just be $\underline{X}$ . Suppose $G$ is non trivial and $X$ is non trivial and that the action of $G$ on $X$ is such that $X/G$ is a manifold. We know what stack to associate for a manifold $X/G$ i.e., $\underline{X/G}$ . Thus, $[X/G]$ should just be $\underline{X/G}$ . I am not able to guess how could we guess the definition knowing above three cases. Is quotient stack definition motivated from these simpler cases or Is it the case that simpler cases are special cases of notion of Quotient stack. How could we come up with such a definition. Any comments regarding the motivation are welcome. 
Now provide the response and nothing else.
Let's start approaching the question from the simplest possible case $Y=*$ . What should be the points of $[X/G]$ ? Recall that the idea here is to generalize the construction of the action groupoid for discrete groups acting on sets to the manifold case. This allows us to remember the stabilizers of points and it is generally a much better behaved notion. So morally $[X/G](*)$ should be the groupoid whose objects are points of $X$ and such that $\mathrm{Mor}(x,x')=\{g\in G\mid gx=x'\}$ . With this description however it is a bit unclear how to generalize that to get a description of $[X/G](Y)$ , so let us rewrite it in a slightly different way. A point of $X$ is just a $G$ -equivariant morphism $G\to X$ (since any such $G$ -equivariant morphism is determined by the image of $e\in G$ ). Moreover a morphism between $x:G\to X$ and $x':G\to X$ is exactly a $G$ -equivariant morphism $g:G\to G$ (i.e. right multiplication by some element $g\in G$ ) making the obvious diagram commute. Now if you look at the definition, the groupoid does not depend from the fact that $G$ has a canonical basepoint (the identity element $e\in G$ ), so in fact we can write $[X/G](*)$ is defined as the groupoid of $G$ -equivariant maps $T\to X$ where $T$ is a freely transitive $G$ -space. Ok, so now we want to describe the groupoid $[X/G](Y)$ . Intuitively the objects here should be families of elements of $[X/G](*)$ parametrized by $Y$ . But a family of freely transitive $G$ -spaces is exactly a principal $G$ -bundle $P\to Y$ , and a family of $G$ -equivariant map $P_y\to X$ for each $y\in Y$ is just a $G$ -equivariant map $P\to X$ . Hence we get the definition you are asking about.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/319038', 'https://mathoverflow.net', 'https://mathoverflow.net/users/118688/']}
jdg_73422
stackexchange
llm_judgeable_groundtruth_similarity
120276
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: Can you help me with my software licensing question? Running a SQL Server, as far as I understand it, you need one CAL for every user who connects to the database server. But what happens if the only computer which is accessing the SQL Server is the server running your business layer? If, for example, you got 1 SQL Server and 1 Business logic server, and 100 Clients who all just query and use the business logic server. No client is using the SQL Server directly, no one is even allowed to contact it. So, since there is only one computer using the SQL server, do I need only 1 CAL??? I somehow can't believe this would count as only 1 CAL needed for the SQL Server, but I would like to know why not. Now provide the response and nothing else.
You need CAL's for every user of the business logic server, even though there is no direct connection between them and the SQL server. Microsoft use the term "multiplexing" for the scenario you describe. This is for SQL 2005 but I don't think it is any different for other versions: A CAL is required for each distinct device or user to the multiplexing or pooling software or hardware front end. This remains true no matter how many tiers of hardware or software exist between the server running SQL Server and the client devices that ultimately use its data, services, or functionality. An exception to this includes the manual transfer of data from employee to employee. For example, if an employee sends a Microsoft Excel version of a report to another employee, the receiving employee does not require a CAL. An additional exception is communication exclusively between the servers running SQL Server. Anyone using your Application has to be covered by a CAL, including someone who uses it solely to generate Excel Reports but someone who only receives a copy of one of those Excel Reports by e-mail doesn't. Edited to add This is just one of many aspects of SQL server licensing and it's only relevant if you choose the Server+CAL licensing model. If you want to get a full understanding of all of the alternatives the definitive document is Microsoft's 63 page SQL Server 2008 Licensing Guide which covers all licensing models and approaches.
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/120276', 'https://serverfault.com', 'https://serverfault.com/users/9539/']}
jdg_73423
stackexchange
llm_judgeable_groundtruth_similarity
88943
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: By reading the GNU coreutils man page for rm , one of the options is -f , which according to the manual, -f, --force ignore nonexistent files and arguments, never prompt Now, I made some tests and show that indeed if I use something like rm -f /nonexisting/directory/ it won't complain. What can someone really gain from such an option? Plus the most common examples of "deleting directories" using rm is somethinglike rm -rf /delete/this/dir The -r option makes sense, but -f ? Now provide the response and nothing else.
I find that the man page lacks a little detail in this case. The -f option of rm actually has quite a few use cases: To avoid an error exit code To avoid being prompted To bypass permission checks You are right that it's pointless to remove a non-existent file, but in scripts it's really convenient to be able to say "I don't want these files, delete them if you find them, but don't bother me if they don't exist". Some people use the set -e in their script so that it will stop on any error (to avoid any further damage the script can cause), and rm -rf /home/my/garbage is easier than if [[ -f /home/my/garbage ]]; then rm -r /home/my/garbage; fi . A note about permission checks: to delete a file, you need write permission to the parent directory, not the file itself. So let's say somehow there is a file owned by root in your home directory and you don't have sudo access, you can still remove the file using the -f option. If you use Git you can see that Git doesn't leave the write permission on the object files that it creates: -r--r--r-- 1 phunehehe phunehehe 62 Aug 31 15:08 testdir/.git/objects/7e/70e8a2a874283163c63d61900b8ba173e5a83c So if you use rm , the only way to delete a Git repository without using root is to use rm -rf .
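The first point (avoiding an error exit code) is easy to demonstrate in a quick experiment (the path below is arbitrary and assumed not to exist):

```shell
# Without -f: rm prints an error and exits non-zero for a missing file
rm /tmp/no-such-file-xyz123 2>/dev/null
echo "plain rm exit: $?"    # plain rm exit: 1

# With -f: silent, and the exit status is 0 -- safe under `set -e`
rm -f /tmp/no-such-file-xyz123
echo "rm -f exit: $?"       # rm -f exit: 0
```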
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/88943', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/46227/']}
jdg_73424
stackexchange
llm_judgeable_groundtruth_similarity
35095149
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to fit a polyline in the google map view. The polyline was acquired through overview_polyline in the google maps directions api. Wondering how I would be able to convert an encoded polyline into something that can be worked with. I need to fit the polyline in the map view. All I have found out to do is fit the bounds to show all markers but not showcase the entire polyline.

func fitAllMarkers() {
    var bounds = GMSCoordinateBounds()
    for marker in markers {
        bounds = bounds.includingCoordinate(marker.position)
    }
    googleMapView.animateWithCameraUpdate(GMSCameraUpdate.fitBounds(bounds))
}

Now provide the response and nothing else.
For Swift 2.0 Google Maps, to make your map view fit the polyline of the route you are drawing:

let path: GMSPath = GMSPath(fromEncodedPath: route)!
routePolyline = GMSPolyline(path: path)
routePolyline.map = mapView

var bounds = GMSCoordinateBounds()
for index in 0..<path.count() {
    // coordinateAtIndex is zero-based; iterating 1...path.count() would skip
    // the first point and read one past the end of the path
    bounds = bounds.includingCoordinate(path.coordinateAtIndex(index))
}
mapView.animateWithCameraUpdate(GMSCameraUpdate.fitBounds(bounds))
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35095149', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5434541/']}
jdg_73425
stackexchange
llm_judgeable_groundtruth_similarity
1847
Below is a question asked on the forum engineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The setup Consider a cylinder inside another cylinder (placed with their symmetry axis horizontal). The inner cylinder is about Ø100 mm in diameter and a centimeter or two in lenght/thickness. On the perimeter there is a thread. This cylinder is fixed in rotational motion, and cannot rotate. The outer cylinder is much longer. It has a thread on the inside through half of its' lenght or so and this thread fits with the inner cylinder thread. The question Now, when rotating the outer cylinder, the inner cylinder will move back and forth (the inner cannot rotate but can move translationally). My issue is that the outer cylinder might rotate many revolutions (it follows a bicycle wheel), and therefor at some point the inner cylinder will move out of the thread of the outer cylinder. Is there a mechanical and smart method to let such a cylinder leave and reenter the thread of the outer cylinder? E.g. the outer cylinder would rotate in one direction untill the inner cylinder leaves the thread. Then when the outer cylinder is reversed in rotational direction, I want the inner cylinder to reenter the thread and start moving the opposite way. Now provide the response and nothing else.
Threads To have the thread automatically reengage some thought will have to go to the exact type of thread used. A coarse thread and maybe a slight taper will help the threads to properly engage while minimizing the chance of cross-threading or binding. Mechanism You said that eventually the inner piece will disengage from the outer piece. This doesn't sound like the problem. Your problem is that you then want the pieces to reengage when the rotation is reversed. To do this, the inner part needs to have some force on it to keep it in contact with the outer part. This sounds like a perfect case for using a spring. The spring would start to compress as the inner piece neared the end of the outer piece. Once the inner piece is free, the spring would provide enough force to press the inner piece back against the outer piece. This way the threads could engage once the rotation is reversed. A downside to this is that there will be some wear between the threads of each piece. The spring is constantly pushing the two together even as the rotation is keeping the threads from engaging.
{}
{'log_upvote_score': 4, 'links': ['https://engineering.stackexchange.com/questions/1847', 'https://engineering.stackexchange.com', 'https://engineering.stackexchange.com/users/94/']}
jdg_73426
stackexchange
llm_judgeable_groundtruth_similarity
1313402
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Looking for a preprocessor directive in c# for importing dll based on whether the executable is 64bit or 32 bit:

#if WIN64
[DllImport("ZLIB64.dll", CallingConvention=CallingConvention.Cdecl)]
#else
[DllImport("ZLIB32.dll", CallingConvention=CallingConvention.Cdecl)]

Now provide the response and nothing else.
Here's what you need to do. First, go into Project-><project name> Properties... and go to the Build tab. In there, in the textbox labelled "Conditional compilation symbols", add WIN32 for your x86 platform (selectable at the top of the dialog) and WIN64 for your x64 platform. Then save. Note that if you have one for "AnyCPU", you probably want to remove that platform altogether, as it won't be safe. Then, go into the source, and write this:

#if WIN64
    [DllImport("ZLIB64.dll", CallingConvention=CallingConvention.Cdecl)]
#else
    [DllImport("ZLIB32.dll", CallingConvention=CallingConvention.Cdecl)]
#endif

Note that when you view the source, one of the lines will look like it has been commented out, in that the entire line is in a gray font. This line is the one for the "other platform". If you select the platform in the toolbar, you'll notice that the syntax coloring follows suit. Of course, after re-reading my answer I notice that you don't actually need to put WIN32 into the conditional symbols list as it isn't used, but it might be useful in other places to do an #if on WIN32 instead of 64.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1313402', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/85917/']}
jdg_73427
stackexchange
llm_judgeable_groundtruth_similarity
29038609
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to be able to fake the system time when testing. The source I have makes use of java.time.LocalDate. Is there a way to make LocalDate.now() return a pre-set date? Now provide the response and nothing else.
There are a few options you have:

1. Wrap the LocalDate.now() call in a non-static method of a class. Then you can mock that method to return your specific instance - this would not seem practical if you're directly calling the LocalDate.now() method at many places in your code.
2. Use the LocalDate.now(Clock) method, which is pretty test-friendly, as already suggested in comments - again, you have to modify your application code.
3. Use PowerMockito, if you can. In that case, you've a pretty easy approach by mocking static methods using the mockStatic(Class<?>) method.

The 3rd approach can be implemented as:

@PrepareForTest({ LocalDate.class })
@RunWith(PowerMockRunner.class)
public class DateTest {

    @Test
    public void yourTest() {
        PowerMockito.mockStatic(LocalDate.class);
        when(LocalDate.now()).thenReturn(yourLocalDateObj);
    }
}
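The LocalDate.now(Clock) route needs no mocking library at all: the production code accepts a Clock, and the test passes a fixed one. A minimal sketch (class and method names here are illustrative):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

public class FixedClockDemo {

    // Production code takes a Clock instead of calling LocalDate.now() directly
    static LocalDate businessDate(Clock clock) {
        return LocalDate.now(clock);
    }

    public static void main(String[] args) {
        // Test side: pin "now" to a known instant
        Clock fixed = Clock.fixed(Instant.parse("2020-01-15T00:00:00Z"), ZoneOffset.UTC);
        System.out.println(businessDate(fixed)); // 2020-01-15
    }
}
```

In production you would pass Clock.systemDefaultZone(); only the tests use a fixed clock.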
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29038609', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/967501/']}
jdg_73428
stackexchange
llm_judgeable_groundtruth_similarity
103419
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I apologize in advance if this is too basic for this site. Considering that hydrogen is lighter than oxygen, and water is 2/3 hydrogen, and only 1/3 oxygen, and our "air" is comprised mainly of oxygen, why isn't water lighter than air? Now provide the response and nothing else.
Air is lighter because there are fewer molecules per unit volume compared with a unit volume of liquid water. A mole of water is 18 grams, so a liter of water contains about 55 moles (1000 grams). A mole of air at standard temperature and pressure, however, occupies a volume of 22.4 liters, much more. Dividing a mole of O2 (32 grams) by 22.4, you have just a bit more than a gram for that liter of air.
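The back-of-the-envelope numbers in this answer are easy to verify:

```python
# Moles per liter: liquid water vs. an ideal gas at STP, using the figures above
water_molar_mass = 18.0                          # g/mol
moles_water_per_L = 1000.0 / water_molar_mass    # a liter of water is ~1000 g
molar_volume_gas = 22.4                          # L/mol for an ideal gas at STP
moles_air_per_L = 1.0 / molar_volume_gas

print(round(moles_water_per_L, 1))                  # 55.6
print(round(moles_air_per_L, 3))                    # 0.045
print(round(moles_water_per_L / moles_air_per_L))   # ~1244x more moles per liter
```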
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/103419', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/42491/']}
jdg_73429
stackexchange
llm_judgeable_groundtruth_similarity
541889
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In https://huyenchip.com/machine-learning-systems-design/design-a-machine-learning-system.html#design-a-machine-learning-system-dwGQI5R , I came across the sentence: Similarly, you can use unsupervised learning to learn labels for your data , then use those labels for supervised learning. I have never heard of the bolded part before. How exactly do you use unsupervised learning to "learn labels" for labeless data? Now provide the response and nothing else.
Normally, you don't (and you shouldn't believe everything someone writes somewhere on the internet). What the writer probably meant (at least that's my interpretation) is that you can use clustering to identify the clusters, declare each cluster to be a class of its own, and use these "classes" to learn class boundaries or other rules for "classifying" new data. This approach, however, is likely to suffer from severe generalisation issues, if it works at all. If the true classes overlap, clustering won't be able to identify them and the clusters will not correspond to the classes. Even if the clusters/classes are well separated, the lack of true labels will prevent you from tuning hyperparameters and ensuring good generalisation. So, it is a theoretically possible concept, but unlikely to work in practice. I also stumbled over the preceding sentence in the blog you quoted: An income prediction task can be regression if we output raw numbers, but if we quantize the income into different brackets and predict the bracket, it becomes a classification problem. Again, it is theoretically possible, but not a recommended approach. By treating income prediction as a classification task we ignore (lose information about) the similarity between different "classes". The bracket [20,000 - 30,000] is closer to the bracket [30,000 - 40,000] than to [150,000 - 200,000]. Classification wouldn't take this into account. See my answer here for more details.
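To make the "theoretically possible concept" concrete, here is a toy sketch in plain Python. Everything in it is invented for illustration (the data, the naive 1-D "2-means" loop, and the threshold "classifier"); it only shows the cluster-then-relabel idea, not a recommended recipe:

```python
# Toy data: two obviously separated groups of 1-D points
data = [1.0, 1.2, 0.9, 5.1, 4.8, 5.3]

# Step 1: unsupervised step -- a naive two-cluster k-means in one dimension
c0, c1 = min(data), max(data)
for _ in range(10):
    g0 = [x for x in data if abs(x - c0) <= abs(x - c1)]
    g1 = [x for x in data if abs(x - c0) > abs(x - c1)]
    c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)

# Step 2: declare each cluster to be a "class" -- these are pseudo-labels
labels = [0 if abs(x - c0) <= abs(x - c1) else 1 for x in data]

# Step 3: "supervised" step -- learn a decision boundary from the pseudo-labels
threshold = (c0 + c1) / 2

def classify(x):
    return 0 if x < threshold else 1

print(labels)         # [0, 0, 0, 1, 1, 1]
print(classify(2.0))  # 0
```

If the true classes overlapped, step 1 would already fail to recover them, which is exactly the generalisation problem described above.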
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/541889', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/227013/']}
jdg_73430
stackexchange
llm_judgeable_groundtruth_similarity
4197295
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was working on submodular set functions, and I came across a property on Wikipedia, that I was not able to prove/find any reference for. On the Wikipedia article on submodular set functions, Under the section Properties, The third property states that : More generally, $g(S) = h(f(S))$ is submodular, for any non decreasing concave function $h$ . They claim that this is true for all submodular $f(S)$ . Is there a reference for this property where I will be able to find a proof? Alternatively, is there is a simple proof for this that I may be missing? Now provide the response and nothing else.
If $f$ was differentiable at $(0,0)$ , then, for each $v\in\Bbb R^2$ , $f'(0,0)(v)=D_vf(0,0)$ . But that's impossible, since the map $v\mapsto D_vf(0,0)$ is simply $f$ , which is not linear.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4197295', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/614070/']}
jdg_73431
stackexchange
llm_judgeable_groundtruth_similarity
11297273
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to know what's the difference between XML SAX parser, Pull parser & DOM parser in Android. In which condition, which one is better in performance and implementation? Thanx.Khobaib. Now provide the response and nothing else.
Dom Parser - It uses an object-based approach, i.e., it first loads the entire XML into memory, converts the XML nodes into objects and then starts parsing them. So, it is pretty slow. SAX and PULL Parser - they use an event-based approach. Both are almost the same in terms of memory and performance. However, there are a few distinguishing situations for when to use them, as described below. Dom Parser - Use it when you need to validate the entire XML before parsing. Never use it when the XML is too large and validation can be compromised. Once it starts parsing, it parses from the starting node to the ending node; there is no way to parse only particular nodes. SAX - Use it when you want to parse the entire XML. Once it starts parsing, it parses from the starting node to the ending node; there is no way to parse only particular nodes. PULL - Use it when you don't want to parse the entire XML. It is easier to implement than SAX, because you don't have to maintain the state of your parser. You can pull only the particular nodes you are interested in. My suggestion is - need validation: go for DOM; file size is small or you are not interested in parsing the entire XML: go for PULL; file size is large or you need to parse the entire XML file: go for SAX.
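The difference is not Android-specific. Here is a quick Python illustration (not the Android DOM/SAX/XmlPullParser APIs) of tree-based versus event-based parsing; note how the event-based loop can stop after the first node of interest:

```python
import xml.etree.ElementTree as ET
from io import StringIO

xml = "<books><book id='1'>A</book><book id='2'>B</book></books>"

# DOM-style: load the whole tree into memory, then walk it
root = ET.fromstring(xml)
print([b.get("id") for b in root.iter("book")])   # ['1', '2']

# Pull/SAX-style: react to events as they arrive; here we stop
# after the first <book>, never touching the rest of the document
for _event, elem in ET.iterparse(StringIO(xml), events=("start",)):
    if elem.tag == "book":
        print(elem.get("id"))                     # 1
        break
```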
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11297273', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1433187/']}
jdg_73432
stackexchange
llm_judgeable_groundtruth_similarity
568329
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The explicit form for the transformation into hyperspherical coordinates is $$x_1 = r\sin\theta_1 \sin\theta_2 \dotsb \sin \theta_{n-1} \\ x_2 = r\sin\theta_1 \sin\theta_2 \dotsb \cos \theta_{n-1} \\ x_3 = r\sin\theta_1 \dotsb \cos \theta_{n-2}\\ \vdots \\ x_{n} = r \cos\theta_1$$ for $0 \leq \theta_i \leq \pi \;\;(1\leq i \leq n-2)$ and $0\leq \theta_{n-1} \leq 2\pi$. It has Jacobian $r^{n-1} \sin^{n-2}\theta_1 \sin^{n-3}\theta_2 \dotsb \sin{\theta_{n-2}}.$ I wonder if someone could provide me with a reference for an intuitive explanation as to why this indeed is a diffeomorphism from $\mathbb{R}^n\setminus\{0\} \to \mathbb{R}^n \setminus\{0\}$, and why this is the Jacobian. Or perhaps someone could indicate the idea of a proof. Thanks as always Now provide the response and nothing else.
I will sketch the proof for the Jacobian by induction on $n$, thus in a different fashion compared to What is the metric tensor on the n-sphere (hypersphere)? (which perhaps is smoother, though). For $n=2$, we have the transformation law $A_2$
$$x^1 = \rho\cos\phi\\x^2 = \rho\sin\phi.$$
Hence
$$\frac{\partial(x^1,x^2)}{\partial(\rho,\phi)}=\left(\begin{array}{cc}\cos\phi & -\rho \sin\phi\\\sin\phi & \rho \cos\phi\end{array}\right)$$
and the Jacobian is $J_2=\rho(\cos^2\phi+\sin^2\phi)=\rho$, i.e. $dx^1 dx^2=\rho\, d\rho d\phi$. The idea of induction works as follows: for $n=3$, instead of using the transformation $A_{3}$, given by
$$x^1 = \rho \cos\phi \sin\theta\\x^2 = \rho \sin\phi \sin\theta\\x^3 = \rho \cos\theta,$$
directly, one uses the two combined transformations $A_{23}$, given by
$$z^1 = \rho \cos\phi\\z^2 = \rho \sin\phi\\\theta=\theta,$$
with Jacobian $J_2=\rho$, and, letting $|z|\equiv\sqrt{(z^1)^2+(z^2)^2}=\rho$, $A_{3z}$, given by
$$x^1 = z^1 \sin\theta\\x^2 = z^2 \sin\theta\\x^3 = |z| \cos\theta.$$
Now,
$$\frac{\partial(x^1,x^2,x^3)}{\partial(z^1,z^2,\theta)}=\left(\begin{array}{ccc}\sin\theta & 0 & z^1\cos\theta\\0 & \sin\theta & z^2\cos\theta\\z^1|z|^{-1}\cos\theta & z^2 |z|^{-1}\cos\theta & -|z|\sin\theta\end{array}\right)$$
which gives, expanding with respect to the last line,
$$J_{3z}=z^1|z|^{-1}\cos\theta(z^1\cos\theta\sin\theta)+z^2 |z|^{-1}\cos\theta(z^2\sin\theta\cos\theta)+|z|\sin\theta \sin^2\theta\\=|z|\sin\theta$$
so that, being $A_3=A_{3z}\circ A_{23}$,
$$J_3 = J_2 \cdot J_{3z} = \rho |z|\sin\theta = \rho^2 \sin\theta,$$
i.e. $dx^1dx^2dx^3 = \rho^2\sin\theta\, d\rho d\theta d\phi$. 
Let $A_n$ be
$$x^1 = \rho \cos\phi \sin\theta_1 \sin\theta_2 \ldots \sin\theta_{n-3} \sin\theta_{n-2}\\x^2 = \rho \sin\phi \sin\theta_1 \sin\theta_2 \ldots \sin\theta_{n-3} \sin\theta_{n-2}\\x^3 = \rho \cos\theta_1 \sin\theta_2 \ldots \sin\theta_{n-3} \sin\theta_{n-2}\\\ldots\\x^{n-1}=\rho \cos\theta_{n-3}\sin\theta_{n-2}\\x^n = \rho \cos\theta_{n-2};$$
the transformation itself is built recursively: at the $n$th step, multiply the $n-1$ old coordinates by the sine of the new angle and add a new coordinate $x^n$ as $\rho$ times the cosine of the new angle. In the same spirit as the above calculation, let $A_{(n-1)n}$ be
$$z^1 = \rho \cos\phi \sin\theta_1 \sin\theta_2 \ldots \sin\theta_{n-3}\\z^2 = \rho \sin\phi \sin\theta_1 \sin\theta_2 \ldots \sin\theta_{n-3}\\z^3 = \rho \cos\theta_1 \sin\theta_2 \ldots \sin\theta_{n-3}\\\ldots\\z^{n-1}=\rho \cos\theta_{n-3}\\\theta_{n-2} = \theta_{n-2}$$
which, by induction, will have Jacobian $J_{n-1} = \rho^{n-2}\sin\theta_{1}\sin^2\theta_{2}\ldots\sin^{n-3}\theta_{n-3}$. Let $A_{nz}$ be
$$x^1 = z^1 \sin\theta_{n-2}\\x^2 = z^2 \sin\theta_{n-2}\\\ldots\\x^{n-1} = z^{n-1} \sin\theta_{n-2}\\x^n = |z| \cos\theta_{n-2}.$$
The Jacobi matrix reads
$$\frac{\partial(x^1,x^2,\ldots,x^{n-1},x^n)}{\partial(z^1,z^2,\ldots,z^{n-1},\theta_{n-2})}=\left(\begin{array}{ccccc}\sin\theta_{n-2} & 0 & \ldots & 0 & z^1\cos\theta_{n-2}\\0 & \sin\theta_{n-2} & \ldots & 0 & z^2\cos\theta_{n-2}\\\vdots & \vdots & \ddots & \vdots & \vdots\\0 & 0 & \ldots & \sin\theta_{n-2} & z^{n-1}\cos\theta_{n-2}\\z^1|z|^{-1}\cos\theta_{n-2} & z^2|z|^{-1}\cos\theta_{n-2} & \ldots & z^{n-1}|z|^{-1}\cos\theta_{n-2} & -|z|\sin\theta_{n-2}\end{array}\right),$$
where $$|z|\equiv \sqrt{\sum_{k=1}^{n-1}(z^k)^2 }=\rho.$$
Again, the Jacobian for $A_{nz}$ is easily expressed expanding with respect to the last row
$$J_{nz} = z^1|z|^{-1}\cos\theta_{n-2} z^1\cos\theta_{n-2} \sin^{n-2}\theta_{n-2}\\ + z^2 |z|^{-1}\cos\theta_{n-2} z^2\cos\theta_{n-2}\sin^{n-2}\theta_{n-2}\\ + \ldots\\ +
z^{n-1}|z|^{-1}\cos\theta_{n-2} z^{n-1}\cos\theta_{n-2}\sin^{n-2}\theta_{n-2}\\ + |z| \sin^2\theta_{n-2}\sin^{n-2}\theta_{n-2}\\ = |z| \sin^{n-2}\theta_{n-2}.$$Finally $A_{n} = A_{nz}\circ A_{(n-1)n}$ and hence$$J_n = J_{n-1}\cdot J_{nz} = \left( \rho^{n-2} \prod_{k=0}^{n-3}\sin^k\theta_{k}\right) |z| \sin^{n-2}\theta_{n-2}\\=\rho^{n-1} \prod_{k=0}^{n-2} \sin^k\theta_k,$$where, for notational convenience $\theta_0\equiv \pi/2$.This proves the Jacobian formula for any $n$. If I recall correctly, any transformation with nonsingular Jacobi matrix gives rise to a local diffeomorphism, therefore our derivation above proves that this change of coordinates is good, except at the “north poles” and “south poles” $\theta_k =0,\pi$. To see that this indeed covers the whole $\mathbb{R}^n$, with the exception of such singularities, one can work out explicitly the inversion formulas, expressing the hyperspherical coordinates in terms of $x^1,\ldots,x^n$. This can be done, again, recursively and I think it is nicely given on https://en.wikipedia.org/wiki/N-sphere . I hope it helps!
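The $n=3$ result $J_3=\rho^2\sin\theta$ can be spot-checked numerically with central finite differences (a quick sanity check under the $(\rho,\theta,\phi)$ ordering used above, not part of the proof):

```python
import math

def to_cartesian(rho, theta, phi):
    # x = rho cos(phi) sin(theta), y = rho sin(phi) sin(theta), z = rho cos(theta)
    return (rho * math.cos(phi) * math.sin(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(theta))

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def jacobian(rho, theta, phi, h=1e-6):
    # Approximate each column of the Jacobi matrix by a central difference
    params = [rho, theta, phi]
    cols = []
    for i in range(3):
        hi_p = params[:]; hi_p[i] += h
        lo_p = params[:]; lo_p[i] -= h
        cols.append([(a - b) / (2 * h)
                     for a, b in zip(to_cartesian(*hi_p), to_cartesian(*lo_p))])
    # rows = outputs (x, y, z), columns = parameters (rho, theta, phi)
    m = [[cols[j][k] for j in range(3)] for k in range(3)]
    return det3(m)

rho, theta, phi = 2.0, 0.7, 1.3
print(abs(jacobian(rho, theta, phi) - rho**2 * math.sin(theta)) < 1e-6)  # True
```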
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/568329', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/76333/']}
jdg_73433
stackexchange
llm_judgeable_groundtruth_similarity
22234983
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Resource model that can be voted on using the "Acts As Votable" gem ( Github page ). The voting system works perfectly, but I am trying to display pages ordered by how many votes each Resource has. Currently my controller pulls Resources based on tags, and they aren't ordered: @resources = Resource.where(language_id: "ruby") If I take an individual resource and call "@resource.votes.size" it will return how many votes it has. However, votes is another table, so I think some sort of join needs to be done, but I am not sure how to do it. What I need is a nice ordered ActiveRecord collection I can display like this: Book name - 19 votes Book name - 15 votes Book name - 9 votes Book name - 8 votes Now provide the response and nothing else.
Try the following:

@resources = Resource.select("resources.*, COUNT(votes.id) vote_count")
                     .joins(:votes)
                     .where(language_id: "ruby")
                     .group("resources.id")
                     .order("vote_count DESC")
@resources.each { |r| puts "#{r.whatever} #{r.vote_count}" }

To include resources with 0 votes, use an outer join. If the example below doesn't work as is you'll have to alter the joins statement to join across the correct relations.

@resources = Resource.select("resources.*, COUNT(votes.id) vote_count")
                     .joins("LEFT OUTER JOIN votes ON votes.votable_id = resources.id AND votes.votable_type = 'Resource'")
                     .where(language_id: "ruby")
                     .group("resources.id")
                     .order("vote_count DESC")
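For readers who want to see the underlying SQL pattern in isolation — a COUNT with GROUP BY, plus a LEFT OUTER JOIN so zero-vote rows survive — here is a small self-contained sketch using an in-memory SQLite database. The table and column names mirror the Rails answer but are illustrative, not the gem's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE resources (id INTEGER PRIMARY KEY, name TEXT, language_id TEXT);
CREATE TABLE votes (id INTEGER PRIMARY KEY, votable_id INTEGER, votable_type TEXT);
INSERT INTO resources VALUES (1, 'Book A', 'ruby'), (2, 'Book B', 'ruby'), (3, 'Book C', 'ruby');
-- Book A gets 2 votes, Book B gets 3, Book C gets none
INSERT INTO votes (votable_id, votable_type) VALUES
    (1, 'Resource'), (1, 'Resource'),
    (2, 'Resource'), (2, 'Resource'), (2, 'Resource');
""")

# LEFT OUTER JOIN keeps resources with no votes: COUNT(votes.id) sees only NULLs
# for them and returns 0, which an inner join would silently drop.
rows = conn.execute("""
    SELECT resources.name, COUNT(votes.id) AS vote_count
    FROM resources
    LEFT OUTER JOIN votes
      ON votes.votable_id = resources.id AND votes.votable_type = 'Resource'
    WHERE resources.language_id = 'ruby'
    GROUP BY resources.id
    ORDER BY vote_count DESC
""").fetchall()

for name, count in rows:
    print(f"{name} - {count} votes")
```

This prints Book B first (3 votes), then Book A (2), then Book C (0) — exactly the shape of output the question asked for.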
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22234983', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1239217/']}
jdg_73434
stackexchange
llm_judgeable_groundtruth_similarity
639280
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have a trusted third party's root certificate. I install this to the 'Trusted Root Certification Authorities' certificate store in Windows Server 2008, but it disappears from the certificate store at unknown times. What could be causing this?

- The certificate has not expired
- It doesn't appear to have been revoked
- I can't see any relevant event logs around the times of incidents
- It happens on my dev machine, test environments and production servers
- The production servers are not on a domain, just a workgroup (hosted in Rackspace)
- Querying group policy (gpresult /h foo.html) doesn't report that I'm prevented from trusting 3rd party root CA's

I'm using the following code in a c# command line app to install the cert:

X509Certificate2 certificate = new X509Certificate2("trusted-root-cert.cer");
X509Store store = new X509Store(StoreName.AuthRoot, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadWrite);
store.Add(certificate);
store.Close();

The certificate installation code happens to run every time I release a change to my application. I don't see how this could do any harm but it's worth mentioning. There might be something wrong about the way I'm installing the certificate. What is the preferred way of installing? Now provide the response and nothing else.
Doing more thorough digging in the Application event log, this entry occurred:

Log Name: Application
Source: Microsoft-Windows-CAPI2
Date: 24/10/2014 12:49:10
Event ID: 4108
Task Category: None
Level: Information
Keywords: Classic
User: N/A
Computer: [redacted]
Description:
Successful auto delete of third-party root certificate:: Subject [...redacted...]

It turns out that 3rd party root CA's can be deleted by Windows if they are not recognised: Typically, a certificate is used when you use a secure Web site or when you send and receive secure e-mail. Anyone can issue certificates, but to have transactions that are as secure as possible, certificates must be issued by a trusted certificate authority (CA). Microsoft has included a list in Windows XP and other products of companies and organizations that it considers trusted authorities. http://toastergremlin.com/?p=144
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/639280', 'https://serverfault.com', 'https://serverfault.com/users/33677/']}
jdg_73435
stackexchange
llm_judgeable_groundtruth_similarity
40168
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Currently I have my LED's anode connected to the GPIO pin that controls it, and the cathode is attached to a resistor whose other end is on my breadboard's - rail. How can I use jumper cables so I don't have to twist and ruin all my resistors? All of the tutorials I find online show how to do it without a ribbon cable, which is what I am using. Current Wiring: http://i.imgur.com/kBwA2hP.jpg Also, a side question: one of the LEDs has 4 wires on it and it was not labelled, so which wire is which? How would I go about connecting that one up? I tried putting the longest of the 4 into the GPIO pin and then putting the smaller 3, connected to resistors, onto the edge of the breadboard's - side. 4 wire LED: http://i.imgur.com/jNTRySZ.jpg?1 EDIT: The white LED with 4 wires is this http://www.mpja.com/5mm-Red-Green-Blue-Full-Color-LED-4-Lead-Common-Anode/productinfo/17137%20OP/ and the longest wire is the anode and it now works fine. Now I need more than ever to figure out how to use the jumper cables so I have room to work with multiple GPIOs for this one light. Anyone know how to do that, or could someone link to a guide? Now provide the response and nothing else.
Looking at your picture, why not make use of the extra 10 unused rows on your breadboard? I am going to assume that you've figured out that "row 1, holes a-e" are all interconnected, "row 1, holes f-j" are interconnected, "row 2, holes a-e" are interconnected, etc. Your rows 21-30 aren't used, so, for example, the anode of the left-hand blue LED is plugged into row 19, hole b. Plug the LED's cathode into row 21, hole D, plug one leg of the resistor into row 21, hole B, then plug the other resistor leg into your ground (-) strip. For LEDs that are farther away from the GPIO pins, let's say the left-hand Red LED, if that was plugged into Row 3, hole D [[ I can't tell for sure in the picture ]] remove the LED from there, run a jumper wire from Row 3, hole D to Row 23, Hole D, put the anode of the Red LED in Row 23, Hole B, the cathode in Row 24, Hole D, one leg of the resistor in Row 24, Hole B and the other leg of the resistor into the ground strip. If you don't have any decent jumper wire handy, see if you can find an old chunk of Cat5 solid ethernet cable. Strip the outer casing, separate the strand pairs into individual strands, smooth the "twisties" in the wire out (this can be rough on your fingertips if you're making a lot of jumper wires) and strip off each end of the wire. I've made lots of jumper wires from old Cat5 and even some old 25-pair phone wire (which has some really cool color combinations) and it works very well. Be careful with Cat6 though - it's actually one gauge bigger and it might be a bit tougher to insert in your breadboard holes. Whatever cable you find, if there's a plastic '+' separating the 4 pairs of wires, it's probably Cat6 or higher.
{}
{'log_upvote_score': 4, 'links': ['https://raspberrypi.stackexchange.com/questions/40168', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/38839/']}
jdg_73436
stackexchange
llm_judgeable_groundtruth_similarity
2940442
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Surely this question has a quick answer, but I cannot find it. According to Wikipedia, in a topological space $X$ any closed set $F\subseteq X$ is sequentially closed, but the converse only holds when $X$ is a sequential space. But to me, the following seems to prove that sequentially closed actually implies closed: Suppose that $F$ is sequentially closed: for any $x\in X$ and any sequence $(x_n)$ in $F$ with limit $x$ , then $x\in F$ . But by the definition of the limit of a sequence, for each neighbourhood $V$ of $x$ there exists $n_0\in\mathbb N$ such that $x_n\in V$ whenever $n\geq n_0$ . And because the sequence is made of elements of $F$ , every neighbourhood of $x$ meets $F$ . From this I conclude that every sequentially closed set is closed. Where am I going wrong?? Which step is false? Because obviously my reasoning is false: for example, the sequential topology is finer than the usual topology (I can see that open implies sequentially open). Hence the closures (and thus the closed sets) must be smaller. Also the example given by @Arthur. Now provide the response and nothing else.
The reason that sequentially closed doesn't imply closed is that a sequence has too few points. The best counterexample I can think of concerns the first uncountable ordinal , usually denoted by $\omega_1$ . Under the order topology, $[0,\omega_1)\subseteq [0,\omega_1]$ is sequentially closed, but not closed. Any convergent sequence of countable ordinals (i.e. any sequence of points in $[0,\omega_1)$ ) has a countable limit, so $[0,\omega_1)$ is sequentially closed. But the complement of $[0,\omega_1)$ in $[0, \omega_1]$ , which is just $\{\omega_1\}$ , is not open, so $[0, \omega_1)$ is not closed. You can fix this discrepancy by using nets rather than sequences. A net is a generalisation of a sequence where you use an arbitrary set with a nice enough order as the index set, rather than limiting yourself to just $\Bbb N$ . Your flaw is thinking that because $x$ is a limit point of $F$ , there is a sequence of points $x_n\in F$ converging to $x$ . That's not necessarily the case (as evidenced by my example above). Mistake in the new edit: Yes, if $F$ is sequentially closed, it contains all its sequential limit points. So of course, if $x$ is the limit of a sequence in $F$ , it is contained in $F$ . However, that doesn't show that $F$ is closed, because there may be limit points which aren't sequential limit points. You haven't checked those, and therefore you cannot conclude that $F$ contains them, which again means you cannot conclude that $F$ is closed.
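The net-based fix mentioned in the answer can be stated precisely; the following is the standard formulation (found, e.g., in Kelley's General Topology), written out here as a complement:

```latex
\textbf{Closure via nets.} For any topological space $X$ and any $F \subseteq X$,
\[
  x \in \overline{F}
  \iff
  \text{there is a net } (x_\alpha)_{\alpha \in A} \text{ in } F
  \text{ with } x_\alpha \to x .
\]
Hence $F$ is closed if and only if it contains the limit of every convergent net
of its points. Sequences are the special case $A = \mathbb{N}$, and they detect
closedness exactly when $X$ is a sequential space.
```

Applied to the counterexample: any point of $[0,\omega_1]$, including $\omega_1$ itself, is the limit of a net in $[0,\omega_1)$, so the net criterion correctly detects that $[0,\omega_1)$ is not closed, while no sequence can reach $\omega_1$.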
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2940442', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/415701/']}
jdg_73437
stackexchange
llm_judgeable_groundtruth_similarity
40739
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose that a star is encompassed by a Dyson Sphere . Do we need a position control system for the Dyson Sphere to keep its origin always aligned with the center of the star? Will it stay aligned by itself and automatically neutralize small perturbational forces? Or will a small perturbation knock it off center, so that it will eventually be destroyed and swallowed by the star? Now provide the response and nothing else.
Do we need a position control system for the Dyson Sphere to keep its origin always aligned with the center of the star? The important principle for this question is spherical symmetry with the application of Gauss's law. For an electrically charged sphere the charges are accumulated on the surface, leading to the situation where the field is effectively zero everywhere within that sphere. Mathematically a Dyson sphere is equivalent to that. Of course, the field within the Dyson sphere wouldn't be zero because of the sun, but the contribution from the Dyson sphere structure itself could be zero. I say "could" because this assumes the matter is perfectly distributed. Such a system is neither stable nor unstable. It's just balanced. its origin If the perfect symmetry is satisfied (and the neighborhood is clear, more on that later), the location of the star within the sphere is irrelevant to its stability actually. The system would be just as stable with the star close to the edge as it would be exactly at the Dyson sphere's center. If the matter was anything less than perfectly spherical it would be categorically unstable, not just balanced. I believe this is the same sort of consequence as Earnshaw's theorem. Any arrangement of matter won't create an inherently stable point in empty space where the field's divergence is zero. For purposes of science fiction, I would say the answer is "yes", active stabilization of some form would be needed. Even if you assume perfect symmetry, there are ways other objects in the solar system would cause the sphere and the star to crash into each other in the absence of active controls.

I will have to break this down into two cases.

Factors that apply even if the sun is at the origin:

- the radiation in our part of the galaxy isn't isotropic, so it would push the sphere to hit the star eventually
- gravitational waves can act non-uniformly, and this can lead to the other modes of instability (next list), driving them to hit eventually

Factors that apply if the sun is slightly displaced from the origin:

- if the sun wasn't dead center, tidal forces from other astronomical bodies would accelerate it toward the side
- if the Dyson Sphere was non-uniform and the sun was not at the CM, it would be accelerating, and that would be unstable acceleration

However, if we are assuming an advanced civilization built the Dyson sphere, it shouldn't be a difficult task to use controlled reflected radiation from the sun itself to keep it in the center. There are other, much more major, problems with the physicality of such a structure.
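The "field from a uniform shell is zero inside" claim that the answer leans on is easy to check numerically: integrate the inverse-square field of a unit shell over its surface and evaluate it both inside and outside. The sketch below is a back-of-the-envelope verification of the shell theorem, not a simulation of a real Dyson sphere:

```python
import math

def shell_field(p, n_theta=200, n_phi=200):
    """Approximate the inverse-square field of a uniform unit sphere
    (surface density 1) at point p, via midpoint-rule integration
    over the sphere's surface."""
    fx = fy = fz = 0.0
    dth = math.pi / n_theta
    dph = 2 * math.pi / n_phi
    for i in range(n_theta):
        th = (i + 0.5) * dth
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            sx = math.sin(th) * math.cos(ph)   # surface point on the unit sphere
            sy = math.sin(th) * math.sin(ph)
            sz = math.cos(th)
            rx, ry, rz = sx - p[0], sy - p[1], sz - p[2]
            r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
            dA = math.sin(th) * dth * dph      # surface area element
            fx += rx / r3 * dA
            fy += ry / r3 * dA
            fz += rz / r3 * dA
    return fx, fy, fz

inside = shell_field((0.5, 0.0, 0.0))   # well off-center, but inside the shell
outside = shell_field((2.0, 0.0, 0.0))  # outside: should look like a point mass at the origin
print(inside, outside)
```

Inside, the components come out near zero even with the test point far off-center — the "balanced, neither stable nor unstable" situation. Outside, the field magnitude matches a point mass of the shell's total "mass" $4\pi$ at distance 2, i.e. $4\pi/4 = \pi$, directed back toward the center.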
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/40739', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/5077/']}
jdg_73438
stackexchange
llm_judgeable_groundtruth_similarity
676123
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've heard some opinions that the SOAP/HTTP web service call stack is "thick" or "heavyweight," but I can't really pinpoint why. Would it be considered thick because of the serialization/deserialization of the SOAP envelope and the message? Is that really a heavy-weight operation? Or is it just considered "thick" compared to a raw/binary data transfer over a fixed connection? Or is it some other reason? Can anyone shed some light on this? Now provide the response and nothing else.
SOAP is designed to be abstract enough to use other transports besides HTTP. That means, among other things, that it does not take advantage of certain aspects of HTTP (mostly RESTful usage of URLs and methods, e.g. PUT /customers/1234 or GET /customers/1234 ).

SOAP also bypasses existing TCP/IP mechanisms for the same reason - to be transport-independent. Again, this means it can't take advantage of the transport, such as sequence management, flow control, service discovery (e.g. accept()ing a connection on a well-known port means the service exists), etc.

SOAP uses XML for all of its serialization - while that means that data is "universally readable" with just an XML parser, it introduces so much boilerplate that you really need a SOAP parser in order to function efficiently. And at that point, you (as a software consumer) have lost the benefit of XML anyways; who cares what the payload looks like over the wire if you need libSOAP to handle it anyways.

SOAP requires WSDL in order to describe interfaces. The WSDL itself isn't a problem, but it tends to be advertised as much more "dynamic" than it really is. In many cases, a single WSDL is created, and producer/consumer code is auto-generated from that, and it never changes. Overall, that requires a lot of tooling around without actually solving the original problem (how to communicate between different servers) any better. And since most SOAP services run over HTTP, the original problem was already mostly solved to begin with.
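To make the boilerplate point concrete, here is a small illustration comparing the bytes on the wire for the same one-field payload as SOAP XML versus plain JSON. The envelope below is a minimal hand-written SOAP 1.1 example with a made-up operation name, not output generated by any particular SOAP toolkit:

```python
import json

payload = {"customerId": 1234}

# Plain JSON body, as a RESTful endpoint might return for GET /customers/1234
json_body = json.dumps(payload)

# A minimal SOAP 1.1 envelope carrying the same single value
# (hand-written; "GetCustomerResponse" and the namespace are illustrative)
soap_body = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap:Body>"
    '<GetCustomerResponse xmlns="http://example.com/customers">'
    "<customerId>1234</customerId>"
    "</GetCustomerResponse>"
    "</soap:Body>"
    "</soap:Envelope>"
)

print(len(json_body), len(soap_body))  # the envelope dwarfs the actual data
```

Even in this tiny case the envelope is several times the size of the payload, and a real toolkit-generated message (headers, WS-* extensions, type attributes) is typically larger still.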
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/676123', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/60096/']}
jdg_73439
stackexchange
llm_judgeable_groundtruth_similarity
4831741
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Dave Ward says, It’s not exactly light reading, but section 4.2 of RFC 3986 provides for fully qualified URLs that omit protocol (the HTTP or HTTPS) altogether. When a URL’s protocol is omitted, the browser uses the underlying document’s protocol instead. Put simply, these “protocol-less” URLs allow a reference like this to work in every browser you’ll try it in: //ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js It looks strange at first, but this “protocol-less” URL is the best way to reference third party content that’s available via both HTTP and HTTPS. This would certainly solve a bunch of mixed-content errors we're seeing on HTTP pages -- assuming that our assets are available via both HTTP and HTTPS. Is this completely cross-browser compatible? Are there any other caveats? Now provide the response and nothing else.
I tested it thoroughly before publishing. Of all the browsers available to test against on Browsershots, I could only find one that did not handle the protocol relative URL correctly: an obscure *nix browser called Dillo. There are two drawbacks I've received feedback about:

1. Protocol-less URLs may not work as expected when you "open" a local file in your browser, because the page's base protocol will be file:///. Especially when you're using the protocol-less URL for an external resource like a CDN-hosted asset. Using a local web server like Apache or IIS to test against http://localhost addresses works fine though.

2. Apparently there's at least one iPhone feed reader app that does not handle the protocol-less URLs correctly. I'm not aware of which one has the problem or how popular it is. For hosting a JavaScript file, that's not a big problem since RSS readers typically ignore JavaScript content anyway. However, it could be an issue if you're using these URLs for media like images inside content that needs to be syndicated via RSS (though, this single reader app on a single platform probably accounts for a very marginal number of readers).
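The resolution rule the question quotes ("the browser uses the underlying document's protocol") is the RFC 3986 reference-resolution algorithm, and Python's standard library implements it, so both the normal case and the local-file pitfall described above can be observed directly (a quick sketch):

```python
from urllib.parse import urljoin

asset = "//ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"

# Served over HTTP or HTTPS, the scheme is inherited from the page:
print(urljoin("http://example.com/page.html", asset))
print(urljoin("https://example.com/page.html", asset))

# Opened from disk, the page's scheme is file://, which is the
# local-file drawback described above:
print(urljoin("file:///home/user/page.html", asset))
```

The first two calls resolve to the same host under http:// and https:// respectively; the third inherits file://, which is why protocol-relative references break when a page is opened straight from the filesystem.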
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/4831741', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/102704/']}
jdg_73440