source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10)
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 37363121 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to better understand services by means of a very simple application which gets and updates the value of string within a service and displays it in a component. Here's the service: import {Injectable} from 'angular2/core';@Injectable()export class SharedService { dataString: string; insertData(data) { this.dataString = data }} Here's the main 'app' component: import {Component} from 'angular2/core';import {OtherComponent} from './other';import {SharedService} from'./shared.service';@Component({ selector: 'my-app', providers: [SharedService], directives: [OtherComponent], template: ` <input type="text" #someValue> <button (click)="setSharedValue(someValue.value)">Change value in shared service</button> <br><br> <other></other> `})export class AppComponent { constructor(private _sharedService: SharedService){} setSharedValue(value){ this._sharedService.insertData(value); }} ...and here's the 'other' component: import {Component, OnInit} from 'angular2/core';import {SharedService} from './shared.service';@Component({ selector : "other", template : ` I'm the other component. The shared data is: <p>{{data}}</p> `,})export class OtherComponent implements OnInit{ data: string; constructor(private _sharedService: SharedService){} ngOnInit() { this.data = this._sharedService.dataString; }} Here's a plunkr. When text is added to the input and the button is clicked I want to display the value entered in the 'other' component, just to demonstrate getting and setting the service data. However, it's just silently failing. Can anyone explain what I'm doing wrong? Thanks
Now provide the response and nothing else.
| Your code is correct; it's just that your other component does not know that you updated the service, so it never requests the new data. For this case Angular2 uses Observables:

The Service:

```typescript
import {Injectable} from 'angular2/core';
import {Subject} from 'rxjs/Subject';

@Injectable()
export class SharedService {
  // Observable string source
  private dataStringSource = new Subject<string>();
  // Observable string stream
  dataString$ = this.dataStringSource.asObservable();
  // Service message commands
  insertData(data: string) {
    this.dataStringSource.next(data);
  }
}
```

The Main Component:

```typescript
@Component({
  selector: 'my-app',
  providers: [SharedService],
  directives: [OtherComponent],
  template: `
    <input type="text" #someValue>
    <button (click)="setSharedValue(someValue.value)">Change value in shared service</button>
    <br><br>
    <other></other>
  `
})
export class AppComponent {
  constructor(private _sharedService: SharedService) {}

  setSharedValue(value) {
    this._sharedService.insertData(value);
  }
}
```

The Other Component:

```typescript
@Component({
  selector: 'other',
  template: `
    I'm the other component. The shared data is:
    <p>{{data}}</p>
  `,
})
export class OtherComponent implements OnInit {
  data: string;

  constructor(private _sharedService: SharedService) {}

  ngOnInit() {
    this._sharedService.dataString$.subscribe(data => {
      this.data = data;
    });
  }
}
```

The updated plunker can be found here: https://plnkr.co/edit/neM6EdYYUkGkRpF0fKGS?p=preview

Interaction between components in Angular2 is documented here: https://angular.io/docs/ts/latest/cookbook/component-communication.html#!#bidirectional-service | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37363121', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/405781/']} | jdg_85341 |
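The push-based fix in the answer above is not Angular-specific. The same publish/subscribe idea can be sketched in a few lines of plain Python; the minimal `Subject` class below is a hypothetical stand-in for the rxjs one, just to show why a subscriber sees every update while a one-time read does not:

```python
class Subject:
    """A minimal observable stream: every subscriber is called on each next()."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def next(self, value):
        # Push the new value to everyone who subscribed.
        for callback in self._subscribers:
            callback(value)


class SharedService:
    def __init__(self):
        self.data_string = Subject()

    def insert_data(self, data):
        self.data_string.next(data)


# The "other component" subscribes once and is pushed every later update,
# instead of copying the value a single time in ngOnInit.
service = SharedService()
received = []
service.data_string.subscribe(received.append)
service.insert_data("Hello World")
print(received)  # ['Hello World']
```

The original bug corresponds to reading `service` state once at startup: the snapshot never changes, whereas the subscription above fires on every `insert_data` call.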
stackexchange | llm_judgeable_groundtruth_similarity | 9530 |
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When I edit large documents, I would like to see where I'm at by seeing the outline (with no content) in a separate buffer, like the TOC on the left when you read a PDF file. (see below) In org-mode it's possible to expand/collapse the outline. But is it possible to have a static outline on the left (or right) in a separate buffer, so that when you click on the headings, the other buffer moves to that position? Kinda like this but for org-mode?

[Edit] The clone-indirect-buffer is very close to what I want. The missing piece of the puzzle is to jump to the same location when clicking a heading (or any point, really). For this I have tried to write some code: Move to other cloned buffer to same point? (sync position of indirect buffers) (org-mode) But it doesn't function if content is collapsed. If that can be made to work, then clone-indirect-buffer is a complete solution to this.

[Edit2 Solution] The code in the link above and in the answer below combine nicely to solve the jumping back and forth.

```elisp
;; First call 'clone-indirect-buffer'. Then...
;; This function works between a buffer and its clone.
(defun my/goto-same-spot-in-other-buffer ()
  "Go to the same location in the other buffer.
Useful for when you have cloned indirect buffers."
  (interactive)
  (let ((my/goto-current-point (point)))
    (other-window 1)
    (goto-char my/goto-current-point)
    (when (invisible-p (point))
      (org-reveal))))

;; This function is a clone-to-buffer jump only:
;; it does find the other buffer first, though, instead of just jumping
;; to the other window as the function above does.
(defun my/jump-to-point-and-show ()
  "Switch to a cloned buffer's base buffer and move point to the
cursor position in the clone."
  (interactive)
  (let ((buf (buffer-base-buffer)))
    (unless buf
      (error "You need to be in a cloned buffer!"))
    (let ((pos (point))
          (win (car (get-buffer-window-list buf))))
      (if win
          (select-window win)
        (other-window 1)
        (switch-to-buffer buf))
      (goto-char pos)
      (when (invisible-p (point))
        (show-branches)))))

(global-set-key (kbd "<s-mouse-1>") 'my/goto-same-spot-in-other-buffer)
(global-set-key (kbd "s-m") 'my/goto-same-spot-in-other-buffer)
(global-set-key (kbd "<C-s-mouse-1>") 'my/jump-to-point-and-show)
(global-set-key (kbd "C-s-m") 'my/jump-to-point-and-show)
```
Now provide the response and nothing else.
| A few options come to mind. The first two are speedbar, which allegedly plays well with org-mode, and minimap, although I've not used them so can't vouch for them personally. What might be the simplest option (and also the most flexible) would be to use an indirect buffer. In practice, you would go to the org buffer for which you'd like an outline, hit M-x clone-indirect-buffer (use C-u M-x clone-indirect-buffer if you'd like to control what the clone is called), and then boom, there's another copy of the buffer for you to use. Put that clone in a window or frame side by side with the original buffer and, in the clone, adjust the outline to your tastes. It doesn't get you the "click on heading in outline" functionality you mentioned, but gets you the spirit of the sidebar.

Edit: in response to your comment, here's a simple command that, when invoked from the buffer clone, will switch to the base buffer and move point to wherever the cursor was in the clone:

```elisp
(defun jump-to-point-and-show ()
  "Switch to a cloned buffer's base buffer and move point to the
cursor position in the clone."
  (interactive)
  (let ((buf (buffer-base-buffer)))
    (unless buf
      (error "You need to be in a cloned buffer!"))
    (let ((pos (point))
          (win (car (get-buffer-window-list buf))))
      (if win
          (select-window win)
        (other-window 1)
        (switch-to-buffer buf))
      (goto-char pos)
      (when (invisible-p (point))
        (show-branches)))))
```
| {} | {'log_upvote_score': 4, 'links': ['https://emacs.stackexchange.com/questions/9530', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/5350/']} | jdg_85342 |
stackexchange | llm_judgeable_groundtruth_similarity | 6877659 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've created an string tokenizer like this: stringTokenizer = CFStringTokenizerCreate( NULL , (CFStringRef)str , CFRangeMake(0, [str length]) , kCFStringTokenizerUnitSentence , userLocale); But how do I obtain those sentences now from the tokenizer? The CF String Programming Guide doesn't mention CFStringTokenizer or tokens (did a full-text search in the PDF).
Now provide the response and nothing else.
| Here is an example of CFStringTokenizer usage:

```c
CFStringRef string; // Get string from somewhere
CFLocaleRef locale = CFLocaleCopyCurrent();
CFStringTokenizerRef tokenizer = CFStringTokenizerCreate(
      kCFAllocatorDefault
    , string
    , CFRangeMake(0, CFStringGetLength(string))
    , kCFStringTokenizerUnitSentence
    , locale);

CFStringTokenizerTokenType tokenType = kCFStringTokenizerTokenNone;
unsigned tokensFound = 0;

while(kCFStringTokenizerTokenNone != (tokenType = CFStringTokenizerAdvanceToNextToken(tokenizer))) {
    CFRange tokenRange = CFStringTokenizerGetCurrentTokenRange(tokenizer);
    CFStringRef tokenValue = CFStringCreateWithSubstring(
          kCFAllocatorDefault
        , string
        , tokenRange);

    // Do something with the token
    CFShow(tokenValue);

    CFRelease(tokenValue);
    ++tokensFound;
}

// Clean up
CFRelease(tokenizer);
CFRelease(locale);
```
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6877659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/221023/']} | jdg_85343 |
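The loop shape above (advance to the next token, get its range, slice the substring) is not specific to Core Foundation. As a rough sketch only, using a naive regex splitter that is nowhere near CFStringTokenizer's locale-aware segmentation, the same iteration pattern looks like this in Python:

```python
import re

def sentence_tokens(text):
    """Yield ((start, end), substring) pairs, roughly analogous to advancing
    a tokenizer with kCFStringTokenizerUnitSentence and reading each
    token's range. The regex is a deliberately naive stand-in."""
    for match in re.finditer(r"[^.!?]+[.!?]?\s*", text):
        yield (match.start(), match.end()), match.group()

for rng, value in sentence_tokens("One. Two! Three?"):
    print(rng, repr(value))
```

The ranges partition the input exactly, so joining the substrings reproduces the original string, which mirrors how CFStringTokenizerGetCurrentTokenRange indexes back into the source CFString.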
stackexchange | llm_judgeable_groundtruth_similarity | 6798304 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible that Entity Framework supports MongoDb database? Maybe somebody will write a EF MongoDb Provider?
Now provide the response and nothing else.
| Short answer: no. It's for sure possible, but not reasonable. MongoDB is a document database and does not support any physical relations between collections. EF is a good fit for relational databases like SQL, MySQL, etc. MongoDB works faster with embedded documents. So they are just two different approaches. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6798304', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/481241/']} | jdg_85344 |
stackexchange | llm_judgeable_groundtruth_similarity | 5129211 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Qt4.6 and I have a QComboBox with a QCompleter in it. The usual functionality is to provide completion hints (these can be in a dropdown rather than inline - which is my usage) based on a prefix. For example, given chicken soupchilli peppersgrilled chicken entering ch would match chicken soup and chilli peppers but not grilled chicken . What I want is to be able to enter ch and match all of them or, more specifically, chicken and match chicken soup and grilled chicken . I also want to be able to assign a tag like chs to chicken soup to produce another match which is not just on the text's content. I can handle the algorithm but, Which of QCompleter's functions do I need to override? I'm not really sure where I should be looking...
Now provide the response and nothing else.
| Based on @j3frea's suggestion, here is a working example (using PySide). It appears that the model needs to be set every time splitPath is called (setting the proxy once in setModel doesn't work).

```python
combobox.setEditable(True)
combobox.setInsertPolicy(QComboBox.NoInsert)

class CustomQCompleter(QCompleter):
    def __init__(self, parent=None):
        super(CustomQCompleter, self).__init__(parent)
        self.local_completion_prefix = ""
        self.source_model = None

    def setModel(self, model):
        self.source_model = model
        super(CustomQCompleter, self).setModel(self.source_model)

    def updateModel(self):
        local_completion_prefix = self.local_completion_prefix

        class InnerProxyModel(QSortFilterProxyModel):
            def filterAcceptsRow(self, sourceRow, sourceParent):
                index0 = self.sourceModel().index(sourceRow, 0, sourceParent)
                return local_completion_prefix.lower() in self.sourceModel().data(index0).lower()

        proxy_model = InnerProxyModel()
        proxy_model.setSourceModel(self.source_model)
        super(CustomQCompleter, self).setModel(proxy_model)

    def splitPath(self, path):
        self.local_completion_prefix = path
        self.updateModel()
        return ""

completer = CustomQCompleter(combobox)
completer.setCompletionMode(QCompleter.PopupCompletion)
completer.setModel(combobox.model())
combobox.setCompleter(completer)
```
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5129211', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/123415/']} | jdg_85345 |
stackexchange | llm_judgeable_groundtruth_similarity | 782458 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
The situation In our organisation I made a GPO that creates a scheduled task. This task triggers at logon of two user accounts. It executes a powershell script that changes the DNS servers for the network connection. (To block some websites for these users, using dnsmasq. I know this is not a bulletproof solution, but its good enough.) The action for the scheduled task is this command: C:\Windows\System32\WindowsPowerShell\v1.0\Powershell.exe And these are the parameters: -ExecutionPolicy Bypass –NoProfile –Command "& {C:\ProgramData\ORGNAME\scripts\SetDNS.ps1}" > C:\ProgramData\ORGNAME\scripts\SetDNS.log As you can see, the output gets sent to a log file. This is the content of the script: $wmi = get-wmiobject Win32_NetworkAdapterConfiguration -filter "ipenabled = 'true'"foreach($adapter in $wmi){ if($adapter.description -NotLike "*VMware*") { $adapter.SetDNSServerSearchOrder("XXX.XXX.XXX.XXX") }}invoke-expression -command "c:\windows\system32\ipconfig /flushdns" The problem The problem is that this works fine, approximately 9 out of 10 times. When it doesn't work, the task scheduler still reports exit code 0, but it seems the script does not even begin to execute because, nothing happens and the log file is not created. Some extra info The task runs under the SYSTEM account It runs with highest privilleges When the task is ran on demand it works fine All computers run Windows 7 enterprise (x64) Some things i've tried I thought maybe the task scheduler was triggering the script too fast and some things might not yet have initialized so i tried setting a 30s delay. Re-running the task every 5 minutes for 15 minutes. Restarting the task when it fails, this obviously doesn't work, since powershell.exe seems to return error code 0.
Now provide the response and nothing else.
| I think the exit code Task Scheduler is reporting is for running powershell.exe, not your script. Try changing it to this:

```powershell
-ExecutionPolicy Bypass –NoProfile –Command "& {C:\ProgramData\ORGNAME\scripts\SetDNS.ps1; exit $LastExitCode}" > C:\ProgramData\ORGNAME\scripts\SetDNS.log
```

$LastExitCode holds the exit code of your script; you have to bubble it up one level to the Task Scheduler. It would also be a good idea to add some error handling to your script and use Exit statements to set the actual exit codes. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/782458', 'https://serverfault.com', 'https://serverfault.com/users/73714/']} | jdg_85346 |
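The underlying pitfall (a wrapper process reporting its own exit code rather than the child's) can be demonstrated with any interpreter. Here is a Python sketch using stand-in commands rather than the actual PowerShell invocation; the inner `sys.exit(3)` plays the role of a failing SetDNS.ps1:

```python
import subprocess
import sys

# Wrapper that ignores the child's result: it reports success (0) to its
# caller, just like powershell.exe returning 0 even though the script failed.
IGNORING = (
    "import subprocess, sys; "
    "subprocess.run([sys.executable, '-c', \"import sys; sys.exit(3)\"]); "
    "sys.exit(0)"
)

# Wrapper that forwards the child's code, the moral equivalent of appending
# `; exit $LastExitCode` to the PowerShell command line.
FORWARDING = (
    "import subprocess, sys; "
    "r = subprocess.run([sys.executable, '-c', \"import sys; sys.exit(3)\"]); "
    "sys.exit(r.returncode)"
)

wrapper_ok = subprocess.run([sys.executable, "-c", IGNORING])
wrapper_fwd = subprocess.run([sys.executable, "-c", FORWARDING])
print(wrapper_ok.returncode)   # 0 - the failure is invisible to the caller
print(wrapper_fwd.returncode)  # 3 - the failure is surfaced
```

This is why Task Scheduler's "retry on failure" never fired: from its point of view the swallowing wrapper always succeeded.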
stackexchange | llm_judgeable_groundtruth_similarity | 7538444 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Google Map API V3 and I noticed there are a lot of markers which are here even though I don't need them. For examples, some schools or other places with InfoWindows appearing when clicking on them. Is there any way I can remove them or is it just not possible?
Now provide the response and nothing else.
| The only markers that should show up on the map are those you add yourself. Care to share your code or a page where we can see this happening?

Update: ok, these aren't really 'markers' in the normal sense of the word, they're just points of interest, which happen to behave like markers in that you can click on them and see infowindows. It seems to me that these might be of the class MapTypeStyleFeatureType, probably of types like poi.medical, poi.park, transit.station.rail and so on. I wonder if you could use the MapTypeStyle. Maybe something like this:

```javascript
var myStyles = [
  {
    featureType: "poi",
    elementType: "labels",
    stylers: [
      { visibility: "off" }
    ]
  }
];

var myOptions = {
  zoom: 10,
  center: homeLatlng,
  mapTypeId: google.maps.MapTypeId.ROADMAP,
  styles: myStyles
};
```

You might also want to look at the Styled Map Wizard.

Update, July 2016: The Maps API also now has an option you can specify in the MapOptions, clickableIcons, which if you set to false, the icons for these POIs will appear but clicking them doesn't open Google's infowindows. This saves you having to set the styles to hide the icons unless you want to, if all you need to do is prevent the clicks opening the infowindows. Just set clickableIcons: false in the options you initialise the Map with. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7538444', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/943524/']} | jdg_85347 |
stackexchange | llm_judgeable_groundtruth_similarity | 102617 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Something fascinating is happening at the moment. force[[All, 6]]={0.5, 0.55, 0.56, 0.57, 0.58, 0.76, 0.76, 0.76, 0.76, 0.76, \0.76, 0.76, 0.76, 0.76, 0.76, 0.79, 0.79, 0.79, 0.79, 0.79, 0.8, 0.8, \0.8, 0.9, 0.96, 0.97, 0.98, 0.99} Above you will find a table of my numbers. I want to check if the number on i th place is even or not! I tried to do that with the following code: i = 2;EvenQ[IntegerPart[force[[i, 6]]*100]]Out[164]= False and the output is ok, since the second number in the table is 0.55 which is odd and not even. However, 4th number in the table is also odd, yet not according to mathematica: i = 4;EvenQ[IntegerPart[force[[i, 6]]*100]]Out[162]= True Dear god! I am a bit lost what went wrong here and what to do...
Now provide the response and nothing else.
| This is because you are working with floating point numbers. Welcome to the world of floating point arithmetic! The computer represents floating point numbers in binary, not decimal. The actual number stored for 0.57 is not precisely 0.57 , but slightly smaller. Why? Because 0.57 cannot be represented exactly in binary, just like 1/3 = 0.33333333... cannot be represented exactly in decimal. 0.33333333 is smaller than 1/3 , right? Thus IntegerPart[0.57*100] is 56 and not 57 . You can get the exact binary representation using RealDigits[0.57, 2] . | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/102617', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/19601/']} | jdg_85348 |
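The arithmetic in the answer above can be checked in any language with IEEE-754 doubles; Mathematica, Python, C, and JavaScript all store the same bits for 0.57. A quick Python reproduction of the IntegerPart[0.57*100] surprise:

```python
# 0.57 has no exact binary representation; the stored double is slightly
# smaller, so truncating 0.57 * 100 yields 56, not 57.
print(repr(0.57 * 100))  # 56.99999999999999
print(int(0.57 * 100))   # 56

# 0.55 * 100 happens to land just above 55, so truncation gives 55 and the
# original parity check looked correct for that entry.
print(int(0.55 * 100))   # 55

# An exact-decimal alternative sidesteps the issue entirely.
from decimal import Decimal
print(int(Decimal("0.57") * 100))  # 57
```

The practical fix in Mathematica is the same idea: round before truncating (e.g. `Round[x*100]`) or work with exact quantities instead of machine reals.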
stackexchange | llm_judgeable_groundtruth_similarity | 56406610 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need help. We need to insert data between two dates in the database using C# on button click For example, we have two calendars, Calendar1 & Calendar2 . Now from Calendar1 , we choose date 01/12/2018 And from Calendar2 , we choose date 30/12/2018 When we click the "Submit" button, we need to insert the data into the database 01/12/2018 to 30/12/2018 Ex. 01/12/201802/12/201803/12/2018
Now provide the response and nothing else.
| jsonEncode requires a Map<String, dynamic>, not a Week object. Calling your toJson() method should do the trick:

```dart
var json = jsonEncode(week.toJson());
```

However, keep in mind that your toJson() method is also incorrect, as things like _goalList and the dates are still objects, not Maps or Lists. You'll need to implement toJson methods on those as well.

To answer your specific questions: Because Dart is not JavaScript/TypeScript. Dart checks types at runtime, therefore you have to explicitly tell it how to convert things - also there is no reflection in Dart, so it can't figure it out by itself. You can use a library that uses code generation to do these things automatically for you - it still won't be possible at runtime though - read more about JSON serialization. The easiest way would be to implement the methods directly in the classes, as that's where you have access to them in your root object. Keep in mind that the structure jsonEncode needs is a Map<String, dynamic>, but the dynamic part really means List<dynamic>, Map<String, dynamic> or a primitive that is JSON compatible, such as String or double - if you try to imagine how such a nested structure of said types looks, you'll realise that it's basically JSON. So when you do something like 'goalList': _goalList, you're giving it an object, which is not one of the allowed types. Hope this clears things up a bit. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/56406610', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9521009/']} | jdg_85349 |
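The rule that every nested object must reduce to maps, lists, and primitives can be sketched with Python's json module as well; the Week and Goal classes below are hypothetical stand-ins for the Dart ones, with to_dict playing the role of toJson:

```python
import json

class Goal:
    def __init__(self, name):
        self.name = name

    def to_dict(self):
        return {"name": self.name}

class Week:
    def __init__(self, goals):
        self.goals = goals

    def to_dict(self):
        # Each nested object must itself become a dict/list of primitives;
        # handing json.dumps the raw objects fails, just like jsonEncode.
        return {"goalList": [g.to_dict() for g in self.goals]}

week = Week([Goal("run"), Goal("read")])
print(json.dumps(week.to_dict()))  # {"goalList": [{"name": "run"}, {"name": "read"}]}

# The failing analogue: json.dumps(week) raises TypeError (not serializable),
# the counterpart of jsonEncode choking on a bare Week object.
```

The symmetry with the Dart answer is exact: the serializer only understands the primitive tree, so conversion has to recurse through every owned object before encoding.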
stackexchange | llm_judgeable_groundtruth_similarity | 173974 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my existing (Pre-SVN 1.5) merge strategy, we create a copy of the Trunk (called BasePoint) at the moment of branch-creation for referencing later during the merge. When we need to merge a branch back into the trunk, we perform 2 operations. Merge from BasePoint to LatestTrunk (Trunk has likely moved on since the original branch) into Working copy of Branch and then commit. At this point we typically check that the merge into the branch has not damaged anything Merge from LatestTrunk to LatestBranch back into Working copy of trunk and then commit. Documentation suggests that I use the new reintegrate merge on the Trunk and Merge from the Branch. Do I need to merge from the trunk into the dev branch first or is this included in the new reintegrate option? To put it another way, does the new merge --reintegrate functionality represent 'each of my previous merges' or 'the whole operation' ? (FWIW I am using TortoiseSVN 1.5.1)
Now provide the response and nothing else.
| The short answer is: you still have to do both steps. The SVN book explains the process for merging as:

1. svn merge http://trunk/path while in a branch working copy
2. svn merge --reintegrate http://branch/path while in a trunk working copy

Notice the lack of revision numbers. This probably doesn't feel like a huge win. The new coolness is the ability to re-run the merge as you are coding in your branch, allowing you to keep the branch up to date with changes in trunk (without recording revision numbers by hand!). SVN keeps track of what needs to be merged in from trunk and what changes are unique to the branch. When you are done with the branch, --reintegrate uses that data to automatically merge only the branch changes back to trunk. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/173974', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11356/']} | jdg_85350 |
stackexchange | llm_judgeable_groundtruth_similarity | 33211287 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Assume I define a class Foo , which does not implement a default constructor.In addition, I have a class Bar , which "owns" an instance of Foo : class Foo() { private: int m_member; public: Foo( int value ) : m_member(value) { }};class Bar() { private: Foo m_foo; public: Bar( /* ... */ ) { int something; /* lots of code to determine 'something' */ /* should initialize m_foo to 'Foo(something)' here */ }}; The code as shown won't run, since Bar is trying to call the default constructor of Foo . Now what I am trying to do is to have the constructor of Bar first determine something and then pass the result to the constructor of Foo . One way to solve this is to have Bar only own a reference/pointer to Foo and initialize it after m_something was determined. However, I'd like to avoid that to make clear that the lifetime of m_foo is completely dependent on the lifetime of the owning class. Another way would be to implement a default constructor in Foo and set the value later, which I would also like to avoid, since any instance of Foo should have a valid value for it's member (at any time). What is the proper way to implement this? Am I stuck with a reference/pointer here?
Now provide the response and nothing else.
| The best idea will be to create a helper function that calculates something, and then just initialize m_foo in the constructor initializer list:

```cpp
class Bar {
  private:
    Foo m_foo;

  public:
    Bar( /* ... */ )
      : m_foo(calculate_something())
    {
    }

  private:
    static int calculate_something()
    {
        int something = 0;
        // lots of code to calculate something
        return something;
    }
};
```
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33211287', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3672544/']} | jdg_85351 |
stackexchange | llm_judgeable_groundtruth_similarity | 551082 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm learning File Handling in C++, but there is a problem here. I am trying to read a file. This code is meant to output Hello World, but it outputs 0x22fed8.

```cpp
#include <iostream>
#include <fstream>
using namespace std;

int main() {
    fstream file;
    file.open("test.txt", ios::in | ios::out);
    file << "Hello World";
    cout << file;
    file.close();
    return 0;
}
```

What am I doing wrong?
Now provide the response and nothing else.
| Simple solution

As others have pointed out, directly printing a file to a stream doesn't work. Printing the file contents would require opening another stream that reads from the file, or re-setting your stream's read pointer to the beginning and then reading the whole file again (as others have shown). C++ doesn't do this automatically but you can do it manually (here, opening a new stream):

```cpp
ifstream ifs("filename");
```

Now, writing the file contents to another stream is a trivial addition. Instead of writing the file, simply write the file buffer:

```cpp
cout << ifs.rdbuf() << endl;
```

That's all! No loop needed to read the file line by line.

Testing for valid streams

While we're on the subject of loops, beware of code that reads files in a loop in the following manner:

```cpp
while ( !file.eof() )
```

This code produces an endless loop when there's a reading error. This can happen in many, many situations. Consider e.g. that the file is deleted while you read it, or that someone removes the USB device containing the file, or that the file is wrongly formatted. All these cases would create an infinite loop here. Never only test for eof in a stream. Luckily, the solution to this problem is also quite simple. Furthermore, it explains why your original code yielded such a weird result. In fact, streams in C++ have an implicit conversion to a bool-like type. For reasons explained elsewhere (cue: safe bool idiom), it is actually converted to void*. This makes it easy to test whether a stream is in a valid, not-at-end state and can safely be read from. Therefore, we can reformulate the loop appropriately:

```cpp
while (file) …
```

The above code relies on the conversion to void* taking place. Any non-null pointer indicates a valid stream. Now, the same happens in your code:

```cpp
cout << file;
```

Since there's no appropriate overload for operator << that takes a stream, C++ looks for other overloads and finds an overload for pointers. So it implicitly calls something like this:

```cpp
cout << static_cast<void*>(file);
```

Better solution

I've explained a simple, working solution above. However, this solution requires re-opening the file and reading it to memory again. This doubles the work required. We can make this better by introducing a new class that acts like a stream and that actually sends each output to two streams at once. This way, you can write your data both to the file and to the standard stream at the same time. No need to re-read the file. The class in itself is quite simple. The following complete code demonstrates the general principle:

```cpp
#include <iostream>
#include <fstream>

struct sinkpair {
    sinkpair(std::ostream& a, std::ostream& b) : a(a), b(b) { }

    // Forward all outputs to both streams.
    template <typename T>
    sinkpair& operator <<(T const& value) {
        a << value;
        b << value;
        return *this;
    }

    // Explicit overload needed for manipulators such as `endl`.
    sinkpair& operator <<(std::ostream& (*manip)(std::ostream&)) {
        a << manip;
        b << manip;
        return *this;
    }

private:
    std::ostream& a;
    std::ostream& b;
};

int main() {
    std::ofstream ofs("test.txt");
    sinkpair sp(std::cout, ofs);
    sp << "Hello" << std::endl;
}
```
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/551082', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/66437/']} | jdg_85352 |
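The sinkpair idea in the answer above (one writer object forwarding every write to two underlying streams) generalizes beyond iostreams. As a sketch, here is the same two-sink pattern in Python with a hypothetical class name, using only the standard library:

```python
import io
import sys

class SinkPair:
    """Forward every write to two underlying text streams, like the C++
    sinkpair forwarding operator<< to both of its ostreams."""
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def write(self, text):
        self.a.write(text)
        self.b.write(text)
        return len(text)

    def flush(self):
        self.a.flush()
        self.b.flush()

buf = io.StringIO()
sp = SinkPair(sys.stdout, buf)
print("Hello", file=sp)  # goes to stdout *and* the in-memory buffer
assert buf.getvalue() == "Hello\n"
```

Because `print(..., file=...)` only requires a `write` method, the duck-typed class slots in anywhere a text stream is expected, mirroring how the C++ version relies on overloading rather than inheritance.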
stackexchange | llm_judgeable_groundtruth_similarity | 10893747 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have this data type

```haskell
data Struct val = Empty | Exec1 val | Exec2 val
```

And two dummy functions

```haskell
apply :: Struct -> String
apply (Empty) = "matched Empty"
apply (exec struct) = "matched Exec1 or Exec2"

apply' :: Struct val -> String
apply' (Empty) = "matched Empty"
apply' (Exec1 _) = "matched Exec1"
apply' (Exec2 _) = "matched Exec2"
```

The second one works fine, but the first one causes the error "Parse error in pattern: exec". Can you please explain why I can't match on a constructor this way: apply (exec struct) = ...? It's causing a lot of boilerplate code when I have multiple constructors in my datatype and must pattern match them all separately.
Now provide the response and nothing else.
| In general, if you have several constructors that share data, then it's usually better to refactor the data declaration to something like

```haskell
data Struct val = Empty | NonEmpty StructType val
data StructType = Exec1 | Exec2
```

Now you can pattern match in apply like this

```haskell
apply :: Struct val -> String
apply (Empty) = "matched Empty"
apply (NonEmpty exec struct) = "matched Exec1 or Exec2"
```

and you can still also pattern match on a specific Exec-type

```haskell
apply' :: Struct val -> String
apply' (Empty) = "matched Empty"
apply' (NonEmpty Exec1 _) = "matched Exec1"
apply' (NonEmpty Exec2 _) = "matched Exec2"
```
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10893747', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1432699/']} | jdg_85353 |
stackexchange | llm_judgeable_groundtruth_similarity | 9095017 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to check whether two arrays contain the same data in any order. Using the imaginary compare method, I would like to do:

```ruby
arr1 = [1,2,3,5,4]
arr2 = [3,4,2,1,5]
arr3 = [3,4,2,1,5,5]
arr1.compare(arr2) #true
arr1.compare(arr3) #false
```

I used arr1.sort == arr2.sort, which appears to work, but is there a better way of doing this?
Now provide the response and nothing else.
| Sorting the arrays prior to comparing them is O(n log n). Moreover, as Victor points out, you'll run into trouble if the array contains non-sortable objects. It's faster to compare histograms, which is O(n). You'll find Enumerable#frequency in Facets, but it's straightforward to implement yourself if you prefer to avoid adding more dependencies:

```ruby
require 'facets'
[1, 2, 1].frequency == [2, 1, 1].frequency #=> true
```
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9095017', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1089642/']} | jdg_85354 |
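For comparison, the same histogram approach in Python, where `collections.Counter` plays the role of Facets' Enumerable#frequency:

```python
from collections import Counter

arr1 = [1, 2, 3, 5, 4]
arr2 = [3, 4, 2, 1, 5]
arr3 = [3, 4, 2, 1, 5, 5]

# O(n) multiset equality: build element -> count histograms and compare.
print(Counter(arr1) == Counter(arr2))  # True
print(Counter(arr1) == Counter(arr3))  # False

# Unlike sorting, this also works when elements aren't mutually orderable,
# as long as they are hashable.
print(Counter([1, "a"]) == Counter(["a", 1]))  # True
```

The caveat flips relative to sorting: histogramming requires hashable elements rather than orderable ones, so neither approach dominates for arbitrary objects.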
stackexchange | llm_judgeable_groundtruth_similarity | 48928 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Question: Given a finite group $G$, how do I find the smallest $n$ for which $G$ embeds in $S_n$? Equivalently, what is the smallest set $X$ on which $G$ acts faithfully by permutations? This looks like a basic question, but I seem not to be able to find answers or even this question in the literature. If this is known to be hard, is there at least a good strategy that would give a small (if not the smallest) $n$ for many groups? Note: I do not care whether $G$ acts transitively on $X$, so for example for $G=C_6$ the answer is $n=5$ (mapping the generator to (123)(45)), not $n=6$ (regular action). Edit: If this is not specific enough, is there a method that could find the smallest $n$ (or one close to the smallest one) for any group of size $\le 10^7$ in 5 seconds on some computer algebra system?
Now provide the response and nothing else.
| Maybe this part of the answer helps. It was sufficient for many tasks, but fails at some reasonable problems.

A permutation action is a multi-set of conjugacy classes of subgroups of a group. The degree of the action is the total of the indices of representatives from each class (with multiplicity). The kernel of the action is the intersection of all the classes, or equivalently, the intersection of the normal cores of representatives from each class.

If you have a collection of subgroups, organize them into conjugacy classes and sort them by their normal core (first by size, then by actual subgroup). For each normal core (especially starting with the small ones), choose the largest subgroup (smallest index) with that core. These largest subgroups are your ingredients. Now, roughly speaking, try all combinations: compute the index and the kernel, and keep the best one; save any improvements to disk if you plan on letting this run for a while.

If you don't have a collection of subgroups handy, then you need to use group-specific ideas to get yourself some. If the Fitting subgroup is small, then cores are unlikely to be a real problem, so you just want big subgroups that are cheap. For small ($\le 10^7$ or so) groups, you can compute local subgroups pretty cheaply. If the Fitting subgroup is large or weird, then cores will be plentiful and weird, or at least hard to avoid (a particularly awful situation is a unique minimal normal subgroup of order 2). In this case, one computes a full subgroup lattice. You can use recent versions of Magma to get a fast answer, but be sure to read the changelogs to make sure you weren't affected by a missing subgroup. At any rate, in practice this method failed to handle some of the perfect groups in the perfect group library. Perfect groups with large Fitting can require very large permutation representations, but the theoretical lower bounds were often quite a bit lower than what I was able to achieve in practice.

If your groups are finitely presented and you have no good starting permutation rep, then you may find that coset enumeration is faster for finitely presented groups than for millions-of-points permutation groups. In other words, typically speaking, you start with some permutation representation, because it is going to be faster than any finite presentation. However, for really bad permutation representations (close to regular), you may find coset enumeration is much faster. In particular, finding the index or core of a subgroup might be faster with ACE than with permutation group code. If your groups are small and solvable with low sectional rank, just compute the subgroup lattice and sort. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/48928', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3132/']} | jdg_85355
stackexchange | llm_judgeable_groundtruth_similarity | 29898 |
Below is a question asked on the forum arduino.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to take a bare ATmega328 chip, and with an Arduino as ISP, burn a sketch directly to the chip (e.g. not needing a bootloader), or is the bootloader required to run Arduino sketches? I want to embed the ATmega328 into a different device, and do not want the overhead of the entire Arduino board. I have looked at https://www.arduino.cc/en/Tutorial/ArduinoToBreadboard and https://www.arduino.cc/en/Tutorial/ArduinoISP , and want to do the "minimal circuit" approach (as my application does not require 16MHz). The tutorial described has steps to first burn a bootloader, then remove the chip from the Arduino, and then install the sketch. Ideally, I would like to skip the bootloader and removing the chip from my Arduino. Ideally, I would like to do it all from Arduino Studio (not AVRDUDE CLI). I don't mind "dedicating" my Arduino to this (as I assume it's just burning the ArduinoISP sketch whenever I want to burn to a new ATMega328). Thanks
Now provide the response and nothing else.
| It is possible to skip the Burn Bootloader step. When you do Upload Using Programmer the bootloader is overwritten and thus serves no purpose. However, there is another reason for the Burn Bootloader step: to set the correct fuses for your board selection. The fuse settings configure things on the ATmega328 such as clock speed and clock source. If the ATmega328 you are using doesn't happen to have the correct fuse settings for the board selection you're using in the IDE, then things may not work or may not work correctly. For this reason it's best to always do the Burn Bootloader step the first time you use the ATmega328 and any time you change the board selection or other settings related to that board selection (such as custom Tools menu settings). Since you're not using an external clock source for your ATmega328, make sure that you don't select a board that is configured for an external clock, as this will cause the chip to no longer work until you have connected an external clock. | {} | {'log_upvote_score': 4, 'links': ['https://arduino.stackexchange.com/questions/29898', 'https://arduino.stackexchange.com', 'https://arduino.stackexchange.com/users/26494/']} | jdg_85356
stackexchange | llm_judgeable_groundtruth_similarity | 4767523 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Perl variable I populate from the database. Its name is $path . I need to get another variable $file which has just the filename from the pathname. I tried: $file = $path =~ s/.*\///; I am very new to Perl.
Now provide the response and nothing else.
| Why reinvent the wheel? Use the File::Basename module:

use File::Basename;
...
$file = basename($path);

Why did $file = $path =~ s/.*\///; not work? =~ has higher precedence than =. So $file = $path =~ s/.*\///; is treated as:

$file = ($path =~ s/.*\///);

which does the replacement in $path and assigns either 1 (if replacement occurs) or '' (if no replacement occurs). What you want is:

($file = $path) =~ s/.*\///;

which assigns the value of $path to $file and then does the replacement in $file. But again there are many problems with this solution:

It is incorrect. A filename in Unix based systems (not sure about Windows) can contain a newline, but . by default does not match a newline. So you'll have to use the s modifier so that . matches newline as well:

($file = $path) =~ s/.*\///s;

Most importantly, it is not portable, as it assumes / is the path separator, which is not the case on some platforms like Windows (which uses \ ), Mac (which uses : ).

So use the module and let it handle all these issues for you. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4767523', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/585531/']} | jdg_85357
stackexchange | llm_judgeable_groundtruth_similarity | 7121678 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two servers for a website. One server will have the PHP code and database, and another, slower server will store files only. I need to implement this in such a way that a file uploaded through the website is stored on the other server and can then be downloaded from there. Can anyone suggest the best way to achieve that? I know the files can be transferred to the other server with PHP's FTP functions right after uploading through the website, but that doesn't seem like the correct way. Alternatively, the second server could be used only for static media content like images. Thanks
Now provide the response and nothing else.
| The best idea is to just have ALL the files, including the website's files, on the "storage server". Basically what you do is mount the "shared folder", meaning the website's files and any other files you will be needing. (Most of the time you just have a /var/www-local/ folder on the storage server, which you mount at /var/www/ on the webserver.) Make sure you mount it using NFS by adding it to the /etc/fstab file on the webserver. ( More info on NFS ) The advantage of this idea is that when you want to expand, this is easily possible: put a software load balancer (like HAProxy) in front, add as many webservers as you like, and your data stays in sync. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7121678', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/246938/']} | jdg_85358
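For illustration, an /etc/fstab entry on the webserver could look like the line below. The hostname "storage" and both paths are assumptions based on the example above; adjust them and the mount options to your setup:

```
storage:/var/www-local  /var/www  nfs  rw,hard  0  0
```

After adding the line, `mount -a` (as root) mounts it without a reboot.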
stackexchange | llm_judgeable_groundtruth_similarity | 1048220 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any method to solve for integer solution of a quadratic equation like following: $$ax^2 + bx + c = 0$$ where $a, b, c \in \mathbb{Z}$ If not is it possible for the Special case: ? $$x^2 -x + c = 0$$ where $c \in \mathbb{Z^+}$ I will prefer to have a analytical solution if it exists, other wise a polynomial time solution is also welcome (if it exists). Note: I've put $c \in \mathbb{Z^+}$ for your convenience. It's also fine if some one can give a solution for $c \in \mathbb{Z}$
Now provide the response and nothing else.
| For a general polynomial, $a_0+a_1x+\dots+a_nx^n$ , with integer coefficients, you may find all rational roots as follows. If a root is $x=\frac pq$ , with coprime $p,q$ , then $$a_0q^n+a_1pq^{n-1}+\dots+a_np^n=0$$ Thus $p$ is a divisor of $a_0$ and $q$ is a divisor of $a_n$ . You have thus a finite number of solutions for both $p$ and $q$ : respectively all divisors of $a_0$ (up to sign) and all divisors of $a_n$ . It's absolutely free of any heuristic. However, there may be many cases to check if $a_0$ or $a_n$ is a highly composite number, and anyway, you have to factor them first. And of course, if the $a_i$ share a common factor, you should factor it out first to reduce the computation (this is easy with GCD). To find only integer roots, consider only $q=1$ in the above. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1048220', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/189458/']} | jdg_85359
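A minimal sketch of the integer-root case ($q = 1$): test every divisor of the constant term $a_0$, up to sign. Function and variable names here are my own, not from the answer:

```python
def integer_roots(coeffs):
    """Integer roots of a_0 + a_1*x + ... + a_n*x^n, coeffs given low to high.

    By the rational root theorem with q = 1, any integer root divides a_0.
    """
    a0 = coeffs[0]
    if a0 == 0:
        # x = 0 is a root; factor out x and recurse on the rest (degenerate
        # all-zero polynomials are not handled here).
        rest = integer_roots(coeffs[1:]) if len(coeffs) > 1 else []
        return sorted({0} | set(rest))
    candidates = set()
    for d in range(1, abs(a0) + 1):
        if a0 % d == 0:
            candidates.update({d, -d})

    def value(x):
        return sum(c * x**k for k, c in enumerate(coeffs))

    return sorted(x for x in candidates if value(x) == 0)

# x^2 - x - 6 = (x - 3)(x + 2)
print(integer_roots([-6, -1, 1]))  # [-2, 3]
```

For the special case $x^2 - x + c = 0$ from the question, this reduces to checking the divisors of $c$.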
stackexchange | llm_judgeable_groundtruth_similarity | 40506390 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to use a lambda function to do a row by row computation. For example, create some dataframe:

import pandas as pd
import numpy as np

def myfunc(x, y):
    return x + y

colNames = ['A', 'B']
data = np.array([np.arange(10)]*2).T
df = pd.DataFrame(data, index=range(0, 10), columns=colNames)

using 'myfunc' this does work:

df['D'] = (df.apply(lambda x: myfunc(x.A, x.B), axis=1))

but this second case does not work!

df['D'] = (df.apply(lambda x: myfunc(x.colNames[0], x.colNames[1]), axis=1))

giving the error

AttributeError: ("'Series' object has no attribute 'colNames'", u'occurred at index 0')

I really need to use the second case (access the columns using the list), which gives an error. Any clues on how to do this?
Now provide the response and nothing else.
| When you use df.apply() , each row of your DataFrame will be passed to your lambda function as a pandas Series. The frame's columns will then be the index of the series and you can access values using series[label] . So this should work: df['D'] = (df.apply(lambda x: myfunc(x[colNames[0]], x[colNames[1]]), axis=1)) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40506390', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6056160/']} | jdg_85360 |
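To see this end to end, here is a small self-contained run of the fixed call, using the same toy frame as in the question:

```python
import numpy as np
import pandas as pd

def myfunc(x, y):
    return x + y

colNames = ['A', 'B']
data = np.array([np.arange(10)] * 2).T
df = pd.DataFrame(data, index=range(0, 10), columns=colNames)

# Each row arrives in the lambda as a Series indexed by the column labels,
# so label lookup x[colNames[0]] works where attribute-style access
# x.colNames[0] cannot.
df['D'] = df.apply(lambda x: myfunc(x[colNames[0]], x[colNames[1]]), axis=1)
print(df['D'].tolist())  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```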
stackexchange | llm_judgeable_groundtruth_similarity | 61450 |
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I think my digital PLL is almost ready, but I've encountered this problem and I don't know what might be causing it, so I'd appreciate it if you could help. I'm using the DPLL in a Costas loop. It seems to work fine, but whenever the message crosses zero, lock is lost. After that, it ends up locking again. You can see here: Input to the Costas loop is: $$x(t) = m(t)\cos(2\pi f_ct+\theta_{in})$$ $$m(t) = \cos(2\pi f_mt)$$ where $f_m \ll f_c$. Not sure if this is a known phenomenon (it probably is), but I haven't been able to find anything related. It'd be nice if you could point out what's going on and/or how to fix the temporary loss of lock. One way I've managed to get it "working" consisted of squaring the message signal (although phase error rises a bit when the message goes near zero), but I don't wish to square it, so that wouldn't be of use to me. This is the Costas loop I'm implementing for this test:
Now provide the response and nothing else.
| You are having trouble because that's not a Costas loop. A Costas loop uses demodulated data in some form to change the phase that's expected from the signal. You're just taking the I/Q demodulated signal and applying it to the atan2 function; that makes a sort of linearized extended phase detector, but without determining that the phase should have flipped by 180 degrees, your loop is naturally trying to follow the phase that's actually coming in. You could make your loop into a simple Costas loop by changing the atan2 function into a multiply, and following it by a proper loop filter. | {} | {'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/61450', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/43958/']} | jdg_85361 |
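To illustrate the difference, here is a minimal simulation of a classic Costas loop on an unmodulated carrier: the phase detector is the product of the lowpass-filtered I and Q arms (not atan2 of the raw mixer outputs), followed by a simple proportional-plus-integral loop filter. All signal parameters and loop gains below are arbitrary illustrative choices, not values from the question:

```python
import numpy as np

fs = 8000.0          # sample rate (Hz)
fc = 800.0           # carrier frequency (Hz)
phase_in = 0.5       # unknown carrier phase to recover (rad)
n = 20000

t = np.arange(n) / fs
x = np.cos(2 * np.pi * fc * t + phase_in)   # received carrier

nco_phase = 0.0
nco_freq = 2 * np.pi * fc / fs              # start at the nominal frequency
alpha, beta = 0.02, 0.0002                  # proportional / integral gains
lp = 0.01                                   # one-pole lowpass coefficient
fi = fq = 0.0

for k in range(n):
    i = x[k] * np.cos(nco_phase)            # in-phase arm
    q = x[k] * -np.sin(nco_phase)           # quadrature arm
    fi += lp * (i - fi)                     # lowpass both arms
    fq += lp * (q - fq)
    err = fi * fq                           # Costas detector, ~ sin(2*dphi)/8
    nco_freq += beta * err                  # loop filter: integrator ...
    nco_phase += nco_freq + alpha * err     # ... plus proportional term

# Residual phase error, modulo the 180-degree ambiguity inherent to a Costas loop
dphi = (2 * np.pi * fc * n / fs + phase_in) - nco_phase
residual = abs(dphi) % np.pi
residual = min(residual, np.pi - residual)
print(residual)   # small once the loop has locked
```

The product detector has lock points 180 degrees apart, which is exactly the sign ambiguity a Costas loop tolerates for BPSK-style signals.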
stackexchange | llm_judgeable_groundtruth_similarity | 3227 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for an algorithm to generate an array of N random numbers, such that the sum of the N numbers is 1, and all numbers lie within 0 and 1. For example, N=3, the random point (x, y, z) should lie within the triangle:

x + y + z = 1
0 < x < 1
0 < y < 1
0 < z < 1

Ideally I want each point within the area to have equal probability. If it's too hard, I can drop the requirement. Thanks.
Now provide the response and nothing else.
| Let us first assume that you want to sample within

x + y + z = 1
0 ≤ x ≤ 1
0 ≤ y ≤ 1
0 ≤ z ≤ 1

This makes hardly any difference, since a sampled point will lie in the open region of your original constraints with probability 1. Now you are left with sampling a point from a simplex. In the 3d example you get a 2d simplex (a triangle) realized in 3d. How to pick a point uniformly at random was discussed in this blog post (see the comments). For your problem it means that you take $n-1$ random numbers from the interval $(0,1)$, then you add a $0$ and a $1$ to get a list of $n+1$ numbers. You sort the list and then record the differences between consecutive elements. This gives you a list of $n$ numbers that sum up to $1$. Moreover, this sampling is uniform. This idea can be found in Donald B. Rubin, The Bayesian bootstrap, Ann. Statist. 9, 1981, 130-134. For example ($n=4$): if you draw the three random numbers 0.4, 0.2, 0.1, you obtain the sorted sequence 0, 0.1, 0.2, 0.4, 1, and this gives the differences 0.1, 0.1, 0.2, 0.6, which by construction sum up to 1. Another approach is the following: first sample from the hypercube (that is, forget about x+y+z=1) and then normalize the sample point. The normalization is a projection from the $d$-hypercube to the $(d-1)$-simplex. It should be intuitively clear that points at the center of the simplex have more "pre-image points" than points near the boundary. Hence, sampling uniformly from the hypercube won't give you a uniform sampling in the simplex. However, if you sample from the hypercube with an appropriate exponential distribution, this effect cancels out. The Figure gives you an idea of how both methods sample. However, I prefer the "sorting" method due to its simple form. It's also easier to implement. | {} | {'log_upvote_score': 6, 'links': ['https://cs.stackexchange.com/questions/3227', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/2553/']} | jdg_85362
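The sorting method described above is only a few lines of code; here is a sketch, where n is the number of coordinates:

```python
import random

def sample_simplex(n):
    """Uniform sample of n numbers in [0, 1] summing to 1 (sorting method)."""
    # n - 1 random cut points, plus the fixed endpoints 0 and 1.
    cuts = sorted([0.0, 1.0] + [random.random() for _ in range(n - 1)])
    # Differences of consecutive cut points give the n coordinates.
    return [b - a for a, b in zip(cuts, cuts[1:])]

point = sample_simplex(3)
print(point, sum(point))  # three numbers in [0, 1] summing to 1
```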
stackexchange | llm_judgeable_groundtruth_similarity | 9945363 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
See http://jsfiddle.net/aJ333/1/ in Chrome and then in either Firefox or Internet Explorer. The image is originally 120px, and I'm scaling down to 28px, but it looks bad pretty much no matter what you scale it down to. The image is a PNG and it has an alpha channel (transparency). Here's the relevant code:

HTML:

<a href="http://tinypic.com?ref=2z5jbtg" target="_blank">
  <img src="http://i44.tinypic.com/2z5jbtg.png" border="0" alt="Image and video hosting by TinyPic">
</a>

CSS:

a {
  width: 28px;
  height: 28px;
  display: block;
}
img {
  max-width: 100%;
  max-height: 100%;
  image-rendering: -moz-crisp-edges;
  -ms-interpolation-mode: bicubic;
}

The image-rendering and -ms-interpolation-mode lines of CSS didn't seem to do anything, but I found them online while doing some research on the problem.
Now provide the response and nothing else.
| It seems that you are right. No option scales the image better: http://www.maxrev.de/html/image-scaling.html I've tested FF14, IE9, OP12 and GC21. Only GC has better scaling, which can be deactivated through image-rendering: -webkit-optimize-contrast . All other browsers have no/poor scaling. Screenshot of the different output: http://www.maxrev.de/files/2012/08/screenshot_interpolation_jquery_animate.png

Update 2017

Meanwhile some more browsers support smooth scaling:

ME38 (Microsoft Edge) has good scaling. It can't be disabled, and it works for JPEG and PNG, but not for GIF.

FF51 (regarding @karthik's comment, since FF21) has good scaling that can be disabled through the following settings:

image-rendering: optimizeQuality
image-rendering: optimizeSpeed
image-rendering: -moz-crisp-edges

Note: According to MDN, the optimizeQuality setting is a synonym for auto (but auto does not disable smooth scaling): "The values optimizeQuality and optimizeSpeed present in early draft (and coming from its SVG counterpart) are defined as synonyms for the auto value."

OP43 behaves like GC (not surprising, as it is based on Chromium since 2013) and it's still this option that disables smooth scaling:

image-rendering: -webkit-optimize-contrast

No support in IE9-IE11. The -ms-interpolation-mode setting worked only in IE6-IE8, but was removed in IE9.

P.S. Smooth scaling is done by default. This means no image-rendering option is needed! | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9945363', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/710360/']} | jdg_85363
stackexchange | llm_judgeable_groundtruth_similarity | 63775 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $S=Spec(R)$ where $R$ is a Henselian local ring with fraction field $K$. Let $G$ and $G'$ be finite, flat group schemes of odd order over $S$ with isomorphic generic fibers (over $Spec(K)$). Does this isomorphism extend to one over $S$? When $R=\mathbb{Z}_p$, the answer is yes following Fontaine's discussion in his 1975 paper 'Groupes finis commutatifs sur les vecteurs de Witt' (where he works over the Witt vectors of a perfect field; Mazur uses this result to prove Theorem I.4 in his Eisenstein Ideal paper). What happens when $R$ is any general local ring (not necessarily Henselian)? Are there rings other than $\mathbb{Z}_p$ for which the answer to the above question is also a yes?
Now provide the response and nothing else.
| No. Here is a counterexample. Let $R=\mathbf{Z}_p[\zeta_p]$, where $p$ is a prime number and $\zeta_p$ is a primitive $p$-th root of unity. Let $G=\mu_p=\mathrm{Spec}(R[x]/(x^p-1))$ and let $G'$ be the constant group scheme $\mathbf{Z}/p\mathbf{Z}$. Then $G$ and $G'$ are not isomorphic because the special fiber of $G$ is connected but that of $G'$ isn't. But there is an isomorphism $G'\to G$ over the fraction field of $R$ given by $1\mapsto \zeta_p$. A lot more is known about these things, but not so much more by me. There are a few people around here who could probably say a lot more. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/63775', 'https://mathoverflow.net', 'https://mathoverflow.net/users/13628/']} | jdg_85364 |
stackexchange | llm_judgeable_groundtruth_similarity | 3325601 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a custom ContentProvider I use to store fixes obtained from the GPS into a SQLite database. This class overrides ContentProvider methods (delete, insert, query...), but notice I have also included a method to query the last fix.

public class FixesContentProvider extends ContentProvider {
    @Override
    public int delete(...) { ... }
    ...
    @Override
    public Cursor query(...) { ... }

    public Cursor getLastFix(Uri uri) { ... }
}

It is pretty easy to query data from other services or activities by calling getContentResolver().

ContentResolver cR = getContentResolver();
Cursor fixes = cR.query(CONTENT_URI, ...);

But I am not able to call my custom method ( FixesContentProvider#getLastFix ). How could I call custom methods defined in a ContentProvider? Maybe I don't understand how ContentProviders/ContentResolvers work in this case, but the official Android documentation provided by Google is not very clear in this sense.
Now provide the response and nothing else.
| But I am not able to call my custom method ( FixesContentProvider#getLastFix ).

That's completely true. In this very case, though, you can take advantage of one fact: both the query and getLastFix methods return a Cursor object. So, what I would do is add some value to the URI that you pass to query and then decide what you actually want to do: query or getLastFix. For instance:

public Cursor query(Uri uri, ...) {
    if ( uri says it has to return the last fix )
        return getLastFix(uri);
    // otherwise, do normal operations...
    return cursor;
}

In fact, the query method is supposed to return a result, given a Uri. So, if you need to retrieve the last fix, you have to specify in the URI that you want the last fix. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3325601', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12388/']} | jdg_85365
stackexchange | llm_judgeable_groundtruth_similarity | 75606 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In almost all textbooks on Homological Algebra, when they talk about injective modules, they do not give many examples; the usual ones are $\mathbb{Q}/\mathbb{Z}$ and $\mathbb{R}/\mathbb{Z}$. Is there a general way to construct an injective module (other than using the Baer criterion)? The other question is: if a ring $R$ is hereditary, then every submodule of a projective left $R$-module $M$ is projective, and every quotient module of an injective left $R$-module is injective too. What about a submodule of an injective left $R$-module? Is it also injective? Please help me to find a counterexample (if it exists). Thanks.
Now provide the response and nothing else.
| I think you will have more luck learning nontrivial facts about and examples of injective modules in (non/)commutative algebra texts rather than homological algebra texts. In homological algebra texts, injective modules (and more generally injective objects in an abelian category) play an important technical role but are not really studied for their own sake. (Several times on this site and MO I have quipped that although I know how to prove that the category of sheaves on a topological space has enough injectives, nevertheless I have never met a nontrivial injective sheaf.) My own lecture notes / proto-text on commutative algebra has a decently substantial section -- $\S 3.6$ -- on injective modules. If you need to know a lot about injective modules you will need to look elsewhere, but it sounds like you are frustrated by the fact that you have been told almost nothing about them, and for that my notes can serve as a remedy. In particular I discuss:

Suppose $R$ is a domain and $M$ is an $R$-module. Then:
a) If $M$ is injective, then $M$ is divisible. In particular, $\mathbb{Z}$ is not an injective $\mathbb{Z}$-module.
b) If $M$ is divisible and torsionfree, then $M$ is injective. In particular $\mathbb{Q}$ is an injective $\mathbb{Z}$-module.
c) If $R$ is a PID then every divisible module is injective.

Note also that every PID -- and in particular $\mathbb{Z}$ -- is a hereditary ring, so this answers your second question: the submodule $\mathbb{Z}$ of $\mathbb{Q}$ is a noninjective submodule of an injective module over a hereditary ring.

(Injective Production Lemma) If $M$ is a flat $R$-module and $N$ is an injective $R$-module, then $\operatorname{Hom}_R(M,N)$ is injective. This is used to prove that the category of modules over any ring has "enough injectives". | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/75606', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/18188/']} | jdg_85366
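For instance, part (b) has a short proof via Baer's criterion; the following sketch (my wording, not quoted from the notes) fills in the step:

```latex
\textbf{Claim (b).} Let $R$ be a domain and $M$ a divisible, torsionfree
$R$-module. By Baer's criterion it suffices to extend any homomorphism
$f \colon I \to M$, where $I \subseteq R$ is a nonzero ideal, to all of $R$.

Pick $0 \neq a \in I$. By divisibility choose $m \in M$ with $f(a) = am$.
For every $b \in I$,
\[
  a f(b) = f(ab) = b f(a) = b(am) = a(bm),
\]
so $a\,(f(b) - bm) = 0$; since $M$ is torsionfree and $a \neq 0$, we get
$f(b) = bm$. Thus $x \mapsto xm$ extends $f$, and $M$ is injective.
```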
stackexchange | llm_judgeable_groundtruth_similarity | 31211842 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Eclipse, while coding in Java, pressing Ctrl + Shift + O imports all the classes automatically. In NetBeans, this is done with Ctrl + Shift + I . Is there any way to do this in IntelliJ IDEA? I searched for an equivalent shortcut in Google, on StackOverflow, in the IntelliJ IDEA configuration, and on the official IntelliJ website: Keyboard Shortcuts You Cannot Miss
Now provide the response and nothing else.
| IntelliJ IDEA does not have an action to add imports. Rather, it has the ability to do so as you type. If you enable the "Add unambiguous imports on the fly" option in Settings > Editor > General > Auto Import , IntelliJ IDEA will add them as you type without the need for any shortcuts. You can also add classes and packages to exclude from auto importing, to make a class you use heavily, that clashes with other classes of the same name, unambiguous. For classes that are ambiguous (or if you prefer to have the "Add unambiguous imports on the fly" option turned off), just type the name of the class (just the name is OK, no need to fully qualify). Use code completion and select the particular class you want: Notice the fully qualified names to the right. When I select the one I want and hit enter, IDEA will automatically add the import statement. This works the same if I were typing the name of a constructor. For static methods, you can even just keep typing the method you want. In the following screenshot, no "StringUtils" class is imported yet. Alternatively, type the class name and then hit Alt + Enter or ⌥ + Enter to "Show intention actions and quick-fixes" and then select the import option. Although I've never used it, I think the Eclipse Code Formatter third party plug-in will do what you want. It lists "emulates Eclipse's imports optimizing" as a feature. See its instructions for more information. But in the end, I suspect you'll find the built-in IDEA features work fine once you get used to their paradigm. In general, IDEA uses a "develop by intentions" concept. So rather than interrupting my development work to add an import statement, I just type the class I want (my intention) and IDEA automatically adds the import statement for the class for me. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/31211842', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2226595/']} | jdg_85367
stackexchange | llm_judgeable_groundtruth_similarity | 30850158 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am going through an HTML structure to get a value I need. It appears to be a "NamedNodeMap" and I have gotten as far as this call:

ui.handle.attributes.getNamedItem("data-value")

which when printed in Chrome's console shows this:

data-value="12:00 AM" ->"12:00 AM"

I need to get the "12:00 AM" as a value, string, anything will work. I just don't know how to get it. Adding .value to the end of my call does not work; I also tried .text and many other things. Here's what the ui object looks like before I expand handle and attributes: and then after: HTML markup:
Now provide the response and nothing else.
| .attributes appears to be a "NamedNodeMap" and I have gotten as far as calling .attributes.getNamedItem("data-value") .attributes is a NamedNodeMap of Attr ibute nodes , which is a pretty deprecated interface. If you really want to use this, the .value , .nodeValue and .textContent properties of the attribute node should yield the text value you're after: ui.handle.attributes.getNamedItem("data-value").value // "12:00 AM" However, the standard approach would be to just use the getAttribute method of your element: ui.handle.getAttribute("data-value") // "12:00 AM" With HTML5, there is even the .dataset DOMStringMap specifically designed to access data attributes: ui.handle.dataset["value"] // "12:00 AM" | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30850158', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4113753/']} | jdg_85368 |
stackexchange | llm_judgeable_groundtruth_similarity | 26962163 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a game and currently working on an undo move. This should be very simple, but I am getting this error and can't seem to figure it out. Here is my function...

bb_undo(BBoard board)
{
    board->score = board->previousScore;
    board->boardDim = board->previousBoard;
}

And here is the board structure...

struct bboard {
    char boardDim[MAX_ROWS][MAX_COLS];
    int score;
    int rows;
    int cols;
    char previousBoard[MAX_ROWS][MAX_COLS];
    int previousScore;
};

I should also probably add that bboard is a pointer. Anybody have any ideas? Thanks.
Now provide the response and nothing else.
| After reading the answer above, and some other questions and sites ( 1 , 2 , 3 , 4 , 5 ), I put this together for a gaussian kernel in svm.SVC() . Call svm.SVC() with kernel="precomputed" . Then compute a Gram Matrix a.k.a. Kernel Matrix (often abbreviated as K). Then use this Gram Matrix as the first argument ( i.e. X) to svm.SVC().fit() : I start with the following code:

C = 0.1
model = svmTrain(X, y, C, "gaussian")

that calls sklearn.svm.SVC() in svmTrain() , and then sklearn.svm.SVC().fit() :

from sklearn import svm

if kernelFunction == "gaussian":
    clf = svm.SVC(C=C, kernel="precomputed")
    return clf.fit(gaussianKernelGramMatrix(X, X), y)

the Gram Matrix computation - used as a parameter to sklearn.svm.SVC().fit() - is done in gaussianKernelGramMatrix() :

import numpy as np

def gaussianKernelGramMatrix(X1, X2, K_function=gaussianKernel):
    """(Pre)calculates Gram Matrix K"""
    gram_matrix = np.zeros((X1.shape[0], X2.shape[0]))
    for i, x1 in enumerate(X1):
        for j, x2 in enumerate(X2):
            gram_matrix[i, j] = K_function(x1, x2)
    return gram_matrix

which uses gaussianKernel() to get a radial basis function kernel between x1 and x2 ( a measure of similarity based on a gaussian distribution centered on x1 with sigma=0.1 ):

def gaussianKernel(x1, x2, sigma=0.1):
    # Ensure that x1 and x2 are column vectors
    x1 = x1.flatten()
    x2 = x2.flatten()
    sim = np.exp(- np.sum( np.power((x1 - x2), 2) ) / float( 2*(sigma**2) ))
    return sim

Then, once the model is trained with this custom kernel, we predict with "the [custom] kernel between the test data and the training data":

predictions = model.predict( gaussianKernelGramMatrix(Xval, X) )

In short, to use a custom SVM gaussian kernel, you can use this snippet:

import numpy as np
from sklearn import svm

def gaussianKernelGramMatrixFull(X1, X2, sigma=0.1):
    """(Pre)calculates Gram Matrix K"""
    gram_matrix = np.zeros((X1.shape[0], X2.shape[0]))
    for i, x1 in enumerate(X1):
        for j, x2 in enumerate(X2):
            x1 = x1.flatten()
            x2 = x2.flatten()
            gram_matrix[i, j] = np.exp(- np.sum( np.power((x1 - x2), 2) ) / float( 2*(sigma**2) ))
    return gram_matrix

X = ...
y = ...
Xval = ...

C = 0.1
clf = svm.SVC(C=C, kernel="precomputed")
model = clf.fit( gaussianKernelGramMatrixFull(X, X), y )
p = model.predict( gaussianKernelGramMatrixFull(Xval, X) ) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26962163', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3829538/']} | jdg_85369 |
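The same Gram-matrix idea can be sketched without NumPy or scikit-learn — a minimal, dependency-free illustration of what a precomputed RBF kernel matrix contains (function names here are mine, not from the answer above):

```python
import math

def gaussian_kernel(x1, x2, sigma=0.1):
    # RBF similarity: exp(-||x1 - x2||^2 / (2 * sigma^2))
    sq_dist = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

def gram_matrix(X1, X2, sigma=0.1):
    # K[i][j] = k(X1[i], X2[j]) -- the shape svm.SVC(kernel="precomputed")
    # expects: rows indexed by the first argument, columns by the second.
    return [[gaussian_kernel(a, b, sigma) for b in X2] for a in X1]
```

For training one would pass gram_matrix(X, X) to fit(), and for prediction gram_matrix(Xval, X) — rows for the new samples, columns for the training samples, matching the answer's usage.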
stackexchange | llm_judgeable_groundtruth_similarity | 8267181 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
One of the kool things about R is if I type the function name I get to see the implementation. But this one is confusing me, recursively:

> library(xts)
> align.time
function (x, ...)
{
    UseMethod("align.time")
}
<environment: namespace:xts>

x is an XTS object, so doesn't that mean it will call the XTS align.time method... but that is what I'm looking at! (Typing xts::align.time gives exactly the same response.)
Now provide the response and nothing else.
| The short answer is that you are looking for the function xts:::align.time.xts . The longer answer is that you can find which methods exist for align.time by calling methods :

> methods(align.time)
[1] align.time.POSIXct* align.time.POSIXlt* align.time.xts*

   Non-visible functions are asterisked

This tells you that there is a method align.time.xts that is not exported from the namespace. At this point you can probably guess that it can be found in package xts , but you can confirm that with getAnywhere :

> getAnywhere("align.time.xts")
A single object matching 'align.time.xts' was found
It was found in the following places
  registered S3 method for align.time from namespace xts
  namespace:xts
with value
function (x, n = 60, ...)
{
    if (n <= 0) stop("'n' must be positive")
    .xts(x, .index(x) + (n - .index(x)%%n), tzone = indexTZ(x),
        tclass = indexClass(x))
}
<environment: namespace:xts>

You can, of course, read the source directly, but since the function is not exported, you need to use package:::function (i.e. three colons):

> xts:::align.time.xts
function (x, n = 60, ...)
{
    if (n <= 0) stop("'n' must be positive")
    .xts(x, .index(x) + (n - .index(x)%%n), tzone = indexTZ(x),
        tclass = indexClass(x))
}
<environment: namespace:xts> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8267181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/841830/']} | jdg_85370 |
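As a side note in Python terms (not part of the R answer): the same "which implementation will dispatch actually pick, and where does it live" question can be answered by walking a class's method resolution order. A hypothetical sketch, with class names invented for illustration:

```python
def find_implementer(obj, method_name):
    # Walk the MRO, much like methods()/getAnywhere() reveal which
    # S3 method R's UseMethod() will dispatch to for a given object.
    for cls in type(obj).__mro__:
        if method_name in vars(cls):
            return cls
    return None

class TimeSeries:
    def align_time(self):
        return "generic"

class Xts(TimeSeries):
    def align_time(self):
        return "xts-specific"
```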
stackexchange | llm_judgeable_groundtruth_similarity | 15747352 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As you can see from the code, the user first enters a name and message and clicks send. The problem is the text area. When I make line breaks, there still aren't any line breaks in the e-mail I get.

<?php
$to = "[email protected]";
$subject = "Message from contact form";

//begin of HTML message
$message = <<<EOF
<html>
<body>
<b>Name:</b><br>
{$_POST['name']}<br>
<b>Message:</b><br>
{$_POST['message']}
</body>
</html>
EOF;
//end of message

$header = "MIME-Version: 1.0" . "\r\n";
$header .= "Content-type: text/html; charset=iso-8859-1" . "\r\n";
$header .= "from:[email protected]";

mail($to, $subject, $message, $header);
echo "Email sent!";
?>

<form action="sendmail.php" method="post">
Name: <br>
<input type="text" name="name" style="width: 200px;"> <br><br>
Message: <br>
<textarea name="message" style="width: 200px; height: 100px;"></textarea> <br><br>
<input type="submit" value="Send">
</form>
Now provide the response and nothing else.
| That's how HTML is meant to be. No matter how many times you hit Enter in your HTML source, the browser will only show line feeds when there's an HTML tag that tells it to do so. You can insert such tags with nl2br() . You also want to escape your raw input with htmlspecialchars() .

$name = htmlspecialchars($_POST['name']);
$message = nl2br(htmlspecialchars($_POST['message']));

//begin of HTML message
$message = <<<EOF
<html>
<body>
<b>Name:</b><br>
{$name}<br>
<b>Message:</b><br>
{$message}
</body>
</html>
EOF; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15747352', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2232809/']} | jdg_85371 |
stackexchange | llm_judgeable_groundtruth_similarity | 67672 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using a third party proprietary application on CentOS 5.4, which is known to malfunction on pristine 5.9 or later. I would like to incrementally upgrade to the latest CentOS version where that application works. Since yum update brings me to 5.9 right away, that's not an option. I thought of burning all four Install-DVDs 5.5 through 5.8 and installing each to find out the hard way where the breakage sets in, but maybe there's an easier way frobbing the /etc/yum.conf.d files to go from 5.4 to 5.5 etc. Can anyone provide some guidance? (Sadly, it is not an option to ask the vendor of that application for a fix, since it was acquired by a big blue company and terminated a little while after that.) Edit : Further investigation revealed it is an incompatibility in the GNU C library. With glibc-2.5-58 the app runs fine on CentOS 5.9, with stock glibc-2.5-107 it hangs. I now point LD_LIBRARY_PATH at the older glibc when running the app.
Now provide the response and nothing else.
| The following is somewhat simpler, and has the added advantage of ignoring numbers in the command names:

pstree -p $pid | grep -o '([0-9]\+)' | grep -o '[0-9]\+'

Or with Perl:

pstree -p $pid | perl -ne 'print "$1\n" while /\((\d+)\)/g'

We're looking for numbers within parentheses so that we don't, for example, give 2 as a child process when we run across gif2png(3012) . But if the command name contains a parenthesized number, all bets are off. There's only so far text processing can take you. So I also think that process groups are the way to go. If you'd like to have a process run in its own process group, you can use the 'pgrphack' tool from the Debian package 'daemontools':

pgrphack my_command args

Or you could again turn to Perl:

perl -e 'setpgid or die; exec { $ARGV[0] } @ARGV;' my_command args

The only caveat here is that process groups do not nest, so if some process is creating its own process groups, its subprocesses will no longer be in the group that you created. | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/67672', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/7107/']} | jdg_85372 |
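The number-in-parentheses extraction that the grep/Perl one-liners perform can be mirrored in Python, e.g. for post-processing captured pstree output (the sample string below is made up for illustration):

```python
import re

def child_pids(pstree_output):
    # Only digits inside parentheses are PIDs; bare digits embedded in
    # command names (e.g. the 2 in gif2png) must not be picked up.
    return [int(m) for m in re.findall(r"\((\d+)\)", pstree_output)]
```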
stackexchange | llm_judgeable_groundtruth_similarity | 519789 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The /usr/bin/printf util argument list length is limited to the shell's maximum command line length, ( i.e. getconf ARG_MAX , on my system that'd be 2097152 ); example: # try using a list that's way too long/usr/bin/printf '%s\n' $(seq $(( $(getconf ARG_MAX) * 2 ))) | tail -1 Output: bash: /usr/bin/printf: Argument list too long Today I'm informed that shell builtin printf s don't have that limit; test: printf '%s\n' $(seq $(( $(getconf ARG_MAX) * 2 ))) | tail -1 Output: 4194304 Questions: A skim of man bash dash doesn't seem to say much about this advantage of builtin printf . Where is it documented? Do builtin printf s ( e.g. bash ) have an argument list maximum length in chars, and if so, what is that length?
Now provide the response and nothing else.
| It's not really the manual's job to advocate the use of any particular utility. It should primarily describe the available built-in utilities. The advantages of using a built-in utility over an external one is primarily speed and the availability of extended features ( printf in bash can, for example, write directly into a variable with -v varname , which no external printf could ever do). Executing external utilities is slow in comparison to executing a built-in utility, especially if done often in e.g. a loop, and, as you have noticed, they also allow for longer argument lists (this is not something that only the built-in printf allows, but all built-in utilities). The length of the argument list to the built-in printf utility in bash is limited by the resource restrictions on the bash process itself. On some systems, this may even mean that you could use most of the available RAM to construct its command line argument list. The documentation from where you would find these various bits of information is The bash source code , where you will see that the argument list for printf is a dynamically allocated linked list, and also that it's not using execve() to run printf (which is what limits the length of an argument list when running an external utility). An example of a shell where printf in not a built-in utility is the ksh shell of OpenBSD. The utility can also be disabled in bash using enable -n printf . | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/519789', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/165517/']} | jdg_85373 |
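When an external utility must be used anyway, the classic workaround for the execve() argument-list limit is to batch arguments, xargs-style. A hypothetical sketch of that batching logic (this is my illustration of the idea, not something from the answer):

```python
def batch_args(args, max_chars):
    # Greedily pack arguments into chunks whose combined length
    # (including separating spaces) stays within max_chars, the way
    # xargs splits work to avoid "Argument list too long".
    batches, current, length = [], [], 0
    for arg in args:
        extra = len(arg) + (1 if current else 0)
        if current and length + extra > max_chars:
            batches.append(current)
            current, length = [], 0
            extra = len(arg)
        current.append(arg)
        length += extra
    if current:
        batches.append(current)
    return batches
```

In real use, max_chars would come from something like getconf ARG_MAX (minus environment overhead), and each chunk would be one invocation of the external command.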
stackexchange | llm_judgeable_groundtruth_similarity | 39804667 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Being new to Prolog, I came across a very interesting discussion that happened in late 2012.What I noticed was that there were, at the time, two notions of'semidet' in Prolog community, namely: A computation that succeeds at most once. A computation that, upon success, leaves no choice points open. Clearly the second one implies the first, but not vice versa. Reading the thread, I understood that the first one wasDr.Neumerkel's notion, and the second was Drs.Wielemaker, O'Keefe, and others'. Googling around, I've seen some database researchers mean by'semi-deterministic' a query that would answer at most one equivalenceclass, nearer to the first notion. Dr. Neumerkel says (refering to the predicate called call_semidet there): The implementation might be improved, but prior to optimizing andbechnmarking the actual meaning needs to be settled. So, has the meaning been settled? How about 'det'? It seems customary to classify predicates according to their numberof solutions. According to the SWI-Prolog's definition (see below), a 'det' can do fullynondeterministic (say parallel) computations, provided it commits toa solution which is now guaranteed to exist. So, by analogy I guess theremay be two notions of 'det': A computation which succeeds exactly once. A computation which succeeds exactly once and which, upon success, leavesno choice points. The first one is more logical but undecidable in general until the end of thecomputation. The second is easily decidable once a solution is found, butprocedural and its meaning depends on the particular search strategy Prologemploys, i.e. the depth first search. I wonder if there is not a community's consensus yet? Why not namethese two different concepts differently? Here's the excerpt from the SWI-Prolog's page above: det [determinism] Short for deterministic. deterministic A predicate is deterministic if it succeeds exactly one timewithout leaving a choice point. semidet Shorthand for semi deterministic. 
semi deterministic A predicate that is semi deterministic either fails or succeedsexactly once without a choice point. See also deterministic.
Now provide the response and nothing else.
| That's a really excellent question! From the Mercury determinism categories where this is also explained quite authoritatively: 6.1 Determinism categories For each mode of a predicate or function, we categorise that mode according to how many times it can succeed , and whether or not it can fail before producing its first solution. If all possible calls to a particular mode of a predicate or function which return to the caller (calls which terminate, do not throw an exception and do not cause a fatal runtime error) have exactly one solution, then that mode is deterministic (det); either have no solutions or have one solution, then that mode is semideterministic (semidet) ; have at least one solution but may have more, then that mode is multisolution (multi); have zero or more solutions, then that mode is nondeterministic (nondet); have exactly zero solutions, i.e, fail without producing a solution, then that mode has a determinism of failure (failure). ( emphases mine ) Note that whether or not a choice point is left is not even mentioned in this definition, nor in the whole section. Mercury is not the same as Prolog, but the point is that this definition is in principle 100% applicable also to Prolog. Clearly, it then corresponds to your variant (1). In my opinion, this is right in this way: Whether or not a choice point is left is rather immaterial, and depends on—for example—how powerful and versatile your system's argument indexing is. A good indexing scheme may prevent the creation of choice points that other systems introduce. A notion that depends on particular idiosyncrasies of a specific Prolog system and may break from one version to the next (with the introduction of better argument indexing etc.) is not very robust, and not of much value. 
It is true that we often say "the predicate is deterministic" when we mean: "the predicate is deterministic and no choice points are left", but even so the main point is almost always also in such cases that the predicate succeeds exactly once . Note that "deterministic" is a rather overloaded adjective with other meanings too. In the SWI documentation, this ambiguity carries over to semi deterministic. However, even SWI back-pedals a bit from this rather implementation-oriented definition in other places : 2.2.2 Testing semi-deterministic predicates Semi-deterministic predicates are predicates that either fail or succeed exactly once and, for well behaved predicates , leave no choicepoints. So a semi-deterministic predicate that is not well behaved (?) can also leave choice points... In the discussion, note especially the following: Ulrich is here using the weaker and more robust notion to obtain a predicate that is applicable for both definitions . So, no matter which variant you pick, call_semidet/1 is useful! From this, the meaning of the quote becomes clearer. When Ulrich says: (The implementation might be improved, but prior to optimizing and bechnmarking the actual meaning needs to be settled.) it is evidently not meant that the meaning of "semidet" must be settled between the two variants, but that it should first be clear what call_semidet/1 actually guarantees: It is a lot more useful than what people thought Ulrich posted. For example, the definition that Jan gives: call_semidet(Goal) :- call_cleanup(Goal, Det=true), ( Det == true -> true ; throw(error(mode_error(semidet,Goal),_)) ). works only with your second definition of "semidet". | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39804667', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3117573/']} | jdg_85374 |
stackexchange | llm_judgeable_groundtruth_similarity | 29808718 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is driving me nuts... I am serializing a List to JSON using Json.net. I expect this JSON: { "fieldsets": [ { "properties": [ { "alias": "date", "value": "2014-02-12T00:00:00" }, { "alias": "time", "value": null } ], "alias": "eventDates", "disabled": false } ]} But instead I get this: { "fieldsets": [ { "properties": [ { "values": [ { "alias": "date", "value": "2014-07-13T00:00:00" }, { "alias": "time", "value": "Registration begins at 8:00 AM; walk begins at 9:00 AM" } ] } ], "alias": "eventDates", "disabled": false } ]} The "values" collection I'd like as just a JSON array, but I can't for the life of me figure out how to get it to do this. I have a property on my "properties" objects called "values", so I understand why it's doing it, but I need just the straight array, not a JSON object.
Now provide the response and nothing else.
| For that response, you need this class structure

public class Property
{
    [JsonProperty("alias")]
    public string Alias { get; set; }

    [JsonProperty("value")]
    public string Value { get; set; }
}

public class Fieldset
{
    [JsonProperty("properties")]
    public Property[] Properties { get; set; }

    [JsonProperty("alias")]
    public string Alias { get; set; }

    [JsonProperty("disabled")]
    public bool Disabled { get; set; }
}

public class Response
{
    [JsonProperty("fieldsets")]
    public Fieldset[] Fieldsets { get; set; }
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29808718', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/167878/']} | jdg_85375 |
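For comparison, the desired JSON shape from the question — "properties" as a plain array of {alias, value} objects, with no wrapping "values" object — is easy to sanity-check with any serializer. A quick Python sketch of the target structure:

```python
import json

# The shape the question wants: properties is a bare array,
# with no intermediate "values" wrapper object.
response = {
    "fieldsets": [
        {
            "properties": [
                {"alias": "date", "value": "2014-02-12T00:00:00"},
                {"alias": "time", "value": None},
            ],
            "alias": "eventDates",
            "disabled": False,
        }
    ]
}

encoded = json.dumps(response)
```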
stackexchange | llm_judgeable_groundtruth_similarity | 3258545 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've looked around, I need an example for Singleton class that works across 2 or more C++/CLI files. How do you declare a singleton in C++/CLI, not C# ? How do you share that singleton across two or more C++/CLI files? I keep getting Variable redefinitions when I try to share that singleton.
Now provide the response and nothing else.
| This is for C++/CLI, not ".NET Managed Extensions for C++" aka C++.NET. Don't use the Managed Extensions (Visual Studio 2002-2003), they're buggy.

ref class Singleton
{
private:
    Singleton() {}
    Singleton(const Singleton%)
    {
        throw gcnew System::InvalidOperationException("singleton cannot be copy-constructed");
    }
    static Singleton m_instance;

public:
    static property Singleton^ Instance
    {
        Singleton^ get() { return %m_instance; }
    }
};

As for "across multiple files", other compilation units in the same project use #include , other assemblies use a reference (or #import ). Then there won't be any redefinition issues. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3258545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/366611/']} | jdg_85376 |
stackexchange | llm_judgeable_groundtruth_similarity | 28614591 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to initialize a constexpr reference with no success. I tried

#include <iostream>

constexpr int& f(int& x) // can define functions returning constexpr references
{
    return x;
}

int main()
{
    constexpr int x{20};
    constexpr const int& z = x; // error here
}

but I'm getting a compile time error

error: constexpr variable 'z' must be initialized by a constant expression

Dropping the const results in

error: binding of reference to type 'int' to a value of type 'const int' drops qualifiers

even though I had the feeling that constexpr automatically implies const for variable declarations. So my questions are: Are constexpr references ever useful? (i.e., "better" than const references) If yes, how can I effectively define them? PS: I've seen a couple of questions related to mine, such as Which values can be assigned to a `constexpr` reference? , but I don't think they address my questions.
Now provide the response and nothing else.
| Are constexpr references ever useful? (i.e., "better" than const references) They are guaranteed to be initiailized before the program starts, whereas a reference to const can be initialized during dynamic initialization, after the program starts running. If yes, how can I effectively define them? A constexpr reference has to bind to a global, not a local variable (or more formally, it has to bind to something with static storage duration). A reference is conceptually equivalent to taking the address of the variable, and the address of a local variable is not a constant (even in main which can only be called once and so its local variables are only initialized once). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28614591', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3093378/']} | jdg_85377 |
stackexchange | llm_judgeable_groundtruth_similarity | 31280817 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently looking at the various parts of the CPU pipeline which can detect branch mispredictions. I have found these are: Branch Target Buffer (BPU CLEAR) Branch Address Calculator (BA CLEAR) Jump Execution Unit (not sure of the signal name here??) I know what 2 and 3 detect, but I do not understand what misprediction is detected within the BTB. The BAC detects where the BTB has erroneously predicted a branch for a non-branch instruction, where the BTB has failed to detect a branch, or the BTB has mispredicted the target address for a x86 RET instruction. The execution unit evaluates the branch and determines if it was correct. What type of misprediction is detected at the Branch Target Buffer? What exactly is detected as a misprediction here? The only clue I could find was this inside Vol 3 of the Intel Developer Manuals (the two BPU CLEAR event counters at the bottom): BPU predicted a taken branch after incorrectly assuming that it was not taken. This seems to imply the prediction is not done "synchronously", but rather "asynchronously", hence the "after incorrectly assuming"?? UPDATE: Ross, this is the CPU branch circuitry, from the original Intel Patent (hows that for "reading"?): I don't see "Branch Prediction Unit" anywhere? Would it be reasonable that somebody having read this paper would assume that "BPU" is a lazy way of grouping the BTB Circuit, BTB Cache, BAC and RSB together?? So my question still stands, which component raises the BPU CLEAR signal?
Now provide the response and nothing else.
| This is a good question! I think the confusion that it's causing is due to Intel's strange naming schemes which often overload terms standard in academia. I will try to both answer your question and also clear up the confusion I see in the comments. First of all. I agree that in standard computer science terminology a branch target buffer isn't synonymous with branch predictor. However in Intel terminology the Branch Target Buffer (BTB) [in capitals] is something specific and contains both a predictor and a Branch Target Buffer Cache (BTBC) which is just a table of branch instructions and their targets on a taken outcome. This BTBC is what most people understand as a branch target buffer [lower case]. So what is the Branch Address Calculator (BAC) and why do we need it if we have a BTB? So, you understand that modern processors are split into pipelines with multiple stages. Whether this is a simple pipelined processor or an out of order supersclar processor, the first stages are typically fetch then decode . In the fetch stage all we have is the address of the current instruction contained in the program counter (PC). We use the PC to load bytes from memory and send them to the decode stage. In most cases we increment the PC in order to load the subsequent instruction(s) but in other cases we process a control flow instruction which can modify the contents of the PC completely. The purpose of the BTB is to guess if the address in the PC points to a branch instruction, and if so, what should the next address in the PC be? That's fine, we can use a predictor for conditional branches and the BTBC for the next address. If the prediction was right, that's great! If the prediction was wrong, what then? If the BTB is the only unit we have then we would have to wait until the branch reaches the issue / execute stage of the pipeline. We would have to flush the pipeline and start again. But not every situation needs to be resolved so late. 
This is where the Branch Address Calculator (BAC) comes in. The BTB is used in the fetch stage of the pipeline but the BAC resides in the decode stage. Once the instruction we fetched is decoded, we actually have a lot more information which can be useful. The first new piece of information we know is: "is the instruction I fetched actually a branch?" In the fetch stage we have no idea and the BTB can only guess, but in the decode stage we know it for sure. It is possible that the BTB predicts a branch when in fact the instruction is not a branch; in this case the BAC will halt the fetch unit, fix the BTB, and reinitiate fetching correctly. What about branches like unconditional relative and call ? These can be validated at the decode stage. The BAC will check the BTB, see if there are entries in the BTBC and set the predictor to always predict taken. For conditional branches, the BAC cannot confirm if they are taken/not-taken yet, but it can at least validate the predicted address and correct the BTB in the event of a bad address prediction. Sometimes the BTB won't identify/predict a branch at all. The BAC needs to correct this and give the BTB new information about this instruction. Since the BAC doesn't have a conditional predictor of its own, it uses a simple mechanism (backwards branches taken, forward branches not taken). Somebody will need to confirm my understanding of these hardware counters, but I believe they mean the following: BACLEAR.CLEAR is incremented when the BTB in fetch does a badjob and the BAC in decode can fix it. BPU_CLEARS.EARLY isincremented when fetch decides (incorrectly) to load the nextinstruction before the BTB predicts that it should actually load fromthe taken path instead. This is because the BTB requires multiple cycles and fetch uses that time to speculatively load a consecutive block of instructions. This can be due to Intel using two BTBs, one quick and the other slower but more accurate. 
It takes more cycles to get a better prediction. This explains why the penalty of a detecting a misprediction in the BTB is 2/3 cycles whereas the detecting a misprediction in the BAC is 8 cycles. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31280817', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/997112/']} | jdg_85378 |
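The fallback heuristic the BAC applies to conditional branches it has no history for — "backwards branches taken, forward branches not taken" (BTFN) — is simple enough to state as code. A hypothetical sketch of that static prediction rule:

```python
def static_predict_taken(branch_addr, target_addr):
    # BTFN heuristic: a backwards target (typically a loop's back-edge
    # jumping to the top of the loop body) is predicted taken; a forward
    # target (typically skipping over an if-body) is predicted not taken.
    return target_addr < branch_addr
```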
stackexchange | llm_judgeable_groundtruth_similarity | 14742664 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the utility of having static functions in a file ? How are they different from having global functions in a file ?

static int Square(int i)
{
    return i * i;
}

vs

int Square(int i)
{
    return i * i;
}
Now provide the response and nothing else.
| What is the utility of having static functions in a file? You can use these functions to provide shared implementation logic to other functions within the same file. Various helper functions specific to a file are good candidates to be declared file-static. How are they different from having global functions in a file? They are invisible to the linker, allowing other compilation units to define functions with the same signature. Using namespaces alleviates this problem to a large degree, but file- static functions predate namespaces, because they are a feature inherited from the C programming language. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14742664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/602798/']} | jdg_85379 |
stackexchange | llm_judgeable_groundtruth_similarity | 1844874 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When I say "characteristic class of a manifold" I mean the characteristic class of the tangent bundle. I assume that all Chern/Pontrjagin classes/numbers are invariant under diffeomorphism, if they are not I will be very confused. Then I hear talk about "topological invariants of smooth manifolds". I think this is one of my difficulties. Does this means that if you have two smooth manifolds and a homeomorphism between these then the invariant is preserved and has nothing to do with the chosen smooth structure? It seems like a strange thing to say. I believe that Milnor has shown that the integer Pontrjagin classes are NOT topological invariants, and Novikov has proved it is true that the rational pontrjagin classes are. I haven't seen anything indicating either way for Chern classes. What about characteristic numbers then? The signature of a manifold is a topological invariant so certainly certain combinations of characteristic numbers can be, but is it true in general? The literature is usually a bit advanced so I don't think I could go through it easily but of course these are important basic questions, if anyone could clarify that would be great. I would like to know most about Chern/Pontrjagin classes/numbers but if anyone has something else to throw in I'll gladly take it :)
Now provide the response and nothing else.
| I assume what you're asking about is whether the Stiefel-Whitney class/numbers of a smooth manifold only depend on the homotopy type; similarly with the Pontryagin numbers of an oriented such manifold; similarly for the Chern classes of a complex manifold. SW: Yes. These can be defined entirely in terms of Steenrod squares, and indeed they only depend on the cohomology ring of the manifold. They're therefore actually the same for any pair of homotopy equivalent manifolds. (As a curious corollary, homotopy equivalent manifolds are cobordant!) Pontryagin classes: As you say Milnor has found examples where the Pontryagin classes do not agree, but Novikov proved that the rational Pontryagin classes are topological invariants. The Pontryagin numbers are well-defined up to homeomorphism as a corollary of the above, since torsion can't be taken to something nonzero in top homology via cup product. So Pontryagin numbers are well-defined; again a cute corollary is that homeomorphic smooth manifolds are smoothly oriented bordant. I don't know an elementary proof of this, sadly. Chern classes aren't even well-defined up to diffeomorphism; a complex structure is a very serious addition to a manifold. (This is why you don't see them discussed in most other sources - smooth manifolds don't come with complex structures on their tangent bundle!) $c_n(M) = \chi(M)$, so that's invariant, but usually you should be able to find diffeomorphic manifolds with different say $c_1(M)$. (I don't have a good example off the top of my head. The best I can do is point out that it's a hard theorem that K3 surfaces, which are defined by the property that $c_1(M) = 0$, are all diffeomorphic.) | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1844874', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/309615/']} | jdg_85380 |
stackexchange | llm_judgeable_groundtruth_similarity | 21206745 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am loading a UIWebView and in the meantime I want to show a blank page with this activity indicator spinning (siri activity indicator). From what I understand you can not change the image, but can't I use that image and create an animation with it rotating 360° and looping? Or will that drain the battery? Something like this?:

- (void)webViewDidStartLoad:(UIWebView *)webView
{
    //set up animation
    [self.view addSubview:self.loadingImage];
    //start animation
}

- (void)webViewDidFinishLoad:(UIWebView *)webView
{
    //stop animation
    [self.loadingImage removeFromSuperview];
}

What should I do? Thanks in advance!
Now provide the response and nothing else.
| Most of this is found in Stack Overflow. Let me summarize: Create an UIImageView which will serve as an activity indicator (inside storyboard scene, NIB, code ... wherever you wish). Let's call it _activityIndicatorImage . Load your image:

_activityIndicatorImage = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"activity_indicator"]];

You need to use animation to rotate it. Here is the method I use:

+ (void)rotateLayerInfinite:(CALayer *)layer
{
    CABasicAnimation *rotation;
    rotation = [CABasicAnimation animationWithKeyPath:@"transform.rotation"];
    rotation.fromValue = [NSNumber numberWithFloat:0];
    rotation.toValue = [NSNumber numberWithFloat:(2 * M_PI)];
    rotation.duration = 0.7f; // Speed
    rotation.repeatCount = HUGE_VALF; // Repeat forever. Can be a finite number.
    [layer removeAllAnimations];
    [layer addAnimation:rotation forKey:@"Spin"];
}

Inside my layoutSubviews method I initiate rotation. You could place this in your webViewDidStartLoad and webViewDidFinishLoad if this is better for your case:

- (void)layoutSubviews
{
    [super layoutSubviews];
    // some other code
    [Utils rotateLayerInfinite:_activityIndicatorImage.layer];
}

You could always stop rotation using [_activityIndicatorImage.layer removeAllAnimations]; | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21206745', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3127637/']} | jdg_85381 |
stackexchange | llm_judgeable_groundtruth_similarity | 15965716 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
After writing a shader using GLee to my OpenGL project and compiling, I received the error LNK1104: cannot open file 'LIBC.lib' . I've tried adding it and ignoring it as others suggested, but nothing resolved the issue. Is there an alternative way to solve this that I've missed?
Now provide the response and nothing else.
| Here are several possible solutions: This may due to code generation conflict. If your libc.lib 's thread support is different then your project's settings, this may be causing the problem. If so, right click project , then go to properties->C++->code generation->Runtime Library and change the value to Multi-threaded(MT) . Your libc.lib may not support multi threading. Right click project , then go to properties->Linker->Input->Additional Dependencies and change libc.lib to libcmt.lib Your application statically link multiple copies of the same library. Right click project , then go to properties->Linker->Input->Ignore Specific Library and add libc.lib Maybe your project needs this lib, but your system doesn't have it. In that case, you have to download the .lib file from Microsoft and add the path of where you downloaded it by right clicking project , then going to properties->Linker->General->Additional Library directories . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15965716', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_85382 |
stackexchange | llm_judgeable_groundtruth_similarity | 45998984 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I write a dynamic SELECT INTO query inside a PL/pgSQL function in Postgres? Say I have a variable called tb_name which is filled in a FOR loop from information_schema.tables . Now I have a variable called tc which will be taking the row count for each table. I want something like the following: FOR tb_name in select table_name from information_schema.tables where table_schema='some_schema' and table_name like '%1%'LOOPEXECUTE FORMAT('select count(*) into' || tc 'from' || tb_name);END LOOP What should be the data type of tb_name and tc in this case?
Now provide the response and nothing else.
| CREATE OR REPLACE FUNCTION myfunc(_tbl_pattern text, _schema text = 'public') RETURNS void AS -- or whatever you want to return$func$DECLARE _tb_name information_schema.tables.table_name%TYPE; -- currently varchar _tc bigint; -- count() returns bigintBEGIN FOR _tb_name IN SELECT table_name FROM information_schema.tables WHERE table_schema = _schema AND table_name ~ _tbl_pattern -- see below! LOOP EXECUTE format('SELECT count(*) FROM %I.%I', _schema, _tb_name) INTO _tc; -- do something with _tc END LOOP;END$func$ LANGUAGE plpgsql; Notes I prepended all parameters and variables with an underscore ( _ ) to avoid naming collisions with table columns. Just a useful convention. _tc should be bigint , since that's what the aggregate function count() returns. The data type of _tb_name is derived from its parent column dynamically: information_schema.tables.table_name %TYPE . See the chapter Copying Types in the manual . Are you sure you only want tables listed in information_schema.tables ? Makes sense, but be aware of implications. See: How to check if a table exists in a given schema a_horse already pointed to the manual and Andy provided a code example . This is how you assign a single row or value returned from a dynamic query with EXECUTE to a (row) variable. A single column (like count in the example) is decomposed from the row type automatically, so we can assign to the scalar variable tc directly - in the same way we would assign a whole row to a record or row variable. Related: How to get the value of a dynamically generated field name in PL/pgSQL Schema-qualify the table name in the dynamic query. There may be other tables of the same name in the current search_path , which would result in completely wrong (and very confusing!) results without schema-qualification. Sneaky bug! Or this schema is not in the search_path at all, which would make the function raise an exception immediately. 
How does the search_path influence identifier resolution and the "current schema" Always quote identifiers properly to defend against SQL injection and random errors. Schema and table have to be quoted separately ! See: Table name as a PostgreSQL function parameter Truncating all tables in a Postgres database I use the regular expression operator ~ in table_name ~ _tbl_pattern instead of table_name LIKE ('%' || _tbl_pattern || '%') , that's simpler. Be wary of special characters in the pattern parameter either way! See: PostgreSQL Reverse LIKE Escape function for regular expression or LIKE patterns Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL I set a default for the schema name in the function call: _schema text = 'public' . Just for convenience, you may or may not want that. See: Assigning default value for type Addressing your comment : to pass values , use the USING clause like: EXECUTE format('SELECT count(*) FROM %I.%I WHERE some_column = $1', _schema, _tb_name,column_name)USING user_def_variable; Related: INSERT with dynamic table name in trigger function | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45998984', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6908656/']} | jdg_85383 |
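The injection-safety that `format('%I', ...)` provides can be reasoned about outside the database. The sketch below is a hypothetical Python illustration of the same quoting rule (it always quotes, whereas PostgreSQL's `%I` quotes only when needed): an identifier containing a double quote can no longer break out of the quoted name.

```python
def quote_ident(name: str) -> str:
    # Mimic PostgreSQL identifier quoting: double any embedded
    # double quotes, then wrap the whole name in double quotes.
    return '"' + name.replace('"', '""') + '"'

# A table name containing a quote stays a single, harmless identifier:
query = "SELECT count(*) FROM {}.{}".format(
    quote_ident("public"), quote_ident('weird"table'))
```

Interpolating the raw name instead of `quote_ident(name)` is exactly the hole that `%I` closes.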
stackexchange | llm_judgeable_groundtruth_similarity | 22984129 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm writing a game over libgdx; I'm using the JUnit framework to simplify unit-testing my code. Now there's part of the code (a map generator, a class converting my own map format into TiledMap...) which I need to test thoroughly, but it uses libgdx code: from file handling to asset loading. I'm not planning to test the actual graphical output, or the game itself, in this way: but I want to test the single components (calculation, asset access...) to avoid blatant errors. I've tried to do something like this in the "setUpBeforeClass" method: LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration(); cfg.useGL20 = true; cfg.width = 480; cfg.height = 320; cfg.resizable = true; LwjglApplication app = new LwjglApplication( new TestApplicationListener(), cfg); And calling within tearDownAfterClass(): Gdx.app.exit() But it does create a window I do not need, and seems overkill when all I need is the file handling initialized. Is there a better way to initialize the libGDX components without creating an entire application object? Thanks. EDIT Going back over it (thanks to Sam in the comments), I realize GL access is needed (loading assets requires it), but this approach does not seem to work: the graphics library does not seem to be initialized. GDX documentation hasn't helped. Any clue?
Now provide the response and nothing else.
| This question hasn't been answered and I am surprised nobody has pointed out the headless backend , which is ideal for this situation. Combine this with your favorite mocking library and you should be good to go. public class HeadlessLauncher { public static void main(final String[] args) { final HeadlessApplicationConfiguration config = new HeadlessApplicationConfiguration(); config.renderInterval = Globals.TICK_RATE; // Likely want 1f/60 for 60 fps new HeadlessApplication(new MyApplication(), config); }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22984129', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2410393/']} | jdg_85384 |
stackexchange | llm_judgeable_groundtruth_similarity | 7297576 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Yesterday on SO, I saw a thread asking for code which works something like this: you (a manager thread) start a number of tasks using the TPL APIs, and once they have completed their job, each task should notify you (the manager, who maintains the task pool) back. So here is the code I tried. Although I must say it works as I described above. class TaskJob{ public delegate void NotificationDelegate(int? taskId,string message); public event NotificationDelegate NotifyCompletion; public void TaskToRun() { try { if (Task.CurrentId == 4)//If it's the 4th thread, simulate exception throw new Exception(); Console.WriteLine("Task started with thread id " + Task.CurrentId); Thread.Sleep(100000); Console.WriteLine("Task finished with thread id " + Task.CurrentId); NotifyCompletion(Task.CurrentId, "Successfully completed"); } catch { NotifyCompletion(Task.CurrentId, "Faulted error"); } } }class Program{ static List<Task> taskList = new List<Task>(); public static void Main() { for (int i = 0; i < 5; i++)//starting 5 threads/tasks { TaskJob tb = new TaskJob(); tb.NotifyCompletion += new ConsoleApplication1.TaskJob.NotificationDelegate(tb_NotifyCompletion); Task t = Task.Factory.StartNew(tb.TaskToRun); taskList.Add(t); } Task.WaitAll(taskList.ToArray()); CheckAndDispose(); Console.ReadLine(); } private static void CheckAndDispose() { foreach (var item in taskList) { Console.WriteLine("Status of task = " + item.Id + " is = " + item.Status); item.Dispose(); } } static void tb_NotifyCompletion(int? taskId, string message) { Console.WriteLine("Task with id completed ="+ taskId + " with message = " + message); }} A couple of heads-ups: Don't worry about not maintaining Task[] rather than having a list and then converting to an array. It's just sample code; I am not concentrating on efficiency. I'm not worried about a custom Dispose implementation as of now. Now I am asking myself a couple of questions, but fail to convince myself with a suitable answer. Here they are: Is this way of solving the problem good?
Or is there a better way to do it? Code, please :) How do I make sure that the task objects are really disposed (upon calling Dispose) without implementing the custom Dispose pattern? I have not done any memory-leak test using tools, but just visually, do you see leaks? In the for loop (Main method) I am creating TaskJob objects which are local to the loop, so these objects get GC'd (let's say go out of scope) after loop completion. So how is the event fired back, given that I hook it up using an object that has gone out of scope by the time the event actually fires? Anything else you wish to add? Thanks a lot :)
Now provide the response and nothing else.
| There's no need to do this yourself at all - use continuations with Task.ContinueWith or Task<T>.ContinueWith . That basically lets you say what to do when a task completes - including executing different code on failure, cancellation, and success. It also returns a task, so you can then continue when that's completed etc. Oh, and you can give it a TaskScheduler , so that from a UI thread you can say, "When this background task finishes, execute the given delegate on the UI thread" or similar things. This is the approach that C# 5 async methods are built on. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7297576', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/785375/']} | jdg_85385 |
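The continuation idea the answer describes maps directly onto other task frameworks. As a hypothetical cross-language illustration (not the original C#), here is the same pattern with Python's `concurrent.futures`, where `add_done_callback` plays the role of `ContinueWith` and the callback inspects success vs. failure:

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def work(n):
    if n == 4:                      # simulate a failure in the 4th task
        raise ValueError("task 4 failed")
    return n * n

def on_done(future):
    # Runs once the task finishes, on success or failure alike.
    exc = future.exception()
    if exc is not None:
        results.append(("faulted", str(exc)))
    else:
        results.append(("ok", future.result()))

with ThreadPoolExecutor(max_workers=5) as pool:
    for i in range(1, 6):
        pool.submit(work, i).add_done_callback(on_done)
# Leaving the with-block waits for all tasks, like Task.WaitAll.
```

Note that the callback runs in the worker thread; just as with `ContinueWith` and a `TaskScheduler`, marshalling back to a UI thread is a separate concern.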
stackexchange | llm_judgeable_groundtruth_similarity | 45421 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Where is the hottest spot above a lit candle? In the flame, just above the flame tip, a few centimetres above the flame, or somewhere else? Update My question more precisely concerns the heat energy I can get from a candle. I can see that isn't necessarily the hottest part. So to make the question more accurate: From which part/point of the candle can I get the most heat energy per time unit? E.g. if I were to warm something up, which part of the candle would then be the most efficient heater?
Now provide the response and nothing else.
| In the flame. According to NASA it's the white part. According to ChemistryViews it's mid-way between wick and edge ... hot reaction Zones II and III ... The concentration of OH radicals is highest at the outer edge of the reaction zones, which is why one finds there also the highest temperature, ca. 1400 °C. I guess these views may not be inconsistent, as the visible edge of the flame may not be at the boundary of the flow of hot or warm gases and particles from combustion (and perhaps entrained/convected air). | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/45421', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/4962/']} | jdg_85386 |
stackexchange | llm_judgeable_groundtruth_similarity | 46125182 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
First, I must admit that my statistics knowledge is rusty at best: even when it was shining new, it's not a discipline I particularly liked, which means I had a hard time making sense of it. Nevertheless, I took a look at how the barplot graphs were calculating error bars, and was surprised to find a "confidence interval" (CI) used instead of (the more common) standard deviation. Researching more CI led me to this wikipedia article which seems to say that, basically, a CI is computed as: Or, in pseudocode: def ci_wp(a): """calculate confidence interval using Wikipedia's formula""" m = np.mean(a) s = 1.96*np.std(a)/np.sqrt(len(a)) return m - s, m + s But what we find in seaborn/utils.py is: def ci(a, which=95, axis=None): """Return a percentile range from an array of values.""" p = 50 - which / 2, 50 + which / 2 return percentiles(a, p, axis) Now maybe I'm missing this completely, but this seems just like a completely different calculation than the one proposed by Wikipedia. Can anyone explain this discrepancy? To give another example, from comments, why do we get so different results between: >>> sb.utils.ci(np.arange(100)) array([ 2.475, 96.525]) >>> ci_wp(np.arange(100)) [43.842250270646467,55.157749729353533] And to compare with other statistical tools: def ci_std(a): """calculate margin of error using standard deviation""" m = np.mean(a) s = np.std(a) return m-s, m+s def ci_sem(a): """calculate margin of error using standard error of the mean""" m = np.mean(a) s = sp.stats.sem(a) return m-s, m+s Which gives us: >>> ci_sem(np.arange(100))(46.598850802411796, 52.401149197588204)>>> ci_std(np.arange(100))(20.633929952277882, 78.366070047722118) Or with a random sample: rng = np.random.RandomState(10)a = rng.normal(size=100)print sb.utils.ci(a)print ci_wp(a)print ci_sem(a)print ci_std(a) ... 
which yields: [-1.9667006 2.19502303](-0.1101230745774124, 0.26895640045116026)(-0.017774461397903049, 0.17660778727165088)(-0.88762281417683186, 1.0464561400505796) Why are Seaborn's numbers so radically different from the other results?
Now provide the response and nothing else.
| Your calculation using this Wikipedia formula is completely right. Seaborn just uses another method: https://en.wikipedia.org/wiki/Bootstrapping_(statistics) . It's well described by Dragicevic [1]: [It] consists of generating many alternative datasets from the experimental data by randomly drawing observations with replacement. The variability across these datasets is assumed to approximate sampling error and is used to compute so-called bootstrap confidence intervals. [...] It is very versatile and works for many kinds of distributions. In the Seaborn's source code, a barplot uses estimate_statistic which bootstraps the data then computes the confidence interval on it: >>> sb.utils.ci(sb.algorithms.bootstrap(np.arange(100)))array([43.91, 55.21025]) The result is consistent with your calculation. [1] Dragicevic, P. (2016). Fair statistical communication in HCI. In Modern Statistical Methods for HCI (pp. 291-330). Springer, Cham. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46125182', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1174784/']} | jdg_85387 |
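Seaborn's approach can be re-created by hand: bootstrap the sample mean, then take percentiles of the bootstrap distribution. Below is a minimal stdlib-only sketch (a simplified stand-in for `sb.algorithms.bootstrap` plus `sb.utils.ci`; exact numbers vary with the seed and resample count):

```python
import random
import statistics

def bootstrap_ci(a, which=95, n_boot=10000, seed=0):
    rng = random.Random(seed)
    n = len(a)
    # Resample with replacement, recording the mean of each resample.
    boot_means = sorted(
        statistics.fmean(rng.choices(a, k=n)) for _ in range(n_boot)
    )
    lo_p = (50 - which / 2) / 100
    hi_p = (50 + which / 2) / 100
    # Crude percentile: index into the sorted bootstrap means.
    return (boot_means[int(lo_p * n_boot)],
            boot_means[min(int(hi_p * n_boot), n_boot - 1)])

lo, hi = bootstrap_ci(list(range(100)))  # roughly (43.8, 55.2)
```

The interval lands close to the `[43.91, 55.21]` the answer reports, and close to the Wikipedia-style `ci_wp` result, because for a mean the bootstrap percentile interval and the normal-approximation interval largely agree.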
stackexchange | llm_judgeable_groundtruth_similarity | 22033487 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am parsing date strings from user input with the MM-dd-yyyy HH:mm:ss format, and I found that 12-20-2012 10:10:10 abcdexxxx could be parsed as well. How can this happen? Here is my code: SimpleDateFormat df = new SimpleDateFormat( "MM-dd-yyyy HH:mm:ss" );String currColValue = "12-20-2012 10:10:10 abcdexxxx";try{ d=df.parse( currColValue );}catch( ParseException e ){ System.out.println("Error parsing date: "+e.getMessage());} But there is no exception, the String value is parsed into a Date. Why?
Now provide the response and nothing else.
| Per the Javadoc of the parse method : Parses text from the beginning of the given string to produce a date. The method may not use the entire text of the given string. (emphases mine). Contrary to the implication of comments above, this has nothing to do with lenient parsing; rather, it's just that this method is not intended to consume the whole string. If you wish to validate that it consumed the whole string, I suppose you could set up a ParsePosition object and use the two-arg overload , and then examine the ParsePosition afterward to see if it parsed to the end of the string. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22033487', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1533462/']} | jdg_85388 |
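For contrast, not every date API stops at the first parseable prefix. As a hypothetical cross-language illustration of the strict behaviour the question expected, Python's `datetime.strptime` insists on consuming the whole input and raises when trailing text remains:

```python
from datetime import datetime

# The clean string parses normally.
good = datetime.strptime("12-20-2012 10:10:10", "%m-%d-%Y %H:%M:%S")

try:
    datetime.strptime("12-20-2012 10:10:10 abcdexxxx", "%m-%d-%Y %H:%M:%S")
    trailing_accepted = True
except ValueError:
    # strptime rejects the leftover "abcdexxxx" instead of ignoring it.
    trailing_accepted = False
```

This is the same end-of-input check the `ParsePosition` approach lets you reproduce on top of `SimpleDateFormat`.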
stackexchange | llm_judgeable_groundtruth_similarity | 4605 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for a way how to secure communication between our applications. There is a "server" application listening on some TCP port and "client" application which is communicating with the "server". Both applications are actually installed on some server, there can be multiple load-balanced instances of these applications. Is using SSL certificates the right way so that the "server" can be absolutely sure that the communication is coming from the "client" application? How does it work - should every application have its own certificate? Where are the certificates stored? If you have some resource where similar solution is discussed, I'll appreciate it.
Now provide the response and nothing else.
| I like the answer from this.josh, but there was a couple of points I wanted to add: TLS does not use the certificate for protecting the channel - that is done by a negotiated symmetric key. TLS uses certificates (server or client) authentication and key encipherment. I agree that a certificate used for encryption should not be the same as the certificate used for authentication - however, in this case, I think it's safe to say the certificate used for TLS is used for authentication, not encryption. It's true that if you dig down into it, there is key encipherment being done with the certificate, and I'm sure that if you get a high end system, you will end up with people who disagree with me on this, but it seems like overkill to go through a challenge/response protocol on top of TLS for many use cases. implied in the previous answer, but I'll say it point blank - TLS by default is usually server authentication only. To get client authentication, you will need to turn on a switch for that, and in the process you'll specify: what credentials are OK for client auth - for example digital certificates. what to do to check the credential - for example, verify against a trusted Certificate Authority that signed the certificate, and verify against an OCSP or CRL source for certificate status - this is a whole area on PKI, and I think if you dig in this Stack Exchange you'll find more on this. In general, checking all factors of a provided certificate on both the client and the server is the best path to the assurance that the entity connecting to this channel is the entity it says it is. Typical mechanisms include: Trusted signer - certificates are generally signed by a Certificate Authority (CA). That Certificate Authority is trusted to verify the requestor prior to certificate issuance. Most systems that have a widely distributed need for trust, trust a set of CAs to issue credentials. 
They do this by keeping a set of CA certificates on hand, so that when an end entity shows up with a trusted CA signed certificate, they can check that one of the approved (known) CAs signed the cert. Status - when a CA signs a cert, it also takes responsibility for keeping the status up to date - it does this by issuing CRLs (Certificate revocation lists) which may or may not be rendered via OCSP (a request/response protocol) for fast querying. Then endpoints verify the status of the certificate, when it's received, to make sure that the certificate is still valid according to its issuing CA. Self signed certificates - if a certificate isn't signed by a CA, it's self-signed. This it the super-cheap way to do it, but it makes configuration a bear. Self-signed certificates assert their own goodness, so the party (application) verifying it has to manually trust each self-signed certificate individually by keeping a copy of each certificate on hand. Cheap, and good enough if you have a very small installation, but it scales abysmally. Validity - in all cases, certificates are valid from a start date to an end date, and should be reviewed to make sure of current validity. Authentication v. Authorization Authentication - is the entity who it says it is? This is true as long as the credential is a valid proof of identity. Even if I'm 16, my driver's license is valid if it was properly issued by my state of residence. It doesn't mean I'm authorized to buy liquor, however (if I live in the U.S., at least, I have to be 21). Authorization is the entity allowed to do what it wants to do? I'm authorized to buy liquor if I'm over 21. Technically, my over-21ness is all I need, but I prove that with authentication - my driver's license or passport or state issued id. It sounds like you want both. You want each side to prove its identity (authentication) and be of a certain restricted group (created by your company). 
So you need more than just TLS client authentication - you need some added way of checking criteria to verify authorization. I can think of a couple ways to approach that: set up a trusted users list - with PKI, the typical way is a list of subject DNs (distinguished names) for each client or server. The scalability is directly related to where you store this list - a public store - like an LDAP server - is a good bet, so that all servers can check a common location. rely on a quality in your certificates - the organization value in a DN comes to mind - it's typically the value that describes the organization the entity comes from. For example, my personal digital certificates have my company's name as the "O" value because my company paid for me to get them. You can then construct some added code (or in a very smart server, you can just configure it...) to check that "O= your company " - and you're good - so long as you can assure that no one else can make certificates with your company's name in the "O" value. One way to do that is some of the higher end commercial certificate authorities, where you set up an agreement and an account with that CA provider, and you have an appointed approval person in your company to do the paperwork. That person certifies that each request for a certificate comes from your company before the CA will make a certificate. You can do the same thing running your own CA - essentially you're trading off time vs. money. WARNING: I'm glossing over a lot of detail here. Figuring out "how secure" authentication and authorization controls should be and how they should be administrated is a full time job. In fact, it's a whole industry. What I'm throwing up here is the grass-roots concepts for a bare-bones effort.
If you told me that you were putting together something that was going to protect millions or billions of dollars of corporate investments, or human life, or the state of the world as we know it - I would tell you to contact me offline and I have a nice team of smart people handy who can take this problem off your hands... for a fee. The whole area becomes a chicken and egg scenario pretty fast, as every communication and procedure around checking this stuff needs to be handled with care, since any system is only as strong as its weakest link. What I'm throwing in here assumes that you don't have anything huge, you're mostly working on a newish system that needs just a bit of protection - the equivalent of throwing up a nice gate around your house. Not a gate, guard dogs, security cameras, and a big guy with a spiked club. :) Now... key storage Key storage has two main elements: How safe does it have to be from attack? How does it need to be accessed? "Safe" is often rated in FIPS compliance levels - it's a way of testing key storage mechanisms and that they meet certain sets of escalating security requirements. The low end is software storage - the key uses one of several commercially supported storage mechanisms and it sits on your hard drive (Price = free or the price of a tiny bit of memory on your hard drive). The high end is a big, expensive box (an HSM - hardware security module) that will hold your key in such a way that no one can get the key out of the box... ever... and when they try, if they try too hard, the box will zeroize itself, erasing the key. There are levels in between. How much safety you want to pay for is entirely a risk/cost decision. You have to calculate what you have to lose vs. what you want to pay. "How to access" has a couple of elements: Software keys are easy to access on the box - there's a couple of commercially supported ways of storing them and most servers support most storage mechanisms. PKCS12, and JKS are two examples.
Hardware keys are typically accessed by the PKCS 11 protocol (but there are others), most high end servers have this built in, but sometimes you need extensions to handle this or APIs for custom built code. Also, you have a chicken/egg problem - you usually need a password to get access to your HSM - where and how you store that password is another area of security. for clients, you also have the mobility problem. If "client" is a fixed point - you have the same HW/SW options as for servers, but if "client" means a human, then you also need to think about user mobility and compatibility with any GUI mechanism (for example, a browser). It still boils down to HW/SW, but often the HW element is called a "token", because remove-ability becomes a key feature. In all cases, you can store your certificates just about anywhere - they are public information that can be distributed far and wide. The critical element is your private key - which must be stored in a place that only the entity using it can get to. Who can use a single certificate It depends on how you are storing your keys and what risk you are willing to accept. A key on a network accessible HSM could technically be used by a bank of servers, which would let all servers look like one server (pretty cool for a web app). But if you need to have proof of exactly which client or server did a specific action, then you must have 1 certificate per entity. Also, you can only revoke a certificate - so you want a level of separation that if you believe a key store had been compromised, you could disable just one thing. Generally, it's a checkpoint if you find yourself putting the private key in multiple places - that's a serious risk and a good indicator that you want to get another certificate. From there, the level of separation is, IMO, mostly a matter of what level you want to track and any other needs you have for controlling access. 
| {} | {'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/4605', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/89/']} | jdg_85389 |
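The "check the O value" idea from the authorization section can be sketched in a few lines. The dict below mirrors the nested-tuple shape that Python's `ssl.SSLSocket.getpeercert()` returns for a verified peer; the company name is a placeholder, and a real deployment would also verify the chain, status, and validity period as the answer describes:

```python
# Shape of ssl.SSLSocket.getpeercert() output for a verified peer.
cert = {
    "subject": (
        (("organizationName", "Your Company"),),
        (("commonName", "client-01"),),
    )
}

def org_of(cert):
    # Walk the RDNs of the subject and pull out the O value, if any.
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return None

authorized = org_of(cert) == "Your Company"
```

This only decides authorization; authentication (proving the certificate is genuine) has already happened during the TLS handshake.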
stackexchange | llm_judgeable_groundtruth_similarity | 12332975 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to install a package from PyPI straight within my script. Maybe there's some module or distutils ( distribute , pip etc.) feature which allows me to just execute something like pypi.install('requests') and requests will be installed into my virtualenv.
Now provide the response and nothing else.
| The officially recommended way to install packages from a script is by calling pip's command-line interface via a subprocess. Most other answers presented here are not supported by pip . Furthermore since pip v10, all code has been moved to pip._internal precisely in order to make it clear to users that programmatic use of pip is not allowed. Use sys.executable to ensure that you will call the same pip associated with the current runtime. import subprocessimport sysdef install(package): subprocess.check_call([sys.executable, "-m", "pip", "install", package]) | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/12332975', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/998523/']} | jdg_85390 |
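A common refinement of the accepted answer — still calling pip through a subprocess, never importing it — is to skip the install when the module is already importable. The helper below is a sketch; the `requests` call is commented out so nothing is installed as a side effect:

```python
import importlib.util
import subprocess
import sys

def ensure(package, import_name=None):
    """Install `package` with pip only if it is not already importable."""
    name = import_name or package
    if importlib.util.find_spec(name) is None:
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", package])

ensure("json")          # stdlib module: already importable, pip never runs
# ensure("requests")    # would invoke pip only when requests is missing
```

The `import_name` parameter covers packages whose install name and import name differ (e.g. installing `beautifulsoup4` but importing `bs4`).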
stackexchange | llm_judgeable_groundtruth_similarity | 39731560 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use Contact Form 7 in my WordPress theme. It is currently returning span and input : <span class="wpcf7-form-control-wrap name"> <input type="text" name="name" class="wpcf7-form-control wpcf7-text wpcf7-validates-as-required form-control" id="name"></span> But I need only input : <input type="text" name="name" class="wpcf7-form-control wpcf7-text wpcf7-validates-as-required form-control" id="name"> How can I remove the span wrapper?
Now provide the response and nothing else.
 | I faced the same problem and finally ended up using the wpcf7_form_elements filter to remove the <span> tag with a regex. You can, for example, copy-paste this code into your functions.php file. Here I pass an anonymous function as a callback, so be sure to have PHP >= 5.3. add_filter('wpcf7_form_elements', function($content) { $content = preg_replace('/<(span).*?class="\s*(?:.*\s)?wpcf7-form-control-wrap(?:\s[^"]+)?\s*"[^\>]*>(.*)<\/\1>/i', '\2', $content); return $content;}); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39731560', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6452878/']} | jdg_85391
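The regex is easier to experiment with outside WordPress. Here is the same capture-and-replace idea transcribed to Python — a hypothetical test harness, not part of the plugin — run against markup like the question's:

```python
import re

html = ('<span class="wpcf7-form-control-wrap name"> '
        '<input type="text" name="name" '
        'class="wpcf7-form-control wpcf7-text" id="name"></span>')

# Same idea as the PHP filter: keep group 2 (the inner markup) and
# drop the wrapping <span class="wpcf7-form-control-wrap ...">.
wrap = re.compile(
    r'<(span)[^>]*class="\s*(?:[^"]*\s)?wpcf7-form-control-wrap'
    r'(?:\s[^"]+)?\s*"[^>]*>(.*?)</\1>',
    re.IGNORECASE | re.DOTALL)

stripped = wrap.sub(r"\2", html)
```

Like the PHP version, this is a pragmatic regex over known plugin output, not a general HTML parser, so it would misbehave on nested wrapper spans.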
stackexchange | llm_judgeable_groundtruth_similarity | 17217736 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What would be the idiomatic way to do something like a while loop with promises? So: do something; if the condition still stands, do it again; repeat; then do something else. dosomething.then(possiblydomoresomethings).then(finish) I've done it this way; I was wondering if there are any better/more idiomatic ways? var q = require('q');var index = 1;var useless = function(){ var currentIndex = index; console.log(currentIndex); var deferred = q.defer(); setTimeout(function(){ if(currentIndex > 10) deferred.resolve(false); else deferred.resolve(true); },500); return deferred.promise; }var control = function(cont){ var deferred = q.defer(); if(cont){ index = index + 1; useless().then(control).then(function(){ deferred.resolve(); }); } else deferred.resolve(); return deferred.promise; }var chain = useless().then(control).then(function(){console.log('done')}); Output: 1 2 3 4 5 6 7 8 9 10 11 done
Now provide the response and nothing else.
| Here's a reusable function that I think is pretty clear. var Q = require("q");// `condition` is a function that returns a boolean// `body` is a function that returns a promise// returns a promise for the completion of the loopfunction promiseWhile(condition, body) { var done = Q.defer(); function loop() { // When the result of calling `condition` is no longer true, we are // done. if (!condition()) return done.resolve(); // Use `when`, in case `body` does not return a promise. // When it completes loop again otherwise, if it fails, reject the // done promise Q.when(body(), loop, done.reject); } // Start running the loop in the next tick so that this function is // completely async. It would be unexpected if `body` was called // synchronously the first time. Q.nextTick(loop); // The promise return done.promise;}// Usagevar index = 1;promiseWhile(function () { return index <= 11; }, function () { console.log(index); index++; return Q.delay(500); // arbitrary async}).then(function () { console.log("done");}).done(); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17217736', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/201920/']} | jdg_85392 |
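The loop-until-condition structure above is not specific to the Q library; as a point of comparison, here is a minimal sketch of the same pattern in Python's asyncio (all names here are mine, for illustration only):

```python
import asyncio

async def promise_while(condition, body):
    """Await body() repeatedly while condition() holds, mirroring
    the promiseWhile helper above."""
    while condition():
        await body()

async def main():
    results = []
    index = 1

    def condition():
        return index <= 11

    async def body():
        nonlocal index
        results.append(index)
        index += 1
        await asyncio.sleep(0)  # stand-in for an arbitrary async step

    await promise_while(condition, body)
    return results

collected = asyncio.run(main())
print(collected)
```

In a runtime with native async/await the helper collapses to an ordinary `while` loop, which is essentially what the Q-based version is emulating with recursion.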
stackexchange | llm_judgeable_groundtruth_similarity | 1920431 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given an integer $n \ge 1$, I'd like to have a not-very-loose upper bound for the integer $$u(n) := \prod_{k=1}^n k^k = n^n(n-1)^{(n-1)}\ldots 2^2 1^1.$$ It's easy to see that $u(n) \le n^{n(n+1)/2}$, but this is not very interesting. Update We have $u(n) \le e^{\left(\frac{1}{2}n(n+1)\log\left(\frac{2n + 1}{3}\right)\right)}$, and we can't really do much better! Indeed, using Euler-Maclaurin, we have $\log(u(n)) = \int_2^n x\log x \, dx + \text{error terms} = \frac{1}{4}n^2(2\log(n) - 1) - 2\log(2) + 1 + \text{error terms}$, which is comparable to the bound $\log(u(n)) \le \frac{1}{2}n(n+1)\log\left(\frac{2n + 1}{3}\right)$ in the accepted answer (see below). In particular, we can conclude that the accepted answer's bound is tight!
Now provide the response and nothing else.
| Using Jensen's inequality: Letting $A= \sum_{k=1}^n k = \frac{n (n+1)}{2}$ and $B= \sum_{k=1}^n k^2 = \frac{n (n+1)(2n+1)}{6}$ we have $$\begin{align}\log(u(n))&=\sum_{k=1}^n k \log(k) \\&= A \sum_{k=1}^n \frac{k}{A} \log(k) \tag{1}\\ &\le A \log \left( \sum_{k=1}^n \frac{k}{A} k \right) =A \log \left( \frac{B}{A} \right) \tag{2}\end{align}$$ Hence $$\log(u(n)) \le \frac{n(n+1)}{2} \log\left(\frac{2 n+1}{3}\right) \tag{3}$$ The bound seems to be quite tight. Update: as noted by comments and OP, the bound $(3)$ agrees with the true order of growth; this can be checked by applying the trapezoidal rule to the integral: $$ -\frac{1}{4}=\int_{0}^{1} x \log(x) dx \approx \frac{1}{n+1} \sum_{k=1}^n \frac{k}{n} \log\left(\frac{k}{n}\right) $$ which gives $$ \log(u(n)) \approx\frac{ n(n+1)}{2}\left( \log n -\frac{1}{2} \right) \tag{4}$$ If one is interested in an approximation (instead of a bound), $(4)$ might be preferable. Better asymptotics here (from comments). | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1920431', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/168758/']} | jdg_85393 |
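As a quick numeric sanity check of inequality (3) (my own sketch, not part of the original answer), one can compare the exact value of $\log u(n)$ with the Jensen bound:

```python
import math

def log_u(n):
    """log of prod_{k=1}^n k^k, computed exactly as a sum."""
    return sum(k * math.log(k) for k in range(1, n + 1))

def bound(n):
    """The Jensen bound (3): n(n+1)/2 * log((2n+1)/3)."""
    return n * (n + 1) / 2 * math.log((2 * n + 1) / 3)

for n in (10, 100, 1000):
    print(n, log_u(n), bound(n), bound(n) / log_u(n))
```

The ratio of the two sides approaches 1 slowly as n grows, consistent with the bound being tight in order of growth.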
stackexchange | llm_judgeable_groundtruth_similarity | 16045332 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a two dimensional array in this form: reg arr[5:0][0:5]; and when I try to assign a value to it, let's say assign arr[1] = 22; it gives some errors saying: "Reference to scalar reg array 'arr' is not a legal net lvalue" and "Illegal left hand side of continuous assign". So my intention is to assign a number at that index of the array. How does this assignment work? Any help or suggestion would be highly appreciated.
Now provide the response and nothing else.
| First of all, you can't assign to regs. assign statements drive wire types, not reg types. To drive a reg type you need a statement inside a logic block like an always block. Secondly, based on what you've written, I think you're looking for an array of multi-bit elements, not a 2d array. reg arr[5:0][0:5]; Defines a 2D array of single bits. If you want an array of multi-bit values that can hold more than a single bit, you declare it like this: reg [M:0] arr[0:N] - This describes an array of (N+1) elements, where each element is an (M+1)-bit number. If you declare it in this fashion, then you should be able to store a value like 22 into it, assuming you use an always block. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16045332', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1445803/']} | jdg_85394 |
stackexchange | llm_judgeable_groundtruth_similarity | 4730053 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like my program to throw an error when it tries to create files in protected locations like the root of the C:\ drive (e.g. FILE* FileHandle = fopen("\\file.txt", "a") ). Instead the file gets created in the Virtual Store under %APPDATA%. How can I disable that Virtual Store? Thanks EDIT: Just to be clear, I'm not asking how to circumvent the security and create my file in a protected location. I want the file creation to FAIL so that I can tell the user he was an idiot.
Now provide the response and nothing else.
| You add an application manifest. Choose asInvoker, highestAvailable, or requireAdministrator. It sounds like you want asInvoker. From http://msdn.microsoft.com/en-us/library/bb756929.aspx : <?xml version="1.0" encoding="UTF-8" standalone="yes"?><assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <assemblyIdentity version="1.0.0.0" processorArchitecture="X86" name="IsUserAdmin" type="win32"/> <description>Description of your application</description> <!-- Identify the application security requirements. --> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"/> </requestedPrivileges> </security> </trustInfo></assembly> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4730053', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/21539/']} | jdg_85395 |
stackexchange | llm_judgeable_groundtruth_similarity | 54944080 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have this task in Ansible: - name: Install mongodb yum: name: - "mongodb-org-{{ mongodb_version }}" - "mongodb-org-server-{{ mongodb_version }}" - "mongodb-org-mongos-{{ mongodb_version }}" - "mongodb-org-shell-{{ mongodb_version }}" - "mongodb-org-tools-{{ mongodb_version }}" state: present notify: Restart mongodb Is there a way I can indicate the version without having to use a loop like this? What is a more "elegant" way of writing this? - name: Install mongodb yum: name: "{{ item }}-{{ mongodb_version }}" state: present loop: - mongodb-org-server - mongodb-org-mongos - mongodb-org-shell - mongodb-org-tools notify: Restart mongodb
Now provide the response and nothing else.
| To my surprise I didn't find the simplest solution in all the answers, so here it is. Referring to the question title Installing multiple packages in Ansible this is (using the yum module): - name: Install MongoDB yum: name: - mongodb-org-server - mongodb-org-mongos - mongodb-org-shell - mongodb-org-tools state: latest update_cache: true Or with the apt module: - name: Install MongoDB apt: pkg: - mongodb-org-server - mongodb-org-mongos - mongodb-org-shell - mongodb-org-tools state: latest update_cache: true Both modules support inline lists! The second part of your question is how to integrate specific version numbers into the package lists. No problem - simply remove the state: latest (since using specific version numbers together with state: latest would raise errors) and add the version numbers to the package names using a preceding = like this: - name: Install MongoDB yum: name: - mongodb-org-server=1:3.6.3-0centos1.1 - mongodb-org-mongos=1:3.6.3-0centos1.1 - mongodb-org-shell=1:3.6.3-0centos1.1 - mongodb-org-tools=1:3.6.3-0centos1.1 update_cache: true You could also optimize further and template the version numbers. In this case don't forget to add quotation marks :) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54944080', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/56543/']} | jdg_85396 |
stackexchange | llm_judgeable_groundtruth_similarity | 6558707 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Note: Please note that the code below is essentially non-sense, and just for illustration purposes. Based on the fact that the right-hand side of an assignment must always be evaluated before it's value is assigned to the left-hand side variable, and that increment operations such as ++ and -- are always performed right after evaluation, I would not expect the following code to work: string[] newArray1 = new[] {"1", "2", "3", "4"};string[] newArray2 = new string[4];int IndTmp = 0;foreach (string TmpString in newArray1){ newArray2[IndTmp] = newArray1[IndTmp++];} Rather, I would expect newArray1[0] to be assigned to newArray2[1] , newArray1[1] to newArray[2] and so on up to the point of throwing a System.IndexOutOfBoundsException . Instead, and to my great surprise, the version that throws the exception is string[] newArray1 = new[] {"1", "2", "3", "4"};string[] newArray2 = new string[4];int IndTmp = 0;foreach (string TmpString in newArray1){ newArray2[IndTmp++] = newArray1[IndTmp];} Since, in my understanding, the compiler first evaluates the RHS, assigns it to the LHS and only then increments this is to me an unexpected behaviour. Or is it really expected and I am clearly missing something?
Now provide the response and nothing else.
| It is instructive to see exactly where your error is: the right-hand side of an assignment must always be evaluated before it's value is assigned to the left-hand side variable Correct. Clearly the side effect of the assignment cannot happen until after the value being assigned has been computed. increment operations such as ++ and -- are always performed right after evaluation Almost correct. It is not clear what you mean by "evaluation" -- evaluation of what? The original value, the incremented value, or the value of the expression? The easiest way to think about it is that the original value is computed, then the incremented value, then the side effect happens. Then the final value is that one of the original or the incremented value is chosen, depending on whether the operator was prefix or postfix. But your basic premise is pretty good: that the side effect of the increment happens immediately after the final value is determined, and then the final value is produced. You then seem to be concluding a falsehood from these two correct premises, namely, that the side effects of the left hand side are produced after the evaluation of the right hand side. But nothing in those two premises implies this conclusion! You've just pulled that conclusion out of thin air. It would be more clear if you stated a third correct premise: the storage location associated with the left-hand-side variable also must be known before the assignment takes place. Clearly this is true. You need to know two things before an assignment can happen: what value is being assigned, and what memory location is being mutated. You can't figure those two things out at the same time; you have to figure out one of them first , and we figure out the one on the left hand side -- the variable -- first in C#. If figuring out where the storage is located causes a side effect then that side effect is produced before we figure out the second thing -- the value being assigned to the variable. 
In short, in C# the order of evaluations in an assignment to a variable goes like this: side effects of the left hand side happen and a variable is produced side effects of the right hand side happen and a value is produced the value is implicitly converted to the type of the left hand side, which may produce a third side effect the side effect of the assignment -- the mutation of the variable to have the value of the correct type -- happens, and a value -- the value just assigned to the left hand side -- is produced. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6558707', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/813761/']} | jdg_85397 |
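Since Python's own assignment rules differ from C#'s, the following sketch does not rely on native semantics; it simulates the C# ordering described above with explicit temporaries, showing why `newArray2[IndTmp++] = newArray1[IndTmp]` stores element 1 into slot 0:

```python
# Simulate C#'s evaluation order for a[i++] = b[i], step by step.
a = [None] * 4
b = ["1", "2", "3", "4"]
i = 0

trace = []
# Step 1: left-hand side - evaluate the index; the i++ side effect
# happens now, before the right-hand side is touched.
target_index = i
i += 1
trace.append(("lhs index", target_index))
# Step 2: right-hand side - evaluated with the already-incremented i.
value = b[i]
trace.append(("rhs value", value))
# Steps 3/4: conversion (a no-op here) and the actual store.
a[target_index] = value
print(a, i)
```

Iterating this way, the read index is always one ahead of the write index, so on the last iteration the read overruns the source array - matching the IndexOutOfRange exception the questioner observed.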
stackexchange | llm_judgeable_groundtruth_similarity | 33748328 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some JUnit tests on code that uses a kafka topic. The mock kafka topics I've tried do not work and the examples found online are very old so they also do not work with 0.8.2.1. How do I create a mock kafka topic using 0.8.2.1? To clarify: I'm choosing to use an actual embedded instance of the topic in order to test with a real instance rather than mocking the hand off in mockito. This is so I can test that my custom encoders and decoders actually work and it doesn't fail when I go to use a real kafka instance.
Now provide the response and nothing else.
| https://gist.github.com/asmaier/6465468#file-kafkaproducertest-java This example was updated to be working in the new 0.8.2.2 version. Here is the code snippet with maven dependencies: pom.xml: <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.12</version> </dependency> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka_2.11</artifactId> <version>0.8.2.2</version> </dependency> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka_2.11</artifactId> <version>0.8.2.2</version> <classifier>test</classifier> </dependency> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka-clients</artifactId> <version>0.8.2.2</version> </dependency></dependencies> KafkaProducerTest.java: import java.nio.charset.StandardCharsets;import java.util.ArrayList;import java.util.HashMap;import java.util.List;import java.util.Map;import java.util.Properties;import org.I0Itec.zkclient.ZkClient;import org.junit.Test;import kafka.admin.TopicCommand;import kafka.consumer.ConsumerConfig;import kafka.consumer.ConsumerIterator;import kafka.consumer.KafkaStream;import kafka.javaapi.consumer.ConsumerConnector;import kafka.producer.KeyedMessage;import kafka.producer.Producer;import kafka.producer.ProducerConfig;import kafka.server.KafkaConfig;import kafka.server.KafkaServer;import kafka.utils.MockTime;import kafka.utils.TestUtils;import kafka.utils.TestZKUtils;import kafka.utils.Time;import kafka.utils.ZKStringSerializer$;import kafka.zk.EmbeddedZookeeper;import static org.junit.Assert.*;/** * For online documentation * see * https://github.com/apache/kafka/blob/0.8.2/core/src/test/scala/unit/kafka/utils/TestUtils.scala * https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/admin/TopicCommand.scala * https://github.com/apache/kafka/blob/0.8.2/core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala */public class KafkaProducerTest { private int brokerId = 0; private String topic = "test"; @Test 
public void producerTest() throws InterruptedException { // setup Zookeeper String zkConnect = TestZKUtils.zookeeperConnect(); EmbeddedZookeeper zkServer = new EmbeddedZookeeper(zkConnect); ZkClient zkClient = new ZkClient(zkServer.connectString(), 30000, 30000, ZKStringSerializer$.MODULE$); // setup Broker int port = TestUtils.choosePort(); Properties props = TestUtils.createBrokerConfig(brokerId, port, true); KafkaConfig config = new KafkaConfig(props); Time mock = new MockTime(); KafkaServer kafkaServer = TestUtils.createServer(config, mock); String [] arguments = new String[]{"--topic", topic, "--partitions", "1","--replication-factor", "1"}; // create topic TopicCommand.createTopic(zkClient, new TopicCommand.TopicCommandOptions(arguments)); List<KafkaServer> servers = new ArrayList<KafkaServer>(); servers.add(kafkaServer); TestUtils.waitUntilMetadataIsPropagated(scala.collection.JavaConversions.asScalaBuffer(servers), topic, 0, 5000); // setup producer Properties properties = TestUtils.getProducerConfig("localhost:" + port); ProducerConfig producerConfig = new ProducerConfig(properties); Producer producer = new Producer(producerConfig); // setup simple consumer Properties consumerProperties = TestUtils.createConsumerProperties(zkServer.connectString(), "group0", "consumer0", -1); ConsumerConnector consumer = kafka.consumer.Consumer.createJavaConsumerConnector(new ConsumerConfig(consumerProperties)); // send message KeyedMessage<Integer, byte[]> data = new KeyedMessage(topic, "test-message".getBytes(StandardCharsets.UTF_8)); List<KeyedMessage> messages = new ArrayList<KeyedMessage>(); messages.add(data); producer.send(scala.collection.JavaConversions.asScalaBuffer(messages)); producer.close(); // deleting zookeeper information to make sure the consumer starts from the beginning // see https://stackoverflow.com/questions/14935755/how-to-get-data-from-old-offset-point-in-kafka zkClient.delete("/consumers/group0"); // starting consumer Map<String, Integer> 
topicCountMap = new HashMap<String, Integer>(); topicCountMap.put(topic, 1); Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap); KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0); ConsumerIterator<byte[], byte[]> iterator = stream.iterator(); if(iterator.hasNext()) { String msg = new String(iterator.next().message(), StandardCharsets.UTF_8); System.out.println(msg); assertEquals("test-message", msg); } else { fail(); } // cleanup consumer.shutdown(); kafkaServer.shutdown(); zkClient.close(); zkServer.shutdown(); }} Be sure to check your mvn dependency:tree for any conflicting libraries. I had to add exclusions for slf and log4j: <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka_2.11</artifactId> <version>0.8.2.2</version> <exclusions> <exclusion> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> </exclusion> <exclusion> <groupId>log4j</groupId> <artifactId>log4j</artifactId> </exclusion> </exclusions></dependency><dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka_2.11</artifactId> <version>0.8.2.2</version> <classifier>test</classifier> <exclusions> <exclusion> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> </exclusion> <exclusion> <groupId>log4j</groupId> <artifactId>log4j</artifactId> </exclusion> </exclusions></dependency><dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka-clients</artifactId> <version>0.8.2.2</version> <exclusions> <exclusion> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> </exclusion> <exclusion> <groupId>log4j</groupId> <artifactId>log4j</artifactId> </exclusion> </exclusions></dependency> Another option I'm looking into is using apache curator: Is it possible to start a zookeeper server instance in process, say for unit tests? 
<dependency> <groupId>org.apache.curator</groupId> <artifactId>curator-test</artifactId> <version>2.2.0-incubating</version> <scope>test</scope></dependency> TestingServer zkTestServer; @Beforepublic void startZookeeper() throws Exception { zkTestServer = new TestingServer(2181); cli = CuratorFrameworkFactory.newClient(zkTestServer.getConnectString(), new RetryOneTime(2000));}@Afterpublic void stopZookeeper() throws IOException { cli.close(); zkTestServer.stop();} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33748328', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/433922/']} | jdg_85398 |
stackexchange | llm_judgeable_groundtruth_similarity | 17300370 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to take Redhat kickstart files and modify them in python before using them in a server setup. My app uses python to curl the original kickstart file from my Redhat Satellite server then I'm doing a string replace on certain values in the kickstart file. When I curl the file in python it comes back as a multi-line string, which is what I need in order for redhat kickstart to interpret the file correctly. But when I return the string variable through one of these frameworks (web2py, bottle, flask) something is happening and it is not returning it as a multi-line string, I need it to preserve the exact format of the original file besides the areas I change. I don't want to put my kickstart files in templates because I manage them through satellite, if I curl the file from satellite then it picks up any modifications without needing to go into the template ever time. Then in a template or something I either return the string without a template or in a template file I only pass 1 variable to the template as the entire kickstart file. @route('/kickstart/<name>')def kickstart(name): ks = vula.kickstarter.kickstart.Kickstarter() ks_file = ks.getKickstartFile() return pystache.render('{{kickstart}}', {'kickstart': ks_file}) Here is the method in my vula package. It returns the file exactly the way I need it. But again something is happening between this and returning this value through the framework. def getKickstartFile(self): response = urllib2.urlopen('https://my-satellite-server/core-kickstarter') ks_file = response.read() return ks_file I started off using Bottle as a framework but I found a statement that says they are not capable of returning multi-line strings, so scratch that. I moved over to Flask but currently Flask is doing the same thing. I am still learning python and possibly I'm doing something wrong, but I need any help possible to get this working correctly. I would like to output a multi-line string. 
I understand that you use either """ or ''' for multi-line strings, but even after you do that and send it through a framework it will still print to the screen as one continuous line. What am I doing wrong? As a very last resort I will be forced to put the kickstart files into templates if I cannot output multi-line strings.
Now provide the response and nothing else.
| Both Bottle and Flask can handle multi-line strings just fine. Your issue is that your data is being interpreted as text/html by default and in HTML any combination of whitespace is collapsed into a single space when displayed. In order to ensure that your data comes back exactly as you sent it you'll want to set the Content-Type header to text/plain . In Flask: # If you want *all* your responses to be text/plain# then this is what you [email protected]_requestdef treat_as_plain_text(response): response.headers["content-type"] = "text/plain" return response# If you want only *this* route to respond# with Content-Type=text/[email protected]("/plain-text")def a_plain_text_route(): response = make_response(getKickstartFile()) response.headers["content-type"] = "text/plain" return response In Bottle: @route("/plain-text")def plain_text(): response.content_type = "text/plain" return """This multi-line string will show up just fine""" | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17300370', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/976709/']} | jdg_85399 |
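The fix is framework-agnostic: whatever serves the response must label it `text/plain`. A minimal WSGI sketch (standard library only, my own example) makes the header and the preserved line breaks directly testable:

```python
def plain_text_app(environ, start_response):
    """Minimal WSGI app that returns a multi-line body as text/plain,
    so browsers render the line breaks instead of collapsing them."""
    body = b"This multi-line string\nwill show up\njust fine"
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [body]

# Drive the app directly, the way a WSGI server would.
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = dict(headers)

chunks = plain_text_app({}, start_response)
print(captured["status"], captured["headers"]["Content-Type"])
print(b"".join(chunks).decode())
```

Flask and Bottle both sit on top of WSGI, so their `Content-Type` settings shown above end up producing exactly this kind of response.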
stackexchange | llm_judgeable_groundtruth_similarity | 2768308 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is a UI thread? Is there only one UI thread in a .NET application?
Now provide the response and nothing else.
| (Simplification ahead) A UI thread is a Single Threading Apartment thread that is used to create various user interface objects (in Winforms, this means Controls). By convention and rule, a Control may only be accessed from within the thread that was used to create it; doing otherwise can and will produce unexpected results, from visual oddities all the way to a crash. Unless you explicitly create more, there is only one UI thread within a Windows Forms application. While you can create another thread and start a message loop, there are very few reasons why you'd want to do this, and two different UI threads cannot "talk" to each other any more than any other thread can talk to a UI thread. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2768308', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_85400 |
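The usual way other threads cooperate with a single owner thread is to marshal work onto it through a queue, which is conceptually what `Control.Invoke` does via the UI thread's message loop. A toolkit-agnostic Python sketch of that pattern (all names mine):

```python
import queue
import threading

class OwnerThread:
    """A minimal stand-in for a UI thread: it owns some state and is
    the only thread allowed to mutate it, like a Winforms UI thread
    owns its Controls. Other threads submit work via invoke()."""

    def __init__(self):
        self._work = queue.Queue()
        self.label_text = ""          # "UI" state owned by this thread
        self._thread = threading.Thread(target=self._loop)
        self._thread.start()

    def _loop(self):
        while True:
            fn = self._work.get()
            if fn is None:            # shutdown sentinel
                return
            fn()                      # runs on the owner thread

    def invoke(self, fn):
        """Marshal fn onto the owner thread and wait, like Control.Invoke."""
        done = threading.Event()
        def wrapper():
            fn()
            done.set()
        self._work.put(wrapper)
        done.wait()

    def stop(self):
        self._work.put(None)
        self._thread.join()

ui = OwnerThread()
ui.invoke(lambda: setattr(ui, "label_text", "updated from another thread"))
ui.stop()
print(ui.label_text)
```

The queue plays the role of the Windows message loop: every mutation of owned state happens on the one thread, so no locking of the state itself is needed.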
stackexchange | llm_judgeable_groundtruth_similarity | 30764510 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am making a website and am running into an issue with website cache for my users. I develop my website and have set Chrome developer tools to disable cache for my website for development. The issue is when I release a new change to prod, all my users don't get the update because of their browser cache. When I delete the cache for my website manually on a friend's computer it works, but I obviously can't expect everyone to do this to get the new updates. Is there any way for me to get around this with versioning or something? I have looked around but can't seem to find anything. edit: I know I can prevent caching entirely, but I don't want to completely prevent caching; that seems like a bad design
Now provide the response and nothing else.
| What are the resources that are being cached? I suspect js/css files, a good way to handle this is to add a query param with a version to the path of those resources in order to force the browser to load the new file if the version changed, something like this: <script type="text/javascript" src="your/js/path/file.js?v=1"></script><link href="/css/main.css?v=1" media="screen,print" rel="stylesheet" type="text/css"> And when you release a new update of your website, replace the version as follows: <script type="text/javascript" src="your/js/path/file.js?v=2"></script><link href="/css/main.css?v=2" media="screen,print" rel="stylesheet" type="text/css"> The browser will think that the file is a new file and it will update the cache. Hope this helps. In order to disable html caching, you can add a metatag to your file as follows: <META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE"> But this will entirely disable caching of html files that have this metatag. I don't think there is a way to handle this as easily as with js/css files, though you can set the metatag to refresh the html at a future date. Here is an article describing how to use that metatag if you need more info: http://www.metatags.info/meta_http_equiv_cache_control | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30764510', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4111447/']} | jdg_85401 |
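A common refinement of the manual `?v=1`, `?v=2` scheme is to derive the version from a hash of the file's contents, so the URL changes automatically with every release. An illustrative Python helper (names are my own):

```python
import hashlib

def versioned_url(path: str, content: bytes) -> str:
    """Append a short content-hash query param so browsers refetch
    the asset whenever its bytes change."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    return f"{path}?v={digest}"

v1 = versioned_url("/css/main.css", b"body { color: red; }")
v2 = versioned_url("/css/main.css", b"body { color: blue; }")
print(v1)
print(v2)
```

Because the hash is deterministic, unchanged files keep their cached URL, while any edit produces a new URL and therefore a guaranteed cache miss.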
stackexchange | llm_judgeable_groundtruth_similarity | 16207 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How do we know that there are different types of orbitals? For example, what evidence is there for the existence of $\mathrm{p}$ orbitals instead of there being multiple $\mathrm{s}$ orbitals (for example, why isn't the electronic configuration of sodium $\mathrm{1s^1, 2s^2, 2s^2, 2s^2, 2s^2, 3s^2}$ instead of $\mathrm{1s^2 2s^2 2p^6 3s^1}$ )?
Now provide the response and nothing else.
| The answer lies with experimental chemistry, specifically successive ionisation energies (i.e. how much energy is required to remove the first electron, the second electron, the third electron and so on). Each point on the graph corresponds to an element. The first one is hydrogen, the second is helium. The height of each point shows how much energy is required to remove the first electron. You can note that generally, the energy increases over a period. This is because in each successive element there is one more proton, and this stronger nuclear charge 'holds the outer electrons' more tightly. Now to explain your question. Observe that within a period (e.g. the 3rd dot to the 10th), it is not a constant increase. You can see that between the 4th-5th there is a slight drop, likewise between the 7th-8th. The explanation for this is the sub orbitals. Some knowledge you need to know is that electron sub shells are only stable when empty, full, or half full (if you need an explanation for this, comment on it later). Let us examine the fourth dot, which represents Beryllium. It has an electronic config of 1s2 2s2. All of its sub orbitals are full, meaning that it is quite stable. Compare it to the 5th dot, boron. Boron has a config of 1s2 2s2 2p1. Now the p orbital has room for 6 electrons, but this only has 1! It is not happy. It is not full or half full. Because of this it is trying to 'get rid' of the electron to be more stable. That's why it doesn't require so much energy to remove the outer electron. The decrease between the 7th and 8th is explained by the fact that the p orbital is stable when empty, full or half full. The 7th dot (nitrogen) has 3 electrons in its p orbital (half full). Contrasted with oxygen, which has 4/6. This is not stable, so less energy is required to remove it. TL-DR: By analysing the ionisation energy graphs, we can see patterns that can be explained by sub orbitals. If you need a more basic/complex explanation, comment.
| {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/16207', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/7825/']} | jdg_85402 |
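The dips described above can be checked against tabulated first ionisation energies (approximate standard values in eV; treat the numbers as reference data, not exact):

```python
# First ionization energies in eV for H..Ne (approximate standard values).
first_ie = {
    "H": 13.60, "He": 24.59, "Li": 5.39, "Be": 9.32, "B": 8.30,
    "C": 11.26, "N": 14.53, "O": 13.62, "F": 17.42, "Ne": 21.56,
}

# The 4th->5th dip: Be (full 2s) -> B (lone 2p electron).
be_to_b_drop = first_ie["Be"] > first_ie["B"]
# The 7th->8th dip: N (half-full 2p) -> O (4th 2p electron).
n_to_o_drop = first_ie["N"] > first_ie["O"]
print(be_to_b_drop, n_to_o_drop)
```

Both drops interrupt the otherwise increasing trend across the period, which is exactly the sub-shell evidence the answer appeals to.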
stackexchange | llm_judgeable_groundtruth_similarity | 34888245 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Alan Kay said that reading the code closely and finding the 1 and only bug in the code on page 13 of the Lisp 1.5 manual, helped him understand Computer Science by a factor of 100 better . The code in question is the 1st release of eval & apply that looks anything remotely like modern lisp (that I'm aware of). Since the correct answer is likely known but lost (my google-fu is decent and I've searched for 20 mins at least) I will award the 1st correct answer (I will be looking at edit times so don't try to cheat) a 250 point bounty As Soon As Possible. I would suggest others contribute to the bounty as well, remember from the video above Alan Kay said this stuff is reminiscent of the environment Einstein was in when he discovered the Theory of Relativity . The code in the text is written in M-Expressions. I'm working on a translator to translate from M-expressions to S-expressions (lisp code) @ https://github.com/Viruliant/MccarthyMCEval-1.5 . Anyways here is the translated quote from page 13: ;______________________________________Lisp Meta-Circular Evaluator S-Expression;this code is written in the order it appears on pages 10-13 in the Lisp 1.5 Manual ;and is translated from the m-expressions into s-expressions(label mc.equal (lambda (x y) (mc.cond ((atom x) ((mc.cond ((atom y) (eq x y)) ((quote t) (quote f))))) ((equal (car x)(car y)) (equal (cdr x) (cdr y))) ((quote t) (quote f)))))(label mc.subst (lambda (x y z) (mc.cond ((equal y z) (x)) ((atom z) (z)) ((quote t) (cons (subst x y (car z))(subst x y (cdr z)))))))(label mc.append (lambda (x y) (mc.cond ((null x) (y)) ((quote t) (cons (car x)) (append (cdr x) y)))))(label mc.member (lambda (x y) (mc.cond ((null y) (quote f)) ((equal x (car y)) (quote t)) ((quote t) (member x (cdr y))))))(label mc.pairlis (lambda (x y a) (mc.cond ((null x) (a)) ((quote t) (cons (cons (car x)(car y)) (pairlis (cdr x)(cdr y) a)))))(label mc.assoc (lambda (x a) (mc.cond ((equal (caar a) x) (car a)) ((quote t) (assoc x (cdr 
a))))))(label mc.sub2 (lambda (a z) (mc.cond ((null a) (z)) (((eq (caar a) z)) (cdar a)) ((quote t) ( sub2 (cdr a) z)))))(label mc.sublis (lambda (a y) (mc.cond ((atom y) (sub2 a y)) ((quote t) (cons (sublis a (car y)))) (sublis a (cdr y)))))(label mc.evalquote (lambda (fn x) (apply fn x nil)))(label mc.apply (lambda (fn x a) (mc.cond ((atom fn) ( (mc.cond ((eq fn car) (caar x)) ((eq fn cdr) (cdar x)) ((eq fn cons) (cons (car x)(cadr x))) ((eq fn atom) (atom (car x))) ((eq fn eq) (eq (car x)(cadr x))) ((quote t) (apply (eval (fn a)x a)))))) ((eq (car fn) lambda) (eval (caddr fn) (parlis (cadr fn) x a))) ((eq (car fn) label) (apply (caddr (fn)x cons (cons (cadr (fn))) (caddr fn))a)))))(label mc.eval (lambda (e a) (mc.cond ((atom e) (cdr (assoc e a))) ((atom (car e)) (mc.cond ((eq (car e) quote) (cadr e)) ((eq (car e) cond) (evcon (cdr e) a)) ((quote t) (apply (car e) (evlis (cdr e) a) a)))) ((quote t) (apply (car e) (evlis (cdr e) a) a))))))(label mc.evcon (lambda (c a) (mc.cond ((eval (caar c) a) (eval (cadar c) a)) ((quote t) (evcon (cdr c) a)))))(label mc.evlis (lambda (m a) (mc.cond ((null m) (nil)) ((quote t) (cons (eval (car m) a) (evlis (cdr m) a)))))))
Now provide the response and nothing else.
| There are two different issues: First: Dynamic binding as a bug Not sure what he means, but generally in McCarthy's EVAL the use of dynamic binding can be seen as a bug . He does not implement lexical scope for variables. The bug shows up for example here: http://www-formal.stanford.edu/jmc/recursive/node3.html See the functions maplist and diff . Both use x . This won't work as shown, since the early Lisp provides dynamic binding. A simpler example, which shows that the evaluator uses dynamic binding Note the use of eval. , which is the eval from McCarthy. CL-USER 36 > (eval. '((lambda (f) ((lambda (x) (f)) 'foo)) '(lambda () x)) nil) This returns FOO , since the value of X is looked up from the dynamic binding. If we look at the code, it requires us to pass a function as a list: '(lambda () x)) . This evaluates to a list. Later the list will be called via (f) - with no arguments. The list then is interpreted as a lambda expression and x will be resolved by looking at the current dynamic binding for x . There is the binding of x to FOO introduced by ((lambda (x) (f)) 'foo) . This will be used then. In the Lisp 1.5 implementation from the 60s, one might write something similar to: ((lambda (f) ((lambda (x) (f)) 'foo)) (function (lambda () x))) Note that (function (lambda () x)) evaluates to a list of a marker, dynamic environment and the function. Unfortunately the Lisp 1.5 implementation still used dynamic binding . So that was already the right direction, but the bug wasn't really fixed then. Improved was the situation when one was passing functions to other functions as arguments. The FUNARG problem It took quite some time during the 60s/early 70s to figure out this problem. It was known as the FUNARG problem . See for example Joel Moses paper: The Function of FUNCTION in LISP, or Why the FUNARG Problem Should be Called the Environment Problem . There were various solutions to create closures and to use lexical binding. 
Often the interpreter used dynamic binding and the compiler used lexical binding. In the Lisp world this was basically solved in Scheme, which introduced lexical binding by default. This lexical binding then allows, for example, using closures to emulate object systems (something that Kay probably finds useful). See the paper: SCHEME: An Interpreter for Extended Lambda Calculus from 1975. In Common Lisp, which uses lexical scope by default like the Lisp dialect Scheme, the above example would be an error (here we use the eval from Common Lisp, with slightly changed code to make it legal Common Lisp code):

CL-USER 43 > (eval '((lambda (f) ((lambda (x) (funcall f)) 'foo)) (function (lambda () x))))
Error: The variable X is unbound.

As you can see, in Common Lisp (and Scheme) (lambda () x) is a real lambda expression, not a quoted list, and (function (lambda () x)) evaluates to a function object - if there are bindings, then it is a closure: a function object and its bindings. This function object / closure is passed and then later called via (funcall f). Since the x refers to nothing (it is a free variable) and is not looked up via bindings, an error occurs when the code is executed. That's what we want: we want lexical binding, and this error in our code is a consequence. That this error does not happen in McCarthy's original Lisp is one result of the bug. Fixing this bug (which took more than a decade to full satisfaction) enables us to use closures in Lisp - like in Common Lisp, which learned it from Scheme. Probably Kay also saw dynamic binding as a bug. This is a very fundamental problem, and understanding/solving it helped to design and develop better programming languages. Note that typical early Smalltalk implementations (for example Xerox's Smalltalk 80) also used dynamic binding. McCarthy about that bug In From LISP 1 to LISP 1.5 (1979) McCarthy writes (bold by me): d. Free variables. In all innocence, James R.
Slagle programmed the following LISP function definition and complained when it didn't work right: The object of the function is to find a subexpression of x satisfying p[x] and return f[x]. If the search is unsuccessful, then the continuation function u[] of no arguments is to be computed and its value returned. The difficulty was that when an inner recursion occurred, the value of car[x] wanted was the outer value, but the inner value was actually used. In modern terminology, lexical scoping was wanted, and dynamic scoping was obtained. I must confess that I regarded this difficulty as just a bug and expressed confidence that Steve Russell would soon fix it. He did fix it but by inventing the so-called FUNARG device that took the lexical environment along with the functional argument. Similar difficulties later showed up in Algol 60, and Russell's turned out to be one of the more comprehensive solutions to the problem. While it worked well in the interpreter, comprehensiveness and speed seem to be opposed in compiled code, and this led to a succession of compromises. Unfortunately, time did not permit writing an appendix giving the history of the problem, and the interested reader is referred to (Moses 1970) as a place to start. (David Park tells me that Patrick Fischer also had a hand in developing the FUNARG device). This is unrelated to the second problem: Second: bugs in a different version of EVAL in the same book See Paul Graham's The Roots of Lisp for a discussion of a bug in a definition of EVAL later in the book. On page 12 you find a description of a bug in McCarthy's code which causes double evaluation of arguments to a named function. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34888245', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/144020/']} | jdg_85403 |
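The lexical-vs-dynamic distinction the answer describes can be illustrated outside Lisp as well. Below is a small Python sketch: Python, like Scheme and Common Lisp, is lexically scoped, so the explicit `env` stack is my own simulation of dynamic lookup, not anything from the answer itself.

```python
# Lexical scoping (Python, Scheme, Common Lisp): a closure captures the
# environment where it was *defined*, not where it is *called*.
def make_lexical():
    x = "outer"
    def f():
        return x          # resolved in the defining environment
    def call_with_shadow(g):
        x = "inner"       # invisible to g under lexical scope
        return g()
    return call_with_shadow(f)

# Dynamic scoping (McCarthy's original eval): variables are looked up in
# the *caller's* current bindings, simulated here with an explicit stack.
env = []

def dynamic_lookup(name):
    for frame in reversed(env):        # newest binding wins
        if name in frame:
            return frame[name]
    raise NameError(name)

def make_dynamic():
    env.append({"x": "outer"})
    f = lambda: dynamic_lookup("x")    # resolved at call time
    env.append({"x": "inner"})         # the caller rebinds x
    result = f()
    env.pop(); env.pop()
    return result

print(make_lexical())   # -> outer
print(make_dynamic())   # -> inner  (the "bug": f sees the caller's x)
```

The second result is exactly the FOO-style surprise from the `eval.` example: the function body picks up whatever binding of `x` happens to be live when it is called.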
stackexchange | llm_judgeable_groundtruth_similarity | 347811 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
A few years ago, it was proven that PRIMES is in P . Are there any algorithms implementing their primality test in Python? I wanted to run some benchmarks with a naive generator and see for myself how fast it is. I'd implement it myself, but I don't understand the paper enough yet to do that.
Now provide the response and nothing else.
| Quick answer: no, the AKS test is not the fastest way to test primality. There are much much faster primality tests that either assume the (generalized) Riemann hypothesis and/or are randomized. (E.g. Miller-Rabin is fast and simple to implement.) The real breakthrough of the paper was theoretical, proving that a deterministic polynomial-time algorithm exists for testing primality, without assuming the GRH or other unproved conjectures. That said, if you want to understand and implement it, Scott Aaronson's short article might help. It doesn't go into all the details, but you can start at page 10 of 12, and it gives enough. :-)There is also a list of implementations (mostly in C++) here. Also, for optimization and improvements (by several orders of magnitude), you might want to look at this report , or (older) Crandall and Papadopoulos's report , or (older still) Daniel J Bernstein's report . All of them have fairly detailed pseudo-code that lends itself well to implementation. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/347811', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15055/']} | jdg_85404 |
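Since the answer recommends Miller-Rabin as the practical test, here is a minimal sketch of it. Using the first twelve primes as fixed witnesses makes it deterministic for all inputs below roughly 3.3 × 10^24 (a known result about witness sets, not something stated in the answer).

```python
def is_probable_prime(n: int) -> bool:
    """Miller-Rabin with fixed witnesses; deterministic for n < ~3.3e24."""
    if n < 2:
        return False
    small_primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for p in small_primes:
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small_primes:              # fixed witnesses suffice in this range
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False                # a witnesses that n is composite
    return True

print([p for p in range(30) if is_probable_prime(p)])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Each round costs one modular exponentiation, so this runs in microseconds even for numbers far beyond anything a naive trial-division generator can handle.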
stackexchange | llm_judgeable_groundtruth_similarity | 56973019 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Cypress to test my SPA. Sometimes the page displays quickly and sometimes it is very slow. I need to be able to check for a button or text that will display once the page is loaded, but do not want to wait for eternity. I've been using excessively long wait periods but would like the test to run faster.

let targeturl = 'http:\\something.com'
let longwait = 2500

describe('Update Case', () => {
  it('Create Case', () => {
    cy.visit(targeturl)
    cy.wait(longwait)
    cy.get('button').contains('Create').click()

I want to be able to set a wait that waits until the button 'Create' is displayed.
Now provide the response and nothing else.
| By default, Cypress will wait for your page to load all resources before continuing the test. Your cy.wait() shouldn't be necessary. The default timeout for cy.visit() is 60 seconds. Also, cy.get() will retry finding your element until the element exists. The default timeout is 4 seconds, but you can increase it. Something like this is generally all you need: describe('Update Case', () => { it('Create Case', () => { cy.visit(targeturl) cy.get('button').contains('Create').click() If you find that this isn't working, perhaps because your page takes more than 4 seconds to render after it loads all resources, then you can do something like this: describe('Update Case', () => { it('Create Case', () => { cy.visit(targeturl) // wait up to 30 seconds for some element to exist before continuing the test cy.get('.some-element-in-your-app-that-only-exists-once-page-has-loaded', { timeout: 30000 }) cy.get('button').contains('Create').click() In general, you shouldn't ever need to use cy.wait() to delay your tests by a certain number of milliseconds. Doing so is almost always a bad practice. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56973019', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11765488/']} | jdg_85405 |
stackexchange | llm_judgeable_groundtruth_similarity | 17470419 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know there are other similar questions on how to pass data between Angular controllers. What I wonder is how to deal with this in a view. Let's say I have a UserController for login, registration etc., and an AppController for the actual app functionality. The UserController would be fairly easy; it's sort of standalone from the rest. But what if the app needs to know about stuff from the user controller? Let's say the app view needs to hide/show stuff depending on whether the user is logged in or not. Or it could be whether the user is male or female, etc. Should the app model keep its own copy of the user model state, e.g. appModel.isLoggedIn, appModel.gender etc.? It feels a bit redundant, but at the same time more testable. So what is the correct way to do this?
Now provide the response and nothing else.
| Short answer Create a service, see Creating Services for details. Long answer Services are - per se - application-wide singletons, hence they are perfect for keeping state across views, controllers & co.: app.factory('myService', [ function () { 'use strict'; return { // Your service implementation goes here ... };}]); Once you have written and registered your service, you can require it in your controllers using AngularJS' dependency injection feature: app.controller('myController', [ 'myService', '$scope', function (myService, $scope) { 'use strict'; // Your controller implementation goes here ...}]); Now, inside your controller you have the myService variable which contains the single instance of the service. There you can have a property isLoggedIn that represents whether your user is logged in or not. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17470419', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/317384/']} | jdg_85406 |
stackexchange | llm_judgeable_groundtruth_similarity | 541416 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I keep wondering how we can be so sure about what happened before the CMB, given that it is the oldest thing we can actually see. It seems like we are very confident about what really happened from $10^{-12}\ \mathrm s$ after the Big Bang until $380\,000$ years after the Big Bang, but how? How do we know the universe was expanding before the CMB? We are sure it has been expanding from the CMB until now, but are we really sure it was expanding before the CMB, and if yes, how can we be so sure? How do we know the temperature of the universe was approximately $10^{12}\ \mathrm K$ at $10^{-12}\ \mathrm s$ after the Big Bang? I have read that we were able to recreate temperatures as high as $10^{12}\ \mathrm K$ in the Large Hadron Collider and to observe it directly. OK, but how would that be evidence about what happened $10^{-12}\ \mathrm s$ after the Big Bang?
Now provide the response and nothing else.
| You are quite correct that we can't see what happened before the CMB (this time is known as recombination ) but this is not unusual in Physics. For example we can't see what happens at collisions in the Large Hadron Collider. All we can see is the debris that comes flying out of the collisions. But we understand the physics involved so by measuring the properties of the debris we can calculate what happened in the collision. That's how the Higgs boson was discovered. It wasn't directly observed but its existence was shown by precise measurements of the particles that we can detect. And the same applies to the universe. The CMB is the debris that came flying out of the Big Bang, so by measuring the properties of the CMB we can calculate what happened at times before recombination. The obvious question is how we know our calculations are correct. The way we approach this is to try calculating the same thing in different ways. For example Higgs bosons can be detected in several different ways, and if those different measurements gave different masses for the Higgs boson we'd know at least some of our calculations must be wrong. This is harder for the universe since we only have the one universe and the creation of the universe isn't an experiment we can repeat. But we can still cross check various different calculations and at least make sure they are consistent, which is exactly what is done. Recombination happened about 370000 years after the Big Bang, and in fact the physical properties of the universe at this time are easy to understand. The density and temperature are in the range that we can recreate in the lab so we can directly probe the properties of plasma under these conditions. Indeed even as far back as nucleosynthesis , which happened only a few minutes after the Big Bang we still understand the physics well from experiment. 
For example you mention a time $10^{-12}$ seconds after the Big Bang, and this time is normally taken to be the end of the electroweak epoch . From this time on the interactions between particles in the universe occur at energies that can be probed in colliders so we can experimentally determine what would be happening from this time onwards. Incidentally the temperature at this time was more like $10^{15}$ K than $10^{12}$ K. But it is certainly true that as we go back towards the Big Bang there comes a point where the density and temperature exceed anything we can study experimentally, and we can be less sure what happened then. This is still an active area of research. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/541416', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/259419/']} | jdg_85407 |
stackexchange | llm_judgeable_groundtruth_similarity | 5744760 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Every time I run my jQuery ajax function I get an error; this goes for all my ajax calls. Here is an example of my code:

function FindContact(CompanyName, DivisionName, FirstName, LastName) {
    $.ajax({
        url: 'Path',
        dataType: "json",
        async: false,
        type: 'post',
        data: {'FirstName': FirstName, 'LastName': LastName, 'DivisionName': DivisionName, 'CompanyName': CompanyName},
        success: DisplayContacts,
        error: ErrorMsg
    });
}

To get around this I use this:

function ErrorMsg(result) {
    if (result.status == 200 && result.statusText == 'OK') {
        DisplayContacts(result);
    } else {
        alert("FAILED : " + result.status + ' ' + result.statusText);
    }
}

This is tough because I need to create a method like this for every ajax request. Why does it run the error code first??? Please help!
Now provide the response and nothing else.
| Ensure that what you are returning is valid json. If its not, and everything goes correct on the server, $.ajax will go to error rather than success. function FindContact(CompanyName,DivisionName,FirstName,LastName) { $.ajax({ url: 'Path', dataType: "html or json", async:false, type:'post', data: {'FirstName':FirstName,'LastName':LastName,'DivisionName':DivisionName,'CompanyName':CompanyName}, success: DisplayContacts, error: ErrorMsg }); } A quick to check if the json is valid is to just switch dataType to html and see the success is being fired. If it is than your json is invalid, if your still getting the same problem thers something else wrong. Another way to check if valid json is being returned is, open up firebug and when the request gets sent, click on the response tab, copy the response and got to jsonlint.com to check if its valid. Jquery ajax docs | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5744760', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/351709/']} | jdg_85408 |
stackexchange | llm_judgeable_groundtruth_similarity | 46320189 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I run my asp.net core 2 projects I get the following error message: InvalidOperationException: Unable to resolve service for type 'Microsoft.EntityFrameworkCore.DbContext' while attempting to activate 'ContosoUniversity.Service.Class.StudentService'. Here is my project structure: -- solution 'ContosoUniversity'----- ContosoUniversity----- ContosoUniversity.Model----- ContosoUniversity.Service IEntityService (related code) : public interface IEntityService<T> : IService where T : BaseEntity{ Task<List<T>> GetAllAsync(); } IEntityService (related code) : public abstract class EntityService<T> : IEntityService<T> where T : BaseEntity{ protected DbContext _context; protected DbSet<T> _dbset; public EntityService(DbContext context) { _context = context; _dbset = _context.Set<T>(); } public async virtual Task<List<T>> GetAllAsync() { return await _dbset.ToListAsync<T>(); }} Entity : public abstract class BaseEntity { }public abstract class Entity<T> : BaseEntity, IEntity<T> { public virtual T Id { get; set; }} IStudentService : public interface IStudentService : IEntityService<Student>{ Task<Student> GetById(int Id);} StudentService : public class StudentService : EntityService<Student>, IStudentService{ DbContext _context; public StudentService(DbContext context) : base(context) { _context = context; _dbset = _context.Set<Student>(); } public async Task<Student> GetById(int Id) { return await _dbset.FirstOrDefaultAsync(x => x.Id == Id); }} SchoolContext : public class SchoolContext : DbContext{ public SchoolContext(DbContextOptions<SchoolContext> options) : base(options) { } public DbSet<Course> Courses { get; set; } public DbSet<Enrollment> Enrollments { get; set; } public DbSet<Student> Students { get; set; }} And finally here is my Startup.cs class : public class Startup{ public Startup(IConfiguration configuration, IHostingEnvironment env, IServiceProvider serviceProvider) { Configuration = configuration; var builder = new ConfigurationBuilder() 
.SetBasePath(env.ContentRootPath) .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true) .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true); Configuration = builder.Build(); } public IConfiguration Configuration { get; } // This method gets called by the runtime. Use this method to add services to the container. public void ConfigureServices(IServiceCollection services) { services.AddDbContext<SchoolContext>(option => option.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))); services.AddScoped<IStudentService, StudentService>(); services.AddMvc(); } // This method gets called by the runtime. Use this method to configure the HTTP request pipeline. public void Configure(IApplicationBuilder app, IHostingEnvironment env) { if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); app.UseBrowserLink(); } else { app.UseExceptionHandler("/Home/Error"); } app.UseStaticFiles(); app.UseMvc(routes => { routes.MapRoute( name: "default", template: "{controller=Home}/{action=Index}/{id?}"); }); }} What should I do to resolve this problem?
Now provide the response and nothing else.
| StudentService expects DbContext but the container does not know how to resolve it based on your current startup. You would need to either explicitly add the context to the service collection Startup services.AddScoped<DbContext, SchoolContext>();services.AddScoped<IStudentService, StudentService>(); Or update the StudentService constructor to explicitly expect a type the container knows how to resolve. StudentService public StudentService(SchoolContext context) : base(context){ //...} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/46320189', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7389871/']} | jdg_85409 |
stackexchange | llm_judgeable_groundtruth_similarity | 44928 |
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In image processing, and especially edge detection, when we apply a Sobel convolution matrix to a given image we say that we get the first derivative of the input image, and when applying the Laplacian matrix to the initial image we say that we get the second derivative. Given that in both cases we applied the same operation (img * matrix), why does the first operation give us the first derivative and the second one the second derivative, and not the fifth? Is it the values of the mask (convolution matrix) that decide which derivative we get? Or am I already mistaken in my description?
Now provide the response and nothing else.
Look at the numbers in the filter kernel in just 1 dimension for simplicity. For a Sobel or Prewitt matrix you have something that roughly looks like [-1, 0, 1]. Convolving this with your image basically computes the difference between the pixel values of the neighboring pixels: you apply 0 to the current pixel, 1 to the pixel on the right and -1 to the pixel on the left. This gives a first-order difference: next pixel - previous pixel, i.e. a first derivative. But now look at a Laplacian operator. It looks something like [1, -2, 1]. This computes the difference of differences. To see how, note that [1, -2, 1] corresponds to

next - 2 × current + previous
= next - current - current + previous
= (next - current) - (current - previous)

Now notice how this is a difference of differences: (next - current) is like a 1st derivative, (current - previous) is like a 1st derivative, and their difference is like a 2nd derivative. | {} | {'log_upvote_score': 5, 'links': ['https://dsp.stackexchange.com/questions/44928', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/31778/']} | jdg_85410
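The 1-D argument above can be checked numerically with a tiny cross-correlation helper (a pure-Python sketch; the kernel is applied as written, without the flip that a true convolution would do, and a parabola is used because its second derivative is constant):

```python
def correlate(signal, kernel):
    """Apply the kernel without flipping (cross-correlation), 'valid' mode."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

signal = [x * x for x in range(7)]       # a parabola: f'(x) = 2x, f''(x) = 2

first  = correlate(signal, [-1, 0, 1])   # next - previous (central difference)
second = correlate(signal, [1, -2, 1])   # difference of differences

print(first)   # [4, 8, 12, 16, 20]  = 2 * f'(x) at x = 1..5
print(second)  # [2, 2, 2, 2, 2]     = f''(x), constant as expected
```

The [-1, 0, 1] kernel gives twice the first derivative (it spans two sample steps), while [1, -2, 1] recovers the second derivative exactly on this polynomial.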
stackexchange | llm_judgeable_groundtruth_similarity | 160172 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I run these commands:

tar -cf myArchive.tar myDirectory/
gzip myArchive.tar

Then I copy the file over a lot of unreliable media, and later I unpack it using:

tar -xzf myArchive.tar.gz

Given that I compressed the tarball, will that in any way guarantee the integrity, or at least a CRC, of the unpacked content?
Now provide the response and nothing else.
| tar itself does not write down a checksum for later comparsion. If you gzip the tar archive you can have that functionality. tar uses compress . If you use the -Z flag while creating the archive tar will use the compress program when reading or writing the archive. From the gzip manpage: The standard compress format was not designed to allow consistency checks. But, you can use the -z parameter. Then tar reads and writes the archive through gzip . And gzip writes a crc checksum. To display that checksum use that command: $ gzip -lv archive.tar.gzmethod crc date time compressed uncompressed ratio uncompressed_namedefla 3f641c33 Sep 25 14:01 24270 122880 80.3% archive.tar From the gzip manpage: When using the first two formats ( gzip or zip is meant ), gunzip checks a 32 bit CRC. | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/160172', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/39392/']} | jdg_85411 |
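The CRC-32 in the gzip trailer can be seen doing its job with Python's gzip module, which implements the same format the answer describes. This sketch writes a gzip file, flips one bit in the middle of the compressed stream, and shows that decompression then fails (either the deflate stream becomes invalid or the stored CRC no longer matches):

```python
import gzip
import os
import tempfile

payload = b"important archive contents " * 100

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "data.gz")
    with gzip.open(path, "wb") as f:
        f.write(payload)

    # Intact file round-trips: the CRC-32 in the gzip trailer matches.
    with gzip.open(path, "rb") as f:
        assert f.read() == payload

    # Flip one bit in the middle of the compressed stream...
    raw = bytearray(open(path, "rb").read())
    raw[len(raw) // 2] ^= 0x01
    with open(path, "wb") as f:
        f.write(bytes(raw))

    # ...and decompression now fails (deflate error or CRC mismatch).
    try:
        with gzip.open(path, "rb") as f:
            f.read()
        result = "corruption NOT detected"
    except Exception:
        result = "corruption detected"

print(result)   # corruption detected
```

Note the check only happens when the file is actually decompressed; `gzip -t` (or reading the whole stream, as here) is what triggers it.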
stackexchange | llm_judgeable_groundtruth_similarity | 36596925 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to ignore a catch and return back to the chain?

promiseA()               // <-- fails with 'missing' reason
  .then(promiseB)        // <-- these are not going to run
  .then(promiseC)
  .catch(function(error, ignore){
    if(error.type == 'missing'){
      ignore()           // <-- ignore the catch and run promiseB and promiseC
    }
  })

Is something like this possible?
Now provide the response and nothing else.
| Here's the synchronous analogy: try { action1(); // throws action2(); // skipped action3(); // skipped} catch (e) { // can't resume} vs try { action1(); // throws} catch (e) { handleError(e);}action2(); // executes normallyaction3(); Here's the promise version: asyncActionA() // <-- fails with 'missing' reason.catch(error => { if(error.type == 'missing'){ return; // Makes sure the promise is resolved, so the chain continues } throw error; // Otherwise, rethrow to keep the Promise rejected}).asyncActionB(promiseB) // <-- runs.asyncActionC(promiseC).catch(err => { // Handle errors which are not of type 'missing'.}); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36596925', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/380562/']} | jdg_85412 |
stackexchange | llm_judgeable_groundtruth_similarity | 3001595 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose f is a continuous function on [a, infinity) and lim f(x) = y [this is the limit as x goes to infinity] and f(a) < y. Prove that 1) for any f(a) < b < y there is x0 ∈ [a, ∞) such that f(x0) = b.2) f attains its minimum on [a, infinity) I have partial proofs - i need help completing/ understanding them For part 1): Choose M>y. Then there is c in [a, infinity) such that f(x)>M for any x>c (however I do not understand why this holds). After this it says to use intermediate value theorem to prove the result which also I don't know how to apply For part 2): There exists M>0 such that f(x) > f(a) for any x>=M (which theorem or property does this follow from?). Then by extreme value theorem, on [a,M], f attains its minimum y0 at some point c in [a,M]. Then y0 is also the minimum of f on [a, infinity) (Why is this true??)
Now provide the response and nothing else.
| You have found $3^{x-2}=\frac{12}{9}=\frac{4}{3}$ . So replace the $\frac{4}{3}$ in your expression: $$\left(\sqrt{\frac43}\right)^{1/(x-2)}=\left(\sqrt{3^{x-2}}\right)^{1/(x-2)}=\cdots$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3001595', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/592870/']} | jdg_85413 |
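Following the hint numerically is a quick sanity check (not part of the proof): solve $3^{x-2} = 4/3$ for $x$ and evaluate the original expression, which should come out to $\sqrt{3}$.

```python
import math

x = 2 + math.log(4 / 3, 3)            # chosen so that 3**(x - 2) == 4/3
value = math.sqrt(4 / 3) ** (1 / (x - 2))

print(round(value, 10))               # 1.7320508076 == sqrt(3)
```

Algebraically this is $\left(3^{x-2}\right)^{1/(2(x-2))} = 3^{1/2}$, so the answer is independent of the particular $x$.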
stackexchange | llm_judgeable_groundtruth_similarity | 173470 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Prove that the simple group of order $660$ is isomorphic to a subgroup of the alternating group of degree $12$. I have managed to show that it must be isomorphic to a subgroup of $S_{12}$ (through a group action on the set of Sylow $11$-subgroups). Any suggestions are appreciated!
Now provide the response and nothing else.
| This is a comment made into an answer. Using the Sylow theorems, if $G$ is a simple group of order $660=60\times 11$, the number of Sylow-$11$ subgroups $n_{11}$ has $n_{11}\equiv 1~\mathrm{mod}~11$, $n_{11}|60$ and $n_{11}\neq1$ by simplicity, so $n_{11}=12$. Again, by Sylow's theorem, $G$ acts transitively on the $12$-element-set of its $11$-Sylows by conjugation, and so we get a non trivial morphism $G\rightarrow S_{12}$. By simplicity of $G$, this is an embedding. Now apply the signature morphism $\epsilon:S_{12}\rightarrow\lbrace-1,+1\rbrace$:$$G\hookrightarrow S_{12}\rightarrow \lbrace-1,+1\rbrace$$The kernel is a normal subgroup of $G$. Since $G$ is simple, it is either $G$ or $1$, and by cardinality reasons it has to be $G$. Thus the embedding $G\hookrightarrow S_{12}$ maps $G$ into $\mathrm{Ker}(\epsilon)=A_{12}$, and $G$ is isomorphic to a subgroup of $A_{12}$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/173470', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/36222/']} | jdg_85414 |
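The first Sylow step in the answer is a finite check that can be done mechanically: among the divisors of 60, only 1 and 12 are congruent to 1 mod 11, so simplicity (which rules out 1) forces $n_{11} = 12$. A sketch of that arithmetic only, not of the group theory:

```python
# divisors of 60 that satisfy the Sylow congruence n_11 ≡ 1 (mod 11)
candidates = [d for d in range(1, 61) if 60 % d == 0 and d % 11 == 1]
print(candidates)   # [1, 12] -- simplicity rules out 1, leaving n_11 = 12
```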
stackexchange | llm_judgeable_groundtruth_similarity | 499792 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Linux, how do /etc/hosts and DNS work together to resolve hostnames to IP addresses? if a hostname can be resolved in /etc/hosts , does DNS apply after /etc/hosts to resolve the hostname or treat the resolved IP address by /etc/hosts as a "hostname" to resolve recursively? In my browser (firefox and google chrome), when I add to /etc/hosts : 127.0.0.1 google.com www.google.com typing www.google.com into the address bar of the browsers andhitting entering won't connect to the website. After I remove thatline from /etc/hosts , I can connect to the website. Does it meanthat /etc/hosts overrides DNS for resolving hostnames? After I re-add the line to /etc/hosts , I can still connect to thewebsite, even after refreshing the webpage. Why doesn't /etc/hosts apply again, so that I can't connect to the website? Thanks.
Now provide the response and nothing else.
| This is dictated by the NSS (Name Service Switch) configuration i.e. /etc/nsswitch.conf file's hosts directive. For example, on my system: hosts: files mdns4_minimal [NOTFOUND=return] dns Here, files refers to the /etc/hosts file, and dns refers to the DNS system. And as you can imagine whichever comes first wins . Also, see man 5 nsswitch.conf to get more idea on this. As an aside, to follow the NSS host resolution orderings, use getent with hosts as database e.g.: getent hosts example.com | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/499792', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/674/']} | jdg_85415 |
stackexchange | llm_judgeable_groundtruth_similarity | 342713 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My thesis title is "Powerbank with sound to electrical energy converter", meaning I'll be using sound to charge the powerbank, and I decided to use a dynamic microphone instead of a condenser microphone, since a dynamic microphone doesn't need any external voltage in order to work, unlike a condenser microphone. I tried to measure the voltage output of the dynamic microphone on an oscilloscope; every time I make some noise the reading gives 10-30 mV AC. What amplifier should I use in order to amplify the induced voltage coming from the dynamic microphone, which is 10-30 mV, into 5 V, enough to charge a battery?
Now provide the response and nothing else.
| Understanding power will be key to this project. In electrical circuits power, voltage and current are related by the equation \$ P = VI \$ or, if we know the resistance, \$ P = \frac {V^2}{R} \$ or \$ P = I^2R \$. You have a voltage reading but you have taken this with no load resistor so you don't know what current will be supplied when you add a load. Your next step is to find out what the microphone impedance (Z) is. It is probably 600 Ω (lo-Z) or about 10 kΩ (hi-Z). Let's assume that it is 600 Ω. The Maximum Power Transfer theorem tells us that we can draw maximum power from the source when the load impedance = source impedance. The voltage will fall to half at this point. We can now work out the power using 15 mV as the voltage obtainable: \$ P = \frac {V^2}{R} = \frac {0.015^2}{600} = 0.000,000,375\ \mathrm W \$. Now let's look at a battery. I have on my desk a 1.2 V, 800 mAh, NiMH rechargeable cell. We can work out the energy storage in this as \$ 1.2 \times 0.8 = 0.96 \ \mathrm{Wh} \$ (watt-hours). Now we can calculate how long you will have to sing for to charge the battery: \$ t = \frac {capacity}{charge\ rate} = \frac {0.96}{0.000000375} = 2,560,000 \ \mathrm{hours} \$. This is about 292 years. Do you think you could sing a little louder to speed it up? From your deleted comment: But if I'm going to amplify it using an op-amp or voltage amplifier, will it be possible? Figure 1. Amplifying the microphone signal requires power from somewhere else. No. Amplifiers use a small signal to control power from somewhere else. In Figure 1 the power comes from the battery. The amplifier would waste about half of the power so you would be more efficient just using the battery. Figure 2. A microphone transformer. Just in case you are thinking of it, a transformer can step up voltage or current but not both. If you step up the voltage the available current decreases by the same ratio. 
For a 100% efficient transformer \$ P_{OUT} = P_{IN} \$. Since \$ P = VI \$ we can rewrite this as $$ V_{OUT}I_{OUT} = V_{IN}I_{IN} $$ You can't get more power out than you put in. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/342713', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/170500/']} | jdg_85416 |
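The arithmetic above is easy to reproduce (my addition, just a restatement of the answer's numbers), using the same assumed 15 mV into a matched 600 Ω load and a 1.2 V / 800 mAh cell:

```python
V = 0.015                  # volts available into the matched load
R = 600.0                  # ohms, the assumed microphone impedance
P = V ** 2 / R             # watts delivered to the load
print(f"{P:.3g} W")        # → 3.75e-07 W

capacity_wh = 1.2 * 0.8    # 1.2 V * 800 mAh = 0.96 Wh
hours = capacity_wh / P
years = hours / (24 * 365)
print(round(hours), "hours, about", round(years), "years")
```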
stackexchange | llm_judgeable_groundtruth_similarity | 1893280 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How to show $\frac{c}{n} \leq \log(1+\frac{c}{n-c})$ for any positive constant $c$ such that $0 < c < n$? I'm considering the Taylor expansion, but it does not work...
Now provide the response and nothing else.
| Hint : For all $n-c \le x \le n$, we have $\dfrac{1}{n} \le \dfrac{1}{x}$. Hence, $$\displaystyle\int_{n-c}^{n}\dfrac{1}{n}\,dx \le \int_{n-c}^{n}\dfrac{1}{x}\,dx.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1893280', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/180574/']} | jdg_85417 |
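To build intuition (an addition, not from the original answer), the resulting inequality $\frac{c}{n} \leq \log(1+\frac{c}{n-c})$ can be spot-checked numerically for a few values of $0 < c < n$:

```python
import math

for n, c in [(10, 3), (100, 1), (5, 4.9), (2, 1)]:
    lhs = c / n
    rhs = math.log(1 + c / (n - c))
    assert lhs <= rhs, (n, c)
    print(f"n={n}, c={c}: {lhs:.4f} <= {rhs:.4f}")
```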
stackexchange | llm_judgeable_groundtruth_similarity | 639318 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The pair of equation I need to solve is $x^2+12x+y^2-4y=24$ $x^2-6x+y^2+8y=25$ I have no idea on how to do these kinds of problems (may be by elimination?)
Now provide the response and nothing else.
| Subtract the second equation from the first (collect like terms), get a linear equation, solve for $y,$ substitute back, solve the quadratic for $x,$ substitute back for $y.$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/639318', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/118283/']} | jdg_85418 |
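Carrying the hint out (an illustrative sketch, not part of the original answer): subtracting gives $18x - 12y = -1$, so $y = (18x+1)/12$; substituting into the second equation leaves a quadratic in $x$, which can be checked numerically:

```python
import math

# eq1: x^2 + 12x + y^2 - 4y = 24
# eq2: x^2 -  6x + y^2 + 8y = 25
# eq1 - eq2 gives 18x - 12y = -1, i.e. y = (18x + 1) / 12.

# Substitute y into eq2 and collect into a*x^2 + b*x + c = 0:
k = 18 / 12                                  # slope of the linear relation
a = 1 + k ** 2
b = -6 + 2 * k * (1 / 12) + 8 * k
c = (1 / 12) ** 2 + 8 * (1 / 12) - 25

for sign in (1, -1):
    x = (-b + sign * math.sqrt(b * b - 4 * a * c)) / (2 * a)
    y = (18 * x + 1) / 12
    r1 = x * x + 12 * x + y * y - 4 * y      # should equal 24
    r2 = x * x - 6 * x + y * y + 8 * y       # should equal 25
    print(f"x={x:.4f}, y={y:.4f}, eq1={r1:.4f}, eq2={r2:.4f}")
```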
stackexchange | llm_judgeable_groundtruth_similarity | 52970400 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to connect to Dynamics CRM 365 from a .NET Core application via the Dynamics SDK? Or should I use the Web API? I've read it could be possible, but when I reference the SDK from my .NET Core Class Library and try to connect I get the error: System.IO.FileNotFoundException: 'Could not load file or assembly 'System.Web.Services.. It seems like this DLL is not supported in .NET Core: How to use soap web services in Asp.net Core? My code is like this: new Microsoft.Xrm.Tooling.Connector.CrmServiceClient(connectionString); I could successfully make it work from a .NET Framework project.
Now provide the response and nothing else.
| There is a distinction to be made between applications that use the .NET Core runtime vs. the .NET Core framework. As you've found out, the Dynamics 365 SDK does not currently work with the .NET Core runtime; however, it does work with the .NET Core framework when the .NET Core project targets the .NET Framework runtime using the .NET Framework's target framework moniker (TFM) setting in the project file. For example with a .NET Core console application, the .csproj file would look like this (notice the TargetFramework): <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>net472</TargetFramework> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.CrmSdk.CoreAssemblies" Version="9.0.2.5" /> </ItemGroup></Project> Such applications will not be cross-platform and will only be executable on Windows, but this still allows for the use of other .NET Standard class libraries and frameworks such as ASP.NET Core that implement .NET Standard while executing on the .NET Framework runtime. Eventually, if the Dynamics 365 SDK is ever updated to work on the .NET Core runtime, the project file's target framework moniker value can be changed to .NET Core and become cross-platform. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52970400', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2100069/']} | jdg_85419 |
stackexchange | llm_judgeable_groundtruth_similarity | 31665927 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to set the height of a view that is on top of my prototype cell in a table view controller. I use IB to set its height (size inspector) and set it to 61 like so (the green view is the 'header' view): But whenever I run the app, its height ends up being 568.0. I have an IBOutlet called testUIView for the view in my table view controller, and I do: println("testUIView Height->\(testUIView.frame.height)") and it indeed ends up being 568.0 at runtime. Here is a screenshot showing its height at runtime:
Now provide the response and nothing else.
| Here is a solution which uses section header views rather than the actual table header view: If you'd like to use a header for you UITableView instead you can design another prototype cell in Interface Builder, make a custom class based on a UITableViewCell and assign it to the prototype cell in interface builder on the class inspector. Then in your controller you're going to use func tableView(tableView: UITableView, viewForHeaderInSection section: Int) -> UIView? In that function you're actually going to create a reusable cell from your table view but cast as the custom cell you made for the header. You will have access to all of it's properties like a regular UITableViewCell, then you're just going to return the cell's view return cell.contentView Another method you're going to use is func tableView(tableView: UITableView, heightForHeaderInSection section: Int) -> CGFloat { return 61.0} That one is pretty self explanatory. Swift 3.0.1 public override func tableView(_ tableView: UITableView, heightForHeaderInSection section: Int) -> CGFloat { return 61.0} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/31665927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3245995/']} | jdg_85420 |
stackexchange | llm_judgeable_groundtruth_similarity | 691293 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was reading through some blogs/articles and watching YouTube videos that explained to non-physicists such as myself - how time warping or a gradient in time flow around any object can create gravity. I am able to understand the mechanics (minus the math, I'm not a physicist) of gravity according to this theory, but a question bugs me: What creates the time gradient in the first place? Why would the infinitesimally small clocks (or connected particles that move through time at different rates) have the different tick rates in the first place? (Kindly be gentle on the math - I am not a mathematician either!) A follow up question: if an object in space-time is massive enough, such as a black hole, can it stop moving through time... in a way bending time over itself and never letting it go, like light?
Now provide the response and nothing else.
| There is a subtle but important distinction you might want to consider which may help you form a better conceptual picture of what happens. When we talk of time dilation, it is a geometric effect- it means that the length of paths between two points in time can vary. To explain what I mean by analogy, imagine that you and I are standing at a corner of a large square. You walk diagonally across to the opposite corner while I walk around the edge of the square to meet you there. The distances we each walk are quite different because we have followed different paths. If we were carrying pedometers, they would show we had walked different distances, and you would consider that entirely natural- you would not feel you had to make up an explanation about your pedometer being 'pace dilated' relative to mine. Now, carry that idea over into special relativity and consider time dilation in that context. If you move between two clocks in my frame of reference, and during your journey 4 second pass on your watch, while my clocks show a time difference of 5 seconds between the start and end of your journey, it is not because something has caused your watch to run slow, but that the path through time you have taken between the two points was only 4 seconds long, and your watch has correctly recorded it as such, running at its usual rate to tick off the seconds faithfully. In general relativity, the mathematics are much more complicated, but the conceptual idea still applies. If you follow one curved path through time its length (in seconds) can be more or less than another curved path through time, so clocks taking the different paths will show different elapsed times. It is not because they are somehow running slow or fast, in the sense of not faithfully recording a true time, but because they are accurately showing real differences in the lengths of their paths through time. 
| {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/691293', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/285043/']} | jdg_85421 |
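The 4-second versus 5-second example corresponds to travelling at $0.6c$ in special relativity, where the path length through time is $\tau = t\sqrt{1-v^2/c^2}$. A quick check of that arithmetic (my addition, not the answerer's):

```python
import math

t = 5.0        # seconds between the two events in the stay-at-home frame
beta = 0.6     # v / c that reproduces the example
tau = t * math.sqrt(1 - beta ** 2)   # proper time along the moving path
print(f"{tau:.1f} s")  # → 4.0 s
```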
stackexchange | llm_judgeable_groundtruth_similarity | 1918958 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $\mathfrak {g}$ be a finite dimensional complex Lie algebra. Recall that an element $g \in \mathfrak {g}$ is called a nilpotent element if ad$g: \mathfrak {g} \to \mathfrak {g}$ is a nilpotent endomorphism.
Give an example of a linear Lie algebra with an element $g$ such that $g$ is nilpotent as a matrix but ad$(g)$ is not nilpotent. I am not sure how we can construct this. Please help.
| Such an example does not exist, because of the following Lemma (from Humphreys book) Lemma : Let $V$ be a vector space and $x\in \mathfrak{gl}(V)$ be nilpotent, i.e., $x^n=0$ for some $n$. Then $ad(x)\in \mathfrak{gl}(\mathfrak{gl}(V))$ is nilpotent too. The proof is easy, one considers the linear maps $L\colon \mathfrak{gl}(V)\rightarrow \mathfrak{gl}(V)$ given by $y\mapsto xy$ and $R\colon \mathfrak{gl}(V)\rightarrow \mathfrak{gl}(V)$ given by $y\mapsto yx$, and notes that $L$ and $R$ commute because of $(LR)(y)=xyx=(RL)(y)$. Since $x^n=0$ we have $L^n=R^n=0$, and thus$$ad(x)^{2n}=(L-R)^{2n}=\sum_{k=0}^{2n}\binom{2n}{k}L^{2n-k}(-R)^k=0.$$ Remark: The converse statement is not true in general. There are ad-nilpotent elements $x$, i.e., with $ad(x)$ nilpotent, where $x\in \mathfrak{gl}(V)$ is not nilpotent. Take the Lie algebra $\mathfrak{d}_n$ of diagonal matrices. This Lie algebra is abelian, so that $ad(d)=0$ is nilpotent for all $d$, but not all diagonal matrices are nilpotent. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1918958', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/366404/']} | jdg_85422 |
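The lemma can be spot-checked with a concrete nilpotent matrix (an illustration added here, not from the original answer): take $x = e_{12} \in \mathfrak{gl}_2$, so $x^2 = 0$ and $n = 2$, and verify that $ad(x)^{2n} = ad(x)^4$ kills every basis matrix:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def ad(x, y):
    """ad(x)(y) = xy - yx."""
    n = len(x)
    xy, yx = matmul(x, y), matmul(y, x)
    return [[xy[i][j] - yx[i][j] for j in range(n)] for i in range(n)]

x = [[0, 1], [0, 0]]   # e_12: nilpotent with x^2 = 0, so n = 2 and 2n = 4

# Standard basis e_rc of gl(2)
basis = [[[1 if (i, j) == (r, c) else 0 for j in range(2)] for i in range(2)]
         for r in range(2) for c in range(2)]

for y in basis:
    z = y
    for _ in range(4):          # apply ad(x) 2n = 4 times
        z = ad(x, z)
    assert all(v == 0 for row in z for v in row)
print("ad(x)^4 vanishes on all of gl(2)")
```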
stackexchange | llm_judgeable_groundtruth_similarity | 42513401 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Community I'm having troubles connecting Application Insights to my ASP WEB API Core. Following standard manuals I still cannot find any record in my AppInsights account. I used a lot of manuals, but the are pretty identical and describe how to configure App Insights for ASP Core (not API Core). So I'm wondering is some special configuration (or nuget package or whatever) is required to make AppInsights track requests to my API service? Once I cannot make AppInsights work out of box, I still can create instance of TelemetryClient and publish data manually, but this is not desirable in my case. Important note: I'm using VS 2017 RC, Web APi Core project (csproj) UPD csproj file content: <Project ToolsVersion="15.0" Sdk="Microsoft.NET.Sdk.Web"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>netcoreapp1.0</TargetFramework> <PreserveCompilationContext>true</PreserveCompilationContext> </PropertyGroup> <ItemGroup> <ItemGroup> <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="1.0.0-msbuild1-update1" /> <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.0-msbuild1-final" /> <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="1.0.0-msbuild2-update1" /> <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="1.0.0-msbuild1-update1" /> <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.0-msbuild1-final" /> <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="1.0.0-msbuild2-update1" /> <PackageReference Include="Microsoft.AspNetCore.Authentication.JwtBearer" Version="1.0.1" /> <PackageReference Include="Microsoft.Extensions.Configuration.UserSecrets" Version="1.0.1" /> <PackageReference Include="Microsoft.ApplicationInsights" Version="2.2.0" /> <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="1.0.1" /> <PackageReference 
Include="Microsoft.AspNetCore.Cors" Version="1.0.1" /> <PackageReference Include="Microsoft.AspNetCore.ResponseCompression" Version="1.0.0" /> <PackageReference Include="Microsoft.Extensions.Configuration.UserSecrets" Version="1.0.1" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.0-msbuild1-final" /> <PackageReference Include="Microsoft.AspNetCore.Mvc.Versioning" Version="1.0.3" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="1.0.1" /> <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="1.0.0" /> <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer.Design" Version="1.0.0" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="1.0.0-msbuild2-final" /> <PackageReference Include="Microsoft.NETCore.App" Version="1.0.1" /> <PackageReference Include="Microsoft.AspNetCore.Diagnostics" Version="1.0.1" /> <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.0.1" /> <PackageReference Include="Microsoft.AspNetCore.Routing" Version="1.0.1" /> <PackageReference Include="Microsoft.AspNetCore.Server.IISIntegration" Version="1.0.0" /> <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel" Version="1.0.1" /> <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel.Https" Version="1.0.1" /> <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="1.0.1" /> <PackageReference Include="Microsoft.Extensions.Configuration.FileExtensions" Version="1.0.1" /> <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="1.0.1" /> <PackageReference Include="Microsoft.Extensions.Logging" Version="1.0.0" /> <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="1.0.0" /> <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.0.0" /> <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="1.0.1" /> <PackageReference 
Include="Newtonsoft.Json" Version="9.0.1" /> <PackageReference Include="Swashbuckle.SwaggerGen" Version="6.0.0-beta901" /> <PackageReference Include="Swashbuckle.SwaggerUi" Version="6.0.0-beta901" /> </ItemGroup></Project> Configuration in Startup.cs: public class Startup { public Startup(IHostingEnvironment env) { var builder = new ConfigurationBuilder() .SetBasePath(env.ContentRootPath) .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true) .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true); if (env.IsDevelopment()) { // This will push telemetry data through Application Insights pipeline faster, allowing you to view results immediately. builder.AddApplicationInsightsSettings(true); } builder.AddEnvironmentVariables(); Configuration = builder.Build(); } public IConfigurationRoot Configuration { get; } // This method gets called by the runtime. Use this method to add services to the container. public void ConfigureServices(IServiceCollection services) { services.AddMvc(); services.AddApplicationInsightsTelemetry(Configuration); } // This method gets called by the runtime. Use this method to configure the HTTP request pipeline. public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) { if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); } loggerFactory.AddConsole(Configuration.GetSection("Logging")); loggerFactory.AddDebug(LogLevel.Trace); loggerFactory.AddConsole(LogLevel.Error); app.UseApplicationInsightsExceptionTelemetry(); app.UseMvc(); }
Now provide the response and nothing else.
| If you're using the "new" asp.net core in VS2017, then the old instructions are wrong, as they are for the previous xproj based asp.net core implementations. If you create a new asp.net core web project in VS2017, ApplicationInsights will be already installed from the start, and should be versions: <PackageReference Include="Microsoft.ApplicationInsights" Version="2.2.0" /><PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.0.0" /> (or newer, if the asp.net core team has updated them at all) Those projects will already have Application Insights wired up as well, not in Startup.cs (that's the old way), but in Program.cs: new WebHostBuilder() ... .UseApplicationInsights() // this starts up appinsights in asp.net core now ... .UseOtherThings(); and possibly in web templates, like: @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet at the top, and @Html.Raw(JavaScriptSnippet.FullScript) inside the bottom of the <head> tag. if you're migrating from a previous version of asp.net core and app insights, you'll also have to remove things like: @Html.ApplicationInsightsJavaScript(TelemetryConfiguration) from _Layout.cshtml and replace them with the lines above, and you can remove all of the lines like: app.UseApplicationInsightsExceptionTelemetry(); in Startup.cs (if you're using 2.x versions of the packages, i believe those items will all show deprecation warnings as well, as they're no longer needed) VS2017's official release notes include this information as a section in the "known issues" for application insights | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42513401', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1229440/']} | jdg_85423 |
stackexchange | llm_judgeable_groundtruth_similarity | 56462627 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having an issue setting up a unit test to determine that a function is called with the correct arguments. useAHook returns function foo which calls function bar . The code looks like this //myModule.jsexport const useAHook = (arg1, arg2) => { const foo = useCallback(() => { bar(arg1, arg2); }, [arg1, arg2]); return foo;}export const bar = (a, b) => { //does some stuff with a and b} I am trying to unit test this code using renderHook and jest.spyOn . I want to confirm that calling function foo results in bar being called with the correct arguments. My unit test looks like this //myModule.spec.jsimport * as myModule from './myModule.js'it('should call foo with correct arguments', () => { const spy = jest.spyOn(myModule, 'bar'); const { result } = renderHook(() => myModule.useAHook('blah', 1234)); const useAHookFunc = result.current; useAHookFunc(); // fails, spy is not called expect(spy).toBeCalledWith('blah', 1234);}); The result is that the test fails saying that spy is never called. Am I doing something wrong here or using either tool incorrectly?
Now provide the response and nothing else.
| This line: import * as myModule from './myModule.js' ...imports the module bindings for myModule.js into myModule . Then this line: const spy = jest.spyOn(myModule, 'bar'); ...wraps the module export for bar in a spy... ...but the spy never gets called because useAHook doesn't call the module export for bar , it just calls bar directly. If you modify useAHook to call the module export for bar then the spy will get called. There are a couple of ways to do that. You can move bar into its own module... ...or you can import the module bindings for myModule.js so you can call the module export for bar : import { useCallback } from 'react';import * as myModule from './myModule'; // <= import the module bindingsexport const useAHook = (arg1, arg2) => { const foo = useCallback(() => { myModule.bar(arg1, arg2); // <= call the module export for bar }, [arg1, arg2]); return foo;}export const bar = (a, b) => { //does some stuff with a and b} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56462627', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2479976/']} | jdg_85424 |
stackexchange | llm_judgeable_groundtruth_similarity | 415004 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a python program that I run via the command line (Mac OSX) as: python -W ignore Experiment.py --iterations 10 The file Experiment.py should be run multiple times using different --iterations values. I do that manually one after another, so when one run is finished, I run the second one with a different --iterations , and so on. However, I cannot always sit near my laptop to run all of them, so I am wondering if there is a way, using a shell script, where I can state all runs together and then the shell script executes them one after another (not parallel, just sequentially, as I would have done it myself)? Something like: python -W ignore Experiment.py --iterations 10python -W ignore Experiment.py --iterations 100python -W ignore Experiment.py --iterations 1000python -W ignore Experiment.py --iterations 10000python -W ignore Experiment.py --iterations 100000 Edit: What if I have multiple arguments --X --Y --Z ?
Now provide the response and nothing else.
| You can use a for loop: for iteration in 10 100 1000 10000 100000; do python -W ignore Experiment.py --iteration "${iteration}"done If you have multiple parameters, and you want all the various permutations of all parameters, as @Fox noted in a comment below, you can use nested loops. Suppose, for example, you had a --name parameter whose values could be n1 , n2 , and n3 , then you could do: for iteration in 10 100 1000 10000 100000; do for name in n1 n2 n3; do python -W ignore Experiment.py --iteration "${iteration}" --name "${name}" donedone You could put that in a file, for example runExperiment.sh and include this as the first line: #!/bin/bash . You could then run the script using either: bash runExperiment.sh Or, you could make the script executable, then run it: chmod +x runExperiment.sh./runExperiment.sh If you're interested in some results before others, that'll guide how you structure the loops. In my example above, the script will run: ... --iteration 10 --name n1... --iteration 10 --name n2... --iteration 10 --name n3... --iteration 100 --name n1 So it runs all experiments for iteration 10 before moving on to the next iteration. If instead you wanted all experiments for name n1 before moving on to the next, you could make the name loop the "outer" loop. for name in ...; do for iteration in ...; do | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/415004', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/269028/']} | jdg_85425 |
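As an aside (not from the original answer), "all permutations of all parameters" is a Cartesian product, so the full list of command lines can also be generated programmatically, e.g. in Python:

```python
import itertools

iterations = [10, 100, 1000, 10000, 100000]
names = ["n1", "n2", "n3"]

commands = [
    f"python -W ignore Experiment.py --iteration {i} --name {n}"
    for i, n in itertools.product(iterations, names)
]
print(len(commands))  # → 15, one command per combination
print(commands[0])    # → python -W ignore Experiment.py --iteration 10 --name n1
```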
stackexchange | llm_judgeable_groundtruth_similarity | 11797904 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a git repository with master and alt branches. The alt branch contains a modified version of the master code, and I am trying to merge changes from master to alt like this: git merge --squash master The merge results in a conflict: Auto-merging myproject/foo/barCONFLICT (content): Merge conflict in myproject/foo/barSquash commit -- not updating HEADAutomatic merge failed; fix conflicts and then commit the result. After I resolve the conflicts and commit the changes everything seems fine, but when I run git merge --squash master again (without making any changes on any branch) I get the same conflict error. Why is that? What did I miss?
Now provide the response and nothing else.
| By squash ing the merge, you've created a commit which has the effect of, but is not really, a merge. That is, the working tree has the modifications you'd expect, but the metadata doesn't: crucially, the commit doesn't have two parents (one on master and one on alt ) and therefore subsequent merges can't figure out the last common ancestor. Useful uses of squash merging a completely finished feature branch onto master . I'll accumulate any useful information into the squashed commit, but specifically don't want this feature's incremental development history polluting the master commit timeline. merging several independent features (or contributions from different developers) onto the same integration branch, again without preserving their incremental histories. I could rebase them all together, and then use rebase -i to squash their commits, but this is easier Useless uses of squash Any merge where you want to keep the history and ancestry metadata intact , such as any time you want repeated recursive merges to work correctly , specifically what the OP is trying to do . squash just isn't really a good default, which is why it isn't the default. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11797904', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/347181/']} | jdg_85426 |
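The "missing second parent" point can be made concrete with a toy ancestry model (purely illustrative and added here; this is not git's actual merge-base algorithm): with a real merge, master becomes an ancestor of the result, so later merges can see it was already incorporated; with a squash, it does not.

```python
class Commit:
    def __init__(self, name, parents=()):
        self.name, self.parents = name, tuple(parents)

def ancestors(c):
    """All commits reachable from c, including c itself."""
    seen, stack = set(), [c]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(node.parents)
    return seen

base   = Commit("base")
master = Commit("master", [base])       # the change that causes the conflict
alt    = Commit("alt", [base])

# A real merge commit has BOTH tips as parents:
real_merge = Commit("merge", [alt, master])
assert master in ancestors(real_merge)  # later merges see it was merged

# A squash commit has the same tree but only ONE parent:
squash = Commit("squash", [alt])
assert master not in ancestors(squash)  # so the same conflict comes back
print("squash result does not record 'master' as an ancestor")
```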
stackexchange | llm_judgeable_groundtruth_similarity | 52408610 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to animate a custom dialog box in Dart so that when it pops up it creates some animations. There is a library in Android that has animated dialog boxes (Sweet Alert Dialog); is there any similar library in Flutter? How can we achieve the same functionality in Flutter?
Now provide the response and nothing else.
| To create dialog boxes you can use the Overlay or Dialog classes. If you want to add animations like in the given framework you can use the AnimationController like in the following example. The CurvedAnimation class is used to create the bouncing effect on the animation. Update: In general it is better to show dialogs with the showDialog function, because the closing and gesture are handled by Flutter. I have updated the example, it is now running with showDialog and you are able to close the dialog by tapping on the background. You can copy & paste the following code into a new project and adjust it. It is runnable on it's own. import 'package:flutter/material.dart';void main() => runApp(new MyApp());class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp(title: 'Flutter Demo', theme: ThemeData(), home: Page()); }}class Page extends StatelessWidget { @override Widget build(BuildContext context) { return Scaffold( body: Center( child: RaisedButton.icon( onPressed: () { showDialog( context: context, builder: (_) => FunkyOverlay(), ); }, icon: Icon(Icons.message), label: Text("PopUp!")), ), ); }}class FunkyOverlay extends StatefulWidget { @override State<StatefulWidget> createState() => FunkyOverlayState();}class FunkyOverlayState extends State<FunkyOverlay> with SingleTickerProviderStateMixin { AnimationController controller; Animation<double> scaleAnimation; @override void initState() { super.initState(); controller = AnimationController(vsync: this, duration: Duration(milliseconds: 450)); scaleAnimation = CurvedAnimation(parent: controller, curve: Curves.elasticInOut); controller.addListener(() { setState(() {}); }); controller.forward(); } @override Widget build(BuildContext context) { return Center( child: Material( color: Colors.transparent, child: ScaleTransition( scale: scaleAnimation, child: Container( decoration: ShapeDecoration( color: Colors.white, shape: RoundedRectangleBorder( borderRadius: 
BorderRadius.circular(15.0))), child: Padding( padding: const EdgeInsets.all(50.0), child: Text("Well hello there!"), ), ), ), ), ); }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52408610', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7334184/']} | jdg_85427 |
stackexchange | llm_judgeable_groundtruth_similarity | 57532138 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Azure Pipelines has Expressions and Conditions, but I can find no way to assign one of two values to a variable, based on a condition. Is there any way to accomplish what this pseudo-code would?

${{ if endsWith( variables['Build.SourceBranchName'], '/master' ) }}:
  buildVersion: variables['mavenVersion']
${{ else }}:
  buildVersion: variables['Build.SourceBranchName']
Now provide the response and nothing else.
| As an extension to @Mike Murray's answer, if you are using variable groups you must define additional variables as name/value pairs. Conditional variable assignment in this case would look as follows:

variables:
- group: 'my-variable-group'
- name: myfirstadditionalvariable
  value: 100
- name: myconditionalvariable
  ${{ if eq( variables['Build.SourceBranchName'], 'master' ) }}:
    value: masterBranchValue
  ${{ if ne( variables['Build.SourceBranchName'], 'master' ) }}:
    value: featureBranchValue

 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/57532138', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7530366/']} | jdg_85428
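The answer above mirrors an if with its negation; newer Azure DevOps releases also accept literal `${{ elseif }}` / `${{ else }}` clauses, which match the question's pseudo-code directly. A sketch of that form, assuming your organization is on a version that supports it (variable names are the question's own):

```yaml
variables:
  ${{ if eq(variables['Build.SourceBranchName'], 'master') }}:
    buildVersion: $(mavenVersion)
  ${{ else }}:
    buildVersion: $(Build.SourceBranchName)
```

If `${{ else }}` is rejected by your server version, the paired if/ne form shown in the answer is the portable fallback.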
stackexchange | llm_judgeable_groundtruth_similarity | 81310 |
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having a hard time grasping how exactly overlap-save works, in contrast to overlap-add. The starting steps of both procedures seem pretty similar. Both algorithms trim the input series $x[k]$ into pieces of length $L$ and then do a partial circular convolution with the impulse response $h[n]$. The convolution part would be done using the related DFT formula in practice. I know that for overlap-add these partial convolutions $y_k$ are added in the end to give the result $y[n]$. What happens differently for overlap-save afterwards?
Now provide the response and nothing else.
| Here's a sketch of what comes out of your "circular" convolution if you're careful to avoid the circular part. Note that I'm being general here -- normally the leading zeros wouldn't be there, and the trailing zeros only might be there. The key to using overlap & save is to take the definitely good stuff, trim off everything else, concatenate the sections, and find joy. The key to using overlap & add is to recognize that the increasingly good stuff plus the decreasingly good stuff from the previous segment adds up to definitely good stuff . So you take a slightly larger segment, add the latest increasingly good stuff to the last decreasingly bad stuff, and find joy. The bottom line is that with some more bookkeeping to get your output right, and the need to do the summation, you can do fewer FFTs per output sample. As to which is best -- that's going to be a tactical decision that depends on what filtering you're doing and how expensive it is on your hardware to do the 'add' part of overlap & add, vs. the extra FFTs to do overlap & save. | {} | {'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/81310', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/60357/']} | jdg_85429 |
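To make the discard-versus-add bookkeeping in that answer concrete, here is a pure-Python sketch (my own illustration, not from the answer): a direct circular convolution stands in for the DFT-multiply step, overlap-save keeps only the "definitely good" tail of each block, and overlap-add sums the overlapping tails. Block sizes and function names are arbitrary choices; both methods must agree with a plain linear convolution.

```python
# Illustrative sketch: block convolution with a direct circular convolution
# standing in for the DFT step.

def circular_convolve(x, h, n):
    """n-point circular convolution -- what an n-point DFT multiply computes."""
    x = x + [0] * (n - len(x))
    h = h + [0] * (n - len(h))
    return [sum(x[(k - m) % n] * h[m] for m in range(n)) for k in range(n)]

def direct_convolve(x, h):
    """Plain linear convolution, used as the reference result."""
    y = [0] * (len(x) + len(h) - 1)
    for i, xv in enumerate(x):
        for j, hv in enumerate(h):
            y[i + j] += xv * hv
    return y

def overlap_save(x, h, block=8):
    """Keep only the last block-len(h)+1 samples of each circular block."""
    m = len(h)
    assert block >= m, "block must be at least len(h)"
    step = block - (m - 1)               # new output samples per block
    xp = [0] * (m - 1) + x               # prime the first block with zeros
    y = []
    for start in range(0, len(x) + m - 1, step):
        seg = xp[start:start + block]
        seg = seg + [0] * (block - len(seg))           # zero-pad the last block
        y += circular_convolve(seg, h, block)[m - 1:]  # drop the wrapped samples
    return y[:len(x) + m - 1]

def overlap_add(x, h, step=6):
    """Linearly convolve short segments, then add the overlapping tails."""
    m = len(h)
    y = [0] * (len(x) + m - 1)
    for start in range(0, len(x), step):
        seg = x[start:start + step]
        n = len(seg) + m - 1             # long enough that circular == linear
        for i, v in enumerate(circular_convolve(seg, h, n)):
            y[start + i] += v
    return y

print(overlap_save([1, 2, 3], [1, 1], block=4))  # [1, 3, 5, 3]
```

The per-block work is the same in both schemes; they differ only in whether corrupted samples are discarded (save) or partial tails are summed (add).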
stackexchange | llm_judgeable_groundtruth_similarity | 278719 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some code doing this:

var changes = document.getElementsByName(from);

for (var c=0; c<changes.length; c++) {
    var ch = changes[c];
    var current = new String(ch.innerHTML);
    etc.
}

This works fine in FF and Chrome but not in IE7. Presumably because getElementsByName isn't working in IE. What's the best workaround?
Now provide the response and nothing else.
| In case you don't know why this isn't working in IE, here is the MSDN documentation on that function:

    When you use the getElementsByName method, all elements in the document that have the specified NAME attribute or ID attribute value are returned. Elements that support both the NAME attribute and the ID attribute are included in the collection returned by the getElementsByName method, but elements with a NAME expando are not included in the collection; therefore, this method cannot be used to retrieve custom tags by name.

Firefox allows getElementsByName() to retrieve elements that use a NAME expando, which is why it works. Whether or not that is a Good Thing™ may be up for debate, but that is the reality of it.

So, one option is to use the getAttribute() DOM method to ask for the NAME attribute and then test the value to see if it is what you want, and if so, add it to an array. This would require, however, that you iterate over all of the nodes in the page or at least within a subsection, which wouldn't be the most efficient. You could constrain that list beforehand by using something like getElementsByTagName() perhaps.

Another way to do this, if you are in control of the HTML of the page, is to give all of the elements of interest an Id that varies only by number, e.g.:

<div id="Change0">...</div>
<div id="Change1">...</div>
<div id="Change2">...</div>
<div id="Change3">...</div>

And then have JavaScript like this:

// assumes consecutive numbering, starting at 0
function getElementsByModifiedId(baseIdentifier) {
    var allWantedElements = [];
    var idMod = 0;
    while (document.getElementById(baseIdentifier + idMod)) { // will stop when it can't find any more
        allWantedElements.push(document.getElementById(baseIdentifier + idMod++));
    }
    return allWantedElements;
}

// call it like so:
var changes = getElementsByModifiedId("Change");

That is a hack, of course, but it would do the job you need and not be too inefficient compared to some other hacks.

If you are using a JavaScript framework/toolkit of some kind, your options are much better, but I don't have time to get into those specifics unless you indicate you are using one. Personally, I don't know how people live without one; they save so much time, effort and frustration that you can't afford not to use one. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/278719', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/36273/']} | jdg_85430
stackexchange | llm_judgeable_groundtruth_similarity | 701456 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What are some ways you can shoot yourself in the foot when using boost::shared_ptr ? In other words, what pitfalls do I have to avoid when I use boost::shared_ptr ?
Now provide the response and nothing else.
| Cyclic references: a shared_ptr<> to something that has a shared_ptr<> to the original object. You can use weak_ptr<> to break this cycle, of course. I add the following as an example of what I am talking about in the comments.

class node : public enable_shared_from_this<node> {
public :
    void set_parent(shared_ptr<node> parent) { parent_ = parent; }
    void add_child(shared_ptr<node> child) {
        children_.push_back(child);
        child->set_parent(shared_from_this());
    }
    void frob() {
        do_frob();
        if (parent_) parent_->frob();
    }
private :
    void do_frob();
    shared_ptr<node> parent_;
    vector< shared_ptr<node> > children_;
};

In this example, you have a tree of nodes, each of which holds a pointer to its parent. The frob() member function, for whatever reason, ripples upwards through the tree. (This is not entirely outlandish; some GUI frameworks work this way).

The problem is that, if you lose reference to the topmost node, then the topmost node still holds strong references to its children, and all its children also hold a strong reference to their parents. This means that there are circular references keeping all the instances from cleaning themselves up, while there is no way of actually reaching the tree from the code; this memory leaks.

class node : public enable_shared_from_this<node> {
public :
    void set_parent(shared_ptr<node> parent) { parent_ = parent; }
    void add_child(shared_ptr<node> child) {
        children_.push_back(child);
        child->set_parent(shared_from_this());
    }
    void frob() {
        do_frob();
        shared_ptr<node> parent = parent_.lock(); // Note: parent_.lock()
        if (parent) parent->frob();
    }
private :
    void do_frob();
    weak_ptr<node> parent_; // Note: now a weak_ptr<>
    vector< shared_ptr<node> > children_;
};

Here, the parent pointer has been replaced by a weak pointer. It no longer has a say in the lifetime of the node to which it refers.

Thus, if the topmost node goes out of scope as in the previous example, then while it holds strong references to its children, its children don't hold strong references to their parents. Thus there are no strong references to the object, and it cleans itself up. In turn, this causes the children to lose their one strong reference, which causes them to clean up, and so on. In short, this won't leak. And just by strategically replacing a shared_ptr<> with a weak_ptr<>.

Note: The above applies equally to std::shared_ptr<> and std::weak_ptr<> as it does to boost::shared_ptr<> and boost::weak_ptr<>. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/701456', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/60628/']} | jdg_85431
stackexchange | llm_judgeable_groundtruth_similarity | 4521426 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a large csv file in which some rows are entirely blank. How do I use Python to delete all blank rows from the csv? After all your suggestions, this is what I have so far:

import csv

# open input csv for reading
inputCSV = open(r'C:\input.csv', 'rb')

# create output csv for writing
outputCSV = open(r'C:\OUTPUT.csv', 'wb')

# prepare output csv for appending
appendCSV = open(r'C:\OUTPUT.csv', 'ab')

# create reader object
cr = csv.reader(inputCSV, dialect = 'excel')

# create writer object
cw = csv.writer(outputCSV, dialect = 'excel')

# create writer object for append
ca = csv.writer(appendCSV, dialect = 'excel')

# add pre-defined fields
cw.writerow(['FIELD1_','FIELD2_','FIELD3_','FIELD4_'])

# delete existing field names in input CSV
# ???????????????????????????

# loop through input csv, check for blanks, and write all changes to append csv
for row in cr:
    if row or any(row) or any(field.strip() for field in row):
        ca.writerow(row)

# close files
inputCSV.close()
outputCSV.close()
appendCSV.close()

Is this ok or is there a better way to do this?
Now provide the response and nothing else.
| Use the csv module:

import csv
...
with open(in_fnam, newline='') as in_file:
    with open(out_fnam, 'w', newline='') as out_file:
        writer = csv.writer(out_file)
        for row in csv.reader(in_file):
            if row:
                writer.writerow(row)

If you also need to remove rows where all of the fields are empty, change the if row: line to:

if any(row):

And if you also want to treat fields that consist of only whitespace as empty you can replace it with:

if any(field.strip() for field in row):

Note that in Python 2.x and earlier, the csv module expected binary files, and so you'd need to open your files with the 'b' flag. In 3.x, doing this will result in an error. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4521426', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/532766/']} | jdg_85432
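As an end-to-end check of that pattern, the sketch below wraps it in a hypothetical drop_blank_rows helper (not part of the csv module) and feeds it io.StringIO buffers instead of real files, so it runs self-contained:

```python
# Self-contained demo of the filtering pattern; drop_blank_rows is a made-up
# wrapper name used only for this illustration.
import csv
import io

def drop_blank_rows(in_file, out_file, skip_all_empty_fields=True):
    writer = csv.writer(out_file)
    for row in csv.reader(in_file):
        # `bool(row)` drops only fully empty lines; any(field.strip() ...)
        # also drops rows such as ",," whose fields are all empty or blank.
        keep = any(field.strip() for field in row) if skip_all_empty_fields else bool(row)
        if keep:
            writer.writerow(row)

raw = "a,b,c\r\n\r\n1,2,3\r\n,,\r\n4,5,6\r\n"
out = io.StringIO()
drop_blank_rows(io.StringIO(raw), out)
print(out.getvalue())  # only the header and the two data rows survive
```

With skip_all_empty_fields=False, only truly empty lines are dropped, matching the plain `if row:` form in the answer.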
stackexchange | llm_judgeable_groundtruth_similarity | 1415641 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Here is an exercise (p.129, ex.1.15) from Algebra: Chapter 0 by P. Aluffi.

Prove that $R[x]$ is an integral domain if and only if $R$ is an integral domain.

The "only if" direction poses no problems, because $R$ is a subring of $R[x]$. For the "if" part, however: let $R$ be an integral domain, and consider pairs $f(x)=\sum a_ix^i$, $g(x)= \sum b_ix^i \in R[x]$ such that $f(x)g(x) = 0$. So we have $\sum_{k=0}^{\infty} \sum_{i+j=k} a_ib_jx^{i+j} = 0$. Now, I'm not sure how to deduce that $f(x) = 0 \vee g(x) = 0$. If I look at $f(x),g(x)$ such that, for example, $\deg(f(x))=3$ and $\deg(g(x))=2$, it makes sense. I begin with something like: "if $f(x)g(x)=0$ then $a_0b_0=0$, so $a_0 = 0 \vee b_0 = 0$; if $a_0 = 0$, then $a_0b_1 = a_0b_2 = 0$", and so on. As I understand it, it comes down to proving the following implication: $$\Big(\forall k \in \mathbb{N}\ \ \sum_{i+j=k} a_ib_j = 0\Big)\ (1) \;\Rightarrow\; \Big((\forall n \in \mathbb{N}\ \ a_n = 0) \vee (\forall m \in \mathbb{N}\ \ b_m = 0)\Big)\ (2)$$ We can say that $(1)$ is a system of equations in $R$, and $(2)$ is its solution.
Now provide the response and nothing else.
| Suppose that neither $f$ nor $g$ is the zero polynomial. Then there exist non-negative integers $k$ and $l$ and ring elements $a_0,a_1,\dots, a_k$, with $a_k\ne 0$, and $b_0,b_1,\dots,b_l$, with $b_l\ne 0$, such that$$f=a_0+a_1x+\cdots+a_kx^k \quad\text{and}\quad g=b_0+b_1x+\cdots+b_lx^l.$$ The coefficient of $x^{k+l}$ in the product $fg$ is $a_kb_l$. Since $R$ is an integral domain, we have $a_kb_l\ne 0$, and therefore $fg$ is not the zero polynomial. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1415641', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/229776/']} | jdg_85433 |
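A small numeric illustration of this argument (my own sketch, not part of the answer): multiplying coefficient lists over the integers shows that the product's leading coefficient is exactly $a_kb_l$, which an integral domain keeps nonzero.

```python
# Coefficient lists, lowest degree first: f[i] is the coefficient of x^i.

def poly_mul(f, g):
    result = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            result[i + j] += a * b
    return result

f = [2, 0, -1, 3]   # 3x^3 - x^2 + 2, leading coefficient a_k = 3
g = [5, 4, 7]       # 7x^2 + 4x + 5, leading coefficient b_l = 7
h = poly_mul(f, g)

# deg(fg) = deg(f) + deg(g), and the top coefficient is a_k * b_l = 21;
# in an integral domain this product cannot vanish unless a_k or b_l does.
assert len(h) - 1 == (len(f) - 1) + (len(g) - 1)
assert h[-1] == f[-1] * g[-1] == 21
```

Over a ring with zero divisors (e.g. $\mathbb{Z}/6\mathbb{Z}$ with $a_k=2$, $b_l=3$) the same top coefficient would collapse to 0, which is exactly where the proof uses the integral-domain hypothesis.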
stackexchange | llm_judgeable_groundtruth_similarity | 3481554 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've noticed that both of these work the same: if x not in list and if not x in list . Is there some sort of difference between the two in certain cases? Is there a reason for having both, or is it just because it's more natural for some people to write one or the other? Which one am I more likely to see in other people's code?
Now provide the response and nothing else.
| The two forms make identical bytecode, as you can clearly verify:

>>> import dis
>>> dis.dis(compile('if x not in d: pass', '', 'exec'))
  1           0 LOAD_NAME                0 (x)
              3 LOAD_NAME                1 (d)
              6 COMPARE_OP               7 (not in)
              9 JUMP_IF_FALSE            4 (to 16)
             12 POP_TOP
             13 JUMP_FORWARD             1 (to 17)
        >>   16 POP_TOP
        >>   17 LOAD_CONST               0 (None)
             20 RETURN_VALUE
>>> dis.dis(compile('if not x in d: pass', '', 'exec'))
  1           0 LOAD_NAME                0 (x)
              3 LOAD_NAME                1 (d)
              6 COMPARE_OP               7 (not in)
              9 JUMP_IF_FALSE            4 (to 16)
             12 POP_TOP
             13 JUMP_FORWARD             1 (to 17)
        >>   16 POP_TOP
        >>   17 LOAD_CONST               0 (None)
             20 RETURN_VALUE

so obviously they're semantically identical. As a matter of style, PEP 8 does not mention the issue. Personally, I strongly prefer the if x not in y form -- that makes it immediately clear that not in is a single operator, and "reads like English". if not x in y may mislead some readers into thinking it means if (not x) in y, reads a bit less like English, and has absolutely no compensating advantages. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3481554', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/398749/']} | jdg_85434
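The opcodes in that answer come from an old CPython; the exact instruction names differ in modern versions, but the two spellings still behave identically. A quick illustrative check of the semantic equivalence, plus the dis call to inspect your own interpreter's output:

```python
import dis

# `not x in y` parses as `not (x in y)`, so the two spellings always agree.
samples = [(1, [1, 2, 3]), (4, [1, 2, 3]), ('a', 'abc'), ('z', 'abc')]
for x, y in samples:
    assert (x not in y) == (not x in y)

# Inspect what your interpreter compiles either form into:
dis.dis(compile('if x not in d: pass', '', 'exec'))
```

On recent CPython versions the compiler folds `not (x in y)` into the `not in` comparison, so the listings still come out the same even though the opcode names have changed.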
stackexchange | llm_judgeable_groundtruth_similarity | 53252335 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am getting this error:

Error: Content is not allowed in trailing section.

On Style.xml file:

<color name="colorPrimary">#3F51B5</color>
<color name="colorPrimaryDark">#303F9F</color>
<color name="colorAccent">#FF4081</color>
<color name="bottomText">#6b6b6b</color>
<color name="rippleEffectColor">#f916a463</color>
Now provide the response and nothing else.
| If you expect to just be able to do func(v) you cannot, because there's no way I can think of that your function could deduce the size of the dynamically allocated int[numElements]. A good way you could wrap this is to take a pair of forward iterators; that is, if you only need to iterate over items one by one, since random access is very bad on some containers like std::list.

template<class FWIt>
void func(FWIt a, const FWIt b)
{
    while (a != b)
    {
        std::cout << "Value: " << *a << '\n';
        ++a;
    }
}

template<class T>
void func(const T& container)
{
    using std::begin;
    using std::end;
    func(begin(container), end(container));
}

This would work with the following:

int array[5] = {1, 2, 3, 4, 5};
func(array);

int* dynarray = new int[5]{1, 2, 3, 4, 5};
func(dynarray, dynarray + 5);

std::vector<int> vec{1, 2, 3, 4, 5};
func(vec);
func(vec.begin(), vec.end());

std::list<int> list{1, 2, 3, 4, 5};
func(list);

Edit: This also works by passing raw arrays directly rather than as two pointers thanks to @DanielH's change (but still won't work with dynamically allocated arrays). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53252335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7918884/']} | jdg_85435
stackexchange | llm_judgeable_groundtruth_similarity | 1581810 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am manually converting Java to C# and have the following code:

for (Iterator<SGroup> theSGroupIterator = SGroup.getSGroupIterator(); theSGroupIterator.hasNext();)
{
    SGroup nextSGroup = theSGroupIterator.next();
}

Is there an equivalent of Iterator<T> in C# or is there a better C# idiom?
Now provide the response and nothing else.
| The direct equivalent in C# would be IEnumerator<T> and the code would look something like this:

SGroup nextSGroup;

using(IEnumerator<SGroup> enumerator = SGroup.GetSGroupEnumerator())
{
    while(enumerator.MoveNext())
    {
        nextSGroup = enumerator.Current;
    }
}

However the idiomatic way would be:

foreach(SGroup group in SGroup.GetSGroupIterator())
{
    ...
}

and have GetSGroupIterator return an IEnumerable<T> (and probably rename it to GetSGroups() or similar). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1581810', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/130964/']} | jdg_85436
stackexchange | llm_judgeable_groundtruth_similarity | 1496305 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While the midpoint of either $[a,b]$ or $(a,b)$ is $\dfrac{a+b}{2}$, what's the midpoint of $[a,b)$ if it has been defined at all?
Now provide the response and nothing else.
| It depends on how you want to define the so called "mid point" of an interval. One can simply define the mid point of an interval to be $\frac{a+b}2$ for any interval of the form $(a,b), (a,b], [a,b)$ or $[a,b]$. This definition would serve most general purposes quite well since a single point has measure zero. However, suppose you want the define the mid point to have the property such that for an interval $I$, $m$ is called the midpoint of $I$ if for any $x\in I$ there exist $x'\in I$ such that $x-m=m-x'$ then it's provable that $[a,b)$ has no mid point. (Hint: if such $m$ exists then $I$ must be of the form $(m-d,m+d)$ or $[m-d,m+d]$) | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1496305', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_85437 |
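One way to flesh out that answer's hint, sketched in LaTeX (my own wording, following the hint's symmetry idea):

```latex
% Suppose $m$ were a midpoint of $I=[a,b)$ in the second sense. Then
% $\sigma(x) = 2m - x$ maps $I$ into $I$, and since $\sigma$ is an involution
% it is an order-reversing bijection of $I$ onto itself. An order-reversing
% bijection must carry a minimum to a maximum: for all $x \in I$ we have
% $x \ge a$, hence $\sigma(x) \le \sigma(a)$, and surjectivity then makes
% $\sigma(a)$ a maximum of $I$. But $[a,b)$ attains its infimum and not its
% supremum, i.e. it has a minimum and no maximum, a contradiction.
```

The same reflection argument shows why $(a,b)$ and $[a,b]$ are unproblematic: both attain their endpoints symmetrically (neither, or both).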
stackexchange | llm_judgeable_groundtruth_similarity | 294798 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In an isothermal process, all the heat that enters the system is transformed into work. However, doesn’t the second law of thermodynamics prohibit this? I thought that it is impossible to convert heat entirely into work, and yet an isothermal process seems to accomplish this. If there are processes out there that can convert heat entirely into work, then why do physicists theorize that heat death will occur?
Now provide the response and nothing else.
| No, the second law of thermodynamics prohibits processes where the heat is transformed into work and this is the only effect of the process. In this situation we have a gas which expands and performs the work and the heater. Some heat was taken from heater and transformed into work. Entropy of heater decreased. But the gas has expanded! Even though its energy and temperature remained unchanged, its volume has increased, and so its entropy has increased. So, the transform of heat to work was not the only effect of the process, and the second law of thermodynamics does not prohibit it. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/294798', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/77318/']} | jdg_85438 |
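Putting illustrative numbers on this (a sketch; 1 mol of ideal gas, values arbitrary): in a reversible isothermal expansion the gas's entropy gain exactly offsets the reservoir's loss, even though all the absorbed heat becomes work.

```python
# Reversible isothermal expansion of an ideal gas: dU = 0 so Q = W,
# yet the total entropy change is zero, not negative.
import math

R = 8.314           # J/(mol*K)
n, T = 1.0, 300.0   # amount and temperature (illustrative)
V1, V2 = 1.0, 2.0   # the volume doubles

W = n * R * T * math.log(V2 / V1)   # work done by the gas
Q = W                               # isothermal ideal gas: all heat -> work

dS_gas = n * R * math.log(V2 / V1)  # entropy gained by the expanding gas
dS_reservoir = -Q / T               # entropy lost by the heater
dS_total = dS_gas + dS_reservoir    # zero for this reversible process

assert dS_gas > 0
assert abs(dS_total) < 1e-9
```

The expansion itself is the "other effect" the answer points to: the work extraction is paid for by the gas ending up in a higher-entropy (larger-volume) state.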
stackexchange | llm_judgeable_groundtruth_similarity | 16156 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I installed openSUSE 11.2 (64bit) a while ago, and am mainly using it as a dedicated Apache hosting machine. Everything has been working properly for a while (~1 month), but as of a few days back, the filesystem seems to have been turned read-only (!?). Pretty much every task fails, be it starting vi or even man pages (can't create files on disk). Has anyone seen something like this before? If so, what did you do to recover? I tried googling for a solution but failed to find a helpful answer. Nota bene - no significant changes were made in this period, and machine load is very, very lightweight.
Now provide the response and nothing else.
| It could be a file system or disk failure. Check dmesg and system logs for any clues. Look there before you reboot if you haven't already. If you have rebooted, does the system come up clean or warn you about file system problems? You can remount the file system read-write using mount -o remount,rw / but I don't recommend doing that until you know why it mounted itself ro. It's possible to hit a keyboard shortcut that would remount the root file system read-only. Usually this is Alt SysReq u . Did you or your cat perchance do that? Did you initiate a shutdown request and then abort it? The shutdown scripts usually remount the system ro towards the end of the process, but I've seen them get borked and skip to the end :) | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/16156', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/3235/']} | jdg_85439
stackexchange | llm_judgeable_groundtruth_similarity | 259140 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I search the whole classpath for an annotated class? I'm writing a library and I want to allow the users to annotate their classes, so when the Web application starts I need to scan the whole classpath for a certain annotation. I'm thinking about something like the new functionality for Java EE 5 Web Services or EJBs. You annotate your class with @WebService or @EJB and the system finds these classes while loading so they are accessible remotely.
Now provide the response and nothing else.
| Use the org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider API: a component provider that scans the classpath from a base package and then applies exclude and include filters to the resulting classes to find candidates.

ClassPathScanningCandidateComponentProvider scanner =
    new ClassPathScanningCandidateComponentProvider(<DO_YOU_WANT_TO_USE_DEFAULT_FILTER>);

scanner.addIncludeFilter(new AnnotationTypeFilter(<TYPE_YOUR_ANNOTATION_HERE>.class));

for (BeanDefinition bd : scanner.findCandidateComponents(<TYPE_YOUR_BASE_PACKAGE_HERE>))
    System.out.println(bd.getBeanClassName());

 | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/259140', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2937/']} | jdg_85440
Subsets and Splits