source stringclasses 1 value | task_type stringclasses 1 value | in_source_id stringlengths 1-8 | prompt stringlengths 209-40.4k | gold_standard_solution stringlengths 0-56.7k | verification_info stringclasses 1 value | metadata stringlengths 138-225 | problem_id stringlengths 9-10 |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 50418704 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The newest release version of the Google Maps JavaScript API (3.32.13) is conflicting with Prototype.js version 1.7.3. When I have Prototype included on the page, the Street View of Google Maps will not handle mouse drags to "look around". Is this a known issue? Any workarounds?
Now provide the response and nothing else.
| So the problem is that PrototypeJS is overwriting Array.from as an alias to $A(), which creates an extended Array object. IF (big IF) you are not using Array.from to create a shallow copy of an array and expecting the Prototype extended methods, you can remove or comment out the line in prototype.js that consists only of Array.from = $A; (in my copy of 1.7.3 it is line 1114). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50418704', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/95642/']} | jdg_85541 |
stackexchange | llm_judgeable_groundtruth_similarity | 4212 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We are considering a connected weighted graph G. A 1-tree is a tree with one extra edge added (so it contains exactly one cycle). The task is to find a minimum spanning 1-tree of G. I was thinking of first using any of the well-known MST algorithms and then adding the edge of smallest weight that is not in the MST. Will something like this work (I am having trouble proving it would)?
Now provide the response and nothing else.
| I'm not sure this is research level, but here's a sketch. Suppose this algorithm outputs a 1-tree and the actual cheapest 1-tree is some other 1T. Now, since you built your tree with Kruskal's algorithm, you know that the tree directly preceding the addition of your last edge (the cycle-inducing edge) was minimal, and as such costs less than any subtree of the "true" cheapest 1-tree. So, consider the subtree of the "true" 1-tree that is comprised of all edges save one edge in the cycle that your tree doesn't contain (it is easy to show that, unless their cycles are equal, this edge must exist)*. This tree is more expensive than yours. Moreover, since your tree is a tree, adding that edge from the "true" 1T would have made yours a 1-tree. So your final edge must be cheaper than that edge, or you would have chosen that edge. Thus, your 1-tree must be cheapest. *If the cycles are the same, but the remaining subtrees are different, you are also done since yours was built using Kruskal's algorithm. | {} | {'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/4212', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/3152/']} | jdg_85542 |
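A minimal Python sketch of the procedure the answer analyzes, Kruskal's MST plus the cheapest non-tree edge; the union-find helper and the example edge list are illustrative assumptions, not taken from the question:

```python
# Sketch: minimum spanning 1-tree = Kruskal's MST + the cheapest edge not in the MST.
# The example graph and helper names are illustrative only.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving keeps the forest shallow
        x = parent[x]
    return x

def minimum_spanning_1tree(n, edges):
    """edges: list of (weight, u, v) on vertices 0..n-1; the graph is assumed
    connected with at least one edge beyond a spanning tree."""
    parent = list(range(n))
    mst, skipped = [], []
    for w, u, v in sorted(edges):          # Kruskal: consider edges by increasing weight
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv                # union: the edge joins two components
            mst.append((w, u, v))
        else:
            skipped.append((w, u, v))      # would create a cycle; keep as a candidate
    return mst + [skipped[0]]              # cheapest cycle-inducing edge closes the 1-tree

if __name__ == "__main__":
    example = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3), (5, 1, 3)]
    print(minimum_spanning_1tree(4, example))
```

Collecting the rejected (cycle-inducing) edges during the Kruskal pass makes the cheapest non-tree edge available for free at the end, since edges are already visited in weight order.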
stackexchange | llm_judgeable_groundtruth_similarity | 476540 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm basically trying to teach myself how to code and I want to follow good practices. There are obvious benefits to unit testing. There is also much zealotry when it comes to unit-testing and I prefer a much more pragmatic approach to coding and life in general. As context, I'm currently writing my first "real" application which is the ubiquitous blog engine using asp.net MVC. I'm loosely following the MVC Storefront architecture with my own adjustments. As such, this is my first real foray into mocking objects. I'll put the code example at the end of the question. I'd appreciate any insight or outside resources that I could use to increase my understanding of the fundamentals of testing and mocking. The resources I've found on the net are typically geared towards the "how" of mocking and I need more understanding of the where, why and when of mocking. If this isn't the best place to ask this question, please point me to a better place. I'm trying to understand the value that I'm getting from the following tests. The UserService is dependent upon the IUserRepository. The value of the service layer is to separate your logic from your data storage, but in this case most of the UserService calls are just passed straight to IUserRepository. The fact that there isn't much actual logic to test could be the source of my concerns as well. I have the following concerns. It feels like the code is just testing that the mocking framework is working. In order to mock out the dependencies, it makes my tests have too much knowledge of the IUserRepository implementation. Is this a necessary evil? What value am I actually gaining from these tests? Is the simplicity of the service under test causing me to doubt the value of these tests. I'm using NUnit and Rhino.Mocks, but it should be fairly obvious what I'm trying to accomplish. [SetUp] public void Setup() { userRepo = MockRepository.GenerateMock<IUserRepository>(); userSvc = new UserService(userRepo); theUser = new User { ID = null, UserName = "http://joe.myopenid.com", EmailAddress = "[email protected]", DisplayName = "Joe Blow", Website = "http://joeblow.com" }; } [Test] public void UserService_can_create_a_new_user() { // Arrange userRepo.Expect(repo => repo.CreateUser(theUser)).Return(true); // Act bool result = userSvc.CreateUser(theUser); // Assert userRepo.VerifyAllExpectations(); Assert.That(result, Is.True, "UserService.CreateUser(user) failed when it should have succeeded"); } [Test] public void UserService_can_not_create_an_existing_user() { // Arrange userRepo.Stub(repo => repo.IsExistingUser(theUser)).Return(true); userRepo.Expect(repo => repo.CreateUser(theUser)).Return(false); // Act bool result = userSvc.CreateUser(theUser); // Assert userRepo.VerifyAllExpectations(); Assert.That(result, Is.False, "UserService.CreateUser() allowed multiple copies of same user to be created"); }
Now provide the response and nothing else.
| Essentially what you are testing here is that the methods are getting called, not whether or not they actually work. Which is what mocks are supposed to do. Instead of calling the method, they just check to see if the method got called, and return whatever is in the Return() statement. So in your assertion here: Assert.That(result, Is.False, "error message here"); This assertion will ALWAYS succeed because your expectation will ALWAYS return false, because of the Return statement: userRepo.Expect(repo => repo.CreateUser(theUser)).Return(false); I'm guessing this isn't that useful in this case. Where mocking is useful is when you want to, for example, make a database call somewhere in your code, but you don't want to actually call to the database. You want to pretend that the database got called, but you want to set up some fake data for it to return, and then (here's the important part) test the logic that does something with the fake data your mock returned. In the above examples you are omitting the last step. Imagine you had a method that displayed a message to the user that said whether the new user was created: public string displayMessage(bool userWasCreated) { if (userWasCreated) return "User created successfully!"; return "User already exists";} then your test would be userRepo.Expect(repo => repo.CreateUser(theUser)).Return(false);Assert.AreEqual("User already exists", displayMessage(userSvc.CreateUser(theUser))) Now this has some value, because you are testing some actual behavior. Of course, you could also just test this directly by passing in "true" or "false." You don't even need a mock for that test. Testing expectations is fine, but I've written plenty of tests like that, and have come to the same conclusion that you are reaching - it just isn't that useful. So in short, mocking is useful when you want to abstract away externalities, like databases, or webservice calls, etc, and inject known values at that point. But it's not often useful to test mocks directly. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/476540', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1880/']} | jdg_85543 |
stackexchange | llm_judgeable_groundtruth_similarity | 14564222 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the below background job that writes to a csv file and emails it out. I am using the Tempfile class so the file is removed after I email it to the user. Currently, when I look at the csv file I am producing the results look like the following: ["Client Application" "Final Price" "Tax" "Credit" "Base Price" "Billed At" "Order Guid" "Method of Payment Guid" "Method of Payment Type"]["web" nil nil nil nil nil nil "k32k313k1j3" "credit card"] Please ignore the data, but the issue is, it is being written directly to the file in the ruby format and not removing the "" and [] characters. Please see the code below: class ReportJob@queue = :report_job def self.perform(client_app_id, current_user_id) user = User.find(current_user_id) client_application = Application.find(client_app_id) transactions = client_application.transactions file = Tempfile.open(["#{Rails.root}/tmp/", ".csv"]) do |csv| begin csv << ["Application", "Price", "Tax", "Credit", "Base Price", "Billed At", "Order ID", "Payment ID", "Payment Type"] transactions.each do |transaction| csv << "\n" csv << [application.name, transaction.price, transaction.tax, transaction.credit, transaction.base_price, transaction.billed_at, transaction.order_id, transaction.payment_id, transaction.payment_type] end ensure ReportMailer.send_rev_report(user.email, csv).deliver csv.close(unlink_now=false) end endendend Would this be an issue with using the tempfile class instead of the csv class? or is there something I could do to change the way it is being written to the file? Adding the code for reading the csv file in the mailer. I am currently getting a TypeError that says "can't convert CSV into String". class ReportMailer < ActionMailer::Base default :from => "[email protected]" def send_rev_report(email, file) attachments['report.csv'] = File.read("#{::Rails.root.join('tmp', file)}") mail(:to => email, :subject => "Attached is your report") end endend
Now provide the response and nothing else.
| The issue is that you're not actually writing CSV data to the file. You're sending arrays to the filehandle. I believe you need something like: Tempfile.open(....) do |fh| csv = CSV.new(fh, ...); <rest of your code>; end to properly set up the CSV output filtering. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14564222', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/481176/']} | jdg_85544 |
stackexchange | llm_judgeable_groundtruth_similarity | 10801302 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to load a file onto a MySQL database, having the primary key auto_incremented and I would like the data to be updated if i find any duplicate rows. However, the REPLACE keywords only works on primary key, which is auto generated so i'm stuck. how to be able to have a table with an ID that auto_increments and at the same time to be able to insert/update data from a file using LOAD DATA INFILE? Here is the table CREATE TABLE `oxygen_domain`.`TEST` (`TEST_ID` int(11) NOT NULL AUTO_INCREMENT,`NAME` varchar(255) NOT NULL,`VALUE` varchar(255) DEFAULT NULL,PRIMARY KEY (`TEST_ID`,`NAME`,`VALUE`)) and here is the command LOAD DATA LOCAL INFILE 'C:/testData.txt'REPLACEINTO TABLE TESTFIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'LINES TERMINATED BY '\r\n'IGNORE 1 LINES(NAME, VALUE); and here is the sample data ignored name, ignored valuename1,value1name2,value2name3,value3 The wanted ending result after running the command above multiple times withthe above data is |TEST_ID |NAME |VALUE|1, 'name1', 'value1'2, 'name2', 'value2'3, 'name3', 'value3'
Now provide the response and nothing else.
| OBSERVATION #1 You should not do REPLACE because it is a mechanical DELETE and INSERT . As the MySQL Documentation says about REPLACE Paragraph 2 REPLACE is a MySQL extension to the SQL standard. It either inserts, or deletes and inserts. For another MySQL extension to standard SQL—that either inserts or updates—see Section 13.2.5.3, “INSERT ... ON DUPLICATE KEY UPDATE Syntax”. Paragraph 5 To use REPLACE, you must have both the INSERT and DELETE privileges for the table. Using REPLACE will throw away established values for TEST_ID that cannot automatically be reused. OBSERVATION #2 The table layout will not support trapping of duplicate keys If a name is unique, the table should be laid out like this LAYOUT #1 CREATE TABLE `oxygen_domain`.`TEST` (`TEST_ID` int(11) NOT NULL AUTO_INCREMENT,`NAME` varchar(255) NOT NULL,`VALUE` varchar(255) DEFAULT NULL,PRIMARY KEY (`TEST_ID`),KEY (`NAME`)) If a name allows multiple values, the table should be laid out like this LAYOUT #2 CREATE TABLE `oxygen_domain`.`TEST` (`TEST_ID` int(11) NOT NULL AUTO_INCREMENT,`NAME` varchar(255) NOT NULL,`VALUE` varchar(255) DEFAULT NULL,PRIMARY KEY (`TEST_ID`),KEY (`NAME`,`VALUE`)) PROPOSED SOLUTION Use a temp table to catch everything. Then, perform a big INSERT from the temp table based on layout LAYOUT #1 Replace the VALUE for a Duplicate NAME USE oxygen_domainDROP TABLE IF EXISTS `TESTLOAD`;CREATE TABLE `TESTLOAD` SELECT NAME,VALUE FROM TEST WHERE 1=2;LOAD DATA LOCAL INFILE 'C:/testData.txt'INTO TABLE `TESTLOAD`FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'LINES TERMINATED BY '\r\n'IGNORE 1 LINES(NAME, VALUE);INSERT INTO `TEST` (NAME, VALUE)SELECT NAME, VALUE FROM `TESTLOAD`ON DUPLICATE KEY UPDATE VALUE = VALUES(VALUE);DROP TABLE `TESTLOAD`; LAYOUT #2 Ignore Duplicate (NAME,VALUE) rows USE oxygen_domainDROP TABLE IF EXISTS `TESTLOAD`;CREATE TABLE `TESTLOAD` SELECT NAME,VALUE FROM TEST WHERE 1=2;LOAD DATA LOCAL INFILE 'C:/testData.txt'INTO TABLE `TESTLOAD`FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'LINES TERMINATED BY '\r\n'IGNORE 1 LINES(NAME, VALUE);INSERT IGNORE INTO `TEST` (NAME, VALUE)SELECT NAME, VALUE FROM `TESTLOAD`;DROP TABLE `TESTLOAD`; Update if we need to avoid the creating and dropping of the table each time. we can TRUNCATE TRUNCATE the table before or after using INSERT...INTO statement. Therefore, we do not have to create the table next time. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10801302', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1423934/']} | jdg_85545 |
stackexchange | llm_judgeable_groundtruth_similarity | 33308121 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been experimenting with ES6 for a while now, and I've just come to a slight problem. I really like using arrow functions, and whenever I can, I use them. However, it would appear that you can't bind them! Here is the function: var f = () => console.log(this); Here is the object I want to bind the function to: var o = {'a': 42}; And here is how I would bind f to o : var fBound = f.bind(o); And then I can just call fBound : fBound(); Which will output this (the o object): {'a': 42} Cool! Lovely! Except that it doesn't work. Instead of outputting the o object, it outputs the window object. So I'd like to know: can you bind arrow functions? (And if so, how?) I've tested the code above in Google Chrome 48 and Firefox 43, and the result is the same.
Now provide the response and nothing else.
| You cannot rebind this in an arrow function. It will always be defined as the context in which it was defined. If you require this to be meaningful you should use a normal function. From the ECMAScript 2015 Spec : Any reference to arguments, super, this, or new.target within an ArrowFunction must resolve to a binding in a lexically enclosing environment. Typically this will be the Function Environment of an immediately enclosing function. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/33308121', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4633828/']} | jdg_85546 |
stackexchange | llm_judgeable_groundtruth_similarity | 696237 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I am running an Ubuntu 14.04 LTS server with Apache installed. I have 3 Virtual Hosts set up, with another Default Server to catch all other requests not taken care of. The Default Server has a Document Root of /var/www/. The other 3 Virtual Hosts have a Document root of /var/www/html/.../. I have all three Virtual Hosts enabled on their respective Domain Names that are only available for local access at this point for testing. However, whenever I try to access one of the three Virtual Hosts, I get instantly booted to the Document Root of the Default Server. This is the config of one of the Virtual Hosts, with the other ones being pretty much the same, except some fields which obviously must be changed (Doc Root, Server Name). I have replaced the Server Name and the Document Roots, but both are legitimate and exist on the server, if that matters. <VirtualHost [example.com]>DocumentRoot "/var/www/html/[Directory]“<Directory "/var/www/html/[Directory]“> allow from all Options None Require all granted</Directory> ServerName [example.com] </VirtualHost> This is the default-ssl.conf file. <IfModule mod_ssl.c><VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www/html # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined # For most configuration files from conf-available/, which are # enabled or disabled at a global level, it is possible to # include a line for only one particular virtual host. For example the # following line enables the CGI configuration for this host only # after it has been globally disabled with "a2disconf". #Include conf-available/serve-cgi-bin.conf # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. 
Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. 
BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown</VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet This is the 000-default.conf. I don't know why nothing is showing on it other than this line. # vim: syntax=apache ts=4 sw=4 sts=4 sr noet How do I get the Virtual Hosts to display the correct directory when accessed? If it makes any difference, I'm administrating the Apache Server over Webmin 1.750 on Ubuntu Server 14.04 LTS I have read somewhere that I need to specify a Server Name for the Default Server, or else it will catch all requests since it is the very first server on the list. Is this true? If so, how do I do so?
Now provide the response and nothing else.
| Hard drives do have a multitude of error correction methods in place to prevent data corruption . Hard drives are divided into sectors, from which some may become completely unwritable / unreadable or return wrong data through data corruption - let's call the first bad sector corruption and the latter silent data corruption . Bad Sector Corruption The first corruption is already handled by the drive itself through a multitude of ways. At the factory, every manufactured drive is tested for bad sectors, which are put into a Primary Defect List (p-list) . During the normal usage of the drive, the internal systems may find more bad sectors through the normal wear and tear - these are put into the Grown Defect List (g-list) . Some drives have even more lists, but these two are the most common ones. The drive itself counters these errors by remapping the access to the hard drives sectors to spare sectors without notifying the operating system. However, every time a remap happens, the appropriate values in the hard drives SMART system are increased, thus indicating a growing wear of the hard drive. The indicator to look for is SMART 5 - Reallocated Sector Count, while other important ones are 187 (Reported Uncorrectable Errors), 197 (Current Pending Sector Count) and 198 (Offline Uncorrectable). To find bad sectors, hard drives use internal error correction codes (ECC) , which can be used to determine the integrity of data in a specific sector. That way, it can check for write and read errors in a sector and update the g-list if necessary. Sources https://www.mjm.co.uk/bad-sector-remapping.html http://digital-forensics.sans.org/blog/2009/01/26/data-recovery-ecc-data-and-recovery Silent Data Corruption Since we do have quite a lot of internal data integrity checking, silent data corruption should be very uncommon - after all, since hard drives have the task of reliably persisting data, they should do that one job correctly. To keep the amount of silent data corruption outside of a user requested read or write minimal, RAID systems periodically check the ECCs of the complete drives to update the g-list ( data scrubbing ). If an error occurs, the data is reconstructed from another RAID member after checking the sectors ECC. However, all the data correction and integrity checking has to be done somewhere - the firmware. Errors in these low-level programs may still lead to problems, as might mechanical problems and false positives ECC sums. An example would be an unchecked write, where the firmware erroneously reports a successful write, while the actual harddrive write did not occur or was faulty (an identity discrepancy ). There are some studies on the statistical occurence of these failures, where a file system data integrity check did report a failure without the underlying drive reporting a problem, thus showing a silent data corruption. TLDR : less than 0.3% in consumer disks and less than 0.02% in enterprise disks on average contained such identity discrepancies over a 17 month time span with 1.5 million disks checked (365 disks in total had identity discrepancies) - see Table 10 and Section 5 in this publication . Sources http://research.cs.wisc.edu/adsl/Publications/corruption-fast08.pdf | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/696237', 'https://serverfault.com', 'https://serverfault.com/users/278126/']} | jdg_85547 |
stackexchange | llm_judgeable_groundtruth_similarity | 53453861 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say we have a context provider set up, along with some initial data property values. Somewhere along the line, let's say a consumer then modifies those properties. On page reload, those changes are lost. What is the best way to persist the data so we can retain those data modifications? Any method other than simply local storage?
Now provide the response and nothing else.
| Yeah, if you want the data to persist across reloads, your options are going to be storing that info server-side (via an api call) or in browser storage (local storage, session storage, cookies). The option you'll want to use depends on what level of persistence you're looking to achieve. Regardless of storage choice, it would likely look something along the lines of const MyContext = React.createContext(defaultValue);class Parent extends React.Component { setValue = (value) => { this.setState({ value }); } state = { setValue: this.setValue, value: localStorage.getItem("parentValueKey") } componentDidUpdate(prevProps, prevState) { if (this.state.value !== prevState.value) { // Whatever storage mechanism you end up deciding to use. localStorage.setItem("parentValueKey", this.state.value) } } render() { return ( <MyContext.Provider value={this.state}> {this.props.children} </MyContext.Provider> ) }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/53453861', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1930938/']} | jdg_85548 |
stackexchange | llm_judgeable_groundtruth_similarity | 1169539 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to do a linear regression in R using the lm() function. My data is an annual time series with one field for year (22 years) and another for state (50 states). I want to fit a regression for each state so that at the end I have a vector of lm responses. I can imagine doing for loop for each state then doing the regression inside the loop and adding the results of each regression to a vector. That does not seem very R-like, however. In SAS I would do a 'by' statement and in SQL I would do a 'group by'. What's the R way of doing this?
Now provide the response and nothing else.
| Here's one way using the lme4 package. library(lme4) d <- data.frame(state=rep(c('NY', 'CA'), c(10, 10)), year=rep(1:10, 2), response=c(rnorm(10), rnorm(10))) xyplot(response ~ year, groups=state, data=d, type='l') fits <- lmList(response ~ year | state, data=d) fits#------------Call: lmList(formula = response ~ year | state, data = d)Coefficients: (Intercept) yearCA -1.34420990 0.17139963NY 0.00196176 -0.01852429Degrees of freedom: 20 total; 16 residualResidual standard error: 0.8201316 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1169539', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/37751/']} | jdg_85549 |
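A minimal Python sketch of the same one-fit-per-group idea, using numpy.polyfit in place of lmList; the toy data mirrors the R example above and the names are illustrative:

```python
import numpy as np

# Illustrative data: fit one intercept/slope pair per state, analogous to lmList's "| state".
data = {
    "NY": {"year": np.arange(1, 11), "response": np.random.randn(10)},
    "CA": {"year": np.arange(1, 11), "response": np.random.randn(10)},
}

fits = {}
for state, d in data.items():
    slope, intercept = np.polyfit(d["year"], d["response"], deg=1)  # OLS line per group
    fits[state] = {"intercept": intercept, "slope": slope}

for state, coefs in fits.items():
    print(state, coefs)
```

Each group gets its own ordinary least-squares line, which is the same per-state coefficient table that lmList reports.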
stackexchange | llm_judgeable_groundtruth_similarity | 2191637 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We know that $\sqrt{x^2} = |x|$. My question is: Does this property apply to $\sqrt[n]{x^n}$, for all $n \in \mathbb{N}$? I can't think on how i could check this. It is not very clear to me why $\sqrt{x^2} = |x|$ either.Thank you.
Now provide the response and nothing else.
| No. For a counterexample, $\sqrt[3]{-8}=-2$. For even $n$, we define $\sqrt[n]{x}$ to be $\ge 0$ because we want $\sqrt[n]{x}$ to work like a function. That is, if we said "$\sqrt[n]{x}$ is that $y$ such that $y^n =x$", then we would get 2 solutions for $y$ for even $n$, but for it to work like a function we want just one. This is why $\sqrt[2]{x^2} = |x|$, rather than $\sqrt[2]{x^2}=\pm x$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2191637', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/371741/']} | jdg_85550 |
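A short worked example contrasting the even and odd cases described above:

```latex
% Even n: the principal even root is nonnegative, so the absolute value appears:
\[
  \sqrt[4]{(-2)^4} = \sqrt[4]{16} = 2 = |-2|, \qquad \sqrt[4]{2^4} = 2 = |2|.
\]
% Odd n: the real odd root keeps the sign, so no absolute value is needed:
\[
  \sqrt[3]{(-2)^3} = \sqrt[3]{-8} = -2 \neq |-2|.
\]
```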
stackexchange | llm_judgeable_groundtruth_similarity | 1559695 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How is the derivative of a function f(x) typically calculated programmatically to ensure maximum accuracy? I am implementing the Newton-Raphson method, and it requires taking the derivative of a function.
Now provide the response and nothing else.
| I agree with @erikkallen that (f(x + h) - f(x - h)) / (2 * h) is the usual approach for numerically approximating derivatives. However, getting the right step size h is a little subtle. The approximation error in (f(x + h) - f(x - h)) / (2 * h) decreases as h gets smaller, which says you should take h as small as possible. But as h gets smaller, the error from floating point subtraction increases since the numerator requires subtracting nearly equal numbers. If h is too small, you can lose a lot of precision in the subtraction. So in practice you have to pick a not-too-small value of h that minimizes the combination of approximation error and numerical error. As a rule of thumb, you can try h = SQRT(DBL_EPSILON) where DBL_EPSILON is the smallest double precision number e such that 1 + e != 1 in machine precision. DBL_EPSILON is about 10^-15 so you could use h = 10^-7 or 10^-8. For more details, see these notes on picking the step size for differential equations. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1559695', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/149045/']} | jdg_85551 |
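A small Python sketch of the central difference with the suggested step size h = sqrt(DBL_EPSILON); the test function and evaluation point are illustrative assumptions:

```python
import math
import sys

def central_difference(f, x, h=None):
    """Approximate f'(x) with the central difference (f(x+h) - f(x-h)) / (2*h).
    By default h is sqrt(machine epsilon), scaled by |x| as a common refinement
    so the step stays meaningful for large arguments."""
    if h is None:
        h = math.sqrt(sys.float_info.epsilon) * max(1.0, abs(x))
    return (f(x + h) - f(x - h)) / (2.0 * h)

if __name__ == "__main__":
    # Illustrative check: the derivative of sin at x = 1 should be cos(1).
    approx = central_difference(math.sin, 1.0)
    print(approx, math.cos(1.0), abs(approx - math.cos(1.0)))
```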
stackexchange | llm_judgeable_groundtruth_similarity | 1194620 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a loop that reads plenty of data from an external source. The process takes about 20 seconds, and I want to show the progress to the user. I don't need any fancy progress bars, so I chose to plot my progress in a label that will say "Step 1/1000", then change to "Step 2/1000" etc. My code looks something like this: // "count" is the number of steps in the loop, // I receive it in previous codeString countLabel = "/"+count.ToString();for (i = 0; i < count; i++){ ... do analysis ... labelProgress.Content = "Step "+i.ToString()+countLabel} However, during that analysis the screen is "stuck" and the progress does not show as advancing. I understand this behavior from my past in C++, where I would probably have a separate thread showing a progress bar receiving notifications from the loop, or some form of repaint/refresh, or forcing the window/app to process its message queue. What's the right way to do it in C#? I'm not tied to the label, so if there's a simple progress-bar popup screen I could use instead of this label it would also be great... Thanks
Now provide the response and nothing else.
| Move the work to a BackgroundWorker and use the ReportProgress method. for (i = 0; i < count; i++){ ... do analysis ... worker.ReportProgress((100 * i) / count);}private void MyWorker_ProgressChanged(object sender, ProgressChangedEventArgs e){ taskProgressBar.Value = Math.Min(e.ProgressPercentage, 100);} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1194620', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/103532/']} | jdg_85552 |
stackexchange | llm_judgeable_groundtruth_similarity | 1911151 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using MVC .NET in a distributed environment with CSLA and I can reference HttpPostedFileBase from one of my web layers (eg Website.MVC), but I cannot reference HttpPostedFileBase from a separate layer (lets call it OtherLayer.Web). Any idea on what I need to do to be able to call HttpPostedFileBase ?I am able to use HttpPostedFile in both layers - should I just use this instead? The assembly references are basically the same - in Website.MVC I have: namespace Website.Mvc.Controllers{ using System; using System.Collections; using System.Collections.Generic; using System.Web.Mvc; using System.Web; using System.IO; using PetOrganizer.Mvc.Helpers; using TrupanionPetInsurance.Web; Whereas in my other layer i have: namespace OtherLayer.Web{ using System; using System.Collections; using System.Collections.Generic; using System.Collections.Specialized; using System.Data; using System.Data.SqlClient; using System.IO; using System.Net.Mail; using System.Text; using System.Text.RegularExpressions; using System.Web; using System.Web.Mvc; using System.Web.Security; using System.Xml; using System.Xml.Serialization; using Csla; using iTextSharp.text; using iTextSharp.text.pdf; using TruDat.BusinessLayer; using TruDat.BusinessLayer.Billing; using TruDat.BusinessLayer.Data; using TruDat.BusinessLayer.Utility; using TrupanionPetInsurance.Web.EmailTemplateParser; using TruDat.BusinessLayer.DataServices;
Now provide the response and nothing else.
| Make sure your project references the System.Web.Abstractions assembly which contains the HttpPostedFileBase type. As confusing as it may seem, the HttpPostedFile type lives in a separate assembly ( System.Web ) but the same exact namespace. It is important to remember that the namespaces can span assemblies and the name of an assembly does not necessarily dictate the namespace of the types contained in it. There is no System.Web.Abstractions namespace simply two assemblies that contain types in the System.Web namespace. Once you have referenced both assemblies you can reference both types by their unqualified names with this single using statement: using System.Web; | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1911151', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/232523/']} | jdg_85553 |
stackexchange | llm_judgeable_groundtruth_similarity | 15255 |
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I insert the file name of the active file buffer into a command? When I invoke a command that needs a file name, I am often doing so from a file buffer and want the name of that file while typing the command into the minibuffer. I'm looking for a hypothetical C-x … that I can type during forming a command line in the minibuffer. So if, while the current buffer is a file named Lorem Ipsum.txt , I type this: M-x grep RET 'foo' SPC C-x … RET the M-x grep RET presents a minibuffer prompting for the grep command. In response, I've typed the regex pattern argument ( 'foo' ) and inserted the filename argument ( C-x … ). That minibuffer will end up as: grep -nH -e 'foo' 'Lorem Ipsum.txt' What existing command gets the current file buffer's file name like this? What is its default key binding?
Now provide the response and nothing else.
| What existing command gets the current file buffer's file name like this? Unfortunately there is no ready made command for this, but we can make one as follows: (defun name-of-the-file () "Gets the name of the file the current buffer is based on." (interactive) (insert (buffer-file-name (window-buffer (minibuffer-selected-window))))) The magic is in the minibuffer-selected-window function; it does the right thing by picking the last buffer before the minibuffer was activated. Otherwise using just the buffer-file-name won't get us the filename into the minibuffer. Interestingly, the same function can be used in a regular buffer or minibuffer to insert the file name. No special treatment is needed. What is its default key binding? There is none, but you can pick a convenient one and assign it as follows: (global-set-key (kbd "C-c f") 'name-of-the-file) Put the above lisp code lines in your init.el file. This works in most trivial cases, but hard to predict what conflicts it might have with whatever modes you have active when invoking this function. What if the buffer is not visiting a valid filename, such as the scratch buffer? Then this error is shown in the echo area: name-of-the-file: Wrong type argument: char-or-string-p, nil Picking a valid buffer with a proper filename before invoking the shortcut will avoid such errors. There are several other solutions like the above, but they mainly differ in how they handle exceptions or overrides for any conflicts. For example, some employ mode-hooks to make this function available only in certain modes. | {} | {'log_upvote_score': 4, 'links': ['https://emacs.stackexchange.com/questions/15255', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/2506/']} | jdg_85554 |
stackexchange | llm_judgeable_groundtruth_similarity | 42593148 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a schema in Oracle-12c similar to a typical forum with accounts , posts , comments . I am writing a single query to get... one user all that user's posts the comments on each of those posts and the author of each comment. The query looks like this: select "accounts".*, "p".*, "c".*, "author".*from "accounts"cross apply ( select * from "posts" where "posts"."author_id" = "accounts"."id") "p"cross apply ( select * from "comments" where "comments"."post_id" = "p"."id") "c"left join "accounts" "author" on "author"."id" = "c"."author_id"where "accounts"."id" = 1 This query works as expected. I'm using CROSS APPLY instead of a typical JOIN because I'll be adding OFFSET and FETCH to paginate later on. However, the problem is that CROSS APPLY omits the posts that have no comments, which I don't want. I want to keep the posts in the results even if they have no comments. So I tried changing CROSS APPLY to OUTER APPLY . select "accounts".*, "p".*, "c".*, "author".*from "accounts"outer apply ( select * from "posts" where "posts"."author_id" = "accounts"."id") "p"outer apply ( select * from "comments" where "comments"."post_id" = "p"."id") "c"left join "accounts" "author" on "author"."id" = "c"."author_id"where "accounts"."id" = 1 But now I get this error: ORA-00904: "p"."id": invalid identifier00904. 00000 - "%s: invalid identifier"*Cause: *Action:Error at Line: 9 Column: 34 For some reason, my second OUTER APPLY join is complaining about me referencing "p"."id" from the results of the first one. But its fine when I was using CROSS APPLY . What is going on? Why is there this difference in behavior between these? EDIT: OUTER APPLY may not seem necessary in this basic example. This has been distilled from a more complex scenario in which I must insist that OUTER APPLY is indeed necessary, but the details of that scenario are irrelevant to the actual question I'm asking—which is about this seemingly undocumented difference in behavior between CROSS and OUTER APPLY . EDIT: Oracle version: Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production Client: Oracle SQL Developer Version 4.2.0.16.356 Server: output of uname -a - Linux ubuntu-1gb-sfo2-01 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux DDL: link
Now provide the response and nothing else.
| To CodeFuller's answer, I would just add that (A) there is a patch available for this bug and (B) there is a work-around SQL that works in 12.1.0.2, though I do not know whether it will serve your purposes. The workaround is to basically nest your joins, like so: SELECT accounts.*, p.*, author.*FROM accounts CROSS APPLY (SELECT posts.id, posts.author_id, posts.text, comments.comment_author_id, comments.comment_text FROM posts OUTER APPLY (SELECT comments.author_id comment_author_id, comments.text comment_text FROM comments WHERE comments.post_id = posts.id) comments WHERE posts.author_id = accounts.id) p LEFT JOIN accounts author ON author.id = p.comment_author_idWHERE accounts.id = 1;ID NAME ID_1 AUTHOR_ID TEXT COMMENT_AUTHOR_ID COMMENT_TEXT ID_2 NAME_1 ---- --------- ---- --------- ------------------------------------------------- ----------------- --------------------------------------- ----- ------------------- 1 Fred 1 1 Fred wrote this and it has comments 3 This is Helen's comment on Fred's post 3 Helen 1 Fred 2 1 Fred wrote this and it does not have any comments-------- End of Data --------2 row(s) fetched Reference: table DDL for workaround CREATE TABLE accounts( id NUMBER PRIMARY KEY, name VARCHAR2 (30));CREATE TABLE posts( id NUMBER PRIMARY KEY, author_id NUMBER, text VARCHAR2 (240));CREATE TABLE comments( id NUMBER PRIMARY KEY, post_id NUMBER, author_id NUMBER, text VARCHAR2 (240));INSERT INTO accounts (id, name)VALUES (1, 'Fred');INSERT INTO accounts (id, name)VALUES (2, 'Mary');INSERT INTO accounts (id, name)VALUES (3, 'Helen');INSERT INTO accounts (id, name)VALUES (4, 'Iqbal');INSERT INTO posts (id, author_id, text)VALUES (1, 1, 'Fred wrote this and it has comments');INSERT INTO posts (id, author_id, text)VALUES (2, 1, 'Fred wrote this and it does not have any comments');INSERT INTO posts (id, author_id, text)VALUES (3, 4, 'Iqbal wrote this and it does not have any comments');INSERT INTO comments (id, post_id, author_id, text)VALUES (1, 1, 3, 'This is Helen''s comment on Fred''s post'); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42593148', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6078821/']} | jdg_85555 |
stackexchange | llm_judgeable_groundtruth_similarity | 6266 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following problem: Assume you have a large list of directed edges, that constitute a large graph. I would like to provide an initial subset of vertices and I need to know which additional vertices I need to select in order to make sure that the resulting (sub)-graph is connected make sure that I don't have sink/source vertices, i.e. all vertices have at least one incoming and one outgoing edge. minimize the number of additional vertices I don't have a working code (at least not for V8+). But assume the following directed edges: ex = {"9" -> "7", "4" -> "6", "1" -> "9", "3" -> "5", "10" -> "8", "5" -> "2", "2" -> "5", "9" -> "3", "3" -> "1", "7" -> "9", "8" -> "6", "3" -> "10", "2" -> "1", "7" -> "4", "1" -> "4", "2" -> "7", "5" -> "6", "7" -> "2"};gr=Graph[ex, VertexLabels -> "Name", ImagePadding -> 20] Say, we initially choose vertices "1" and "3" and "7", then the subgraph has source/sinks: Subgraph[gr, {"3", "1", "7"}, VertexLabels -> "Name", ImagePadding -> 20] Now, a possible completion of the graph without sinks/sources would be: sgr=Subgraph[gr, {"3", "1", "9", "7"}, VertexLabels -> "Name", ImagePadding -> 20] HighlightGraph[gr, sgr] I understand that this problem might not have a unique solution. Any solution is fine for me. (The upgraded Graph capabilities of V8 are still mostly unexplored area for me, so I apologize for not having a working initial solution. My V7 approach stopped working so I hesitate in postin it here.) I need to do this for a large set of directed edges (10000+) so performance might be an issue as well. Edit(new): Following up on Szabolcs comments & answer the following works very well. badguys[gr_] := Union[sinks[gr], sources[gr]]healthyGraph[gr_] := FixedPoint[VertexDelete[#, badguys[#]] &, gr]completeNetwork::ggraphx = "At least one of the vertices `1` is a sink or source.";completeNetworkStep[g_?GraphQ, list_] /; And @@ (MemberQ[VertexList[g], #] & /@ list) := Module[{clist}, (* connect the vertices*) clist = connect[g, list]; (*remove sinks*) clist = FixedPoint[step[sinks][g, #] &, clist]; (*remove sources*) clist = FixedPoint[step[sources][g, #] &, clist]]completeNetwork[g_?GraphQ, list_] /; And @@ (MemberQ[VertexList[g], #] & /@ list) := Module[{hgr}, (* clean the network from sinks/sources*) hgr = healthyGraph[g]; (* check if list of vertices is still part of the healthy graph*) If[ And @@ (MemberQ[VertexList[hgr], #] & /@ list), FixedPoint[completeNetworkStep[hgr, #] &, list], Message[completeNetwork::ggraphx, list]; {}] ]ShowCompleteSubgraph[g_?GraphQ, list_] := HighlightGraph[g, Subgraph[g, completeNetwork[g, list]]] In many instances the algorithm works very good. Some problems remain though. 
Consider the following setup: vertices = {30, 43, 57, 1, 75, 24, 74, 94, 62, 47, 51, 89, 95, 87, 5, 73, 80, 91, 3, 67, 4, 8, 93, 18, 85, 49, 39, 13, 45, 79, 96, 98, 81, 19, 21, 15, 10, 60, 77, 76};edges = {85 -> 4, 94 -> 95, 45 -> 18, 75 -> 3, 80 -> 30, 15 -> 80, 51 -> 21, 15 -> 43, 13 -> 95, 75 -> 91, 4 -> 30, 95 -> 76, 94 -> 51, 95 -> 21, 30 -> 45, 81 -> 96, 39 -> 13, 89 -> 1, 76 -> 3, 96 -> 47, 67 -> 77, 67 -> 10, 4 -> 24, 57 -> 89, 73 -> 95, 89 -> 51, 45 -> 80, 21 -> 8, 74 -> 73, 98 -> 96, 4 -> 76, 77 -> 79, 43 -> 93, 15 -> 19, 3 -> 57, 76 -> 15, 94 -> 24, 45 -> 15, 75 -> 89, 73 -> 60, 3 -> 49, 98 -> 10, 1 -> 43, 10 -> 15, 49 -> 5, 8 -> 79, 51 -> 10, 60 -> 51, 3 -> 13, 60 -> 43, 96 -> 62, 57 -> 4, 45 -> 95, 67 -> 5, 1 -> 4, 98 -> 30, 39 -> 75, 39 -> 18, 89 -> 75, 89 -> 15, 43 -> 39, 60 -> 10, 91 -> 39, 85 -> 8, 47 -> 89, 57 -> 85, 76 -> 39, 98 -> 95, 51 -> 73, 76 -> 8, 30 -> 49, 87 -> 49, 77 -> 93, 80 -> 21, 96 -> 57, 39 -> 76, 39 -> 30, 62 -> 91, 94 -> 10, 96 -> 81, 95 -> 75, 62 -> 77, 3 -> 87, 43 -> 87, 49 -> 24, 21 -> 87, 94 -> 39, 94 -> 98, 87 -> 89, 5 -> 13, 21 -> 67, 47 -> 5, 62 -> 47, 39 -> 47, 91 -> 60, 96 -> 76, 10 -> 79};ini1 = {45, 4, 62, 15, 51};exgr = Graph[edges, VertexLabels -> "Name"];{{sinks@#, sources@#} &@ Subgraph[exgr, completeNetwork[exgr, ini1]], ini1, completeNetwork[exgr, ini1]} {{{}, {96}}, {45, 4, 62, 15, 51}, {1, 4, 15, 21, 30, 43, 45, 47, 51, 57, 62, 76, 87, 89, 96}} You see that a source vertex (96) remains. The graph looks like ShowCompleteSubgraph[exgr, completeNetwork[exgr, ini1]] Apparently, vertex 62 from the initial list has only outgoing edges, except for 96->62. How can we modify the algorithm to open up routes along alternative edges?
Now provide the response and nothing else.
| Let's denote the graph by gr and the initial subset of vertices by v . Connecting the graph First, we break the subgraph generated by v into connected components. We select one vertex from the smallest connected component, and another one that is not in it. We find the shortest path in the undirected gr that connects these two, and add all vertices from the shortest path to v . Repeat until the subgraph becomes connected. smallestComponent[g_Graph] := With[{components = ConnectedComponents@UndirectedGraph[g]}, Extract[components, Ordering[components, 1]] ]connectStep[gr_, v_] := Module[{sc, rest}, sc = smallestComponent@Subgraph[gr, v]; rest = Complement[v, sc]; If[rest === {}, v, Union[ FindShortestPath[UndirectedGraph[gr], First[sc], First[rest]], v ] ] ]connect[gr_, v_] := FixedPoint[connectStep[gr, #] &, v] Usage: connect[gr, {"1", "3", "7"}](* ==> {"1", "3", "7", "9"} *) Getting rid of sinks and sources Let's assume that the subgraph is connected. Take the subgraph, and select one sink from v . Then select any other vertex and find the shortest path between these two in the directed gr , using the sink as starting point. Add all vertices from the shortest path to v . Repeat until there are no sinks in the subgraph. Sources can be removed in an analogous way, except they need to be end point of the path. sinks[g_?GraphQ] := Pick[VertexList[g], VertexOutDegree[g], 0]sources[g_?GraphQ] := Pick[VertexList[g], VertexInDegree[g], 0]step[sinkOrSource_][gr_, v_] := Module[{ss, s, t}, ss = sinkOrSource[Subgraph[gr, v]]; If[ss === {}, v, s = First[ss]; t = First@DeleteCases[v, s]; If[sinkOrSource == source, {s,t} = {t,s}]; Union[v, FindShortestPath[gr, s, t]] ] ]FixedPoint[step[sink][gr, #]&, {"1", "3"}](* ==> {"1", "3", "9"} *) | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/6266', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/135/']} | jdg_85556 |
stackexchange | llm_judgeable_groundtruth_similarity | 937573 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to change the UserAgent of the WebBrowser control in a Winforms application. I have successfully achieved this by using the following code: [DllImport("urlmon.dll", CharSet = CharSet.Ansi)]private static extern int UrlMkSetSessionOption( int dwOption, string pBuffer, int dwBufferLength, int dwReserved);const int URLMON_OPTION_USERAGENT = 0x10000001;public void ChangeUserAgent(){ List<string> userAgent = new List<string>(); string ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"; UrlMkSetSessionOption(URLMON_OPTION_USERAGENT, ua, ua.Length, 0);} The only problem is that this only works once. When I try to run the ChangeUserAgent() method for the second time it doesn't work. It stays set to the first changed value. This is quite annoying and I've tried everything but it just won't change more than once. Does anyone know of a different, more flexible approach? Thanks
Now provide the response and nothing else.
| I'm not sure whether I should just copy/paste from a website , but I'd rather leave the answer here, instead of a link. If anyone can clarify in comments, I'll be much obliged. Basically, you have to extend the WebBrowser class. public class ExtendedWebBrowser : WebBrowser{ bool renavigating = false; public string UserAgent { get; set; } public ExtendedWebBrowser() { DocumentCompleted += SetupBrowser; //this will cause SetupBrowser to run (we need a document object) Navigate("about:blank"); } void SetupBrowser(object sender, WebBrowserDocumentCompletedEventArgs e) { DocumentCompleted -= SetupBrowser; SHDocVw.WebBrowser xBrowser = (SHDocVw.WebBrowser)ActiveXInstance; xBrowser.BeforeNavigate2 += BeforeNavigate; DocumentCompleted += PageLoaded; } void PageLoaded(object sender, WebBrowserDocumentCompletedEventArgs e) { } void BeforeNavigate(object pDisp, ref object url, ref object flags, ref object targetFrameName, ref object postData, ref object headers, ref bool cancel) { if (!string.IsNullOrEmpty(UserAgent)) { if (!renavigating) { headers += string.Format("User-Agent: {0}\r\n", UserAgent); renavigating = true; cancel = true; Navigate((string)url, (string)targetFrameName, (byte[])postData, (string)headers); } else { renavigating = false; } } }} Note: To use the method above you’ll need to add a COM reference to “Microsoft Internet Controls”. He mentions your approach too, and states that the WebBrowser control seems to cache this user agent string, so it will not change the user agent without restarting the process. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/937573', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/111624/']} | jdg_85557 |
stackexchange | llm_judgeable_groundtruth_similarity | 478483 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $S$ be the set of all sequences $x=\{x_n\}$ of real numbers such that only a finite number of the $x_n$ are nonzero. Define $d(x,y)=\max|x_n-y_n|$. Is the space complete? Completeness means that any Cauchy sequence converges to a point in the space. Suppose we have a sequence $\{\textbf{a}_i\}_{i=0}^\infty$, where $\textbf{a}_i=(a_{i1},a_{i2},\ldots)$. Since $\{\textbf{a}_i\}$ is Cauchy, for any fixed $k$, the real sequence $a_{1k},a_{2k},\ldots$ is also Cauchy, hence converges to a real number $b_k$. I'm not sure, however, whether the sequence $\{\textbf{a}_i\}$ converges to the sequence $\textbf{b}=(b_1,b_2,\ldots)$, or whether the sequence $\textbf{b}$ belongs to $S$.
Now provide the response and nothing else.
| HINT: What if the first $n$ terms of $\mathbf{a}_n$ are the first $n$ terms of the harmonic series, the remaining terms all being $0$? $$\begin{align*}\mathbf{a}_1&=\langle 1,0,0,0,\ldots\rangle\\\mathbf{a}_2&=\left\langle 1,\frac12,0,0,0,\ldots\right\rangle\\\mathbf{a}_3&=\left\langle 1,\frac12,\frac13,0,0,0,\ldots\right\rangle\\&\;\vdots\end{align*}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/478483', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/66189/']} | jdg_85558 |
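One way to finish the hinted argument, as a short worked check:

```latex
% For m > n the two sequences agree in the first n places, so
\[
  d(\mathbf{a}_n, \mathbf{a}_m) = \max_k |a_{nk} - a_{mk}| = \frac{1}{n+1} \longrightarrow 0,
\]
% hence the sequence is Cauchy in S. Its only candidate limit is the coordinatewise limit
\[
  \mathbf{b} = \left(1, \tfrac{1}{2}, \tfrac{1}{3}, \ldots\right),
\]
% which has infinitely many nonzero entries, so it does not belong to S,
% and the space is therefore not complete.
```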
stackexchange | llm_judgeable_groundtruth_similarity | 33397778 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I came across an interesting secure coding rule in C++ which states: Do not reenter a function during the initialization of a static variable declaration. If a function is reentered during the constant initialization of a static object inside that function, the behavior of the program is undefined. Infinite recursion is not required to trigger undefined behavior, the function need only recur once as part of the initialization. The non_compliant example of the same is: #include <stdexcept>int fact(int i) noexcept(false) { if (i < 0) { // Negative factorials are undefined. throw std::domain_error("i must be >= 0"); } static const int cache[] = { fact(0), fact(1), fact(2), fact(3), fact(4), fact(5), fact(6), fact(7), fact(8), fact(9), fact(10), fact(11), fact(12), fact(13), fact(14), fact(15), fact(16) }; if (i < (sizeof(cache) / sizeof(int))) { return cache[i]; } return i > 0 ? i * fact(i - 1) : 1;} which according to the source gives the error: terminate called after throwing an instance of '__gnu_cxx::recursive_init_error' what(): std::exception when executed in Visual Studio 2013 . I tried similar code of my own and got the same error (compiled using g++ and executed, on Ubuntu ). I am doubtful if my understanding is correct with respect to this concept as I am not well-versed with C++. According to me, since the cache array is constant, which means it can be read-only and needs to be initialized only once as static, it is getting initialized again and again as the values for this array is the value returned by each of the comma-separated recursive function calls which is against the behavior of the declared array. Thus, it gives undefined behavior which is also stated in the rule. What is a better explanation for this?
Now provide the response and nothing else.
| In order to execute fact() , you need to first statically initialize fact::cache[] . In order to initially fact::cache , you need to execute fact() . There's a circular dependency there, which leads to the behavior you see. cache will only be initialized once, but it requires itself to be initialized in order to initialize itself. Even typing this makes my head spin. The right way to introduce a cache table like this is to separate it into a different function: int fact(int i) noexcept(false) { if (i < 0) { // Negative factorials are undefined. throw std::domain_error("i must be >= 0"); } return i > 0 ? i * fact(i - 1) : 1;} int memo_fact(int i) noexcept(false) { static const int cache[] = { fact(0), fact(1), fact(2), fact(3), fact(4), fact(5), fact(6), fact(7), fact(8), fact(9), fact(10), fact(11), fact(12), fact(13), fact(14), fact(15), fact(16) }; if (i < (sizeof(cache) / sizeof(int))) { return cache[i]; } else { return fact(i); } } Here, memo_fact::cache[] will only be initialized once - but its initialization is no longer dependent on itself. So we have no issue. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33397778', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1493850/']} | jdg_85559 |
stackexchange | llm_judgeable_groundtruth_similarity | 8717675 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a workqueue table that has a workid column. The workID column has values that increment automatically. Is there a way I can run a query in the backend to insert a new row and have the workID column increment automatically? When I try to insert a null, it throws error ORA01400 - Cannot insert null into workid. insert into WORKQUEUE (facilitycode,workaction,description) values ('J', 'II', 'TESTVALUES') What I have tried so far - I tried to look at the table details and didn't see any auto-increment. The table script is as follow "WORKID" NUMBER NOT NULL ENABLE, Database: Oracle 10g Screenshot of some existing data. ANSWER: I have to thank each and everyone for the help. Today was a great learning experience and without your support, I couldn't have done. Bottom line is, I was trying to insert a row into a table that already has sequences and triggers. All I had to do was find the right sequence, for my question, and call that sequence into my query. The links you all provided me helped me look these sequences up and find the one that is for this workid column. Thanks to you all, I gave everyone a thumbs up, I am able to tackle another dragon today and help patient care take a step forward!"
Now provide the response and nothing else.
| To get an auto increment number you need to use a sequence in Oracle.(See here and here ). CREATE SEQUENCE my_seq;SELECT my_seq.NEXTVAL FROM DUAL; -- to get the next value-- use in a trigger for your table demoCREATE OR REPLACE TRIGGER demo_increment BEFORE INSERT ON demoFOR EACH ROWBEGIN SELECT my_seq.NEXTVAL INTO :new.id FROM dual;END;/ | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8717675', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/684899/']} | jdg_85560 |
stackexchange | llm_judgeable_groundtruth_similarity | 640520 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the basic concept of transmitting binary data wirelessly, i.e. over radio? Here is my understanding so far. Any digital data (image, sound, video) is represented by binary digits (bits). A binary digit has only two possible states, zero or one. To transmit binary data via radio (wirelessly), we have to encode the binary sequence onto a waveform. We can do this by manipulating either the amplitude of the wave (AM) or its frequency (FM); I feel FM is the better choice. How a binary state maps to a frequency is a matter of convention. Here is mine: if the bit to send is one, the frequency is higher, otherwise it is lower. The receiver would then need some threshold value to tell the higher frequency from the lower one in order to decode the bits. I expect to send and receive data much like a Universal Synchronous/Asynchronous Receiver Transmitter (USART) does, except that a USART works over a wire and uses voltage levels rather than radio frequency to encode the bits. Suppose I have a USART at the standard baud rate of 9600; how do I convert the logic voltage to a frequency? Basically, if the voltage is higher, the frequency should be higher, otherwise lower. I also expect to transmit over a distance of more than 10 kilometers. Another question: what tools and modules should I prepare for this project? Here are my hypotheses: a microcontroller, an antenna for the receiver, and an antenna for the transmitter. Maybe this project could simply be done using LoRaWAN. Edit: my purpose is forwarding data from/to the internet, because one node has internet access while the other node has no internet coverage.
Now provide the response and nothing else.
| The component that you're looking for is called a Voltage Controlled Oscillator, or VCO. It's an electronic oscillator whose frequency is controlled by an external voltage signal. You can either build the VCO from scratch or simply buy one. Keep in mind that you have to comply with your country's regulations on radio frequency transmitters if you decide to build your own transmitters. There are likely limits on the transmit power and the frequency bands that you may use without a license. 10 kilometer usable range might not be legally possible without a license. A few tens of meters is more realistic with a low-power (legal) DIY transmitter. You might be better off using the internet as the communication medium for your project. If you still want to go the DIY route, the most important thing is to make sure that you don't have any unwanted sidebands (i.e. from distortion), which might interfere with other signals outside of the band that you're actually transmitting on. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/640520', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/270326/']} | jdg_85561 |
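As a purely illustrative sketch of the frequency-shift idea described in the question (bit 1 maps to a higher tone, bit 0 to a lower tone), here is how the encoding step might look in software; the sample rate, the two frequencies and the 9600-baud bit clock are arbitrary demo values, and real transmission would still need a VCO or radio module plus the regulatory care mentioned above:

```python
import numpy as np

SAMPLE_RATE = 48_000           # samples per second (assumed)
BAUD = 9_600                   # bits per second, as in the USART example
F_LOW, F_HIGH = 5_000, 10_000  # Hz, arbitrary demo tones

def fsk_modulate(bits):
    """Binary FSK: one tone per bit value, phase kept continuous."""
    samples_per_bit = SAMPLE_RATE // BAUD
    phase = 0.0
    out = []
    for bit in bits:
        freq = F_HIGH if bit else F_LOW
        for _ in range(samples_per_bit):
            out.append(np.sin(phase))
            phase += 2 * np.pi * freq / SAMPLE_RATE
    return np.array(out)

waveform = fsk_modulate([1, 0, 1, 1, 0, 0, 1, 0])
print(waveform.shape)          # 8 bits x 5 samples per bit -> (40,)
```

Demodulation does the reverse: estimate the dominant frequency in each bit period and compare it against a threshold between the two tones, which is exactly the "threshold value" the question anticipates.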
stackexchange | llm_judgeable_groundtruth_similarity | 5850000 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Say I have an array in C: int array[6] = {1,2,3,4,5,6} How could I split this into {1,2,3} and {4,5,6}? Would this be possible using memcpy? Thank you, nonono
Now provide the response and nothing else.
| Sure. The straightforward solution is to allocate two new arrays using malloc and then using memcpy to copy the data into the two arrays. int array[6] = {1,2,3,4,5,6}int *firstHalf = malloc(3 * sizeof(int));if (!firstHalf) { /* handle error */}int *secondHalf = malloc(3 * sizeof(int));if (!secondHalf) { /* handle error */}memcpy(firstHalf, array, 3 * sizeof(int));memcpy(secondHalf, array + 3, 3 * sizeof(int)); However, in case the original array exists long enough, you might not even need to do that. You could just 'split' the array into two new arrays by using pointers into the original array: int array[6] = {1,2,3,4,5,6}int *firstHalf = array;int *secondHalf = array + 3; | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5850000', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/329965/']} | jdg_85562 |
stackexchange | llm_judgeable_groundtruth_similarity | 2709575 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a text file that contains about 100000 articles.The structure of file is: .Document ID 42944-YEAR:5.Date 03\08\11.Cat politicalArticle Content 1.Document ID 42945-YEAR:5.Date 03\08\11.Cat politicalArticle Content 2 I want to open this file in c# for processing it line by line.I tried this code: String[] FileLines = File.ReadAllText( TB_SourceFile.Text).Split(Environment.NewLine.ToCharArray()); But it says: Exception of type 'System.OutOfMemoryException' was thrown. The question is How can I open this file and read it line by line. File Size: 564 MB (591,886,626 bytes) File Encoding: UTF-8 File contains Unicode characters.
Now provide the response and nothing else.
| You can open the file and read it as a stream rather than loading everything into memory all at once. From MSDN: using System;using System.IO;class Test { public static void Main() { try { // Create an instance of StreamReader to read from a file. // The using statement also closes the StreamReader. using (StreamReader sr = new StreamReader("TestFile.txt")) { String line; // Read and display lines from the file until the end of // the file is reached. while ((line = sr.ReadLine()) != null) { Console.WriteLine(line); } } } catch (Exception e) { // Let the user know what went wrong. Console.WriteLine("The file could not be read:"); Console.WriteLine(e.Message); } }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2709575', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/148203/']} | jdg_85563 |
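The same streaming idea is language-agnostic; for comparison, here is a minimal Python sketch that walks a large UTF-8 file one line at a time (the path and the ".Document ID" marker are placeholders taken from the question's file layout):

```python
count = 0
with open("articles.txt", encoding="utf-8") as f:   # placeholder path
    for line in f:               # the file object yields one line at a time
        if line.startswith(".Document ID"):
            count += 1
print(count, "articles found")
```

Only one line is held in memory at a time, which is what avoids the out-of-memory failure that loading the whole file at once produces.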
stackexchange | llm_judgeable_groundtruth_similarity | 2067267 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to build a synthesizer for the iPhone. I understand that it's possible to use custom audio units for the iPhone. At first glance, this sounds promising, since there's lots and lots of Audio Unit programming resources available. However, using custom audio units on the iPhone seems a bit tricky ( see: http://lists.apple.com/archives/Coreaudio-api/2008/Nov/msg00262.html ) This seems like the sort of thing that loads of people must be doing, but a simple google search for "iphone audio synthesis" doesn't turn up anything along the lines of a nice and easy tutorial or recommended tool kit. So, anyone here have experience synthesizing sound on the iPhone? Are custom audio units the way to go, or is there another, simpler approach I should consider?
Now provide the response and nothing else.
| I'm also investigating this. I think the AudioQueue API is probably the way to go. Here's as far as I got, seems to work okay. File: BleepMachine.h //// BleepMachine.h// WgHeroPrototype//// Created by Andy Buchanan on 05/01/2010.// Copyright 2010 Andy Buchanan. All rights reserved.//#include <AudioToolbox/AudioToolbox.h>// Class to implement sound playback using the AudioQueue API's// Currently just supports playing two sine wave tones, one per// stereo channel. The sound data is liitle-endian signed 16-bit @ 44.1KHz//class BleepMachine{ static void staticQueueCallback( void* userData, AudioQueueRef outAQ, AudioQueueBufferRef outBuffer ) { BleepMachine* pThis = reinterpret_cast<BleepMachine*> ( userData ); pThis->queueCallback( outAQ, outBuffer ); } void queueCallback( AudioQueueRef outAQ, AudioQueueBufferRef outBuffer ); AudioStreamBasicDescription m_outFormat; AudioQueueRef m_outAQ; enum { kBufferSizeInFrames = 512, kNumBuffers = 4, kSampleRate = 44100, }; AudioQueueBufferRef m_buffers[kNumBuffers]; bool m_isInitialised; struct Wave { Wave(): volume(1.f), phase(0.f), frequency(0.f), fStep(0.f) {} float volume; float phase; float frequency; float fStep; }; enum { kLeftWave = 0, kRightWave = 1, kNumWaves, }; Wave m_waves[kNumWaves];public: BleepMachine(); ~BleepMachine(); bool Initialise(); void Shutdown(); bool Start(); bool Stop(); bool SetWave( int id, float frequency, float volume );};// Notes by name. Integer value is number of semitones above A.enum Note{ A = 0, Asharp, B, C, Csharp, D, Dsharp, E, F, Fsharp, G, Gsharp, Bflat = Asharp, Dflat = Csharp, Eflat = Dsharp, Gflat = Fsharp, Aflat = Gsharp,};// Helper function calculates fundamental frequency for a given notefloat CalculateFrequencyFromNote( SInt32 semiTones, SInt32 octave=4 );float CalculateFrequencyFromMIDINote( SInt32 midiNoteNumber ); File:BleepMachine.mm //// BleepMachine.mm// WgHeroPrototype//// Created by Andy Buchanan on 05/01/2010.// Copyright 2010 Andy Buchanan. All rights reserved.//#include "BleepMachine.h"void BleepMachine::queueCallback( AudioQueueRef outAQ, AudioQueueBufferRef outBuffer ){ // Render the wave // AudioQueueBufferRef is considered "opaque", but it's a reference to // an AudioQueueBuffer which is not. // All the samples manipulate this, so I'm not quite sure what they mean by opaque // saying.... 
SInt16* coreAudioBuffer = (SInt16*)outBuffer->mAudioData; // Specify how many bytes we're providing outBuffer->mAudioDataByteSize = kBufferSizeInFrames * m_outFormat.mBytesPerFrame; // Generate the sine waves to Signed 16-Bit Stero interleaved ( Little Endian ) float volumeL = m_waves[kLeftWave].volume; float volumeR = m_waves[kRightWave].volume; float phaseL = m_waves[kLeftWave].phase; float phaseR = m_waves[kRightWave].phase; float fStepL = m_waves[kLeftWave].fStep; float fStepR = m_waves[kRightWave].fStep; for( int s=0; s<kBufferSizeInFrames*2; s+=2 ) { float sampleL = ( volumeL * sinf( phaseL ) ); float sampleR = ( volumeR * sinf( phaseR ) ); short sampleIL = (int)(sampleL * 32767.0); short sampleIR = (int)(sampleR * 32767.0); coreAudioBuffer[s] = sampleIL; coreAudioBuffer[s+1] = sampleIR; phaseL += fStepL; phaseR += fStepR; } m_waves[kLeftWave].phase = fmodf( phaseL, 2 * M_PI ); // Take modulus to preserve precision m_waves[kRightWave].phase = fmodf( phaseR, 2 * M_PI ); // Enqueue the buffer AudioQueueEnqueueBuffer( m_outAQ, outBuffer, 0, NULL ); }bool BleepMachine::SetWave( int id, float frequency, float volume ){ if ( ( id < kLeftWave ) || ( id >= kNumWaves ) ) return false; Wave& wave = m_waves[ id ]; wave.volume = volume; wave.frequency = frequency; wave.fStep = 2 * M_PI * frequency / kSampleRate; return true;}bool BleepMachine::Initialise(){ m_outFormat.mSampleRate = kSampleRate; m_outFormat.mFormatID = kAudioFormatLinearPCM; m_outFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked; m_outFormat.mFramesPerPacket = 1; m_outFormat.mChannelsPerFrame = 2; m_outFormat.mBytesPerPacket = m_outFormat.mBytesPerFrame = sizeof(UInt16) * 2; m_outFormat.mBitsPerChannel = 16; m_outFormat.mReserved = 0; OSStatus result = AudioQueueNewOutput( &m_outFormat, BleepMachine::staticQueueCallback, this, NULL, NULL, 0, &m_outAQ ); if ( result < 0 ) { printf( "ERROR: %d\n", (int)result ); return false; } // Allocate buffers for the audio UInt32 bufferSizeBytes = kBufferSizeInFrames * m_outFormat.mBytesPerFrame; for ( int buf=0; buf<kNumBuffers; buf++ ) { OSStatus result = AudioQueueAllocateBuffer( m_outAQ, bufferSizeBytes, &m_buffers[ buf ] ); if ( result ) { printf( "ERROR: %d\n", (int)result ); return false; } // Prime the buffers queueCallback( m_outAQ, m_buffers[ buf ] ); } m_isInitialised = true; return true;}void BleepMachine::Shutdown(){ Stop(); if ( m_outAQ ) { // AudioQueueDispose also chucks any audio buffers it has AudioQueueDispose( m_outAQ, true ); } m_isInitialised = false;}BleepMachine::BleepMachine(): m_isInitialised(false), m_outAQ(0){ for ( int buf=0; buf<kNumBuffers; buf++ ) { m_buffers[ buf ] = NULL; }}BleepMachine::~BleepMachine(){ Shutdown();}bool BleepMachine::Start(){ OSStatus result = AudioQueueSetParameter( m_outAQ, kAudioQueueParam_Volume, 1.0 ); if ( result ) printf( "ERROR: %d\n", (int)result ); // Start the queue result = AudioQueueStart( m_outAQ, NULL ); if ( result ) printf( "ERROR: %d\n", (int)result ); return true;}bool BleepMachine::Stop(){ OSStatus result = AudioQueueStop( m_outAQ, true ); if ( result ) printf( "ERROR: %d\n", (int)result ); return true;}// A (A4=440)// A# f(n)=2^(n/12) * r// B where n = number of semitones// C and r is the root frequency e.g. 
440// C#// D frq -> MIDI note number// D# p = 69 + 12 x log2(f/440)// E// F // F#// G// G#//// MIDI Note ref: http://www.phys.unsw.edu.au/jw/notes.html//// MIDI Node numbers:// A3 57// A#3 58// B3 59// C4 60 <--// C#4 61// D4 62// D#4 63// E4 64// F4 65// F#4 66// G4 67// G#4 68// A4 69 <--// A#4 70// B4 71// C5 72float CalculateFrequencyFromNote( SInt32 semiTones, SInt32 octave ){ semiTones += ( 12 * (octave-4) ); float root = 440.f; float fn = powf( 2.f, (float)semiTones/12.f ) * root; return fn;}float CalculateFrequencyFromMIDINote( SInt32 midiNoteNumber ){ SInt32 semiTones = midiNoteNumber - 69; return CalculateFrequencyFromNote( semiTones, 4 );}//for ( SInt32 midiNote=21; midiNote<=108; ++midiNote )//{// printf( "MIDI Note %d: %f Hz \n",(int)midiNote,CalculateFrequencyFromMIDINote( midiNote ) );//} Update: Basic usage info Initialise. Somehere near the start, I'm using initFromNib: in my code m_bleepMachine = new BleepMachine;m_bleepMachine->Initialise();m_bleepMachine->Start(); Now the sound playback is running, but generating silence. In your code, call this when you want to change the tone generation m_bleepMachine->SetWave( ch, frq, vol ); where ch is the channel ( 0 or 1 ) where frq is the frequency to set in Hz where vol is the volume ( 0=-Inf db, 1=-0db ) At program termination delete m_bleepMachine; | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2067267', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/44683/']} | jdg_85564 |
stackexchange | llm_judgeable_groundtruth_similarity | 12331077 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Trying to use handlebars.js for templating but the library seems to ignore newlines. What is the correct way to deal with newlines? Should they be replaced manually after the templating action?
Now provide the response and nothing else.
| It doesn't do so automatically, but using the helpers feature this can be achieved: JS: Handlebars.registerHelper('breaklines', function(text) { text = Handlebars.Utils.escapeExpression(text); text = text.replace(/(\r\n|\n|\r)/gm, '<br>'); return new Handlebars.SafeString(text);}); HTML template: <div> {{breaklines description}}</div> | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/12331077', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/378594/']} | jdg_85565 |
stackexchange | llm_judgeable_groundtruth_similarity | 401415 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been told by another fellow C programmer to write large applications in several different .c and .h files, and then compile them together. They say it will run faster. Does a multifile application run faster than a singlefile one? If so, what makes it run faster? Also, what other benefits are there to multi file programming? Which platform(s) does multi-file C programs affect performance? Will a multi-file Windows application run faster than a single-file one? Will a multi-file MacOS application run faster than a single-file one? Will a multi-file Ubuntu application run faster than a single-file one?
Now provide the response and nothing else.
| There are a lot of technical reasons behind using multiple files when writing large complex systems. All of them are meaningless in the face of the best reason to use multiple files: Readability. When I write code that resides in one file I'm presenting what you need to understand to follow how this part of the system works. Every detail not in this file is abstracted away, represented with a good name that should ensure you can still understand what is happening here without poking your nose into the other files. If I've failed to do that I've written crappy code and you should call me out for it. In cases like that multiple files rarely do you any good. Without that consideration I can write the whole program in one file. The CPU wont care. It will just make humans miserable when they try to read it. The traditional technical reason is to separate code into independently deployable units that can change without having to redeploy the whole system. There are cases where that's very important such as when your software is burned on many chips and you don't want to throw away all the chips just because one needs to change. It's also true that being independently deployable allows compiles to go faster since you only have to recompile what changed. Even in those cases though, I'd still argue that the biggest benefit is creating a boundary that limits what you expect your readers to hold in their head at any one time. TL;DR If multi file programs annoy you because you have to keep looking in multiple files to understand them you're simply looking at poorly abstracted code with bad names. That shouldn't be what it feels like. Each file should tell one story from one perspective. | {} | {'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/401415', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/-1/']} | jdg_85566 |
stackexchange | llm_judgeable_groundtruth_similarity | 663582 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Calculate how many integers between $0$ to $9999$ that has the digits $2,5,8$. That is integers that has each of the three numbers at least once. This is similar to How many numbers between $0$ and $9999$ have either of the digits $2,5,8$ at least once - Check my answer but mine is quite different. My question is how do you calculate it by the 'straight forward' method ? I have a proposed solution: If the last digit is not one of the three then we have: $3!\cdot8$ possibilities. If the last digit is one of them, we have $(3\cdot3!)/2$ we divide by two for symmetry (2258 is counted twice for example). In total we get: 57 such integers.
Now provide the response and nothing else.
| No, with four different digits there are $7\times4!=168$ possibilities; with one of those three digits doubled there are $3\times4!/2!=36$ possibilities. That's $204$ in all. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/663582', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/105885/']} | jdg_85567 |
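Spelled out: pad every number to four digits; since 2, 5 and 8 must each appear, exactly one position is "free". Either that position holds one of the other seven digits (four distinct digits, $4!$ orders) or it repeats one of 2, 5, 8 (three choices, and $4!/2!$ orders because of the repeated digit):
$$7\cdot 4!+3\cdot\frac{4!}{2!}=168+36=204.$$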
stackexchange | llm_judgeable_groundtruth_similarity | 1110998 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there an elementary proof that $x^x \geq x!$ for natural numbers $x$? I am not looking for a heuristic argument such as the one that there are $x$ terms in $x^x$ and $x!$ and since almost every term in $x \times x \times .... \times x$ is greater than almost every term in $x(x-1)(x-2)...(1)$, then $x^x \geq x!$
Now provide the response and nothing else.
| For $n=1$ is valid.$$1!\leq 1^1$$Assume for $n$ is valid:$$n!\leq n^n$$ Multiply by $n+1$ both sides $$(n+1)!\leq n^n(n+1)\leq(n+1)^n(n+1)=(n+1)^{n+1}$$ The last step is because if $n\leq n+1$ then $n^n\leq (n+1)^n$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1110998', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_85568 |
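As a quick numerical sanity check of the inequality for small values (not a substitute for the induction above):

```python
from math import factorial

for n in range(1, 11):
    assert factorial(n) <= n ** n      # n! <= n^n
    print(n, factorial(n), n ** n)
```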
stackexchange | llm_judgeable_groundtruth_similarity | 44526103 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use pyplot.arrow do draw some straight arrows, e.g., import matplotlib.pyplot as pltimport numpy as npv={}for i in range (1,4): v[i]=np.array([np.cos(-2*np.pi/3*i),np.sin(-2*np.pi/3*i)])plt.arrow(.85*(.05*v[2]+.95*v[1])[0],.85*(.05*v[2]+.95*v[1])[1],.85*.9*(v[2]-v[1])[0],.85*.9*(v[2]-v[1])[1],width=0,head_width=.03,head_length=.045,length_includes_head=True,color="black")plt.arrow(.85*(.05*v[3]+.95*v[2])[0],.85*(.05*v[3]+.95*v[2])[1],.85*.9*(v[3]-v[2])[0],.85*.9*(v[3]-v[2])[1],width=0,head_width=.03,head_length=.045,length_includes_head=True,color="black")plt.arrow(.85*(.05*v[1]+.95*v[3])[0],.85*(.05*v[1]+.95*v[3])[1],.85*.9*(v[1]-v[3])[0],.85*.9*(v[1]-v[3])[1],width=0,head_width=.03,head_length=.045,length_includes_head=True,color="black")plt.axes().set_xlim(-.5,1)plt.axes().set_ylim(-np.sqrt(3)/2,np.sqrt(3)/2)plt.axes().set_aspect(1)plt.show() Now I want to also draw some arrows that have circular curvature instead of being straight. I see that I can achieve this with pyplot.annotate() or patches.FancyArrowPatch with connectionstyle="arc3,rad=.5" or so. But these arrows look completely different from the pyplot.arrow s and do not fit with the rest of my figures. And I don't know how I could pass something like connectionstyle to pyplot.arrow . Is there a way to draw curved arrows that look exactly like those that I get from pyplot.arrow ?
Now provide the response and nothing else.
| You cannot plot curved arrows with pyplot.arrow . However, patches.FancyArrowPatch should offer all the options to get any arrow style you want, so the idea would be to use a FancyArrowPatch for the straight arrows as well, such that you can use the same style for all arrows. import matplotlib.pyplot as pltimport matplotlib.patches as patchesplt.axes().set_xlim(-.5,0.5)plt.axes().set_ylim(-0.9,0.7)plt.axes().set_aspect(1)style = "Simple, tail_width=0.5, head_width=4, head_length=8"kw = dict(arrowstyle=style, color="k")a1 = patches.FancyArrowPatch((-0.4, -0.6), (0, 0.6), **kw)a2 = patches.FancyArrowPatch((0, 0.6), (0.4, -0.6), **kw)a3 = patches.FancyArrowPatch((-0.4, -0.6), (0.4, -0.6), connectionstyle="arc3,rad=.5", **kw)for a in [a1, a2, a3]: plt.gca().add_patch(a)plt.show() | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44526103', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8155599/']} | jdg_85569 |
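Applied to the triangle from the question, the same style dictionary can drive every edge; a straight edge is just connectionstyle "arc3" with rad=0, so straight and curved arrows come out visually identical. The rad=0.3 on the last edge is an arbitrary choice for illustration, and the question's vertex construction and scaling factors are kept as-is:

```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches

# Vertices of the triangle from the question.
v = {i: np.array([np.cos(-2 * np.pi / 3 * i), np.sin(-2 * np.pi / 3 * i)])
     for i in range(1, 4)}

style = "Simple, tail_width=0.5, head_width=4, head_length=8"
kw = dict(arrowstyle=style, color="black")

fig, ax = plt.subplots()
for a, b, rad in [(1, 2, 0.0), (2, 3, 0.0), (3, 1, 0.3)]:
    start = 0.85 * (0.05 * v[b] + 0.95 * v[a])
    end = start + 0.85 * 0.9 * (v[b] - v[a])
    ax.add_patch(patches.FancyArrowPatch(tuple(start), tuple(end),
                                         connectionstyle=f"arc3,rad={rad}",
                                         **kw))
ax.set_xlim(-0.5, 1)
ax.set_ylim(-np.sqrt(3) / 2, np.sqrt(3) / 2)
ax.set_aspect(1)
plt.show()
```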
stackexchange | llm_judgeable_groundtruth_similarity | 1910642 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the wikipedia it states that the rate of divergence of Harmonic series is $\sum_{k=1}^n \frac{1}{k} < \log n +1 $ I have tried to find a reference, other than wikipedia, for this bound but with no success. Which is a good reference for the above bound?
Now provide the response and nothing else.
| This bound is obtained by approximating the sum by definite integrals (see here ). We obtain$$\log(n+1)=\int_1^{n+1}\frac1x\mathrm dx\le\sum_{k=1}^n\frac1k\le1+\int_1^n\frac1x\mathrm dx=1+\log n.$$This also shows that$$\sum_{k=1}^n\frac1k\sim \log n\quad\text{as}\quad n\to\infty,$$where $a_n\sim b_n$ denotes the fact that $a_n/b_n\to1$ as $n\to\infty$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1910642', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/71138/']} | jdg_85570 |
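The two bounds come from comparing each term $\frac1k$ with the integral of $\frac1x$ over a neighbouring unit interval: since $\frac1x$ is decreasing,
$$\int_k^{k+1}\frac{\mathrm dx}{x}\le\frac1k\quad(k\ge1),\qquad\frac1k\le\int_{k-1}^{k}\frac{\mathrm dx}{x}\quad(k\ge2).$$
Summing the first inequality over $k=1,\dots,n$ gives $\log(n+1)\le\sum_{k=1}^n\frac1k$, and summing the second over $k=2,\dots,n$ and adding the first term $1$ gives $\sum_{k=1}^n\frac1k\le1+\log n$, which is exactly the bound quoted from Wikipedia.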
stackexchange | llm_judgeable_groundtruth_similarity | 7400 |
Below is a question asked on the forum earthscience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can someone give a technical explanation of the attached wind rose? I understand that the wind is blowing between WNW & SSW with a speed of 7-11 km/h. For the 11-16 km/h range it's blowing from SW & SSW. For the 4-7 km/h range is it blowing between WNW & ESE? Is this all, or is there some info I am missing? What do the spikes mean?
Now provide the response and nothing else.
| The wind rose shows quite a few things. The goal of a wind rose is to help you understand the frequency, strength, and direction the wind comes from . Why the direction the wind is from , and not to ? Well, they are often used to visualize data for building design, and there we care about resisting loads, so we use the direction the wind comes from. They are also used in meteorology: different winds bring different weather (north winds are cold in the Northern hemisphere), and so it makes sense to use from there as well. Aviators care about the winds they will encounter taking off or landing, and so they want to know what they have to resist. There are many more reasons, some of which are discussed at https://en.wikipedia.org/wiki/Wind_rose . Some other things to know as you look at the chart: This chart is based on a time series of winds at the location. The data have been placed ('binned') into wind speed ranges, and direction ranges. For example, a recorded wind speed and direction of 12.6 km/hr from 92° will be placed into the 11-16 km/hr wind speed range, and W direction range. This makes it much easier to take in the results. The different coloured lines are for the different wind speed ranges, measured in kilometers per hour. Winds are shown in small sectors of wind direction. In this plot the bins are 22.5°, which corresponds to 16 subdivisions of the compass rose. The distance from the center of the rose gives the frequency of winds in each wind direction sector; this is why the north axis has a number on it that increases from 0 to 4000; this is the number of records. To use the wind rose, you can start by looking at winds of a certain speed range, for example the pale green series. This is winds in the range 7-11 km/hr. Now, follow it clockwise around the plot. You see that there is a peak in winds with a speed of 7-11 km/hr to the north, another to the E, and also a lot of 7-11 km/hr winds between WNW and SSE. You could also look at a particular wind direction, for example for winds from the west. Maybe your building has a very exposed west wall? Anyway, you would see then that the most common winds are in the range 7-11 km/hr (green line), followed by 11-16 km/hr (purple), then 4-7 km/hr (red), then 0-4 km/hr (blue), and so-on. If we wanted to, we could get the relative frequency of winds in this sector by looking at the distance from the center of the plot for each line and dividing the distance for one line by the sum of all lines in that direction sector. We can also see some information about the maximum wind speed. For example, there are no winds over 22 km/hr at this location, because there is no orange line. But, there is a light blue line, which shows winds from 16-22 km/hr, so we can say that the maximum wind speed that's been see at this location is less than 22 km/hr, but more than 16 km/hr. This wind rose is quite hard to use. Often we would plot the wind speeds as stacked bars so that you can see what is the direction from which winds come most often. This is used in wind energy, for example. See https://stackoverflow.com/questions/17266780/wind-rose-with-ggplot-r for an example of a different way to show this data. | {} | {'log_upvote_score': 4, 'links': ['https://earthscience.stackexchange.com/questions/7400', 'https://earthscience.stackexchange.com', 'https://earthscience.stackexchange.com/users/5122/']} | jdg_85571 |
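To make the "stacked bars" suggestion at the end concrete, here is a small matplotlib sketch; the sixteen 22.5° sectors match the rose in the question, but the counts are made-up random placeholders rather than real observations:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sectors = np.arange(16) * 22.5                  # sector centres in degrees
counts_7_11 = rng.integers(0, 400, size=16)     # fake counts, 7-11 km/h
counts_11_16 = rng.integers(0, 200, size=16)    # fake counts, 11-16 km/h

theta = np.deg2rad(sectors)
width = np.deg2rad(22.5)

ax = plt.subplot(projection="polar")
ax.set_theta_zero_location("N")    # 0 degrees at the top, like a compass
ax.set_theta_direction(-1)         # angles increase clockwise
ax.bar(theta, counts_7_11, width=width, label="7-11 km/h")
ax.bar(theta, counts_11_16, width=width, bottom=counts_7_11,
       label="11-16 km/h")         # stacked on top of the previous range
ax.legend(loc="lower right")
plt.show()
```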
stackexchange | llm_judgeable_groundtruth_similarity | 2839354 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I faced this factorization formula $$ax^2 + bx + c = a(x − x_1)(x − x_2)$$ where $x_1$ is the first solution and $x_2$ is the second one. But I don't understand how the formula gets derived... Could anybody explain it to me? Thanks! I know I can solve a quadratic equation and substitute the solutions into this formula $a(x − x_1)(x − x_2)$, but I want to understand how it works.
Now provide the response and nothing else.
| Since $$x_1+x_2 = -{b\over a} \implies b= -a(x_1+x_2)$$ and $$x_1x_2 = {c\over a} \implies c= ax_1x_2$$ so \begin{eqnarray} ax^2+bx+c &=& ax^2-a(x_1+x_2)x+ax_1x_2\\ &=& a(x^2-x_1x-x_2x+x_1x_2)\\ &=& a(x-x_1)(x-x_2)\end{eqnarray} | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2839354', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/488749/']} | jdg_85572 |
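A concrete instance (chosen arbitrarily) shows the identity at work. Take $2x^2-10x+12$, whose roots are $x_1=2$ and $x_2=3$:
$$2(x-2)(x-3)=2\left(x^2-5x+6\right)=2x^2-10x+12,$$
with $x_1+x_2=5=-\frac{b}{a}$ and $x_1x_2=6=\frac{c}{a}$, exactly as used above.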
stackexchange | llm_judgeable_groundtruth_similarity | 2236781 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am doing this proof by induction but I seem to be getting stuck. My attempt: For $n=0$, we can see it is true as $5|0$. Setting $M_n = 4^{6n}+8^{2n^2}-(-1)^{n^3}-1$ We will assume $5|M_n$. Now to show it is true for $n+1$: $M_{n+1}= 4^{6(n+1)}+8^{2(n+1)^2}-(-1)^{(n+1)^3}-1$. I'm aware I need to get factors of $M_n$ and $5$ out but it gets too messy so I feel as though I'm on the wrong track. I expanded to find: $\bbox[yellow]{4^{6n}}\cdot4^6+\bbox[yellow]{8^{2n^2}}\cdot8^{2n}\cdot8-\bbox[yellow]{(-1)^{n^3}}\cdot(-1)^{3n^2+3n+1}-1$ I can see the highlighted terms make up $M_n$ but I don't see what I can do with that. I tried to rewrite the highlighted parts in terms of $M_n$ but it became way too messy, so I feel like there is an easier route. Any help is appreciated.
Now provide the response and nothing else.
| Work mod 5. Thus$$4^{6n} + 8^{2n^2} - (-1)^{n^3} - 1\equiv(-1)^{6n} + 3^{2n^2} - (-1)^{n^3} - 1\equiv 3^{2n^2} - (-1)^{n^3} \pmod{5}.$$Next write $3^{2n^2}$ as $9^{n^2}$ to get$$3^{2n^2} - (-1)^{n^3}= 9^{n^2} - (-1)^{n^3}\equiv (-1)^{n^2} - (-1)^{n^3} \pmod{5}.$$It only remains to note that this last quantity is $0$ by considering the cases of $n$ even and $n$ odd. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2236781', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/239590/']} | jdg_85573 |
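Spelling out that last step: $n$, $n^2$ and $n^3$ all have the same parity, so
$$n\text{ even: }(-1)^{n^2}-(-1)^{n^3}=1-1=0,\qquad n\text{ odd: }(-1)^{n^2}-(-1)^{n^3}=(-1)-(-1)=0,$$
and in either case the expression is divisible by $5$.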
stackexchange | llm_judgeable_groundtruth_similarity | 52398 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I understand the basics. A WAN is a network that connects multiple LANs together. Usually the router is the connection between the LAN and WAN. But I have some questions that books and the internet could not answer yet. What is a WAN Address? How can I distinguish between a WAN and LAN address? How does the whole WAN addressing work? (I imagined a WAN address is the address of a router and the router somewhat forwards a package to my LAN IP)
Now provide the response and nothing else.
| Routers route packets between networks. Routers could have all LAN connections, all WAN connections, or some combination. It doesn't matter, each router interface needs an address in the network connected to that interface. Because routers route packets between networks, each router interface is in a different network. The terms "WAN" (Wide Area Network) and "LAN" (Local Area Network) are somewhat subjective. It is generally accepted that a WAN connection is a connection to a network at a remote site, while a LAN connection is a connection to a network at the local site (perhaps one or a few close buildings). There are also other network terms tossed around: CAN (Campus Area Network), MAN (Metropolitan Area Network), etc. A router really has no idea about these terms; it simply knows that is has multiple networks attached to its various interfaces, and it will route packets received on an interface to a different interface towards the destination address in the packet header. WAN and LAN play no part in a router decision. The routing decision is made based on the packet destination address and the routing table. What is a WAN Address? It is a layer-3 network address, just like a LAN address is. How can I distinguish between a WAN and LAN address? There is nothing in IP (either IPv4 or IPv6) that distinguishes between WAN and LAN, so there is no difference between LAN and WAN addresses. IANA maintains the IANA IPv4 Special-Purpose Address Registry and the IANA IPv6 Special-Purpose Address Registry . They classify IP addresses in multiple ways (Source, Destination, Forwardable, Globally Reachable, Reserved-by-Protocol), but there is no distinction for WAN and LAN. How does the whole WAN addressing work? (I imagined a WAN address is the address of a router and the router somewhat forwards a package to my LAN IP) It works the same way as any other IP addressing. See the How do you calculate the prefix, network, subnet, and host numbers? question for how to do IPv4 addressing. | {} | {'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/52398', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/49192/']} | jdg_85574 |
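Since the forwarding decision depends only on the destination address and the routing table, the subnet arithmetic referenced at the end is the same for any interface, however it is labelled. A small Python sketch using arbitrary documentation addresses:

```python
import ipaddress

# 192.0.2.0/24 and 203.0.113.0/24 are reserved documentation prefixes;
# the host addresses and prefix lengths below are arbitrary examples.
for addr in ("192.0.2.17/28", "203.0.113.5/30"):
    iface = ipaddress.ip_interface(addr)
    net = iface.network
    print(addr, "->",
          "network:", net.network_address,
          "prefix length:", net.prefixlen,
          "broadcast:", net.broadcast_address,
          "usable hosts:", net.num_addresses - 2)
```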
stackexchange | llm_judgeable_groundtruth_similarity | 10625716 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In what cases should we include cassert?
Now provide the response and nothing else.
| In short, don't use it; use <assert.h> . C++11 removed any formal guarantee of a "c...." header not polluting the global namespace. It was never an in-practice guarantee, and now it's not even a formal guarantee. Hence, with C++11 there is no longer any conceivable advantage in using the "c...." header variants, while there is the distinct and clear disadvantage that code that works well with one compiler and version of that compiler, may fail to compile with another compiler or version, due to e.g. name collisions or different overload selection in the global namespace. SO, while cassert was pretty meaningless in C++03 (you can't put a macro in a namespace), it is totally meaningless -- even as a special case of a general scheme -- in C++11. Addendum , Dec 22 2013: The standard defines each C++ C header <X.h> header in terms of the <cX> header, which in turn is defined in terms of the corresponding C library header. C++11 §D.5/2 : “Every C header, each of which has a name of the form name.h , behaves as if each name placed in the standard library namespace by the corresponding cname header is placed within the global namespace scope.” C++11 §D.5/3 (non-normative example): “The header <cstdlib> assuredly provides its declarations and definitions within the namespace std . It may also provide these names within the global namespace. The header <stdlib.h> assuredly provides the same declarations and definitions within the global namespace, much as in the C Standard. It may also provide these names within the namespace std .” Stack Overflow user C.R. ’s comment made me aware that some versions of g++, such as MinGW g++ 4.7.2, are quite non-standard with respect to the <X.h> headers, lacking the overloads of e.g. sin that the C++ standard requires: I already knew that MinGW g++ 4.7.2 also entirely lacks functions such as swprintf , and that it has ditto shortcomings in the pure C++ library such as lacking C++11 std::to_string . However, the information about it lacking the C function overloads was new to me. In practice the lacking overloads with g++ means ignoring the g++ issue, or avoiding using the missing g++ overloads, e.g. using only double sin( double ) , or using the std namespace overloads (one then needs to include <cmath> to guarantee their presence with g++). In order to use the g++ std namespace overloads unqualified, one practical approach is to define headers wrappers for this compiler. I've used that approach to address g++ shortcomings wrt. to the printf family. For as David Wheeler once remarked, “All problems in computer science can be solved by another level of indirection”… Then things can be arranged so that standard code that uses g++'s missing overloads, also compiles with g++. This adjusts the compiler to the standard, with a fixed amount of code. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10625716', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1377855/']} | jdg_85575 |
stackexchange | llm_judgeable_groundtruth_similarity | 2994758 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In PHP, what is meant by function overloading and function overriding, and what is the difference between the two? I couldn't figure out the difference between them.
Now provide the response and nothing else.
| Overloading is defining functions that have similar signatures, yet have different parameters. Overriding is only pertinent to derived classes, where the parent class has defined a method and the derived class wishes to override that method. In PHP, you can only overload methods using the magic method __call . An example of overriding : <?phpclass Foo { function myFoo() { return "Foo"; }}class Bar extends Foo { function myFoo() { return "Bar"; }}$foo = new Foo;$bar = new Bar;echo($foo->myFoo()); //"Foo"echo($bar->myFoo()); //"Bar"?> | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/2994758', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/344774/']} | jdg_85576 |
stackexchange | llm_judgeable_groundtruth_similarity | 1929481 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to check if polynomials $1, 1+t^2, 1+t+t^2$ are linearly independent. My idea is: $1 \to [1,0,0]$ $1+t^2\to [1,1,0]$ $1+t^2+t^3 \to [1,1,1]$ And now $\left( \begin{array}{ccc}1 & 0 & 1 \\0 & 1 & 1 \\0 & 1 & 1 \end{array} \right)$ I would like to find rank of this array. Rank of this array is $3$ so columns are linearly independent. Is it correct reasoning ?
Now provide the response and nothing else.
| The coordinate vectors with respect to the basis $\{1,t,t^2\}$ are$$\begin{bmatrix}1\\0\\0\end{bmatrix}\quad\begin{bmatrix}1\\0\\1\end{bmatrix}\quad\begin{bmatrix}1\\1\\1\end{bmatrix}$$so the matrix you have to compute the rank of is$$\begin{bmatrix}1 & 1 & 1 \\0 & 0 & 1 \\0 & 1 & 1\end{bmatrix}$$Swapping rows 2 and 3 brings it into unitriangular form, so the rank is $3$. Alternatively you can observe that$$t=(1+t+t^2)-(1+t^2)\\t^2=(1+t^2)-1$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1929481', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/366431/']} | jdg_85577 |
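For a quick numerical cross-check of that rank computation (the columns are the corrected coordinate vectors with respect to $\{1,t,t^2\}$):

```python
import numpy as np

A = np.array([[1, 1, 1],    # constant coefficients
              [0, 0, 1],    # coefficients of t
              [0, 1, 1]])   # coefficients of t^2
print(np.linalg.matrix_rank(A))   # prints 3, so the polynomials are independent
```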
stackexchange | llm_judgeable_groundtruth_similarity | 5786936 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to create a XML Document object using the NodeList. Can someone pls help me to do this. This is my Java code: import javax.xml.parsers.DocumentBuilderFactory;import javax.xml.xpath.*; import org.w3c.dom.*;public class ReadFile { public static void main(String[] args) { String exp = "/configs/markets"; String path = "testConfig.xml"; try { Document xmlDocument = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(path); XPath xPath = XPathFactory.newInstance().newXPath(); XPathExpression xPathExpression = xPath.compile(exp); NodeList nodes = (NodeList) xPathExpression.evaluate(xmlDocument, XPathConstants.NODESET); } catch (Exception ex) { ex.printStackTrace(); } }} I want to have an XML file like this: <configs> <markets> <market> <name>Real</name> </market> <market> <name>play</name> </market> </markets></configs> Thanks in advance.
Now provide the response and nothing else.
| You should do it like this: you create a new org.w3c.dom.Document newXmlDoc where you store the nodes in your NodeList , you create a new root element, and append it to newXmlDoc then, for each node n in your NodeList , you import n in newXmlDoc , and then you append n as a child of root Here is the code: public static void main(String[] args) { String exp = "/configs/markets/market"; String path = "src/a/testConfig.xml"; try { Document xmlDocument = DocumentBuilderFactory.newInstance() .newDocumentBuilder().parse(path); XPath xPath = XPathFactory.newInstance().newXPath(); XPathExpression xPathExpression = xPath.compile(exp); NodeList nodes = (NodeList) xPathExpression. evaluate(xmlDocument, XPathConstants.NODESET); Document newXmlDocument = DocumentBuilderFactory.newInstance() .newDocumentBuilder().newDocument(); Element root = newXmlDocument.createElement("root"); newXmlDocument.appendChild(root); for (int i = 0; i < nodes.getLength(); i++) { Node node = nodes.item(i); Node copyNode = newXmlDocument.importNode(node, true); root.appendChild(copyNode); } printTree(newXmlDocument); } catch (Exception ex) { ex.printStackTrace(); }}public static void printXmlDocument(Document document) { DOMImplementationLS domImplementationLS = (DOMImplementationLS) document.getImplementation(); LSSerializer lsSerializer = domImplementationLS.createLSSerializer(); String string = lsSerializer.writeToString(document); System.out.println(string);} The output is: <?xml version="1.0" encoding="UTF-16"?><root><market> <name>Real</name> </market><market> <name>play</name> </market></root> Some notes: I've changed exp to /configs/markets/market , because I suspect you want to copy the market elements, rather than the single markets element for the printXmlDocument , I've used the interesting code in this answer I hope this helps. If you don't want to create a new root element, then you may use your original XPath expression, which returns a NodeList consisting of a single node (keep in mind that your XML must have a single root element) that you can directly add to your new XML document. See following code, where I commented lines from the code above: public static void main(String[] args) { //String exp = "/configs/markets/market/"; String exp = "/configs/markets"; String path = "src/a/testConfig.xml"; try { Document xmlDocument = DocumentBuilderFactory.newInstance() .newDocumentBuilder().parse(path); XPath xPath = XPathFactory.newInstance().newXPath(); XPathExpression xPathExpression = xPath.compile(exp); NodeList nodes = (NodeList) xPathExpression. evaluate(xmlDocument,XPathConstants.NODESET); Document newXmlDocument = DocumentBuilderFactory.newInstance() .newDocumentBuilder().newDocument(); //Element root = newXmlDocument.createElement("root"); //newXmlDocument.appendChild(root); for (int i = 0; i < nodes.getLength(); i++) { Node node = nodes.item(i); Node copyNode = newXmlDocument.importNode(node, true); newXmlDocument.appendChild(copyNode); //root.appendChild(copyNode); } printXmlDocument(newXmlDocument); } catch (Exception ex) { ex.printStackTrace(); }} This will give you the following output: <?xml version="1.0" encoding="UTF-16"?><markets> <market> <name>Real</name> </market> <market> <name>play</name> </market> </markets> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5786936', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/458680/']} | jdg_85578 |
stackexchange | llm_judgeable_groundtruth_similarity | 3625725 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to use a customized template for hg log which looks like this: hg log --template '{node|short} {desc} [{date|age} by {author}]\n' --color=always This in the default terminal color is not very readable, so for instance I would like to make node red and desc green. How can I do this? In git I can define this kind of formatting like this: git log --pretty=format:'%Cred%h%Creset %Cgreen%s%Creset [%ar by %an]' Is a similar thing possible in Mercurial?
Now provide the response and nothing else.
| AFAIK, there's no way to do this directly in Mercurial, but if you're on a Unix-y system you could use ANSI escape codes to control the colors. For example: hg log --template "\x1B[31m{node|short} \x1B[32m{desc}\x1B[0m\n" will give you the node in red and the desc in green. On the Windows command prompt, you have to enable the ColorExtension and the codes are the parameters to the color command ( help color in the command prompt), so the equivalent would be: hg log --template "\x1B[4m{node|short} \x1B[0;2m{desc}" Note: in the second escape sequence, the 0 is to reset the text color and the 2 is to set it to green. Without the 0 , it seems you get an inclusive-or of the color codes, which in this case would be yellow. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3625725', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6254/']} | jdg_85579 |
stackexchange | llm_judgeable_groundtruth_similarity | 67224646 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am getting below exception even after registering the AddDefaultIdentity System.InvalidOperationException: Service type: IUserClaimsPrincipalFactory`1 not registered. Here's the Identity registration code: services.AddDefaultIdentity<ApplicationUser>(options => { //Disable account confirmation. options.SignIn.RequireConfirmedAccount = false; options.SignIn.RequireConfirmedEmail = false; options.SignIn.RequireConfirmedPhoneNumber = false; }) .AddEntityFrameworkStores<ApplicationDbContext>(); And here's the IdentityServer code: var builder = services.AddIdentityServer() .AddInMemoryApiScopes(IdentityServerConfig.ApiScopes) .AddInMemoryIdentityResources(IdentityServerConfig.IdentityResources) .AddInMemoryApiResources(IdentityServerConfig.ApiResources) .AddInMemoryClients(IdentityServerConfig.Clients) .AddAspNetIdentity<IdentityUser>();
Now provide the response and nothing else.
| I spent hours solving this typo. The problem was that my registration was wrong. I was working with ApplicationUser but I was registering IdentityUser in the identity server. I fixed my problem by setting the ApplicationUser in both AddIdentityServer() and AddDefaultIdentity() . Here's the working code: .Net Core Identity services.AddDefaultIdentity<ApplicationUser>(options => { //Disable account confirmation. options.SignIn.RequireConfirmedAccount = false; options.SignIn.RequireConfirmedEmail = false; options.SignIn.RequireConfirmedPhoneNumber = false; }) .AddEntityFrameworkStores<ApplicationDbContext>(); IdentityServer4: var builder = services.AddIdentityServer() .AddInMemoryApiScopes(IdentityServerConfig.ApiScopes) .AddInMemoryIdentityResources(IdentityServerConfig.IdentityResources) .AddInMemoryApiResources(IdentityServerConfig.ApiResources) .AddInMemoryClients(IdentityServerConfig.Clients) .AddAspNetIdentity<ApplicationUser>(); Notice the use of ApplicationUser .I posted question here so that I remember it next time. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/67224646', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9522887/']} | jdg_85580 |
stackexchange | llm_judgeable_groundtruth_similarity | 389342 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The image.size attribute of UIImageView gives the size of the original UIImage . I would like to find out the size of the autoscaled image when it is put in the UIImageView (typically smaller than the original). For example, I have the image set to Aspect Fit . Now I want to know its new height and width on the screen so I can draw accurately on the new scaled image. Is there any way to do this without figuring it out myself based on the UIImageView size & UIImage original size (basically reverse engineering its scaling)?
Now provide the response and nothing else.
| Objective-C: -(CGRect)frameForImage:(UIImage*)image inImageViewAspectFit:(UIImageView*)imageView{ float imageRatio = image.size.width / image.size.height; float viewRatio = imageView.frame.size.width / imageView.frame.size.height; if(imageRatio < viewRatio) { float scale = imageView.frame.size.height / image.size.height; float width = scale * image.size.width; float topLeftX = (imageView.frame.size.width - width) * 0.5; return CGRectMake(topLeftX, 0, width, imageView.frame.size.height); } else { float scale = imageView.frame.size.width / image.size.width; float height = scale * image.size.height; float topLeftY = (imageView.frame.size.height - height) * 0.5; return CGRectMake(0, topLeftY, imageView.frame.size.width, height); }} Swift 4: func frame(for image: UIImage, inImageViewAspectFit imageView: UIImageView) -> CGRect { let imageRatio = (image.size.width / image.size.height) let viewRatio = imageView.frame.size.width / imageView.frame.size.height if imageRatio < viewRatio { let scale = imageView.frame.size.height / image.size.height let width = scale * image.size.width let topLeftX = (imageView.frame.size.width - width) * 0.5 return CGRect(x: topLeftX, y: 0, width: width, height: imageView.frame.size.height) } else { let scale = imageView.frame.size.width / image.size.width let height = scale * image.size.height let topLeftY = (imageView.frame.size.height - height) * 0.5 return CGRect(x: 0.0, y: topLeftY, width: imageView.frame.size.width, height: height) }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/389342', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_85581 |
stackexchange | llm_judgeable_groundtruth_similarity | 100268 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
AgeUK (and others) warn about making phone calls directly after receiving a scam call and advise you to "wait for the line to clear": Use a different phone if you can, or wait 5 to 10 minutes after the cold call if using the same phone - just in case they waited on the line. How does this work? Why can't the telephone network fix this? Does the scammer require specialized equipment or does this work from any landline phone?
Now provide the response and nothing else.
| This is because of Called Subscriber Held (CSH). This is not specific to telephony in the UK but rather a line state applicable on PSTNs caused by the person who made the call not hanging up. The person from which the call originates must hang up for the call to disconnect as it is the person from which the call originates that is paying for the bill. The person receiving the call may hang up, but this will not disconnect the call unless the originating caller also hangs up. Usually once a CSH condition is detected, a timer will start which will clear the call after a specified period of time (for example 3 minutes). Back in ye olden days when people had phone handsets connected to base stations with a wire(!) this feature meant they could put the phone down and move to another room without it dropping the call. :) Using the recall (R) button on most phones will put any current call into an on hold state and give you a dial tone. | {} | {'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/100268', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/86541/']} | jdg_85582 |
stackexchange | llm_judgeable_groundtruth_similarity | 48121750 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When an error is thrown in our React 16 codebase, it is caught by our top-level error boundary. The ErrorBoundary component happily renders an error page when this happens. Where the ErrorBoundary sits return ( <Provider store={configureStore()}> <ErrorBoundary> <Router history={browserHistory}>{routes}</Router> </ErrorBoundary> </Provider> ) However, when navigating back using the browser back button (one click), the URL changes in the address but the page does not update. I have tried shifting the error boundary down the component tree but this issue persists. Any clues on where this issue lies?
Now provide the response and nothing else.
| The op has probably found a resolution by now, but for the benefit of anyone else having this issue I'll explain why I think its happening and what can be done to resolve it. This is probably occurring due to the conditional rendering in the ErrorBoundary rendering the error message even though the history has changed. Although not shown above, the render method in the ErrorBoundary is probably similar to this: render() { if (this.state.hasError) { return <h1>An error has occurred.</h1> } return this.props.children;} Where hasError is being set in the componentDidCatch lifecycle method. Once the state in the ErrorBoundary has been set it will always render the error message until the state changes (hasError to false in the example above). The child components (the Router component in this case) will not be rendered, even when the history changes. To resolve this, make use of the react-router withRouter higher order component, by wrapping the export of the ErrorBoundary to give it access to the history via the props: export default withRouter(ErrorBoundary); In the ErrorBoundary constructor retrieve the history from the props and setup a handler to listen for changes to the current location using history.listen . When the location changes (back button clicked etc.) if the component is in an error state, it is cleared enabling the children to be rendered again. const { history } = this.props;history.listen((location, action) => { if (this.state.hasError) { this.setState({ hasError: false, }); }}); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/48121750', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3288707/']} | jdg_85583 |
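Putting the pieces of that answer together, a minimal sketch of the whole boundary might look like the following (it assumes react-router v4/v5, where withRouter injects history and history.listen returns an unlisten function; file and component names are illustrative):

```jsx
import React from 'react';
import { withRouter } from 'react-router-dom';

class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
    // Reset the error state whenever the location changes (e.g. browser back button),
    // so the children (the Router) render again instead of the stale error page.
    this.unlisten = props.history.listen(() => {
      if (this.state.hasError) {
        this.setState({ hasError: false });
      }
    });
  }

  componentDidCatch(error, info) {
    this.setState({ hasError: true });
  }

  componentWillUnmount() {
    this.unlisten(); // stop listening to history when the boundary unmounts
  }

  render() {
    if (this.state.hasError) {
      return <h1>An error has occurred.</h1>;
    }
    return this.props.children;
  }
}

export default withRouter(ErrorBoundary);
```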
stackexchange | llm_judgeable_groundtruth_similarity | 957368 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $O$ be a circle of radius $r$ lying in the first quadrant of the Cartesian plane, tangent to the $x$-axis at the point $(r, 0)$ and tangent to the $y$-axis at the point $(0, r)$. Let $T$ be a right triangle whose legs are segments of the axes, with inscribed circle $O$. What is the ratio of the legs of the triangle when the area of $T$ is minimized? It seems obvious that the answer is $1$ (the right triangle is isosceles), but I'm having a time showing this. We can draw a diameter through the circle parallel to the $x$-axis. Let $p$ be the point of tangency between the circle and a hypotenuse, and let $\theta$ be the angle between the diameter and the radius meeting $p$. Since the radius meeting $p$ will be perpendicular to the hypotenuse, the hypotenuse will have slope $-\cot\theta$. Letting $(x, 0)$ and $(0, y)$ be the axis intercepts of the hypotenuse, and $[(r+\cos\theta), (r+\sin\theta)]$ the point $p$ of tangency, we have $$x=r+\cos\theta+r\tan\theta+\sin\theta\tan\theta \\ y=r+\sin\theta+r\cot\theta+\cos\theta\cot\theta.$$ We multiple those (yikes), differentiate, and solve. I did that, thinking that eventually I'd get a simpler expression. No such luck. Is there a much simpler way to do this that I'm not seeing?
Now provide the response and nothing else.
| If by "2D surface z = f(x, y)" you mean a curve in the plane, that is, the set of solutions in $x$ and $y$ to this equation for a particular value of $z$, which is how I interpret the question in any event: Here's a quick description of a procedure for finding the curves $f(x, y) = z = \text{constant}$, first, assume one knows a "sufficiently nice", i.e., differentiable, functional form for $f(x, y)$. Then taking $\nabla f(x, y) = ((\partial f/\partial x)(x, y), (\partial f /\partial y)(x, y))$ one obtains the gradient vector field of $f(x, y)$. Then define the vector field $\vec T(x, y)$ via $\vec T(x, y) = J \nabla f(x, y), \tag{1}$ where $J = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}. \tag{2}$ It is easy to see that $\nabla f \cdot \vec T = \nabla f \cdot J \nabla f = 0, \tag{3}$ since $J$ is a skew-symmetric matrix; in fact, $J$ is an orthogonal matrix which represents a rotation by $\pi/2$ in the counter-clockwise direction. In any event, (3) shows that $\vec T$ is in fact tangent to the level sets of $f(x, y)$. Thus computing the integral curves of $\vec T$ is tantamount to computing the level sets of $f(x, y)$; if we pick some point $(x_0, y_0)$ then $f(x, y) = f(x_0, y_0)$ for any $(x, y)$ on the trajectry through $(x_0, y_0)$. Any number of methods may be used to calculate the trajectories of $\vec T$; one concern will be to keep the timestep $h$ sufficiently small to obtain adequate resolution; similar concerns apply to the spacing of the initial points $(x_0, y_0)$;a tighter spacing of initial values and a smaller timestep will generally lead to a higher resolution at the cost, of course, of greater run times to complete the process. Of course, if a functional form for $f(x, y)$ isn't readily available, e.g., you are working with a set of measured data instead of a given expression for $f$, then you would have to construct $\nabla f(x, y)$ from information extracted from the data sets; this is where algorithms such as marching squares or "triangulation and slicing" make their appearance. Either technique is a structured way of searching the given data set with an eye to (approximately) constructing the level sets of $f(x, y)$, but they can also be used to extract the normal field $\nabla f$ from the raw input. If this is done in a sufficiently high resolution manner, then we still obtain a vector function $\nabla f(x, y)$ which may be used as above to find and integrate the vector field $\vec T$. But in the process of so doing, one will have effectively obtained the level sets of $f$, making the integration step I suggested above redundant. But , there is another way to do things: if the data for $f(x, y)$ lies on a grid, one can in principle directly compute $\nabla f(x, y)$ via differencing, setting for example $\dfrac{\partial f}{\partial x}(i, j) = \dfrac{f(i + 1, j) - f(i - 1, j)}{2h}, \tag{4}$ where $h$ is the step-size in the $x$ direction, with an analogous formula for $\dfrac{\partial f}{\partial y}(i, j)$; there are also other differencing schemes available. Then one can use the vector array $\nabla f(i, j)$ to perform the procedures previously outlined. If the data set is triangulated, one can interpolate to a grid and so obtain an array of the form $\nabla f(i, j)$, and proceed from there. It is a challenge to estimate the complexity or run-time of a code addressing this problem constructed from choices between various possible subsidiary algorithms such as the ones mentioned here. 
It might be more efficient to work directly from marching squares or sliced triangulations. But either of these approaches requires a degree of interpolation which itself can be time-consuming. On the other hand, calculating straight from an array of the form $f(i, j)$ to obtain $\nabla f(i, j)$ may in fact be less work. I don't have a ready answer at this point; perhaps someone with a firmer command of these topics could chime in . . . I worked at an avant-garde computer graphics firm in a previous incarnation, and ideas such as those presented here often came up in the coffee room. Ultimately, the gridifiers/triangulators/polygonizers won the day in the marketplace, largely due to the (relative) ease with which such algorithms may be mapped to, and hence implemented in, hardware. But such notions may yet again surface and find their place in the world of commerce. Meanwhile, it's a pleasure to reminisce and find inspiration in novel (to yours truly at least) approaches to such problems. Hope this helps. Cheers, and as always, Fiat Lux!!! | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/957368', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/140756/']} | jdg_85584
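To make the procedure in that answer concrete, here is a minimal Python sketch that traces a level curve by integrating the rotated gradient $\vec T = J\nabla f$ with forward Euler; f here is a toy example (concentric circles) and the step sizes are illustrative, not tuned:

```python
import numpy as np

def f(x, y):
    # toy example: level sets are circles centered at the origin
    return x**2 + y**2

def grad_f(x, y, h=1e-5):
    # central differences, in the spirit of equation (4) of the answer
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return np.array([dfdx, dfdy])

def trace_level_curve(x0, y0, step=0.01, n_steps=1000):
    pts = [(x0, y0)]
    p = np.array([x0, y0], dtype=float)
    for _ in range(n_steps):
        g = grad_f(*p)
        t = np.array([-g[1], g[0]])      # T = J * grad(f), a 90-degree rotation
        norm = np.linalg.norm(t)
        if norm == 0:
            break                        # critical point: no tangent direction
        p = p + step * t / norm
        pts.append(tuple(p))
    return pts

curve = trace_level_curve(1.0, 0.0)      # points should stay near the unit circle
```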
stackexchange | llm_judgeable_groundtruth_similarity | 2327229 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose $f_1$ and $f_2$ are Riemann-integrable with respect to $\alpha$ over $[a, b]$. If $f_1(x) \leq f_2(x)$ on $[a, b]$, then $$ \int_a^b f_1 d \alpha \leq \int_a^b f_2 d \alpha. $$ This is (essentially) Theorem 6.12 (b) in the book Principles of Mathematical Analysis by Walter Rudin, 3rd edition. Here is my proof: As $f_1 \leq f_2$ on $[a, b]$, so, for any partition $P = \left\{ \ x_0, \ldots, x_n \right\}$ of $[a, b]$, we have $$ \inf_{x_{i-1} \leq x \leq x_i } f_1(x) \leq \inf_{x_{i-1} \leq x \leq x_i } f_2(x), \ \mbox{ and } \ \sup_{x_{i-1} \leq x \leq x_i } f_1(x) \leq \sup_{x_{i-1} \leq x \leq x_i } f_2(x)$$ for each $ i = 1, \ldots, n$, and therefore $$ L \left( P, f_1, \alpha \right) \leq L \left( P, f_2, \alpha \right), \ \mbox{ and } \ U \left( P, f_1, \alpha \right) \leq U \left( P, f_2, \alpha \right) \tag{0} $$ for every partition $P$ of $[a, b]$. Now as $f_1 \in \mathscr{R}(\alpha)$ and $f_2 \in \mathscr{R}(\alpha)$, so, for $j = 1, 2$, we have $$ \sup \left\{ \ L \left( P, f_j, \alpha \right) \ \colon \ P \mbox{ is a partition of } [a, b] \ \right\} = \int_a^b f_j d \alpha = \inf \left\{ \ U \left( P, f_j, \alpha \right) \ \colon \ P \mbox{ is a partition of } [a, b] \ \right\}. $$ Therefore, for $j = 1, 2$, we have $$ L \left( P, f_j, \alpha \right) \leq \int_a^b f_j d \alpha \leq U \left( P, f_j, \alpha \right) \tag{1}$$ for every partition $P$ of $[a, b]$; moreover, for every real number $\delta > 0$, there exist partitions $P_j$, $Q_j$ of $[a, b]$ such that $$ \int_a^b f_j d \alpha - \delta < L \left( P_j, f_j, \alpha \right), \mbox{ and } U \left( Q_j, f_j, \alpha \right) < \int_a^b f_j d \alpha + \delta, \tag{2} $$ and, hence if $S_j$ is any partition of $[a, b]$ such that $S_j \supset P_j$ and $S_j \supset Q_j$, then (by Theorem 6.4 in Baby Rudin, 3rd edition) we must have $$ L \left( P_j, f_j, \alpha \right) \leq L \left( S_j, f_j, \alpha \right) \leq U \left( S_j, f_j, \alpha \right) \leq U \left( Q_j, f_j, \alpha \right). \tag{3} $$ From (2) and (3) we can conclude that, for each $j = 1, 2$, there exists a partition $S_j$ of $[a, b]$ such that $$ \int_a^b f_j d \alpha - \delta < L \left( S_j, f_j, \alpha \right) \leq U \left( S_j, f_j, \alpha \right) < \int_a^b f_j d \alpha + \delta. \tag{4} $$ Now let $P$ be any partition of $[a, b]$ such that $P \supset S_1$ and $P \supset S_2$. Then (again by Theorem 6.4 in Baby Rudin, 3rd edition) we have for each $j = 1, 2$, $$ L \left( S_j, f_j, \alpha \right) \leq L \left( P, f_j, \alpha \right) \leq U \left( P, f_j, \alpha \right) \leq U \left( S_j, f_j, \alpha \right). \tag{5} $$ Thus, for every real number $\delta > 0$, we see that $$ \begin{align}\int_a^b f_1 d\alpha &\leq U \left( P, f_1, \alpha \right) \qquad \mbox{ [ by (1) above ] } \\&\leq U \left( P, f_2, \alpha \right) \qquad \mbox{ [ by (0) above ] } \\& \leq U \left( S_2, f_2, \alpha \right) \qquad \mbox{ [ by (5) ] } \\& < \int_a^b f_2 d \alpha + \delta \qquad \mbox{ [ by (4) ] }, \end{align}$$ which implies that $$ \int_a^b f_1 d \alpha \leq \int_a^b f_2 d \alpha, $$ as required. Is this proof correct? If so, then is my presentation clear and optimal enough? If not, then where lie the pitfalls in my reasoning? Have I superfluously used any of the partitions $P_j$, $Q_j$, $S_j$ for $j = 1, 2$, or the partition $P$ at the end?
Now provide the response and nothing else.
| Your proof is correct, but it can be shortened. Since, for each partition $P$ , $L(f_1,P,\alpha)\leqslant L(f_2,P,\alpha)$ , $$\sup\bigl\{L(f_1,P,\alpha)\,|\,P\text{ is a partition of }[a,b]\bigr\}\leqslant\sup\bigl\{L(f_2,P,\alpha)\,|\,P\text{ is a partition of }[a,b]\bigr\}.$$ Therefore, $\displaystyle\int_a^bf_1\,\mathrm d\alpha\leqslant\int_a^bf_2\,\mathrm d\alpha$ . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2327229', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/59734/']} | jdg_85585 |
stackexchange | llm_judgeable_groundtruth_similarity | 2576946 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following mapping for my table in MySql: <class name="Tag, namespace" table="tags" > <id name="id" type="Int32" unsaved-value="0"> <generator class="native"></generator> </id> <property name="name" type="String" not-null="true"></property> <component name="record_dates" class="DateMetaData, namespace" > <property name="created_at" type="DateTime" not-null="true"></property> <property name="updated_at" type="DateTime" not-null="true"></property> </component></class> As you see the record_dates property is defined as a component field of type DateMetaDate. Both created_at and updated_at fields in 'tags' table are updated via triggers. Thus I can insert a new record like such: var newTag = new Tag() { name = "some string here"}Int32 id = (Int32)Session.Save(tag);Session.Flush();ITag t = Session.Get<Tag>(id);ViewData["xxx"] = t.name; // -----> not nullViewData["xxx"] = t.record_dates.created_at; // -----> is null However when querying the same record back immediately after it was inserted the record_dates field ends up null even though in the table those fields have got values. Can any one please point out why the Session.Get ignores getting everything back from the table? is it because it caches the newly created record for which the records_dates is null? If so how can it be told to ignore the cached version?
Now provide the response and nothing else.
| What you're seeing is a clash between <mvc:annotation-driven /> and the explicit bean definition of DefaultAnnotationHandlerMapping . When you add <mvc:annotation-driven /> , Spring declares its own DefaultAnnotationHandlerMapping , and because it appears before your own one, it gets priority. Your interceptor is registered with your DefaultAnnotationHandlerMapping , but that never actually get called. Try removing <mvc:annotation-driven /> and trying again. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2576946', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/294089/']} | jdg_85586 |
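For reference, a hedged sketch of what the configuration might look like once <mvc:annotation-driven /> is removed and the interceptor is registered on the single hand-declared mapping (bean names and the interceptor class are placeholders):

```xml
<bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping">
    <property name="interceptors">
        <list>
            <ref bean="myInterceptor"/>
        </list>
    </property>
</bean>

<bean id="myInterceptor" class="com.example.MyHandlerInterceptor"/>
```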
stackexchange | llm_judgeable_groundtruth_similarity | 2162831 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can do this: int main(int argc, char** argv) { unsigned char cTest = 0xff; return 0;} But what's the right way to get a hexadecimal number into the program via the command line? unsigned char cTest = argv[1]; doesn't do the trick. That produces an "initialization makes integer from pointer without a cast" warning.
Now provide the response and nothing else.
| As the type of main indicates, arguments from the command line are strings and will require conversion to different representations. Converting a single hexadecimal command-line argument to decimal looks like #include <stdio.h>#include <stdlib.h>int main(int argc, char *argv[]){ printf("%ld\n", strtol(argv[1], NULL, 16)); return 0;} Example usage: $ ./hex ff255 Using strtol and changing the final argument from 16 to 0 as in printf("%ld\n", strtol(argv[1], NULL, 0)); makes the program accept decimal, hexadecimal (indicated by leading 0x , and octal (indicated by leading 0 ) values: $ ./num 0x70112$ ./num 07056$ ./num 7070 Using the bash command shell, take advantage of ANSI-C Quoting to have the shell perform the conversion, and then your program just prints the values from the command line. #include <stdio.h>#include <string.h>int main(int argc, char *argv[]){ int i; for (i = 1; i < argc; i++) { unsigned char value = argv[i][0]; if (strlen(argv[i]) > 1) fprintf(stderr, "%s: '%s' is longer than one byte\n", argv[0], argv[i]); printf(i + 1 < argc ? "%u " : "%u\n", value); } return 0;} Bash supports many formats of the form $'...' , but $'\xHH' appears to be the closest match to your question. For example: $ ./print-byte $'\xFF' $'\x20' $'\x32'255 32 50 Maybe you pack the values from the command line into a string and print it. #include <errno.h>#include <stdio.h>#include <stdlib.h>#include <string.h>int main(int argc, char *argv[]){ int i; if (argc > 1) { char *s = malloc(argc); if (!s) { fprintf(stderr, "%s: malloc: %s\n", argv[0], strerror(errno)); return 1; } for (i = 1; i < argc; i++) s[i - 1] = strtol(argv[i], NULL, 16) & 0xff; s[argc - 1] = '\0'; printf("%s\n", s); free(s); } return 0;} In action: $ ./pack-string 48 65 6c 6c 6f 21Hello! All of the above is reinventing wheels that bash and the operating system already provide for you. $ echo $'\x48\x65\x6c\x6c\x6f\x21'Hello! The echo program prints its command-line arguments on the standard output, which you can think of as a for loop over the arguments and a printf for each. If you have another program that performs the decoding for you, use Command Substitution that replaces a command surrounded by backticks or $() with its output. See examples below, which again use echo as a placeholder. $ echo $(perl -e 'print "\x50\x65\x72\x6c"')Perl$ echo `python -c 'print "\x50\x79\x74\x68\x6f\x6e"'`Python | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2162831', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217649/']} | jdg_85587 |
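If stricter input validation is wanted than in the examples above, here is a small sketch using strtoul with error checking (the function and variable names are mine, not from the answer):

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Reject trailing garbage and out-of-range values instead of silently
 * accepting them, returning 0 on success and -1 on failure. */
int parse_hex_byte(const char *s, unsigned char *out)
{
    char *end;
    errno = 0;
    unsigned long v = strtoul(s, &end, 16);
    if (errno != 0 || end == s || *end != '\0' || v > 0xFF)
        return -1;                  /* not a clean hex byte */
    *out = (unsigned char)v;
    return 0;
}

int main(int argc, char *argv[])
{
    unsigned char b;
    for (int i = 1; i < argc; i++) {
        if (parse_hex_byte(argv[i], &b) == 0)
            printf("%u\n", b);
        else
            fprintf(stderr, "bad hex byte: %s\n", argv[i]);
    }
    return 0;
}
```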
stackexchange | llm_judgeable_groundtruth_similarity | 298665 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I understand C and C++ are different languages but when I was learning C++ I was always told that C is a subset of C++ or C++ is C with classes. And that was quite true until the appearance of C++x0, C++11 (or the modern C++ 11/14/17 in general). In fact (especially when working on embedded systems) it's very likely to find code written in C++ but with a lot of parts written entirely in pure C language. Here I have several questions: Should I stop using the term C/C++? If the answer to #1 is yes, how would I call a program that use a mix of C and C++? Given that both of them are 'different' languages is it likely that at some point C++ compilers stop supporting code written in the C language (since modern c++ is diverging from the C mentality for basic stuff like pointers, dynamic memory handling, etc) Is there right now any collaboration between the people who makes the standards of C/C++ to keep the compatibility If #4 is yes, such collaboration could end up in the near future with the appearance of the modern c++ (11/14/17) I know that there already similar questions, but I'm sure that a lot of people share these questions so I'm very interested to get good answers especially for the points that have to do with the C++ tendency in the near future.
Now provide the response and nothing else.
| C was never a subset of C++. The most obvious example of this is int new; . This has been true since C89 and C++98, and the languages have only grown further from each other as new standards have come out. Should I stop using the term C/C++ Yes If the answer to #1 is yes, how would I call a program that use a mix of C and C++? A source file is written in one language or the other. A program can consist of code from multiple languages working together, or an executable produced by linking different compiled objects. You would say the program was written in C and C++, "C/C++" is not a language. Given that both of them are 'different' languages is it likely that at some point C++ compilers stop supporting code written in the C language They never did. char *a = malloc(10); . C and C++ have never been fully compatible for at least as long as they've had ISO standards (I don't know all the details about the pre-standardized days). click the links or see below for a file that is fine with C89 and up, but isn't valid under any C++ standard. afaik no, but I don't know much about the C working group. /* A bunch of code that compiles and runs under C89 but fails under any C++ *//* type aliases and struct names occupy separate namespaces in C, not in C++ */struct S { int i; };typedef int S;struct Outer { struct Inner { int i; } in; };/* struct Inner will be Outer::Inner in C++ due to name scope */struct Inner inner;/* default return type of int in C, C++ functions need explicit return types */g() { return 0;}/* C sees this as two declarations of the same integer, * C++ sees it as redefinition */int n;int n;/* K&R style argument type declarations */void h(i) int i; { }/* struct type declaration in return type */struct S2{int a;} j(void) { struct S2 s = {1}; return s; }/* struct type declaration in argument, stupid and useless, but valid *//*void dumb(struct S3{int a;} s) { } *//* enum/int assignment */enum E{A, B};enum E e = 1;void k() { goto label; /* C allows jumping past an initialization */ { int x = 0;label: x = 1; }}/* () in declaration means unspecified number of arguments in C, the definition * can take any number of arguments, * but means the same as (void) in C++ (definition below main) */void f();int main(void) { f(1); /* doesn't match declaration in C++ */ { /* new is a keyword in C++ */ int new = 0; } /* no stdio.h include results in implicit definiton in C. However, * as long as a matching function is found at link-time, it's fine. * C++ requires a declaration for all called functions */ puts("C is not C++"); { int *ip; void *vp = 0; ip = vp; /* cast required in C++, not in C */ } return 0;}/* matches declaration in C, not in C++ */void f(int i) { } I always feel it's worth mentioning that C is a subset of Objective-C. | {} | {'log_upvote_score': 9, 'links': ['https://softwareengineering.stackexchange.com/questions/298665', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/198524/']} | jdg_85588 |
stackexchange | llm_judgeable_groundtruth_similarity | 21865940 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a little question on how to change the group owner of a file. At the moment the group of the file is the same as my user name but I want to change it so another user can access that same file and read/write to it.
Now provide the response and nothing else.
| The syntax would be chown user:group file ... This usually (=when you're not root ) only works as long as you are a member of the same group (since otherwise by chown'ing files you could exceed the other user's disk quota). If you both are in the same group you could also allow write access with chmod g+w file As above this will allow all members of group write access. If you do not share a common group, all you can safely do is to allow read access to the file so the other user can make a copy of it and edit his/her copy. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21865940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2489694/']} | jdg_85589 |
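A short illustrative session, assuming a file report.txt owned by alice and a group staff that both users belong to (all names are hypothetical):

```sh
chown alice:staff report.txt   # as the owner, keep ownership and switch the group (needs membership in "staff")
chgrp staff report.txt         # or change only the group
chmod g+w report.txt           # give group members write access
ls -l report.txt               # verify owner, group and permission bits
```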
stackexchange | llm_judgeable_groundtruth_similarity | 13580486 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We observed a very strange bug with our app on the Samsung S3 (running Android 4.0.4). When typing text into a text area, only the first character of each word would fill in, and only after you pressed space or picked a word from predictive text. Typing 'what the hell ..' would result in 'w t h' being entered in the text area. Other text areas on different screens were working fine. Turning off predictive text also fixed the issue. Has anyone run into this issue before?
Now provide the response and nothing else.
| You need to dig into the Makefiles to remove -O for freak.cpp. UPDATE:This is exactly how one should do it (tested with 2.4.3 and 2.4.4). you need to edit YOUR_BUILD_DIR/modules/features2d/CMakeFiles/opencv_features2d.dir/build.make Search for freak.cpp. You find three blocks: Building CXX..., Preprocessing CXX..., and Compiling CX.... I just needed to change the Building part. The last line of that block looks like this: .... YOUR_COMPILER $(CXX_DEFINES) $(CXX_FLAGS) ... and if you check you find out that CXX_FLAGS has a -O3 in it. If you add -O0 after CXX_FLAGS it suppresses the O3. So your lines should look like this. .... YOUR_COMPILER $(CXX_DEFINES) $(CXX_FLAGS) -O0 ... This is at least working here! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13580486', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1826153/']} | jdg_85590 |
stackexchange | llm_judgeable_groundtruth_similarity | 4959230 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently I came across this way to filter out every second value of a list: perl -E 'say grep --$|, 1..10'13579 How does it work?
Now provide the response and nothing else.
| $| is a special Perl variable that can only have the values 0 and 1. Any assignment to $| of a true non-zero value, like $| = 1; $| = 'foo'; $| = "4asdf"; # 0 + "4asdf" is 4$| = \@a; will have the effect of setting $| to 1. Any assignment of a false zero value $| = 0;$| = "";$| = undef;$| = "erk"; # 0 + "erk" is 0 will set $| to 0. Expand --$| to $| = $| - 1 , and now you can see what is going on.If $| was originally 1, then --$| will change the value to 0.If $| was originally 0, then --$| will try to set the value to -1 but will actually set the value to 1. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4959230', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/244297/']} | jdg_85591 |
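A small, commented variant of the one-liner that shows the toggle explicitly (this is just a sketch; the block form of grep is used for readability):

```perl
use strict;
use warnings;
use feature 'say';

$| = 0;                          # autoflush off: the toggle starts from 0 (the default)
# --$| flips $| between 1 and 0 on each element, so grep keeps every other value.
my @kept = grep { --$| } 1 .. 10;
say "@kept";                     # prints: 1 3 5 7 9
```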
stackexchange | llm_judgeable_groundtruth_similarity | 758616 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Does anyone know the sums of the following two series?$$\sum_{n=0}^\infty (-1)^n \frac{x^{4n+1}}{4n+1}$$$$\sum_{n=1}^\infty (-1)^{n+1} \frac{x^{4n-1}}{4n-1}$$I encounter such series in my work.
Now provide the response and nothing else.
| Well, to attack the first sum, if we call it $f(x)$, then $$f'(x) = \sum_{n=0}^{\infty} (-1)^n x^{4 n} = \frac1{1+x^4}$$ so that $$f(x) = \int_0^x \frac{dt}{1+t^4}$$ You can use partial fractions to deduce that $$\begin{align}f(x) &= \frac1{2 \sqrt{2}}\int_0^x dt \, \left [\frac{t+\sqrt{2}}{t^2+\sqrt{2} t+1}-\frac{t-\sqrt{2}}{t^2-\sqrt{2} t+1} \right ] \\ &= \frac1{2 \sqrt{2}} \int_0^x dt \left [\frac{t+1/\sqrt{2}+1/\sqrt{2}}{(t+1/\sqrt{2})^2+1/2}-\frac{t-1/\sqrt{2}-1/\sqrt{2}}{(t-1/\sqrt{2})^2+1/2} \right ] \\ &= \frac1{2 \sqrt{2}} \left [\arctan{(\sqrt{2} x+1)} - \frac{\pi}{4} + \arctan{(\sqrt{2} x-1)} + \frac{\pi}{4} \right ]\\ &+ \frac1{4 \sqrt{2}} \left [\log{(x^2+\sqrt{2} x+1)} - \log{(x^2-\sqrt{2} x+1)} \right ]\\ &=\frac1{2 \sqrt{2}} \arctan{\left (\frac{\sqrt{2} x}{1-x^2}\right )}+\frac1{4 \sqrt{2}} \log{\left (\frac{x^2+\sqrt{2} x+1}{x^2-\sqrt{2} x+1} \right )}\end{align} $$ The second sum is obviously similar, except that you are evaluating $$g(x) = \int_0^x dt \frac{t^2}{1+t^4} $$ and, using similar steps, i.e., partial fractions, you get $$\begin{align}g(x) &= \frac1{2 \sqrt{2}}\int_0^x dt \, \left [\frac{t}{t^2-\sqrt{2} t+1}-\frac{t}{t^2+\sqrt{2} t+1} \right ] \\ &= \frac1{2 \sqrt{2}} \int_0^x dt \left [\frac{t-1/\sqrt{2}+1/\sqrt{2}}{(t-1/\sqrt{2})^2+1/2}-\frac{t+1/\sqrt{2}-1/\sqrt{2}}{(t+1/\sqrt{2})^2+1/2} \right ]\\ &= \frac1{2 \sqrt{2}} \arctan{\left (\frac{\sqrt{2} x}{1-x^2}\right )}-\frac1{4 \sqrt{2}} \log{\left (\frac{x^2+\sqrt{2} x+1}{x^2-\sqrt{2} x+1} \right )} \end{align} $$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/758616', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/3249/']} | jdg_85592 |
stackexchange | llm_judgeable_groundtruth_similarity | 30135858 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So i followed instructions from the Quickstart Guide from the Parse SDK Website. The app runs fine the first time around. But when I minimize the app and run it again from the task switcher, it force closes. The error doesn't make any sense to me. Logcat - 05-09 08:57:40.611 19419-19419/com.example.shubhamkanodia.bookmybook E/CrashReporting﹕ ParseCrashReporting caught a RuntimeException exception for com.example.shubhamkanodia.bookmybook. Building report.05-09 08:57:40.626 19419-19419/com.example.shubhamkanodia.bookmybook E/CrashReporting﹕ Handling exception for crash java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.shubhamkanodia.bookmybook/com.example.shubhamkanodia.bookmybook.MainActivity}: java.lang.IllegalStateException: `Parse#enableLocalDatastore(Context)` must be invoked before `Parse#initialize(Context)` at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2325) at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2387) at android.app.ActivityThread.handleRelaunchActivity(ActivityThread.java:3947) at android.app.ActivityThread.access$900(ActivityThread.java:151) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1309) at android.os.Handler.dispatchMessage(Handler.java:102) at android.os.Looper.loop(Looper.java:135) at android.app.ActivityThread.main(ActivityThread.java:5254) at java.lang.reflect.Method.invoke(Native Method) at java.lang.reflect.Method.invoke(Method.java:372) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698) Caused by: java.lang.IllegalStateException: `Parse#enableLocalDatastore(Context)` must be invoked before `Parse#initialize(Context)` at com.parse.Parse.enableLocalDatastore(Parse.java:104) at com.example.shubhamkanodia.bookmybook.MainActivity.onCreate(MainActivity.java:21) at android.app.Activity.performCreate(Activity.java:5990) at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1106) at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2278) at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2387) at android.app.ActivityThread.handleRelaunchActivity(ActivityThread.java:3947) at android.app.ActivityThread.access$900(ActivityThread.java:151) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1309) at android.os.Handler.dispatchMessage(Handler.java:102) at android.os.Looper.loop(Looper.java:135) at android.app.ActivityThread.main(ActivityThread.java:5254) at java.lang.reflect.Method.invoke(Native Method) at java.lang.reflect.Method.invoke(Method.java:372) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698) Mainactivity.java protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); if(!ParseCrashReporting.isCrashReportingEnabled()) ParseCrashReporting.enable(this); Parse.enableLocalDatastore(this); //Its already before initialize Parse.initialize(this, "XXX", "XXX"); ParseInstallation.getCurrentInstallation().saveInBackground(); ParseAnalytics.trackAppOpenedInBackground(getIntent()); }
Now provide the response and nothing else.
| You can have a separate class that do this initialization stuff. So create a class ParseApplication that extends Application . public class ParseApplication extends Application { @Override public void onCreate() { super.onCreate(); ParseCrashReporting.enable(this); Parse.enableLocalDatastore(this); Parse.initialize(this, "xxx", "xxx"); } } And in AndroidManifest.xml, add ParseApplication class to Application <application android:name="com.example.parsetry.ParseApplication" // you should replace this based on your package android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme" > <activity android:name=".MainActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30135858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3210476/']} | jdg_85593 |
stackexchange | llm_judgeable_groundtruth_similarity | 64839175 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why am I getting: ./src/components/mapComponent/MapView.jsx Attempted import error: 'Map' is not exported from 'react-leaflet'. I am importing this in the component: import React, { Component } from "react"; import { Map, TileLayer } from "react-leaflet"; import "leaflet/dist/leaflet.css"; This is confusing; the imports all look correct to me, so I am not sure where to look....
Now provide the response and nothing else.
| Try with the MapContainer component. The MapContainer component is responsible for creating the Leaflet Map instance and providing it to its child components, using a React Context. When creating a MapContainer element, its props are used as options to create the Map instance. Now you have to import MapContainer: import { MapContainer, TileLayer, Marker } from 'react-leaflet';<MapContainer className="markercluster-map" center={[51.0, 19.0]} zoom={4} maxZoom={18}> <TileLayer url="https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png" attribution='© <a href="http://osm.org/copyright">OpenStreetMap</a> contributors' /></MapContainer> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/64839175', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14543924/']} | jdg_85594
stackexchange | llm_judgeable_groundtruth_similarity | 627169 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
So, this is the situation. It seems we need to have an open TCP port 5432 to the world, where a customer has access to his PostgreSQL database. For obvious reasons, we can't say just "no", only as a last-last resort. What are the biggest troubles? How can I defend our infrastructure? Anyways: why shouldn't it be opened to the world? I think, maybe it is more secure than some 20 year old, unmaintained FTP server. P.S. VPN isn't ok. Some encryption maybe (if I can give him a JDBC connection URL which works ).
Now provide the response and nothing else.
| Require SSL, keep SELinux turned on, monitor the logs, and use a current PostgreSQL version . Server side Require SSL In postgresql.conf set ssl=on and make sure you have your keyfile and certfile installed appropriately (see the docs and the comments in postgresql.conf ). You might need to buy a certificate from a CA if you want to have it trusted by clients without special setup on the client. In pg_hba.conf use something like: hostssl theuser thedatabase 1.2.3.4/32 md5 ... possibly with "all" for user and/or database, and possibly with a wider source IP address filter. Limit users who can log in, deny remote superuser login Don't allow "all" for users if possible; you don't want to permit superuser logins remotely if you can avoid the need for it. Limit rights of users Restrict the rights of the user(s) that can log in. Don't give them CREATEDB or CREATEUSER rights. REVOKE the CONNECT right from PUBLIC on all your databases, then give it back to only the users/roles that should be able to access that database. (Group users into roles and grant rights to roles, rather than directly to individual users). Make sure users with remote access can only connect to the DBs they need, and only have rights to the schemas, tables, and columns within that they actually need. This is good practice for local users too, it's just sensible security. Client setup In PgJDBC, pass the parameter ssl=true : To instruct the JDBC driver to try and establish a SSL connection you must add the connection URL parameter ssl=true. ... and install the server certificate in the client's truststore, or use a server certificate that's trusted by one of the CAs in Java's built-in truststore if you don't want the user to have to install the cert. Ongoing action Now make sure you keep PostgreSQL up to date . PostgreSQL has only had a couple of pre-auth security holes, but that's more than zero, so stay up to date. You should anyway, bugfixes are nice things to have. Add a firewall in front if there are large netblocks/regions you know you don't ever need access from. Log connections and disconnections (see postgresql.conf ). Log queries if practical. Run an intrusion detection system or fail2ban or similar in front if practical. For fail2ban with postgres, there is a convenient how-to here Monitor the log files. Bonus paranoia Extra steps to think about... Require client certificates If you want, you can also use pg_hba.conf to require that the client present an X.509 client certificate trusted by the server. It doesn't need to use the same CA as the server cert, you can do this with a homebrew openssl CA. A JDBC user needs to import the client certificate into their Java Keystore with keytool and possibly configure some JSSE system properties to point Java at their keystore, so it's not totally transparent. Quarantine the instance If you want to be really paranoid, run the instance for the client in a separate container / VM, or at least under a different user account, with just the database(s) they require. That way if they compromise the PostgreSQL instance they won't get any further. Use SELinux I should't have to say this, but ... Run a machine with SELinux support like RHEL 6 or 7, and don't turn SELinux off or set it to permissive mode . Keep it in enforcing mode. Use a non-default port Security by only obscurity is stupidity. Security that uses a little obscurity once you've done the sensible stuff probably won't hurt. Run Pg on a non-default port to make life a little harder for automated attackers. 
Put a proxy in front You can also run PgBouncer or PgPool-II in front of PostgreSQL, acting as a connection pool and proxy. That way you can let the proxy handle SSL, not the real database host. The proxy can be on a separate VM or machine. Use of connection pooling proxies is generally a good idea with PostgreSQL anyway, unless the client app already has a built-in pool. Most Java application servers, Rails, etc have built-in pooling. Even then, a server side pooling proxy is at worst harmless. | {} | {'log_upvote_score': 7, 'links': ['https://serverfault.com/questions/627169', 'https://serverfault.com', 'https://serverfault.com/users/-1/']} | jdg_85595 |
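To make the client-facing pieces concrete, here is an illustrative (not production-ready) pairing of a pg_hba.conf rule and the JDBC URL the customer would use; the host, database, user and source address are placeholders:

```
# pg_hba.conf: TLS required, one user, one database, one source address
# TYPE     DATABASE     USER           ADDRESS          METHOD
hostssl    customerdb   customeruser   203.0.113.7/32   md5

# PgJDBC connection URL (ssl=true, as described above):
#   jdbc:postgresql://db.example.com:5432/customerdb?ssl=true
```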
stackexchange | llm_judgeable_groundtruth_similarity | 49290826 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Traceback (most recent call last): File "Final_3.py", line 42, in <module> np.savetxt("table.csv", output_arr, fmt='%s' , delimiter=",") File "/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 1381, in savetxt fh.write(v) File "/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 1291, in write_normal self.fh.write(asunicode(v)) File "/usr/local/lib/python2.7/dist-packages/numpy/compat/py3k.py", line 70, in asunicode return str(s).decode('ascii')UnicodeDecodeError: 'ascii' codec can't decode byte 0xce in position 22: ordinal not in range(128) I have also triedreload(sys) sys.setdefaultencoding('utf8') in my code, but it still didn't help.
Now provide the response and nothing else.
| There are a lot of misunderstandings about both cookies and refresh tokens and OAuth2. First, it is not true that only confidential clients can use a refresh token. The OAuth2 protocol says that confidential clients must authenticate, but does not require confidential clients. Ergo, client authentication is optional on the refresh operation. See RFC 6749, Section 6, Refreshing An Access Token . Second, you have to understand what the alternatives are: Forcing the user to enter his or her username and password every 5 minutes (whenever the access token expires) Long lived access tokens Authentication via HTTP Cookies Everybody in the world, who doesn't use refresh tokens, uses option #3. Authentication via cookies is functionally and security-wise 100% equivalent to storing a refresh token. Of course, with both tokens and cookies, there are options for where they are kept: a. HTTP only,b. secure (require TLS/SSL) andc. session (in memory) vs. persistent (local, domain storage) The "HTTP only" option applies only to cookies and, thus, may represent the only advantage of using cookies over tokens. I.e. tokens are handled via Javascript, so there's no option to keep them away from scripts. That said, the tokens are available only to Javascript from the domain of the page that stored it (or as allowed by CORS policy). So this issue can be overblown. Of course, care must be taken to always use TLS/SSL to transmit either authentication cookies or tokens. Honestly, since we know most breaches occur from within the private corporate network, end-to-end TLS is a basic requirement anymore. Finally, whether cookies or tokens are ever persisted , i.e. stored somewhere that survives closing the browser or even rebooting the device, depends on the trade-off you're making between usability and security - for your application . For applications that require a higher level of security, just keep everything in memory (i.e. session cookies, tokens in a Javascript variable). But for apps that don't require as much security and really want a session life on order of days or weeks, then you need to store them. Either way, that storage is accessible only to pages and scripts from the original domain and, thus, cookies and tokens are functionally equivalent. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49290826', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8531928/']} | jdg_85596 |
stackexchange | llm_judgeable_groundtruth_similarity | 632057 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When I put my hand on a hot metal (say) solid, I can feel my hand heating up. I suspect this is caused mostly by particles (electrons, atoms, ...?) from the solid colliding with the particles that make up my hand thereby transferring kinetic energy to it. But why does this lead to my hand heating up and not it (also?) being pushed?
Now provide the response and nothing else.
| Here is another scenario where the thing that you describe does happen: A tube is filled with a gas, for example plain air. The tube fits nicely around a finger. The fit is so precise that there is a sufficient seal, so the air cannot escape, but there is only just enough friction between the tube wall and your finger to prevent the tube from sliding off just like that. Gently increase the temperature of the gas. Now the molecules of the gas have a higher average velocity . The effect of that higher average velocity is that your finger is pushed out of the tube. The force on your finger arises from the accumulative effect of gas molecules bouncing against your skin. A gas doesn't have internal cohesion. When you give a gas opportunity to expand it will. Now consider a solid. A solid has internal cohesion. A solid does not expand like a gas at room temperature, and neither does it expand like a gas when you heat it up. (A solid will expand a little, but that's not visible to the naked eye.) When you heat a solid the molecules of the solid move back and forth faster than at colder temperature. Let's say a particular molecule has - just for an instant - a velocity away from the bulk of the solid. So the molecule is on its way to ascend out of the solid. But as that molecule ascends the forces of cohesion from the neighbouring molecules increase. As a consequence the ascending molecule is pulled back into the solid . The molecule now acquires a velocity back towards the bulk of the solid . This molecule will overshoot, and will very briefly create a local indentation of the solid. The motion of the molecules of the solid do transfer heat to your skin as you are touching the heated solid. And it's not just the outward punches that transfer heat. There is also an effect of interaction with the transient indentations from molecules overshooting on their way back into the bulk of the solid . You can think of that as a suction effect, if you will. As to your skin being pushed one way or the other: the combined effect of the "punches" and the "suctions" adds to zero. What remains is the transfer of heat. For that transfer the effect of the "punches" and the "suctions" do add up; that is the transfer of heat from a solid to your skin. | {} | {'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/632057', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/297373/']} | jdg_85597 |
stackexchange | llm_judgeable_groundtruth_similarity | 53126 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $G$ be a finite group. Then the irreducible complex representations of $G$ come in three sorts: real, complex and symplectic=quaternionic. The type of an irreducible character $\chi$ can be read of from the Frobenius-Schur indicator $$ s_2(\chi) = \frac{1}{|G|}\sum_{g\in G} \chi( g^2 ) \in \{ 1,0,-1 \}. $$Now the following seems to be true (and has been used by Noah Snyder in his interesting answer to another question ) but I can't see why:Suppose all characters of $G$ have real values. (Equivalently, every element of $G$ is conjugate to its inverse.) Then it seems that the Frobenius-Schur indicator defines a grading of the irreducible representations, and thus of the character ring. This means that if $\chi$ and $\psi$ are irreducible characters with $s_2 (\chi) = s_2(\psi)$, then all the irreducible constitutents of $\chi\psi$ have indicator $1$, and if $s_2(\chi) = -s_2(\psi)$, then all constituents of $\chi\psi$ have indicator $-1$. Why is this actually true? Of course, for example in the first case, $\chi\psi$ is afforded by a real representation, so the symplectic representations must occur with even multiplicity. But why can they not occur at all? Moreover, I would like to know if this generalizes to other fields than $\mathbb{R}$, using elements of a Brauer group instead of the Frobenius-Schur indicator. EDIT: The statement turned out to be wrong in general (see below), so the original question is in some sense obsolete. A more appropriate questions would have been why this Frobenius-Schur indicator grading is there in some (many?) cases.
Now provide the response and nothing else.
| As Zoltan suspected in his answer, the statement in my question is not true. I have now found counterexamples: Let $q$ be an odd prime power. Then $SL(2,q)$ contains a group isomorphic to the quaternion group $Q_8$, which yields a (semiregular) action of $Q_8$ on $V=(\mathbb{F}_q)^2$. Let $G = V \rtimes Q_8$ be the semidirect product. All faithful irreps of $G$ have degree $8$(they are induced from nontrivial linear characters of $V$), and are of real type. The other irreps have $V$ in its kernel and come from irreps of $Q_8$. In particular, all irreps of $G$ are self-dual, and exactly one is of symplectic type, of degree $2$. The tensor product of a faithful irrep with itself contains the symplectic irrep as summand.Note that $\mathbf{Z}(G)=1$, so there's no nontrivial grading of the irreps of $G$. Taking $q=3$ in the above yields a counterexample of order $72$, and I have to admit that I used GAP to find it. For the record, lets note that there are also four groups of order $64$, where all characters are real valued, but the FS-indicator doesn't define a grading of the irreps. These are the groups with identifiers [64, 218], [ 64, 224 ], [ 64, 243 ], [ 64, 245 ] in the SmallGroups library of GAP. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/53126', 'https://mathoverflow.net', 'https://mathoverflow.net/users/10266/']} | jdg_85598 |
stackexchange | llm_judgeable_groundtruth_similarity | 169213 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was reading a paper where the author mentions: "The coefficient of variation is primarily a descriptive statistic, but it is amenable to statistical inferences such as null hypothesis testing or confidence intervals. Standard procedures are often very dependent of the normality assumption and current work is exploring alternative procedures which are less dependent on this normality assumption." The paper is from 2010. So my question is: have there been any recent "advancements" in terms of a coefficient of variation statistic that is not dependent upon the normality assumption? And, is this what one may call a non-parametric CV?
Now provide the response and nothing else.
| The coefficient of variation is not strongly associated with the normal distribution at all. It is most obviously pertinent for distributions like the lognormal or gamma. See e.g. this thread . Looking at ratios such as interquartile range/median is possible. In many situations that ratio might be more resistant to extreme values than the coefficient of variation. The measure seems neither common nor especially useful, but it certainly predates 2010. Tastes vary, but I see no reason to call that ratio nonparametric; it just uses different parameters. A much better developed approach is to use the ratio of the second and first $L$-moment. The first $L$-moment is just the mean, but the second $L$-moment has more resistance than the standard deviation. Start (e.g.) here for more on $L$-moments. Whenever the coefficient of variation seems natural, that's usually a sign that analyses should be conducted on a logarithmic scale. If CV is (approximately) constant, then SD is proportional to the mean, which goes with comparisons and changes being multiplicative rather than additive, which implies thinking logarithmically. Note: The paper cited starts quite well, but then focuses on testing the CV when the distribution is normal. As above, if the distribution is normal, then the CV seems utterly uninteresting in practice, so the emphasis is puzzling to me. Your inclinations may differ. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/169213', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/41267/']} | jdg_85599 |
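As a concrete illustration of the L-moment alternative mentioned above, a minimal Python sketch comparing the ordinary CV with the ratio of the second to the first sample L-moment (the estimator uses the standard probability-weighted-moment form; the lognormal sample is an arbitrary example):

```python
import numpy as np

def l_moment_ratio(x):
    """Sample lambda_2 / lambda_1 ("L-CV"), via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    lam1 = b0               # first L-moment = mean
    lam2 = 2 * b1 - b0      # second L-moment
    return lam2 / lam1

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
cv = sample.std(ddof=1) / sample.mean()
print(f"CV   = {cv:.3f}")
print(f"L-CV = {l_moment_ratio(sample):.3f}")
```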
stackexchange | llm_judgeable_groundtruth_similarity | 51860066 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I wanna to make a listview,and here is several button in it.Each button for a website name.When click the button then access the website. Here is XAML: <Page x:Class="App1.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="using:App1" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" Background="{ThemeResource ApplicationPageBackgroundThemeBrush}"> <Grid> <ListView Name="WebsiteListView"> <ListView.ItemTemplate> <DataTemplate x:DataType="local:WebsiteModel"> <Button Content="{x:Bind WebsiteName}" Click="Button_Click"></Button> </DataTemplate> </ListView.ItemTemplate> </ListView> </Grid></Page> Here is Model of the WebsiteModel: using System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading.Tasks;namespace App1{ class WebsiteModel { public string WebsiteName { get; set; } public string WebsiteURL { get; set; } }} And finally here is code-behind: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Runtime.InteropServices.WindowsRuntime; using Windows.Foundation; using Windows.Foundation.Collections; using Windows.UI.Xaml; using Windows.UI.Xaml.Controls; using Windows.UI.Xaml.Controls.Primitives; using Windows.UI.Xaml.Data; using Windows.UI.Xaml.Input; using Windows.UI.Xaml.Media; using Windows.UI.Xaml.Navigation; namespace App1 { public sealed partial class MainPage : Page { List<WebsiteModel> WebsiteModelList = new List<WebsiteModel>(); public MainPage() { this.InitializeComponent(); WebsiteModelList.Add(new WebsiteModel() { WebsiteName = "Google", WebsiteURL = "https://www.google.com" }); WebsiteModelList.Add(new WebsiteModel() { WebsiteName = "StackOverFlow", WebsiteURL = "https://stackoverflow.com" }); WebsiteListView.ItemsSource = WebsiteModelList; } private void Button_Click(object sender, RoutedEventArgs e) { } }} Now the problem is when click the button,I should get WebsiteURL and then access the URL.But I can't find a way to get the model and further to get the WebsiteURL from model. Is there any way to do it?Or my way is totally wrong and there is a better way to do it?Thank you.
Now provide the response and nothing else.
| The problem is that the class extends native ES6 class and is transpiled to ES5 with Babel. Transpiled classes cannot extend native classes, at least without additional measures. class TranspiledFoo extends NativeBar { constructor() { super(); }} results in something like function TranspiledFoo() { var _this = NativeBar.call(this) || this; return _this;}// prototypically inherit from NativeBar Since ES6 classes should be only called with new , NativeBar.call results in error. ES6 classes are supported in any recent Node version, they shouldn't be transpiled. es2015 should be excluded from Babel configuration, it's preferable to use env preset set to node target . The same problem applies to TypeScript . The compiler should be properly configured to not transpile classes in order for them to inherit from native or Babel classes. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/51860066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9876346/']} | jdg_85600 |
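A minimal sketch of the Babel 7 configuration that answer suggests, targeting the running Node version so that class declarations are left untranspiled (the file name and exact preset options may differ in your setup):

```js
// babel.config.js
module.exports = {
  presets: [
    // Let preset-env decide what to transpile based on the running Node version;
    // modern Node supports ES6 classes natively, so `class` is left alone.
    ['@babel/preset-env', { targets: { node: 'current' } }],
  ],
};
// For TypeScript, the analogous fix is setting "target" to "ES2015" or later in tsconfig.json.
```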
stackexchange | llm_judgeable_groundtruth_similarity | 350084 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two nested OLSes: >>> f1 = 'Y ~ np.log(X + 1) + X + np.log(X) + np.exp(-X) + np.sqrt(X) + np.power(X, 4)'>>> f2 = 'Y ~ np.log(X + 1) + X + np.log(X) + np.exp(-X) + np.sqrt(X) + np.power(X, 4) + np.power(X, -1) + np.power(X, -2) + np.power(X, 2) + np.power(X, 3) + np.exp(X) + np.power(X, -3)' When analysing this dataset (17M) with statsmodels I have discovered that the bigger model has larger sum of squared residuals (SSR): >>> ssr1 = sm.ols(formula=f1, data=DF).fit().ssr>>> ssr2 = sm.ols(formula=f2, data=DF).fit().ssr>>> ssr1 - ssr2-1037.4640076523647 Same happens with sklearn. As OLS is expected to minimize SSR, I am surprised that inclusion of extra regressors increased it. As regressors of f2 are a superset of those of f1 , ssr2 may be at least as small as ssr1 by simply setting coefficients of all extra parameters to 0. So why OLS fails to minimize SSR? Is it some numerical artefact which may be ignored or something more serious? VIF for f1 : np.log(X + 1) 13567.4057637X 11541.1910286np.log(X) 446.977604585np.exp(-X) 38.674353333np.sqrt(X) 53077.4595776np.power(X, 4) 34.2445023554 VIF for f2 : np.log(X + 1) 28942.1737842X 180304.260711np.log(X) 121.384763923np.exp(-X) 22.2770600233np.sqrt(X) 319544.359507np.power(X, 4) 4606.23688654np.power(X, -1) 113.759877386np.power(X, -2) 2452.98781276np.power(X, 2) 19895.5099527np.power(X, 3) 24609.1808767np.exp(X) 3.45225568589np.power(X, -3) 1605.22900297
Now provide the response and nothing else.
| The direct problem is that your data matrix has a ridiculously high condition number, and we're probably not even finding the solution to the least squares problem. Up to numerical precision, the data matrix is rank deficient, you're not successfully solving the least squares problems, and the properties of a least squares solution don't apply. As @Josef in the comment suggests, you have an ill-posed problem because of massively unequal scaling. Standardizing the variables improves matters quite a lot. Even with better scaling, it's quite problematic to run a kitchen sink regression with neither clear motivation for the right hand side variables nor regularlization to reduce overfitting. I'd have serious concerns as to model performance on new, independent test data. Discussion Let $X$ be an $n$ by $k$ matrix representing your data. A classic assumption to run OLS is no collinearity, that $X$ is rank $k$. If your data lies in a $s < k$ dimensional subspace, things are going to go haywire without regularization. What does this mean once we move to computers and actual, numerical computing? A quick diagnostic is to examine the condition number . Wolfram's Mathworld says The base-b logarithm of C is an estimate of how many base-b digits are lost in solving a linear system with that matrix. In other words, it estimates worst-case loss of precision. With X2 = np.matrix([np.log(DF.X + 1), DF.X, np.log(DF.X), np.exp(-DF.X), np.sqrt(DF.X), np.power(DF.X, 4), np.power(DF.X, -1), np.power(DF.X, -2), np.power(DF.X, 2), np.power(DF.X, 3), np.exp(DF.X), np.power(DF.X, -3)])X2_condition_number = np.linalg.cond(X2) You find that the condition number is about 7.7e+15, which means you can lose 15-16 digits of precision when you solve a linear system with X2. Double precision floating point only has 15-17 digits of precision! The first digit of the solution returned by numpy could be wrong. The solution isn't to use more advanced numerical techniques, it's that numpy is being asked to solve a ridiculous, collinear least squares problem. Such systems are ridiculously sensitive to small changes. Example of a system with a ridiculous condition number The matrix on the left has a condition number of 2e+15. $$ \begin{bmatrix} 1 & 10000000.0 \\ 1 & 10000000.1 \end{bmatrix} \begin{bmatrix}b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 1\end{bmatrix}$$ The first column and the 2nd column are almost collinear. Ways forward Standardize variables. As @Josef described in the comments, you can have ill-conditioned problems because of widely unequal scaling. X3 = (X2 - X2.mean(axis=1)) / X2.std(axis=1) X3 now has a condition number of 2.9e+3. Another common scaling in machine learning is max-min scaling. Other ways forward include: Drop regressors that are collinear with existing regressors (i.e. stop doing a kitchen sink regression). Add regularlization , eg. do ridge regression or LASSO. If you're trying to fit a flexible curve, there are better things to do such as low order polynomials with splines . | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/350084', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/145631/']} | jdg_85601 |
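To make the standardize-and-regularize advice above concrete, here is a small self-contained sketch with synthetic data (the column list mirrors f1, but the data and the alpha value are made up for illustration):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 10.0, size=10_000)            # stand-in for the real X column
y = 2.0 * np.log(X) + rng.normal(size=X.size)      # stand-in response

# Same flavour of transformed design matrix as f1 in the question.
design = np.column_stack([np.log(X + 1), X, np.log(X), np.exp(-X),
                          np.sqrt(X), np.power(X, 4)])

print("condition number, raw:         ", np.linalg.cond(design))
print("condition number, standardized:", np.linalg.cond(StandardScaler().fit_transform(design)))

# Standardizing plus a mild ridge penalty gives a numerically stable fit.
pipe = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
pipe.fit(design, y)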
stackexchange | llm_judgeable_groundtruth_similarity | 52810 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This popular tweet (over 10000 retweets) claims that: This study shows that after three months the vaccine effectiveness ofPfizer & Moderna against Omicron is actually negative. Pfizercustomers are 76.5% more likely and Moderna customers are 39.3% morelikely to be infected than unvaxxed people. It offers the following image as proof: It also links this study: Vaccine effectiveness against SARS-CoV-2 infection with the Omicron or Delta variants following a two-dose or booster BNT162b2 or mRNA-1273 vaccination series: A Danish cohort study I want to verify two things: Is this assessment of the study accurate? Is the study itself trustworthy and accurate?
Now provide the response and nothing else.
| No it doesn't. That is directly mentioned in the study: The negative estimates in the final period arguably suggest different behaviour and/or exposure patterns in the vaccinated and unvaccinated cohorts causing underestimation of the VE. This was likely the result of Omicron spreading rapidly initially through single (super-spreading) events causing many infections among young, vaccinated individuals. This study essentially compared vaccinated and unvaccinated people directly, it didn't try to adjust for many confounding factors. This is a really simple study, it's only 6 pages in total. It's more like looking at raw data, there are a ton of potential confounders here that simply aren't part of this study. You can see from the quoted paragraph that the authors clearly think that the negative values are an artifact of the study design, and not any real effect. | {} | {'log_upvote_score': 6, 'links': ['https://skeptics.stackexchange.com/questions/52810', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/59737/']} | jdg_85602 |
stackexchange | llm_judgeable_groundtruth_similarity | 55281672 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the CSS code below, which draws a + symbol, but it doesn't match the design: the plus needs to be thinner. See the snippet and codepen.

.plus {
  position: relative;
  border: 1px dotted white;
  width: 3px;
  height: 3px;
  background-color: black;
  box-sizing: border-box;
  transform: scale(11);
}

<div class="plus"></div>

It should look like the symbol in the screenshot below. Any other styling is also fine for me, as long as it looks like the snapshot.
Now provide the response and nothing else.
| We can achieve this using one gradient like below: .plus { --b: 4px; /* the thickness */ width: 40px; /* the size */ aspect-ratio: 1; border: 10px solid #000; /* the outer space */ background: conic-gradient(from 90deg at var(--b) var(--b),#000 90deg,#fff 0) calc(100% + var(--b)/2) calc(100% + var(--b)/2)/ calc(50% + var(--b)) calc(50% + var(--b)); display: inline-block;}.alt { border: none; margin: 10px; background: conic-gradient(from 90deg at var(--b) var(--b),#fff 90deg,#000 0) calc(100% + var(--b)/2) calc(100% + var(--b)/2)/ calc(50% + var(--b)) calc(50% + var(--b));}.radius { border-radius: 50%;} <div class="plus"></div><div class="plus alt"></div><div class="plus radius"></div> Old answer Use multiple background like below: .plus { display:inline-block; width:50px; height:50px; background: linear-gradient(#fff 0 0), linear-gradient(#fff 0 0), #000; background-position:center; background-size: 50% 2px,2px 50%; /*thickness = 2px, length = 50% (25px)*/ background-repeat:no-repeat;}.alt { background: linear-gradient(#000 0 0), linear-gradient(#000 0 0); background-position:center; background-size: 50% 2px,2px 50%; /*thickness = 2px, length = 50% (25px)*/ background-repeat:no-repeat;}.radius { border-radius:50%;} <div class="plus"></div><div class="plus alt"></div><div class="plus radius"></div> And here is with transparency: .plus { width:50px; height:50px; display:inline-block; background: linear-gradient(#000 0 0) top left, linear-gradient(#000 0 0) top right, linear-gradient(#000 0 0) bottom left, linear-gradient(#000 0 0) bottom right; background-size: calc(50% - 1px) calc(50% - 1px); /*thickness = 2px (2*1px) */ background-repeat:no-repeat; border:10px solid #000; /*length = 30px (50px - 2x10px) */ box-sizing:border-box;}.radius { border-radius:50%;}body { background:pink;} <div class="plus"></div><div class="plus radius"></div> We can add CSS variable to easily control the overal shape: .plus { --t:2px; /* Thickness */ --l:40px; /* size of the symbol */ --s:10px; /* space around the symbol */ --c1:#fff; /* Plus color*/ --c2:#000; /* background color*/ display:inline-block; width:var(--l); height:var(--l); padding:var(--s); box-sizing:border-box; /*Remove this if you don't want space to be included in the size*/ background: linear-gradient(var(--c1) 0 0) content-box, linear-gradient(var(--c1) 0 0) content-box, var(--c2); background-position:center; background-size: 100% var(--t),var(--t) 100%; background-repeat:no-repeat;}.radius { border-radius:50%;} <div class="plus"></div><div class="plus" style="--l:35px;--t:3px;--c2:green"></div><div class="plus" style="--l:50px;--t:1px;--s:5px;--c1:red;"></div><div class="plus" style="--l:35px;--t:5px;--s:0px;--c1:blue;--c2:orange;"></div><br><div class="plus radius"></div><div class="plus radius" style="--l:35px;--t:3px;--c2:green"></div><div class="plus radius" style="--l:50px;--t:1px;--s:5px;--c1:red;"></div><div class="plus radius" style="--l:35px;--t:5px;--s:0px;--c1:blue;--c2:orange;"></div> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/55281672', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5218497/']} | jdg_85603 |
stackexchange | llm_judgeable_groundtruth_similarity | 106209 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am fitting a linear regression in R, and one predictor shows a linear relationship before a certain point and a cubic relationship after it. What I did was split the dataset at that point and fit the model with the linear term on one side and the cubic term on the other. However, that split point is currently chosen by hand. Does anyone know how to determine that point with a formal algorithm?
Now provide the response and nothing else.
| While you don't specify, I assume you mean the model to be both continuous and smooth at the join. Such a model might be called a semi-natural cubic spline (natural on one side, ordinary on the other). You can build one by having a linear predictor and a cubic-spline basis function of the form $(x-k)_+^3$ where $k$ is the knot location, and $(.)_+$ means that when the argument is negative the output is 0, while when it's non-negative, the argument is returned unchanged. e.g. see section 5.2 of Hastie, Tibshirani and Friedman's "Elements of Statistical Learning" 2nd ed (10th printing is downloadable from at least two of the author's web pages) -- see the link here (scroll down if you don't see it). Your problem reduces to one of automated knot selection (sometimes called knot placement). Hastie et al do discuss this knot selection issue (and problems with it, along with some alternatives to it), that's probably a good place to start. There are many papers relating to this issue (one example is here , though it may not be a suitable reference for you at present). More prosaically, one such knot can be placed by trying a variety of points; it's equivalent to selecting between the possible candidate locations based on some criterion (such as AIC). However, this has the same problem that model selection more generally has (for which, again, Hastie et al is probably a good place to start, or you could read say chapter 4 of Frank Harrell's Regression Modeling Strategies ). You might want to investigate approaches like cross-validation. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/106209', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/45628/']} | jdg_85604 |
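A minimal R sketch of the "score candidate knots with a criterion" approach mentioned above, on toy data and using AIC (with all the model-selection caveats already noted):

set.seed(1)
x <- runif(500, 0, 10)
y <- 2 * x + 0.5 * pmax(x - 6, 0)^3 + rnorm(500)   # linear below the knot, cubic added above

tp3 <- function(x, k) pmax(x - k, 0)^3             # truncated power basis (x - k)_+^3

candidates <- seq(1, 9, by = 0.1)
aics <- sapply(candidates, function(k) AIC(lm(y ~ x + tp3(x, k))))
best_k <- candidates[which.min(aics)]              # automatically selected knot
best_fit <- lm(y ~ x + tp3(x, best_k))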
stackexchange | llm_judgeable_groundtruth_similarity | 11754575 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Jelly Bean doesn't seem to like the maxlength attribute on HTML text inputs. It does restrict the number of input characters, but the moment you attempt to type beyond the allowed number of characters, the text box breaks: you can no longer type in any other text box, nor delete the characters already entered in that text box. If you haven't run into this yet, try it yourself on a simple HTML page and check. Please tell me if you have any idea how to solve this.
Now provide the response and nothing else.
| I have also experienced the same problem in my app for now I have handled it with js, which removes all maxlength attributes from input text and textarea and stops user from inputing more than the required text. Here it is assumed that all input text and textarea have unique id. Code is also available at jsfiddle $(document).ready(function () { var ver = window.navigator.appVersion; ver = ver.toLowerCase(); if ( ver.indexOf("android 4.1") >= 0 ){ var idMaxLengthMap = {}; //loop through all input-text and textarea element $.each($(':text, textarea, :password'), function () { var id = $(this).attr('id'), maxlength = $(this).attr('maxlength'); //element should have id and maxlength attribute if ((typeof id !== 'undefined') && (typeof maxlength !== 'undefined')) { idMaxLengthMap[id] = maxlength; //remove maxlength attribute from element $(this).removeAttr('maxlength'); //replace maxlength attribute with onkeypress event $(this).attr('onkeypress','if(this.value.length >= maxlength ) return false;'); } }); //bind onchange & onkeyup events //This events prevents user from pasting text with length more then maxlength $(':text, textarea, :password').bind('change keyup', function () { var id = $(this).attr('id'), maxlength = ''; if (typeof id !== 'undefined' && idMaxLengthMap.hasOwnProperty(id)) { maxlength = idMaxLengthMap[id]; if ($(this).val().length > maxlength) { //remove extra text which is more then maxlength $(this).val($(this).val().slice(0, maxlength)); } } }); } }); The bug for this issue was already opened at 35264 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11754575', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1047337/']} | jdg_85605 |
stackexchange | llm_judgeable_groundtruth_similarity | 88317 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I'm reading Mukai's book "An introduction to invariants and moduli", and I am having trouble understanding one of his examples. It is example 3.49 on page 101. The setup is as follows. Let $G$ be a finite group, considered as an algebraic group over a field $k$. The coordinate ring of $G$ is then just the set of functions $G \rightarrow k$ with the usual pointwise addition and multiplication. This can be identified with the group ring $k[G]$ in the obvious way (an element $[g] \in k[G]$ corresponds to the function $G \rightarrow k$ that takes $g$ to $1$ and $h$ to $0$ for $h \neq g$). Under this identification, it seems to me that the coproduct is the function $$\phi : k[G] \rightarrow k[G] \otimes k[G]$$ $$\phi([g]) = \sum_{h \in G} [h] \otimes [h^{-1} g]$$ However, Mukai asserts that if $G$ is the finite cyclic group of order $n$, then the coordinate ring of $G$ is $k[t]/(t^n-1)$ with the coproduct $t \mapsto t \otimes t$. These do not seem like the same thing to me -- what am I doing wrong?
Now provide the response and nothing else.
| I think the author accidentally described the dual of the Hopf algebra you're thinking of. Finite group rings are usually endowed with multiplication $(g,h)\mapsto gh$ and comultiplication $g \mapsto g\otimes g$ (see here ). The coordinate ring $k[G]$ is obtained by dualizing. Then $g \mapsto g\otimes g$ becomes $e_g^2 = e_g$, where $e_g$ is the function on $G$ that maps $g$ to $1$ and all other group elements to $0$. Comultiplication will look exactly the way you described it (i.e. $e_g \mapsto \sum_h e_{gh^{-1}}\otimes e_h$). | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/88317', 'https://mathoverflow.net', 'https://mathoverflow.net/users/21356/']} | jdg_85606 |
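Spelling out the dualization, with $\{e_g\}$ the basis of the coordinate ring dual to the group-element basis of $k[G]$:

$$(e_a e_b)(g) = (e_a \otimes e_b)(\Delta g) = e_a(g)\, e_b(g) = \delta_{a,g}\,\delta_{b,g} \quad\Longrightarrow\quad e_a e_b = \delta_{a,b}\, e_a,$$

$$\Delta(e_g)(a \otimes b) = e_g(ab) = \delta_{g,\,ab} \quad\Longrightarrow\quad \Delta(e_g) = \sum_{ab = g} e_a \otimes e_b = \sum_{h \in G} e_h \otimes e_{h^{-1}g},$$

which (after reindexing) is the same sum as $\sum_h e_{gh^{-1}} \otimes e_h$ and recovers exactly the coproduct formula written in the question.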
stackexchange | llm_judgeable_groundtruth_similarity | 1922771 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Python 2.5, I have a float and I'd like to obtain and manipulate its bit pattern as an integer. For example, suppose I have x = 173.3125 In IEEE 754 format, x 's bit pattern in hexadecimal is 432D5000 . How can I obtain & manipulate (e.g., perform bitwise operations) on that bit pattern?
Now provide the response and nothing else.
| You can get the string you want (apparently implying a big-endian, 32-bit representation; Python internally uses the native endianity and 64-bits for floats) with the struct module: >>> import struct>>> x = 173.125>>> s = struct.pack('>f', x)>>> ''.join('%2.2x' % ord(c) for c in s)'432d2000' this doesn't yet let you perform bitwise operations, but you can then use struct again to map the string into an int: >>> i = struct.unpack('>l', s)[0]>>> print hex(i)0x432d2000 and now you have an int which you can use in any sort of bitwise operations (follow the same two steps in reverse if after said operations you need to get a float again). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1922771', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/135682/']} | jdg_85607 |
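Wrapping both directions into helpers (a sketch using the unsigned format character so sign-bit manipulation stays in range; not the only way to do it):

import struct

def float_to_bits(x):
    """Return the 32-bit IEEE 754 bit pattern of x as an unsigned int."""
    return struct.unpack('>I', struct.pack('>f', x))[0]

def bits_to_float(i):
    """Inverse of float_to_bits."""
    return struct.unpack('>f', struct.pack('>I', i & 0xFFFFFFFF))[0]

bits = float_to_bits(173.3125)
print(hex(bits))                          # 0x432d5000, matching the question
print(bits_to_float(bits | 0x80000000))   # set the sign bit -> -173.3125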
stackexchange | llm_judgeable_groundtruth_similarity | 13785720 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to show a "Thanks for using" message when the application closes. Which event handles application closing?
Now provide the response and nothing else.
| Suppose your activities are stacked as A -> B -> C -> D. When the user presses the back button in activity D they go to C, pressing back in C takes them to B, and pressing back again lands them on the launcher activity A. So you can show the message to the user when the back button is pressed:

@Override
public void onBackPressed() {
    Toast.makeText(context, "Thanks for using application!!", Toast.LENGTH_LONG).show();
    finish();
    return;
}

Or, if you want to react when the HOME key is pressed directly from any activity, you first need to override:

@Override
public void onAttachedToWindow() {
    super.onAttachedToWindow();
    this.getWindow().setType(WindowManager.LayoutParams.TYPE_KEYGUARD);
}

After overriding the method above, you can listen for the HOME key press in your activity using the onKeyDown() method:

@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    if (keyCode == KeyEvent.KEYCODE_HOME) {
        Toast.makeText(context, "Thanks for using application!!", Toast.LENGTH_LONG).show();
    }
    return super.onKeyDown(keyCode, event);
}
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13785720', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1798362/']} | jdg_85608
stackexchange | llm_judgeable_groundtruth_similarity | 20000560 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a very simple layout that I can't make look the way I want. It's a LinearLayout with a button and a Switch. I want them to show one above the other, but with their width equal to half of the parent layout.

|--LinearLayout---------|
|                       |
|  -----------          |
| |  Switch   |         |
|  -----------          |
|  -----------          |
| |  button   |         |
|  -----------          |
 ------------------------

I've been looking at other similar answers on SO but I couldn't find a solution that works for me. This is what I've tried so far:

<LinearLayout
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical"
    android:weightSum="2" >

    <Switch
        android:id="@+id/remember_me_switch"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_weight="1"
        android:hint="@string/remember" />

    <Button
        android:id="@+id/share_button"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_weight="1"
        android:onClick="loginOnclick"
        android:text="@string/login_button_text" />

</LinearLayout>

With this, the button and the switch take all the width of the parent instead of only half. I've also tried android:layout_width="0dp" on the children, but that makes them disappear. Any help, please?
Now provide the response and nothing else.
| One possible way is to have a master horizontal LinearLayout that splits the width to 2, and inside it have the vertical layout <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="horizontal" > <LinearLayout android:layout_height="wrap_content" android:layout_width="0dp" android:layout_weight="1" android:orientation="vertical" > <Switch android:id="@+id/remember_me_switch" android:layout_width="match_parent" android:layout_height="wrap_content" android:hint="@string/remember" /> <Button android:id="@+id/share_button" android:layout_width="match_parent" android:layout_height="match_parent" android:onClick="loginOnclick" android:text="@string/login_button_text" /> </LinearLayout> <!-- Right side spacer --> <View android:layout_width="0dp" android:layout_height="1dp" android:layout_weight="1" /> </LinearLayout> | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/20000560', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1909783/']} | jdg_85609 |
stackexchange | llm_judgeable_groundtruth_similarity | 3635873 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use the using statement for SqlConnection. It's good for performance because it forces Dispose() to be called, which simply releases the connection back to the pool sooner. However, I realized that the object created in a using statement cannot be reassigned. I cannot do this:

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    //...
    connection = new SqlConnection(connectionString2);
    //...
    connection = new SqlConnection(connectionString3);
}

I was wondering if I could drop the using and do something like this instead:

{
    SqlConnection connection = new SqlConnection(connectionString);
    connection.Open();
    //...
    connection = new SqlConnection(connectionString2);
    //...
    connection = new SqlConnection(connectionString3);
}

The SqlConnection will not be accessible after the last } brace. Will Dispose() be called immediately when the object goes out of scope?
Now provide the response and nothing else.
| No, things won't get automatically cleaned up in your second example (in fact, with the code you have, you'll leave several connections hanging open). Not only that, but you lose the automatic cleanup in case of Exceptions being thrown inside the using block. Remember that a using block decomposes into: SqlConnection connection = new SqlConnection(connectionString);try{ connection.Open(); // Do work}finally{ connection.Dispose();} If you're really using different connections and each connection is Disposed of at the end of the block, I would use several using blocks: using(SqlConnection connection = new SqlConnection(connectionString)){ connection.Open(); // Do Work}// First connection is disposedusing(SqlConnection connection = new SqlConnection(connectionString2)){ // Do More Work}// Second connection is disposedusing(SqlConnection connection = new SqlConnection(connectionString3)){ // Do More Work}// Last connection is dipsosed | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3635873', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/428789/']} | jdg_85610 |
stackexchange | llm_judgeable_groundtruth_similarity | 1940253 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a question regarding inheritance in Java. If I have this base class:

class Parent {
    private String lastName;

    public Parent() {
        lastName = "Unassigned";
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String name) {
        lastName = name;
    }
}

And this subclass:

class Child extends Parent {
    private String surName;

    public Child(String name) {
        surName = name;
    }

    public void setFullName(String first, String last) {
        surName = first;
        ..... = last;
    }
}

I want to write a method in the subclass that can change both surName and lastName. So I wonder how I can reach the private member lastName, which lives in the base class. Should I use the inherited setLastName() method, or can I access the variable without going through it? I also have a question about overriding the base class's setLastName() method in the subclass: how do I access the private member lastName of the base class from the override?
Now provide the response and nothing else.
| You cannot access private members from an inherited class. You can either make them protected or use the setter (the latter one is preferable in my view). You can override setLastName() in the inherited class and change the private member using the setter of the base class by means of the "super"-keyword (super.setLastName("something")). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1940253', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/445600/']} | jdg_85611 |
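A short sketch of both options described above, using the classes from the question:

class Child extends Parent {
    private String surName;

    public Child(String name) {
        surName = name;
    }

    // Option 1: call the inherited public setter; lastName itself stays private in Parent.
    public void setFullName(String first, String last) {
        surName = first;
        setLastName(last);
    }

    // Option 2: override the setter and delegate to the base class with super.
    @Override
    public void setLastName(String name) {
        super.setLastName(name);   // still the only way to reach Parent's private field
        // extra Child-specific logic could go here
    }
}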
stackexchange | llm_judgeable_groundtruth_similarity | 2084830 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use pthread_create(&thread1, &attrs, //... , //...); and, if some condition occurs, I need to kill this thread. How can I do that?
Now provide the response and nothing else.
| First store the thread id:

pthread_create(&thr, ...)

then later call

pthread_cancel(thr)

However, this is not a recommended programming practice! It's better to use an inter-thread communication mechanism like semaphores or messages to communicate to the thread that it should stop execution. Note that pthread_kill(...) does not actually terminate the receiving thread, but instead delivers a signal to it, and what happens then depends on the signal and the signal handlers.
| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2084830', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/140934/']} | jdg_85612
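A minimal sketch of the cooperative shutdown recommended above: an atomic flag that the worker thread polls (the sleep intervals are purely illustrative; compile with -pthread):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool stop_requested = false;

static void *worker(void *arg)
{
    (void)arg;
    while (!atomic_load(&stop_requested)) {
        /* do one bounded chunk of work, then check the flag again */
        usleep(100 * 1000);
    }
    puts("worker: stopping cleanly");
    return NULL;
}

int main(void)
{
    pthread_t thr;
    pthread_create(&thr, NULL, worker, NULL);

    sleep(1);                              /* ... some condition occurs ... */
    atomic_store(&stop_requested, true);   /* ask the thread to stop */
    pthread_join(thr, NULL);               /* wait for it to finish */
    return 0;
}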
stackexchange | llm_judgeable_groundtruth_similarity | 16766296 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What are the possible reasons for CPF4131 to be appearing on a file in a program. And how do we go about resolving this? Message . . . . : Error message CPF4131 appeared during OPEN for file TEST (C S D F). Cause . . . . . : RPG procedure TEST2 in program TESTLIB/TEST2received the message CPF4131 while performing an implicit OPEN operation on file SOURCE. The actual file is SOURCE.
Now provide the response and nothing else.
| It's a record format level check . This means that the record format in the file you are opening does not have the same format level ID that was found when the program was compiled. In other words there is something different in the layout of the fields in the record. Perhaps you the file layout has been changed, if so, then you probably will want to recompile your RPG program. The ID is computed as hash of the number & order of all fields in a record, with their names, and any attributes that could affect how fields are stored or retrieved from the record buffer by a program. So field name, data type, length, field position are examples of what can affect the ID, but column headings or field text do not. In general, you will want to learn how to look at your job log to look at prior messages to see what has caused a problem. DSPJOB or WRKJOB is perhaps the most common method, selecting option 10, then pressing F10 to see low level messages, & F18 (shift F6) to go to the end of the list, and page back up. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16766296', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1377207/']} | jdg_85613 |
stackexchange | llm_judgeable_groundtruth_similarity | 55753176 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to authorize API Gateway requests for a Lambda Proxy Integration resource using a Cognito User pool. From the client, all requests work fine without an Authorizer. When I add an Authorizer, GET requests work when authorized, but a POST/PUT/DELETE request gives me this error: 401 Access to XMLHttpRequest at [Endpoint] from origin [client] has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource I have selected 'Enable CORS' for the resource but it still won't work. js request: const jwt = this.$store.state.user .getSignInUserSession() .getIdToken() .getJwtToken();const config = { headers: { authorization: jwt, }, id: generatedID, name: 'generatedName', }; axios.post(endpoint, config) .then((val) => { this.info = val; }) .catch(err => console.log(err)); auth config: authorizer config If I change request type from POST to GET , it works. If I remove the Authorizer from API Gateway, it works. What am I missing for POST/PUT/etc? I want to get a 200/201 response and for the request to pass API Gateway authorization.
Now provide the response and nothing else.
| This is possible , and it turns out, not hard . The reason the solution is not obvious is because typescript relies on the rootDir to decide the directory structure of the output (see this comment from Typescript's bossman ), and only code included in the output or in package dependencies can be imported. If you set rootDir to the root of your project, package.json gets emitted to the root of outDir and can be imported. But then your compiled src files get written to outDir/src . If you set rootDir to src , files in there will compile to the root of outDir . But now the compiler won't have a place to emit package.json , so it issues "an error because the project appears to be misconfigured" (bossman's words). solution: use separate Typescript sub-projects A Typescript project is defined by a tsconfig file, is self-contained, and is effectively bounded by its rootDir . This is a very good thing, as it lines up with principles of encapsulation . You can have multiple projects (e.g. a main and a set of libs) each in their own directory and with their own tsconfig. Dependencies between them are declared in the tsconfig file using Typescript Project References . I admit, the term "projects" is a poor one, as intuitively it refers to the whole shebang, but "modules" and "packages" are already taken in this context. Think of them as "subprojects" and it will make more sense. We'll treat the src directory and the root directory containing package.json as separate projects. Each will have its own tsconfig file. Give the src dir its own project. ./src/tsconfig.json : { "compilerOptions": { "rootDir": ".", "outDir": "../dist/", "resolveJsonModule": true }, "references": [ // this is how we declare a dependency from { "path": "../" } // this project to the one at the root dir` ]} Give the root dir its own project. ./tsconfig.json : { "compilerOptions": { "rootDir": ".", "outDir": ".", // if out path for a file is same as its src path, nothing will be emitted "resolveJsonModule": true, "composite": true // required on the dependency project for references to work }, "files": [ // by whitelisting the files to include, TS won't automatically "package.json" // include all source below root, which is the default. ]} run tsc --build src and voilà! This will build the src project. Because it declares a reference to the root project, it will build that one also, but only if it is out of date. Because the root tsconfig has the same dir as the outDir , tsc will simply do nothing to package.json , the one file it is configured to compile. this is great for monorepos You can isolate modules/libraries/sub-projects by putting them in their own subdirectory and giving them their own tsconfig. You can manage dependencies explicitly using Project References , as well as modularize the build: From the linked doc: you can greatly improve build times A long-awaited feature is smart incremental builds for TypeScript projects. In 3.0 you can use the --build flag with tsc . This is effectively a new entry point for tsc that behaves more like a build orchestrator than a simple compiler. Running tsc --build ( tsc -b for short) will do the following: Find all referenced projects Detect if they are up-to-date Build out-of-date projects in the correct order Don’t worry about ordering the files you pass on the commandline - tsc will re-order them if needed so that dependencies are always built first. enforce logical separation between components organize your code in new and better ways. 
It's also very easy: A root tsconfig for shared options and to build allsubprojects with a simple tsc --build command(with --force to build them from scratch) src/tsconfig.json { "compilerOptions": { "outDir": ".", // prevents this tsconfig from compiling any files // we want subprojects to inherit these options: "target": "ES2019", "module": "es2020", "strict": true, ... }, // configure this project to build all of the following: "references": [ { "path": "./common" } { "path": "./projectA" } ]} A "common" library that is prevented from importing from theother subprojects because it has no project references src/common/tsconfig.json { "extends": "../tsconfig.json", //inherit from root tsconfig // but override these: "compilerOptions": { "rootDir": ".", "outDir": "../../build/common", "resolveJsonModule": true, "composite": true }} A subproject that can import common because of the declared reference. src/projectA/tsconfig.json { "extends": "../tsconfig.json", //inherit from root tsconfig // but override these: "compilerOptions": { "rootDir": ".", "outDir": "../../build/libA", "resolveJsonModule": true, "composite": true }, "references": [ { "path": "../common" } ]} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/55753176', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5749318/']} | jdg_85614 |
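For completeness, the import that this whole setup enables looks like the following (the file path and the fields read from package.json are just an example):

// src/banner.ts -- works once resolveJsonModule is on and package.json
// belongs to a referenced project, as described above.
import { name, version } from '../package.json';

export const banner = `${name} v${version}`;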
stackexchange | llm_judgeable_groundtruth_similarity | 36148646 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am fetching records from core data in viewDidLoad method as follows, NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSFetchRequest *request = [[NSFetchRequest alloc]initWithEntityName:@"PendingShipmentDetails"]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"PendingShipmentDetails" inManagedObjectContext:self.managedObjectContext]; [fetchRequest setEntity:entity]; NSArray *result = [self.managedObjectContext executeFetchRequest:request error:&error]; if (result.count > 0) { int i; for (i = 0; i < [result count]; i++) { NSManagedObject *pending = (NSManagedObject *)[result objectAtIndex:i]; statusArray = [[NSMutableArray alloc]initWithObjects:@"%@",[pending valueForKey:@"invoice_no"], nil]; amountArray = [[NSMutableArray alloc]initWithObjects:@"%@",[pending valueForKey:@"amount"], nil]; } Now the amountArray has got two different values. In cellForRowAtIndexPath , i am trying to set those values as follows, - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { UITableViewCell *cell = nil;cell = (UITableViewCell *)[tableView dequeueReusableCellWithIdentifier:@""];if (cell == nil){ cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:nil]; cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator;}self.productAmountLabel = [[UILabel alloc]init];self.productAmountTextLabel = [[UILabel alloc]init];if (IS_IPHONE_4_OR_LESS || IS_IPHONE_5){ self.productAmountLabel.frame = CGRectMake(75, 145, 100, 30); self.productAmountTextLabel.frame = CGRectMake(117, 145, 100, 30);}else if (IS_IPHONE_6){ self.productAmountLabel.frame = CGRectMake(75, 145, 100, 30); self.productAmountTextLabel.frame = CGRectMake(117, 145, 100, 30);}else if (IS_IPHONE_6P){ self.productAmountLabel.frame = CGRectMake(75, 145, 100, 30); self.productAmountTextLabel.frame = CGRectMake(117, 145, 100, 30);}self.productAmountLabel.text = @"Amount:";self.productAmountLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10];// self.productAmountTextLabel.text = [NSString stringWithFormat:@"%@",amountArray[indexPath.row]];// self.productAmountTextLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10];self.productAmountTextLabel.text = amountArray[indexPath.row];self.productAmountTextLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10];tableView.tableFooterView = [UIView new];cell.layer.borderWidth = 1.0;cell.layer.cornerRadius = 10;cell.layer.borderColor = [UIColor blackColor].CGColor;[cell.contentView addSubview:self.productAmountLabel];[cell.contentView addSubview:self.productAmountTextLabel];// cell.layer.shadowColor = [[UIColor blackColor] CGColor];// cell.layer.shadowOffset = CGSizeMake(0.0f,0.0f);// cell.layer.shadowOpacity = 0.7f;// cell.layer.shadowRadius = 4.0f;//[cell.layer setMasksToBounds:YES];return cell; } The problem is that, the productAmountTextLabel has got the same value in both first and second row of the UITableView . How can I rectify it? 
Following is my full code - (void)viewDidLoad{ // self.navigationController.navigationBar.titleTextAttributes = @{NSForegroundColorAttributeName: [UIColor whiteColor]}; // // [self.navigationController.navigationBar.topItem setTitle:@"Shipment Tracking System"]; NSArray *titles = [NSArray arrayWithObjects:@"Pending",@"Completed", nil]; NSArray *icons = [NSArray arrayWithObjects:[UIImage imageNamed:@"[email protected]"], [UIImage imageNamed:@"[email protected]"], nil]; segmentedControl = [[URBSegmentedControl alloc]initWithTitles:titles icons:icons]; if (IS_IPHONE_4_OR_LESS || IS_IPHONE_5) { segmentedControl.frame = CGRectMake(73, 121, 180, 30); } else if (IS_IPHONE_6) { segmentedControl.frame = CGRectMake(100, 121, 180, 30); } else if (IS_IPHONE_6P) { segmentedControl.frame = CGRectMake(126, 121, 180, 30); } APIRequest *apiRequest = [[APIRequest alloc]init]; apiRequest.pendingShipmentDetails ; segmentedControl.segmentBackgroundColor = [UIColor colorWithRed:86/255.0f green:199/255.0f blue:188/255.0f alpha:1]; [segmentedControl addTarget:self action:@selector(handleSelection:) forControlEvents:UIControlEventValueChanged]; // NSData* data = [[NSUserDefaults standardUserDefaults] objectForKey:@"data"]; // NSMutableArray* arrayList = [NSKeyedUnarchiver unarchiveObjectWithData:data]; // NSUInteger arrayLength = [arrayList count]; NSError *error = nil; NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSFetchRequest *request = [[NSFetchRequest alloc]initWithEntityName:@"PendingShipmentDetails"]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"PendingShipmentDetails" inManagedObjectContext:self.managedObjectContext]; [fetchRequest setEntity:entity]; NSArray *result = [self.managedObjectContext executeFetchRequest:request error:&error]; if (result.count > 0) { int i; amountArray = [[NSMutableArray alloc] init]; statusArray = [[NSMutableArray alloc]init]; shipmentReferenceNumberArray = [[NSMutableArray alloc]init]; invoiceNumberArray = [[NSMutableArray alloc]init]; for (i = 0; i < [result count]; i++) { //NSLog(@"%@", result); pending = (NSManagedObject *)[result objectAtIndex:i]; // NSLog(@"pro inv no %@",[pending valueForKey:@"proforma_invoice_no"]); // NSLog(@"shipment no %@",[pending valueForKey:@"shipmentno"]); // NSLog(@"pro inv date %@",[pending valueForKey:@"proforma_invoice_date"]); // NSLog(@"inv no %@",[pending valueForKey:@"invoice_no"]); // NSLog(@"inv date %@",[pending valueForKey:@"invoice_date"]); // NSLog(@"pl sh date %@",[pending valueForKey:@"planned_shipment_date"]); // NSLog(@"pt ct %@",[pending valueForKey:@"point_of_contact"]); // NSLog(@"pro %@",[pending valueForKey:@"products"]); // NSLog(@"qty %@",[pending valueForKey:@"quantity"]); // NSLog(@"rte %@",[pending valueForKey:@"rate"]); // NSLog(@"amt %@",[pending valueForKey:@"amount"]); // NSLog(@"pro imng %@", [pending valueForKey:@"product_image"]); // NSLog(@"statsus %@", [pending valueForKey:@"status"]); // amountArray = [[NSMutableArray alloc]initWithObjects:[pending valueForKey:@"amount"], nil]; [amountArray addObject:[pending valueForKey:@"amount"]]; [statusArray addObject: [pending valueForKey:@"status"]]; [shipmentReferenceNumberArray addObject:[pending valueForKey:@"shipmentno"]]; [invoiceNumberArray addObject:[pending valueForKey:@"invoice_no"]]; } // shipmentReferenceNumberArray = [[NSMutableArray alloc]initWithObjects:@"%@",[pending valueForKey:@"shipmentno"], nil]; // statusArray = [[NSMutableArray alloc]initWithObjects:@"%@",[pending valueForKey:@"status"], nil]; 
//invoiceNumberArray = [[NSMutableArray alloc]initWithObjects:@"%@",[pending valueForKey:@"invoice_no"], nil]; //amountArray = [[NSMutableArray alloc]initWithObjects:@"%@",[pending valueForKey:@"amount"], nil]; } else { NSLog(@"SORRY"); } // shipmentReferenceNumberArray = [[NSMutableArray alloc]initWithObjects:@"Cargo Onboard",@"In Production",@"Stuffing in Progress",@"In Production",@"Cargo Onboard",@"Quality Check in Progress",@"Goods received for Shipment", nil]; // invoiceNumberArray = [[NSMutableArray alloc]initWithObjects:@"Cargo Onboard",@"In Production",@"Stuffing in Progress",@"In Production",@"Cargo Onboard",@"Quality Check in Progress",@"Goods received for Shipment", nil]; // invoiceNumberArray = [[NSMutableArray alloc]initWithObjects:@"%@", arrayList[i][@"invoiceNumber"],nil]; // statusArray = [[NSMutableArray alloc]initWithObjects:@"Cargo Onboard",@"In Production",@"Stuffing in Progress",@"In Production",@"Cargo Onboard",@"Quality Check in Progress",@"Goods received for Shipment", nil]; self.completedOrdersTableView.hidden = YES; [self.view addSubview:segmentedControl]; [super viewDidLoad]; // Do any additional setup after loading the view.}-(void)viewWillAppear:(BOOL)animated{}-(UIView *)tableView:(UITableView *)tableView viewForHeaderInSection:(NSInteger)section{ UIView *v = [UIView new]; [v setBackgroundColor:[UIColor clearColor]]; return v;}- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView{ // Return the number of sections. return [shipmentReferenceNumberArray count];}- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section{ return 1;}- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath{ return [indexPath row] + 200;}//- (void)tableView:(UITableView *)tableView willDisplayCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath//{// if ([cell respondsToSelector:@selector(tintColor)])// {// CGFloat cornerRadius = 10;// cell.backgroundColor = UIColor.clearColor;// CAShapeLayer *layer = [[CAShapeLayer alloc] init];// CGMutablePathRef pathRef = CGPathCreateMutable();// CGRect bounds = CGRectInset(cell.bounds, 0, 0);//// BOOL addLine = NO;// if (indexPath.row == 0 && indexPath.row == [tableView numberOfRowsInSection:indexPath.section]-1)// {// CGPathAddRoundedRect(pathRef, nil, bounds, cornerRadius, cornerRadius);// }// else if (indexPath.row == 0)// {// CGPathMoveToPoint(pathRef, nil, CGRectGetMinX(bounds), CGRectGetMaxY(bounds));// CGPathAddArcToPoint(pathRef, nil, CGRectGetMinX(bounds), CGRectGetMinY(bounds), CGRectGetMidX(bounds), CGRectGetMinY(bounds), cornerRadius);// CGPathAddArcToPoint(pathRef, nil, CGRectGetMaxX(bounds), CGRectGetMinY(bounds), CGRectGetMaxX(bounds), CGRectGetMidY(bounds), cornerRadius);// CGPathAddLineToPoint(pathRef, nil, CGRectGetMaxX(bounds), CGRectGetMaxY(bounds));// addLine = YES;// }// else if (indexPath.row == [tableView numberOfRowsInSection:indexPath.section]-1) {// CGPathMoveToPoint(pathRef, nil, CGRectGetMinX(bounds), CGRectGetMinY(bounds));// CGPathAddArcToPoint(pathRef, nil, CGRectGetMinX(bounds), CGRectGetMaxY(bounds), CGRectGetMidX(bounds), CGRectGetMaxY(bounds), cornerRadius);// CGPathAddArcToPoint(pathRef, nil, CGRectGetMaxX(bounds), CGRectGetMaxY(bounds), CGRectGetMaxX(bounds), CGRectGetMidY(bounds), cornerRadius);// CGPathAddLineToPoint(pathRef, nil, CGRectGetMaxX(bounds), CGRectGetMinY(bounds));// }// else// {// CGPathAddRect(pathRef, nil, bounds);// addLine = YES;// }// layer.path = pathRef;// CFRelease(pathRef);// //set the 
border color// layer.strokeColor = [UIColor clearColor].CGColor;// //set the border width// layer.lineWidth = 1;// layer.fillColor = [UIColor colorWithWhite:1.f alpha:1.0f].CGColor;////// if (addLine == YES) {// CALayer *lineLayer = [[CALayer alloc] init];// CGFloat lineHeight = (1.f / [UIScreen mainScreen].scale);// lineLayer.frame = CGRectMake(CGRectGetMinX(bounds), bounds.size.height-lineHeight, bounds.size.width, lineHeight);// lineLayer.backgroundColor = tableView.separatorColor.CGColor;// [layer addSublayer:lineLayer];// }// UIView *testView;// testView = [[UIView alloc] initWithFrame:bounds];// [testView.layer insertSublayer:layer atIndex:0];// testView.backgroundColor = UIColor.clearColor;// cell.backgroundView = testView;// }////}- (void)handleSelection:(id)sender{ UISegmentedControl *segmentControl = (UISegmentedControl *) sender; NSInteger selectedSegment = segmentControl.selectedSegmentIndex; if (selectedSegment == 0) { //toggle the correct view to be visible [self.myOrdersTableView setHidden:NO]; [self.completedOrdersTableView setHidden:YES]; NSLog(@"hoi"); NSError *error = nil; NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"PendingShipmentDetails" inManagedObjectContext:self.managedObjectContext]; [fetchRequest setEntity:entity]; NSArray *result = [self.managedObjectContext executeFetchRequest:fetchRequest error:&error]; if (result.count > 0) { NSManagedObject *pending = (NSManagedObject *)[result objectAtIndex:1]; NSLog(@"statsus %@ %@", [pending valueForKey:@"status"], [pending valueForKey:@"shipmentno"]); } } else { //toggle the correct view to be visible [self.completedOrdersTableView setHidden:NO]; [self.myOrdersTableView setHidden:YES]; }}- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath{ UITableViewCell *cell = nil; cell = (UITableViewCell *)[tableView dequeueReusableCellWithIdentifier:@""]; if (cell == nil) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:nil]; cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator; } UILabel *productAmountTextLabel = [[UILabel alloc]init]; for (UIView *subView in cell.contentView.subviews) { if([subView isKindOfClass:[UILabel class]]){ [subView removeFromSuperview]; } } self.proformaInvoiceLabel = [[UILabel alloc]init]; self.shipmentStatusLabel = [[UILabel alloc]init]; self.shipmentStatusDateLabel = [[UILabel alloc]init]; self.shipmentReferenceNumberLabel = [[UILabel alloc]init]; self.shipmentReferenceNumberTextLabel = [[UILabel alloc]init]; self.invoiceDateLabel = [[UILabel alloc]init]; self.invoiceLabel = [[UILabel alloc]init]; self.estimatedDateLabel = [[UILabel alloc]init]; self.estimatedDateTextLabel = [[UILabel alloc]init]; self.estimatedTimeArrivalLabel = [[UILabel alloc]init]; self.estimatedTimeArrivalTextLabel = [[UILabel alloc]init]; self.productImage = [[UIImageView alloc]init]; self.productNameLabel = [[UILabel alloc]init]; self.productQuantityLabel = [[UILabel alloc]init]; self.productQuantityTextLabel = [[UILabel alloc]init]; self.productRateLabel = [[UILabel alloc]init]; self.productRateTextLabel = [[UILabel alloc]init]; self.productAmountLabel = [[UILabel alloc]init]; productAmountTextLabel = [[UILabel alloc]init]; self.vaighaiWorldWideLabel = [[UILabel alloc]init]; if (IS_IPHONE_4_OR_LESS || IS_IPHONE_5) { self.proformaInvoiceLabel.frame = CGRectMake(10, 3, 100, 30); self.shipmentStatusLabel.frame = CGRectMake(10, 20, 100, 30); 
self.shipmentStatusDateLabel.frame = CGRectMake(85, 20, 100, 30); self.invoiceLabel.frame = CGRectMake(10, 40, 100, 30); self.invoiceDateLabel.frame = CGRectMake(70, 40, 100, 30); self.shipmentReferenceNumberLabel.frame = CGRectMake(10, 60, 100, 30); self.shipmentReferenceNumberTextLabel.frame = CGRectMake(88, 60, 100, 30); self.estimatedDateLabel.frame = CGRectMake(10, 80, 100, 30); self.estimatedDateTextLabel.frame = CGRectMake(35, 80, 100, 30); self.estimatedTimeArrivalLabel.frame = CGRectMake(92, 80, 100, 30); self.estimatedTimeArrivalTextLabel.frame = CGRectMake(122, 80, 100, 30); self.productImage.frame = CGRectMake(10, 110, 60, 60); self.productNameLabel.frame = CGRectMake(75, 100, 100, 30); self.productQuantityLabel.frame = CGRectMake(75, 115, 100, 30); self.productQuantityTextLabel.frame = CGRectMake(120, 115, 100, 30); self.productRateLabel.frame = CGRectMake(75, 130, 100, 30); self.productRateTextLabel.frame = CGRectMake(102, 130, 100, 30); self.productAmountLabel.frame = CGRectMake(75, 145, 100, 30); productAmountTextLabel.frame = CGRectMake(117, 145, 100, 30); self.vaighaiWorldWideLabel.frame = CGRectMake(150, 170, 200, 30); } else if (IS_IPHONE_6) { self.proformaInvoiceLabel.frame = CGRectMake(10, 3, 100, 30); self.shipmentStatusLabel.frame = CGRectMake(10, 20, 100, 30); self.shipmentStatusDateLabel.frame = CGRectMake(85, 20, 100, 30); self.invoiceLabel.frame = CGRectMake(10, 40, 100, 30); self.invoiceDateLabel.frame = CGRectMake(70, 40, 100, 30); self.shipmentReferenceNumberLabel.frame = CGRectMake(10, 60, 100, 30); self.shipmentReferenceNumberTextLabel.frame = CGRectMake(88, 60, 100, 30); self.estimatedDateLabel.frame = CGRectMake(10, 80, 100, 30); self.estimatedDateTextLabel.frame = CGRectMake(35, 80, 100, 30); self.estimatedTimeArrivalLabel.frame = CGRectMake(92, 80, 100, 30); self.estimatedTimeArrivalTextLabel.frame = CGRectMake(122, 80, 100, 30); self.productImage.frame = CGRectMake(10, 110, 60, 60); self.productNameLabel.frame = CGRectMake(75, 100, 100, 30); self.productQuantityLabel.frame = CGRectMake(75, 115, 100, 30); self.productQuantityTextLabel.frame = CGRectMake(120, 115, 100, 30); self.productRateLabel.frame = CGRectMake(75, 130, 100, 30); self.productRateTextLabel.frame = CGRectMake(102, 130, 100, 30); self.productAmountLabel.frame = CGRectMake(75, 145, 100, 30); productAmountTextLabel.frame = CGRectMake(117, 145, 100, 30); self.vaighaiWorldWideLabel.frame = CGRectMake(205, 170, 200, 30); } else if (IS_IPHONE_6P) { self.proformaInvoiceLabel.frame = CGRectMake(10, 3, 100, 30); self.shipmentStatusLabel.frame = CGRectMake(10, 20, 100, 30); self.shipmentStatusDateLabel.frame = CGRectMake(85, 20, 100, 30); self.invoiceLabel.frame = CGRectMake(10, 40, 100, 30); self.invoiceDateLabel.frame = CGRectMake(70, 40, 100, 30); self.shipmentReferenceNumberLabel.frame = CGRectMake(10, 60, 100, 30); self.shipmentReferenceNumberTextLabel.frame = CGRectMake(88, 60, 100, 30); self.estimatedDateLabel.frame = CGRectMake(10, 80, 100, 30); self.estimatedDateTextLabel.frame = CGRectMake(35, 80, 100, 30); self.estimatedTimeArrivalLabel.frame = CGRectMake(92, 80, 100, 30); self.estimatedTimeArrivalTextLabel.frame = CGRectMake(122, 80, 100, 30); self.productImage.frame = CGRectMake(10, 110, 60, 60); self.productNameLabel.frame = CGRectMake(75, 100, 100, 30); self.productQuantityLabel.frame = CGRectMake(75, 115, 100, 30); self.productQuantityTextLabel.frame = CGRectMake(120, 115, 100, 30); self.productRateLabel.frame = CGRectMake(75, 130, 100, 30); self.productRateTextLabel.frame = 
CGRectMake(102, 130, 100, 30); self.productAmountLabel.frame = CGRectMake(75, 145, 100, 30); productAmountTextLabel.frame = CGRectMake(117, 145, 100, 30); self.vaighaiWorldWideLabel.frame = CGRectMake(234, 170, 200, 30); } self.shipmentReferenceNumberLabel.text = @"Shipment Ref. #:"; self.shipmentReferenceNumberLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.shipmentReferenceNumberTextLabel.text = @"shipment_ref_no"; self.shipmentReferenceNumberTextLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.productNameLabel.text = @"products"; self.productNameLabel.font = [UIFont fontWithName:@"HelveticaNeue-Bold" size:10]; self.productAmountLabel.text = @"Amount:"; self.productAmountLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; // self.productAmountTextLabel.text = @"amount"; // self.productAmountTextLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; int i; for (i = 0 ; i<[shipmentReferenceNumberArray count]; i++){ NSLog(@"as %@",amountArray[i]); } NSLog(@"huhuhu %@",[NSString stringWithFormat:@"%@",amountArray[indexPath.row]]); // self.productAmountTextLabel.text = [NSString stringWithFormat:@"%@",amountArray[indexPath.row]]; // self.productAmountTextLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; productAmountTextLabel.text = amountArray[indexPath.row]; productAmountTextLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.productQuantityLabel.text = @"Quantity:"; self.productQuantityLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.productQuantityTextLabel.text = @"quantity"; self.productQuantityTextLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.invoiceLabel.text = @"invoiceNumber"; self.invoiceLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.invoiceDateLabel.text = @"invoiceDate"; self.invoiceDateLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.proformaInvoiceLabel.text = @"shipment_ref_no"; self.proformaInvoiceLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.shipmentStatusLabel.text = @"Cargo Shipped"; self.shipmentStatusLabel.font = [UIFont fontWithName:@"HelveticaNeue-BoldItalic" size:10]; self.shipmentStatusDateLabel.text = @"shipped date"; self.shipmentStatusDateLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.estimatedDateLabel.text = @"ETD:"; self.estimatedDateLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.estimatedDateTextLabel.text = @"etd"; self.estimatedDateTextLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.estimatedTimeArrivalLabel.text = @"/ ETA:"; self.estimatedTimeArrivalLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.estimatedTimeArrivalTextLabel.text = @"eta"; self.estimatedTimeArrivalTextLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.productImage.image = [UIImage imageNamed:@"BioMass Pellets.jpg"]; self.productRateLabel.text = @"Rate:"; self.productRateLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.productRateTextLabel.text = @"USD 100"; self.productRateTextLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:10]; self.vaighaiWorldWideLabel.text = @"Shipped by Vaighai World Wide"; self.vaighaiWorldWideLabel.font = [UIFont fontWithName:@"HelveticaNeue-Bold" size:10]; tableView.tableFooterView = [UIView new]; cell.layer.borderWidth = 1.0; cell.layer.cornerRadius = 10; cell.layer.borderColor = [UIColor blackColor].CGColor; [cell.contentView addSubview:self.proformaInvoiceLabel]; [cell.contentView 
addSubview:self.shipmentReferenceNumberLabel]; [cell.contentView addSubview:self.shipmentReferenceNumberTextLabel]; [cell.contentView addSubview:self.shipmentStatusDateLabel]; [cell.contentView addSubview:self.shipmentStatusLabel]; [cell.contentView addSubview:self.invoiceDateLabel]; [cell.contentView addSubview:self.invoiceLabel]; [cell.contentView addSubview:self.estimatedDateLabel]; [cell.contentView addSubview:self.estimatedDateTextLabel]; [cell.contentView addSubview:self.estimatedTimeArrivalLabel]; [cell.contentView addSubview:self.estimatedTimeArrivalTextLabel]; [cell.contentView addSubview:self.productImage]; [cell.contentView addSubview:self.productNameLabel]; [cell.contentView addSubview:self.productQuantityLabel]; [cell.contentView addSubview:self.productQuantityTextLabel]; [cell.contentView addSubview:self.productRateLabel]; [cell.contentView addSubview:self.productRateTextLabel]; [cell.contentView addSubview:self.productAmountLabel]; [cell.contentView addSubview:productAmountTextLabel]; [cell.contentView addSubview:self.vaighaiWorldWideLabel]; // cell.layer.shadowColor = [[UIColor blackColor] CGColor]; // cell.layer.shadowOffset = CGSizeMake(0.0f,0.0f); // cell.layer.shadowOpacity = 0.7f; // cell.layer.shadowRadius = 4.0f; //[cell.layer setMasksToBounds:YES]; return cell;}- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath;{ // NSLog(@"you have selected %d",indexPath.row); [tableView deselectRowAtIndexPath:indexPath animated:YES]; [tableView setAllowsSelection:YES]; UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath]; tabBar = [self.storyboard instantiateViewControllerWithIdentifier:@"TabBarController"]; [self.navigationController pushViewController:tabBar animated:YES];}
Now provide the response and nothing else.
| The problem is that you confuse TypeScript level modules and Webpack level modules. In Webpack any file that you import goes through some build pipeline. In Typescript only .ts and .js files are relevant and if you try to import x from file.png TypeScript just does not know what to do with it, Webpack config is not used by TypeScript. In your case you need to separate the concerns, use import from for TypeScript/EcmaScript code and use require for Webpack specifics. You would need to make TypeScript ignore this special Webpack require syntax with a definition like this in a .d.ts file: declare function require(string): string; This will make TypeScript ignore the require statements and Webpack will be able to process it in the build pipeline. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36148646', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_85615 |
stackexchange | llm_judgeable_groundtruth_similarity | 3259130 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Correct me if I'm wrong, but iterating an IEnumerable<T> with foreach creates garbage no matter what T is. What I'm wondering is this: if you have a List<T> where T is Entity, and the list holds instances of a derived class such as Entity2D, will it have to create a new enumerator for each derived class, and therefore create garbage? Also, does using an interface, say IEntity, as T create garbage?
Now provide the response and nothing else.
| List<T>'s GetEnumerator method actually is quite efficient. When you loop through the elements of a List<T>, it calls GetEnumerator. This, in turn, generates an internal struct which holds a reference to the original list, an index, and a version ID to track changes in the list. However, since a struct is being used, it's really not creating "garbage" that the GC will ever deal with. As for "create a new enumerator for each derived class" - .NET generics work differently from C++ templates. In .NET, the List<T> class (and its internal Enumerator<T> struct) is defined one time, and usable for any T. When used, a generic type for that specific T is required, but this is only the type information for that newly created type, and quite small in general. This differs from C++ templates, for example, where each type used is created at compile time, and "built in" to the executable. In .NET, the executable specifies the definition for List<T>, not List<int>, List<Entity2D>, etc... | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3259130', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/78629/']} | jdg_85616
stackexchange | llm_judgeable_groundtruth_similarity | 8090235 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been trying to update a total price when someone changes the select option. Here is the select element I'm using: <select id="payment_talks_purchased" name="payment[talks_purchased]"> <option value="1">One</option> <option value="2">Three</option></select> This is the jQuery I'm using: jQuery(document).ready(function() { var price = $(".total-price span.price") var save = $(".savings") $("#payment_talks_purchased").change(function() { var selection = $("#payment_talks_purchased").val() if (selection == 2) { price.html("$12"); save.css("visibility", "visible"); } else if (selection == 1) { price.html("$5"); save.css("visibility", "hidden"); } }); }); It works perfectly. It changes the price to $12 and shows the discount message. If I change the select option back to One/1, it changes the text back to $5 and removes the discount message. I converted this to CoffeeScript but it only works when I make the first change. The price is updated. However, when I try to change it back to option 1, it doesn't update. jQuery -> price = $(".total-price span.price") save = $(".savings") select = $("#payment_talks_purchased") select.change -> selection = select.val() if selection = 2 price.html "$12" return save.css "visibility", "visible" else if selection = 1 price.html "$5" return save.css "visibility", "hidden" I've been working on this for hours and am at my wits end. Any help would be greatly appreciated.
Now provide the response and nothing else.
| Your selection = 1 inside your if statements is (still) an assignment in CoffeeScript, you need to use == for comparison. Try this:

jQuery ->
  price = $(".total-price span.price")
  save = $(".savings")
  select = $("#payment_talks_purchased")
  select.change ->
    selection = select.val()
    if selection == '2'
      price.html "$12"
      return save.css "visibility", "visible"
    else if selection == '1'
      price.html "$5"
      return save.css "visibility", "hidden"

Also, == is converted to === so you'll want to compare against strings unless you want to "cast" your value to a number using selection = +select.val() (thanks to Trevor Burnham for this casting trick) or parseInt(select.val(), 10). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8090235', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/895518/']} | jdg_85617
stackexchange | llm_judgeable_groundtruth_similarity | 327533 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was wondering today: if $\mathscr{M}$ is the collection of all sets that admit a metric generating a compact topology, then 1. Is $\mathscr{M}$ a set in ZFC? 2. If it is, what is the cardinality of $\mathscr{M}$? The reason I find $1.$ interesting is that, in general, the collection of all sets is not a set in ZFC, and every set admits a metric space structure via the discrete metric and discrete topology. However, only the finite sets are compact with this metric. So could the compactness assumption restrict this collection enough to make it a set? As for $2.$, I figured that at least under the equivalence relation of homeomorphism the quotient should be a set of cardinality at most $2^{\mathfrak{c}}$, since every compact metric space admits an embedding into $[0,1]^{\mathbb{N}}$, which has size $\mathfrak{c}$. So for each equivalence class we could pick a representative from the power set of $[0,1]^{\mathbb{N}}$. But what if we consider the relation of being isometric instead of homeomorphic? And finally, what if we drop the quotients entirely? My motivation for these questions arose when I was reading about the Gromov-Hausdorff distance for compact metric spaces. Maybe there is a trivial answer that $\mathscr{M}$ is not a set.
Now provide the response and nothing else.
| For any $x$, $\{x\}$ with the metric $d(x,x)=0$ is a compact metric space, so it’s clear that $\mathscr{M}$ cannot be a set. However, it is true that $|X|\le 2^\omega$ for every $X\in\mathscr{M}$, so there is a set $\mathscr{M}_0$ of compact metric spaces such that every compact metric space is homeomorphic (indeed isometric) to one in $\mathscr{M}_0$. Added: For each cardinal $\kappa\le 2^\omega$ let $$\mathscr{M}_\kappa=\big\{\langle\kappa,d\rangle:d\text{ is a metric on }\kappa\big\}\subseteq{}^{\kappa\times\kappa}\Bbb R\;;$$ this is clearly a set, as is $\mathscr{M}=\bigcup\{\mathscr{M}_\kappa:\kappa\le 2^\omega\text{ is a cardinal}\}$, and every compact metric space is isometric to some space in $\mathscr{M}$. (For $\kappa>1$ $\mathscr{M}_\kappa$ contains $2^\kappa$ isometric copies of each space, corresponding to permutations of $\kappa$, so you might want to choose representatives of the isometry classes.) To answer your last question in the comments, a compact metric space $X$ is separable, and every compatible metric is continuous on $X\times X$, so it has at most $2^\omega$ compatible metrics. On the other hand, it $d$ is a compatible metric, then so is $\alpha d$ for every $\alpha>0$, so $X$ has $2^\omega$ compatible metrics (unless $X$ is the one-point space!). | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/327533', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/22446/']} | jdg_85618 |
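A minimal sketch of the cardinality bound $|X|\le 2^\omega$ used above, via separability: a compact metric space $X$ has a countable dense subset $D$, every point of $X$ is the limit of some sequence in $D$, and mapping each convergent sequence in $D^{\mathbb{N}}$ to its limit is onto $X$, so $$|X|\;\le\;|D^{\mathbb{N}}|\;\le\;\aleph_0^{\aleph_0}\;=\;2^\omega.$$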
stackexchange | llm_judgeable_groundtruth_similarity | 45897 |
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I understand the concept of the efficient frontier and am able to calculate it in Python. But even when generating 50'000 random 10 asset portfolios, the single portfolios are not even close to the efficient frontier. I see that, for example, the maximum sharpe ratio portfolio has very pronounced allocation (most of the 10 asset get 0 allocation). Since this work is very critical for myself I just wanted to ask the community if you experienced similar behaviour? Is it normal that when generating random portfolios not even one lies near the efficient frontier? Please find the code below: def portfolio_annualised_performance(weights, mean_returns, cov_matrix): returns = np.sum(mean_returns*weights ) std = np.sqrt(np.dot(weights.T, np.dot(cov_matrix, weights))) return std, returnsdef random_portfolios(num_portfolios, mean_returns, cov_matrix, risk_free_rate): results = np.zeros((3,num_portfolios)) weights_record = [] for i in range(num_portfolios): weights = abs(np.random.randn(len(mean_returns))) weights /= np.sum(weights) weights_record.append(weights) portfolio_std_dev, portfolio_return = portfolio_annualised_performance(weights, mean_returns, cov_matrix) results[0,i] = portfolio_std_dev results[1,i] = portfolio_return results[2,i] = (portfolio_return - risk_free_rate) / portfolio_std_dev return results, weights_recorddef neg_sharpe_ratio(weights, mean_returns, cov_matrix, risk_free_rate): p_var, p_ret = portfolio_annualised_performance(weights, mean_returns, cov_matrix) return -(p_ret - risk_free_rate) / p_vardef max_sharpe_ratio(mean_returns, cov_matrix, risk_free_rate): num_assets = len(mean_returns) args = (mean_returns, cov_matrix, risk_free_rate) constraints = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1}) bound = (0.0,1.0) bounds = tuple(bound for asset in range(num_assets)) result = sco.minimize(neg_sharpe_ratio, num_assets*[1./num_assets,], args=args, method='SLSQP', bounds=bounds, constraints=constraints) return resultdef portfolio_volatility(weights, mean_returns, cov_matrix): return portfolio_annualised_performance(weights, mean_returns, cov_matrix)[0]def min_variance(mean_returns, cov_matrix): num_assets = len(mean_returns) args = (mean_returns, cov_matrix) constraints = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1}) bound = (0.0,1.0) bounds = tuple(bound for asset in range(num_assets)) result = sco.minimize(portfolio_volatility, num_assets*[1./num_assets,], args=args, method='SLSQP', bounds=bounds, constraints=constraints) return resultdef efficient_return(mean_returns, cov_matrix, target): num_assets = len(mean_returns) args = (mean_returns, cov_matrix) def portfolio_return(weights): return portfolio_annualised_performance(weights, mean_returns, cov_matrix)[1] constraints = ({'type': 'eq', 'fun': lambda x: portfolio_return(x) - target}, {'type': 'eq', 'fun': lambda x: np.sum(x) - 1}) bounds = tuple((0.0,1) for asset in range(num_assets)) result = sco.minimize(portfolio_volatility, num_assets*[1./num_assets,], args=args, method='SLSQP', bounds=bounds, constraints=constraints) return resultdef efficient_frontier(mean_returns, cov_matrix, returns_range): efficients = [] for ret in returns_range: efficients.append(efficient_return(mean_returns, cov_matrix, ret)) return efficientsdef display_calculated_ef_with_random(mean_returns, cov_matrix, num_portfolios, risk_free_rate): results, _ = random_portfolios(num_portfolios,mean_returns, cov_matrix, risk_free_rate) max_sharpe = max_sharpe_ratio(mean_returns, cov_matrix, risk_free_rate) sdp, rp = 
portfolio_annualised_performance(max_sharpe['x'], mean_returns, cov_matrix) max_sharpe_allocation = pd.DataFrame(max_sharpe.x,index=curr_w_terms,columns=['allocation']) max_sharpe_allocation.allocation = [round(i*100,4)for i in max_sharpe_allocation.allocation] max_sharpe_allocation = max_sharpe_allocation.T min_vol = min_variance(mean_returns, cov_matrix) sdp_min, rp_min = portfolio_annualised_performance(min_vol['x'], mean_returns, cov_matrix) min_vol_allocation = pd.DataFrame(min_vol.x,index=curr_w_terms,columns=['allocation']) min_vol_allocation.allocation = [round(i*100,4)for i in min_vol_allocation.allocation] min_vol_allocation = min_vol_allocation.T print("-"*80) print("Maximum Sharpe Ratio Portfolio Allocation\n") print("Annualised Return:", round(rp,4)) print("Annualised Volatility:", round(sdp,4)) print("\n") print(max_sharpe_allocation) print("-"*80) print("Minimum Volatility Portfolio Allocation\n") print("Annualised Return:", round(rp_min,4)) print("Annualised Volatility:", round(sdp_min,4)) print("\n") print(min_vol_allocation) plt.figure(figsize=(10, 7)) plt.scatter(results[0,:],results[1,:],c=results[2,:],cmap='YlGnBu', marker='o', s=10, alpha=0.3) plt.colorbar() plt.scatter(sdp,rp,marker='*',color='r',s=500, label='Maximum Sharpe ratio') plt.scatter(sdp_min,rp_min,marker='*',color='g',s=500, label='Minimum volatility') target = np.linspace(rp_min, 0.05, 20) efficient_portfolios = efficient_frontier(mean_returns, cov_matrix, target) plt.plot([p['fun'] for p in efficient_portfolios], target, linestyle='-.', color='black', label='efficient frontier') plt.title('Calculated Portfolio Optimization based on Efficient Frontier') plt.xlabel('annualised volatility') plt.ylabel('annualised returns') plt.legend(labelspacing=0.8) plt.ylim([-0.005,0.03]) plt.xlim([0.0,0.05])display_calculated_ef_with_random(log_ret, new_cov, 50000, 0) I haven't annualised the Covar-Matrix since I already have annual return estimates as well as covar estimates. My very question is: is this plausible or not? EDIT Since the weight generation process of my random portfolios seems to preffer too similar portfolio I changed the following function: def random_portfolios(num_portfolios, mean_returns, cov_matrix, risk_free_rate): results = np.zeros((3,num_portfolios)) weights_record = [] for i in range(num_portfolios): weights = abs(np.random.randn(len(mean_returns))) weights[weights<1] = 0 if sum(weights)==0: print("sum=0") indexes = np.unique(np.random.randint(0,10,3)).tolist() weights[indexes] = abs(np.random.randn(len(indexes))) weights /= np.sum(weights) weights_record.append(weights) portfolio_std_dev, portfolio_return = portfolio_annualised_performance(weights, mean_returns, cov_matrix) results[0,i] = portfolio_std_dev results[1,i] = portfolio_return results[2,i] = (portfolio_return - risk_free_rate) / portfolio_std_dev return results, weights_record After doing so, the Portfolios are way better distributed: So, can we then agree that the above code does what it should and I can continue from here?
Now provide the response and nothing else.
| You seem to have two distinct problems: How to generate random portfolios How optimal portfolios are structured Ad 1) A straightforward way to simulate the weights of random portfolios is to use the Dirichlet distribution $Dir(\alpha_1,\ldots,\alpha_n)$ . This is a distribution on the Simplex (i.e. on $S=\{x\in\mathbb{R}^n | \sum x_i =1, x_i\geq 0\}$ , which can give you very diversified as well as very concentrated portfolios. Setting all $\alpha_i=1$ gives you the uniform distribution on the simplex, making some $\alpha_i$ smaller will give you portfolios concentrated in those assets making $\alpha_i$ larger more diversified allocations. All pertinent facts about $Dir(\alpha_1,\ldots,\alpha_n)$ can be found in the Wikipedia article . Below are two plots of densities for different choices of an exchangeable Dirichlet density for the two-Simplex (which is a triangle in space): Ad 2) Your "optimal" portfolio will depend on your optimisation criterion and the joint asset returns. So I doubt anyone can make non-trivial general statements. But optimal portfolios by definition are extreme. Hence it is not surprising for them to be non-generic. Judging from my experience, Sharpe ratio does indeed favour very imbalanced portfolios with little diversification. | {} | {'log_upvote_score': 5, 'links': ['https://quant.stackexchange.com/questions/45897', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/30682/']} | jdg_85619 |
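A minimal sketch of the Dirichlet sampling suggested in Ad 1), assuming NumPy and reusing the mean_returns / cov_matrix arrays from the question's code (the seed and the value of alpha below are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
n_assets, n_portfolios = 10, 50000

# alpha < 1 concentrates weight in a few assets, alpha = 1 is uniform on the
# simplex, alpha > 1 favours well-diversified portfolios
alpha = np.full(n_assets, 0.5)
weights = rng.dirichlet(alpha, size=n_portfolios)   # shape (n_portfolios, n_assets)

# every row is non-negative and sums to 1, i.e. a valid long-only allocation
port_returns = weights @ mean_returns               # mean_returns: shape (n_assets,)
port_vols = np.sqrt(np.einsum('ij,jk,ik->i', weights, cov_matrix, weights))

Varying alpha as described in the answer controls how concentrated or diversified the sampled portfolios are.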
stackexchange | llm_judgeable_groundtruth_similarity | 32486198 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have to update firmware and settings on a device connected to a serial port.Since this is done by a sequence of commands, I send a command and wait until I recive an answer. Inside the answere (many lines) I search for a string that indicates if the operation is finished successfully. Serial->write(“boot”, 1000);Serial->waitForKeyword(“boot successful”);Serial->sendFile(“image.dat”);… So I’ve created a new Thread for this blocking read/write method. Inside the thread I make use of the waitForX() functions.If I call watiForKeyword() it will call readLines() until it detects the keyword or timesout bool waitForKeyword(const QString &keyword){ QString str; // read all lines while(serial->readLines(10000)) { // check each line while((str = serial->getLine()) != "") { // found! if(str.contains(keyword)) return true; } } // timeout return false;} readLines() reads everything available and separates it into lines , each line is placed inside a QStringList and to get a string I call getLine() which returns the first string in the list and deletes it. bool SerialPort::readLines(int waitTimeout){if(!waitForReadyRead(waitTimeout)){ qDebug() << "Timeout reading" << endl; return false;}QByteArray data = readAll();while (waitForReadyRead(100)) data += readAll();char* begin = data.data();char* ptr = strstr(data, "\r\n");while(ptr != NULL){ ptr+=2; buffer.append(begin, ptr - begin); emit readyReadLine(buffer); lineBuffer.append(QString(buffer)); // store line in Qstringlist buffer.clear(); begin = ptr; ptr = strstr(begin, "\r\n");}// restbuffer.append(begin, -1);return true;} The problem is if I send a file via terminal to test the app readLines() will only read a smale part of the file ( 5 Lines or so). Since these lines do not contain the keyword. the function will run once again, but this time it dosnt wait for timeout, readLines just return false immediately.Whats wrong ? Also I'm not shure if this is the right approach... Does anyone know how to send a sequenze of commands and wait for a response each time?
Now provide the response and nothing else.
| Let's use QStateMachine to make this simple. Let's recall how you wished such code would look: Serial->write("boot", 1000);Serial->waitForKeyword("boot successful");Serial->sendFile("image.dat"); Let's put it in a class that has explicit state members for each state the programmer could be in. We'll also have action generators send , expect , etc. that attach given actions to states. // https://github.com/KubaO/stackoverflown/tree/master/questions/comm-commands-32486198#include <QtWidgets>#include <private/qringbuffer_p.h>#include <type_traits>[...]class Programmer : public StatefulObject { Q_OBJECT AppPipe m_port { nullptr, QIODevice::ReadWrite, this }; State s_boot { &m_mach, "s_boot" }, s_send { &m_mach, "s_send" }; FinalState s_ok { &m_mach, "s_ok" }, s_failed { &m_mach, "s_failed" };public: Programmer(QObject * parent = 0) : StatefulObject(parent) { connectSignals(); m_mach.setInitialState(&s_boot); send (&s_boot, &m_port, "boot\n"); expect(&s_boot, &m_port, "boot successful", &s_send, 1000, &s_failed); send (&s_send, &m_port, ":HULLOTHERE\n:00000001FF\n"); expect(&s_send, &m_port, "load successful", &s_ok, 1000, &s_failed); } AppPipe & pipe() { return m_port; }}; This is fully functional, complete code for the programmer! Completely asynchronous, non-blocking, and it handles timeouts, too. It's possible to have infrastructure that generates the states on-the-fly, so that you don't have to manually create all the states. The code is much smaller and IMHO easier to comperehend if you have explicit states. Only for complex communication protocols with 50-100+ states would it make sense to get rid of explicit named states. The AppPipe is a simple intra-process bidirectional pipe that can be used as a stand-in for a real serial port: // See http://stackoverflow.com/a/32317276/1329652/// A simple point-to-point intra-process pipe. The other endpoint can live in any/// thread.class AppPipe : public QIODevice { [...]}; The StatefulObject holds a state machine, some basic signals useful for monitoring the state machine's progress, and the connectSignals method used to connect the signals with the states: class StatefulObject : public QObject { Q_OBJECT Q_PROPERTY (bool running READ isRunning NOTIFY runningChanged)protected: QStateMachine m_mach { this }; StatefulObject(QObject * parent = 0) : QObject(parent) {} void connectSignals() { connect(&m_mach, &QStateMachine::runningChanged, this, &StatefulObject::runningChanged); for (auto state : m_mach.findChildren<QAbstractState*>()) QObject::connect(state, &QState::entered, this, [this, state]{ emit stateChanged(state->objectName()); }); }public: Q_SLOT void start() { m_mach.start(); } Q_SIGNAL void runningChanged(bool); Q_SIGNAL void stateChanged(const QString &); bool isRunning() const { return m_mach.isRunning(); }}; The State and FinalState are simple named state wrappers in the style of Qt 3. They allow us to declare the state and give it a name in one go. template <class S> struct NamedState : S { NamedState(QState * parent, const char * name) : S(parent) { this->setObjectName(QLatin1String(name)); }};typedef NamedState<QState> State;typedef NamedState<QFinalState> FinalState; The action generators are quite simple, too. The meaning of an action generator is "do something when a given state is entered". The state to act on is always given as the first argument. The second and subsequent arguments are specific to the given action. Sometimes, an action might need a target state as well, e.g. if it succeeds or fails. 
void send(QAbstractState * src, QIODevice * dev, const QByteArray & data) { QObject::connect(src, &QState::entered, dev, [dev, data]{ dev->write(data); });}QTimer * delay(QState * src, int ms, QAbstractState * dst) { auto timer = new QTimer(src); timer->setSingleShot(true); timer->setInterval(ms); QObject::connect(src, &QState::entered, timer, static_cast<void (QTimer::*)()>(&QTimer::start)); QObject::connect(src, &QState::exited, timer, &QTimer::stop); src->addTransition(timer, SIGNAL(timeout()), dst); return timer;}void expect(QState * src, QIODevice * dev, const QByteArray & data, QAbstractState * dst, int timeout = 0, QAbstractState * dstTimeout = nullptr){ addTransition(src, dst, dev, SIGNAL(readyRead()), [dev, data]{ return hasLine(dev, data); }); if (timeout) delay(src, timeout, dstTimeout);} The hasLine test simply checks all lines that can be read from the device for a given needle. This works fine for this simple communications protocol. You'd need more complex machinery if your communications were more involved. It is necessary to read all the lines, even if you find your needle. That's because this test is invoked from the readyRead signal, and in that signal you must read all the data that fulfills a chosen criterion. Here, the criterion is that the data forms a full line. static bool hasLine(QIODevice * dev, const QByteArray & needle) { auto result = false; while (dev->canReadLine()) { auto line = dev->readLine(); if (line.contains(needle)) result = true; } return result;} Adding guarded transitions to states is a bit cumbersome with the default API, so we will wrap it to make it easier to use, and to keep the action generators above readable: template <typename F>class GuardedSignalTransition : public QSignalTransition { F m_guard;protected: bool eventTest(QEvent * ev) Q_DECL_OVERRIDE { return QSignalTransition::eventTest(ev) && m_guard(); }public: GuardedSignalTransition(const QObject * sender, const char * signal, F && guard) : QSignalTransition(sender, signal), m_guard(std::move(guard)) {} GuardedSignalTransition(const QObject * sender, const char * signal, const F & guard) : QSignalTransition(sender, signal), m_guard(guard) {}};template <typename F> static GuardedSignalTransition<F> *addTransition(QState * src, QAbstractState *target, const QObject * sender, const char * signal, F && guard) { auto t = new GuardedSignalTransition<typename std::decay<F>::type> (sender, signal, std::forward<F>(guard)); t->setTargetState(target); src->addTransition(t); return t;} That's about it - if you had a real device, that's all you need. Since I don't have your device, I'll create another StatefulObject to emulate the presumed device behavior: class Device : public StatefulObject { Q_OBJECT AppPipe m_dev { nullptr, QIODevice::ReadWrite, this }; State s_init { &m_mach, "s_init" }, s_booting { &m_mach, "s_booting" }, s_firmware { &m_mach, "s_firmware" }; FinalState s_loaded { &m_mach, "s_loaded" };public: Device(QObject * parent = 0) : StatefulObject(parent) { connectSignals(); m_mach.setInitialState(&s_init); expect(&s_init, &m_dev, "boot", &s_booting); delay (&s_booting, 500, &s_firmware); send (&s_firmware, &m_dev, "boot successful\n"); expect(&s_firmware, &m_dev, ":00000001FF", &s_loaded); send (&s_loaded, &m_dev, "load successful\n"); } Q_SLOT void stop() { m_mach.stop(); } AppPipe & pipe() { return m_dev; }}; Now let's make it all nicely visualized. We'll have a window with a text browser showing the contents of the communications. 
Below it will be buttons to start/stop the programmer or the device, and labels indicating the state of the emulated device and the programmer: int main(int argc, char ** argv) { using Q = QObject; QApplication app{argc, argv}; Device dev; Programmer prog; QWidget w; QGridLayout grid{&w}; QTextBrowser comms; QPushButton devStart{"Start Device"}, devStop{"Stop Device"}, progStart{"Start Programmer"}; QLabel devState, progState; grid.addWidget(&comms, 0, 0, 1, 3); grid.addWidget(&devState, 1, 0, 1, 2); grid.addWidget(&progState, 1, 2); grid.addWidget(&devStart, 2, 0); grid.addWidget(&devStop, 2, 1); grid.addWidget(&progStart, 2, 2); devStop.setDisabled(true); w.show(); We'll connect the device's and programmer's AppPipe s. We'll also visualize what the programmer is sending and receiving: dev.pipe().addOther(&prog.pipe()); prog.pipe().addOther(&dev.pipe()); Q::connect(&prog.pipe(), &AppPipe::hasOutgoing, &comms, [&](const QByteArray & data){ comms.append(formatData(">", "blue", data)); }); Q::connect(&prog.pipe(), &AppPipe::hasIncoming, &comms, [&](const QByteArray & data){ comms.append(formatData("<", "green", data)); }); Finally, we'll connect the buttons and labels: Q::connect(&devStart, &QPushButton::clicked, &dev, &Device::start); Q::connect(&devStop, &QPushButton::clicked, &dev, &Device::stop); Q::connect(&dev, &Device::runningChanged, &devStart, &QPushButton::setDisabled); Q::connect(&dev, &Device::runningChanged, &devStop, &QPushButton::setEnabled); Q::connect(&dev, &Device::stateChanged, &devState, &QLabel::setText); Q::connect(&progStart, &QPushButton::clicked, &prog, &Programmer::start); Q::connect(&prog, &Programmer::runningChanged, &progStart, &QPushButton::setDisabled); Q::connect(&prog, &Programmer::stateChanged, &progState, &QLabel::setText); return app.exec();}#include "main.moc" The Programmer and Device could live in any thread. I've left them in the main thread since there's no reason to move them out, but you could put both into a dedicated thread, or each into its own thread, or into threads shared with other objects, etc. It's completely transparent since AppPipe supports communications across the threads. This would also be the case if QSerialPort was used instead of AppPipe . All that matters is that each instance of a QIODevice is used from one thread only. Everything else happens via signal/slot connections. E.g. if you wanted the Programmer to live in a dedicated thread, you'd add the following somewhere in main : // fix QThread brokenness struct Thread : QThread { ~Thread() { quit(); wait(); } }; Thread progThread; prog.moveToThread(&progThread); progThread.start(); A little helper formats the data to make it easier to read: static QString formatData(const char * prefix, const char * color, const QByteArray & data) { auto text = QString::fromLatin1(data).toHtmlEscaped(); if (text.endsWith('\n')) text.truncate(text.size() - 1); text.replace(QLatin1Char('\n'), QString::fromLatin1("<br/>%1 ").arg(QLatin1String(prefix))); return QString::fromLatin1("<font color=\"%1\">%2 %3</font><br/>") .arg(QLatin1String(color)).arg(QLatin1String(prefix)).arg(text);} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32486198', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2611661/']} | jdg_85620 |
stackexchange | llm_judgeable_groundtruth_similarity | 436967 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The question is motivated by the fact that the $1+1$ massless bosonic free field suffers from an infrared problem as a "tempered distribution". The reason is essentially that $\int_{\mathbb{R}} \frac{dp}{\lvert p \rvert}$ is logarithmically divergent. Since this is an infrared problem, I am curious whether the issue is resolved by introducing an infrared cutoff, which is mathematically interpreted as compact support in the spacetime variables. In other words, can the $1+1$ massless bosonic free field be defined as just a distribution instead of a tempered one? Or more concretely, does the following integral converge for an arbitrary compactly supported smooth function $f(x,y)$ on $\mathbb{R}^2$? \begin{equation}\int_{\mathbb{R}}\frac{dp}{\lvert p \rvert} \int_{\mathbb{R}^2} dxdy f(x,y)e^{i(-\lvert p \rvert x+ py)}\end{equation} It seems nontrivial for me to evaluate the above integral. Could anyone please help?
Now provide the response and nothing else.
| The massless GFF is well-defined as a random tempered distribution modulo constants, i.e. an element of the dual of the space of Schwartz test functions with vanishing integral. If you want it to be defined as a "normal" random tempered distribution, then you have to arbitrarily fix the zero mode somehow. For example, you could enforce that testing against the indicator function of the centred unit ball gives zero. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/436967', 'https://mathoverflow.net', 'https://mathoverflow.net/users/56524/']} | jdg_85621 |
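A minimal sketch of why restricting to test functions with vanishing integral (the "modulo constants" formulation above) also settles the question's convergence issue: for smooth, compactly supported $f$ the inner integral is a continuous, rapidly decreasing function of $p$, Lipschitz near $p=0$, with value $\int_{\mathbb{R}^2} f\,dx\,dy$ at $p=0$. Hence $$\int_{\mathbb{R}}\frac{dp}{\lvert p\rvert}\int_{\mathbb{R}^2}dx\,dy\,f(x,y)\,e^{i(-\lvert p\rvert x+py)}$$ diverges logarithmically at $p=0$ whenever $\int f\neq 0$, while for $\int f=0$ the inner integral is $O(\lvert p\rvert)$ near the origin and the integral converges.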
stackexchange | llm_judgeable_groundtruth_similarity | 69311 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been on this site now a couple of months and I notice various symbols used for MOSFETs. What is the preferred symbol for an N Channel MOSFET and why?
Now provide the response and nothing else.
| It is likely that you saw a Circuit Lab symbol and that this caused you to ask this question. The Circuit Lab N Channel MOSFET symbol is both unusual and illogical. I'd avoid using it if at all possible. Read on ... An acceptable [tm] N Channel MOSFET symbol tends to have these characteristics. Gate symbol on one side. 3 "contacts" on the other side vertically. Top of these is drain. Bottom of these 3 is source. Middle has an arrow pointing INTO the FET and the outside end is connected to source. This indicates that there is a connected body diode and that it is non-conducting when the source is more negative than the drain (the arrow is the same as it would be for a discrete diode). Any symbol which obeys these guidelines should be "clear enough" and OK to use. I have very occasionally seen people use a symbol which does not comply with these guidelines but which is still recognisable as an N Channel MOSFET. SO. Any of these are OK, and you can see the differences for the unmarked P Channels. Many more examples here But!!! Jippie's example shows the rogue version. [Note: See below - this is in fact intended to be a P Channel symbol]. Truly horrible. I'd have to wonder if this was a P Channel symbol or an N Channel one. Even the discussion it is taken from has people expressing uncertainty re arrow direction. As shown, IF that is an N Channel then it is implying body diode polarity and NOT current flow in source. Circuit Lab is apparently the (or a) culprit. This is their symbol for an N Channel MOSFET. A nasty piece of work, alas. The arrow shows the usual drain-source conduction direction BUT as a MOSFET is a 2 quadrant device and will provide a true resistive on channel with \$V_{gs}\$ positive BUT \$V_{ds}\$ negative, the arrow is meaningless and, as it is in the opposite direction to most N Channel MOSFET symbols, it is misleading to most. (Note the proper use of this symbol in the table below). USER23909 helpfully pointed out this page - Wikipedia - MOSFET. This page includes the following symbols; these may be IPC standards, but Wikipedia is silent re their source. Wikipedia MOSFET symbols: http://en.wikipedia.org/wiki/MOSFET#Circuit_symbols | {} | {'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/69311', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/20218/']} | jdg_85622
stackexchange | llm_judgeable_groundtruth_similarity | 106102 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was wondering whether it makes sense to use the $\sum $ notation for uncountable indexing sets. For example it seems to me it would not make sense to say $$\sum_{a \in A} a \quad \text{where A is some uncountable indexing set e.g. some $A \subset \mathbb{R}$ }$$ Would it be better to avoid the above notation in general for uncountable indexing sets ? Any help in making better sense of this would be very appreciated.
Now provide the response and nothing else.
| It makes sense when it is given a precise definition that makes sense. If $I$ indexes a set of elements of a Hausdorff topological abelian group, then $\sum\limits_{i\in I}a_i$ can be used to denote the limit of the net of finite sums, with the finite subsets of $I$ directed by inclusion, when this limit exists. In particular, if each $a_i$ is a nonnegative real number, then $\sum\limits_{i\in I}a_i$ exists as an element of $\mathbb R$ if and only if $\sup\left\{\sum\limits_{i\in F}a_i:F\text{ is a finite subset of }A\right\}<\infty$, and in that case $\sum\limits_{i\in I}a_i$ equals that supremum. Finiteness of this sum implies that $\{i\in I:a_i\neq 0\}$ is countable. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/106102', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23446/']} | jdg_85623 |
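A minimal sketch of the countability claim in the last sentence, for nonnegative $a_i$ with $\sum_{i\in I}a_i=S<\infty$: $$\{i\in I: a_i\neq 0\}=\bigcup_{n\geq 1}\Bigl\{i\in I: a_i>\tfrac1n\Bigr\},\qquad \Bigl|\{i\in I: a_i>\tfrac1n\}\Bigr|\leq nS<\infty,$$ since otherwise some finite partial sum would exceed $S$; a countable union of finite sets is countable.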
stackexchange | llm_judgeable_groundtruth_similarity | 15203485 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was trying to figure out how to unit test if my the URLs of my controllers are properly secured. Just in case someone changes things around and accidentally removes security settings. My controller method looks like this: @RequestMapping("/api/v1/resource/test") @Secured("ROLE_USER")public @ResonseBody String test() { return "test";} I set up a WebTestEnvironment like so: import javax.annotation.Resource;import javax.naming.NamingException;import javax.sql.DataSource;import org.junit.Before;import org.junit.runner.RunWith;import org.slf4j.Logger;import org.slf4j.LoggerFactory;import org.springframework.beans.factory.annotation.Autowired;import org.springframework.beans.factory.annotation.Qualifier;import org.springframework.context.support.ClassPathXmlApplicationContext;import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;import org.springframework.security.core.userdetails.UserDetails;import org.springframework.security.core.userdetails.UserDetailsService;import org.springframework.security.web.FilterChainProxy;import org.springframework.test.context.ActiveProfiles;import org.springframework.test.context.ContextConfiguration;import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;import org.springframework.test.context.web.WebAppConfiguration;import org.springframework.test.web.servlet.MockMvc;import org.springframework.test.web.servlet.setup.MockMvcBuilders;import org.springframework.web.context.WebApplicationContext;@RunWith(SpringJUnit4ClassRunner.class)@WebAppConfiguration@ContextConfiguration({ "file:src/main/webapp/WEB-INF/spring/security.xml", "file:src/main/webapp/WEB-INF/spring/applicationContext.xml", "file:src/main/webapp/WEB-INF/spring/servlet-context.xml" })public class WebappTestEnvironment2 { @Resource private FilterChainProxy springSecurityFilterChain; @Autowired @Qualifier("databaseUserService") protected UserDetailsService userDetailsService; @Autowired private WebApplicationContext wac; @Autowired protected DataSource dataSource; protected MockMvc mockMvc; protected final Logger logger = LoggerFactory.getLogger(this.getClass()); protected UsernamePasswordAuthenticationToken getPrincipal(String username) { UserDetails user = this.userDetailsService.loadUserByUsername(username); UsernamePasswordAuthenticationToken authentication = new UsernamePasswordAuthenticationToken( user, user.getPassword(), user.getAuthorities()); return authentication; } @Before public void setupMockMvc() throws NamingException { // setup mock MVC this.mockMvc = MockMvcBuilders .webAppContextSetup(this.wac) .addFilters(this.springSecurityFilterChain) .build(); }} In my actual test I tried to do something like this: import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;import org.junit.Test;import org.springframework.mock.web.MockHttpSession;import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;import org.springframework.security.core.context.SecurityContextHolder;import org.springframework.security.web.context.HttpSessionSecurityContextRepository;import eu.ubicon.webapp.test.WebappTestEnvironment;public class CopyOfClaimTest extends WebappTestEnvironment { @Test public void signedIn() throws Exception { UsernamePasswordAuthenticationToken principal = this.getPrincipal("test1"); SecurityContextHolder.getContext().setAuthentication(principal); super.mockMvc .perform( get("/api/v1/resource/test")// 
.principal(principal) .session(session)) .andExpect(status().isOk()); }} I picked this up here: http://java.dzone.com/articles/spring-test-mvc-junit-testing here: http://techdive.in/solutions/how-mock-securitycontextholder-perfrom-junit-tests-spring-controller or here: How to JUnit tests a @PreAuthorize annotation and its spring EL specified by a spring MVC Controller? Yet if one looks closely this only helps when not sending actual requests to URLs, but only when testing services on a function level. In my case an "access denied" exception was thrown: org.springframework.security.access.AccessDeniedException: Access is denied at org.springframework.security.access.vote.AffirmativeBased.decide(AffirmativeBased.java:83) ~[spring-security-core-3.1.3.RELEASE.jar:3.1.3.RELEASE] at org.springframework.security.access.intercept.AbstractSecurityInterceptor.beforeInvocation(AbstractSecurityInterceptor.java:206) ~[spring-security-core-3.1.3.RELEASE.jar:3.1.3.RELEASE] at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:60) ~[spring-security-core-3.1.3.RELEASE.jar:3.1.3.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) ~[spring-aop-3.2.1.RELEASE.jar:3.2.1.RELEASE] ... The following two log messages are noteworthy basically saying that no user was authenticated indicating that setting the Principal did not work, or that it was overwritten. 14:20:34.454 [main] DEBUG o.s.s.a.i.a.MethodSecurityInterceptor - Secure object: ReflectiveMethodInvocation: public java.util.List test.TestController.test(); target is of class [test.TestController]; Attributes: [ROLE_USER]14:20:34.454 [main] DEBUG o.s.s.a.i.a.MethodSecurityInterceptor - Previously Authenticated: org.springframework.security.authentication.AnonymousAuthenticationToken@9055e4a6: Principal: anonymousUser; Credentials: [PROTECTED]; Authenticated: true; Details: org.springframework.security.web.authentication.WebAuthenticationDetails@957e: RemoteIpAddress: 127.0.0.1; SessionId: null; Granted Authorities: ROLE_ANONYMOUS
Now provide the response and nothing else.
| It turned out that the SecurityContextPersistenceFilter , which is part of the Spring Security filter chain, always resets my SecurityContext , which I set calling SecurityContextHolder.getContext().setAuthentication(principal) (or by using the .principal(principal) method). This filter sets the SecurityContext in the SecurityContextHolder with a SecurityContext from a SecurityContextRepository OVERWRITING the one I set earlier. The repository is a HttpSessionSecurityContextRepository by default. The HttpSessionSecurityContextRepository inspects the given HttpRequest and tries to access the corresponding HttpSession . If it exists, it will try to read the SecurityContext from the HttpSession . If this fails, the repository generates an empty SecurityContext . Thus, my solution is to pass a HttpSession along with the request, which holds the SecurityContext : import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;import org.junit.Test;import org.springframework.mock.web.MockHttpSession;import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;import org.springframework.security.core.context.SecurityContextHolder;import org.springframework.security.web.context.HttpSessionSecurityContextRepository;import eu.ubicon.webapp.test.WebappTestEnvironment;public class Test extends WebappTestEnvironment { public static class MockSecurityContext implements SecurityContext { private static final long serialVersionUID = -1386535243513362694L; private Authentication authentication; public MockSecurityContext(Authentication authentication) { this.authentication = authentication; } @Override public Authentication getAuthentication() { return this.authentication; } @Override public void setAuthentication(Authentication authentication) { this.authentication = authentication; } } @Test public void signedIn() throws Exception { UsernamePasswordAuthenticationToken principal = this.getPrincipal("test1"); MockHttpSession session = new MockHttpSession(); session.setAttribute( HttpSessionSecurityContextRepository.SPRING_SECURITY_CONTEXT_KEY, new MockSecurityContext(principal)); super.mockMvc .perform( get("/api/v1/resource/test") .session(session)) .andExpect(status().isOk()); }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15203485', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/991496/']} | jdg_85624 |
stackexchange | llm_judgeable_groundtruth_similarity | 578898 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $[a_{i,j}(x_1,\ldots,x_n)]$ be a skew-symmetric $n\times n$ matrix of functions $a_{i,j}\in C^\infty(\mathbb{R}^n)$ . The vector field $$v=\sum\left(\dfrac{\partial}{\partial x_i}a_{i,j}\right)\dfrac{\partial}{\partial x_j}$$ is divergence-free . Prove by induction that for every $n\geq 2$ , every $C^\infty$ divergence-free vector field on $\mathbb{R}^n$ is of this form. Consider $n=2$ . Suppose the vector field is $f_1(x_1,x_2)\dfrac{\partial}{\partial x_1}+f_2(x_1,x_2)\dfrac{\partial}{\partial x_2}$ . Since the vector field is divergence-free, we have that $\dfrac{\partial}{\partial x_1}f_1(x_1,x_2)+\dfrac{\partial}{\partial x_2}f_2(x_1,x_2)=0$ . By this result , there exists a function $g(x_1,x_2)$ whose $x_1$ -derivative equals $f_2$ and whose $x_2$ -derivative equals $-f_1$ . The result follows. But how about for $n>2$ ? To use induction, I have to relate a divergence-free vector field of $\mathbb{R}^n$ to a divergence-free vector field of $\mathbb{R}^{n-1}$ . It is possible that the following result will help: Let $v$ be a vector field on $\mathbb{R}^n$ . Show that $v$ can be written as a sum $v=f_1\dfrac{\partial}{\partial x_1}+w$ where $w$ is a divergence-free vector field.
Now provide the response and nothing else.
| Induction Hypothesis over $k$: Given a smooth divergence free vector field $v:\mathbb R^n \to \mathbb R^n$ such that $\text{div}_{k} v := \sum_{i=1}^k \frac{\partial v_i}{\partial x_i} = 0$, there exists a smooth skew-symmetric matrix $a:\mathbb R^n \to \mathbb R^{n\times n}$ such that $v_j = \sum_{i=1}^k \frac\partial{\partial x_i} a_{ij}$ for $1 \le i \le k$. The case $k=0$ is trivial. Suppose it is true for $k-1 \ge 0$. We prove it for $k$. Let$$ f_1(x_1,\dots,x_n) = \int_0^{x_1} \frac{\partial}{\partial x_k} v_k(\xi,x_2,x_1,\dots,x_n) \, d\xi .$$Then$$ \frac{\partial}{\partial x_1}(v_1+f_1) + \frac{\partial}{\partial x_2}v_2 + \dots + \frac{\partial}{\partial x_{k-1}}v_{k-1} = 0 .$$ By the inductive hypothesis, there is a skew symmetric matrix $a_{ij}$ such$$ v_1 + f_1 = \sum_{i=1}^{k-1} \frac{\partial}{\partial x_i} a_{i1} $$$$ v_j = \sum_{i=1}^{k-1} \frac{\partial}{\partial x_i} a_{ij} \quad \text{ for $2 \le j \le k-1$}$$ We define$$ f_2(x_1,\dots,x_n) = \int_0^{x_1} v_k(\xi,x_2,\dots,x_{k-1},0,x_{k+1},\dots,x_n) \, d\xi - \int_0^{x_k} f_1(x_1,\dots,x_{k-1},\xi,\dots,x_n) \, d\xi .$$Then $$ \frac\partial{\partial x_1} f_2 = v_k(x_1,\dots,x_{k-1},0,\dots,x_n) - \int_0^{x_k} \frac\partial{\partial x_k} v_k(x_1,\dots,x_{k-1},\xi,\dots,x_n) \, d\xi = -v_k $$and$$ \frac\partial{\partial x_k} f_2 = - f_1 $$Now extend the matrix $a$ by setting $a_{k1} = -a_{1k} = f_2$, and $a_{kj}=a_{jk} = 0$ for $2 \le j \le k$.Then$$ \sum_{i=1}^{k} \frac{\partial}{\partial x_i} a_{i1} = v_1 + f_1 + \frac\partial{\partial x_k} f_2 = v_1, $$$$ \sum_{i=1}^{k} \frac{\partial}{\partial x_i} a_{j1} = v_j \quad \text{ for $2 \le j \le k-1$}, $$and$$ \sum_{i=1}^{k} \frac{\partial}{\partial x_i} a_{ik} = - \frac\partial{\partial x_1} f_2 = v_k. $$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/578898', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/108892/']} | jdg_85625 |
stackexchange | llm_judgeable_groundtruth_similarity | 38064697 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Say I have some_data of shape (1, n). I have new incoming_data of shape (1, n±x), where x is some positive integer much smaller than n. I would like to squeeze or stretch incoming_data such that it is of the same length as n. How might this be done, using the SciPy stack? Here's an example of what I'm trying to accomplish.

# Stretch arr2 to arr1's shape while "filling in" interpolated value
arr1 = np.array([1, 5, 2, 3, 7, 2, 1])
arr2 = np.array([1, 5, 2, 3, 7, 1])
result
> np.array([1, 5, 2, 3, 6.x, 2.x, 1])  # of shape (arr1.shape)

As another example:

# Squeeze arr2 to arr1's shape while placing interpolated value.
arr1 = np.array([1, 5, 2, 3, 7, 2, 1])
arr2 = np.array([1, 5, 2, 3, 4, 7, 2, 1])
result
> np.array([1, 5, 2, 3.x, 7.x, 2.x, 1])  # of shape (arr1.shape)
Now provide the response and nothing else.
| You can implement this simple compression or stretching of your data using scipy.interpolate.interp1d. I'm not saying it necessarily makes sense (it makes a huge difference what kind of interpolation you're using, and you'll generally only get a reasonable result if you can correctly guess the behaviour of the underlying function), but you can do it. The idea is to interpolate your original array over its indices as x values, then perform interpolation with a sparser x mesh, while keeping its end points the same. So essentially you have to do a continuum approximation to your discrete data, and resample that at the necessary points:

import numpy as np
import scipy.interpolate as interp
import matplotlib.pyplot as plt

arr_ref = np.array([1, 5, 2, 3, 7, 1])     # shape (6,), reference
arr1 = np.array([1, 5, 2, 3, 7, 2, 1])     # shape (7,), to "compress"
arr2 = np.array([1, 5, 2, 7, 1])           # shape (5,), to "stretch"

arr1_interp = interp.interp1d(np.arange(arr1.size), arr1)
arr1_compress = arr1_interp(np.linspace(0, arr1.size - 1, arr_ref.size))

arr2_interp = interp.interp1d(np.arange(arr2.size), arr2)
arr2_stretch = arr2_interp(np.linspace(0, arr2.size - 1, arr_ref.size))

# plot the examples, assuming same x_min, x_max for all data
xmin, xmax = 0, 1
fig, (ax1, ax2) = plt.subplots(ncols=2)
ax1.plot(np.linspace(xmin, xmax, arr1.size), arr1, 'bo-',
         np.linspace(xmin, xmax, arr1_compress.size), arr1_compress, 'rs')
ax2.plot(np.linspace(xmin, xmax, arr2.size), arr2, 'bo-',
         np.linspace(xmin, xmax, arr2_stretch.size), arr2_stretch, 'rs')
ax1.set_title('"compress"')
ax2.set_title('"stretch"')

The resulting plot: In the plots, blue circles are the original data points, and red squares are the interpolated ones (these overlap at the boundaries). As you can see, what I called compressing and stretching is actually upsampling and downsampling of an underlying (linear, by default) function. This is why I said you must be very careful with interpolation: you can get very wrong results if your expectations don't match your data. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38064697', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1274908/']} | jdg_85626
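For the purely linear case shown above, a minimal alternative sketch using only NumPy's np.interp (same arrays as in the answer; no scipy required):

import numpy as np

arr_ref = np.array([1, 5, 2, 3, 7, 1])
arr2 = np.array([1, 5, 2, 7, 1], dtype=float)

# resample arr2 at arr_ref.size equally spaced index positions
new_x = np.linspace(0, arr2.size - 1, arr_ref.size)
arr2_stretch = np.interp(new_x, np.arange(arr2.size), arr2)

interp1d remains the more flexible choice if you want higher-order interpolation (e.g. kind='cubic').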
stackexchange | llm_judgeable_groundtruth_similarity | 56275 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How to prove that if a regular parametrized curve has the property that all its tangent lines passs through a fixed point then its trace is a segment of a straight line? Thanks
Now provide the response and nothing else.
| In his answer, user14242 used a vector multiplication for two vectors - while I believe it's only defined for 3-dim case. If you talk not only about the 3-dim problem then just writing explicitly the equation of the tangent line you obtain$$r(t)+\dot{r}(t)\tau(t) = a$$where $a$ is a fixed point and $\tau(t)$ denotes the value of parameter when the tangent line crosses $a$. Let's assume that $t$ is a natural parametrization of the curve. Taking the derivative w.r.t. $t$ you have$$\dot{r}(t)(1+\dot{\tau}(t))+n(t)\tau(t) = 0$$where $n(t) = \ddot{r}(t)$ is a normal vector. Also $n\cdot\dot{r} = 0$ and then$$\tau(t)\|n(t)\|^2 = 0.$$You have $\tau(t) = 0$ iff $r(t) = a$ and for all other points $n(t) = 0$ which gives us $\dot{r}(t) = const$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/56275', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/14284/']} | jdg_85627 |
stackexchange | llm_judgeable_groundtruth_similarity | 31097016 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Although I seem to get strange results occasionally, nextSibling and nextElementSibling seem to me to be the same. Can someone describe the difference?
Now provide the response and nothing else.
| 'nextSibling' returns the next Node object whereas 'nextElementSibling' returns the next Element object, so probably the real question is: what is the difference between a Node and an Element? Basically, an Element is specified by an HTML tag whereas a Node is any object in the DOM, so an Element is a Node, but a Node can also include text nodes in the form of whitespace, comments, text characters or line breaks. For more info on Elements vs Nodes see this: Difference between Node object and Element object? I.e., take the following DOM snippet

<div id="start"></div>Me<p>Hi</p>

Using nextSibling you would get:

console.log(document.getElementById('start').nextSibling);             // "\nMe\n"
console.log(document.getElementById('start').nextSibling.nextSibling); // "<p>"

Whereas using nextElementSibling you would get:

console.log(document.getElementById('start').nextElementSibling);      // "<p>"

Also, nextElementSibling is IE10+, being the newer method, whereas nextSibling has full browser support. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/31097016', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5057561/']} | jdg_85628
stackexchange | llm_judgeable_groundtruth_similarity | 2540253 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I get the current local wall clock time (in number of millis since 1 Jan 1970) in London? Since my application can run on a server in any location, I think I need to use a TimeZone of "Europe/London". I also need to take Daylight Savings into account i.e. the application should add an hour during the "summer". I would prefer to use the standard java.util libraries. Is this correct? TimeZone tz = TimeZone.getTimeZone("Europe/London") ;Calendar cal = Calendar.getInstance(tz);return cal.getTime().getTime() + tz.getDSTSavings(); Thanks
Now provide the response and nothing else.
| I'm not sure what this quantity represents, since the "number of millis since 1 Jan 1970" doesn't vary based on location or daylight saving. But, perhaps this calculation is useful to you:

TimeZone london = TimeZone.getTimeZone("Europe/London");
long now = System.currentTimeMillis();
return now + london.getOffset(now);

Most applications are better served using either UTC time or local time; this is really neither. You can get the UTC time and time in a particular zone like this:

Instant now = Instant.now(); /* UTC time */
ZonedDateTime local = now.atZone(ZoneId.of("Europe/London")); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2540253', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7412/']} | jdg_85629
stackexchange | llm_judgeable_groundtruth_similarity | 12805377 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I looked it up and it is used to send messages between 2 systems. But why? Why wouldn't you just use a database? There must be some feature that ActiveMQ has that databases do not?
Now provide the response and nothing else.
| It is used to reliably communicate between two distributed processes. Yes, you could store messages in a database to communicate between two processes, but as soon as the message is received you'd have to DELETE the message. That means a row INSERT and DELETE for each message. When you try to scale that up to communicating thousands of messages per second, databases tend to fall over. Message-oriented middleware [MOM] like ActiveMQ, on the other hand, is built to handle those use cases. It assumes that messages in a healthy system will be deleted very quickly and can do optimizations to avoid the overhead. It can also push messages to consumers instead of a consumer having to poll for the new message by doing a SQL query. This further reduces the latency involved in processing new messages being sent into the system. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/12805377', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_85630
stackexchange | llm_judgeable_groundtruth_similarity | 12651 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Both problems are well-known NP-hard problems with great similarities. In fact, I do not see the real difference between these two problems. It seems relatively easy to model TSP in the form of VRP and vice versa. So what is the essential point that makes VRP a different problem from TSP? P.S. I cannot find appropriate tags for this question. I think important problems such as TSP should be tags themselves.
Now provide the response and nothing else.
| The Vehicle Routing Problem was introduced in G. B. Dantzig and J. H. Ramser, The Truck Dispatching Problem, Management Science Vol. 6, No. 1 (Oct., 1959), pp. 80-91. The authors underline the differences with TSP in this way:

... The "truck dispatching problem" formulated in this paper may be considered as a generalization of the TSP ...

... The salesman may be required to return to the "terminal point" whenever he has contacted $m$ of the $n-1$ remaining points, $m$ being a divisor of $n-1$. For given $n$ and $m$ the problem is to find loops such that all loops have a specified point in common and total loop length is a minimum. Since the loops have one point in common, this problem may be called the "Clover Leaf Problem" ...

... The TSP may also be generalized by imposing the condition that specified deliveries $q_i$ be made at every point $P_i$ (excepting the terminal point). If the capacity of the carrier $C$ is greater than $\sum_i q_i$, the problem is formally identical with the TSP in its original form since the carrier can serve every delivery point on one trip which links all the points ...

In the simplest VRP formulation, all trucks (vehicles) have the same capacity and only one product is to be delivered to each point $P_i$. Other common constraints are time constraints (or the total length of each route), time windows, and precedence relations between points. To summarize: the main difference between TSP and VRP is that in the VRP the vehicle must return to the starting location (the depot) after some of the points have been visited, so a solution is a set of loops through a common point rather than a single tour.

As for "It seems relatively easy to model TSP in the form of VRP and vice versa": the reduction from TSP to VRP is immediate; the opposite direction, VRP $\leq_m^p$ TSP, is surely more complex (and probably requires other intermediate reductions). | {} | {'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/12651', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/8564/']} | jdg_85631
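A sketch of the "immediate" direction mentioned above (an editorial addition; it is the standard construction implicit in the Dantzig-Ramser quote): given a TSP instance on points $P_1,\dots,P_n$ with distances $d_{ij}$, build a VRP instance on the same points and distances with depot $P_1$, a single vehicle, unit demand $q_i = 1$ at every customer $P_2,\dots,P_n$, and capacity $C = n-1$. Since $C \geq \sum_i q_i$, the one vehicle can, and must, serve all customers in a single route that starts and ends at $P_1$, so the feasible VRP solutions are exactly the closed tours through all $n$ points and an optimal VRP solution is an optimal TSP tour.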
stackexchange | llm_judgeable_groundtruth_similarity | 7283 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to create a user that has no password, in the sense that you can't log in as that user with a password. I want to add keys to its authorized_keys as root. This is for my automated backup system.
Now provide the response and nothing else.
| Use of passwd -d is plain wrong, at least on Fedora and on any Linux distro based on shadow-utils. If you remove the password with passwd -d, it means anyone can log in as that user (on the console or graphically) providing no password. In order to block logins with password authentication, run passwd -l username, which locks the account, making it available to the root user only. The locking is performed by rendering the encrypted password into an invalid string (by prefixing the encrypted string with an !). Any login attempt, local or remote, will result in an "incorrect password", while public key login will still work. The account can then be unlocked with passwd -u username. If you want to completely lock an account without deleting it, edit /etc/passwd and set /sbin/nologin or /bin/false in the last field. The former will result in "This account is currently not available." for any login attempt. Please refer to the passwd(1) man page. | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/7283', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/-1/']} | jdg_85632
stackexchange | llm_judgeable_groundtruth_similarity | 39239024 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have not been able to find a definitive definition of what initial-scale=1.0 and initial-scale=2.0 mean. What do they both mean? I know initial-scale has to do with the zoom; I just don't know what its values from 1-10 mean. Source: https://www.w3.org/TR/css-device-adapt-1/#translate-meta-to-at-viewport
Now provide the response and nothing else.
| A typical mobile responsive site would contain an HTML meta tag in the head like the following:

<meta name="viewport" content="width=device-width, initial-scale=1">

Within the tag, the width property controls the size of the viewport. It can be set either to a precise number of pixels, like width=400, or to the special value device-width, which is the width of the screen in CSS pixels at a scale of 100%. device-width is the most commonly used width for responsive websites that scale across different screen sizes. When the page is first loaded, the initial-scale property controls the initial zoom level, i.e. 1 viewport pixel = 1 CSS pixel. The user-scalable, maximum-scale and minimum-scale properties control how the user is able to zoom the page in or out. You could set up an example HTML page, include the viewport tag, and change the initial-scale attribute to see the difference. Also try viewing the page on different viewport sizes and orientations.

initial-scale: the initial zoom when visiting the page. 1.0 does not zoom.

To answer what initial-scale=2.0 means, here is an example of using 2.0, quoting the Opera article linked below: "In addition to the above, you may want to specify the initial zoom factor for the viewing area. If you want to set the viewport of your page to be equal to the device's width and have it zoom in by default with a factor of 2 for example, this property will come in handy. The code for that would look as follows:

<meta name="viewport" content="width=device-width, initial-scale=2">

The image in the original article shows what this would look like; although it is not a particularly practical demonstration of the possibilities the initial scale setting has to offer, the underlying point should be clear: content is blown up with a factor of 2 upon first load."

https://dev.opera.com/articles/an-introduction-to-meta-viewport-and-viewport/#initial-scale

Some good references to check out:
https://css-tricks.com/snippets/html/responsive-meta-tag/
https://css-tricks.com/probably-use-initial-scale1/

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39239024', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1798677/']} | jdg_85633
stackexchange | llm_judgeable_groundtruth_similarity | 47273509 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a list of tuples, where each tuple contains a string and a number, in the form:

[(string_1, num_a), (string_2, num_b), ...]

The strings are non-unique, and so are the numbers; e.g. (string_1, num_m) or (string_9, num_b) are likely to exist in the list. I'm attempting to create a dictionary with the string as the key and a set of all numbers occurring with that string as the value:

dict = {string_1: {num_a, num_m}, string_2: {num_b}, ...}

I've done this somewhat successfully with the following dictionary comprehension with a nested set comprehension:

#st_id_list = [(string_1, num_a), ...]
#st_dict = {string_1: {num_a, num_m}, ...}

st_dict = {
    st[0]: set(
        st_[1]
        for st_ in st_id_list
        if st_[0] == st[0]
    )
    for st in st_id_list
}

There's only one issue: st_id_list is 18,000 items long. This snippet of code takes less than ten seconds to run for a list of 500 tuples, but over twelve minutes to run for the full 18,000 tuples. I have to think this is because I've nested a set comprehension inside a dict comprehension. Is there a way to avoid this, or a smarter way to do it?
Now provide the response and nothing else.
| You have a double loop, so you take O(N**2) time to produce your dictionary. For 500 items, 250,000 steps are taken, and for your 18k items, 324 million steps need to be done. Here is an O(N) loop instead, so 500 steps for your smaller dataset and 18,000 steps for the larger dataset:

st_dict = {}
for st, id in st_id_list:
    st_dict.setdefault(st, set()).add(id)

This uses the dict.setdefault() method to ensure that for a given key (your string values) there is at least an empty set available if the key is missing, then adds the current id value to that set. You can do the same with a collections.defaultdict() object:

from collections import defaultdict

st_dict = defaultdict(set)
for st, id in st_id_list:
    st_dict[st].add(id)

The defaultdict() uses the factory passed in to set a default value for missing keys. The disadvantage of the defaultdict approach is that the object continues to produce default values for missing keys after your loop, which can hide application bugs. Use st_dict.default_factory = None to disable the factory explicitly to prevent that. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47273509', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7513785/']} | jdg_85634
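A self-contained demonstration of the O(N) approach from the answer, added editorially, with made-up sample data whose names mirror the question's placeholders:

from collections import defaultdict

st_id_list = [("string_1", "num_a"), ("string_2", "num_b"), ("string_1", "num_m")]

st_dict = defaultdict(set)
for st, num in st_id_list:      # a single pass, even over 18,000 tuples
    st_dict[st].add(num)

st_dict.default_factory = None  # stop producing empty sets for missing keys from here on
print(dict(st_dict))            # {'string_1': {'num_a', 'num_m'}, 'string_2': {'num_b'}}  (set order may vary)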
stackexchange | llm_judgeable_groundtruth_similarity | 311493 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
We don't have too many restrictions on the net in our domain right now. Normally it's fine (we use fiber, 20 down and 35 up); with 40 users this isn't normally a big deal. Every now and again the bandwidth does spike, though, and I need to be able to monitor who's using the most bandwidth, to put in reports or to restrict that user more if needed. Do you guys have any suggestions on software or settings I can use to do this? Our domain has 2 DNS servers that the traffic is filtered through, and our main firewall is an ASA 5505. We use Spiceworks, so I've been thinking that maybe an add-on that would do this would work. Thanks for the help!
Now provide the response and nothing else.
| You might be able to use Netflow on the ASA 5505 to export traffic flow information to a Netflow collector. I think you need version 8.2 on the ASA in order to configure Netflow. Barring that, you could set up a SPAN port on your switch (if you have a switch that supports port mirroring/monitoring) to mirror/monitor the port that uplinks to your firewall/router, install PRTG on a monitoring workstation, set up a packet sniffer sensor in PRTG, and plug the monitoring station into the monitor port on the switch. This is what I do in my office. It's cheap and dirty but it does the job. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/311493', 'https://serverfault.com', 'https://serverfault.com/users/90692/']} | jdg_85635 |
stackexchange | llm_judgeable_groundtruth_similarity | 345187 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I map numbers linearly from between a and b to between c and d? That is, I want numbers between 2 and 6 to map to numbers between 10 and 20... but I need the generalized case. My brain is fried.
Now provide the response and nothing else.
| If your number X falls between A and B, and you would like Y to fall between C and D, you can apply the following linear transform:

Y = (X - A) / (B - A) * (D - C) + C

That should give you what you want, although your question is a little ambiguous, since you could also map the interval in the reverse direction. Just watch out for division by zero (which happens when A equals B) and you should be OK. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/345187', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/43005/']} | jdg_85636
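A direct translation of the formula into Python, as an editorial sketch; the function name is arbitrary and the sample values are the ones from the question:

def remap(x, a, b, c, d):
    """Linearly map x from the interval [a, b] onto [c, d]."""
    if a == b:
        raise ValueError("source interval is empty (a == b)")
    return (x - a) / (b - a) * (d - c) + c

print(remap(2, 2, 6, 10, 20))   # 10.0
print(remap(4, 2, 6, 10, 20))   # 15.0
print(remap(6, 2, 6, 10, 20))   # 20.0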
stackexchange | llm_judgeable_groundtruth_similarity | 351990 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Simple exercise taken from the book Types and Programming Languages by Benjamin C. Pierce. This is a definition of the transitive closure of a relation R. First, we define the sequence of sets of pairs:

$$R_0 = R$$
$$R_{i+1} = R_i \cup \{ (s, u) | \exists t, (s, t) \in R_i, (t, u) \in R_i \}$$

Finally, define the relation $R^+$ as the union of all the $R_i$:

$$R^+=\bigcup_i R_i$$

Show that $R^+$ is really the transitive closure of R.

Questions:
1. I would like to see the proof (I don't have enough mathematical background to make it myself).
2. Isn't the final union superfluous? Won't $R_n$ be the union of all previous sequences?
Now provide the response and nothing else.
| First of all, if this is how you define the transitive closure, then the proof is over. But you may still want to see that it is a transitive relation, and that it is contained in any other transitive relation extending $R$.

To the second question the answer is simple: no, the last union is not superfluous, because the construction is infinite. Every step contains a bit more, but not necessarily all the needed information; when $R$ is infinite, no single $R_n$ need already contain every pair that eventually appears.

So let us see that $R^+$ is really transitive, contains $R$, and is contained in any other transitive relation extending $R$.

1. Clearly $R\subseteq R^+$ because $R=R_0$.
2. If $x,y,z$ are such that $x\mathrel{R^+} y$ and $y\mathrel{R^+}z$, then (since the $R_i$ form an increasing chain) there is some $n$ such that $x\mathrel{R_n}y$ and $y\mathrel{R_n}z$; therefore in $R_{n+1}$ we add the pair $(x,z)$, and so $x\mathrel{R_{n+1}}z$ and therefore $x\mathrel{R^+}z$, as wanted.
3. If $T$ is a transitive relation containing $R$, then one can show it contains $R_n$ for all $n$, and therefore their union $R^+$. To see that $R_n\subseteq T$, note that $R_0$ is such; and if $R_n\subseteq T$ and $(x,z)\in R_{n+1}$, then there is some $y$ such that $(x,y)\in R_n$ and $(y,z)\in R_n$. Since $R_n\subseteq T$ these pairs are in $T$, and since $T$ is transitive, $(x,z)\in T$ as well. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/351990', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/70436/']} | jdg_85637
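For a finite relation the chain stabilizes after finitely many steps, so the construction can be run directly; this small sketch is an editorial addition (not part of the original answer) that mirrors the $R_{i+1}$ rule. For an infinite relation, by contrast, the union over all stages is genuinely needed, which is the point made above.

def transitive_closure(r):
    """Iterate R_{i+1} = R_i union {(s, u) : (s, t) in R_i and (t, u) in R_i} until it stabilizes."""
    closure = set(r)                        # R_0 = R
    while True:
        new_pairs = {(s, u)
                     for (s, t) in closure
                     for (t2, u) in closure
                     if t == t2}
        nxt = closure | new_pairs           # R_{i+1}
        if nxt == closure:                  # finite case: the chain stops growing
            return closure
        closure = nxt

r = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(r)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]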
stackexchange | llm_judgeable_groundtruth_similarity | 420429 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In a C# console application, is there a smart way to have console output mirrored to a text file? Currently I am just passing the same string to both Console.WriteLine and InstanceOfStreamWriter.WriteLine in a log method.
Now provide the response and nothing else.
| This may be somewhat more work, but I would go the other way round. Instantiate a TraceListener for the console and one for the log file; thereafter use Trace.Write statements in your code instead of Console.Write. It becomes easier afterwards to remove the log, or the console output, or to attach another logging mechanism.

static void Main(string[] args)
{
    Trace.Listeners.Clear();

    TextWriterTraceListener twtl = new TextWriterTraceListener(
        Path.Combine(Path.GetTempPath(), AppDomain.CurrentDomain.FriendlyName));
    twtl.Name = "TextLogger";
    twtl.TraceOutputOptions = TraceOptions.ThreadId | TraceOptions.DateTime;

    ConsoleTraceListener ctl = new ConsoleTraceListener(false);
    ctl.TraceOutputOptions = TraceOptions.DateTime;

    Trace.Listeners.Add(twtl);
    Trace.Listeners.Add(ctl);
    Trace.AutoFlush = true;

    Trace.WriteLine("The first line to be in the logfile and on the console.");
}

As far as I can recall, you can define the listeners in the application configuration, making it possible to activate or deactivate the logging without touching the build. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/420429', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/82/']} | jdg_85638
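The same design choice, one logging call fanned out to several attached sinks, exists in other ecosystems too. As an editorial aside (not part of the original answer), here is the equivalent pattern in Python's standard logging module, with a console handler and a file handler attached to one logger; the logger name and file name are arbitrary.

import logging

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

console = logging.StreamHandler()            # mirrors output to the console
logfile = logging.FileHandler("app.log")     # ...and to a text file
formatter = logging.Formatter("%(asctime)s %(threadName)s %(message)s")
for handler in (console, logfile):
    handler.setFormatter(formatter)
    logger.addHandler(handler)

logger.info("The first line to be in the logfile and on the console.")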
stackexchange | llm_judgeable_groundtruth_similarity | 17553 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to understand what is meant by patching binary files in an example from the manpage of xxd:

Patch the date in the file xxd.1
% echo "0000037: 3574 68" | xxd -r - xxd.1
% xxd -s 0x36 -l 13 -c 13 xxd.1
0000036: 3235 7468 204d 6179 2031 3939 36  25th May 1996

Does it mean editing binary files, or is it like the Wikipedia definition: "A patch is a piece of software designed to fix problems with, or update, a computer program or its supporting data."? How do xxd and patch differ, and are they similar? What does the example try to do?
Now provide the response and nothing else.
| To patch a file means to modify it, with the connotation that the modification is generally small. The usage comes from the general English usage, where a patch is a small modification (to a piece of cloth, for example). When it comes to files, a patch is not always a repair. A patch is a series of instructions that describe how to modify a file or a set of files.

In the unix world, a patch is usually the output of the diff command, describing changes in a text file. A patch in this sense describes the modifications in terms of adding, removing or modifying lines in the files. The patch utility applies these instructions to modify a file or set of files. It tries to be smart about applying multiple patches to the same file, as the first patch could cause changes that prevent the second one from being applied because the file is no longer in the expected state. Because diff and patch strongly base their operation on lines, they are not well-suited to binary files.

xxd is a generic utility for working with binary files. One of its capabilities is to make modifications in a file. For example, the command xxd -r - xxd.1 means "apply the modifications described on standard input to the file xxd.1". The example patch (i.e. the modification instructions) 0000037: 3574 68 means: starting at offset 0x37 (that's 55 in decimal), replace the next three bytes by 0x35, 0x74 and 0x68 (i.e. the three characters 5th). | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/17553', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/674/']} | jdg_85639
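As an editorial illustration (not part of the original answer), the effect of that xxd patch can be reproduced with a few lines of Python: open the file for in-place binary update, seek to the offset, and overwrite the bytes. The filename is the one from the example and must already exist.

# Equivalent of: echo "0000037: 3574 68" | xxd -r - xxd.1
with open("xxd.1", "r+b") as f:          # in-place binary update
    f.seek(0x37)                         # offset 0x37 = 55
    f.write(bytes([0x35, 0x74, 0x68]))   # the three bytes "5th"

# Equivalent of: xxd -s 0x36 -l 13 -c 13 xxd.1  (inspect the patched region)
with open("xxd.1", "rb") as f:
    f.seek(0x36)
    print(f.read(13))                    # b'25th May 1996'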
stackexchange | llm_judgeable_groundtruth_similarity | 3753317 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to partition based on a char column? After reviewing the MySQL 5.1 documentation, it appears that only integer types can be used. Is this correct? Or can I use some function to convert the char into an integer? The char field in question contains a unique identifier.
Now provide the response and nothing else.
| Partitioning in MySQL 5.1 can only deal with integer columns (Source). You can only use a few partitioning functions on non-integer columns. For example:

CREATE TABLE ti (id INT, amount DECIMAL(7,2), tr_date DATE)
    ENGINE=INNODB
    PARTITION BY HASH( MONTH(tr_date) )
    PARTITIONS 6;

You can also use key partitioning in MySQL 5.1, as long as the primary key includes all the columns in the table's partitioning function:

CREATE TABLE k1 (
    id CHAR(3) NOT NULL PRIMARY KEY,
    value INT
)
PARTITION BY KEY(id)
PARTITIONS 10;

On the other hand, in MySQL 5.5, you can use range column partitioning or list column partitioning on a wide variety of data types, including character-based columns.

List Columns Example:

CREATE TABLE expenses (
    expense_date DATE NOT NULL,
    category VARCHAR(30),
    amount DECIMAL(10,3)
);

ALTER TABLE expenses
PARTITION BY LIST COLUMNS (category)
(
    PARTITION p01 VALUES IN ('lodging', 'food'),
    PARTITION p02 VALUES IN ('flights', 'ground transportation'),
    PARTITION p03 VALUES IN ('leisure', 'customer entertainment'),
    PARTITION p04 VALUES IN ('communications'),
    PARTITION p05 VALUES IN ('fees')
);

Range Columns Example:

CREATE TABLE range_test (
    code CHAR(3),
    value INT
)
PARTITION BY RANGE COLUMNS(code) (
    PARTITION p0 VALUES LESS THAN ('MMM'),
    PARTITION p1 VALUES LESS THAN ('ZZZ')
);

Further reading:
MySQL 5.1 Reference Manual :: Key partitioning
A deep look at MySQL 5.5 partitioning enhancements
MySQL 5.5 Reference Manual :: Range columns partitioning
MySQL 5.5 Reference Manual :: List columns partitioning

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3753317', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/127776/']} | jdg_85640