source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 1780009 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I match and replace text using regular expressions in multiline mode? I know about the RegexOptions.Multiline option, but what is the best way to make a match-everything pattern include newline characters in C#? Input: <tag name="abc">thisisatext</tag> Output: [tag name="abc"]thisisatext[/tag] Aahh, I found the actual problem: '&' and ';' in the Regex match text on a single line, while they need to be escaped in the Regex to work in cases where there are newlines as well.
Now provide the response and nothing else.
| If you mean there has to be a newline character for the expression to match, then \n will do that for you. Otherwise, I think you might have misunderstood the Multiline / Singleline flags. If you want your expression to match across several lines, you actually want to use RegexOptions.Singleline . What it means is that it treats the entire input string as a single line, thus ignoring newlines. Is this what you're after...? Example Regex rx = new Regex("<tag name=\"(.*?)\">(.*?)</tag>", RegexOptions.Singleline);String output = rx.Replace("Text <tag name=\"abc\">test\nwith\nnewline</tag> more text...", "[tag name=\"$1\"]$2[/tag]"); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1780009', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/61821/']} | jdg_74941 |
stackexchange | llm_judgeable_groundtruth_similarity | 43078 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider a set $X$ and a set $Y$. One can then define a map from $X$ to $Y$ that assigns to each point in $X$ a point in $Y$. On the other hand, if $F(X,Y)$ denotes the set of all functions from $X$ to $Y$, then one might be able to define a map $F_x$ from $X$ to $F(X,Y)$ that assigns to each point of $X$ an element of $F(X,Y)$. For the latter construction, consider as an example a finite-dimensional inner product space $(V, (.|.))$, its dual $V^*$, and the (canonical?) isomorphism defined by the map $v \mapsto (.|v)$. My question is, does this sort of construction have a name? Is there some higher concept that embodies the idea of "mapping a point to a function"? EDIT: Another illustration of this that occurred to me and seemed worth mentioning is the derivative map. Let $X$ and $Y$ be Banach spaces, let $U$ be open in $X$ and let $f:U \rightarrow Y$. If $f$ is differentiable at a point $x_0$ then $\partial f(x_0)$ is a linear map from $X$ to $Y$. Now, if $f$ is differentiable at all points $x$ in $U$, the derivative map is defined as the map that sends each point $x$ of $U$ to the derivative of $f$ at $x$, that is, $$x \mapsto \partial f(x).$$
Now provide the response and nothing else.
| There are a few relevant concepts here. First, let's just talk on the level of sets. Suppose $A, B, C$ are sets. Then fixing any function $f : A \times B \to C$, we can take an element $1 \to A$ of $A$ and compose it with $f$ to get a function $B \to C$, or in other words an element of $\text{Hom}(B, C)$. The category of sets is enriched over itself, so we can talk about homs internally, and the above defines a map $$\text{Hom}(A \times B, C) \to \text{Hom}(A, \text{Hom}(B, C))$$ which, as it turns out, is a natural isomorphism. This is the defining property of a Cartesian closed category and is, as Jim Belk says, also called currying. Second, let's go to the level of finite-dimensional vector spaces (I think things get messed up in general). The correct replacement for a function $f : A \times B \to C$ is a bilinear function $f : A \times B \to C$, or equivalently a linear function $f : A \otimes B \to C$. The category $\text{FinVect}$ of finite-dimensional vector spaces is also enriched over itself, and we'd like to say that it also has the structure of a Cartesian closed category, except that we can't because the tensor product is not the Cartesian product. The appropriate generalization is that $\text{FinVect}$ is a closed monoidal category. That means that it comes equipped with a natural isomorphism $$\text{Hom}(A \otimes B, C) \cong \text{Hom}(A, B \Rightarrow C)$$ where we use $B \Rightarrow C$ to denote $\text{Hom}(B, C)$ treated as a vector space, to distinguish it from $\text{Hom}(B, C)$ treated as a set. (This distinction between internal hom and Hom may seem silly here, but in other categories it becomes important.) The above is in some sense just a restatement of the universal property of the tensor product, but in this form (at least for $R$-modules) it's usually called the tensor-hom adjunction, since it shows that $- \otimes B$ is left adjoint to $B \Rightarrow -$. (Incidentally this shows that the former preserves colimits and the latter preserves limits, a fact of great importance in homological algebra.) The situation is slightly complicated when we talk about complex inner product spaces, so let's talk about real inner product spaces. Real inner product spaces $V$ come equipped with a canonical element of $\text{Hom}(V \otimes V, \mathbb{R})$, so plugging that canonical element into the above natural isomorphism gives a canonical element of $\text{Hom}(V, V^{\ast})$. For a complex inner product space, the inner product is conjugate-linear in one variable, so you don't get a canonical morphism into the dual, you get a canonical anti-morphism into the dual (it's conjugate-linear). What you're looking at seems like a natural transformation between functors, but I don't think it actually is, because one of them (the identity functor) is covariant and the other (the dual space functor) is contravariant. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/43078', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9450/']} | jdg_74942 |
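The set-level bijection $\text{Hom}(A \times B, C) \cong \text{Hom}(A, \text{Hom}(B, C))$ in the answer above is exactly what programmers call currying. A minimal Python sketch of the two directions (the helper names `curry` and `uncurry` are my own, not from the answer):

```python
def curry(f):
    # Hom(A x B, C) -> Hom(A, Hom(B, C)): fix the first argument, return a function of the second.
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    # Hom(A, Hom(B, C)) -> Hom(A x B, C): the inverse direction.
    return lambda a, b: g(a)(b)

f = lambda a, b: a + 2 * b                  # an element of Hom(A x B, C)
g = curry(f)                                # the corresponding element of Hom(A, Hom(B, C))

assert g(1)(10) == f(1, 10) == 21           # the two directions agree pointwise
assert uncurry(curry(f))(3, 4) == f(3, 4)   # the round trip is the identity
```

The naturality of this bijection is what the Cartesian closed structure packages up; the tensor-hom adjunction is the linear-algebra analogue with $\otimes$ in place of $\times$.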
stackexchange | llm_judgeable_groundtruth_similarity | 292044 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I was wondering about the distribution of $\sqrt{p}$ mod $1$ this morning, as one does while brushing one's teeth. I remembered the paper of Elkies and McMullen (Duke Math. J. 123 (2004), no. 1, 95–139.) about $\sqrt{n}$ mod $1$, but hadn't really thought about it before. Question 1: is $\sqrt{p}$ equidistributed mod $1$, as $p$ varies over all prime numbers? Is this known? Within range of current techniques? Question 2: What about subtler statistics of $\sqrt{p}$ (and $\sqrt{n}$) mod $1$? I made three plots, giving histograms of $\sqrt{n}$ mod $1$ (for natural numbers up to 100,000) and $\sqrt{p}$ mod $1$ (for primes up to 1 million) and (for comparison) a histogram of 100,000 samples drawn uniformly at random from {0,1,...,999}. Here they are for your enjoyment. There's some wild stuff going on, I think! Question 2(a): What's up with these sharp peak/valleys at rational numbers, in the distribution of $\sqrt{n}$ mod $1$? They are especially prominent at fractions of the form $a / 2^{e}$. How tall are these peaks near rational numbers? They persist when sampling from the primes, i.e., in the distribution of $\sqrt{p}$ mod $1$ too. Question 2(b): Outside of those funky spots in 2(a), the distribution of $\sqrt{n}$ mod $1$ is far flatter than one would expect, e.g., from samples drawn uniformly at random as displayed in the bottom histogram. This must have been noticed and quantified before... what's the relevant quantitative result here? Question 2(c): The distribution of $\sqrt{p}$ mod 1 displays the same funky spots near rational numbers, but otherwise seems much closer (in noise-volume) to the random samples at the bottom. Maybe for a larger sample, the funkiness goes away... I don't know. Explanations or conjectures are welcome. Question 3: These seem like natural images to look at. If you know a reference where others have drawn such pictures or studied similar phenomena, I'd love to take a look! -------------Update after answers below----------------- It looks like the answer to Question 1 is YES. Lucia's answer below explains this, and also some of the flatness evident in the $\sqrt{n}$ distribution mod 1. Igor and Aaron discuss the "spikes" around rational numbers. This seems related to binning: if our bins have width 1/1000, we see spikes at multiples of 1/2, 1/4, 1/5, 1/10, etc., related to divisors of 1000. Here's a new picture, which might help us understand the behavior of the distribution of $\sqrt{n}$ mod 1 near rational numbers. I've intentionally drawn the bins so that their endpoints lie on rational numbers with denominator up to 60. (I call this Farey-binning). This seems to bring the "spikes" around rational numbers down to the same size (independent of denominator). I think I'll accept Lucia's answer soon, because it answers the most direct Question 1. But more insights are welcome.
Now provide the response and nothing else.
| You can certainly use Vinogradov's method to show that $\sqrt{p}$ is equidistributed $\pmod 1$. I haven't thought about more subtle properties, such as the gap spacing considered by Elkies and McMullen (or your other questions). For the equidistribution, by Weyl's criterion it is enough to show cancellation in sums of the form $$ \sum_{n\le x} \Lambda(n) e(k\sqrt{n}) $$ for non-zero integers $k$. This is exactly the kind of sum to which Vinogradov's method applies. For example, see Exercise 2 on page 348 of Iwaniec-Kowalski which invites you to show that this sum is $\ll_k x^{\frac 56+\epsilon}$. Sums like this also appeared in the IHES paper of Iwaniec, Luo and Sarnak, where they show that better bounds for this sum (like $O(x^{\frac 12+\epsilon})$) have implications for the Riemann hypothesis for $GL(2)$ $L$-functions. One should expect the exponential sum over primes above to be on the scale of $O(x^{\frac 12+\epsilon})$. This is in keeping with the plots for $\sqrt{p}$ looking like random noise. To see why $\sqrt{n}$ looks different and more flat, note that the number of $n\le N^2$ with $\{ \sqrt{n} \} \in (\alpha,\beta)$ is given by $$ \sum_{k\le N} \sum_{(k+\alpha)^2 < n <(k+\beta)^2} 1 = \sum_{k\le N} (\lfloor 2k\beta+\beta^2 \rfloor - \lfloor 2k \alpha + \alpha^2\rfloor). $$ Since the distribution of $\{ 2k\alpha+\alpha^2\}$ (and similarly for $\beta$) is extremely regular, one should expect this to be nailed down much more precisely than for primes. Finally, suppose for example that $\alpha=a/q$ is a rational number (in lowest terms) with small denominator $q$, which let us assume odd for simplicity. Write $\alpha^2 = b/q + c/q^2$ with $0<c <q$. Note that $\{ 2k\alpha+\alpha^2\}$ will run over $c/q^2$, $1/q+c/q^2$, $\ldots$, $(q-1)/q+c/q^2$, and its average value will be $(q-1)/(2q) + c/q^2$. This can be noticeably different from the average value of $\{ x\}$, which is $1/2$, explaining the "spikes" near small rational numbers. | {} | {'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/292044', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3545/']} | jdg_74943 |
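For readers who want to reproduce histograms like the ones discussed in this question, here is a small sketch, assuming numpy and sympy are installed (the bin count of 1000 matches the question's setup, which is exactly the choice that produces spikes at multiples of 1/2, 1/4, 1/5, 1/10, and so on):

```python
import numpy as np
from sympy import primerange  # generator of primes in a half-open range

# Fractional parts of sqrt(p) for primes p below 10^6.
primes = np.array(list(primerange(2, 10**6)), dtype=np.float64)
frac = np.sqrt(primes) % 1.0

# 1000 equal-width bins on [0, 1); note how this binning interacts with
# rationals whose small denominators divide 1000, as discussed above.
counts, edges = np.histogram(frac, bins=1000, range=(0.0, 1.0))
print(counts.min(), counts.max())
# To plot: import matplotlib.pyplot as plt; plt.bar(edges[:-1], counts, width=1e-3); plt.show()
```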
stackexchange | llm_judgeable_groundtruth_similarity | 54143002 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to build my Angular project and generate a ZIP file containing it to send via email, and I want the person who receives it to be able to open it on their desktop by clicking the index.html file. I changed the baseUrl to ./ or to document.location, but I'm getting the following error: "Unhandled Navigation Error". Does anyone have a hint about how to fix this?
Now provide the response and nothing else.
| You can run an Angular app by double-clicking the index.html file. Just add the code below to your app.module.ts (note: remove the baseUrl = ./ entry from your index.html file first). //in App.module.ts : //import these packages import { APP_BASE_HREF, LocationStrategy, HashLocationStrategy } from '@angular/common'; // add these packages into providers as below : @NgModule({ imports: [ ..... ], declarations: [ .... ], providers: [ .... { provide: APP_BASE_HREF, useValue: '/' }, { provide: LocationStrategy, useClass: HashLocationStrategy }, .... ] .... }) export class Appmodule{} Now execute npm run build and double-click the index.html file in the dist folder. Your app should run. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54143002', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10899511/']} | jdg_74944 |
stackexchange | llm_judgeable_groundtruth_similarity | 560906 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
In Postfix 2.10.2, I have a setup with multiple domains and several virtual aliases to assign mail addresses to local users. It works fine as long as I do not add a catchall. Before I used virtual aliases, I had a catchall defined with local_recipient_maps =luser_relay = catchall but as I need to sort out mail addresses from different domains, I had to use virtual aliases. Now postfix.org says I should do it like this, which I did: /etc/postfix/main.cf: virtual_alias_domains = example.comvirtual_alias_maps = hash:/etc/postfix/virtual /etc/postfix/virtual: [email protected] [email protected] [email protected] [email protected] catchall But if I do so, the catchall address grabs all my mail instead of just the mail to not explicitly defined addresses. Why is that and how do I change it? I did postmap virtual and also restarted Postfix. There are no errors in the log, it just logs the delivery to the catchall address. And there is a warning "do not list domain example.com in BOTH mydestination and virtual_alias_domains", but I did not do that! I don't even have a mydestination directive. (There is one in the config below, but I added that after NickW suggested so.) Here is my complete conf: alias_database = hash:/etc/aliasesalias_maps = hash:/etc/aliasesappend_dot_mydomain = nobiff = nobroken_sasl_auth_clients = yesconfig_directory = /etc/postfixhome_mailbox = Maildir/inet_interfaces = allinet_protocols = allmailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/dovecot.conf -m "${EXTENSION}"mailbox_size_limit = 0mydestination = $myhostnamemyhostname = mydomain.commynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128myorigin = /etc/mailnamereadme_directory = norecipient_delimiter = +relayhost =smtp_tls_session_cache_database = btree:${data_directory}/smtp_scachesmtp_use_tls = yessmtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destinationsmtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destinationsmtpd_sasl_auth_enable = yessmtpd_sasl_authenticated_header = yessmtpd_sasl_local_domain = $myhostnamesmtpd_sasl_path = private/dovecot-authsmtpd_sasl_security_options = noanonymoussmtpd_sasl_type = dovecotsmtpd_sender_restrictions = reject_unknown_sender_domainsmtpd_tls_auth_only = yessmtpd_tls_cert_file = /etc/dovecot/dovecot.pemsmtpd_tls_key_file = /etc/dovecot/private/dovecot.pemsmtpd_tls_mandatory_ciphers = mediumsmtpd_tls_mandatory_protocols = SSLv3, TLSv1smtpd_tls_received_header = yessmtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scachesmtpd_use_tls = yestls_random_source = dev:/dev/urandomvirtual_alias_domains = $myhostname, myotherdomain.comvirtual_alias_maps = hash:/etc/postfix/virtual
Now provide the response and nothing else.
| So, I figured it out. Some people suggest that the catch-all has to be on top of the virtual alias file, but I tried that before and it did not help (though I found that solution quite logical). What worked is: Set mydestination=localhost (that is not $myhostname ) Add the catchall on top of the virtual alias file: @domain.com catchall-account@localhost Add all other virtual aliases below: [email protected] contact@localhost The example assumes you have UNIX users named catchall-account and contact. Mail to [email protected] will be delivered to the contact user, while all other mail will be delivered to the catch-all account. Maybe this is not necessary in all cases, but in my special case I want to use an account to save mail for some addresses, while mail sent directly to that account should end up in the catch-all. After all, it looks like Postfix is not working its way through the virtual aliases from top to bottom, and additionally catch-alls have some special priority. I will be glad about further comments in case someone is actually able to explain this behaviour. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/560906', 'https://serverfault.com', 'https://serverfault.com/users/202316/']} | jdg_74945 |
stackexchange | llm_judgeable_groundtruth_similarity | 34897014 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an indexed pandas dataframe. By searching through its index, I find a row of interest. How do I find out the iloc of this row? Example: dates = pd.date_range('1/1/2000', periods=8)df = pd.DataFrame(np.random.randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D'])df A B C D2000-01-01 -0.077564 0.310565 1.112333 1.0234722000-01-02 -0.377221 -0.303613 -1.593735 1.3543572000-01-03 1.023574 -0.139773 0.736999 1.4175952000-01-04 -0.191934 0.319612 0.606402 0.3925002000-01-05 -0.281087 -0.273864 0.154266 0.3740222000-01-06 -1.953963 1.429507 1.730493 0.1099812000-01-07 0.894756 -0.315175 -0.028260 -1.2326932000-01-08 -0.032872 -0.237807 0.705088 0.978011window_stop_row = df[df.index < '2000-01-04'].iloc[-1]window_stop_rowTimestamp('2000-01-08 00:00:00', offset='D')#which is the iloc of window_stop_row?
Now provide the response and nothing else.
| You want the .name attribute and pass this to get_loc : In [131]:dates = pd.date_range('1/1/2000', periods=8)df = pd.DataFrame(np.random.randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D'])dfOut[131]: A B C D2000-01-01 0.095234 -1.000863 0.899732 -1.7421522000-01-02 -0.517544 -1.274137 1.734024 -1.3694872000-01-03 0.134112 1.964386 -0.120282 0.5736762000-01-04 -0.737499 -0.581444 0.528500 -0.7376972000-01-05 -1.777800 0.795093 0.120681 0.5240452000-01-06 -0.048432 -0.751365 -0.760417 -0.1816582000-01-07 -0.570800 0.248608 -1.428998 -0.6620142000-01-08 -0.147326 0.717392 3.138620 1.208639In [133]: window_stop_row = df[df.index < '2000-01-04'].iloc[-1]window_stop_row.nameOut[133]:Timestamp('2000-01-03 00:00:00', offset='D')In [134]:df.index.get_loc(window_stop_row.name)Out[134]:2 get_loc returns the ordinal position of the label in your index which is what you want: In [135]: df.iloc[df.index.get_loc(window_stop_row.name)]Out[135]:A 0.134112B 1.964386C -0.120282D 0.573676Name: 2000-01-03 00:00:00, dtype: float64 if you just want to search the index then so long as it is sorted then you can use searchsorted : In [142]:df.index.searchsorted('2000-01-04') - 1Out[142]:2 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34897014', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11464/']} | jdg_74946 |
stackexchange | llm_judgeable_groundtruth_similarity | 27826576 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
PyYAML can handle cyclic graphs in regular Python objects. For example: Snippet #1. class Node: passa = Node()b = Node()a.child = bb.child = a# We now have the cycle a->b->aserialized_object = yaml.dump(a)object = yaml.load(serialized_object) This code succeeds, so clearly there's some mechanism to prevent infinite recursion when loading the serialized object. How do I harness that when I write my own YAML constructor function? For example, say Node is a class with transient fields foo and bar, and a non-transient field child. Only child should make it into the YAML document. I would hope to do this: Snippet #2. def representer(dumper, node): return dumper.represent_mapping("!node", {"child": node.child})def constructor(loader, data): result = Node() mapping = loader.construct_mapping(data) result.child = mapping["child"] return resultyaml.add_representer(Node, representer)yaml.add_constructor("!node", constructor)# Retry object cycle a->b->a from earlier code snippetserialized_object = yaml.dump(a)print serialized_objectobject = yaml.load(serialized_object) But it fails: &id001 !nodechild: !node child: *id001yaml.constructor.ConstructorError: found unconstructable recursive node: in "<string>", line 1, column 1: &id001 !node I see why. My constructor function isn't built for recursion. It needs to return the child object before it finishes constructing the parent object, and that fails when the child and parent are the same object. But clearly PyYAML has graph traversals that solve this problem, because Snippet #1 works. Maybe there's one pass to construct all the objects and a second pass to populate their fields. My question is, how can my custom constructor tie into those mechanisms? An answer to that question would be ideal. But if the answer is that I can't do this with custom constructors, and there is a less desirable alternative (e.g. mixing the YAMLObject class into my Node class), then that answer would be appreciated too.
Now provide the response and nothing else.
| For complex types that might involve recursion (mapping/dict, sequence/list, objects), the constructor cannot create the object in one go. You should therefore yield the constructed object in the constructor() function, and then update any values after that¹: def constructor(loader, data): result = Node() yield result mapping = loader.construct_mapping(data) result.child = mapping["child"] That gets rid of the error. ¹ I don't think this is documented anywhere; had I not looked at py/constructor.py intensively while upgrading PyYAML to ruamel.yaml, I would not have known how to do this. A typical case of: read the source, Luke. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27826576', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8735/']} | jdg_74947 |
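Folding the answer's fix back into the question's Snippet #2 gives a complete round trip. This is a sketch written for current PyYAML, where yaml.load needs an explicit Loader argument:

```python
import yaml

class Node:
    pass

def representer(dumper, node):
    return dumper.represent_mapping("!node", {"child": node.child})

def constructor(loader, data):
    result = Node()
    yield result                       # hand the (still empty) object to PyYAML first...
    mapping = loader.construct_mapping(data)
    result.child = mapping["child"]    # ...then fill in its fields

yaml.add_representer(Node, representer)
yaml.add_constructor("!node", constructor)

a, b = Node(), Node()
a.child, b.child = b, a                # the cycle a -> b -> a

serialized = yaml.dump(a)
loaded = yaml.load(serialized, Loader=yaml.Loader)
assert loaded.child.child is loaded    # the cycle survives the round trip
```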
stackexchange | llm_judgeable_groundtruth_similarity | 15547210 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to change the default value of a column using a SQL statement in SQL Server 2008. I've found in many places how to set the default value when you create a table/add a column but not how to set it/modify it once the column already exists. This is what I can use to set it on adding: ALTER TABLE MyTable ADD MyColumn int NOT NULL DEFAULT 0 And that works, but if I try to modify it later: ALTER TABLE MyTable ALTER COLUMN MyColumn int NOT NULL DEFAULT -1ALTER TABLE MyTable ALTER COLUMN MyColumn int NOT NULL SET DEFAULT -1 None of those are syntactically correct, and I don't find the syntax to do what I pretend anywhere. The only option I come with is to add a new column, copy values from previous column, then remove previous column and new column to make the change, but that doesn't seem right to me. Is there a way of doing what I want in just one simple sentence? Thanks.
Now provide the response and nothing else.
| When you add a column with a default value, what happens is that there's a default constraint being created: create table _temp ( x int default 1) sp_help _temp result: constraint_type constraint_name DEFAULT on column x DF___temp__x__5A3B20F9 So the default clause in a column definition is just a shortcut for creating a constraint; the default value is not an intrinsic property of a column. If you want to modify the default for an already existing column, you have to drop the constraint first: alter table _temp drop constraint DF___temp__x__5A3B20F9 And then create a new default constraint: alter table _temp add constraint DF_temp_x default 2 for x | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15547210', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/948479/']} | jdg_74948 |
stackexchange | llm_judgeable_groundtruth_similarity | 6075998 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to make my UI more responsive in my WPF app. I spawn a new thread using Task.Factory.StartNew( () => RecurseAndDeleteStart() ); In that method RecurseAndDeleteStart() I want to update a label in the UI with the file that is being deleted. How does one accomplish this?
Now provide the response and nothing else.
| Since it's WPF, you can use the Dispatcher and call Dispatcher.BeginInvoke to marshal the call back to the UI thread to update the label. Alternatively, you can pass a TaskScheduler into your method, and use it to update the label as follows: // This line needs to happen on the UI thread...TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();Task.Factory.StartNew( () => RecurseAndDeleteStart(uiScheduler) ); Then, inside your method, when you want to update a label, you could do: Task.Factory.StartNew( () => { theLabel.Text = "Foo"; }, CancellationToken.None, TaskCreationOptions.None, uiScheduler); This will push the call back onto the UI thread's synchronization context. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6075998', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/148675/']} | jdg_74949 |
stackexchange | llm_judgeable_groundtruth_similarity | 8505664 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I couldn't find anything useful on this problem, and I have never used authentication in a web application before, so anything with/without cookies is new to me. We've got an existing (non-web) program using SQL Server accounts for authentication. Now I want to do the same with my MVC3 project. Every user has the same rights inside the database; the (non-web) program handles the rights itself. After logging in successfully (with an SQL account), the login name is mapped to an entity in the database from which one can get the user's ID and handle the rights management. If the user "admin" is logged in, he gets the ID 4711 and will see some more input fields etc., while other users have fewer rights. Is this possible? Besides that: what's the best approach to store the authentication, and how? Thanks in advance.
Now provide the response and nothing else.
| numpy.random.Generator.choice offers a replace argument to sample without replacement: from numpy.random import default_rngrng = default_rng()numbers = rng.choice(20, size=10, replace=False) If you're on a pre-1.17 NumPy, without the Generator API, you can use random.sample() from the standard library: print(random.sample(range(20), 10)) You can also use numpy.random.shuffle() and slicing, but this will be less efficient: a = numpy.arange(20)numpy.random.shuffle(a)print a[:10] There's also a replace argument in the legacy numpy.random.choice function, but this argument was implemented inefficiently and then left inefficient due to random number stream stability guarantees, so its use isn't recommended. (It basically does the shuffle-and-slice thing internally.) Some timings: import timeitprint("when output size/k is large, np.random.default_rng().choice() is far far quicker, even when including time taken to create np.random.default_rng()")print(1, timeit.timeit("rng.choice(a=10**5, size=10**4, replace=False, shuffle=False)", setup="import numpy as np; rng=np.random.default_rng()", number=10**3)) #0.16003450006246567print(2, timeit.timeit("np.random.default_rng().choice(a=10**5, size=10**4, replace=False, shuffle=False)", setup="import numpy as np", number=10**3)) #0.19915290002245456print(3, timeit.timeit("random.sample( population=range(10**5), k=10**4)", setup="import random", number=10**3)) #5.115292700007558print("when output size/k is very small, random.sample() is quicker")print(4, timeit.timeit("rng.choice(a=10**5, size=10**1, replace=False, shuffle=False)", setup="import numpy as np; rng=np.random.default_rng()", number=10**3)) #0.01609779999125749print(5, timeit.timeit("random.sample( population=range(10**5), k=10**1)", setup="import random", number=10**3)) #0.008387799956835806 So numpy.random.Generator.choice is what you usually want to go for, except for very small output size/ k . | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8505664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1010351/']} | jdg_74950 |
stackexchange | llm_judgeable_groundtruth_similarity | 29337086 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the 3 tables listed below: Table_A: order_number | header_id 123 | 80001 Table_B header_id | line_id | quantity 80001 | 10001 | 1 80001 | 10002 | 3 80001 | 10003 | 5 Table_C header_id | line_id | hold_price_id | released_flag 80001 | 10001 | 2001 | Y 80001 | 10002 | 2002 | Y 80001 | 10003 | 2003 | N I wrote a query as shown below: SELECT A.order_number, A.header_id, B.line_id, B.quantity, C.hold_price_id, C.released_flagFROM Table_A a, Table_B b, Table_C cWHERE a.header_id = b.header_idAND c.line_id = b.line_idAND a.order_number = '123'; My desired output is as shown below: order_number | header_id | line_id | quantity | hold_price_id | released_flag 123 | 80001 | 10001 | 1 | 2001 | Y 123 | 80001 | 10002 | 3 | 2002 | Y 123 | 80001 | 10003 | 5 | 2003 | N However, the query shows me the result below: order_number | header_id | line_id | quantity | hold_price_id | released_flag 123 | 80001 | 10001 | 1 | 2001 | Y 123 | 80001 | 10001 | 3 | 2002 | Y 123 | 80001 | 10001 | 5 | 2003 | N 123 | 80001 | 10002 | 1 | 2001 | Y 123 | 80001 | 10002 | 3 | 2002 | Y 123 | 80001 | 10002 | 5 | 2003 | N 123 | 80001 | 10003 | 1 | 2001 | Y 123 | 80001 | 10003 | 3 | 2002 | Y 123 | 80001 | 10003 | 5 | 2003 | N Is there something wrong with my query? Please advise. Thank you!
Now provide the response and nothing else.
| You need to learn to use proper explicit join syntax. A simple rule: never use commas in the from clause. Always use explicit join s: SELECT A.order_number, A.header_id, B.line_id, B.quantity, C.hold_price_id, C.released_flagFROM Table_A a JOIN Table_B b ON a.header_id = b.header_id JOIN Table_C c ON c.header_id = b.header_id AND c.line_id = b.line_idWHERE a.order_number = '123'; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29337086', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1664831/']} | jdg_74951 |
stackexchange | llm_judgeable_groundtruth_similarity | 3023503 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can check for a next() method, but is that enough? Is there an idiomatic way?
Now provide the response and nothing else.
| In Python 2.6 or better, the designed-in idiom for such behavioral checks is a "membership check" with the abstract base class in the collections module of the standard library: >>> import collections>>> isinstance('ciao', collections.Iterable)True>>> isinstance(23, collections.Iterable)False>>> isinstance(xrange(23), collections.Iterable)True Indeed, this kind of check is the prime design reason for the new abstract base classes (a second important one is to provide "mixin functionality" in some cases, which is why they're ABCs rather than just interfaces -- but that doesn't apply to collections.Iterable, which exists strictly to allow such checks with isinstance or issubclass). ABCs allow classes that don't actually inherit from them to be "registered" as subclasses anyway, so that such classes can be "subclasses" of the ABC for such checks; and they can internally perform all needed checks for special methods (__iter__ in this case), so you don't have to. If you're stuck with older releases of Python, "it's better to ask forgiveness than permission": def isiterable(x): try: iter(x) except TypeError: return False else: return True but that's not as fast and concise as the new approach. Note that for this special case you'll often want to special-case strings (which are iterable but most application contexts want to treat as "scalars" anyway). Whatever approach you're using to check iterableness, if you need such special-casing just prepend a check for isinstance(x, basestring) -- for example: def reallyiterable(x): return not isinstance(x, basestring) and isinstance(x, collections.Iterable) Edit: as pointed out in a comment, the question focuses on whether an object is an iter*ator* rather than whether it's iter*able* (all iterators are iterable, but not vice versa -- not all iterables are iterators). isinstance(x, collections.Iterator) is the perfectly analogous way to check for that condition specifically. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3023503', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/157519/']} | jdg_74952 |
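A small update for Python 3 readers: the ABCs now live in collections.abc (the old aliases in the collections module were removed in Python 3.10), so the same membership checks read:

```python
from collections.abc import Iterable, Iterator

isinstance('ciao', Iterable)         # True  -- strings are iterable
isinstance('ciao', Iterator)         # False -- iterable, but not an iterator
isinstance(iter('ciao'), Iterator)   # True  -- iter() returns an iterator
isinstance(23, Iterable)             # False
```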
stackexchange | llm_judgeable_groundtruth_similarity | 28868367 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In C++, say that: uint64_t i;uint64_t j; then i * j will yield a uint64_t whose value is the lower part of the multiplication between i and j, i.e., (i * j) mod 2^64. Now, what if I wanted the higher part of the multiplication? I know that there exists an assembly instruction to do something like that when using 32-bit integers, but I am not familiar at all with assembly, so I was hoping for help. What is the most efficient way to write something like: uint64_t k = mulhi(i, j);
Now provide the response and nothing else.
| If you're using gcc and the version you have supports 128 bit numbers (try using __uint128_t) then performing the 128 multiply and extracting the upper 64 bits is likely to be the most efficient way of getting the result. If your compiler doesn't support 128 bit numbers, then Yakk's answer is correct. However, it may be too brief for general consumption. In particular, an actual implementation has to be careful of overflowing 64 bit integers. The simple and portable solution he proposes is to break each of a and b into 2 32-bit numbers and then multiply those 32 bit numbers using the 64 bit multiply operation. If we write: uint64_t a_lo = (uint32_t)a;uint64_t a_hi = a >> 32;uint64_t b_lo = (uint32_t)b;uint64_t b_hi = b >> 32; then it is obvious that: a = (a_hi << 32) + a_lo;b = (b_hi << 32) + b_lo; and: a * b = ((a_hi << 32) + a_lo) * ((b_hi << 32) + b_lo) = ((a_hi * b_hi) << 64) + ((a_hi * b_lo) << 32) + ((b_hi * a_lo) << 32) + a_lo * b_lo provided the calculation is performed using 128 bit (or greater) arithmetic. But this problem requires that we perform all the calculations using 64 bit arithmetic, so we have to worry about overflow. Since a_hi, a_lo, b_hi, and b_lo are all unsigned 32 bit numbers, their product will fit in an unsigned 64 bit number without overflow. However, the intermediate results of the above calculation will not. The following code will implement mulhi(a, b) when the mathematics must be performed modulo 2^64: uint64_t a_lo = (uint32_t)a;uint64_t a_hi = a >> 32;uint64_t b_lo = (uint32_t)b;uint64_t b_hi = b >> 32;uint64_t a_x_b_hi = a_hi * b_hi;uint64_t a_x_b_mid = a_hi * b_lo;uint64_t b_x_a_mid = b_hi * a_lo;uint64_t a_x_b_lo = a_lo * b_lo;uint64_t carry_bit = ((uint64_t)(uint32_t)a_x_b_mid + (uint64_t)(uint32_t)b_x_a_mid + (a_x_b_lo >> 32) ) >> 32;uint64_t multhi = a_x_b_hi + (a_x_b_mid >> 32) + (b_x_a_mid >> 32) + carry_bit;return multhi; As Yakk points out, if you don't mind being off by +1 in the upper 64 bits, you can omit the calculation of the carry bit. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28868367', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/774236/']} | jdg_74953 |
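The decomposition is easy to cross-check against exact big-integer arithmetic. Here is a throwaway Python harness of my own (a transcription of the answer's C++, not part of it) comparing the 64-bit-only algorithm with (a*b) >> 64:

```python
import random

def mulhi(a, b):
    # Transcription of the answer's algorithm, using masks to mimic uint64_t/uint32_t.
    a_lo, a_hi = a & 0xFFFFFFFF, a >> 32
    b_lo, b_hi = b & 0xFFFFFFFF, b >> 32
    a_x_b_hi = a_hi * b_hi
    a_x_b_mid = a_hi * b_lo
    b_x_a_mid = b_hi * a_lo
    a_x_b_lo = a_lo * b_lo
    carry_bit = ((a_x_b_mid & 0xFFFFFFFF) + (b_x_a_mid & 0xFFFFFFFF) + (a_x_b_lo >> 32)) >> 32
    return (a_x_b_hi + (a_x_b_mid >> 32) + (b_x_a_mid >> 32) + carry_bit) & 0xFFFFFFFFFFFFFFFF

for _ in range(100_000):
    a, b = random.getrandbits(64), random.getrandbits(64)
    assert mulhi(a, b) == (a * b) >> 64   # exact upper 64 bits of the 128-bit product
```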
stackexchange | llm_judgeable_groundtruth_similarity | 10687306 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose this markup: <table class="table table-bordered" align="center"> No matter how many cells I have, the table is always 100% wide. Why's that?
Now provide the response and nothing else.
| All tables within the bootstrap stretch according to their container, which you can easily do by placing your table inside a .span* grid element of your choice. If you wish to remove this property you can create your own table class and simply add it to the table you want to expand with the content within: .table-nonfluid { width: auto !important;} You can add this class inside your own stylesheet and simply add it to the container of your table like so: <table class="table table-nonfluid"> ... </table> This way your change won't affect the bootstrap stylesheet itself (you might want to have a fluid table somewhere else in your document). | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/10687306', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/725400/']} | jdg_74954 |
stackexchange | llm_judgeable_groundtruth_similarity | 1481 |
Below is a question asked on the forum ai.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For example, I would like to train my neural network to recognize the type of actions (e.g. in commercial movies or some real-life videos), so I can "ask" my network in which video or movie (and at what frames) somebody was driving a car, kissing, eating, was scared or was talking over the phone. What are the current successful approaches to that type of problem?
Now provide the response and nothing else.
| There are several approaches as to how this can be achieved. One recent study from 2015 about Action Recognition in Realistic Sports Videos (PDF) uses an action recognition framework based on the three main steps of feature extraction (shape, pose or contextual information), dictionary learning to represent a video, and classification (BoW framework). A few examples of methods: Spatio-Temporal Structures of Human Poses, a joint shape-motion approach, Multi-Task Sparse Learning (MTSL), Hierarchical Space-Time Segments, Spatio-Temporal Deformable Part Models (SDPM). Here are the results from training on 10 action classes from the UCF sports dataset: Source: Action Recognition in Realistic Sports Videos. | {} | {'log_upvote_score': 4, 'links': ['https://ai.stackexchange.com/questions/1481', 'https://ai.stackexchange.com', 'https://ai.stackexchange.com/users/8/']} | jdg_74955 |
stackexchange | llm_judgeable_groundtruth_similarity | 51246 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The Y combinator is a concept in functional programming, borrowed from the lambda calculus. It is a fixed-point combinator. A fixed-point combinator $G$ is a higher-order function (a functional, in mathematical language) that, given a function $f$, returns a fixed point of $f$. In mathematical language, $$f(G(f)) = G(f)$$ This can be considered the defining equation of a fixed-point combinator. Note that $f$ might be a function whose range and domain are themselves function spaces -- in fact this is the most common use of a fixed-point combinator: you can define a function $\alpha$ by specifying that it is the fixed point of another function $f$, and then compute $\alpha$ as $G(f)$. As mathematicians we're used to functions having names, e.g. $f:x\mapsto x^2$ is the function called $f$ that maps $x$ to $x^2$. But there's no reason why you can't have anonymous functions. Since the lambda calculus deals with these a lot, there's a special notation for them: $$\lambda x.x^2$$ is the function that takes $x$ to $x^2$, so that e.g. $(\lambda x.x^2)(2) = 4$. When there's no ambiguity, we can write function application by concatenation: $(\lambda x.x^2) 2 = 4$, and if we defined $f = \lambda x.x^2$ then $f\; 2 = 4$. Okay, now we get to the meat of the question. The Y combinator is a higher-order function (functional) defined as $$Y = \lambda f. (\lambda x. f (x\;x)) \; (\lambda x. f (x\;x))$$ I can follow through the algebra and see that this is indeed a fixed-point combinator: $$\begin{align}Y\; g& = (\lambda f. (\lambda x. f (x\;x)) \; (\lambda x. f (x\;x))) \; g \\& = (\lambda x. g (x\;x)) \; (\lambda x. g (x\;x)) \\& = (\lambda y. g (y\;y)) \; (\lambda x. g (x\;x)) \\& = g \; ((\lambda x. g (x\;x)) \; (\lambda x. g (x\;x))) \\& = g\; (Y\; g)\end{align}$$ but I have no intuition as to why it works, or how someone might have come up with it. More to the point, I don't see how it can be practically used to compute functions as fixed points of functionals. Anyone got a good 'intuitive' explanation?
Now provide the response and nothing else.
| The $Y$ combinator is a function that takes a function $f$ and returns something applied to itself (specifically $\lambda x.f(xx)$). So if we want to make $Y(f)$ a fixed point of $f$, $Y(f)$ has to be equal to $f(Y(f))$. So we want some $a$ such that $aa = f(aa)$. Now, $a$ has access to itself (it is applied to itself). Because of this, we can directly create such an $a$.$$aa=f(aa)$$$$a=\lambda a.f(aa)$$$$a=\lambda x.f(xx)$$$$Y=\lambda f.aa=\lambda f.(\lambda x.f(xx))(\lambda x.f(xx))$$Essentially, by applying $a$ to itself, you are giving $a$ a reference to itself, allowing it to use itself in a recursive manner. However, $a$ is only an intermediate value - it is not the recursive function itself, as it still needs a reference to itself to do the recursion. The $Y$ combinator completely eliminates this need by finding the fixed point - giving a function its final, recursive form. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/51246', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/4873/']} | jdg_74956 |
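Seeing it run can help the intuition. In a strictly evaluated language such as Python, the plain $Y$ diverges (the self-application $x\,x$ is evaluated eagerly), so one uses the eta-expanded variant, usually called the $Z$ combinator; a sketch with the factorial functional:

```python
# Z combinator: Y adapted to strict evaluation by eta-expanding x x into lambda v: x(x)(v).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# F is the functional whose fixed point is factorial: given any approximation
# of factorial, F returns a better one.
F = lambda fact: lambda n: 1 if n == 0 else n * fact(n - 1)

factorial = Z(F)      # the fixed point: factorial == F(factorial) pointwise
print(factorial(5))   # 120
```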
stackexchange | llm_judgeable_groundtruth_similarity | 856178 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Prove that $[0,1]$ is not isometric to $[0,2]$. Suppose there is an isometry $f:[0,1]\to[0,2]$. Since f is continuous and surjective, the only values for $f(0)$ and $f(1)$ are $f(0)=0$ and $f(1)=2$, or $f(0)=2$ and $f(1)=0$. In either case, $|f(1)-f(0)|=2$. This contradicts $f$ being distance-preserving. Is this right?
Now provide the response and nothing else.
| This part of your argument, "Since f is continuous and surjective, the only values for f(0) and f(1) are f(0)=0 and f(1)=2, or f(0)=2 and f(1)=0", is not a rigorous argument. You should argue it more precisely (and I doubt it is easy to do...). But actually you don't need to state such a strong result. Suppose $[0,1]$ and $[0,2]$ are endowed with the absolute value metric $|\cdot |$, and suppose $f:[0,1] \to [0,2]$ is a surjective isometry. Then there are $x,y \in [0,1]$ such that $f(x) = 0$ and $f(y) = 2$, and since $f$ is an isometry we have $$|x-y| = |f(x)-f(y)| = |0-2|=2.$$ This is absurd, since $$\sup_{v,w \in [0,1]} |w-v| = |0-1| = 1<2.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/856178', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/153582/']} | jdg_74957 |
stackexchange | llm_judgeable_groundtruth_similarity | 8011267 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Below are 2 rectangles. Given the coordinates of the rectangle vertices, (x1, y1)...(x8, y8), how can the area of the overlapping region (white in the figure below) be calculated? Note that: the coordinates of the points can be anything; the rectangles may or may not overlap; assume the area is 0 when the rectangles don't overlap, or when they overlap in a point or a line; if one rectangle is inside the other, then calculate the area of the smaller rectangle.
Now provide the response and nothing else.
| Since you stated that the rectangles may not be aligned, possible answers may be nothing, a point, a line segment, or a polygon with 3-8 sides. The usual way to do this 2d boolean operation is to choose a counterclockwise ordering of the edges, and then evaluate edge segments between critical points (intersections or corners). At each intersection you switch from an edge segment of the first rectangle to an edge of the second, or vice versa. You always pick the segment to the left of the previous segment. There are LOTS of details, but the basic algorithm is to find all intersections and order them on their edges with an appropriate data structure. Choose an intersection (if there is one) and choose a line segment leading away from that intersection. Find the segment of the other rectangle to the left of the chosen starting segment. In the picture, we choose the green segment on intersection a (in the direction indicated by the arrow) as the reference segment. The segment of the other rectangle that is to the right is the segment from a to b. Use that as the next reference segment, and choose a green segment to the left of it. That's the segment from b to c. Find segment cd the same way. The next segment is from d to the corner, so the corner is in the vertex list for the intersection as well. From the corner we get back to a. To choose the left side each time, you use the determinant of the coordinates of the direction vectors for the edges that meet. If the determinant for the ordered pair of directed edges is positive, you're going the right way. Now that you have the vertices of the intersection polygon, you can use the surveyor's formula to get the area. Some of the details that I'm leaving to you are: What if a corner is coincident to an edge or vertex of the other rectangle? What if there are no intersections? (One rectangle is inside the other, or they are disjoint; you can use point-in-polygon checks to figure this out. See the Wikipedia article on polygons.) What if the intersection is a single point or a segment? | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8011267', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/808991/']} | jdg_74958 |
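If hand-rolling the clipping is not a requirement, a computational-geometry library does all of the case analysis in the answer above for you. A sketch using shapely (assuming it is available; the coordinates are made-up examples), which handles rotated rectangles, disjoint pairs, containment, and the degenerate point/segment intersections, all of which report area 0:

```python
from shapely.geometry import Polygon

# Vertices listed in order around each (possibly rotated) rectangle.
r1 = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
r2 = Polygon([(2, 1), (6, 2), (5.5, 4), (1.5, 3)])  # a tilted rectangle

overlap = r1.intersection(r2)   # empty, a point, a segment, or a polygon
print(overlap.area)             # 0 for the degenerate and disjoint cases
```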
stackexchange | llm_judgeable_groundtruth_similarity | 3511782 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my web application I will have three types of accounts. User: for using the web application for free Customer: for advertising and getting a Company Logo Admin: for editing and deleting stuff Should all three of these be in separate tables or in one with a column named "account_type" where I can mark it as User, Customer or Admin? What are the pros and cons of both? Is there a best practice for this? Thanks
Now provide the response and nothing else.
| In general, a person can be a user, a customer and an admin -- so I would start with a Person table with columns IsCustomer, IsUser, IsAdmin. Later (for fast search) you may decide to add separate tables Admin, Customers, Users with FKs to the Person table. EDIT: A typical case may be: 5 million users, 1000 customers, 10 admins. In general, having separate tables for customers and admins should speed up any admin/customer-related query. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3511782', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/224922/']} | jdg_74959 |
stackexchange | llm_judgeable_groundtruth_similarity | 47168477 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My std::variant contains streamable types: std::variant<int, std::string> a, b;a = 1;b = "hi";std::cout << a << b << std::endl; Compiling with g++ 7 with -std=c++1z produces compile-time errors. An excerpt: test.cpp: In function 'int main(int, char**)':test.cpp:10:13: error: no match for 'operator<<' (operand types are 'std::ostream {aka std::basic_ostream<char>}' and 'std::variant<int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >') std::cout << a << b << std::endl; ~~~~~~~~~~^~~~ Seemingly a std::variant<int, std::string> cannot be streamed. How can I directly stream the variant to an output stream? Expected output: 1hi
Now provide the response and nothing else.
| This streams nested variants too. template<class T>struct streamer { const T& val;};template<class T> streamer(T) -> streamer<T>;template<class T>std::ostream& operator<<(std::ostream& os, streamer<T> s) { os << s.val; return os;}template<class... Ts>std::ostream& operator<<(std::ostream& os, streamer<std::variant<Ts...>> sv) { std::visit([&os](const auto& v) { os << streamer{v}; }, sv.val); return os;} Use as: std::cout << streamer{a} << streamer{b} << '\n'; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47168477', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/863857/']} | jdg_74960 |
stackexchange | llm_judgeable_groundtruth_similarity | 4796605 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been looking into game development recently, and my first programming language is Java. After playing many stunning games developed in C++ I wondered why Java is not heavily used in the games industry. I looked at jMonkeyEngine 3 and a few other game engine environments, but the screenshots I saw are far less stunning. Titles like Need for Speed: Hot Pursuit from EA and Assassin's Creed from Ubisoft convey such realism. Why can't Java produce such industry-strength games? Is it the artwork? Java and C# have automatic garbage collection and C++ doesn't; the programmer has to pay closer attention to memory usage to avoid dangling pointers and so on. Thanks guys.
Now provide the response and nothing else.
| Java and C# has automatic garbage collection and c++ doesn't. The programmer has to pay closer attention to memory usage to avoud dangling pointers and so on. You yourself have answered your question. In games programming garbage collection is not an advantage. Even if the performance of Java is more or less in par with C++ for most tasks, and the JIT can even do very aggressive optimizations that beat those that can be done during the static analysis; the garbage collection can make the framerates drop at the worst moment. Also, for graphics intensive tasks Java is not very appropriate, as there are many things that are considered unsafe by the runtime, and thus are forbidden (like casting pointers to reinterpret data). Another important matter is the already settled know how in the industry. The inertia of C++ in the games industry is huge. All game developers today know C and C++. Having a large developer pool to hire from lessens one of the management hazards that is key people leaving the company. But despite that, there have been some successful games with some parts written in Java, like Vampire: The Masquerade - Redemption . A more recent game like Minecraft is written completely in Java; but it does not feature state of the art graphics, as the emphasis is put more into the dynamic nature of the virtual environment. And many other games and engines have a runtime that supports a managed (safe automatic memory allocation and collection) scripting language built on top of a high performance rendering and networking platform (written in C/C++), like Unreal Engine for example. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4796605', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/127160/']} | jdg_74961 |
stackexchange | llm_judgeable_groundtruth_similarity | 52917081 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code in my app.js which runs on server start (npm start): mongo.mongoConnect('connection_string', 'users').then((x) => { console.log('Database connection successful'); app.listen(5000, () => console.log('Server started on port 5000')); }) .catch(err => { console.error(err.stack); process.exit(1); });process.on('SIGINT', mongo.mongoDisconnect).on('SIGTERM', mongo.mongoDisconnect); As you can see, I hook SIGINT and SIGTERM to close my connections upon process.exit. I've been reading a lot about how to deal with database connections in Mongo and know that I should just open the connection once and share it across my application. Does that mean that even after a save() call when saving data to Mongo following a POST request, I should not be closing my connection? If I close it, how am I going to open it again, since the connection happens on app start? I'm asking since in PHP my practice was to always open and close my connection after querying the MySQL database. Likewise, does it mean that the connection will close only on server shutdown, in other words that it will always be present, since I do not want to shut down my node.js backend instance?
Now provide the response and nothing else.
| It is formally correct to open a connection, run a query, and then close the connection, but it is not a good practice, because opening a connection is an "expensive" operation and connections can be reused, which is much more efficient. The main restriction on an open connection is that it can only be used by 1 thread at a time. (More accurately, once a request is sent on a connection, no other requests can be sent on that connection until the response to that request is received.) If your application is short-lived or inherently single-threaded, as may be the case when running as a "serverless" function, it may be acceptable to open and close a connection on each request. While in theory it might be acceptable to open a single connection at the start of the program, keep a global reference to that connection, and reuse it, in practice there are common ways in which a connection becomes unusable that you would have to account for, and handling all the possibilities requires complex code. It gets even more complicated when, as is possible with MongoDB replica sets, you are actually connecting to more than one server and want to retry a command on a second server if the first one fails to respond. That is why the standard and "best" practice is to use a "connection pool" to manage your database connections. A pool opens a set of network connections to the database, verifies and maintains their health, and dynamically assigns virtual database connections to actual network connections as needed. The pool is implemented in a library that will have received a lot of real-world testing and is extremely likely to be better than anything you would write yourself. Connection pools have configuration options that would let you set any behavior you want, including opening a new connection for each request and closing it when done, but offer a wide range of performance-enhancing capabilities, such as reusing connections and avoiding the overhead of creating them for each request. This is why for MongoDB, the standard Node.js client already implements a connection pool. I do not know what mongo.mongoConnect in your code refers to; you said in the title that you are using mongoose but it uses connect, not mongoConnect, to connect to the database. In general you should either be using the standard client or a JavaScript ORM library like mongoose. Either of them will take care of the connection management issues for you. Refer to the documentation for the client/library you use for exactly the right way to use it. In general, you would initialize some kind of client object and store it globally before entering your main application handler. Then you would use this object to handle your database operations, and the object will transparently manage the underlying connections via the pool implementation. In this kind of setup, you would only close the connection when exiting the program, and usually the library takes care of that for you automatically, so you really never need to close the connection. Thus, when using a MongoDB connection pool in NodeJS, you write your program basically the same way you would as if you just opened a connection at startup and then kept reusing it. The libraries take care of isolating you from all the problems that can arise from actually doing this. You do not need to, and in fact should not, close the connection after a database operation when using standard MongoDB NodeJS libraries.
Note that other connection pool implementations exist that do require you to close the connection. What you do with those pools is reserve (or "check out" or "open") a connection, use it, perhaps for multiple operations, and the release (or "check in" or "close") the connection when you are done. This is probably what you were doing in PHP. It is important to read and follow the documentation for the connection pool library you are using to make sure you are using it correctly. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52917081', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2951933/']} | jdg_74962 |
stackexchange | llm_judgeable_groundtruth_similarity | 48746567 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have my simple decorator my_decorator which decorates my_func.

def my_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper._decorator_name_ = 'my_decorator'
    return wrapper

@my_decorator
def my_func(x):
    print('hello %s' % x)

my_func._decorator_name_
'my_decorator'

Till here things work, but I can't see the actual signature of the function.

my_func?
Signature: my_func(*args, **kwargs)
Docstring: <no docstring>
File: ~/<ipython-input-2-e4c91999ef66>
Type: function

If I decorate my decorator with python's decorator.decorator, I can see the signature of my function but I can't have the new property which I have defined.

import decorator

@decorator.decorator
def my_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper._decorator_name_ = 'my_decorator'
    return wrapper

@my_decorator
def my_func(x):
    print('hello %s' % x)

my_func?
Signature: my_func(x)
Docstring: <no docstring>
File: ~/<ipython-input-8-934f46134434>
Type: function

my_func._decorator_name_
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-10-7e3ef4ebfc8b> in <module>()
----> 1 my_func._decorator_name_

AttributeError: 'function' object has no attribute '_decorator_name_'

How can I have both in python2.7?
Now provide the response and nothing else.
| For Python 3, using functools.wraps in the standard library:

from functools import wraps

def my_decorator(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper._decorator_name_ = 'my_decorator'
    return wrapper

@my_decorator
def my_func(x):
    print('hello %s' % x)

print(my_func._decorator_name_)
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48746567', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6082378/']} | jdg_74963 |
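A quick way to verify that the wrapped signature survives (a hedged check, assuming Python 3.4+, where functools.wraps sets __wrapped__ and inspect.signature follows it):

import inspect

print(inspect.signature(my_func))  # -> (x), not (*args, **kwargs)
print(my_func._decorator_name_)    # -> my_decorator, so the custom attribute is kept too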
stackexchange | llm_judgeable_groundtruth_similarity | 4601652 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What's the difference between these two statements? I use them in my Rails app and to me it looks like they do the same thing.

array_a = Array.new
array_b = []
Now provide the response and nothing else.
| Those two statements are functionally identical. Array.new however can take arguments and a block:

Array.new                      # => []
Array.new(2)                   # => [nil, nil]
Array.new(5, "A")              # => ["A", "A", "A", "A", "A"]

a = Array.new(2, Hash.new)     # both slots share one Hash instance
a[0]['cat'] = 'feline'
a                              # => [{"cat"=>"feline"}, {"cat"=>"feline"}]
a[1]['cat'] = 'Felix'
a                              # => [{"cat"=>"Felix"}, {"cat"=>"Felix"}]

a = Array.new(2) { Hash.new }  # multiple instances
a[0]['cat'] = 'feline'
a                              # => [{"cat"=>"feline"}, {}]

squares = Array.new(5) { |i| i * i }
squares                        # => [0, 1, 4, 9, 16]

copy = Array.new(squares)      # initialized by copying
squares[5] = 25
squares                        # => [0, 1, 4, 9, 16, 25]
copy                           # => [0, 1, 4, 9, 16]

Note: the above examples are taken from Programming Ruby 1.9.
 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4601652', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/527727/']} | jdg_74964 |
stackexchange | llm_judgeable_groundtruth_similarity | 38447815 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I think this is fundamental to many people who know how to deal with pickle. However, I still can't get it right after trying for a few hours. I have the following code. In the first file:

import pandas as pd

names = ["John", "Mary", "Mary", "Suzanne", "John", "Suzanne"]
scores = [80, 90, 90, 92, 95, 100]
records = pd.DataFrame({"name": names, "score": scores})
means = records.groupby('name').mean()

def name_score_function(record):
    if record in names:
        return(means.loc[record, 'score'])

import dill as pickle
with open('name_model.pkl', 'wb') as file:
    pickle.dump(means, file)

In the second file I would like to load what I have in the first file and make the score of a person (i.e. John, Mary, Suzanne) callable via a function name_model(record):

import dill as pickle
B = pickle.load('name_model.pkl')

def name_model(record):
    if record in names:
        return(means.loc[record, 'score'])

Here it shows the error:

File "names.py", line 21, in <module>
  B = pickle.load('name_model.pkl')
File "/opt/conda/lib/python2.7/site-packages/dill/dill.py", line 197, in load
  pik = Unpickler(file)
File "/opt/conda/lib/python2.7/site-packages/dill/dill.py", line 356, in __init__
  StockUnpickler.__init__(self, *args, **kwds)
File "/opt/conda/lib/python2.7/pickle.py", line 847, in __init__
  self.readline = file.readline
AttributeError: 'str' object has no attribute 'readline'

I know the error comes from my lack of understanding of pickle. I would humbly accept your opinions to improve this code. Thank you!!

UPDATE

The more specific thing I would like to achieve: I would like to be able to use the function that I write in the first file and dump it, and then read it in the second file and be able to use this function to query the mean score of any person in the records. Here is what I have:

import pandas as pd

names = ["John", "Mary", "Mary", "Suzanne", "John", "Suzanne"]
scores = [80, 90, 90, 92, 95, 100]
records = pd.DataFrame({"name": names, "score": scores})
means = records.groupby('name').mean()

def name_score_function(record):
    if record in names:
        return(means.loc[record, 'score'])

B = name_score_function(record)

import dill as pickle
with open('name_model.pkl', 'wb') as file:
    pickle.dump(B, file)

with open('name_model.pkl', 'rb') as file:
    B = pickle.load(f)

def name_model(record):
    return B(record)

print(name_model("John"))

As I execute this code, I have this error:

File "test.py", line 13, in <module>
  B = name_score_function(record)
NameError: name 'record' is not defined

I highly appreciate your assistance and patience.
Now provide the response and nothing else.
| Thank you. It looks like the following can solve the problem.

import pandas as pd

names = ["John", "Mary", "Mary", "Suzanne", "John", "Suzanne"]
scores = [80, 90, 90, 92, 95, 100]
records = pd.DataFrame({"name": names, "score": scores})
means = records.groupby('name').mean()

import dill as pickle

# dump the DataFrame of means rather than the function
with open('name_model.pkl', 'wb') as file:
    pickle.dump(means, file)

# B is the un-pickled means DataFrame, loaded back from disk
with open('name_model.pkl', 'rb') as file:
    B = pickle.load(file)

def name_score_function(record):
    if record in names:
        return(means.loc[record, 'score'])

print(name_score_function("John"))
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38447815', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6057372/']} | jdg_74965 |
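To mirror the asker's two-file setup, the second file can rebuild the lookup from the pickled DataFrame alone (a hedged sketch; checking means.index replaces the original "record in names" test, since the names list does not exist in the second file):

import dill as pickle

with open('name_model.pkl', 'rb') as f:
    means = pickle.load(f)  # the grouped-means DataFrame written by the first file

def name_model(record):
    if record in means.index:  # the names become the index after groupby('name')
        return means.loc[record, 'score']

print(name_model("John"))  # -> 87.5, the mean of 80 and 95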
stackexchange | llm_judgeable_groundtruth_similarity | 34644622 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to allow access from everywhere. I have tried using app middleware:

app.use(function (req, res, next) {
    res.setHeader("Access-Control-Allow-Origin", "*");
    res.setHeader('Access-Control-Allow-Methods', '*');
    res.setHeader("Access-Control-Allow-Headers", "*");
    next();
});

I have tried using it in the route:

app.post('/login', function(req, res){
    var login = req.body;
    var sess = req.session;
    if (!login.email && !login.pwd){
        return res.status(401);
    }
    res.header("Access-Control-Allow-Origin", '*');
    res.header("Access-Control-Allow-Headers", '*');
    // .... more code here

Both do not work. I keep getting an error: "Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource." Further down the server, we use similar code for another route, which works:

app.post('/questar', function(req, res){
    // allow xhr post from retireup domains
    var cors = {
        origin: "https://www.website.com"
    };
    res.header("Access-Control-Allow-Origin", cors.origin);
    res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
    res.type('application/json');

I cannot tell the difference between the code, but only one set works. Any ideas why? This seems like an issue that shouldn't be so complicated. Thanks
Now provide the response and nothing else.
| MDN has a very short explanation on how a server should respond to a Preflight Request. You handle CORS preflight requests by handling the HTTP OPTIONS method (just like you would handle the GET and POST methods) before handling other request methods on the same route:

app.options('/login', ...);
app.get('/login', ...);
app.post('/login', ...);

In your case, it might be as simple as changing your app.use() call to app.options(), passing the route as the first argument, setting the appropriate headers, then ending the response:

app.options('/login', function (req, res) {
    res.setHeader("Access-Control-Allow-Origin", "*");
    res.setHeader('Access-Control-Allow-Methods', '*');
    res.setHeader("Access-Control-Allow-Headers", "*");
    res.end();
});

app.post('/login', function (req, res) {
    ...
});
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34644622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1833014/']} | jdg_74966 |
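A widely used alternative worth noting (an addition, not part of the original answer): the cors middleware package for Express handles the OPTIONS preflight for you. A minimal sketch, assuming the npm cors package is installed:

var cors = require('cors');

app.use(cors()); // enables CORS, including preflight responses, for all routes

app.post('/login', function (req, res) {
    // ...
});

The package also supports per-route use, e.g. app.options('/login', cors()) together with app.post('/login', cors(), handler), if you do not want it applied globally.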
stackexchange | llm_judgeable_groundtruth_similarity | 269955 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $(X_i)_{i\in I}$ be a family of affine schemes, where $I$ is an infinite set and $X_i = Spec(A_i)$ for each $i \in I$. Let $X$ be a coproduct of $(X_i)_{i\in I}$ in the category of schemes. Let $\Gamma(X, \mathcal{O}_X)$ be the ring of global sections. (1) Is $X$ affine? (2) Can we determine the structure of $\Gamma(X, \mathcal{O}_X)$ from $(A_i)_{i\in I}$?
Now provide the response and nothing else.
| (1)No, the scheme $X=\bigsqcup X_i$ is not affine (unless almost all $X_i$ are empty!) because its underlying topological space $\mid X\mid=\bigsqcup \mid X_i\mid$ is not quasi-compact. (2) Yes, $\Gamma (X,\mathcal O_X)$ is determined by the formula $$\Gamma (X,\mathcal O_X)=\prod \Gamma (X_i,\mathcal O_{X_i})=\prod A_i$$ Remarks a) The scheme $X$ has as underlying topological space $\mid X\mid=\bigsqcup \mid X_i\mid$ , as already mentioned, and its structure sheaf is the unique sheaf of rings $\mathcal O_X$ satisfying $\mathcal O_X\mid X_i=\mathcal O_{X_i}$. [O my dear nitpicking brothers , notice that "unique" here really means unique, and not unique up to isomorphism!] b) In order to prevent any misunderstanding, let me emphasize that the scheme $X$ is the coproduct of the schemes $X_i$ in the category of all schemes (not in the category of affine schemes!). In other words, the open immersions $u_i :X_i\hookrightarrow X$ produce bijections $Hom_{schemes}(X,Y)=\prod Hom_{schemes}(X_i,Y): f\mapsto f\circ u_i$ which are functorial in the scheme $Y$. Edit c) Beware the subtle fact that the family of affine schemes $X_i=Spec(A_i)$ also has a coproduct in the category $Affsch$ of affine schemes, namely $X'=Spec(\prod A_i)$. There is a canonical morphism of schemes $$\alpha: X=\bigsqcup_{Sch} X_i \to X'=\bigsqcup_{Affsch} X_i=Spec(\prod A_i)$$ of the coproduct of the $X_i$'s in the category of all schemes to the coproduct of the $X_i$'s in the category of affine schemes. This morphism $\alpha$ is determined by its restrictions $\alpha|X_j:X_j=Spec(A_j)\to Spec(\prod A_i)$, which are dual to the ring projections $\prod A_i\to A_j$. And finally, let me insist: this canonical morphism $\alpha$ is an isomorphism of schemes if and only the family of schemes$(X_i)$ is a finite family. [This edit is the consequence of a pleasant discussion with my friend and very competent colleague Dehon: thanks François-Xavier!] | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/269955', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/28422/']} | jdg_74967 |
stackexchange | llm_judgeable_groundtruth_similarity | 33976911 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to generate random points on the surface of a sphere using numpy. I have reviewed the post that explains uniform distribution here. However, I need ideas on how to generate the points only on the surface of the sphere. I have coordinates (x, y, z) and the radius of each of these spheres. I am not very well-versed with mathematics at this level and am trying to make sense of the Monte Carlo simulation. Any help will be much appreciated. Thanks, Parin
Now provide the response and nothing else.
| Based on the last approach on this page, you can simply generate a vector consisting of independent samples from three standard normal distributions, then normalize the vector such that its magnitude is 1:

import numpy as np

def sample_spherical(npoints, ndim=3):
    vec = np.random.randn(ndim, npoints)
    vec /= np.linalg.norm(vec, axis=0)
    return vec

For example:

from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d

phi = np.linspace(0, np.pi, 20)
theta = np.linspace(0, 2 * np.pi, 40)
x = np.outer(np.sin(theta), np.cos(phi))
y = np.outer(np.sin(theta), np.sin(phi))
z = np.outer(np.cos(theta), np.ones_like(phi))

xi, yi, zi = sample_spherical(100)

fig, ax = plt.subplots(1, 1, subplot_kw={'projection':'3d', 'aspect':'equal'})
ax.plot_wireframe(x, y, z, color='k', rstride=1, cstride=1)
ax.scatter(xi, yi, zi, s=100, c='r', zorder=10)

The same method also generalizes to picking uniformly distributed points on the unit circle (ndim=2) or on the surfaces of higher-dimensional unit hyperspheres.
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33976911', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5616312/']} | jdg_74968 |
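As a quick sanity check (an illustrative addition), every sampled point should sit at unit distance from the origin:

pts = sample_spherical(5)
print(np.linalg.norm(pts, axis=0))  # -> [1. 1. 1. 1. 1.] up to floating-point error

The normalization works because the standard normal distribution in n dimensions is rotationally symmetric, so dividing a nonzero Gaussian sample by its norm lands uniformly on the unit sphere.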
stackexchange | llm_judgeable_groundtruth_similarity | 24037507 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
import pandas as pd

path1 = "/home/supertramp/Desktop/100&life_180_data.csv"
mydf = pd.read_csv(path1)

numcigar = {"Never": 0, "1-5 Cigarettes/day": 1, "10-20 Cigarettes/day": 4}

print mydf['Cigarettes']
mydf['CigarNum'] = mydf['Cigarettes'].apply(numcigar.get).astype(float)
print mydf['CigarNum']

mydf.to_csv('/home/supertramp/Desktop/powerRangers.csv')

The csv file "100&life_180_data.csv" contains columns like age, bmi, Cigarettes, Alcohol etc.

No             int64
Age            int64
BMI          float64
Alcohol       object
Cigarettes    object
dtype: object

The Cigarettes column contains "Never", "1-5 Cigarettes/day", "10-20 Cigarettes/day". I want to assign weights to these values (Never, 1-5 Cigarettes/day, ...). The expected output is a new appended column CigarNum which consists only of numbers. CigarNum is as expected for the first 8 rows and then shows NaN till the last row of the CigarNum column.

0      Never
1      Never
2      1-5 Cigarettes/day
3      Never
4      Never
5      Never
6      Never
7      Never
8      Never
9      Never
10     Never
11     Never
12     10-20 Cigarettes/day
13     1-5 Cigarettes/day
14     Never
...
167    Never
168    Never
169    10-20 Cigarettes/day
170    Never
171    Never
172    Never
173    Never
174    Never
175    Never
176    Never
177    Never
178    Never
179    Never
180    Never
181    Never
Name: Cigarettes, Length: 182, dtype: object

The output I get shouldn't give NaN after the first few rows.

0      0
1      0
2      1
3      0
4      0
5      0
6      0
7      0
8      0
9      0
10     NaN
11     NaN
12     NaN
13     NaN
14     0
...
167    NaN
168    NaN
169    NaN
170    NaN
171    NaN
172    NaN
173    NaN
174    NaN
175    NaN
176    NaN
177    NaN
178    NaN
179    NaN
180    NaN
181    NaN
Name: CigarNum, Length: 182, dtype: float64
Now provide the response and nothing else.
| OK, the first problem is that you have embedded spaces causing the function to apply incorrectly; fix this using the vectorised str methods:

mydf['Cigarettes'] = mydf['Cigarettes'].str.replace(' ', '')

Now creating your new column should just work:

mydf['CigarNum'] = mydf['Cigarettes'].apply(numcigar.get).astype(float)

UPDATE

Thanks to @Jeff as always for pointing out superior ways to do things: you can call replace instead of calling apply:

mydf['CigarNum'] = mydf['Cigarettes'].replace(numcigar)
# now convert the types
mydf['CigarNum'] = mydf['CigarNum'].convert_objects(convert_numeric=True)

You can also use the factorize method. Thinking about it, why not just set the dict values to be floats anyway, so that you avoid the type conversion? So:

numcigar = {"Never": 0.0, "1-5 Cigarettes/day": 1.0, "10-20 Cigarettes/day": 4.0}

Version 0.17.0 or newer

convert_objects is deprecated since 0.17.0; it has been replaced with to_numeric:

mydf['CigarNum'] = pd.to_numeric(mydf['CigarNum'], errors='coerce')

Here errors='coerce' will return NaN where the values cannot be converted to a numeric value; without it, an exception is raised.
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24037507', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3504055/']} | jdg_74969 |
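On current pandas versions the whole pipeline can be collapsed into one line (a hedged variant of the above; str.strip() trims only leading/trailing whitespace, so the dictionary keys keep their internal spaces):

mydf['CigarNum'] = pd.to_numeric(mydf['Cigarettes'].str.strip().map(numcigar), errors='coerce')

map returns NaN for labels missing from numcigar, and errors='coerce' keeps the resulting column numeric.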
stackexchange | llm_judgeable_groundtruth_similarity | 24006291 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to have PostgreSQL return the result of a query as one JSON array. Given create table t (a int primary key, b text);insert into t values (1, 'value1');insert into t values (2, 'value2');insert into t values (3, 'value3'); I would like something similar to [{"a":1,"b":"value1"},{"a":2,"b":"value2"},{"a":3,"b":"value3"}] or {"a":[1,2,3], "b":["value1","value2","value3"]} (actually it would be more useful to know both). I have tried some things like select row_to_json(row) from (select * from t) row;select array_agg(row) from (select * from t) row;select array_to_string(array_agg(row), '') from (select * from t) row; And I feel I am close, but not there really. Should I be looking at other documentation except for 9.15. JSON Functions and Operators ? By the way, I am not sure about my idea. Is this a usual design decision? My thinking is that I could, of course, take the result (for example) of the first of the above 3 queries and manipulate it slightly in the application before serving it to the client, but if PostgreSQL can create the final JSON object directly, it would be simpler, because I still have not included any dependency on any JSON library in my application.
Now provide the response and nothing else.
| TL;DR SELECT json_agg(t) FROM t for a JSON array of objects, and SELECT json_build_object( 'a', json_agg(t.a), 'b', json_agg(t.b) )FROM t for a JSON object of arrays. List of objects This section describes how to generate a JSON array of objects, with each row being converted to a single object. The result looks like this: [{"a":1,"b":"value1"},{"a":2,"b":"value2"},{"a":3,"b":"value3"}] 9.3 and up The json_agg function produces this result out of the box. It automatically figures out how to convert its input into JSON and aggregates it into an array. SELECT json_agg(t) FROM t There is no jsonb (introduced in 9.4) version of json_agg . You can either aggregate the rows into an array and then convert them: SELECT to_jsonb(array_agg(t)) FROM t or combine json_agg with a cast: SELECT json_agg(t)::jsonb FROM t My testing suggests that aggregating them into an array first is a little faster. I suspect that this is because the cast has to parse the entire JSON result. 9.2 9.2 does not have the json_agg or to_json functions, so you need to use the older array_to_json : SELECT array_to_json(array_agg(t)) FROM t You can optionally include a row_to_json call in the query: SELECT array_to_json(array_agg(row_to_json(t))) FROM t This converts each row to a JSON object, aggregates the JSON objects as an array, and then converts the array to a JSON array. I wasn't able to discern any significant performance difference between the two. Object of lists This section describes how to generate a JSON object, with each key being a column in the table and each value being an array of the values of the column. It's the result that looks like this: {"a":[1,2,3], "b":["value1","value2","value3"]} 9.5 and up We can leverage the json_build_object function: SELECT json_build_object( 'a', json_agg(t.a), 'b', json_agg(t.b) )FROM t You can also aggregate the columns, creating a single row, and then convert that into an object: SELECT to_json(r)FROM ( SELECT json_agg(t.a) AS a, json_agg(t.b) AS b FROM t) r Note that aliasing the arrays is absolutely required to ensure that the object has the desired names. Which one is clearer is a matter of opinion. If using the json_build_object function, I highly recommend putting one key/value pair on a line to improve readability. You could also use array_agg in place of json_agg , but my testing indicates that json_agg is slightly faster. There is no jsonb version of the json_build_object function. You can aggregate into a single row and convert: SELECT to_jsonb(r)FROM ( SELECT array_agg(t.a) AS a, array_agg(t.b) AS b FROM t) r Unlike the other queries for this kind of result, array_agg seems to be a little faster when using to_jsonb . I suspect this is due to overhead parsing and validating the JSON result of json_agg . Or you can use an explicit cast: SELECT json_build_object( 'a', json_agg(t.a), 'b', json_agg(t.b) )::jsonbFROM t The to_jsonb version allows you to avoid the cast and is faster, according to my testing; again, I suspect this is due to overhead of parsing and validating the result. 9.4 and 9.3 The json_build_object function was new to 9.5, so you have to aggregate and convert to an object in previous versions: SELECT to_json(r)FROM ( SELECT json_agg(t.a) AS a, json_agg(t.b) AS b FROM t) r or SELECT to_jsonb(r)FROM ( SELECT array_agg(t.a) AS a, array_agg(t.b) AS b FROM t) r depending on whether you want json or jsonb . (9.3 does not have jsonb .) 9.2 In 9.2, not even to_json exists. 
You must use row_to_json:

SELECT row_to_json(r)
FROM (
    SELECT array_agg(t.a) AS a,
           array_agg(t.b) AS b
    FROM t
) r

Documentation

Find the documentation for the JSON functions in JSON functions. json_agg is on the aggregate functions page.

Design

If performance is important, ensure you benchmark your queries against your own schema and data, rather than trust my testing. Whether it's a good design or not really depends on your specific application. In terms of maintainability, I don't see any particular problem. It simplifies your app code and means there's less to maintain in that portion of the app. If PG can give you exactly the result you need out of the box, the only reason I can think of to not use it would be performance considerations. Don't reinvent the wheel and all.

Nulls

Aggregate functions typically give back NULL when they operate over zero rows. If this is a possibility, you might want to use COALESCE to avoid them. A couple of examples:

SELECT COALESCE(json_agg(t), '[]'::json) FROM t

Or:

SELECT to_jsonb(COALESCE(array_agg(t), ARRAY[]::t[])) FROM t

Credit to Hannes Landeholm for pointing this out.
 | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/24006291', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1097105/']} | jdg_74970 |
stackexchange | llm_judgeable_groundtruth_similarity | 465718 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There's an 80% probability of a certain outcome, we get some new information that means that outcome is 4 times more likely to occur. What's the new probability as a percentage and how do you work it out? As I remember it the question was posed like so: Suppose there's a student, Tom W, if you were asked to estimate the probability that Tom is a student of computer science. Without any other information you would only have the base rate to go by (percentage of total students enrolled on computer science) suppose this base rate is 80%. Then you are given a description of Tom W's personality, suppose from this description you estimate that Tom W is 4 times more likely to be enrolled on computer science. What is the new probability that Tom W is enrolled on computer science. The answer given in the book is 94.1% but I couldn't work out how to calculate it! Another example in the book is with a base rate of 3%, 4 times more likely than this is stated as 11%.
Now provide the response and nothing else.
| The most reasonable way to match the answer in the book would be to define the likelihood to be the ratio of success over failure (aka odds):
$$q=\frac{p}{1-p}$$
then the probability as a function of the odds is
$$p=\frac{q}{1+q}$$
In your case the odds are $4:1$, so $4$ times as likely would be $16:1$ odds, which has a probability of
$$\frac{16}{17}=94.1176470588235\%$$
This matches the $3\%$ to $11.0091743119266\%$ transformation as well.

Bayes' Rule

Bayes' Rule for a single event says that
$$O(A\mid B)=\frac{P(B\mid A)}{P(B\mid\neg A)}\,O(A)$$
where the odds of $X$ are defined as earlier:
$$O(X)=\frac{P(X)}{P(\neg X)}=\frac{P(X)}{1-P(X)}$$
This is exactly what is being talked about in the later addition to the question, where it is given that
$$\frac{P(B\mid A)}{P(B\mid\neg A)}=4$$
 | {} | {'log_upvote_score': 8, 'links': ['https://math.stackexchange.com/questions/465718', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/90007/']} | jdg_74971 |
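The $3\%$ case works out the same way (a worked check added for concreteness, not part of the original answer):
$$O=\frac{0.03}{0.97}=\frac{3}{97},\qquad 4\,O=\frac{12}{97},\qquad p=\frac{12/97}{1+12/97}=\frac{12}{109}\approx 11.009\%$$
which reproduces the book's second figure.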
stackexchange | llm_judgeable_groundtruth_similarity | 29633848 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is really a variation on this question, but not a duplicate: Python reverse / invert a mapping Given a dictionary like so: mydict= { 'a': ['b', 'c'], 'd': ['e', 'f'] } How can one invert this dict to get: inv_mydict = { 'b':'a', 'c':'a', 'e':'d', 'f':'d' } Note that values span uniquely under each key. Note : I previously had syntax map = ... and dict = ... Reminder not to use map and dict as they are built-in functions, see excellent comments and answers below :)
Now provide the response and nothing else.
| TL;DR Use a dictionary comprehension, like this:

>>> my_map = { 'a': ['b', 'c'], 'd': ['e', 'f'] }
>>> {value: key for key in my_map for value in my_map[key]}
{'c': 'a', 'f': 'd', 'b': 'a', 'e': 'd'}

The dictionary comprehension above is functionally equivalent to the following looping structure, which populates an empty dictionary:

>>> inv_map = {}
>>> for key in my_map:
...     for value in my_map[key]:
...         inv_map[value] = key
...
>>> inv_map
{'c': 'a', 'f': 'd', 'b': 'a', 'e': 'd'}

Note: Using map shadows the built-in map function. So, don't use that as a variable name unless you know what you are doing.

Other similar ways to do the same

Python 3.x

You can use dict.items, like this:

>>> {value: key for key, values in my_map.items() for value in values}
{'c': 'a', 'f': 'd', 'b': 'a', 'e': 'd'}

We use the items() method here, which creates a view object from the dictionary that yields key-value pairs on iteration. So we just iterate over it and construct a new dictionary with the inverse mapping.

Python 2.x

You can use dict.iteritems, like this:

>>> {value: key for key, values in my_map.iteritems() for value in values}
{'c': 'a', 'b': 'a', 'e': 'd', 'f': 'd'}

We don't prefer the items() method in 2.x, because it returns a list of key-value pairs. We don't want to construct a list just to iterate and construct a new dictionary. That is why we prefer iteritems(), which returns an iterator object that gives a key-value pair on each iteration.

Note: The actual equivalent of Python 3.x's items would be Python 2.x's viewitems method, which returns a view object. Read more about the view object in the official documentation, here.

iter* vs view* methods in Python 2.x

The main difference between iter* functions and view* functions in Python 2.x is that the view objects reflect the current state of the dictionary. For example,

>>> d = {1: 2}
>>> iter_items = d.iteritems()
>>> view_items = d.viewitems()

now we add a new element to the dictionary:

>>> d[2] = 3

If you try to check if (2, 3) (a key-value pair) is in iter_items, it will throw an error:

>>> (2, 3) in iter_items
Traceback (most recent call last):
  File "<input>", line 1, in <module>
RuntimeError: dictionary changed size during iteration

but the view object will reflect the current state of the dictionary. So, it will work fine:

>>> (2, 3) in view_items
True
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29633848', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/992834/']} | jdg_74972 |
stackexchange | llm_judgeable_groundtruth_similarity | 4120001 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We use SVN at work, but for my personal projects I decided to use Git. So I installed Git yesterday, and I wonder what the equivalent of a revision number is in Git. Let's say we work on version 3.0.8 and every bug fix has its own revision number we can use when we talk about that bug fix. So if I tag the code in Git as 3.0.8, what can I then use as a revision number or some other more detailed kind of identification? I find the hash not so user-friendly for humans.
Now provide the response and nothing else.
| Good or bad news for you, that hash IS the revision number. I also had trouble with this when I made the switch from SVN to git. You can use "tagging" in git to tag a certain revision as the "release" for a specific version, making it easy to refer to that revision. Check out this blog post. The key thing to understand is that git cannot have revision numbers; think about its decentralized nature. If users A and B are both committing to their local repositories, how can git reasonably assign a sequential revision number? A has no knowledge of B before they push/pull each other's changes. Another thing to look at is simplified branching for bugfix branches. Start with a release: 3.0.8. Then, after that release, do this:

git branch bugfixes308

This will create a branch for bugfixes. Check out the branch:

git checkout bugfixes308

Now make any bugfix changes you want, and commit them:

git commit -a

Then switch back to the master branch:

git checkout master

Then pull in those changes from the other branch:

git merge bugfixes308

That way, you have a separate release-specific bugfix branch, but you're still pulling the bugfix changes into your main dev trunk.
 | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4120001', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/250422/']} | jdg_74973 |
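A related built-in that complements tagging (a supplementary note, not part of the original answer): once releases are tagged, git describe derives human-readable identifiers from the nearest tag, which is often the closest git gets to an SVN-style revision number:

git tag v3.0.8          # tag the release commit
git describe --tags     # e.g. v3.0.8-14-g2414721: 14 commits past v3.0.8, abbreviated hash 2414721

The sample output shown here is illustrative.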
stackexchange | llm_judgeable_groundtruth_similarity | 35680335 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using node.js v4.3.1 I would like to use named parameters in calling functions as they are more readable. In python, I can call a function in this manner; info(spacing=15, width=46) How do I do the same in node.js? My javascript function looks something like this; function info(spacing, width){ //implementation{
Now provide the response and nothing else.
| The standard Javascript way is to pass an "options" object, like

info({spacing: 15, width: 46});

used in the code with

function info(options) {
    var spacing = options.spacing || 0;
    var width = options.width || "50%";
    ...
}

as missing keys in objects return undefined, which is "falsy". Note that passing values that are "falsy" can be problematic with this kind of code... so if this is needed you have to write more sophisticated code like

var width = options.hasOwnProperty("width") ? options.width : "50%";

or

var width = "width" in options ? options.width : "50%";

depending on whether you want to support inherited options or not. Pay attention also that every "standard" object in Javascript inherits a constructor property, so don't name an option that way.
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35680335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1709088/']} | jdg_74974 |
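On newer Node.js versions, ES2015 destructuring gives named-parameter-like syntax directly (a hedged alternative: on Node 4.x this needed a harmony flag, so it is only safe unflagged on Node 6 and later):

function info({ spacing = 0, width = "50%" } = {}) {
    // spacing and width act like named parameters with defaults
}

info({ spacing: 15, width: 46 });
info(); // defaults apply, thanks to the = {} fallback

Unlike the || idiom above, these defaults only kick in for undefined, so falsy values such as 0 pass through correctly.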
stackexchange | llm_judgeable_groundtruth_similarity | 416689 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
We are running a CentOS 6 node using the iuscommunity repo, and want to upgrade the previously installed php53u to php54. However, this introduces a dependency conflict between php53u and php54. Does anyone have an idea for this problem?

--> Running transaction check
---> Package php54-fpm.i686 0:5.4.5-1.ius.el6 will be installed
--> Processing Dependency: php-common = 5.4.5-1.ius.el6 for package: php54-fpm-5.4.5-1.ius.el6.i686
---> Package php54-gd.i686 0:5.4.5-1.ius.el6 will be installed
---> Package php54-mbstring.i686 0:5.4.5-1.ius.el6 will be installed
---> Package php54-pecl-apc.i686 0:3.1.11-2.ius.el6 will be installed
--> Processing Dependency: php54 >= 5.3.5-1 for package: php54-pecl-apc-3.1.11-2.ius.el6.i686
--> Running transaction check
---> Package php54.i686 0:5.4.5-1.ius.el6 will be installed
--> Processing Dependency: php54-cli = 5.4.5-1.ius.el6 for package: php54-5.4.5-1.ius.el6.i686
---> Package php54-common.i686 0:5.4.5-1.ius.el6 will be installed
--> Running transaction check
---> Package php54-cli.i686 0:5.4.5-1.ius.el6 will be installed
--> Processing Conflict: php54-5.4.5-1.ius.el6.i686 conflicts php53u
--> Processing Conflict: php54-5.4.5-1.ius.el6.i686 conflicts php < 5.4
--> Finished Dependency Resolution
Error: php54 conflicts with php53u
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
Now provide the response and nothing else.
| Have you tried removing php53u? A simple:

yum remove php53u

followed by:

yum install php54

should do the trick for you. If it does not, then you can try removing the php53u package(s) using the 'rpm' tool, leaving dependencies intact, but first check the list of packages installed so you can later re-install them from the new php54 repository:

rpm -qa | grep ^php53u.* > /root/php-packages.log

Keep a copy of the list produced. On one of my servers, this list looks like:

$ rpm -qa | grep ^php54*
php54-common-5.4.4-1.ius.el6.x86_64
php54-pdo-5.4.4-1.ius.el6.x86_64
php54-mysql-5.4.4-1.ius.el6.x86_64
php54-fpm-5.4.4-1.ius.el6.x86_64
php54-gd-5.4.4-1.ius.el6.x86_64
php54-cli-5.4.4-1.ius.el6.x86_64
php54-5.4.4-1.ius.el6.x86_64
php54-mbstring-5.4.4-1.ius.el6.x86_64

I can then remove these, without removing dependencies, as such:

for p in $( cat /root/php-packages.log ); do rpm -e --nodeps $p; done

(Note that I am using 'php54' here; you will need to use 'php53u'.) Once you've done this, simply install the php54 package(s) via yum:

yum install php54

OR you can somewhat automate the re-installation of everything:

yum install $( cat /root/php-packages.log | sed 's/php53u/php54/g' )

Done. Here's a one-liner for the whole thing:

rpm -qa | grep ^php53u.* > /root/php-packages.log; for p in $( cat /root/php-packages.log ); do rpm -e --nodeps $p; done; yum install $( cat /root/php-packages.log | sed 's/php53u/php54/g' ) -y

I hope this helps :-)
 | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/416689', 'https://serverfault.com', 'https://serverfault.com/users/64023/']} | jdg_74975 |
stackexchange | llm_judgeable_groundtruth_similarity | 3859270 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a class like the following:

class DreamsImagesStore
{
    public $table = 'dreams_images';

    public function insertNewDreamImage($dream_id, $pid)
    {
        try {
            $values = array($dream_id, $pid);
            $sth = $this->dbh->prepare("INSERT INTO {$this->table} (dream_id, pid) VALUES (?, ?)");
            if($sth->execute($values)) {
                return true;
            }
        } catch(PDOException $e) {
            $this->errorLogger($e);
        }
    }
    ...
}

I'm going to be implementing a new class called InterestsImagesStore wherein the only differences between these classes will be the value of $table, $dream_id will be $interest_id, and dream_id in SQL will be interest_id. I know there's a better way to do this, and I'll be implementing similar classes in the future which have such small differences. What's the best object-oriented way to refactor my code to avoid duplication and increase maintainability?
Now provide the response and nothing else.
| Create an ImagesStore base class:

class ImagesStore
{
    // See comments about accessors below.
    private $table;
    private $id_column;

    public function insertImage($id, $pid)
    {
        try {
            $values = array($id, $pid);
            $table = $this->getTable();
            $id_column = $this->getIdColumn();
            $sth = $this->dbh->prepare("INSERT INTO {$table} ($id_column, pid) VALUES (?, ?)");
            if ($sth->execute($values)) {
                return true;
            }
        } catch (PDOException $e) {
            $this->errorLogger($e);
        }
    }

    protected function __construct($table, $id_column)
    {
        $this->table = $table;
        $this->id_column = $id_column;
    }

    // These accessors are only required if derived classes need access
    // to $table and $id_column. Declaring the fields "private" and providing
    // "protected" getters like this prevents the derived classes from
    // modifying these values, which might be a desirable property of these
    // fields.
    protected function getTable() { return $this->table; }
    protected function getIdColumn() { return $this->id_column; }

    // More implementation here...
    // Initialize $dbh to something etc.
    // Provide "errorLogger" method etc.
}

And create DreamsImagesStore and InterestsImagesStore specializations:

class DreamsImagesStore extends ImagesStore
{
    public function __construct()
    {
        parent::__construct('dreams_images', 'dream_id');
    }
}

class InterestsImagesStore extends ImagesStore
{
    public function __construct()
    {
        parent::__construct('interests_images', 'interest_id');
    }
}

The original method insertNewDreamImage can be renamed to insertImage, as it is really more general than the original name suggests. Note that ImagesStore can also be declared abstract if you want to prevent direct instantiation of it. An alternative approach that can be adopted is to not bother deriving classes from ImagesStore at all and just instantiate it directly by making the __construct method public and calling it as follows:

$dreamsImagesStore = new ImagesStore("dreams_images", "dream_id");

Another approach might also be to implement a static factory method in ImagesStore.
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3859270', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/373496/']} | jdg_74976 |
stackexchange | llm_judgeable_groundtruth_similarity | 11117 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Any Eilenberg-MacLane space $K(A,n)$ for abelian $A$ can be given the structure of an $H$-space by lifting the addition on $A$ to a continuous map $K(A\times A,n)=K(A,n)\times K(A,n)\to K(A,n)$. Does somebody know an explicit way to describe this structure in the cases $K({\mathbb Z}/2{\mathbb Z},1)={\mathbb R}P^\infty$ and $K({\mathbb Z},2)={\mathbb C}P^{\infty}$?
Now provide the response and nothing else.
| Look at $\mathbb R^\infty\setminus 0$ as the space of non-zero polynomials, which you can multiply. Pass to the quotient to construct the projective space and, from the multiplication, its $H$-space product. The complex case is quite the same. NB: Jason asks in a comment below if this is the same $H$-space structure that Hanno had in mind. To check, we can use the fact that Hanno's is characterised by the following property: if $\mu:K(\mathbb Z_2,1)\times K(\mathbb Z_2,1)\to K(\mathbb Z_2,1)$ is his product and $\alpha\in H^1(K(\mathbb Z_2,1), \mathbb Z_2)$ is the class represented by the identity map $K(\mathbb Z_2,1)\to K(\mathbb Z_2,1)$, then $\mu^*(\alpha)=\alpha\times 1+1\times\alpha$. One should be able to check that this holds for the map given by multiplication of polynomials in a very small skeleton. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/11117', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3067/']} | jdg_74977 |
stackexchange | llm_judgeable_groundtruth_similarity | 17590080 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using irb, we can list the methods of a particular object by doing the following:

"Name".methods

But if I want to know how many parameters are needed for a particular method, how can I achieve this? I mean, is there any way (by hitting some command in irb) to get the number of parameters for a particular method (instead of referring to the docs)? .methods returns only method names, not the list of parameters for a method.
Now provide the response and nothing else.
| You can use the method Method#arity:

"string".method(:strip).arity
# => 0

From the Ruby documentation:

Returns an indication of the number of arguments accepted by a method. Returns a nonnegative integer for methods that take a fixed number of arguments. For Ruby methods that take a variable number of arguments, returns -n-1, where n is the number of required arguments. For methods written in C, returns -1 if the call takes a variable number of arguments.

So, for example:

# Variable number of arguments, one is required
def foo(a, *b); end
method(:foo).arity
# => -2

# Variable number of arguments, none required
def bar(*a); end
method(:bar).arity
# => -1

# Accepts no argument, implemented in C
"0".method(:to_f).arity
# => 0

# Variable number of arguments (0 or 1), implemented in C
"0".method(:to_i).arity
# => -1

Update

I've just discovered the existence of Method#parameters; it could be quite useful:

def foo(a, *b); end
method(:foo).parameters
# => [[:req, :a], [:rest, :b]]
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17590080', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1471417/']} | jdg_74978 |
stackexchange | llm_judgeable_groundtruth_similarity | 29999188 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have spent a couple of weeks on this issue, but I still can't resolve it. I am calling a web API service using $http in angularjs:

$http({
    method: 'GET',
    url: rootUrl + '/api/Project/ProjectList',
    headers: { 'Content-Type': "application/json; charset=utf-8" }
}).success(function (response) {
    $scope.ProjectList = response;
}).error(function (response, errorCode) {
    if (errorCode == 444) {
    }
})

I have put breakpoints in both the server-side and client-side code. When I call the service, the server-side method is hit quickly. My server-side method (I am using MVC Web API with Entity Framework):

[ActionName("ProjectList")]
[HttpGet]
public IList<Project> ProjectList(Project projectModel)
{
    return objIProjectService.ListOfProject();
}

I checked, with a breakpoint on the objIProjectService.ListOfProject(); line, that the service returns 8 records (8 rows from the database). Everything is going well up to that point. But my http.success and http.error callback functions are hit very slowly. Please see the image below for the performance while calling the http methods. Finally, the http error function is hit after 5 or 10 minutes with the error message below.

System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown

This is the problem. Please let me know how I can solve it. I did try a few things for this issue: I cleared the temp folder - not working. I restarted Visual Studio and cleaned the solution, restarted my system and repaired Visual Studio - not working. But if I delete some rows in the database (I am using SQL Server 2008 R2), then it works. For example, if my database table has fewer than 7 rows, then it works fast without that error. But if my table has more than 8 rows, then it works very slowly and throws the error. Why? Could you please share your solution if you have been stuck on this issue?
Now provide the response and nothing else.
| I think the problem is that the serialiser is accessing all the related properties on your Project class, so rather than directly returning the Entity Framework class, create a new class to represent the data that you wish to send through your API (research DTO classes for further information). You can use the LINQ Select method to get a list of your new DTO class populated with the data from your EF call:

var projects = objIProjectService.ListOfProject();
return projects.Select(p => new ProjectDTO() {
    ID = p.Id
    // ... other properties of the DTO class
}).ToList();

Even better, if you put this Select into your EF query (i.e. context.projects.Select(/* select info here */).ToList()), you can make sure EF is only bringing back the fields that you need. When building an API, always check the JSON/XML response and make sure the serialised data contains what you were expecting it to produce. With Entity Framework this response can end up huge as it navigates through all the related tables, pulling out all the linked information and then attempting to serialise it. As a personal preference, I always prefer to return an IHttpActionResult; it allows you to manage what is being sent back to the client, especially when there are issues. The controller has a number of methods you can use to create this, i.e. Ok(), BadRequest(), InternalServerError() ...
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29999188', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2218635/']} | jdg_74979 |
stackexchange | llm_judgeable_groundtruth_similarity | 3057027 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to define constants in Perl using the constant pragma: use constant { FOO => "bar", BAR => "foo"}; I'm running into a bit of trouble, and hoping there's a standard way of handling it. First of all... I am defining a hook script for Subversion. To make things simple, I want to have a single file where the class (package) I'm using is in the same file as my actual script. Most of this package will have constants involved in it: print "This is my program"; package MyClass; use constant { FOO => "bar" }; sub new { ... } I would like my constant FOO to be accessible to my main program. I would like to do this without having to refer to it as MyClass::FOO . Normally, when the package is a separate file, I could do this in my main program: use MyClass qw(FOO); but, since my class and program are a single file, I can't do that. What would be the best way for my main program to be able to access my constants defined in my class? The second issue... I would like to use the constant values as hash keys: $myHash{FOO} = "bar"; The problem is that %myHash has the literal string FOO as the key and not the value of the constant. This causes problems when I do things like this: if (defined $myHash{FOO}) { print "Key " . FOO . " does exist!\n";} I could force the context: if (defined $myHash{"" . FOO . ""}) { I could add parentheses: if (defined $myHash{FOO()}) { Or, I could use a temporary variable: my $foo = FOO;if (defined $myHash{$foo}) { None of these are really nice ways of handling this issue. So, what is the best way? Is there one way I'm missing? By the way, I don't want to use Readonly::Scalar because it is 1). slow, and 2). not part of the standard Perl package. I want to define my hook not to require additional Perl packages and to be as simple as possible to work.
Now provide the response and nothing else.
| If you want to keep everything in the same file, you can define your constants package as follows:

use warnings;
use strict;

BEGIN {                          # BEGIN means this will all happen at compile time
    package Constants;
    $INC{'Constants.pm'}++;      # tell `require` that the package is loaded
    use base 'Exporter';         # setup package to export
    our @EXPORT_OK = qw( PI );   # what to export
    use constant PI => 3.14159;  # define your constant
}

package main;
use Constants qw( PI );          # use it like normal
print PI;

Then to fool the auto-quoting inside hash subscripts, you can write it like this:

$hash{+PI} or $hash{(PI)} or $hash{PI()} or $hash{&PI} or $hash{::PI} ...

I could probably keep going, but I think you get the point. The reason that $INC{'Constants.pm'}++ is needed is because the use Constants qw( PI ); line really means:

BEGIN {
    require 'Constants.pm';
    Constants->import( qw( PI ) );
}

And require will check %INC to see if the package has been loaded already. So by giving it a true value (1 in this case), the require 'Constants.pm'; portion of the use will become a no-op.
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3057027', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/368630/']} | jdg_74980 |
stackexchange | llm_judgeable_groundtruth_similarity | 34287 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was going through some reference design from Altera's Wiki and ran into the following piece of code:

always @* begin
    in_ready <= out_ready || ~out_valid;
end

My understanding is that having a flip-flop like that helps get more deterministic timing; the synthesis tool can refuse to synthesize if the out_ready || ~out_valid combinatorial delay exceeds the limit and timing requirements are not met, etc. Other than that, it should be an equivalent of the following combinatorial description:

assign in_ready = (out_ready || ~out_valid);

What made me think about it is the larger piece of code that I am reading. Sometimes combinatorial logic is used to drive the "ready" output of some modules, but sometimes this style. I cannot really spot a functional difference or see an obvious explanation. I've read somewhere that combinatorial logic is not welcome in FPGAs because of timing issues and a synchronized design is always preferred, and this project specifically targets FPGAs. But the mixing is what confuses me. So two questions popped out in my head: When and why should one solution be preferred over another? Is there any difference in behavior (or analysis of the design) when the output of the module is a "reg" versus when it is a "wire"? Any help is appreciated. Thank you.
Now provide the response and nothing else.
| If you use a flip flop, then changes in the combinational logic will only "appear" on the output of the flop on a clock edge. In general, with combinational logic, you will get all sorts of spurious transitions while the logic is "settling" on it's eventual value. If spurious transitions are a bad thing in your overall design (most likely they are), you will want to "filter them out" using a flip flop in this way. These transient changes in combinational outputs when inputs change are generally called "hazards." In terms of the hardware you are generating in both cases, you will get the same combinational logic generated in your FPGA in both cases (out_ready || ~out_valid). The difference is, in the one case you are simply aliasing the output net of the combinational logic with the name in_ready, whereas in the other case you are connecting the output of the combinational logic to the input of a clocked flip flop. In summary, there's nothing wrong with combinational logic, and you'd be hard pressed to design something useful in an FPGA that didn't include a bunch of it. Running the output through a flip flop isolates the next "stage" (i.e. the consumer) of your logic from seeing the spurious transitions of that logic that happen inherently when the inputs change (this design pattern, incidentally, is what pipelining - e.g. in modern CPUs - is all about). Remember to think of FPGAs and digital logic as hardware when writing Verilog (because that's what they are after all)! Also, on a sidenote, determinism is not something that is measured on a sliding scale - something either is or is not deterministic, end of story. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/34287', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/-1/']} | jdg_74981 |
stackexchange | llm_judgeable_groundtruth_similarity | 30477 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm studying probability to prepare myself for machine learning. I came across this notion of the expectation of a random variable, or of a function of a random variable, like $E[X]$ or $E[g(X)]$. Can anybody explain to me the intuitive idea behind this notion (taking into consideration its implications in machine learning, maybe)?
Now provide the response and nothing else.
| Expected value means the average value of the random variable. It is a weighted average of the values of the function, weighted by the probabilities of taking those values. Expected value is a linear map from some random variables to numbers. This map throws away all of the information about a random variable except one number. This number is the center of mass of the distribution. In case the random variable is a statistic you can measure from a sample, you can imagine repeating independent samples many times. In that case, the Strong Law of Large Numbers (a theorem) says that the average of the results will be close to the expected value in that the limit of the averages will converge to the expected value. This does not mean that you need a statistic to come from a repeated experiment for the expected value to make sense. Expected value makes sense even if you can only observe a statistic once. If your text doesn't spend a lot of time on the central and fundamental idea of expected value, it could be that the author assumes you have already read another text which has done this, and so you might look for a more elementary text. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/30477', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8874/']} | jdg_74982 |
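For reference, the definitions behind this discussion (standard formulas added for concreteness; $p$ denotes the probability mass function of a discrete random variable $X$):
$$E[X]=\sum_x x\,p(x),\qquad E[g(X)]=\sum_x g(x)\,p(x),$$
with the sums replaced by integrals against the density when $X$ is continuous. The second identity, often called the law of the unconscious statistician, is what lets you average $g(X)$ directly, without first working out the distribution of $g(X)$.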
stackexchange | llm_judgeable_groundtruth_similarity | 25182858 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to have code where I would declare a column name as a variable and then use this variable to retrieve the desired column from a certain table.

DECLARE
  col_n VARCHAR (100) := 'X';
BEGIN
  select col_n from my_table;
END;

What is the easiest and most explicit way to do that in Oracle?
Now provide the response and nothing else.
| You can use dynamic SQL to execute a query that you construct as a string. It would go something along these lines:

DECLARE
  col_n       VARCHAR (100) := 'X';
  plsql_block VARCHAR2(500);
BEGIN
  plsql_block := 'select ' || col_n || ' from my_table';
  EXECUTE IMMEDIATE plsql_block;
END;
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25182858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3787877/']} | jdg_74983 |
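One caveat worth flagging (an addition, not in the original answer): in PL/SQL, an EXECUTE IMMEDIATE of a SELECT needs an INTO clause (or a cursor) to actually fetch the result. A hedged single-row sketch, with the variable type chosen purely for illustration:

DECLARE
  col_n  VARCHAR2(100) := 'X';
  result VARCHAR2(4000);  -- illustrative type; match it to the real column
BEGIN
  EXECUTE IMMEDIATE 'select ' || col_n || ' from my_table where rownum = 1'
    INTO result;
END;

For multi-row results, OPEN ... FOR with a ref cursor or BULK COLLECT INTO a collection are the usual routes.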
stackexchange | llm_judgeable_groundtruth_similarity | 494096 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
There may be situations where I would need to find an object by parameters other than ID. What is the proper RESTful way to do that? For example I might want to find a User by username and password , so the strictly RESTful " GET /users/1 " wouldn't work. According to the Rails docs this is the URL pattern for getting all the instances of a resource: " GET /users ". I could add parameters to that: " GET /users?username=joe&password=topsecret ", but that would be perverting the official purpose of the GET request.
Now provide the response and nothing else.
| "GET /users?username=joe&password=topsecret", but that would be perverting the official purpose of the GET request." No it isn't perverting anything. That's absolutely the correct and RESTful way to do it, and is the reccomended way of retrieving dynamic results in the http spec. REST doesn't care what's in the URL, only that it's unique. The url for that page could be http://3f778a9b8a7c778696e for all REST architecture cares, so long as that's the only way to get there, and it doesn't ever lead anywhere else. http defines a query string protocol for returning dynamic results. Given the current state of your database, the query string that you give your application ought to always return the same result. Then it will be RESTFUL. URL aesthetics are a different issue from REST altogether. according to the REST architecture, the rules of the GET request are that it always returns the same results (or maintains the same results for reasonably long periods of time, so that caching works), and that GET Doesn't have side effects. GET needs to be idempotent (always return the same results, regardless of how many times you call it) and not cause the system to change state. That is it . Of course you don't have to use the query protocol. You can put parameters into forward slashes, inbetween semicolons, or it could be a base64 encoded GUID. It's entirely up to you, as long as it follows those simple rules. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/494096', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/42595/']} | jdg_74984 |
stackexchange | llm_judgeable_groundtruth_similarity | 23604026 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having a hard time adding a class to a date in Bootstrap. Here's the datepicker. What I'm trying to achieve is to put a small blue dot on a date I specify. I'm thinking of adding a class to the date. How should I do this?
Now provide the response and nothing else.
| Depending on the datepicker you are using, you can do something like this: most of the date pickers have a beforeShowDay option. You can set a class here to add to the day you want to change. For this example I'm using http://eternicode.github.io/bootstrap-datepicker — an example of how to do this can be found here: jsFiddle. You will want to put the dates you want to highlight/mark into an array:

var active_dates = ["23/5/2014","21/5/2014"];

Then use the beforeShowDay option to check the dates against the current day being shown, and apply a class:

<input type="text" id="datepicker" />

$("#datepicker").datepicker({
    format: "dd/mm/yyyy",
    autoclose: true,
    todayHighlight: true,
    beforeShowDay: function(date){
        var d = date;
        var curr_date = d.getDate();
        var curr_month = d.getMonth() + 1; // Months are zero based
        var curr_year = d.getFullYear();
        var formattedDate = curr_date + "/" + curr_month + "/" + curr_year;
        if ($.inArray(formattedDate, active_dates) != -1){
            return {
                classes: 'activeClass'
            };
        }
        return;
    }
});

The activeClass can be any form of CSS. In my example I have just changed the background color. In your case you could offset an image and apply it to the day.

.activeClass{
    background: #F00;
}

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23604026', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2886106/']} | jdg_74985 |
stackexchange | llm_judgeable_groundtruth_similarity | 6433369 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I stop an event from being processed, or switch which function is called for it? Revised code:

from Tkinter import *

class GUI:
    def __init__(self, root):
        Window = Frame(root)
        self.DrawArea = Canvas(Window)
        self.DrawArea.pack()
        Window.pack()
        self.DrawArea.bind("<Button 1>", self.starttracking)

    def updatetracking(self, event):
        print event.x, event.y

    def finishtracking(self, event):
        self.DrawArea.bind("<Button 1>", self.starttracking)
        self.DrawArea.unbind("<Motion>")

    def starttracking(self, event):
        print event.x, event.y
        self.DrawArea.bind("<Motion>", self.updatetracking)
        self.DrawArea.bind("<Button 1>", self.finishtracking)

if __name__ == '__main__':
    root = Tk()
    App = GUI(root)
    root.mainloop()
Now provide the response and nothing else.
| You can simply call bind() again with the new function for the event. Since you are not making use of the third parameter, add, in bind(), this will just overwrite whatever is already there. By default that parameter is '', but it also accepts "+", which will add a callback to the callbacks already triggered by that event. If you start using that optional argument, however, you will need to use the unbind() function to remove individual callbacks. When you call bind(), a funcid is returned. You can pass this funcid as the second parameter to unbind(). Example:

self.btn_funcid = self.DrawArea.bind("<Button 1>", self.my_button_callback, "+")

# Then some time later, to remove just the 'my_button_callback':
self.DrawArea.unbind("<Button 1>", self.btn_funcid)

# But if you want to remove all of the callbacks for the event:
self.DrawArea.unbind("<Button 1>")

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6433369', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/666255/']} | jdg_74986 |
stackexchange | llm_judgeable_groundtruth_similarity | 221713 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
We have a few web servers and a few database servers. To date, they've been standalone machines that are not part of a domain. The web servers don't talk to each other, and the web servers talk to the database servers via SQL auth. My concerns with putting the machines in a domain together were:

added complexity - it's one more "thing" running, and doing "things" that could go wrong.
risk - if a domain controller fails, am I now putting other machines at risk?

However, in certain scenarios it does seem convenient for them to be on a domain, sharing credentials. For example, if I want to give the "services" control on one machine access to another machine (because Remote Desktop craps out), I need to go in and assign privileges on multiple machines - something that I believe Active Directory and domain accounts are meant to simplify. My question: I'm sure there are things I'm not considering here. Is there a best practice?
Now provide the response and nothing else.
| The Powers That Be can clarify things of course, but the entire StackOverflow Network runs on IIS servers with an SQL back end in an Active Directory Domain. I'd say it works well. added complexity - it's one more "thing" running, and doing "things" that could go wrong. Sometimes adding complexity allows you to remove some. Especially if you're worried about scaling out, having a domain can greatly ease the work of adding servers, changing config, and any number of things. Group Policy and centrally administered scripts can do amazing things to ease your life. risk - if a domain controller fails, am I now putting other machines at risk? That's why you have two Domain Controllers, and don't make them reachable from the Internet. If someone penetrates your site, you're pretty much hosed anyway. This is why it's a very good idea to have your AD Domain be just for your application environment, if possible. And finally, Microsoft designs their environment to work within AD. Inter-server communication is both easier and more secure when AD is involved to arbitrate authentication and encourage secure protocol usage. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/221713', 'https://serverfault.com', 'https://serverfault.com/users/14970/']} | jdg_74987 |
stackexchange | llm_judgeable_groundtruth_similarity | 5938125 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know there are many dict-to-list questions on here, but I can't quite find the information I need for my situation, so I'm asking a new question. Some background: I'm using a hierarchical package for my models, and the built-in function which generates the tree structure outputs a nested structure to indicate parents, children, etc. My goal is to keep the logic in views and output a list so that I can simply loop over it in my templates. Here is my data, in the tree structure:

1
-1.1
--1.1.1
---1.1.1.1
--1.1.2
-1.2
--1.2.1
--1.2.2
-1.3

Here is the nested dictionary I am getting as a result:

{
  <Part: 1.1>: {
    <Part: 1.1.1>: {
      <Part: 1.1.1.1>: {}
    },
    <Part: 1.1.2>: {}
  },
  <Part: 1.2>: {
    <Part: 1.2.1>: {},
    <Part: 1.2.2>: {}
  },
  <Part: 1.3>: {}
}

or, if you don't like the way I tried to break it up, here is what I get on a single line:

{<Part: 1.1>: {<Part: 1.1.1>: {<Part: 1.1.1.1>: {}}, <Part: 1.1.2>: {}}, <Part: 1.2>: {<Part: 1.2.1>: {}, <Part: 1.2.2>: {}}, <Part: 1.3>: {}}

What I'd like is to get:

[<Part: 1.1>, <Part: 1.1.1>, <Part: 1.1.1.1>, <Part: 1.1.2>, <Part: 1.2>, <Part: 1.2.2>, <Part: 1.2.1>, <Part: 1.3>]

I've tried just iterating over the keys in dict.items, but then I only get the top-level keys (1.1, 1.2, 1.3). What do I need to do to go deeper? Thanks!
Now provide the response and nothing else.
| I think recursion can be your friend:

top = {"<Part: 1.1>": {"<Part: 1.1.1>": {"<Part: 1.1.1.1>": {}}, "<Part: 1.1.2>": {}},
       "<Part: 1.2>": {"<Part: 1.2.1>": {}, "<Part: 1.2.2>": {}},
       "<Part: 1.3>": {}}

def grab_children(father):
    local_list = []
    for key, value in father.iteritems():
        local_list.append(key)
        local_list.extend(grab_children(value))
    return local_list

print grab_children(top)

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5938125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/484669/']} | jdg_74988 |
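A side note: iteritems is Python 2 only. A sketch of the same traversal in Python 3 — only items() and the print call change:

    def grab_children(father):
        local_list = []
        for key, value in father.items():  # Python 3: items() instead of iteritems()
            local_list.append(key)
            local_list.extend(grab_children(value))
        return local_list

    print(grab_children(top))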
stackexchange | llm_judgeable_groundtruth_similarity | 12926 |
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The word הארץ ('the earth') appears a very large number of times in the six-day creation account spanning Genesis 1:1 to 2:1. It seems clear to me that the usages in 1:1 and 2:1 are special - the only two cases where the word forms part of the phrase "heavens and the earth" which appear to 'bookend' the creation narrative and are often interpreted as referring to "the entire ordered universe, including the heavens and the earth and everything in them." [1] This question is about the use of the word in 1:2:

2 The earth was without form and void, and darkness was over the face of the deep. And the Spirit of God was hovering over the face of the waters. ESV

All the other instances of the word appear to correspond with the 'earth' that appeared and was named in verses 9-10, the 'dry land':

9 And God said, "Let the waters under the heavens be gathered together into one place, and let the dry land appear." And it was so. 10 God called the dry land Earth, and the waters that were gathered together he called Seas. And God saw that it was good. ESV

However, it seems less likely that the occurrence in verse 2 has a similar meaning, because it precedes the defining usage in verse 10, and because it refers to something that is 'without form and void', which isn't compatible with the rather well-defined concept introduced later. For this reason it seems to me that the word as used in verse 2 has a different, third meaning, unique among the usages in this passage. Is such an idea supportable from a Hebrew linguistic perspective? If so, can the word mean something broad like 'universe' or 'creation'?

[1] see the NET notes
Now provide the response and nothing else.
| Short Answer: Based on the textual evidence, it may not be a third usage, but in fact the same as the second usage. In other words, the land (as opposed to the waters or heavens) was formless and void. There are two key pieces of evidence from the text that support this conclusion: Gen. 1:2 does not merely say the earth was formless and void, but also that "darkness was over the face of the deep". "The deep" is a term commonly used to refer to the deep waters. In other words, the picture here is not of an empty nothingness, but of dark, deep waters and purposeless / fruitless land. In Gen. 1:9 it is not that dry land appeared out of nowhere, but rather, that as the waters were gathered the dry land appeared. The picture seems to be that the land was underneath the waters, and until the waters were collected the land was not visible, but as the waters were collected (i.e. into oceans and whatnot) the dry land appeared. Regarding your two observations: 1) While the usage in verse 2 does "precede the defining usage" in verse 10, it should be noted that Genesis was not written to an audience without an established language system. (E.g. the same could be said of "God" in verse 1.) The term "land" already had a semantic range prior to the writing of Genesis 1, so the appearance of "land" in verse 2 prior to verse 10 is not significant. 2) "Without form" and "void" should be understood in context. In the immediate context we see the appearance of both land, water, as well as the ability to hover "above" the waters (think "heavens"). Thus, while the "land" was formless and void, that does not imply its absence or non-existence. Regarding your specific additional questions: As you've already noted, this definition is indeed supported by Hebrew linguistics. I am not aware of any usage of "land" to refer to the entire universe or all of creation. As a side note, it is worth mentioning that: Later Hebrew (and biblical) tradition holds that "the earth was formed out of water and by water" (2 Pet. 3:5) which matches this interpretation well, and When Moses wrote the flood narrative (only a few chapters later) he wrote it and the creation narrative in such a way that they use similar terminology and imagery, linking the two events stylistically. I bring this up because in the flood narrative the land does exist but it is under the water, and only appears when the waters recede. When Moses wrote the Promised Land narrative near the end of the Pentateuch, he also wrote that in such a way that it used similar terminology to the Genesis account, linking these two events stylistically as well. Again, we see here waters parting and dry land appearing as the waters recede. Hope that helps! | {} | {'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/12926', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/43/']} | jdg_74989 |
stackexchange | llm_judgeable_groundtruth_similarity | 10417096 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I am using connection strings in my web.config using SQL authentication. Of course people say this could be a vulnerability as you are storing password in plaintext. However, from what I know, IIS never serves web.config, and web.config should only have read access to administrators and IIS anyway. So if the hacker has gained access to the webserver, then it won't matter what encryption I use because the private key will be on the webserver. Wouldn't encrypting connection string be classified as security through obfuscation? Is it worth encrypting web.config connection string and storing the private key on the webserver? Further, of course if I don't use SSL, I am transmitting connection string over HTTP in plaintext. If I use SSL then this problem should be mitigated as well.
Now provide the response and nothing else.
| I wouldn't say that storing a plaintext password in Web.config is a security vulnerability in and of itself. But encrypting the password is a useful defense-in-depth measure, not just security through obscurity:

What if IIS is misconfigured to serve Web.config?
What if a security vulnerability is discovered in ASP.NET (like the padding oracle vulnerability) that allows anyone to download Web.config?
There are varying degrees of access to the Web server, from full administrative privileges to server-side code injection. If an attacker can only manage the latter, he might be able to read Web.config but might not be able to access the machine keys, especially if your application is running under partial trust.

In the end, it's up to you to decide if the risk of storing plaintext passwords in Web.config is acceptable. Of course, if Windows authentication is an option, then you may want to consider using that instead of SQL authentication.

UPDATE: When talking about security, it's a good idea to identify the assets and the threats. In this case, the asset is sensitive data in the database (if the data is unimportant, then why bother protecting it with a password?), and the threat is the possibility of an attacker somehow gaining access to Web.config, and thus to the database as well. A possible mitigation is to encrypt the database password in Web.config.

"How much of a risk is it? Do we really have to plan for such an astronomically rare occurrence?" This mitigation has already proved its worth once: when the ASP.NET padding oracle vulnerability was discovered. Anyone who stored a plaintext password in Web.config was at risk; anyone who encrypted the password wasn't. How certain are you that another similar vulnerability in ASP.NET won't be discovered in the next few years?

"Should we also encrypt source code and decrypt it at run time? Seems excessive to me." So what if an attacker does get access to your source code? What's the asset you're protecting, and what's the threat you're concerned about? I think that in many cases, source code is much less valuable than data. (I'm thinking here about off-the-shelf commercial and open-source software which anyone can obtain.) And if your source code is valuable, maybe obfuscation is something to think about.

"I feel if they already have even limited access to your box, then your host has failed or you've installed vulnerable services already." What about security vulnerabilities in ASP.NET or your code? They do pop up from time to time.

"My concern is standard practices. Is it a standard?" Microsoft has recommended encrypting connection strings. What you should do is evaluate the risk that storing a plaintext password poses:

How likely is it that an attacker will be able to discover and exploit a security vulnerability that exposes Web.config? Based on past history, I'd say the likelihood is low (but not "astronomically" low).
How valuable or sensitive is your data? If all you're storing is pictures of your cat, then maybe it doesn't matter much whether an attacker gets your database password. But if you're storing personally identifiable information, then from a legal standpoint, I'd say you should take all possible measures to secure your application, including encrypting your connection strings.

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10417096', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/443664/']} | jdg_74990 |
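For reference, the standard tooling for encrypting a configuration section is the aspnet_regiis.exe utility that ships with the .NET Framework. A sketch — the application path "/MyApp" is a placeholder:

    rem Encrypt the connectionStrings section of Web.config for the IIS app at /MyApp
    aspnet_regiis.exe -pe "connectionStrings" -app "/MyApp"

    rem Decrypt it again if needed
    aspnet_regiis.exe -pd "connectionStrings" -app "/MyApp"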
stackexchange | llm_judgeable_groundtruth_similarity | 9739352 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't pop my stash because I merged a branch which apparently conflicts with my stash, and now my stash is seemingly unable to be popped.

app.coffee: needs merge
unable to refresh index

Anyone know how to resolve this? FIXED! Turns out the actual issue was an unresolved merge conflict from the merge, NOT that the stash would cause a merge conflict. Resolution: commit the conflicted file.
Now provide the response and nothing else.
| First, check git status. As the OP mentions, the actual issue was an unresolved merge conflict from the merge, NOT that the stash would cause a merge conflict. That is where git status would mention that file as being "both modified". Resolution: commit the conflicted file. Solution: in this case, simply add and commit your local file. Actually, just git add -- yourfile, or (if you don't want those changes) git reset -- yourfile (to unstage it), is enough to get past the error message. You can find a similar situation from four days before the time of writing this answer (March 13th, 2012) in this post: "'Pull is not possible because you have unmerged files'":

julita@yulys:~/GNOME/baobab/help/C$ git stash pop
help/C/scan-remote.page: needs merge
unable to refresh index

What you did was fix the merge conflict (editing the right file and committing it): see "How do I fix merge conflicts in Git?" What the blog post's author did was:

julita@yulys:~/GNOME/baobab/help/C$ git reset --hard origin/mallard-documentation
HEAD is now at ff2e1e2 Add more steps for optional information for scanning.

I.e., aborting the current merge completely, allowing the git stash pop to be applied. See "Aborting a merge in Git". Those are your two options. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9739352', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/551828/']} | jdg_74991 |
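A minimal command sequence for the first option (resolve, then pop) — the file name here is the one from the question; use whatever git status lists under "Unmerged paths":

    git status          # lists app.coffee as "both modified"
    # edit app.coffee and remove the conflict markers, then:
    git add app.coffee  # mark the conflict as resolved
    git commit          # complete the merge
    git stash pop       # now applies cleanly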
stackexchange | llm_judgeable_groundtruth_similarity | 119858 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was recently shown a pretty cool video about common cosmological misconceptions. It got me reviewing the differences between the event horizon (current distance within which we will see/interact), the particle horizon (current distance within which we have seen/interacted), and the Hubble sphere (distance at which things are currently receding at the speed of light). I've found the following figure extremely helpful: But I'm very confused about the event horizon currently lying outside of the Hubble sphere. If the universe is accelerating in its expansion, then it seems like everything outside of the Hubble sphere at this moment will never be in causal contact, never be within our light cone, and thus be outside of our event horizon. Is this figure correct?
Now provide the response and nothing else.
| As Chris White points out, this is a subtle issue, so I'm eager to see some more answers - perhaps someone can come up with a good car analogy ;) In the meantime, here's my best shot at an explanation: First, accept that the existence of a preferred spatial slicing does not make FLRW spacetime into Minkowski spacetime: proper distance at constant cosmological time is no substitute for special-relativistic proper distance, and all caveats of general relativity still apply. Now, consider comoving coordinates and pick any two points at rest as emitter and (eventual) absorber. No matter the initial proper distance or recession velocity, a photon will move steadily from emitter towards absorber, decreasing the comoving distance it still needs to travel. It will not stop or freeze at any particular distance, and this is even true for photons that get emitted from beyond the event horizon - they'll just take a longer-than-infinite amount of time to reach their destination... The event horizon is basically the past light cone at $t=\infty$, made up from null geodesics, and has physical significance. In contrast, the Hubble sphere is largely arbitrary: it's where a particular coordinate velocity - the recession velocity - reaches $c$. However, the speed of light only limits relative velocities, which need to be evaluated at the same event or via parallel transport. Going by Pulsar's figure, light that reaches us from the Hubble sphere right now has $1\lt z\lt3$, so as far as the photon is concerned, the relative velocity of emitter and absorber was about $(0.7\pm0.1)c$ - nothing to sneeze at, but near enough is not good enough. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/119858', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/8521/']} | jdg_74992 |
stackexchange | llm_judgeable_groundtruth_similarity | 968175 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to create a varchar column in SQL that should contain N'guid', where guid is a GUID generated by .NET (Guid.NewGuid) - class System.Guid. What is the length of the varchar I should expect from a GUID? Is it a static length? Should I use nvarchar (will a GUID ever use Unicode characters)?

varchar(Guid.Length)

PS. I don't want to use a SQL row guid data type. I am just asking what Guid.MaxLength would be.
Now provide the response and nothing else.
| It depends on how you format the Guid:

Guid.NewGuid().ToString() = 36 characters (Hyphenated)
outputs: 12345678-1234-1234-1234-123456789abc

Guid.NewGuid().ToString("D") = 36 characters (Hyphenated, same as ToString())
outputs: 12345678-1234-1234-1234-123456789abc

Guid.NewGuid().ToString("N") = 32 characters (Digits only)
outputs: 12345678123412341234123456789abc

Guid.NewGuid().ToString("B") = 38 characters (Braces)
outputs: {12345678-1234-1234-1234-123456789abc}

Guid.NewGuid().ToString("P") = 38 characters (Parentheses)
outputs: (12345678-1234-1234-1234-123456789abc)

Guid.NewGuid().ToString("X") = 68 characters (Hexadecimal)
outputs: {0x12345678,0x1234,0x1234,{0x12,0x34,0x12,0x34,0x56,0x78,0x9a,0xbc}}

| {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/968175', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/75500/']} | jdg_74993 |
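A quick, self-contained C# sketch to verify these lengths yourself:

    using System;

    class GuidLengths
    {
        static void Main()
        {
            var g = Guid.NewGuid();
            // "D" is what the parameterless ToString() uses.
            foreach (var format in new[] { "N", "D", "B", "P", "X" })
            {
                var s = g.ToString(format);
                Console.WriteLine($"{format}: {s.Length} chars -> {s}");
            }
        }
    }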
stackexchange | llm_judgeable_groundtruth_similarity | 79043 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consequences of tampered /etc/ssh/moduli describes a possible risk if the moduli file for an OpenSSH server has been tampered with. Taking the logic a step further, is there any concern with the default file shipped with OpenSSH? I ask because the Secure Secure Shell article mentions this:

If you chose to enable 5 [diffie-hellman-group-exchange-sha256], open /etc/ssh/moduli if exists, and delete lines where the 5th column is less than 2000. If it does not exist, create it:

ssh-keygen -G "${HOME}/moduli" -b 4096
ssh-keygen -T /etc/ssh/moduli -f "${HOME}/moduli"
rm "${HOME}/moduli"

This reads to me as though DH primes of fewer than 2048 bits are considered insecure and should be replaced with larger primes. However, the OpenSSH developers, smart people, haven't replaced the file that ships by default with one that does include larger primes. Am I missing something?
Now provide the response and nothing else.
| Exchange/Selection process

Why remove primes shorter than 2000 bits? According to RFC 4419, the key exchange starts with the client sending its preferences to the server in the form of 3 numbers: the minimum acceptable modulus length, the maximum acceptable length, and the preferred length. Then the server chooses a random prime that best satisfies this requirement. In practice (at least with OpenSSH 6.7), the client's minimum and maximum are always 1024 and 8192. The preferred length is 8 times the security level of the symmetric cipher. The server then chooses like this:

it discards the primes outside the min-max range,
then it picks the shortest available length that's not less than the preferred length,
finally, it chooses one randomly out of those.

With AES-128, we end up with a 1024-bit modulus, which is more like 2^80 effort to break. (It's not linear; you can't just multiply by 8.)

Why regenerate?

Why regenerate the file from scratch? To quote RFC 4419:

The use of multiple moduli inhibits a determined attacker from precalculating moduli exchange values, and discourages dedication of resources for analysis of any particular modulus.

This has been demonstrated in the Logjam attack. It is not as effective if everyone uses the same moduli file distributed with the SSH package. For this reason, I generated different moduli files on every host. Note: modern SSH uses elliptic-curve Diffie-Hellman, which is more secure in theory. If you want to generate new non-elliptic-curve primes, do this:

ssh-keygen -G moduli-2048.candidates -b 2048
ssh-keygen -T moduli-2048 -f moduli-2048.candidates

Then replace the contents of your moduli file (usually /etc/ssh/moduli) with the contents of moduli-2048. | {} | {'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/79043', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/30712/']} | jdg_74994 |
stackexchange | llm_judgeable_groundtruth_similarity | 25291607 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got a form built in Symfony, and when rendered in the view, the HTML form may or may not contain all of the fields in the form object (the entity sort of has a couple of different states, and not all fields are rendered in the view). The problem is that when the form is processed in the submit handler, via the handleRequest() method of the form object, it resets any properties in the entity that are not present in the POST data to null, blowing away any existing value. Is there any way to tell Symfony not to be so stupid and only process the fields present in the POST data? Or do I have to clone the entity before the handleRequest call, then loop over the POST values and copy the related values from the post-handleRequest entity over to the pre-handleRequest clone of the entity, so I preserve the fields that are not in the POST data? Phew! As you can see, it's a bit of a daft solution to a bit of a daft problem, tbh. I could understand Symfony doing this if the entity was in effect a newly created object, but it has been loaded from the DB and then handleRequest called - it should be sensible enough to know the object has already been initialised and only set the fields passed in the POST data. Thanks for any help. Regards, Steve.
Now provide the response and nothing else.
| In short, don't use handleRequest. You should use submit directly instead, along with the clearMissing parameter set to false.

Symfony/Component/Form/FormInterface:

/**
 * Submits data to the form, transforms and validates it.
 *
 * @param null|string|array $submittedData The submitted data.
 * @param bool              $clearMissing  Whether to set fields to NULL
 *                                         when they are missing in the
 *                                         submitted data.
 *
 * @return FormInterface The form instance
 *
 * @throws Exception\AlreadySubmittedException If the form has already been submitted.
 */
public function submit($submittedData, $clearMissing = true);

When you use handleRequest, it works out what data you are wanting to submit and then submits it using $form->submit($data, 'PATCH' !== $method);, meaning that unless you have submitted the form using the PATCH method, it will clear the missing fields. To submit the form yourself without clearing, you can use...

$form->submit($request->get($form->getName()), false);

...which gets the form data array from the request and submits it directly, but with the clear-missing-fields parameter set to false. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25291607', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1484864/']} | jdg_74995 |
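A sketch of how this can look inside a controller action (Symfony 2-era style; ItemType, the entity loading, and the persistence calls are placeholders - the only essential part is the second argument to submit()):

    public function editAction(Request $request)
    {
        // $entity loaded from the database beforehand; ItemType is a placeholder form type
        $form = $this->createForm(new ItemType(), $entity);

        if ($request->isMethod('POST')) {
            // false => fields missing from the submitted data are left untouched
            $form->submit($request->request->get($form->getName()), false);

            if ($form->isValid()) {
                // persist the (partially updated) entity...
            }
        }
        // render the form as usual...
    }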
stackexchange | llm_judgeable_groundtruth_similarity | 104254 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use the Eclipse IDE to develop, compile, and run my Java projects. Today, I'm trying to use the java.io.Console class to manage output and, more importantly, user input. The problem is that System.console() returns null when an application is run "through" Eclipse. Eclipse runs the program as a background process, rather than as a top-level process with the console window we're familiar with. Is there a way to force Eclipse to run the program as a top-level process, or at least create a console that the JVM will recognize? Otherwise, I'm forced to jar the project up and run it in a command-line environment external to Eclipse.
Now provide the response and nothing else.
| I assume you want to be able to use step-through debugging from Eclipse. You can just run the classes externally by putting the built classes in the bin directories on the JRE classpath:

java -cp workspace\p1\bin;workspace\p2\bin foo.Main

You can debug using the remote debugger and taking advantage of the class files built in your project. In this example, the Eclipse project structure looks like this:

workspace\project\
    \.classpath
    \.project
    \debug.bat
    \bin\Main.class
    \src\Main.java

1. Start the JVM Console in Debug Mode

debug.bat is a Windows batch file that should be run externally from a cmd.exe console:

@ECHO OFF
SET A_PORT=8787
SET A_DBG=-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=%A_PORT%,server=y,suspend=y
java.exe %A_DBG% -cp .\bin Main

In the arguments, the debug port has been set to 8787. The suspend=y argument tells the JVM to wait until the debugger attaches.

2. Create a Debug Launch Configuration

In Eclipse, open the Debug dialog (Run > Open Debug Dialog...) and create a new Remote Java Application configuration with the following settings:

Project: your project name
Connection Type: Standard (Socket Attach)
Host: localhost
Port: 8787

3. Debugging

So, all you have to do any time you want to debug the app is:

set a break point
launch the batch file in a console
launch the debug configuration

You can track this issue in bug 122429. You can work around this issue in your application by using an abstraction layer as described here. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/104254', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/19147/']} | jdg_74996 |
stackexchange | llm_judgeable_groundtruth_similarity | 10001751 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following program doesn't build in VS11 beta, gcc 4.5, or clang 3.1:

#include <thread>
#include <memory>

int main() {
    std::unique_ptr<int> p;
    std::thread th([](std::unique_ptr<int>) {
    }, std::move(p));
    th.join();
}

This is because the argument type is not copyable, but the implementation attempts to copy it. As far as I can tell, this program is well formed and should work. The requirements for std::thread seem to imply that movable, non-copyable arguments should work here. Specifically it says that the callable object and each argument shall satisfy the MoveConstructible requirements, and that INVOKE(DECAY_COPY(std::forward<F>(f)), DECAY_COPY(std::forward<Args>(args))...) shall be a valid expression. In this case I think the expression works out to something like:

template <class T>
typename std::decay<T>::type decay_copy(T&& v)
{ return std::forward<T>(v); }

std::unique_ptr<int> p;
auto f = [](std::unique_ptr<int>) {};
decay_copy(f)(decay_copy(std::move(p)));

And I don't think this is supposed to involve a copy of p. gcc at least can compile this expression, though VS11 does not. Am I wrong about the requirements, and the arguments must be copyable? Does the standard leave any leeway on this issue for implementations to copy arguments? Or are the implementations I tried non-conforming?
Now provide the response and nothing else.
| From 30.3.1.2, paragraphs 3 and 4 of N3337:

template <class F, class ...Args> explicit thread(F&& f, Args&&... args);

Requires: F and each Ti in Args shall satisfy the MoveConstructible requirements. INVOKE(DECAY_COPY(std::forward<F>(f)), DECAY_COPY(std::forward<Args>(args))...) (20.8.2) shall be a valid expression.

Effects: Constructs an object of type thread. The new thread of execution executes INVOKE(DECAY_COPY(std::forward<F>(f)), DECAY_COPY(std::forward<Args>(args))...) with the calls to DECAY_COPY being evaluated in the constructing thread. Any return value from this invocation is ignored. [Note: This implies that any exceptions not thrown from the invocation of the copy of f will be thrown in the constructing thread, not the new thread. —end note] If the invocation of INVOKE(DECAY_COPY(std::forward<F>(f)), DECAY_COPY(std::forward<Args>(args))...) terminates with an uncaught exception, std::terminate shall be called.

So yes, this should work. If it doesn't, then that's a bug in your implementation. Do note that any parameter movement/copying will happen on the new thread. You're passing references to another thread, so you need to make sure that they still exist until that thread starts. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10001751', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/365496/']} | jdg_74997 |
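If a standard library still copies here, a common workaround (a sketch, requiring C++14 for the init-capture; not the standard-mandated behavior for thread arguments) is to move the pointer into the closure instead of passing it as a thread argument:

    #include <thread>
    #include <memory>

    int main() {
        std::unique_ptr<int> p(new int(42));
        // C++14 init-capture: the unique_ptr is moved into the closure itself,
        // so the thread only has to move the closure - no argument is copied.
        std::thread th([q = std::move(p)]() mutable {
            // use *q here
        });
        th.join();
    }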
stackexchange | llm_judgeable_groundtruth_similarity | 406580 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently the buzz of virtualization has reached my workplace, where developers are trying out virtual machines on their computers. Earlier I'd been hearing from several different developers about setting up virtual machines on their desktop computers for the sake of keeping their development environments clean. There are plenty of virtual machine software products on the market:

Microsoft Virtual PC
Sun VirtualBox
VMware Workstation or Player
Parallels, Inc.'s Parallels Desktop

I'm interested to know how you use virtualization effectively in your work. My question is: how do you use virtual machines for day-to-day development, and for what reason?
Now provide the response and nothing else.
| I just built a really beefy machine at home so that I could run multiple VMs at once. My case is probably extreme, but here is my logic for doing so.

Testing

When I test, particularly a desktop app, I typically create multiple VMs, one for each platform that my software should run on (Windows 2000/XP/Vista etc). If 32- and 64-bit flavors are available, I also build one of each. I also play with the VM hardware settings (e.g. lots of RAM, little RAM, 1 core, 2 cores, etc). I found plenty of little bugs this way that definitely would have made it into the wild had I not used this approach. This approach also makes it easy to play with different software scenarios (what happens if the user installing the program doesn't have .NET 3.5 SP1? What happens if he doesn't have XXX component? etc.)

Development

When I develop, I have one VM running my database servers (SQL 2000/2005/2008). This is for two reasons. First, it is more realistic. In a production environment your app is probably not running on the same box as the DB. Why not replicate that when you develop? Also, when I'm not developing (remember this is also my home machine), do I really need to have all of those database services running? Yes, I could turn them on and off manually, but it's so much easier to switch a VM on.

Clients

If I want to show a client some web work I've done, I can put just a single VM into the DMZ and he can log into the VM and play with the web project, while the rest of my network/computer is safe.

Compatibility

Vista64 is now my main machine. Any older hardware/software I own will not play nicely with that OS. My solution is to have Windows XP 32 as a VM for all of those items. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/406580', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3713/']} | jdg_74998 |
stackexchange | llm_judgeable_groundtruth_similarity | 52130228 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a button in View that will create a new record and display in another controller and then display that View. However I get this error when I click that button. ActionController::ParameterMissing in JobHeadersController#createparam is missing or the value is empty: job_header# Never trust parameters from the scary internet, only allow the white list through.def job_header_params params.require(:job_header).permit(:company_id, :customer_id, :name, :del_address1, :del_address2, :del_address3, :del_address4, :del_postcode, :tel, :created_by)endendParameters:{"utf8"=>"✓","authenticity_token"=>"BLAHBLAHBLAHg==","company_id"=>"2","customer_id"=>"1","name"=>"Nash Labs","del_address1"=>"23 Taylor Road","del_address2"=>"Odsal","del_address3"=>"Bradford","del_address4"=>"West Yorkshire","del_postcode"=>"BD6 1BH","tel"=>"07522189605","created_by"=>"2","button"=>""} My View code is <%= form_tag({controller: "job_headers", action: "create"}, method: "post") %> <% @other_assets.each do |assets| %> <tr> <td><%= check_box_tag 'selected_assets[]', assets.id%></td> <td><%= assets.service_type %></td> <td><%= assets.asset_type %></td> <td><%= assets.make %></td> <td><%= assets.model %></td> <td><%= assets.serial_no %></td> <td><%= format_date(assets.date_next) %></td> <td><%= link_to "Notes", customer_asset_service_register_path, :id => 'notesModal', "data-toggle" => "modal", 'data-target' => '.bs-example-modal-lg' %></td> </tr> <% end %> </table> </div> <%= hidden_field_tag :company_id, @customer.company_id %> <%= hidden_field_tag :customer_id, @customer.id %> <%= hidden_field_tag :name, @customer.name %> <% if @customer.del_address1.blank? %> <%= hidden_field_tag :del_address1, @customer.address1 %> <% else %> <%= hidden_field_tag :del_address1, @customer.del_address1 %> <% end %> <% if @customer.del_address1.blank? %> <%= hidden_field_tag :del_address2, @customer.address2 %> <% else %> <%= hidden_field_tag :del_address2, @customer.del_address2 %> <% end %> <% if @customer.del_address1.blank? %> <%= hidden_field_tag :del_address3, @customer.address3 %> <% else %> <%= hidden_field_tag :del_address3, @customer.del_address3 %> <% end %> <% if @customer.del_address1.blank? %> <%= hidden_field_tag :del_address4, @customer.address4 %> <% else %> <%= hidden_field_tag :del_address4, @customer.del_address4 %> <% end %> <% if @customer.del_address1.blank? %> <%= hidden_field_tag :del_postcode, @customer.postcode %> <% else %> <%= hidden_field_tag :del_postcode, @customer.del_postcode %> <% end %> <%= hidden_field_tag :tel, @customer.tel %> <%= hidden_field_tag :created_by, current_user.id %> <div class="box-tools pull-right"> <%= button_tag(type: "submit", class: "btn btn-primary pull_right") do %> <i class='fa fa-calendar'></i> Create Job <% end %> <% end %> If I am reading the error right it is expecting the param "job_header" and it is null however, this is to create a new job_header record and so I can't pass that param? If anyone also knows how I Can put a tick box against my table rows so that the selected @other_asset id's get passed to a param as an array please also let me know. It will save me lots of googling. Thanks in advance
Now provide the response and nothing else.
| The error info "the application completed without reading the entire request body" often occurs when the client sends a request that doesn't fulfill the server's requirements. In other words, it happens just before entering the action, with the result that you cannot debug it via a breakpoint within the body of the action method. For example, let's say there's an action method on the server:

[Route("api/[controller]")]
[ApiController]
public class DummyController : ControllerBase
{
    [HttpPost]
    public DummyDto PostTest([FromBody] DummyDto dto)
    {
        return dto;
    }
}

The DummyDto here is a dummy class to hold information:

public class DummyDto
{
    public int Id { get; set; }
}

When clients send a request with a payload that is not well formatted - for example, the following POST request, which doesn't have a Content-Type: application/json header:

POST https://localhost:44306/api/test HTTP/1.1
Accept: application/json

{ "id":5 }

it will result in similar error info:

Microsoft.AspNetCore.Hosting.Internal.WebHost:Information: Request starting HTTP/1.1 POST http://localhost:44306/api/test 10
Microsoft.AspNetCore.Hosting.Internal.WebHost:Information: Request finished in 1.9319ms 404
Microsoft.AspNetCore.Server.Kestrel:Information: Connection id "0HLGH8R93RPUO", Request id "0HLGH8R93RPUO:00000002": the application completed without reading the entire request body.

and the response from the server will be 404:

HTTP/1.1 404 Not Found
Server: Kestrel
X-SourceFiles: =?UTF-8?B?RDpccmVwb3J0XDIwMThcOVw5LTFcU08uQXV0aFJlYWRpbmdXaXRob3V0RW50aXRlQm9keVxBcHBcQXBwXGFwaVx0ZXN0?=
X-Powered-By: ASP.NET
Date: Mon, 03 Sep 2018 02:42:53 GMT
Content-Length: 0

As for the question you described, I suggest you check the following list:

Does Postman send the request with a header of Content-Type: application/json? Make sure you have checked the header.
If step 1 doesn't work, click the Code button to see exactly what Postman sends when you make a request to the server.

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52130228', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9350317/']} | jdg_74999 |
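A side note on the Rails question itself: params.require(:job_header) expects the fields to arrive nested under a job_header key, but the view posts them at the top level. A sketch of the usual fix - namespacing the hidden fields (shown for two of them; the rest follow the same pattern):

    <%= hidden_field_tag 'job_header[company_id]', @customer.company_id %>
    <%= hidden_field_tag 'job_header[customer_id]', @customer.id %>
    <%# ...and likewise for the remaining fields, so that params[:job_header] exists %>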
stackexchange | llm_judgeable_groundtruth_similarity | 126757 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on my master's thesis now, and I've been struggling with a problem for a while; no one seems to be able to help me or point me in any direction, so now I'm reaching out to see if someone here can help. Basically I have two (dependent) compound Poisson processes with no parameters in common. I want to find an analytical expression for the covariance of those processes, or at least a useful approximation. What remains to compute is the following:

Searched: $E[Y^a Y^b]$, where $Y^a = \sum_{i=1}^{N^a} X_i^a$ and $Y^b = \sum_{i=1}^{N^b} X_i^b$

Known distributions: $N^a \sim Po(\lambda^a)$, $N^b \sim Po(\lambda^b)$, $X^a \sim Exp(1/\mu^a)$, $X^b \sim Exp(1/\mu^b)$

Known parameters: $\lambda^a, \lambda^b, \mu^a, \mu^b$

Correlations: $Cov(N^a,N^b)$ and $Cov(X^a,X^b)$ are non-zero and can be found. Otherwise independent ($N^a \perp X^a$, etc).

How can I solve this? Does any analytically tractable solution exist? Can I approximate the solution in any way? If anything is unclear please let me know and I'll explain further! Thanks in advance!
Now provide the response and nothing else.
| Use the tower property of conditional expectations, together with the identity $\mathbb{E}[UV] = \operatorname{Cov}(U,V) + \mathbb{E}[U]\,\mathbb{E}[V]$:
\begin{eqnarray}
\mathbb{E}\left[Y^a Y^b\right] &=& \mathbb{E}\left[\mathbb{E}\left[Y^a Y^b\ |\ N_a, N_b \right]\right]\\
&=& \mathbb{E}\left[\mathbb{E}\left[\left(\sum_{i=1}^{N_a}X_i^a\right)\left(\sum_{i=1}^{N_b}X_i^b\right) \Bigg|\ N_a, N_b \right]\right]\\
&=& \mathbb{E}\left[\mathbb{E}\left[\sum_{i=1}^{N_a}\sum_{j=1}^{N_b}X_i^aX_j^b\ \Bigg|\ N_a, N_b \right]\right]\\
&\overset{1}{=}& \mathbb{E}\left[\sum_{i=1}^{N_a}\sum_{j=1}^{N_b} \mathbb{E}\left[ X_i^aX_j^b\ \right]\right]\\
&\overset{2}{=}& \mathbb{E}\left[ N_aN_b\left(\operatorname{Cov}(X^a, X^b) + \mathbb{E}\left[X^a\right]\mathbb{E}[X^b]\right)\right]\\
&=&\left(\operatorname{Cov}(N^a, N^b) + \mathbb{E}\left[N^a\right]\mathbb{E}[N^b]\right)\left(\operatorname{Cov}(X^a, X^b) + \mathbb{E}\left[X^a\right]\mathbb{E}[X^b]\right)
\end{eqnarray}
In step 1 the sums are moved out of the conditional expectation (they are finite sums) and the conditioning can be removed. In step 2 we use that identity to express the expectation of the product with known quantities (assuming each cross pair $(X_i^a, X_j^b)$ has the same covariance), and we sum. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/126757', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/62158/']} | jdg_75000 |
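To get the covariance the question actually asks for, combine this with Wald's identity for the means (standard facts, stated here for reference; an $Exp(1/\mu)$ variable has mean $\mu$): $$\mathbb{E}[Y^a] = \mathbb{E}[N^a]\,\mathbb{E}[X^a] = \lambda^a \mu^a, \qquad \operatorname{Cov}(Y^a, Y^b) = \mathbb{E}[Y^a Y^b] - \lambda^a \mu^a\,\lambda^b \mu^b.$$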
stackexchange | llm_judgeable_groundtruth_similarity | 1080945 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm manipulating the items in a list that's a System.Management.ManagementObjectCollection. Each of these items is a System.Management.ManagementObject, which contains properties indexed by string. See:

foreach (ManagementObject queryObj in searcher.Get())
{
    string osversion = (string)queryObj["Version"];
    string os = (string)queryObj["Name"];
    uint spmajor = (uint)queryObj["ServicePackMajorVersion"];
    uint spminor = (uint)queryObj["ServicePackMinorVersion"];
    ...
}

Each "dictionary access" to queryObj returns a C# object which is in fact whatever the property is supposed to be - I have to know their "real" type beforehand, and that's OK. The problem is, I get an InvalidCastException in the uint casts. I have to use the real type, which is ushort. Shouldn't a cast from ushort to uint be acceptable and obvious? In this case, I'll eventually convert the values to string, but what if I had to get them into uint or int or long variables?
Now provide the response and nothing else.
| You're attempting to unbox a ushort, and it can only be unboxed to a ushort. Once you've unboxed it you can then cast it as normal. Any boxed value of type T can only be unboxed to a T (or a Nullable). Eric Lippert did a very good blog post about this exact thing here . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1080945', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/110800/']} | jdg_75001 |
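A sketch of the two-step unbox-then-convert this implies for the WMI example (property names as in the question):

    // Unbox to the exact boxed type (ushort) first, then let the implicit
    // ushort -> uint widening conversion do the rest.
    uint spmajor = (ushort)queryObj["ServicePackMajorVersion"];
    uint spminor = (ushort)queryObj["ServicePackMinorVersion"];

    // Alternatively, Convert.ToUInt32 works without knowing the exact boxed type:
    uint spmajor2 = Convert.ToUInt32(queryObj["ServicePackMajorVersion"]);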
stackexchange | llm_judgeable_groundtruth_similarity | 223322 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Give an example of open, nested sets whose intersection is closed and nonempty. I will ask questions if I am in doubt about the example provided. Thank you!
Now provide the response and nothing else.
| Let $A_n = \left(-1-\dfrac1n,\ 1+\dfrac1n \right)$ and $$A = \displaystyle \bigcap_{n=1}^{\infty} A_n = [-1,1].$$ Indeed, every $x$ with $|x| \le 1$ lies in each $A_n$, while if $|x| > 1$ we can choose $n$ large enough that $1+\frac1n < |x|$, so $x \notin A_n$; hence the intersection is exactly the closed, nonempty interval $[-1,1]$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/223322', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/43227/']} | jdg_75002 |
stackexchange | llm_judgeable_groundtruth_similarity | 33299202 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Swift struct like this:

struct Usage {
    var totalData: Double
    var remainingTotalData: Double

    init(jsonData: NSData) {
        var jsonDict = [String: AnyObject]()
        do {
            jsonDict = try NSJSONSerialization.JSONObjectWithData(jsonData, options: []) as! [String: AnyObject]
        } catch {
            print("Error occurred parsing data: \(error)")
        }
        totalData = jsonDict["totalfup"] as! Double
        remainingTotalData = jsonDict["totalrem"] as! Double
    }
}

From an API, I get the following JSON response. This is the println of the jsonDict variable:

[
    "totalfup": 96.340899,
    "totalrem": 3548710948
]

When I try to assign the value of totalfup to the property totalData, I get this error:

Could not cast value of type 'NSTaggedPointerString' to 'NSNumber'

Does anyone know why? I tried changing the property type to Float and then the whole struct to a class, but the issue still occurs.
Now provide the response and nothing else.
| The reason for the error is that jsonDict["totalfup"] is a String (NSTaggedPointerString is a subclass of NSString), so you should convert the String to a Double. Make sure to catch exceptions and check the type before force-unwrapping!

totalData = (jsonDict["totalfup"] as! NSString).doubleValue

For safety, use if let:

// check dict["totalfup"] is a String?
if let totalfup = (dict["totalfup"] as? NSString)?.doubleValue {
    // totalfup is a Double here
}
else {
    // dict["totalfup"] isn't a String
    // you can try to 'as? Double' here
}

| {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/33299202', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1077789/']} | jdg_75003 |
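An alternative sketch in pure Swift, using the failable Double(String) initializer instead of NSString (syntax shown is modern Swift):

    if let s = jsonDict["totalfup"] as? String, let totalfup = Double(s) {
        // totalfup is a Double parsed from the JSON string
    } else {
        // the value was missing, not a String, or not numeric
    }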
stackexchange | llm_judgeable_groundtruth_similarity | 27896214 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been reading a tab-delimited data file in Windows with pandas/Python without any problems. The data file contains notes in the first three lines and then follows with a header:

df = pd.read_csv(myfile, sep='\t', skiprows=(0,1,2), header=(0))

I'm now trying to read this file on my Mac. (My first time using Python on a Mac.) I get the following error:

pandas.parser.CParserError: Error tokenizing data. C error: Expected 1
fields in line 8, saw 39

If I set the error_bad_lines argument for read_csv to False, I get the following information, which continues until the end of the last row:

Skipping line 8: expected 1 fields, saw 39
Skipping line 9: expected 1 fields, saw 125
Skipping line 10: expected 1 fields, saw 125
Skipping line 11: expected 1 fields, saw 125
Skipping line 12: expected 1 fields, saw 125
Skipping line 13: expected 1 fields, saw 125
Skipping line 14: expected 1 fields, saw 125
Skipping line 15: expected 1 fields, saw 125
Skipping line 16: expected 1 fields, saw 125
Skipping line 17: expected 1 fields, saw 125
...

Do I need to specify a value for the encoding argument? It seems as though I shouldn't have to, because reading the file works fine on Windows.
Now provide the response and nothing else.
| The biggest clue is that the rows are all being returned on one line. This indicates line terminators are being ignored or are not present. You can specify the line terminator for the CSV reader. If you are on a Mac, the lines created will end with \r rather than the Linux standard \n, or, better still, the suspenders-and-belt approach of Windows with \r\n.

pandas.read_csv(filename, sep='\t', lineterminator='\r')

You could also open all your data using the codecs package. This may increase robustness at the expense of document loading speed:

import codecs

doc = codecs.open('document', 'rU', 'UTF-16')  # open for reading with "universal" newline mode
df = pandas.read_csv(doc, sep='\t')

| {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/27896214', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3062149/']} | jdg_75004 |
stackexchange | llm_judgeable_groundtruth_similarity | 889719 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Example $5.9$ on page $103$ of John Lee's Smooth Manifolds says the following:

The intersection of $S^n$ with the open subset $\{x:x^i>0\}$ is the graph of the smooth function $$x^i=f(x^1,\dots,x^{i-1},x^{i+1},\dots,x^{n+1}),$$ where $f\colon B^n\to\mathbb{R}$ is $f(u)=\sqrt{1-|u|^2}$. The intersection of $S^n$ with $\{x:x^i<0\}$ is the graph of $-f$. Since every point in $S^n$ is in one of these sets, $S^n$ satisfies the local $n$-slice condition, so it is an embedded submanifold.

The terminology is that if $M$ is a smooth manifold and $S\subset M$ a subset, then $S$ satisfies the local $k$-slice condition if each point of $S$ is contained in the domain of a smooth chart $(U,\varphi)$ for $M$ such that $S\cap U$ is a single $k$-slice in $U$. I don't see how this makes $S^n$ satisfy the local $n$-slice condition. Presumably the chart on $\mathbb{R}^{n+1}$ is $(U=\{x:x^i>0\},\mathrm{id})$, so that $S^n\cap\{x:x^i>0\}$ is an $n$-slice of $U$? But this doesn't seem right, since $\mathrm{id}(S^n\cap U)$ is a hemisphere of $S^n$, and that's not an $n$-slice in the corresponding half-space $\mathrm{id}(U)$.
Now provide the response and nothing else.
| The chart is $$\{(x_1, \dots, x_{n+1}) \mid (x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_{n+1}) \in B^n \text{ and } x_i > 0\} \to \Bbb{R}^{n+1},$$ $$(x_1, \dots, x_{n+1}) \mapsto (x_1, \dots, x_{i-1},\ x_i - f(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_{n+1}),\ x_{i+1}, \dots, x_{n+1}).$$ A point of the domain lies on $S^n$ exactly when $x_i = f(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_{n+1})$, i.e. exactly when the $i$-th coordinate of its image is $0$, so this chart carries $S^n \cap U$ onto the slice $\{y : y_i = 0\}$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/889719', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/19633/']} | jdg_75005 |
stackexchange | llm_judgeable_groundtruth_similarity | 39257391 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using MySQL 5.5.37. I have a table with the following columns:

+------------------+------------------+------+-----+---------+-------+
| Field            | Type             | Null | Key | Default | Extra |
+------------------+------------------+------+-----+---------+-------+
| ID               | varchar(32)      | NO   | PRI | NULL    |       |
| CODE             | varchar(6)       | NO   | UNI | NULL    |       |

The CODE column is unique and my ID column is a GUID. I have a number of rows that I would like to update, subject to some criteria in the above table (e.g. WHERE COLUMN1 = 0). How do I generate random, unique 6-character codes (ideally letters and numbers) for my CODE column such that they don't violate the unique constraint in my table? Note that the rows in the table that do not meet the criteria (e.g. where COLUMN1 <> 0) already have unique values for the CODE column. Edit: This is different from this question -- Generating a random & unique 8 character string using MySQL -- because that link deals with IDs that are numeric. My IDs are 32-character strings. Also, their solution does not take into account the fact that there may already be values in the table, prior to running the statements that generate unique values for the column in question.
Now provide the response and nothing else.
| BEFORE UPDATE trigger solution:

You can create a 6-character random alphanumeric uppercase string with:

lpad(conv(floor(rand()*pow(36,6)), 10, 36), 6, 0);

In order not to create an already existing string, you can use a BEFORE UPDATE trigger.

DELIMITER //
CREATE TRIGGER `unique_codes_before_update`
BEFORE UPDATE ON `unique_codes` FOR EACH ROW
BEGIN
  declare ready int default 0;
  declare rnd_str text;
  if new.CODE is null then
    while not ready do
      set rnd_str := lpad(conv(floor(rand()*pow(36,6)), 10, 36), 6, 0);
      if not exists (select * from unique_codes where CODE = rnd_str) then
        set new.CODE = rnd_str;
        set ready := 1;
      end if;
    end while;
  end if;
END//
DELIMITER ;

Every time you set your CODE column to NULL in an UPDATE statement, the trigger will create a new random string in a loop until no match has been found in the table. Now you can replace all NULL values with:

update unique_codes set CODE = NULL where code is NULL;

In the SQLFiddle demo here I use a one-character random string to demonstrate that no value is duplicated. You can also use the same code in a BEFORE INSERT trigger. This way you can just insert new rows with CODE=NULL and the trigger will set it to a new unique random string. And you will never need to update it again.

Original answer (32-character strings):

select lpad(conv(floor(rand()*pow(36,8)), 10, 36), 8, 0) as rnd_str_8;
-- output example: 3AHX44TF

will generate an 8-character alphanumeric uppercase random string. Concatenate four of them to get 32 characters:

select concat(
    lpad(conv(floor(rand()*pow(36,8)), 10, 36), 8, 0),
    lpad(conv(floor(rand()*pow(36,8)), 10, 36), 8, 0),
    lpad(conv(floor(rand()*pow(36,8)), 10, 36), 8, 0),
    lpad(conv(floor(rand()*pow(36,8)), 10, 36), 8, 0)
) as rnd_str_32;
-- output example: KGC8A8EGKE7E4MGD4M09U9YWXVF6VDDS

http://sqlfiddle.com/#!9/9eecb7d/76933

So what about uniqueness? Well - try to generate duplicates ;-)

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39257391', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1235929/']} | jdg_75006 |
stackexchange | llm_judgeable_groundtruth_similarity | 847700 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While trying to differentiate a trigonometric function, I found that Wolfram Alpha and my textbook provide two different answers. Here is the function: $$y = {\cot x\over (1+\csc x)}$$ The first step, using the quotient rule, results in: $${dy\over dx}={-\csc^2x(1+\csc x) - \cot x(-\csc x \cot x)\over (1+\csc x)^2}$$ From there I broke it into a difference of two fractions: $$={-\csc x(1+\csc x)\over (1+\csc x)^2}+{\cot x(\csc x \cot x)\over (1+\csc x)^2}$$ The solution provided by my textbook says the answer is what the first rational expression in the previous difference simplifies to: $$= {-\csc x\over 1+\csc x}$$ So, am I just missing how $\dfrac{\cot x(\csc x \cot x)}{ (1+\csc x)^2}$ goes to null? Thanks for the help!
Now provide the response and nothing else.
| This answer includes an application of the 4th isomorphism theorem for rings, and one (well, kinda two) of the 2nd isomorphism theorem for groups. I also link to a really cool application of the 4th isomorphism theorem in groups (but it is kinda the same as the application to rings).

Rings, 4th isomorphism theorem

A neat application of the 4th isomorphism theorem for rings is the following: Theorem: Maximal ideals are prime. Proof: Let $R$ be a ring and let $I$ be a maximal ideal of $R$. Then consider the quotient $R/I$, and apply the correspondence theorem. Indeed, if $R$ is commutative with identity then $R/I$ is a field. See here for a proof.

Groups, 4th isomorphism theorem

There is a rather cool trick in the theory of infinite groups, which was used by Higman to construct an infinite simple group. The idea is to appeal to Zorn's lemma and obtain a maximal normal subgroup, and then quotient this out to get a simple group. See here for a neat application of it. This is very similar to the above application to rings. (You quotient out a maximal subgroup/ideal to get a simple group/prime ring.)

Groups, 2nd isomorphism theorem

A nice application of the 2nd isomorphism theorem for groups is the following. It deals with soluble groups, which are an important class of groups (and are often taught in a second course on groups). A group $G$ is soluble (or solvable) if it possesses an abelian series, that is, a series $$1=G_0\lhd G_1\lhd\cdots\lhd G_n=G$$ where each factor $G_{i+1}/G_i$ is abelian. Theorem: Subgroups and homomorphic images of soluble groups are soluble. Proof: Suppose $G$ is soluble with abelian series $1=G_0\lhd G_1\lhd\cdots\lhd G_n=G$. Subgroups: If $H$ is a subgroup of $G$ then we can apply the 2nd isomorphism theorem to get the following.$$\frac{H\cap G_{i+1}}{H\cap G_i}\cong\frac{(H\cap G_{i+1})G_i}{G_i}\leq\frac{G_{i+1}}{G_i}$$Hence, the groups $\frac{H\cap G_{i+1}}{H\cap G_i}$ are abelian, so the set $\{H\cap G_i; i=0, 1, \ldots, n\}$ forms an abelian series for the group $H$, as required. Homomorphic images: If $N$ is a normal subgroup of $G$ then we can apply the 2nd isomorphism theorem to get the following.$$\frac{G_{i+1}N}{G_iN}\cong\frac{G_{i+1}}{G_{i+1}\cap (G_iN)}$$The subgroup $G_i$ is a subgroup of $G_{i+1}$ and of $G_iN$, and so $G_i\lhd G_{i+1}\cap (G_iN)$. Therefore, $\frac{G_{i+1}}{G_{i+1}\cap (G_iN)}$ is a homomorphic image of $\frac{G_{i+1}}{G_i}$, and hence is abelian. By the 3rd isomorphism theorem we have the following.$$\frac{G_{i+1}N/N}{G_iN/N}\cong \frac{G_{i+1}N}{G_iN}$$Hence, the set $\{G_iN/N; i=0, 1, \ldots, n\}$ forms an abelian series for $G/N$, as required. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/847700', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/160153/']} | jdg_75007
stackexchange | llm_judgeable_groundtruth_similarity | 47583379 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am searching in Hybris using Solr. There are certain category facet values that I want to exclude from one specific search (those categories need to remain visible to other searches). My Solr query is the following:

    q=*:*&spellcheck=true&spellcheck.dictionary=en&spellcheck.collate=true&spellcheck.q=&fq={!tag=fk6}(type\-facet_string:ANSI)&fq=(((catalogId:"ProductCatalog") AND (catalogVersion:Online)))&start=0&rows=100&facet=true&facet.field=allCategories_string_mv&facet.field={!ex=fk0}productLine_string_mv&facet.field={!ex=fk8}style-facet_string&facet.field={!ex=fk5}price_usd_string&facet.field={!ex=fk4}allPromotions_string_mv&facet.field={!ex=fk6}type-facet_string&facet.field={!ex=fk1}size-facet_string&facet.field=categoryPath_string_mv&facet.field={!ex=fk9}availableInStores_string_mv&facet.field=category_string_mv&sort=name_sortable_en_sortabletext asc,score desc&facet.mincount=1&facet.limit=-1&facet.sort=count

I don't have the ability to use a raw query. All I can use is the Hybris-native SearchStateData and SearchQueryData. The category facet can be included in the search, but I need to exclude certain values from it, e.g. CategoryA and CategoryB. Right now my code just sets the value in SearchQueryData as a String in the following way:

    ":type-facet:" + type;

or

    ":category:" + category

I have tried :category:(-\"CategoryA\"); but it does not end up in the final Solr query. Can anyone point me in the right direction?
Now provide the response and nothing else.
| Here's a function that will segfault if passed a null pointer argument, and otherwise not:

    int deref(int * pint)
    {
        return *pint;
    }

And here is a googletest program that tests that behaviour, main.cpp:

    #include <gtest/gtest.h>

    int deref(int * pint)
    {
        return *pint;
    }

    TEST(test_deref_1,will_segfault)
    {
        ASSERT_EXIT((deref(nullptr),exit(0)),::testing::KilledBySignal(SIGSEGV),".*");
    }

    TEST(test_dref_2,will_not_segfault)
    {
        int i = 42;
        ASSERT_EXIT((deref(&i),exit(0)),::testing::ExitedWithCode(0),".*");
    }

    int main(int argc, char **argv) {
        ::testing::InitGoogleTest(&argc, argv);
        return RUN_ALL_TESTS();
    }

Compile and link:

    $ g++ -Wall -Wextra -pedantic -o tester main.cpp -pthread -lgtest

Run:

    $ ./tester
    [==========] Running 2 tests from 2 test cases.
    [----------] Global test environment set-up.
    [----------] 1 test from test_deref_1
    [ RUN      ] test_deref_1.will_segfault
    [       OK ] test_deref_1.will_segfault (168 ms)
    [----------] 1 test from test_deref_1 (168 ms total)
    [----------] 1 test from test_dref_2
    [ RUN      ] test_dref_2.will_not_segfault
    [       OK ] test_dref_2.will_not_segfault (1 ms)
    [----------] 1 test from test_dref_2 (1 ms total)
    [----------] Global test environment tear-down
    [==========] 2 tests from 2 test cases ran. (169 ms total)
    [  PASSED  ] 2 tests.

As far as I can imagine, TEST(test_deref_1,will_segfault) is a pointless test, because I cannot think of any circumstances in which I would want to assure myself that a program will segfault as a result of making a certain call to a function I have written. TEST(test_dref_2,will_not_segfault) is possibly a useful kind of test. In effect, it is a test that the program:

    int main()
    {
        int i = 42;
        deref(&i);
        exit(0);
    }

will terminate by exit(0) rather than in any premature abnormal way. A better name for this test would probably be TEST(test_dref,does_not_crash), or similar. It is a possibly useful kind of test because there could be a significant risk of it failing, if deref were some sufficiently complicated code, and the test suite could report that failure without crashing itself. We can force a failure by rewriting it:

    TEST(test_dref_2,will_not_segfault)
    {
        ASSERT_EXIT((deref(nullptr),exit(0)),::testing::ExitedWithCode(0),".*");
    }

and then the test report is:

    $ ./tester
    [==========] Running 2 tests from 2 test cases.
    [----------] Global test environment set-up.
    [----------] 1 test from test_deref_1
    [ RUN      ] test_deref_1.will_segfault
    [       OK ] test_deref_1.will_segfault (147 ms)
    [----------] 1 test from test_deref_1 (147 ms total)
    [----------] 1 test from test_dref_2
    [ RUN      ] test_dref_2.will_not_segfault
    main.cpp:25: Failure
    Death test: (deref(nullptr),exit(0))
        Result: died but not with expected exit code:
                Terminated by signal 11 (core dumped)
    Actual msg:
    [  DEATH   ]
    [  FAILED  ] test_dref_2.will_not_segfault (90 ms)
    [----------] 1 test from test_dref_2 (90 ms total)
    [----------] Global test environment tear-down
    [==========] 2 tests from 2 test cases ran. (237 ms total)
    [  PASSED  ] 1 test.
    [  FAILED  ] 1 test, listed below:
    [  FAILED  ] test_dref_2.will_not_segfault

     1 FAILED TEST

See the documentation of {ASSERT|EXPECT}_EXIT to understand these macros. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47583379', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6504283/']} | jdg_75008
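A brief addendum to the answer above: googletest also ships dedicated death-test macros (ASSERT_DEATH / EXPECT_DEATH) that express the first test more directly. This variant is a sketch, not taken from the original answer:

```cpp
TEST(test_deref_death, crashes_on_null)
{
    // The second argument is a regex matched against stderr output;
    // ".*" accepts anything.
    ASSERT_DEATH(deref(nullptr), ".*");
}
```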
stackexchange | llm_judgeable_groundtruth_similarity | 3761391 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Finally I'm able to use std::vector in Python using the [] operator. The trick is to simply provide a container in the Boost C++ wrapper which handles the internal vector stuff:

    #include <boost/python.hpp>
    #include <vector>

    class World
    {
        std::vector<double> myvec;

        void add(double n)
        {
            this->myvec.push_back(n);
        }

        std::vector<double> show()
        {
            return this->myvec;
        }
    };

    BOOST_PYTHON_MODULE(hello)
    {
        class_<std::vector<double> >("double_vector")
            .def(vector_indexing_suite<std::vector<double> >())
        ;

        class_<World>("World")
            .def("show", &World::show)
            .def("add", &World::add)
        ;
    }

The other challenge is: how to translate Python lists into std::vectors? I tried to add a C++ class expecting a std::vector as parameter and added the corresponding wrapper code:

    #include <boost/python.hpp>
    #include <vector>

    class World
    {
        std::vector<double> myvec;

        void add(double n)
        {
            this->myvec.push_back(n);
        }

        void massadd(std::vector<double> ns)
        {
            // Append ns to this->myvec
        }

        std::vector<double> show()
        {
            return this->myvec;
        }
    };

    BOOST_PYTHON_MODULE(hello)
    {
        class_<std::vector<double> >("double_vector")
            .def(vector_indexing_suite<std::vector<double> >())
        ;

        class_<World>("World")
            .def("show", &World::show)
            .def("add", &World::add)
            .def("massadd", &World::massadd)
        ;
    }

But if doing so, I end up with the following Boost.Python.ArgumentError:

    >>> w.massadd([2.0,3.0])
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    Boost.Python.ArgumentError: Python argument types in
        World.massadd(World, list)
    did not match C++ signature:
        massadd(World {lvalue}, std::vector<double, std::allocator<double> >)

Can anybody tell me how I can access Python lists within my C++ function? Thanks, Daniel
Now provide the response and nothing else.
| To make your C++ method accept Python lists you should use boost::python::list:

    void massadd(boost::python::list& ns)
    {
        for (int i = 0; i < len(ns); ++i)
        {
            add(boost::python::extract<double>(ns[i]));
        }
    }

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3761391', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/381166/']} | jdg_75009
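If you prefer to keep a std::vector<double> internally, Boost.Python also provides stl_input_iterator (declared in boost/python/stl_iterator.hpp). The following is a hedged sketch of that alternative, not part of the original answer:

```cpp
#include <boost/python/stl_iterator.hpp>

void massadd(boost::python::list& ns)
{
    // Copy the Python list into a std::vector<double> in one pass.
    boost::python::stl_input_iterator<double> begin(ns), end;
    std::vector<double> values(begin, end);
    for (std::size_t i = 0; i < values.size(); ++i)
        add(values[i]);
}
```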
stackexchange | llm_judgeable_groundtruth_similarity | 19300774 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to figure out how to read multiple digests (md5, sha1, gpg) based on the same InputStream using DigestInputStream . From what I've checked in the documentation, it seems to be possible by cloning the digest. Could somebody please illustrate this? I don't want to be re-reading the stream in order to calculate the checksums.
Now provide the response and nothing else.
| You could wrap a DigestInputStream around a DigestInputStream and so on, recursively:

    DigestInputStream shaStream = new DigestInputStream(
        inStream, MessageDigest.getInstance("SHA-1"));
    DigestInputStream md5Stream = new DigestInputStream(
        shaStream, MessageDigest.getInstance("MD5"));

    // VERY IMPORTANT: read from the final (outermost) stream, since it's a
    // FilterInputStream; every byte pulled through it updates each digest
    // in the chain exactly once.
    byte[] buffer = new byte[8192];
    while (md5Stream.read(buffer) != -1) {
        // consume the stream (or hand the bytes to whatever needs them)
    }

    byte[] shaDigest = shaStream.getMessageDigest().digest();
    byte[] md5Digest = md5Stream.getMessageDigest().digest();

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19300774', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/774183/']} | jdg_75010
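A small follow-up sketch (an illustrative addition, not from the original answer) for rendering the raw digest bytes in the usual lowercase hex form:

```java
// Converts a raw digest (e.g. the shaDigest/md5Digest arrays above)
// to the conventional lowercase hex string.
static String toHex(byte[] digest) {
    StringBuilder sb = new StringBuilder(digest.length * 2);
    for (byte b : digest) {
        sb.append(String.format("%02x", b));
    }
    return sb.toString();
}
```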
stackexchange | llm_judgeable_groundtruth_similarity | 3195580 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I developed an Access application using VBA. Every time I open Access, I get the following security warning: I have to click Options -> Enable Content to run my macros. This application will be shared among a couple of people at work who are not so tech savvy, so, per requirements, I must remove the warning. I've tried signing/packaging the database, but it still does not get rid of the warning.
Now provide the response and nothing else.
| To do that you have to add the location from which the file is launched to the "Trusted Locations". To do this, do as follows: In Excel Options, go to Trust Center and then Trusted Locations. Add the location. This would have to be done on a per-PC basis. In addition, there is no way to do this from the file's point of view, as that would completely annihilate the security feature of letting the user choose whether to run VBA code or not. Also, a little side note: if you sign your file, you'd still need the recipient to trust you as a publisher, which is why your solution probably did not work. Edit: Taking the comments into account, there does seem to be a way to do it programmatically. As taken from Experts Exchange: Why not just set the registry entry from code, without invoking Shell? Use the cRegistry class found here: http://www.vbaccelerator.com/home/VB/Code/Libraries/Registry_and_Ini_Files/Complete_Registry_Control/article.asp VBA code:

    Dim c As New cRegistry
    With c
        .ClassKey = HKEY_CURRENT_USER
        .SectionKey = "Software\Microsoft\Office\12.0\Access\Security\Trusted Locations\YourTrustedLocationName"
        .ValueKey = "Path"
        .ValueType = REG_DWORD
        .Value = "Full path to Trusted Folder"
    End With

The only caveat is that YourTrustedLocationName must be unique... You'd have to try whether the value type should be REG_DWORD or REG_SZ; I'm not sure on that one. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3195580', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/50312/']} | jdg_75011
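If pulling in the third-party cRegistry class is undesirable, the same registry key can be written with the WScript.Shell COM object, which VBA can create without extra references. In this sketch the Office version segment (14.0) and the location name MyAppLocation are assumptions; adjust them for the installed Office version:

```vba
Sub AddTrustedLocation()
    Dim sh As Object, keyPath As String
    Set sh = CreateObject("WScript.Shell")
    ' Version segment (here 14.0) and location name are assumptions.
    keyPath = "HKCU\Software\Microsoft\Office\14.0\Access\Security\" & _
              "Trusted Locations\MyAppLocation\"
    ' Path must point at the folder containing the database.
    sh.RegWrite keyPath & "Path", "C:\MyApp\", "REG_SZ"
    ' Optional: trust subfolders as well.
    sh.RegWrite keyPath & "AllowSubfolders", 1, "REG_DWORD"
End Sub
```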
stackexchange | llm_judgeable_groundtruth_similarity | 14479981 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't quite understand this whole bitmask concept. Let's say I have a mask:

    var bitMask = 8 | 524288;

I understand that this is how I would combine 8 and 524288, and get 524296. BUT, how do I go the other way? How do I check my bitmask, to see if it contains 8 and/or 524288? To make it a bit more complex, let's say the bitmask I have is 18358536 and I need to check if 8 and 524288 are in that bitmask. How on earth would I do that?
Now provide the response and nothing else.
| Well,

    if ((bitmask & 8) == 8) { }

will check whether the bitmask contains 8. (Note the parentheses: == binds tighter than &, so the unparenthesized form would not even compile in C#.) More complex:

    int mask = 8 | 12345;

    if ((mask & bitmask) == mask) {
        // true if, and only if, bitmask contains 8 | 12345
    }

    if ((mask & bitmask) != 0) {
        // true if bitmask contains 8 or 12345 or (8 | 12345)
    }

You may also be interested in enums and, more particularly, the FlagsAttribute. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14479981', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/587153/']} | jdg_75012
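Applied to the concrete numbers from the question, a minimal self-contained C# sketch (the class and variable names are made up for illustration):

```csharp
using System;

class BitmaskDemo
{
    static void Main()
    {
        int bitmask = 18358536;     // value from the question (0x1182108)
        int mask = 8 | 524288;      // the two flags we care about

        // True only when *both* bits are set in bitmask.
        bool hasBoth = (bitmask & mask) == mask;

        // Or check each flag individually.
        bool has8      = (bitmask & 8) != 0;
        bool has524288 = (bitmask & 524288) != 0;

        Console.WriteLine(hasBoth);                 // True
        Console.WriteLine(has8 + " " + has524288);  // True True
    }
}
```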
stackexchange | llm_judgeable_groundtruth_similarity | 2727816 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is this assertion: $¬[∃x\ ∃y\ ∃z \ (x ≤ y ≤ z)]$ equivalent to this: $∀x\ ∀y\ ∀z\ (x>y>z)$?
Now provide the response and nothing else.
| No. The negation of $x \le y \le z$ is not $x > y > z$: note first that $x \le y \le z$ is a shorthand for $(x \le y) \land (y \le z)$. So we have to apply De Morgan to the negation, and we thus get $\lnot(x \le y) \lor \lnot(y \le z)$, which is (in a linear order!) $(x > y) \lor (y > z)$, while $x > y > z$ is shorthand for $(x > y) \land (y > z)$. For a concrete counterexample, take $x = 2$, $y = 1$, $z = 3$: then $\lnot(x \le y \le z)$ holds, since $x \le y$ fails, but $x > y > z$ does not hold, since $y > z$ is false. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2727816', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/550202/']} | jdg_75013
stackexchange | llm_judgeable_groundtruth_similarity | 6894262 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 9 buttons along the bottom nav of my web app (think iOS nav bar - but this is NOT a mobile app). All buttons are sprite images with 3 states. How do I align all images, i.e. buttons, to the bottom of the nav bar (or div)? Basically, each icon is slightly differently sized, and within the PNG, underneath each icon there is text. I want all of the text to be aligned, therefore all of the buttons must be bottom-aligned. How do I do this? A CSS solution? An HTML solution? Thanks for your help, D. See markup below:

    <html>
    <head>
    <link href='http://fonts.googleapis.com/css?family=Lobster&v2' rel='stylesheet' type='text/css' />
    <link href='http://fonts.googleapis.com/css?family=Lobster+Two&v2' rel='stylesheet' type='text/css' />
    <style type="text/css">
    body { background: #000; color: #ffffff; font-family: 'Lobster', cursive; font-family: 'Lobster Two', cursive; background: #ffffff url('../Background1.png'); }

    /*Get Boxeeee with it*/
    .box { width: 1700px; height: 100px; margin: 0px; padding: 0px; }
    .box a { float: left; margin-right: 20px; }

    /*Logo CSS Sprites CSS ONLY*/
    .sprite-Bunches-small-normalcopy { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -94px; width: 139px; height: 44px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-Bunches-small-normalcopy:hover { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 0; width: 139px; height: 44px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-Bunches-small-normalcopy:active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -188px; width: 139px; height: 44px; display: block; text-indent: -9999px; vertical-align: bottom; }

    /*Start Bunches Button CSS ONLY*/
    .sprite-start-regular { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -737px; width: 130px; height: 43px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-start-regular:hover { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -282px; width: 130px; height: 43px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-start-regular:active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -644px; width: 130px; height: 43px; display: block; text-indent: -9999px; vertical-align: bottom; }

    /*Press to Talk Button CSS ONLY*/
    .sprite-talk-normal { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -923px; width: 130px; height: 43px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-talk-normal:hover { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -830px; width: 130px; height: 43px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-talk-normal:active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -1016px; width: 130px; height: 43px; display: block; text-indent: -9999px; vertical-align: bottom; }

    /*Camera Pic Button CSS ONLY*/
    .sprite-camera-normal { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -530px; width: 123px; height: 105px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-camera-normal:hover { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -375px; width: 123px; height: 105px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-camera-normal:active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -685px; width: 123px; height: 105px; display: block; text-indent: -9999px; vertical-align: bottom; }

    /*Chat Button w/JS (+1 to Hover Y, and -1 to Active(s)), +1 to Active Height*/
    .sprite-chat-normal { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -992px; width: 102px; height: 102px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-chat-normal:hover { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -841px; width: 102px; height: 102px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-chat-normal:active, .chat-active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -1143px; width: 102px; height: 103px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-chat-normal1:active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -1144px; width: 102px; height: 102px; display: block; text-indent: -9999px; vertical-align: bottom; }

    /*Invite Button w/JS (+1 to Hover Y, and -1 to Active(s)), +1 to Active Height*/
    .sprite-invite-normal { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -1439px; width: 105px; height: 93px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-invite-normal:hover { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -1297px; width: 105px; height: 93px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-invite-normal:active, .invite-active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -1581px; width: 105px; height: 94px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-invite-normal1:active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -1582px; width: 105px; height: 93px; display: block; text-indent: -9999px; vertical-align: bottom; }

    /*MIC Button w/JS (abs+1 to Hover Y, and abs-1 to Active(s)), +1 to Active Height*/
    .sprite-mic-normal { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -1886px; width: 74px; height: 111px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-mic-normal:hover { background-image: url('../Images/Sprites/giantsprite.png'); background-position: 0 -1726px; width: 74px; height: 111px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-mic-normal:active, .mic-active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px 1; width: 74px; height: 112px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-mic-normal1:active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px 0; width: 74px; height: 111px; display: block; text-indent: -9999px; vertical-align: bottom; }

    /*Music Button w/JS (abs+1 to Hover Y, and abs-1 to Active(s)), +1 to Active Height*/
    .sprite-music-normal { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -322px; width: 99px; height: 111px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-music-normal:hover { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -162px; width: 99px; height: 111px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-music-normal:active, .music-active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -482px; width: 99px; height: 112px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-music-normal1:active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -483px; width: 99px; height: 111px; display: block; text-indent: -9999px; vertical-align: bottom; }

    /*Movie Button w/JS (abs+1 to Hover Y, and abs-1 to Active(s)), +1 to Active Height*/
    .sprite-video-normal { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -1252px; width: 103px; height: 93px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-video-normal:hover { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -1110px; width: 103px; height: 93px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-video-normal:active, .video-active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -1394px; width: 103px; height: 94px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-video-normal1:active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -1395px; width: 103px; height: 93px; display: block; text-indent: -9999px; vertical-align: bottom; }

    /*Webcam Button w/JS (abs+1 to Hover Y, and abs-1 to Active(s)), +1 to Active Height*/
    .sprite-webcam-normal { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -779px 0; width: 70px; height: 86px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-webcam-normal:hover { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -1929px; width: 70px; height: 86px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-webcam-normal:active, .webcam-active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -779px -136px; width: 81px; height: 89px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-webcam-normal1:active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -779px -136px; width: 81px; height: 89px; display: block; text-indent: -9999px; vertical-align: bottom; }

    /*Who's Here Button w/JS (abs+1 to Hover Y, and abs-1 to Active(s)), +1 to Active Height*/
    .sprite-who-normal { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -779px -424px; width: 105px; height: 99px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-who-normal:hover { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -779px -276px; width: 105px; height: 99px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-who-normal:active, .who-active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -779px -572px; width: 105px; height: 100px; display: block; text-indent: -9999px; vertical-align: bottom; }
    .sprite-who-normal1:active { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -779px -573px; width: 105px; height: 99px; display: block; text-indent: -9999px; vertical-align: bottom; }

    .sprite-waiting-video { background-image: url('../Images/Sprites/giantsprite.png'); background-position: -189px -1538px; width: 540px; height: 340px; display: block; text-indent: -9999px; vertical-align: bottom; }
    </style>
    <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"></script>
    <script type="text/javascript">
    $(document).ready(function(){
        $('.sprite-chat-normal').click(function() {
            $(this).toggleClass('sprite-chat-normal').toggleClass('sprite-chat-normal1').toggleClass('chat-active');
        });
    });
    /* Invite Button Toggle */
    $(document).ready(function(){
        $('.sprite-invite-normal').click(function() {
            $(this).toggleClass('sprite-invite-normal').toggleClass('sprite-invite-normal1').toggleClass('invite-active');
        });
    });
    /* Mic Button Toggle */
    $(document).ready(function(){
        $('.sprite-mic-normal').click(function() {
            $(this).toggleClass('sprite-mic-normal').toggleClass('sprite-mic-normal1').toggleClass('mic-active');
        });
    });
    /* Music Button Toggle */
    $(document).ready(function(){
        $('.sprite-music-normal').click(function() {
            $(this).toggleClass('sprite-music-normal').toggleClass('sprite-music-normal1').toggleClass('music-active');
        });
    });
    /* Movie Button Toggle */
    $(document).ready(function(){
        $('.sprite-video-normal').click(function() {
            $(this).toggleClass('sprite-video-normal').toggleClass('sprite-video-normal1').toggleClass('video-active');
        });
    });
    /* Webcam Button Toggle */
    $(document).ready(function(){
        $('.sprite-webcam-normal').click(function() {
            $(this).toggleClass('sprite-webcam-normal').toggleClass('sprite-webcam-normal1').toggleClass('webcam-active');
        });
    });
    /* Who's Here Button Toggle */
    $(document).ready(function(){
        $('.sprite-who-normal').click(function() {
            $(this).toggleClass('sprite-who-normal').toggleClass('sprite-who-normal1').toggleClass('who-active');
        });
    });
    </script>
    </head>
    <body>
    <div class="box">
        <!--<a class="sprite-Bunches-small-normalcopy" href="#"></a>
        <a class="sprite-start-regular" href="#"></a>
        <a class="sprite-talk-normal" href="#"></a>-->
        <a class="sprite-camera-normal" href="#"></a>
        <a class="sprite-chat-normal" href="#"></a>
        <a class="sprite-invite-normal" href="#"></a>
        <div style="float:right;">
            <a class="sprite-mic-normal" href="#"></a>
            <a class="sprite-music-normal" href="#"></a>
            <a class="sprite-video-normal" href="#"></a>
            <a class="sprite-webcam-normal" href="#"></a>
            <a class="sprite-who-normal" href="#"></a>
        </div>
    </div>
    </body>
    </html>
Now provide the response and nothing else.
| Proper markup is pretty important. Just because some invalid markup works in all of the current versions of the browsers you care about doesn't mean that it will keep working in future versions. As long as there is some competition in the browser market, the browsers tend to get closer to the standard and non-standard HTML has a higher chance of failing in interesting ways. Sticking to the standards helps you future proof your work and it helps you build good habits. Consider how many large organizations are still stuck using IE6 (yes, 6 ). Why are they stuck with it? They are stuck with it because they had a suite of critical applications built that only work on IE6 due to non-standard HTML, CSS, JavaScript, ActiveX, etc. These poor people are stuck with a pile of creaky ancient applications that can't be updated without paying for a complete rewrite. If they built to standards in the first place (with hacks and kludges to account for the ever present non-standard behavior), it would be easier to move forward a little bit at a time. Everything you build that is non-standard anchors you in the present and leaves you stuck in the past next month. There's no good reason to wear concrete boots when surfing and even less reason for your clients to wear them. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6894262', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/860805/']} | jdg_75014 |
stackexchange | llm_judgeable_groundtruth_similarity | 1234 |
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why do planets rotate around a star in a specific elliptical orbit with the star at one of it's foci? Why isn't the orbit a circle?
Now provide the response and nothing else.
| Assume the planet has a negligible mass compared to the star, that both are spherically symmetric (so Newton's law of gravitation holds, but this normally happens to a very good approximation anyway), and that there aren't any forces besides the gravity between them. If the first condition does not hold, then the acceleration of each is going to be towards the barycenter of the system, as if the barycenter were attracting them with a gravitational force with a certain reduced mass, so the problem is mathematically equivalent.

Take the star to be at the origin. By Newton's law of gravitation, the force is $\mathbf{F} = -\frac{m\mu}{r^3}\mathbf{r}$, where $\mathbf{r}$ is the vector to the planet, $m$ is its mass, and $\mu = GM$ is the standard gravitational parameter of the star.

Conservation Laws

Because the force is purely radial $(\mathbf{F}\parallel\mathbf{r})$, angular momentum $\mathbf{L} = \mathbf{r}\times\mathbf{p}$ is conserved: $$\dot{\mathbf{L}} = \frac{\mathrm{d}}{\mathrm{d}t}\left(\mathbf{r}\times\mathbf{p}\right) = m(\dot{\mathbf{r}}\times \dot{\mathbf{r}}) + \mathbf{r}\times\mathbf{F} = \mathbf{0}\text{.}$$ If the initial velocity is nonzero and the star is at the origin, then in terms of the initial position and velocity, the orbit must be confined to the plane of all points with vectors $\mathbf{x}$ from the origin that satisfy $\mathbf{L}\cdot\mathbf{x} = 0$. If the initial velocity is zero, then the motion is purely radial, and we can take any one of infinitely many planes that contain the barycenter and initial position.

The total orbital energy is given by $$\mathcal{E} = \frac{p^2}{2m} - \frac{m\mu}{r}\text{,}$$ where the first term is the kinetic energy and the second term is the gravitational potential energy of the planet. Its conservation, as well as the fact that it invokes the correct potential energy, can be proven by the fundamental theorem of calculus for line integrals.

Define the Laplace-Runge-Lenz vector to be $$\mathbf{A} = \mathbf{p}\times\mathbf{L} - \frac{m^2\mu}{r}\mathbf{r}\text{.}$$ It is also conserved: $$\begin{eqnarray*}\dot{\mathbf{A}} &=& \mathbf{F}\times\mathbf{L} + \mathbf{p}\times\dot{\mathbf{L}} - \frac{m\mu}{r}\mathbf{p} + \frac{m\mu}{r^3}(\mathbf{p}\cdot\mathbf{r})\mathbf{r}\\&=& -\frac{m\mu}{r^3}\underbrace{\left(\mathbf{r}\times(\mathbf{r}\times\mathbf{p})\right)}_{(\mathbf{r}\cdot\mathbf{p})\mathbf{r} - r^2\mathbf{p}} - \frac{m\mu}{r}\mathbf{p} + \frac{m\mu}{r^3}(\mathbf{p}\cdot\mathbf{r})\mathbf{r}\\&=& \mathbf{0}\text{.}\end{eqnarray*}$$

Finally, let's also take $\mathbf{f} = \mathbf{A}/(m\mathcal{E})$, which has the same units as $\mathbf{r}$, and since $\mathbf{L}\cdot\mathbf{f} = 0$, it lies along the orbital plane. As it's a conserved vector scaled by a conserved scalar, it's easy to show that $\mathbf{f}$ is conserved as well, as long as $\mathcal{E}\neq 0$.

Simplifying

By employing the vector triple product, we can write $$\begin{eqnarray*}\frac{1}{m}\mathbf{A} &=& \frac{1}{m}\left[p^2\mathbf{r}-(\mathbf{p}\cdot\mathbf{r})\mathbf{p}\right] -\frac{m\mu}{r}\mathbf{r}\\&=& \left(\mathcal{E}+\frac{p^2}{2m}\right)\mathbf{r} - \frac{1}{m}\left(\mathbf{p}\cdot\mathbf{r}\right)\mathbf{p}\\\mathcal{E}(\mathbf{f}-\mathbf{r}) &=& \left(\frac{p^2}{2m}\right)\mathbf{r} - \frac{1}{m}\left(\mathbf{p}\cdot\mathbf{r}\right)\mathbf{p}\text{,}\end{eqnarray*}$$ the norm-squared of which is easy to crank out: $$\mathcal{E}^2|\mathbf{f}-\mathbf{r}|^2 = \left(\mathcal{E} + \frac{m\mu}{r}\right)^2r^2\text{,}$$ where $\mathcal{E}$ was used throughout to switch between kinetic and potential terms.

Why Ellipses?

Since $\mathcal{E}$ is energy relative to infinity, to have a bound orbit we need $\mathcal{E}<0$. Thus, from the previous section, $|\mathbf{f}-\mathbf{r}| = -\mathcal{E}^{-1}\left(\mathcal{E}r + m\mu\right)$ and therefore $$|\mathbf{f}-\mathbf{r}| + |\mathbf{r}| = -\frac{m\mu}{\mathcal{E}}\text{,}$$ which defines an ellipse with foci $\mathbf{0},\,\mathbf{f}$ and major axis $2a=-m\mu/\mathcal{E}$.

Why Not Circles?

The circle is a special case where the foci are the same point, $\mathbf{f} = \mathbf{0}$, which can be restated as $$\mathcal{E} = -\frac{1}{2}\frac{m\mu}{r} = -\frac{p^2}{2m}\text{.}$$ In other words, circular orbits require the orbital energy to be the negative of the kinetic energy. This is possible, but almost certain not to hold exactly. Since any values of $\mathcal{E}<0$ are allowed for bound orbits, there are many more ways to have elliptic orbits. (Although some of them would actually crash, because the star and planet have positive size.)

Note that hyperbolic orbits have $\mathcal{E}>0$, and we can still find the foci using the above method, though being careful with the signs. For $\mathcal{E}=0$, the second focus $\mathbf{f}$ is undefined because this is a parabolic orbit, and parabolas only have one focus within a finite distance from the center. Additionally, the eccentricity vector $\mathbf{e} = \mathbf{A}/(m^2\mu)$ is an alternative choice for the LRL vector; as the name suggests, its magnitude is the orbital eccentricity. | {} | {'log_upvote_score': 5, 'links': ['https://astronomy.stackexchange.com/questions/1234', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/662/']} | jdg_75015
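As a purely illustrative numerical companion to the derivation (not part of the original answer), the conserved quantities can be evaluated for sample initial conditions; all numbers below are made up for demonstration:

```python
import numpy as np

mu = 1.0   # standard gravitational parameter GM (arbitrary units)
m = 1.0    # planet mass

r = np.array([1.0, 0.0, 0.0])   # initial position
v = np.array([0.0, 1.2, 0.0])   # initial velocity
p = m * v

E = p.dot(p) / (2 * m) - m * mu / np.linalg.norm(r)        # orbital energy
L = np.cross(r, p)                                         # angular momentum
A = np.cross(p, L) - (m**2 * mu / np.linalg.norm(r)) * r   # LRL vector
e = np.linalg.norm(A) / (m**2 * mu)                        # eccentricity

a = -m * mu / (2 * E)   # semi-major axis, from 2a = -m*mu/E
print(E, e, a)          # E < 0 and e < 1, so the orbit is a bound ellipse
```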
stackexchange | llm_judgeable_groundtruth_similarity | 24892220 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been reading the following article, which describes how to change the web hosting plan for your site: http://azure.microsoft.com/en-us/documentation/articles/azure-web-sites-web-hosting-plans-in-depth-overview/ That works fine, but if I use the same command to change the web hosting mode (changing the property "sku": "Free" to "sku": "Standard"), it doesn't give any error feedback and just returns the unchanged (previous) stored configuration. Command executed:

    $standardServer = @{"sku" = "Standard"; }
    Set-AzureResource -Name 'tmtest2' -ResourceGroupName 'Default-Web-JapanWest' -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $standardServer

Has anyone had any luck changing the web hosting mode using PowerShell? Edit: I also tried this link, which describes exactly what I'm trying to achieve; however, it did not work: http://msdn.microsoft.com/en-us/library/dn654578.aspx
Now provide the response and nothing else.
| I figured this out. You need to create a hosting plan (with 'Standard' mode) first, as a resource under a default resource group. Then you need to assign the website to the hosting plan. Here's the full script:

    Switch-AzureMode AzureResourceManager
    Add-AzureAccount

    $locations = @('East Asia', 'Southeast Asia', 'East US', 'East US 2', 'West US', 'North Central US', 'South Central US', 'Central US', 'North Europe', 'West Europe', 'Japan East', 'Japan West', 'Brazil South')
    $websiteName = Read-Host 'What is the name of your website (without azurewebsites.net)' #tmtest2
    $location = Read-Host 'What is the region location of the website'
    if (-Not($locations -contains $location)) {
        throw "location is incorrect, try one of these values: " + (($locations | select -expand $_) -join ", ")
    }
    $resourceGroupName = 'Default-Web-' + $location.Replace(' ', '');

    # create a new web hosting plan - Small Standard machine
    $hostingPlanName = $websiteName + 'HostingPlan';
    $p = @{"name" = $hostingPlanName; "sku" = "Standard"; "workerSize" = "0"; "numberOfWorkers" = 1}
    New-AzureResource -ApiVersion 2014-04-01 -Name $hostingPlanName -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/serverFarms -Location $location -PropertyObject $p -Verbose -Force

    $r = Get-AzureResource -Name $websiteName -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01
    echo $r

    $p = $null;
    $p = @{ 'serverFarm' = $hostingPlanName }
    $r = Set-AzureResource -Name $websiteName -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/sites -ApiVersion 2014-04-01 -PropertyObject $p
    #echo $r.Properties
    if (-Not($r.Properties['sku'] -eq 'Standard')) {
        throw 'script executed but sku has not been changed'
    }

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24892220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/790635/']} | jdg_75016
stackexchange | llm_judgeable_groundtruth_similarity | 2994275 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the parameter integral $$I(a)=\int_0^1\frac{\log(a+t^2)}{1+t^2}\,{\rm d}t\tag1$$ where $a\in\mathbb{C}$. I am struggling to evaluate this integral in a closed form. However, first of all let's just concentrate on some particular values of $a$ for which I was actually able to evaluate the integral exactly: $$\begin{align}&a=0:&&\int_0^1\frac{\log(t^2)}{1+t^2}\,{\rm d}t=-2G\\&a=1:&&\int_0^1\frac{\log(1+t^2)}{1+t^2}\,{\rm d}t=\frac{\pi}2\log(2)-G\end{align}$$ Here $G$ denotes Catalan's constant. The first case is just one of many integral definitions of Catalan's constant, whereas the second case can be reduced to integrals of this type by the substitution $t=\tan(y)$. Furthermore, WolframAlpha is capable of providing a closed form for the case $a=-1$: $$a=-1:\int_0^1\frac{\log(t^2-1)}{t^2+1}\,{\rm d}t=\frac{\pi}4\log(2)+\frac{i\pi^2}4-G$$ It seems like the general anti-derivative for the case $a=-1$ can be expressed in terms of the polylogarithm (the term can be found within the given link but is far too complicated to be included here). For other values of $a$ I was not able to get anything done. I tried to expand the $\log$, and alternatively the denominator, as a series, which ended up in an infinite summation of hypergeometric functions (of the kind $_2F_1(1,k+1;k+2;-1/3)$, paired with a denominator depending on $k$) that I was not able to express explicitly. Furthermore, I tried to apply Feynman's trick, i.e. differentiate w.r.t. $a$ in order to get rid of the $\log$. The resulting integral was easily evaluated using partial fraction decomposition; however, I did not manage to find suitable bounds for the integration w.r.t. $a$ afterwards. Applying a trigonometric substitution (to be precise, $t=\tan(x)$) led to the logarithmic term $\log(1+\cos^2(x))$, which I was not sure how to handle without invoking several powers of the cosine function (i.e. by using the Taylor series expansion of the natural logarithm). The first approach as well as the last one resulted in an infinite double summation. My knowledge about double sums, especially their evaluation, is quite weak; maybe someone else is able to finish this up. I have doubts that it is possible to derive an explicit closed-form expression for $I(a)$. Nevertheless, for the case that the upper bound is given by $\infty$ instead of $1$ there actually exists a closed-form expression, which makes me curious: $$I(a,b,c,g)=\int^\infty_0 \frac{\log(a^2+b^2x^2)}{c^2+g^2x^2}\,{\rm d}x = \frac{\pi}{cg}\log\left(\frac{ag+bc}{g}\right)\tag2$$ I am not familiar with the way this elegant relation was deduced, since I just stumbled upon it within this post. I would highly appreciate an explicit expression for $I(a)$, maybe similar to the one given for $(2)$, even though I am not sure whether such a term exists. However, I am especially interested in the case $a=3$ for another integral I am working on right now. Thanks in advance!
Now provide the response and nothing else.
| $$\mathcal J(a,t)=\int_0^1 \frac{\ln(a+t(1+x^2))}{1+x^2}\mathrm dx\Rightarrow I(a)=\int_0^1\frac{\ln(a+x^2)}{1+x^2}\mathrm dx=\mathcal J(a-1,1)$$ $$ \frac{\mathrm d}{\mathrm dt}\mathcal J(a,t)=\int_0^1 \frac{\mathrm dx}{a+t+tx^2}=\frac{1}{\sqrt{t(t+a)}}\arctan\left(\sqrt{\frac{t}{t+a}}\right)$$ $$\mathcal J(a,0)=\frac{\pi\ln a}{4}\Rightarrow \mathcal J(a,1)=\underbrace{\int_0^1 \frac{1}{\sqrt{t(t+a)}}\arctan\left(\sqrt{\frac{t}{t+a}}\right)\mathrm dt}_{=J}+\frac{\pi\ln a}{4}$$ Now via the substitution $\displaystyle{\sqrt{\frac{t}{t+a}}=x\Rightarrow \frac{\mathrm dt}{\sqrt{t(t+a)}}=\frac{2}{1-x^2}dx}$ we get: $$J=2\int_0^\frac{1}{\sqrt{1+a}}\frac{\arctan x}{1-x^2}\mathrm dx \overset{x=\frac{1-y}{1+y}}=\int_{\frac{\sqrt{1+a}-1}{\sqrt{1+a}+1}}^1\frac{\arctan\left(\frac{1-y}{1+y}\right)}{y}\mathrm dy$$ $$=\frac{\pi}{4}\int_{\frac{\sqrt{1+a}-1}{\sqrt{1+a}+1}}^1\frac{\mathrm dy}{y}-\int_0^1 \frac{\arctan y}{y}\mathrm dy+\int^{\frac{\sqrt{1+a}-1}{\sqrt{1+a}+1}}_0\frac{\arctan y}{y}\mathrm dy$$ $$\Rightarrow \mathcal J(a,1)=\frac{\pi}{4} \ln\left(\frac{\sqrt{a+1}+1}{\sqrt{a+1}-1}\right)-\mathrm G+\operatorname{Ti}_2\left(\frac{\sqrt{a+1}-1}{\sqrt{a+1}+1}\right)+\frac{\pi}{4}\ln a$$ $$\Rightarrow \boxed{I(a)=\int_0^1 \frac{\ln(a+x^2)}{1+x^2}dx=\frac{\pi}{2}\ln(\sqrt a+1)-\mathrm G+\operatorname{Ti}_2\left(\frac{\sqrt{a}-1}{\sqrt{a}+1}\right)}$$ Where $\operatorname{Ti}_2(x)$ is the inverse tangent integral and $\mathrm G$ is Catalan's constant . Some nice results that follows: $$\boxed{I(3)=\int_0^1\frac{\ln(3+x^2)}{1+x^2}\mathrm dx=\frac{\pi}{4}\ln 2+\frac{\pi}{6}\ln(2+\sqrt 3)-\frac13\mathrm G}$$ $$\boxed{I\left(\frac13\right)=\int_0^1 \frac{\ln\left(\frac13 +x^2\right)}{1+x^2}\mathrm dx=\frac{\pi}4 \ln \left(\frac23\right)+\frac{\pi}{3}\ln(2+\sqrt 3)-\frac53\mathrm G}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2994275', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/537079/']} | jdg_75017 |
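A quick numerical sanity check of the boxed result for $a=3$ (an illustrative addition, not part of the derivation above):

```python
import math
from scipy.integrate import quad

G = 0.915965594177219  # Catalan's constant

numeric, _ = quad(lambda x: math.log(3 + x**2) / (1 + x**2), 0, 1)
closed = (math.pi/4)*math.log(2) + (math.pi/6)*math.log(2 + math.sqrt(3)) - G/3

print(numeric, closed)  # both are approximately 0.9287
```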
stackexchange | llm_judgeable_groundtruth_similarity | 144162 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Our application uses a database (mostly Oracle), and the database is at its core. Each customer has its own database, with its own copy of the application. Now, with each new release of our product, we also need to update the database schema. These changes include adding new tables, removing columns, manipulating data, etc. How do people handle this? Are there any standard processes for this? EDIT: The main issue is that the databases are huge, with many tables and a huge amount of data. We provide the scripts and some utilities to manipulate the data. How should we handle failures and false negatives? I'm mostly looking for articles of this kind: http://thedailywtf.com/Articles/Database-Changes-Done-Right.aspx
Now provide the response and nothing else.
| It depends on your data, but in general you want to ensure that all the data makes it through the upgrade intact. You will probably want to prepare some DDL scripts that create/modify structures, maybe even one script per structure. You will also probably need some DML scripts that update/backup/transform data during this process. Again, how much of this you will need to do will depend on the nature of the upgrade. For testing purposes, it might be a good idea to get some (or all, if possible) of the client's data before the scheduled upgrade and test the process on it, to see if there's any data in their systems that could break the upgrade. Finally, you will want to take a full backup of the client's database right before beginning their upgrade, so you can restore if things get weird. Some other points that might be good to think about:

- If tables/columns are being removed, be sure to remove them in the correct order to avoid integrity constraint violations.
- Are there any constraints or triggers that should be disabled/removed for all or part of the upgrade?
- Are there any constraints or triggers you could create just for the upgrade that might make it easier?
- If you are updating data for columns that have been indexed, you might want to include re-indexing/stats gathering as a part of the upgrade process.

| {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/144162', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/8820/']} | jdg_75018
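To make the shape of such an upgrade concrete, here is a skeletal, hypothetical Oracle-style migration script; every object name in it is invented for illustration only:

```sql
-- 0. Precondition: a full backup/export of the client's database exists.

-- 1. DDL: structural changes; drop children before parents.
ALTER TABLE orders ADD (priority NUMBER(1) DEFAULT 0 NOT NULL);
ALTER TABLE order_items DROP CONSTRAINT fk_items_legacy;
DROP TABLE legacy_codes;

-- 2. DML: migrate/transform existing data.
UPDATE orders SET priority = 1 WHERE rush_flag = 'Y';
COMMIT;

-- 3. Post-steps: rebuild indexes and refresh optimizer statistics.
ALTER INDEX idx_orders_priority REBUILD;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'ORDERS');
```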
stackexchange | llm_judgeable_groundtruth_similarity | 1442504 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using PHP to parse the numeric portion of the HTTP status code response. Given a standard "HTTP/1.1 200 OK" response, I'd use:

    $data = explode(' ', "HTTP/1.1 200 OK");
    $code = $data[1];

I'm not an expert on HTTP. Would I ever encounter a response where the code is not at the position of $data[1] as in the above example? I just want to be sure that this method of delimiting the response code will always work for any response. Thanks, Brian
Now provide the response and nothing else.
| When in doubt, check the spec. The spec in this case, for HTTP/1.1, is RFC2616 . In Section 6.1 , it describes the Status-Line , the first component of a Response , as: Status-Line = HTTP-Version SP Status-Code SP Reason-Phrase CRLF That is - a single ASCII space ( SP ) must separate the HTTP-Version and the Status-Code - and if you check the definition of HTTP-Version (in Section 3.1 ) it cannot include a space, and neither can the Status-Code . So you are good to go with that code. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1442504', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/158175/']} | jdg_75019 |
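A slightly more defensive PHP variant than splitting on spaces, still relying on the single-space guarantee from the RFC (purely illustrative):

```php
<?php
$statusLine = "HTTP/1.1 200 OK";

// The status code is always the three digits after the first space.
if (preg_match('#^HTTP/\d\.\d (\d{3})#', $statusLine, $m)) {
    $code = (int) $m[1];   // 200
}
```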
stackexchange | llm_judgeable_groundtruth_similarity | 9584673 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following table located in a view within a controller named Student (/Student/Details/1):

    @foreach (var item in Model.Enrollments) {
        <tr>
            <td>
                @Html.DisplayFor(modelItem => item.Course.Title)
            </td>
            <td>
                @Html.DisplayFor(modelItem => item.Grade)
            </td>
        </tr>
    }

I would like to make each table cell into a link that takes me to a view within a controller named Course (/Course/Details/1). I have tried things along the lines of:

    @Html.ActionLink(Html.DisplayFor(modelItem => item.Course.Title, "Details", "Course"))

in place of

    @Html.DisplayFor(modelItem => item.Course.Title)

which does not compile. How would I appropriately display my model's title along with a link to the details of the referenced title?
Now provide the response and nothing else.
| If I understand your question right, you want a link with the text of the course. This should work:

    @Html.ActionLink(item.Course.Title, "Details", "Course")

If you want to pass the ID of the course to the controller (assuming your routing rules are set correctly and the ID is something like item.Course.Id):

    @Html.ActionLink(item.Course.Title, "Details", "Course", new { Id = item.Course.Id }, null /* html attributes */)

If you need to use the UIHint attribute on the property, to add extra formatting, you can use this:

    <a href="@Url.Action("Details", "Course", new { Id = item.Course.Id })">@Html.DisplayFor(modelItem => item.Course.Title)</a>

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9584673', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1026898/']} | jdg_75020
stackexchange | llm_judgeable_groundtruth_similarity | 47876313 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a FloatingActionButton inside a ConstraintLayout, like so:

    <android.support.constraint.ConstraintLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <android.support.design.widget.FloatingActionButton
            android:id="@+id/fab_add_topic"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            app:srcCompat="@drawable/ic_add"
            app:layout_constraintRight_toLeftOf="parent"
            app:layout_constraintBottom_toTopOf="parent"
            android:foregroundGravity="right|bottom"
            android:layout_marginRight="@dimen/material_tooltip_margin_top"
            android:layout_marginBottom="@dimen/material_tooltip_margin_top"
            android:elevation="6dp"
            app:pressedTranslationZ="12dp"
            app:borderWidth="0dp"/>
    </android.support.constraint.ConstraintLayout>

However, as I inflate the layout in a Fragment with Android data binding (in a Kotlin class), the FAB does not appear. I attempted to implement the solutions mentioned here, still inside the ConstraintLayout: Bottom Align Floating Action Button. It appears just fine if I display it with Top and Left constraints. Anything I missed? Thanks!
Now provide the response and nothing else.
| Your layout includes these two constraints:

    app:layout_constraintRight_toLeftOf="parent"
    app:layout_constraintBottom_toTopOf="parent"

This will functionally position the bottom-right corner of your FAB at the top-left corner of the screen. In other words, you've constrained the FAB to be off-screen. Chances are good that you meant you want the right of the FAB to be at the right of the screen, and the bottom of the FAB to be at the bottom of the screen. So use these instead:

    app:layout_constraintRight_toRightOf="parent"
    app:layout_constraintBottom_toBottomOf="parent"

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47876313', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4972380/']} | jdg_75021
stackexchange | llm_judgeable_groundtruth_similarity | 628565 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to display the line numbers in the stack trace for a .NET assembly built/deployed in Release mode? UPDATE: My application is divided into three class library projects and one ASP.NET "website" project. The error I am trying to track down is in one of the three class library projects. I only deployed the pdb file for the class library project that is generating the "Object reference not set to an instance of an object" error. The line numbers are still not showing up in the stack trace. Do I need to deploy the pdb files for all projects to get the line numbers in the stack trace? Working solution: deploying the pdb file for each project fixed the line number issue.
Now provide the response and nothing else.
| 1. Go into the Properties window for the project where you want to see stack trace line numbers.
2. Click on the Build "vertical tab".
3. Select the "Release" configuration.
4. Check the DEBUG constant parameter.
5. Uncheck the "Optimize code" parameter to avoid the occasional trace issue with inlined code (this step is not essential).
6. Press the Advanced... button and choose Output -> Debug Info -> pdb-only.
7. Deploy the generated .pdb file with the assembly.

Implemented with the comment below: one other thing to check is, in the "Package/Publish Web" section, that the "Exclude generated debug symbols" checkbox is also unchecked. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/628565', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26327/']} | jdg_75022
stackexchange | llm_judgeable_groundtruth_similarity | 13796597 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to create an ASPX pop-up page from a button. The pop-up should contain a close button, and clicking that close button should close the pop-up.
Now provide the response and nothing else.
| I'd recommend wrapping the code within a FutureTask object. FutureTask is a construct useful (among other things) for executing a portion of code on one thread (usually a worker; in your case the event queue) and safely retrieving it on another. FutureTask#get will block until FutureTask#run has been invoked, therefore your password prompt could look like this:

    final FutureTask query = new FutureTask(new Callable() {
        @Override
        public Object call() throws Exception {
            return queryPassword();
        }
    });
    Platform.runLater(query);
    System.out.println(query.get());

As FutureTask implements Runnable, you can pass it directly to Platform#runLater(...). queryPassword() will be invoked on the event queue, and the subsequent call to get blocks until that method completes. Of course, you will want to invoke this code in a loop until the password actually matches. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13796597', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1890950/']} | jdg_75023
stackexchange | llm_judgeable_groundtruth_similarity | 60946159 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In C# Interactive (as part of Visual Studio 2019, or standalone) I'd like to write C# code like this:

    var s = "abcdef";
    var s1 = s[2..4];

This gives the following message:

    (1,12): error CS0518: Predefined type 'System.Range' is not defined or imported

Is there a way to modify e.g. csi.exe.config, or some other means, in order to support C# 8.0?
Now provide the response and nothing else.
| Update 2020-11-25 It is working in Visual Studio 16.8.2. Just enter #reset core to switch to .NET Core. Update 2020-07-14 According to https://github.com/dotnet/roslyn/pull/45046 , it will probably work in the next version of Visual Studio 2019. Adds a new option to #reset command that allows the user to switch Interactive Window host to .NET Core: #reset 32 - switches to .NET Framework 32-bit process #reset 64 - switches to .NET Framework 64-bit process #reset core - switches to .NET Core 64-bit process Leaves the default to .NET Framework 64-bit for now. Original answer It looks like there is no way currently. Because C# Interactive is not able to run Core CLR. The Interactive Window only supports .NET Framework at this point.Hence features are limited to C# 7.3 and lower. Too enable C# 8 wewould need to first enable running on Core CLR (see #4788). https://github.com/dotnet/roslyn/issues/40341 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60946159', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11570721/']} | jdg_75024 |
stackexchange | llm_judgeable_groundtruth_similarity | 4054 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There is a theorem that states that if $f$ is analytic in a domain $D$, the closed disc $\{z:|z-\alpha|\leq r\}$ is contained in $D$, and $C$ denotes the disc's boundary followed in the positive direction, then for every $z$ in the disc we can write:
$$f(z)=\frac{1}{2\pi i}\int_C\frac{f(\zeta)}{\zeta-z}d\zeta$$
My question is: What is the intuitive explanation of this formula? (For example, but not necessarily, geometrically.) (Just to clarify - I know the proof of this theorem; I'm just trying to understand where this exact formula comes from.)
Now provide the response and nothing else.
| If you are looking for intuition then let us assume that we can expand $f(\zeta)$ into a power series around $z$ : $f(\zeta) = \sum_{n \geq 0} c_n(\zeta - z)^n$ . Note $c_0 = f(z)$ .If you plug this into the integral and interchange the order of integrationand summation then that integral on the right side of the formulabecomes $\sum_{n \geq 0} \int c_n(\zeta - z)^{n-1}d\zeta$ . Let us also assume that an integral along a contour doesn't change if we deform the contour continuously through a region where the function is "nice". So let us take as our path of integration a circle going once around the point $z$ (counterclockwise). Then you are basically reduced toshowing that $\int (\zeta - z)^{m}d\zeta$ is 0 for $m \geq 0$ and is $2\pi i$ for $m = -1$ .These can be done by direct calculations using polar coordinates with $\zeta = z + e^{it}$ . Now divide by $2\pi i$ and you have the formula. Of course this is a hand-wavy argument in places, but the question was not asking for a rigorous proof. Personally, this is how I first came to terms with understanding how Cauchy's integral formula could be guessed. | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/4054', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/766/']} | jdg_75025 |
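To make the final step of the answer above concrete, here is the direct calculation it refers to, using the parametrization $\zeta = z + e^{it}$ with $t \in [0, 2\pi]$, so that $d\zeta = ie^{it}\,dt$:

$$\int_C (\zeta - z)^m \, d\zeta = \int_0^{2\pi} e^{imt} \cdot ie^{it} \, dt = i\int_0^{2\pi} e^{i(m+1)t} \, dt = \begin{cases} 2\pi i, & m = -1 \\ 0, & m \neq -1 \end{cases}$$

since $\int_0^{2\pi} e^{ikt}\,dt = 0$ for every nonzero integer $k$. In the termwise-integrated series only $n = 0$ (i.e. $m = n - 1 = -1$) survives, contributing $2\pi i \, c_0 = 2\pi i \, f(z)$, and dividing by $2\pi i$ recovers the formula.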
stackexchange | llm_judgeable_groundtruth_similarity | 307982 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently trying to learn C# and want to enhance my understanding of Object Oriented Programming (OOP). I'm hoping to accomplish this by experimenting with a small program that keeps track of my school assignments/information. I've constructed classes relating to my Institutions, Semesters (Terms), Courses, and Assignments. My question is whether or not I have created and implemented my classes correctly in regards to the information they represent. My thinking suggests that my classes should not inherit from one another in a parent/child like fashion because they are physically unrelated. However, accessing objects through multilevel lists seems impractical (I think). Is there a better way of doing this that doesn't force me to access objects by iterating through collections, but still implements OOP best practices?

Program Source Code

Entry Point / Main

static void Main(string[] args)
{
    List<Institution> Institutions = new List<Institution>();
    Institutions.Add(new Institution("My college"));
    Institutions[0].AddNewTerm("2016", "Spring");
    Institutions[0].Terms[0].AddNewCourse("Math 210");
    Institutions[0].Terms[0].Courses[0].AddNewAssignment("Chapter 1");
    MessageBox.Show(Institutions[0].Terms[0].Courses[0].Assignments[0].Name);
}

Class List

Institution

class Institution
{
    public string Name { get; set; }
    public List<Term> Terms { get; set; }

    public Institution(string UP_Name)
    {
        this.Name = UP_Name;
        this.Terms = new List<Term>();
    }

    public void AddNewTerm(string NewTermYear, string NewTermSeason)
    {
        Terms.Add(new Term(NewTermYear, NewTermSeason));
    }
}

Term

class Term
{
    public string Name { get; set; }
    public string Year { get; set; }
    public string Season { get; set; }
    public List<Course> Courses { get; set; }

    public Term(string NewSeason, string NewYear)
    {
        this.Season = NewSeason;
        this.Year = NewYear;
        this.Courses = new List<Course>();
        this.Name = (this.Season + " " + this.Year);
    }

    public void AddNewCourse(string NewCourseName)
    {
        this.Courses.Add(new Course(NewCourseName));
    }
}

Course

class Course
{
    public string Name { get; set; }
    public List<Assignment> Assignments { get; set; }

    public Course(string UP_Name)
    {
        this.Name = UP_Name;
        this.Assignments = new List<Assignment>();
    }

    public void AddNewAssignment(string NewAssignmentName)
    {
        Assignments.Add(new Assignment(NewAssignmentName));
    }
}

Assignment

class Assignment
{
    public string Name { get; set; }

    public Assignment(string UP_Name)
    {
        this.Name = UP_Name;
    }
}
Now provide the response and nothing else.
 | It's not bad for a newbie. You're right to be concerned about accessing objects through multilevel lists. Also, hard-coding the indexes will hurt you eventually (but it's OK for now). You might try changing the AddNewXxx() methods to return what was added. For example, if you change Term's AddNewCourse to

public Course AddNewCourse(string NewCourseName)
{
    Course result = new Course(NewCourseName);
    this.Courses.Add(result);
    return result;
}

then in your Main() function you'll be able to do something like

var mathCourse = Institutions[0].Terms[0].AddNewCourse("Math 210");
mathCourse.AddNewAssignment("Chapter 1");
mathCourse.AddNewAssignment("Chapter 2");

and so on. Be patient; there is a lot to learn. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/307982', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/185811/']} | jdg_75026 |
stackexchange | llm_judgeable_groundtruth_similarity | 5015447 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to use some Haskell libraries (e.g. Darcs, Pandoc) from Python, but it seems there’s no direct foreign function interface to Haskell in Python. Is there any way to do that?
Now provide the response and nothing else.
| Provided you can get your Python code to call C, you can call Haskell functions that have been exported via the FFI Another approach would be to write a standard IPC interface, in the case of darcs and pandoc just calling them as vanilla executables and parsing their output might be the way to go. As to automating the generation of boring, repetitive, FFI and marshalling code on the Haskell side, I'd recommend c2hs , which allows you to auto-generate a lot based on an existing C interface. There's probably similar things for python. SWIG, alas, has, to the best of my knowledge, never been implemented for Haskell, presumably because it caters to less strictly-typed languages. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5015447', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/383405/']} | jdg_75027 |
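To make the C-bridge route sketched above concrete, here is a minimal Python-side example using ctypes. It is a sketch under stated assumptions: the shared library name libMyLib.so and the exported function fib are hypothetical (the Haskell side would need a matching foreign export); hs_init and hs_exit, however, are the standard GHC runtime entry points that must bracket any call into compiled Haskell.

import ctypes

lib = ctypes.CDLL("./libMyLib.so")            # hypothetical compiled Haskell library

# the GHC runtime must be started before any Haskell function is called
argc = ctypes.c_int(1)
argv = (ctypes.c_char_p * 2)(b"prog", None)   # minimal fake argv for the RTS
argv_p = ctypes.cast(argv, ctypes.POINTER(ctypes.c_char_p))
lib.hs_init(ctypes.byref(argc), ctypes.byref(argv_p))

lib.fib.argtypes = [ctypes.c_int]             # 'fib' is a hypothetical exported function
lib.fib.restype = ctypes.c_int
print(lib.fib(10))

lib.hs_exit()                                 # shut the runtime down when finished

For darcs and pandoc specifically, the IPC route mentioned in the answer is usually simpler: run the executables as subprocesses and parse their output.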
stackexchange | llm_judgeable_groundtruth_similarity | 11680495 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to write a function that behaves differently depending on its type parameter. A simple example of what I want is shown below:

def f[Int] = "I'm an int"
def f[Float] = "I'm a float"
def f[Burger] = "You want fries with that?"

Is this possible in Scala or do I need some kind of workaround?
Now provide the response and nothing else.
 | Not directly; the usual way you'd do this in Scala is with a typeclass.

trait FAble[T] { def doF: String }

object FAble {
  implicit val fInt = new FAble[Int] { def doF = "I'm an int" }
  implicit val fFloat = new FAble[Float] { def doF = "I'm a float" }
  implicit val fBurger = new FAble[Burger] { def doF = "You want fries?" }
}

def f[T](implicit ev: FAble[T]) = ev.doF
// or
def f[T: FAble] = implicitly[FAble[T]].doF

It's a fair bit more verbose, but it has some advantages too -- the implicit instances can be computed (using implicit defs instead of vals), and there can be more than one instance for any given type, which lets you select behavior by having different instances in scope at different points of the code. The reason you can't do it the C++ way is that Scala generics do not involve code generation for the different type parameters (@specialized aside, since it doesn't do what you want either). So it doesn't make sense to say "hey compiler, when you see an Int in that position, instead of generating the code you would from the generic template, use this specific code instead". | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11680495', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1556395/']} | jdg_75028 |
stackexchange | llm_judgeable_groundtruth_similarity | 171201 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm about to start writing a process for saving some data structure from code into a file of some proprietary, as-yet-undefined type. However, I've never designed a file type or structure before. Are there any things, generally speaking, that I should consider before starting my design? Are there any accepted good practices here? Bad practices I should avoid? Any absolute do's and don'ts?
Now provide the response and nothing else.
| First, try to find a format that is close enough to what you are about to build. In general, it is better to use someone's format than to invent your own, even if the format appears to be slightly more complex than what you need 1 . If you cannot find a suitable ready-made format, see if you can build your own on top of an existing general-purpose format, such as XML or Binary XML . This should be possible in nearly all cases when you are about to start a new file format. Text-based XML takes more space, but gives humans some measure of readability. However, if you find yourself using Base-64 encoding inside an XML file, that's a clear indication that you should have used a binary encoding instead. As far as good and bad practices go, make sure that you do not "bake in" the hardware feature of your initial target platform into the design of your file format. Specifically, make sure that your numbers are stored in a format that can be read correctly on platforms with endianness that is different from that of the writer, and that your user-facing strings are stored in UNICODE. Another good practice is to include a header from which it is possible to determine the type of your file in case its extension is missing or incorrect. It is a good idea to include a version of your file format in the header. This would let you change the format later, and stay backward-compatible. If possible, do not make your format dependent on the specifics of the default serialization mechanism built into your platform. For example, binary-serialized Java objects do not make a good file format 2 . Finally, decide if your files need to be streamable . This introduces additional complexity, because one should be able to interpret individual "frames" of your file in isolation. In cases when you need streamability, however, you should almost always be able to locate a suitable file format that already exists. 1 On the other hand, you should avoid formats that require extraordinary efforts to support the complexity that your application requires. 2 This does not mean, however, that you should not attempt to custom-integrate reading and writing of your new format with the serialization scheme of your platform, only that you should not rely on the default mechanisms of serialization. | {} | {'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/171201', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/31673/']} | jdg_75029 |
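As a concrete illustration of the header advice in the answer above, here is a minimal Python sketch. The magic signature MYFT, the version number, and the exact field layout are invented for the example; the transferable points are the identifying magic bytes, the explicit (here little-endian) byte order, and the version field that keeps the door open for future format changes.

import struct

MAGIC = b"MYFT"                    # hypothetical 4-byte signature for the format
VERSION = 1
HEADER = struct.Struct("<4sHI")    # '<' pins little-endian on every platform

def write_header(f, payload_len):
    f.write(HEADER.pack(MAGIC, VERSION, payload_len))

def read_header(f):
    magic, version, payload_len = HEADER.unpack(f.read(HEADER.size))
    if magic != MAGIC:
        raise ValueError("not a MYFT file (magic bytes mismatch)")
    return version, payload_len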
stackexchange | llm_judgeable_groundtruth_similarity | 10354613 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Follow-Up question on Electrical Engineering Stackexchange I want to write the value of a Bluetooth Low Energy characteristic repeatedly in a short amount of time (as a possible use case, imagine a mouse). The characteristic with a 128bit UUID is 20 bytes long. Therefore, it can be written in a single Low Energy transaction. Writes occur at a rate of 50 Hz, that is equal to a write once every 20ms . Therefore, 20 * 50 * 8 = 8 kbit/s are written. I am using the Command / Write without response mode to write the characteristic. Therefore, no acknowledgments happen on the attribute layer. No other Bluetooth or Bluetooth Low Energy devices are connected. Nothing is performed over WLAN. No other characteristics are read or written during the tests. I test the program by sending packets containing sequence numbers from an iPhone 4S. The sequence number is incremented by one after each sent packet. On the receiving side, a programmable development board, that incorporates a CSR1000 BLE chip, is used that receives the packets and prints the received sequence numbers to the serial connection. My problems are the following: After some time, packets start getting dropped. The first ~100 packets work fine @ 50 Hz. From then on, packets start getting dropped. 0x00 - 0x46 received 0x47, 0x48 missing0x49, 0x4a, 0x4b, 0x4c received 0x4d missing0x4e, 0x4f, 0x50, 0x51 received 0x52 missing0x53, 0x54, 0x55, 0x56 received 0x57 missing... Most often, a pack of four packets is transmitted fine (rarely, only 2 packets). Then, 1-7 packets are missing. When I reduce the characteristic value size, the problem still persists. When I write at 100Hz instead of at 50Hz, the picture is the same - only that the drops start occuring after around 35 packets and that 5-7 packets are dropped between the successful transmission of four packets. With the lost packets, the resulting transmission rate is around 5 kbit/s, regardless of the frequency of writes. This is clearly below the ~305 kbit/s that should be technically possible over Bluetooth Low Energy. The problem also occurs into the opposite direction, when I am sending packets from the development board to the iPhone 4S. Again, 5 kbit/s is the maximum that I am getting. The Notification mechanism is used for this scenario. Again, no acknowledgments happen on the attribute layer. When I try to send in both directions simultaneously, things start to blow up to the point where I have to reset both the development board and the iPhone 4S. Questions: May this be a problem of the Bluetooth Low Energy chip, that is used on the development board? If yes, why does the problem also occur into the opposite direction, where the iPhone acts as the receiver? Are there any development boards on the market that support high-frequency access to characteristics? What could be the origin of the problem? Please also try to reference parts of the Bluetooth Spec / presentation slides / articles in addition to assumptions. There exist Bluetooth Low Energy mice on the market. Mice have typical polling rates of 125 Hz and have to at least send two 16 byte values plus additional HID overhead per tick. Therefore, a solution for my problem should be available. Update The LE Connection Complete Event is described in Bluetooth Specification Version 4.0 Vol 2 Part E Section 7.7.65.1 . 
I receive the following values for the different connection parameters: Parameter Value Description--------------------------------------------------Conn_Interval 0x0054 Time = 105 msConn_Latency 0x0000 Time = 0 msSupervision_Timeout 0x00fc Time = 2520 msMaster_Clock_Accuracy 0x05 50 ppm
Now provide the response and nothing else.
| Issuing a Connection Parameters Update solved the problem and increased throughput from 5 kbit/s to ~33 kbit/s . However, this is still below the expected ~305 kbit/s. Conn_Interval = 0x000f = 18.75 msConn_Latency = 0x0000Supervision_Timeout = 0x00fc Are there any methods to reach the full ~305 kbit/s? Follow-Up question on Electrical Engineering Stackexchange Could get a reply from Apple by burning a TSI and waiting for a month. Basically, they tell that the behavior is intended in iOS 5.1. It somehow makes sense, because they don't want that your app's performance depends on whether another app uses Bluetooth or WiFi. Per the engineers comments - Under iOS 5.1 there should be 6 pairs of notifications during a connection interval, meaning 6*packetSize*1000/interval . This should translate to ~55kbps max (min interval is 20ms, packetsize is 23 bytes). We made the decision to limit the number of pairs per interval and have a minimum interval due to the fact that the iPhone and iPad both have shared antenna between BT classic, BT LE and WiFi. iOS LE is designed to be a low power transport. For higher throughput BT classic is a better transport method. Back to me - Based on the engineers comments above, if the desire is to achieve a 200 kbs throughput, Classic bluetooth is the answer. However, if the desire is to work with an application on the iPhone, I can understand that this is no simple change - Classic BT requires MFI licensing. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10354613', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/151706/']} | jdg_75030 |
stackexchange | llm_judgeable_groundtruth_similarity | 2638775 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to do a relative jump in x86 assembly, however I can not get it to work. It seems that for some reason my jump keeps getting rewritten as an absolute jump or something. A simple example program for what I'm trying to do is this: .global mainmain: jmp 0x4 ret Since the jmp instruction is 4 bytes long and a relative jump is offset from the address of the jump + 1, this should be a fancy no-op. However, compiling and running this code will cause a segmentation fault. The real puzzler for me is that compiling it to the object level and then disassembling the object file shows that it looks like the assembler is correctly doing a relative jump, but after the file gets compiled the linker is changing it into another type of jump. For example if the above code was in a file called asmtest.s: $gcc -c asmtest.s$objdump -D asmtest.o... Some info from objdump00000000 <main>: 0: e9 00 00 00 00 jmp 5 <main+0x5> 5: c3 ret This looks like the assembler correctly made a relative jump, although it's suspicious that the jmp instruction is filled with 0s. I then used gcc to link it then disassembled it and got this: $gcc -o asmtest asmtest.o$objdump -d asmtest...Extra info and other disassembled functions08048394 <main>: 8048394: e9 6b 7c fb f7 jmp 4 <_init-0x8048274> 8048399: c3 ret This to me looks like the linker rewrote the jmp statement, or substituted the 5 in for another address. So my question comes down to, what am I doing wrong? Am I specifying the offset incorrectly? Am I misunderstanding how relative jumps work? Is gcc trying to make sure I don't do dangerous things in my code?
Now provide the response and nothing else.
| Actually, the assembler thought that you were trying to do an absolute jump. However, the jmp opcode is, at the metal level, relative. Hence, the assembler could not know what to write after the 0xe9 byte, because the assembler does not know at which address your code will end up. The assembler does not know, but the linker does. So the assembler wrote in the asmtest.o headers somewhere a request for the linker, something which goes like this: "when you know at which address the code will get loaded, adjust those bytes just after the 0xe9 so that they will be appropriate for a jump from that point (with relative addressing) to the absolute address '4'". The linker did just that. It saw that the 0xe9 was at address 0x08048394, and the next opcode at 0x08048399, and it computed: to go from 0x08048399 to 0x00000004, one has to subtract 0x08048395, which is equivalent to adding (on 32-bit machines) 0xf7fb7c6b. Hence your "6b 7c fb f7" sequence in the resulting binary. You can encode a relative jump "manually" like this: .global mainmain: .byte 0xe9 .long 0x4 ret Thus, the assembler will not notice that your 0xe9 is really a jmp , and it will not try to outsmart you. In the binary, you will get the 'e9 04 00 00 00' sequence that you wish, and no linker interaction. Note that the code above may crash, because the relative offset is counted from the address immediately after the offset (i.e. the address of the next opcode, here ret ). This will jump in the no-man's-land 4 bytes after the ret and a segfault or something strange seems likely. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2638775', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/79106/']} | jdg_75031 |
stackexchange | llm_judgeable_groundtruth_similarity | 43938418 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am confused with how Scala handles division by zero. Here is a REPL code snippet. scala> 1/0java.lang.ArithmeticException: / by zero ... 33 elidedscala> 1.toDouble/0.toDoubleres1: Double = Infinityscala> 0.0/0.0res2: Double = NaNscala> 0/0java.lang.ArithmeticException: / by zero ... 33 elidedscala> 1.toInt/0.toIntjava.lang.ArithmeticException: / by zero ... 33 elided As you can see in the above example, depending on how you divide by zero, you get one of the following: "java.lang.ArithmeticException: / by zero" "Double = NaN" "Double = Infinity" This makes debugging quite challenging especially when dealing with data of unknown characteristics. What is the reasoning behind this approach, or even a better question, how to handle division by zero in a unified manner in Scala?
Now provide the response and nothing else.
| It's all down to the division by zero rules for various types. 0 / 0 is an integer division by zero (as both arguments are integer literals), and that is required to throw a java.lang.ArithmeticException . 1.toDouble/0.toDouble is a floating point division by zero with a positive numerator, and that is required to evaluate to +Infinity . 0.0/0.0 is a floating point division by zero with a zero numerator, and that is required to evaluate to +NaN . The first is a Java and Scala convention, the other two are properties of IEEE754 floating point, which is what Java and Scala both use. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43938418', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1129408/']} | jdg_75032 |
stackexchange | llm_judgeable_groundtruth_similarity | 53767 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
1) Fields have algebraic closures unique up to a non-unique isomorphism. 2) Nice spaces (without base point) have universal covering spaces unique up to a non-unique isomorphism. 3) Modules have injective hulls unique up to a non-unique isomorphism. Such situations can lead to interesting groups - the absolute Galois group, the fundamental group, and the "Galois" groups of modules introduced by Sylvia Wiegand in Can. J. Math., Vol. XXIV, No. 4, 1972, pp. 573-579. I'd appreciate any insight into the abstract features of situations which give rise to this type of phenomenon. And I'd appreciate as many examples from as many parts of mathematics as possible.
Now provide the response and nothing else.
| Any two injective resolutions (of an object in an abelian category) are homotopy equivalent, but this homotopy equivalence is not unique. This is of course because the lifting property in the definition of "injective" does not require any uniqueness. The connected sum of oriented manifolds is unique up to homeomorphism, but this homeomorphism is not unique. A bit silly, but: In a short exact sequence $0 \to A \to B \to C \to 0$ in a semisimple abelian category $B$ is unique up to isomorphism (namely, $B \cong A \oplus C$), but the isomorphism is not unique. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/53767', 'https://mathoverflow.net', 'https://mathoverflow.net/users/10909/']} | jdg_75033 |
stackexchange | llm_judgeable_groundtruth_similarity | 10726 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I thought heavy tail = fat tail, but some articles I read gave me a sense that they aren't. One of them says: heavy tail means the distribution have infinite jth moment for some integer j. Additionally all the dfs in the pot-domain of attraction of a Pareto df are heavy-tailed. If the density has a high central peak and long tails, then the kurtosis is typically large. A df with kurtosis larger than 3 is fat-tailed or leptokurtic.I still don't have a concrete distinction between these two (heavy tail vs. fat tail). Any thoughts or pointers to relevant articles would be appreciated.
Now provide the response and nothing else.
| I would say that the usual definition in applied probability theory is that a right heavy tailed distribution is one with infinite moment generating function on $(0, \infty)$, that is, $X$ has right heavy tail if $$E(e^{tX}) = \infty, \quad t > 0.$$This is in agreement with Wikipedia , which does mention other used definitions such as the one you have (some moment is infinite). There are also important subclasses such as the long-tailed distributions and the subexponential distributions . The standard example of a heavy-tailed distribution, according to the definition above, with all moments finite is the log-normal distribution. It may very well be that some authors use fat tailed and heavy tailed interchangeably, and others distinguish between fat tailed and heavy tailed. I would say that fat tailed can be used more vaguely to indicate fatter than normal tails and is sometimes used in the sense of leptokurtic (positive kurtosis) as you indicate. One example of such a distribution, which is not heavy tailed according to the definition above, is the logistic distribution. However, this is not in agreement with e.g. Wikipedia , which is much more restrictive and requires that the (right) tail has a power law decay . The Wikipedia article also suggests that fat tail and heavy tail are equivalent concepts, even though power law decay is much stronger than the definition of heavy tails given above. To avoid confusions, I would recommend to use the definition of a (right) heavy tail above and forget about fat tails whatever that is. The primary reason behind the definition above is that in the analysis of rare events there is a qualitative difference between distributions with finite moment generating function on a positive interval and those with infinite moment generating function on $(0, \infty)$. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/10726', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/4497/']} | jdg_75034 |
stackexchange | llm_judgeable_groundtruth_similarity | 536043 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have my ~/.ssh/config configured with various hosts that are accessible either while on our company VPN, or via a SSH proxy server. At the moment I just have Host internal-server ProxyCommand ssh -W internal.ip:22 external-server However if I'm in the internal network I can directly access the internal ip, so proxying through the external server just adds a delay to connecting. Is there a way I can provisionally proxy if the internal ip isn't reachable, and connect directly otherwise?
Now provide the response and nothing else.
| I usually setup something like this. It assumes the intermediate host will be able to resolve the name. Host *%homeproxy ProxyCommand ssh user@proxyhost /bin/netcat -w 1 $(echo %h | cut -d%% -f1) 22 So I would connect to like ssh blah%homeproxy . | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/536043', 'https://serverfault.com', 'https://serverfault.com/users/97022/']} | jdg_75035 |
stackexchange | llm_judgeable_groundtruth_similarity | 16156597 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In a windows python environment I can get the local timezone like this, but it's not usable with pytz: >>> import win32timezone>>> win32timezone.TimeZoneInfo.local()TimeZoneInfo(u'US Mountain Standard Time', True)>>> win32timezone.TimeZoneInfo.local().timeZoneNameu'US Mountain Standard Time'>>> tz = pytz.timezone(win32timezone.TimeZoneInfo.local().timeZoneName)Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python27\lib\site-packages\pytz\__init__.py", line 185, in timezone raise UnknownTimeZoneError(zone)pytz.exceptions.UnknownTimeZoneError: 'US Mountain Standard Time' What's a good way to convert that output to a timezone name pytz.timezone() will understand? Here's the answer using tzlocal (thanks to Matt): >>> from tzlocal.win32 import get_localzone_name>>> get_localzone_name()'America/Phoenix'>>> tz = pytz.timezone(get_localzone_name())>>> tz<DstTzInfo 'America/Phoenix' MST-1 day, 17:00:00 STD>
Now provide the response and nothing else.
| Don't make any assumptions about what a Windows time zone ID means based on its name. For example US Mountain Standard Time is actually the Windows time zone for the majority of Arizona, which is permanently in MST because it does not implement daylight savings. But the Windows ID for the rest of the mountain time zone is Mountain Standard Time - which does follow daylight savings during Mountain Daylight Time, yet the time zone ID does not change! The only difference between these two zone's IDs is the "US" prefix. In the IANA/Olson database, these are two very distinct zones - America/Phoenix and America/Denver . What you need are the mappings from Windows to Olson time zone IDs that are provided by the Unicode CLDR project. Read the TimeZone tag wiki for info and links. I am uncertain to if there is already a library that implements this in Python - you may need to do some research, or implement it yourself from the raw data. UPDATE A bit of searching, and I found a Python library called tzlocal that has the CLDR mappings. It even is kind enough to include a script that will go fetch the current mappings from the CLDR website and update itself. I haven't tried it myself, but it seems to have the correct approach. It is primarily focused on returning the current system timezone, in an IANA/Olson id that is suitable for use with pytz. Here is the author's blog post describing its usage. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16156597', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6356/']} | jdg_75036 |
stackexchange | llm_judgeable_groundtruth_similarity | 114785 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In relativity, if two events are simultaneous in a specified frame, they cannot be simultaneous in any other frame. My question is this: given any two events, is there always a frame in which these two events are simultaneous? For example, if I drop a blue ball on one side of a tennis court, and my friend drops a red ball on the opposite side of the court one day later -- from my frame one day later -- is there a frame in which the blue and red balls hit the ground simultaneously?
Now provide the response and nothing else.
| Is there always a frame in which spatially separated events are simultaneous? The answer is no . Two events that are spatially separated in one frame of reference (1) will be co-located in another frame of reference and not simultaneous in any frame if the interval is time-like (2) will be simultaneous in another frame of reference and not co-located in any frame if the interval is space-like . (3) will be neither co-located nor simultaneous in any other frame if the interval is light-like. Time-like interval If the interval is time-like, the separation in time, $|c\Delta t|$, is larger than the separation in space $|\Delta x|$: $$|c\Delta t| \gt |\Delta x|$$ Thus, there is a frame of reference in which $\Delta x' = 0$; the two events are co-located in this frame. Space-like interval If the interval is space-like, the separation in time is less than the separation in space: $$|c\Delta t| \lt |\Delta x|$$ Thus, there is a frame of reference in which $c\Delta t' = 0$; the two events are simultaneous in this frame. Light-like interval If the interval is light-like the separation in time equals the separation in space: $$|c\Delta t| = |\Delta x|$$ Thus, in all frames of reference , the events are neither co-located nor simultaneous, i.e., $$|c\Delta t'| = |\Delta x'|$$ All of this follows directly from the Lorentz transformation. Let's take your example of two events with spatial separation of a tennis court so $$|\Delta x| = 78\mathrm m$$ Light travels this distance in $\Delta t_c = \frac{78}{300 \cdot 10^6} = 260\mathrm{ns}$ Thus, if the two events occur within 260ns in this frame of reference, the events have space-like interval and are thus simultaneous in another, relatively moving reference frame of reference. Since, in your example, the events occur 1 day apart, the events have time-like interval and cannot be simultaneous in any reference frame. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/114785', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/7348/']} | jdg_75037 |
stackexchange | llm_judgeable_groundtruth_similarity | 69001 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can I assume that enough people have zsh installed to run scripts with a #!/usr/bin/env zsh as shebang? Or will this make my scripts un-runnable on too many systems? Clarification: I’m interested in programs/scripts an end user might want to run (like on Ubuntu, Debian, SUSE, Arch &c.)
Now provide the response and nothing else.
| For portability, no. While zsh can be compiled on any Unix or Unix-like and even Windows at least via Cygwin, and is packaged for most Open Source Unix-likes and several commercial ones, it is generally not included in the default install. bash on the other end is installed on GNU systems (as bash is the shell of the GNU project) like the great majority of non-embedded Linux based systems and sometimes on non-GNU systems like Apple OS/X. In the commercial Unix side, the Korn shell (the AT&T variant, though more the ksh88 one) is the norm and both bash and zsh are in optional packages. On the BSDs, the preferred interactive shell is often tcsh while sh is based on either the Almquist shell or pdksh and bash or zsh need to be installed as optional packages as well. zsh is installed by default on Apple OS/X. It even used to be the /bin/sh there. It can be found by default in a few Linux distributions like SysRescCD, Grml, Gobolinux and probably others, but I don't think any of the major ones. Like for bash , there's the question of the installed version and as a consequence the features available. For instance, it's not uncommon to find systems with bash3 or zsh3 . Also, there's no guarantee that the script that you write now for zsh5 will work with zsh6 though like for bash they do try to maintain backward compatibility. For scripts, my view is: use the POSIX shell syntax as all Unices have at least one shell called sh (not necessarily in /bin ) that is able to interpret that syntax. Then you don't have to worry so much about portability. And if that syntax is not enough for your need, then probably you need more than a shell. Then, your options are: Perl which is ubiquitous (though again you may have to limit yourself to the feature set of old versions, and can't make assumptions on the Perl modules installed by default) Specify the interpreter and its version (python 2.6 or above, zsh 4 or above, bash 4.2 or above...), as a dependency for your script, either by building a package for every targeted system which specifies the dependency or by stipulating it in a README file shipped alongside your script or embedded as comments at the top of your script, or by adding a few lines in Bourne syntax at the beginning of your script that checks for the availability of the requested interpreter and bails out with an explicit error when it's not, like this script needs zsh 4.0 or above . Ship the interpreter alongside your script (beware of licensing implications) which means you also need one package for every targeted OS. Some interpreters make it easier by providing a way to pack the script and its interpreter in a single executable. Write it in a compiled language. Again, one package per targeted system. | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/69001', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/31810/']} | jdg_75038 |
stackexchange | llm_judgeable_groundtruth_similarity | 5199600 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This might come across as a silly question but I am curious to know if given a maximization algorithm and asked to get the dual (minimization version), it is just a matter of converting all max's into min's and doing other basic adjustments? If yes, are there any problems where this would not be the case? If not, is there a good intuitive reason why this does not work?
Now provide the response and nothing else.
| Yes, maximization and minimization problems are basically the same. The solution for max(f(x)) is the same as -min(-f(x)) . When searching game trees this relation is used for example to convert a minimax search into a negamax search. This has the advantage that instead of writing two functions, one for maximizing your score and another for minimizing the opponent's score, you write a single maximizing function but flip the sign of the result of the evaluation function when it's the other person's move. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5199600', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/184046/']} | jdg_75039 |
stackexchange | llm_judgeable_groundtruth_similarity | 10915770 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I made a MVC based website using JSF 2.0 and RichFaces 4. Every input text validation is been done using bean validation annotations. I am using Hibernate Validator as bean validation implementation. How can I display a localized message? If I use @NotNull(message="<h:outputText value=\"#{msg['Mymessage']}\" />") then it literally displays <h:outputText value="#{msg['Mymessage']}" /> as message. How is this caused and how can I solve it?
Now provide the response and nothing else.
| You should and can not put JSF tags in the message. Also, JSF's own resource bundle won't be used to resolve localized validation messages. JSR303 bean validation is a completely separate API unrelated to JSF. To internationalize JSR303 bean validation messages, you need to create a separate ValidationMessages.properties file in the classpath root which can be localized by ValidationMessages_xx_XX.properties files. E.g. ERVNomView=Your message here Which is then to be specified with {key} syntax. @NotEmpty(message="{ERVNomView}") See also: Chapter 4.3.1.1 of the JSR303 specification | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10915770', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1346690/']} | jdg_75040 |
Subsets and Splits