source
stringclasses
1 value
task_type
stringclasses
1 value
in_source_id
stringlengths
1
8
prompt
stringlengths
209
40.4k
gold_standard_solution
stringlengths
0
56.7k
verification_info
stringclasses
1 value
metadata
stringlengths
138
225
problem_id
stringlengths
9
10
stackexchange
llm_judgeable_groundtruth_similarity
5627284
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here's an contrived example of what's going on: http://jsfiddle.net/adamjford/YNGcm/20/ HTML: <a href="#">Click me!</a><div></div> JavaScript: function getSomeDeferredStuff() { var deferreds = []; var i = 1; for (i = 1; i <= 10; i++) { var count = i; deferreds.push( $.post('/echo/html/', { html: "<p>Task #" + count + " complete.", delay: count }).success(function(data) { $("div").append(data); })); } return deferreds;}$(function() { $("a").click(function() { var deferreds = getSomeDeferredStuff(); $.when(deferreds).done(function() { $("div").append("<p>All done!</p>"); }); });}); I want "All done!" to appear after all of the deferred tasks have completed, but $.when() doesn't appear to know how to handle an array of Deferred objects. "All done!" is happening first because the array is not a Deferred object, so jQuery goes ahead and assumes it's just done. I know one could pass the objects into the function like $.when(deferred1, deferred2, ..., deferredX) but it's unknown how many Deferred objects there will be at execution in the actual problem I'm trying to solve. Now provide the response and nothing else.
To pass an array of values to any function that normally expects them to be separate parameters, use Function.prototype.apply, so in this case you need:

    $.when.apply($, my_array).then( ___ );

See http://jsfiddle.net/YNGcm/21/

In ES6, you can use the ... spread operator instead:

    $.when(...my_array).then( ___ );

In either case, since it's unlikely that you'll know in advance how many formal parameters the .then handler will require, that handler would need to process the arguments array in order to retrieve the result of each promise.
{}
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/5627284', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14064/']}
jdg_84041
stackexchange
llm_judgeable_groundtruth_similarity
5031778
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As far as I understand one of the purposes of adding move semantics is to optimize code by calling special constructor for copying "temporary" objects. For example, in this answer we see that it can be used to optimize such string a = x + y stuff. Because x+y is an rvalue expression, instead of deep copying we can copy only the pointer to the string and the size of the string. But as we know, modern compilers support return value optimization , so without using move semantics our code will not call the copy constructor at all. To prove it I write this code: #include <iostream>struct stuff{ int x; stuff(int x_):x(x_){} stuff(const stuff & g):x(g.x) { std::cout<<"copy"<<std::endl; }}; stuff operator+(const stuff& lhs,const stuff& rhs){ stuff g(lhs.x+rhs.x); return g;}int main(){ stuff a(5),b(7); stuff c = a+b;} And after executing it in VC++2010 and g++ in optimize mode I'm getting empty output. What kind of optimization is it, if without it my code still works faster? Could you explain what I'm understanding wrong? Now provide the response and nothing else.
Move semantics should not be thought of as an optimization device, even if they can be used as such.

If you are going to want copies of objects (either function parameters or return values), then RVO and copy elision will do the job when they can. Move semantics can help, but are more powerful than that.

Move semantics are handy when you want to do something different depending on whether the passed object is a temporary (it then binds to an rvalue reference) or a "standard" object with a name (a so-called const lvalue). If you want, for instance, to steal the resources of a temporary object, then you want move semantics (example: you can steal the contents a std::unique_ptr points to).

Move semantics allow you to return non-copyable objects from functions, which is not possible with the current standard. Also, non-copyable objects can be put inside other objects, and those objects will automatically be movable if the contained objects are.

Non-copyable objects are great, since they don't force you to implement an error-prone copy constructor. A lot of the time, copy semantics do not really make sense, but move semantics do (think about it). This also enables you to use movable std::vector<T> instances even if T is non-copyable.

The std::unique_ptr class template is also a great tool when dealing with non-copyable objects (e.g. polymorphic objects).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5031778', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/556049/']}
jdg_84042
stackexchange
llm_judgeable_groundtruth_similarity
1530763
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i have this table: Location date temp1 12-12-2009 191 14-12-2009 211 13-12-2009 172 12-12-2009 182 14-12-2009 162 18-12-2009 122 15-12-2009 14 now i want to select the latest (newest date) rows for all locations, so tin his example row 2 and row 7.How can i do this? i can select the latest for 1 location with top 1 date for a single location, i tried grouping by location, date but then i miss the temp.... Kind regards,Michel Now provide the response and nothing else.
Have a look at this:

    DECLARE @Table TABLE(
            Location INT,
            Date DATETIME,
            Temp INT
    )

    INSERT INTO @Table (Location, Date, Temp) SELECT 1, '12 Dec 2009', 19
    INSERT INTO @Table (Location, Date, Temp) SELECT 1, '14 Dec 2009', 21
    INSERT INTO @Table (Location, Date, Temp) SELECT 1, '13 Dec 2009', 17
    INSERT INTO @Table (Location, Date, Temp) SELECT 2, '12 Dec 2009', 18
    INSERT INTO @Table (Location, Date, Temp) SELECT 2, '14 Dec 2009', 16
    INSERT INTO @Table (Location, Date, Temp) SELECT 2, '18 Dec 2009', 12
    INSERT INTO @Table (Location, Date, Temp) SELECT 2, '15 Dec 2009', 14

    SELECT  t.*
    FROM    @Table t INNER JOIN
            (
                SELECT  Location, MAX(Date) MaxDate
                FROM    @Table
                GROUP BY Location
            ) MaxDates ON  t.Location = MaxDates.Location
                       AND t.Date = MaxDates.MaxDate
    ORDER BY 1

The only thing you might want to look out for is when the max date for a given location occurs more than once: the join will then return more than one result for that location/date combination. You might want to decide how to select which of these results (or all of them) you want to return.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1530763', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/103089/']}
jdg_84043
stackexchange
llm_judgeable_groundtruth_similarity
1080309
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose a group $G$ has $r$ distinct subgroups of prime order $p$. Show that $G$ contains at least $r(p-1)$ elements of order $p$. Aside: I know how to use this to prove that a group of order $56$ has a proper normal subgroup. Now provide the response and nothing else.
We can multiply $c_{pj}$ into the inner sum:
$$\sum_{p=1}^k \left(\sum_{q=1}^n a_{iq} b_{qp}\right)c_{pj}= \sum_{p=1}^k \sum_{q=1}^n a_{iq} b_{qp}c_{pj}.$$
Because the sums are finite, we can switch the order of summation:
$$\sum_{p=1}^k \sum_{q=1}^n \cdots = \sum_{q=1}^n \sum_{p=1}^k \cdots.$$
Then, since $a_{iq}$ does not depend on $p$, we can factor it out of the inner sum:
$$\sum_{p=1}^k a_{iq} b_{qp} c_{pj} = a_{iq} \sum_{p=1}^k b_{qp} c_{pj}.$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1080309', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/173027/']}
jdg_84044
stackexchange
llm_judgeable_groundtruth_similarity
20089820
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Laravel we can setup relationships like so: class User { public function items() { return $this->belongsToMany('Item'); }} Allowing us to to get all items in a pivot table for a user: Auth::user()->items(); However what if I want to get the opposite of that. And get all items the user DOES NOT have yet. So NOT in the pivot table. Is there a simple way to do this? Now provide the response and nothing else.
Looking at the source code of the class Illuminate\Database\Eloquent\Builder, we have two methods in Laravel that do this: whereDoesntHave (opposite of whereHas) and doesntHave (opposite of has).

    // SELECT * FROM users WHERE ((SELECT count(*) FROM roles WHERE user.role_id = roles.id AND id = 1) < 1) AND ...
    User::whereDoesntHave('Role', function ($query) use ($id) {
            $query->whereId($id);
        })
        ->get();

This works correctly for me!

For a simple "where not exists relationship", use this:

    User::doesntHave('Role')->get();

(Sorry, I do not understand English; I used Google Translate.)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20089820', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1200670/']}
jdg_84045
stackexchange
llm_judgeable_groundtruth_similarity
35265613
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How would you perform a full outer join a cross join of two dataframes with no columns in common using pandas? In MySQL, you can simply do: SELECT *FROM table_1[CROSS] JOIN table_2; But in pandas, doing: df_1.merge(df_2, how='outer') gives an error: MergeError: No common columns to perform merge on The best solution I have so far is using sqlite : import sqlalchemy as saengine = sa.create_engine('sqlite:///tmp.db')df_1.to_sql('df_1', engine)df_2.to_sql('df_2', engine)df = pd.read_sql_query('SELECT * FROM df_1 JOIN df_2', engine) Now provide the response and nothing else.
Update: From Paul's comment, you can now use df = df1.merge(df2, how="cross").

The older method of creating temporary columns: IIUC you need merge with temporary columns tmp on both DataFrames:

    import pandas as pd

    df1 = pd.DataFrame({'fld1': ['x', 'y'], 'fld2': ['a', 'b1']})
    df2 = pd.DataFrame({'fld3': ['y', 'x', 'y'], 'fld4': ['a', 'b1', 'c2']})

    print df1
      fld1 fld2
    0    x    a
    1    y   b1

    print df2
      fld3 fld4
    0    y    a
    1    x   b1
    2    y   c2

    df1['tmp'] = 1
    df2['tmp'] = 1

    df = pd.merge(df1, df2, on=['tmp'])
    df = df.drop('tmp', axis=1)

    print df
      fld1 fld2 fld3 fld4
    0    x    a    y    a
    1    x    a    x   b1
    2    x    a    y   c2
    3    y   b1    y    a
    4    y   b1    x   b1
    5    y   b1    y   c2
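For reference, here is a minimal runnable sketch of the newer cross-merge form. It assumes pandas 1.2 or later (where how="cross" was introduced); the column names simply mirror the example above:

    # Cross join without a temporary key column (requires pandas >= 1.2).
    import pandas as pd

    df1 = pd.DataFrame({"fld1": ["x", "y"], "fld2": ["a", "b1"]})
    df2 = pd.DataFrame({"fld3": ["y", "x", "y"], "fld4": ["a", "b1", "c2"]})

    # Every row of df1 paired with every row of df2 (2 * 3 = 6 rows).
    df = df1.merge(df2, how="cross")
    print(df)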
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35265613', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2063031/']}
jdg_84046
stackexchange
llm_judgeable_groundtruth_similarity
46960
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I've been playing with virtual machines lately, and I wondered if I could run a virtual machine inside a virtual machine. Is this possible? Is it practical? Now provide the response and nothing else.
Nesting VMs is something that has been done forever on IBM mainframe hardware. That hardware does lots of stuff to make the process very, very efficient. You can have VMs nested to an arbitrary depth and it works very well.

PC hardware has only very recently made this barely possible. A document on VMware's web site discusses it, but the gist is that you can have VMs nested 2 deep, but only on very modern hardware that supports true hardware virtualization (VT-x or AMD-V), and the VM at the second depth must be running the older BT (binary translation) style of virtualization. There are also severe restrictions on the virtual monitors you're able to run on the inner guest.

Needless to say, it's not supported, and I'd expect it to be really flaky if you do anything even remotely weird (like Hyper-V under ESX). And performance will not be good, regardless of whether it is stable.
{}
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/46960', 'https://serverfault.com', 'https://serverfault.com/users/9230/']}
jdg_84047
stackexchange
llm_judgeable_groundtruth_similarity
310832
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have two continuous random variables, $X$ and $Y$, whose distributions are unknown. I can draw samples from them and in particular, I am interested in estimating $P(X<Y)$ based on those samples. Let's assume that each sample has $n$ elements: $x_1, x_2, \ldots, x_n$ are drawn from $X$ and $y_1, y_2, \ldots, y_n$ from $Y$. Can I use $$T=\frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}I(x_i < y_j)$$as the estimate of $P(X<Y)$ if I assume $X$ and $Y$ are independent and is such an estimator asymptotically normally distributed? If I knew Also, if I assume that $X$ and $Y$ are dependent, can I use $$S=\frac{1}{n}\sum_{i=1}^{n}I(x_i < y_i)$$ as the estimator of said probability and is it asymptotically normally distributed? I'm asking about asymptotic normality because I would like to determine lower and upper bounds of $P(X<Y)$ and think that confidence intervals can act as such. P.S. $I(E)=1$ if $E$ is true, and $0$ otherwise. Now provide the response and nothing else.
The statistic $T$ is an example of a general $U$-statistic, introduced by Hoeffding in his 1948 paper A class of statistics with asymptotically normal distribution. Moreover, it is among the most famous of that class, namely the Mann-Whitney U test. It has lower mean squared error than $S$, for it equals $E\left(S\mid X_{\left(1\right)},X_{\left(2\right)},\ldots,Y_{\left(1\right)},Y_{\left(2\right)},\ldots\right)$, and since the order statistics are sufficient, the Rao-Blackwell theorem can be applied. Furthermore, it is asymptotically normally distributed, a fact that follows from a general theorem on $U$-statistics; see the chapters on $U$-statistics in e.g. Lehmann's Elements of Large Sample Theory or Serfling's Approximation Theorems of Mathematical Statistics. Its limiting distribution, with a small-sample correction, can be found on Wikipedia, and it is implemented in R through the function $\mathtt{wilcox.test}$ (with the option $\mathtt{paired=FALSE}$).

Note that the definition of the Mann-Whitney U test is ambiguous. Sometimes it is defined as the $T$ above, but sometimes it counts "victories for $x_i$" as positive and "victories for $y_i$" as negative. This is the approach taken in e.g. the R function $\mathtt{wilcox.test}$.
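As a quick numerical illustration, here is a minimal Python sketch of computing the estimator $T$ and a simple percentile-bootstrap interval for $P(X<Y)$ in the independent case. The data, sample sizes, and the choice of a bootstrap interval are all just for the example (the answer itself points to wilcox.test in R for the test-based approach):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.0, scale=1.0, size=200)   # sample from X (made-up data)
    y = rng.normal(loc=0.5, scale=1.0, size=200)   # sample from Y (made-up data)

    def estimate_p_x_less_y(x, y):
        # T = (1 / (n*m)) * sum_i sum_j 1{x_i < y_j}
        return np.mean(x[:, None] < y[None, :])

    t_hat = estimate_p_x_less_y(x, y)

    # Percentile bootstrap for a rough interval on P(X < Y).
    boots = []
    for _ in range(2000):
        xb = rng.choice(x, size=x.size, replace=True)
        yb = rng.choice(y, size=y.size, replace=True)
        boots.append(estimate_p_x_less_y(xb, yb))
    lo, hi = np.percentile(boots, [2.5, 97.5])

    print(f"T = {t_hat:.3f}, 95% bootstrap interval ~ ({lo:.3f}, {hi:.3f})")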
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/310832', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/100101/']}
jdg_84048
stackexchange
llm_judgeable_groundtruth_similarity
74022
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've built a kernel in tmpfs, then I rebooted. Now I see a message when I compile the 3rd party module, NO SIGN [M] XXXX.ko How can I get it signed? The key pair generated during rpmbuild is lost already I guess Now provide the response and nothing else.
This was surprisingly lacking in documentation. I found this file, module-signing.txt, which is part of the RHEL6 kernel documentation. In this document it shows how to generate signing keys, assuming you want to sign all your modules as part of a kernel build:

    cat >genkey <<EOF
    %pubring kernel.pub
    %secring kernel.sec
    Key-Type: DSA
    Key-Length: 512
    Name-Real: A. N. Other
    Name-Comment: Kernel Module GPG key
    %commit
    EOF
    make scripts/bin2c
    gpg --homedir . --batch --gen-key genkey
    gpg --homedir . --export --keyring kernel.pub keyname | scripts/bin2c ksign_def_public_key __initdata >crypto/signature/key.h

Also the article from Linux Journal titled Signed Kernel Modules has some good details and steps on how to do pieces of this, but I couldn't find the user-space tools, extract_pkey and mod, that it references. You might want to poke around Greg Kroah's site; you may find something useful in one of his presentations.

References
- Signed Kernel Modules - Linux Journal article
- Howto Use Signed Kernel Modules
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/74022', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/11318/']}
jdg_84049
stackexchange
llm_judgeable_groundtruth_similarity
21732447
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am having huge problems trying to use bower (to install foundation 5) or get anything bower related to work on the command line. I've installed ruby 1.9.3 git (with run from command prompt option) Nodejs I've successfully ran npm install -g bower grunt-cligem install foundation and these have ran fine. I've looked in the npm/node_modules directory and I can see bower folder in there. Every time I type bower either into the cmd , ruby cmd or git bash I get command not recognized I've set the git path in the environment variables. What's annoying is the instruction on the bower site. It states: npm install -g bowerbower install Well this doesn't work! Now provide the response and nothing else.
I bumped into this problem after npm install -g bower too. I solved the problem by adding npm's binary folder to my path. Here are some helpful hints for doing that:

1. Find the location of your npm global binaries: npm config get prefix. This path may look something like C:\Users\username\AppData\Roaming\npm (or C:\ProgramData\chocolatey\lib\nodejs.commandline.X.XX.XX\tools if you use Chocolatey).
2. Add the path from step 1 to your Path. Open the Windows Control Panel, search for "environment", then click on either "Edit environment variables for your account" or "Edit the system environment variables". Find the variable named Path or PATH, or create one if it doesn't exist. Paste the path from step 1 here (; delimited).
3. You may need to restart your command prompt window.

You should now be able to enter bower commands.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/21732447', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2932975/']}
jdg_84050
stackexchange
llm_judgeable_groundtruth_similarity
25043759
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to change the background-color of rows that contain my found class in a striped bootstrap table. It works for even rows because bootstrap doesn't have a background color for them, but odd rows I am blocked by bootstraps CSS. Bootstrap CSS: .table-striped > tbody > tr:nth-child(odd) > td,.table-striped > tbody > tr:nth-child(odd) > th { background-color: #f9f9f9;} Custom CSS: tr.found{ background-color:#CECBCB;} How would I override bootstrap's CSS for only a single row at a time (as you can see in demo, odd rows are not overridden)? BOOTPLY DEMO Now provide the response and nothing else.
Write a more specific selector to override the Bootstrap one:

    table.table.table-striped tr.found td {
        background-color: #CECBCB;
    }

Demo

Also, not only specificity matters here: make sure you apply the background to the td element and not the tr, because Bootstrap applies its stripe color to the td element, so applying the background to the tr alone won't have the effect you want.

As you said that you wanted an explanation of the selector I wrote, here it goes; let's break it down. Starting with table.table.table-striped: here I am selecting a table element having the classes .table AS WELL AS .table-striped. Going further with the selector, tr.found selects the tr elements having a class called .found, and lastly we select the nested td elements.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25043759', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2232744/']}
jdg_84051
stackexchange
llm_judgeable_groundtruth_similarity
30699119
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Flink has been compared to Spark , which, as I see it, is the wrong comparison because it compares a windowed event processing system against micro-batching; Similarly, it does not make that much sense to me to compare Flink to Samza. In both cases it compares a real-time vs. a batched event processing strategy, even if at a smaller "scale" in the case of Samza. But I would like to know how Flink compares to Storm, which seems conceptually much more similar to it. I have found this (Slide #4) documenting the main difference as "adjustable latency" for Flink. Another hint seems to be an article by Slicon Angle that suggest that Flink better integrates into a Spark or HadoopMR world, but no actual details are mentioned or referenced. Finally, Fabian Hueske himself notes in an interview that "Compared to Apache Storm, the stream analysis functionality of Flink offers a high-level API and uses a more light-weight fault tolerance strategy to provide exactly-once processing guarantees." All that is a bit sparse for me and I do not quite get the point. Can someone explain what problem(s?) with stream processing in Storm is (are?) exactly solved by Flink? What is Hueske referring to by the API issues and their "more light-weight fault tolerance strategy"? Now provide the response and nothing else.
Disclaimer: I'm an Apache Flink committer and PMC member and only familiar with Storm's high-level design, not its internals.

Apache Flink is a framework for unified stream and batch processing. Flink's runtime natively supports both domains due to pipelined data transfers between parallel tasks, which includes pipelined shuffles. Records are immediately shipped from producing tasks to receiving tasks (after being collected in a buffer for network transfer). Batch jobs can optionally be executed using blocking data transfers.

Apache Spark is a framework that also supports batch and stream processing. Flink's batch API looks quite similar and addresses similar use cases as Spark, but differs in the internals. For streaming, both systems follow very different approaches (mini-batches vs. streaming) which makes them suitable for different kinds of applications. I would say comparing Spark and Flink is valid and useful; however, Spark is not the most similar stream processing engine to Flink.

Coming to the original question, Apache Storm is a data stream processor without batch capabilities. In fact, Flink's pipelined engine internally looks a bit similar to Storm, i.e., the interfaces of Flink's parallel tasks are similar to Storm's bolts. Storm and Flink have in common that they aim for low-latency stream processing by pipelined data transfers. However, Flink offers a more high-level API compared to Storm. Instead of implementing the functionality of a bolt with one or more readers and collectors, Flink's DataStream API provides functions such as Map, GroupBy, Window, and Join. A lot of this functionality must be manually implemented when using Storm.

Another difference is the processing semantics. Storm guarantees at-least-once processing while Flink provides exactly-once. The implementations which give these processing guarantees differ quite a bit. While Storm uses record-level acknowledgments, Flink uses a variant of the Chandy-Lamport algorithm. In a nutshell, data sources periodically inject markers into the data stream. Whenever an operator receives such a marker, it checkpoints its internal state. When a marker has been received by all data sinks, the marker (and all records which have been processed before) are committed. In case of a failure, all source operators are reset to their state when they saw the last committed marker, and processing is continued. This marker-checkpoint approach is more lightweight than Storm's record-level acknowledgments (a toy sketch of the marker mechanism is given below). This slide set and the corresponding talk discuss Flink's streaming processing approach including fault tolerance, checkpointing, and state handling.

Storm also offers an exactly-once, high-level API called Trident. However, Trident is based on mini-batches and hence more similar to Spark than Flink.

Flink's adjustable latency refers to the way that Flink sends records from one task to the other. I said before that Flink uses pipelined data transfers and forwards records as soon as they are produced. For efficiency, these records are collected in a buffer which is sent over the network once it is full or a certain time threshold is met. This threshold controls the latency of records because it specifies the maximum amount of time that a record will stay in a buffer without being sent to the next task. However, it cannot be used to give hard guarantees about the time it takes for a record from entering to leaving a program, because this also depends on the processing time within tasks and the number of network transfers, among other things.
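The promised toy sketch of the marker mechanism, in plain Python with invented names. It only illustrates the idea of snapshotting operator state when a marker arrives; it is not how Flink implements checkpointing:

    from dataclasses import dataclass

    @dataclass
    class Marker:
        checkpoint_id: int

    def run_operator(stream):
        state = {"count": 0, "sum": 0}   # operator-internal state
        snapshots = {}                   # checkpoint_id -> copy of state
        for element in stream:
            if isinstance(element, Marker):
                # On a marker, persist the state reached so far; after a failure
                # the operator would be reset to the last committed snapshot.
                snapshots[element.checkpoint_id] = dict(state)
            else:
                state["count"] += 1
                state["sum"] += element
        return state, snapshots

    stream = [3, 5, Marker(1), 2, Marker(2), 7]
    final_state, snapshots = run_operator(stream)
    print(final_state)   # {'count': 4, 'sum': 17}
    print(snapshots)     # {1: {'count': 2, 'sum': 8}, 2: {'count': 3, 'sum': 10}}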
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/30699119', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1847419/']}
jdg_84052
stackexchange
llm_judgeable_groundtruth_similarity
47286
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: $\newcommand{\Gal}{\mathrm{Gal}} \newcommand{\Q}{\mathbf Q}$ Consider the set of all Galois extensions $E/\Q(\zeta_n)$ of a given cyclotomic field $\Q(\zeta_n)$ such that $$\Gal(E/\Q) \simeq\Gal(E/\Q(\zeta_n)) \rtimes \Gal(\Q(\zeta_n)/\Q).$$ In other words, such that there is a homomorphism $$\Gal(E/\Q) \leftarrow \Gal(\Q(\zeta_n)/\Q)$$ inverting the natural quotient map $$\Gal(E/\Q) \to \frac{\Gal(E/\Q) }{\Gal(E/\Q(\zeta_n))}\simeq \Gal(\Q(\zeta_n)/\Q).$$ Are they classified? Is there a "largest" one? What can be said about them (or about their cohomology) in general? Are there any prominent examples of such extensions arising "in nature"? Now provide the response and nothing else.
$\newcommand{\Gal}{\mathrm{Gal}} \newcommand{\Q}{\mathbf Q} \newcommand{\Z}{\mathbf Z} \newcommand{\F}{\mathbf F}$ Abbreviate $K=\Q(\zeta_n)$. Note first that a galoisian extension $E$ of $K$ need not be galoisian over $\Q$, so I'm assuming that you are considering only those $E$ which are. We then have an exact sequence
$$1\to\Gal(E|K)\to\Gal(E|\Q)\to\Gal(K|\Q)\to1$$
in which the last group is $(\Z/n\Z)^\times$, of order $\varphi(n)$. A sufficient condition for the sequence to split is: the degree $[E:K]$ is prime to $\varphi(n)$ (Schur-Zassenhaus). I don't think there is a classification of all such extensions.

Note finally that this answer does not depend on the fact that $K$ is the cyclotomic field of level $n$, or even the fact that the base field is $\Q$. It applies to any galoisian tower $E|K|F$: the associated short exact sequence
$$1\to\Gal(E|K)\to\Gal(E|F)\to\Gal(K|F)\to1$$
splits if the degrees $[E:K]$, $[K:F]$ are mutually prime.

Addendum (at Alex Bartel's suggestion): Let's return to the case $F=\Q$, $K=\Q(\zeta_n)$, $\Delta=\Gal(K|\Q)$, and suppose that $n$ is a prime $p$, for simplicity. Kummer theory tells us that abelian extensions $E|K$ of exponent dividing $p$ correspond bijectively to subgroups $D\subset K^\times/K^{\times p}$ under $E=K(\root p\of D)$; such an $E$ is galoisian over $\Q$ if and only if the subgroup $D$ is $\Delta$-stable. When such is the case, we get examples of the kind of extensions envisaged in the question, with "split Galois group". I guess the group $\Gal(E|\Q)$ will be commutative if and only if the $\Delta$-action on the $\F_p$-space $D$ is via the "mod $p$" cyclotomic character, namely the canonical isomorphism $\Delta\to\F_p^\times$.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/47286', 'https://mathoverflow.net', 'https://mathoverflow.net/users/434/']}
jdg_84053
stackexchange
llm_judgeable_groundtruth_similarity
249451
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Proposition: An abelian group is not isomorphic to an non-abelian group. I need to prove this proposition and conclude that:$$\Bbb{Z}/8\Bbb{Z},\Bbb{Z}/4\Bbb{Z}\times \Bbb{Z}/2\Bbb{Z},\Bbb{Z}/2\Bbb{Z}\times\Bbb{Z}/2\Bbb{Z}\times\Bbb{Z}/2\Bbb{Z}\not\cong D_4$$$$\Bbb{Z}/8\Bbb{Z},\Bbb{Z}/4\Bbb{Z}\times \Bbb{Z}/2\Bbb{Z},\Bbb{Z}/2\Bbb{Z}\times\Bbb{Z}/2\Bbb{Z}\times\Bbb{Z}/2\Bbb{Z}\not\cong Q$$($D_4$ stands for the dihedral group of order eight and $Q$ for the quaternion group of order eight) But I have no idea how to begin? Can someone give me some hints/tips? Now provide the response and nothing else.
Suppose you have an abelian group $G$ and an isomorphism $\varphi: G \to H$, for some group $H$. Then for any $a,b \in G$, you have that $\varphi(ab) = \varphi(a)\varphi(b)$, but since $ab = ba$ ($G$ is abelian), we have that $\varphi(ab) = \varphi(ba) = \varphi(b)\varphi(a)$, so $\varphi(a)\varphi(b) = \varphi(b)\varphi(a)$ And since $\varphi$ is surjective, $H$ must be abelian. I have proved "If $G \simeq H$ and $G$ is abelian, then $H$ is abelian", which is equivalent to saying "If $H$ is not abelian then either $G$ is not abelian or $G \not\simeq H$". Since we have assumed that $G$ is abelian, we know that $G \not\simeq H$, and we are done.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/249451', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/44180/']}
jdg_84054
stackexchange
llm_judgeable_groundtruth_similarity
19561
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I asked this question on stack overflow, but haven't gotten to solve the problem yet, then I thought I should rather post it here. Is there a way to bulk-reverse the order of headings in Emacs org-mode? I'd like to change this kind of list ( but much bigger, not just two or three items, thus "bulk" ): * personal computer * Windows * Mac * Linux* StackExchange * stackoverflow* countries* people to this: * people* countries* StackExchange * stackoverflow* personal computer * Windows * Mac * Linux I just want to reverse the list so that the last heading with its subtrees would be the first heading of a new list, second-last would be the second heading of the new list, and so on. Now provide the response and nothing else.
The following interactive function does what you want. Nevertheless, Junacho's warning applies: this code is very fragile. It assumes that the parsed buffer has the structure (org-data nil (headline...) (headline...) ...). Anything else will fail.

Put the following code into your initialization file, e.g. ~/.emacs, evaluate this buffer or re-start emacs, open your org-file and run M-x org-reverse-headers.

    (defun org-reverse-headers ()
      "Reverse headers of current org file"
      (interactive)
      (let (str
            (content (nthcdr 2 (org-element-parse-buffer 'headline))))
        ;; `org-element-parse-buffer' returns (ORG-DATA PROPERTIES CONTENT),
        ;; CONTENT contains the headlines
        (setq content (nreverse content)) ;; reversal of sections
        (insert
         (with-output-to-string
           (dolist (header content)
             (princ (buffer-substring (plist-get (cadr header) :begin)
                                      (plist-get (cadr header) :end))))
           (delete-region (point-min) (point-max))))))
{}
{'log_upvote_score': 4, 'links': ['https://emacs.stackexchange.com/questions/19561', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/10707/']}
jdg_84055
stackexchange
llm_judgeable_groundtruth_similarity
991008
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Prove that if $H$ or $K$ are normal subgroups then $HK=\{hk\mid h\in H,k\in K\}$ is a subgroup. Then if both are normal subgroups, prove that HK is normal. Now provide the response and nothing else.
Clearly $HK=\{hk:h\in H,k\in K\}$ is nonempty (as $H$ and $K$ are nonempty). We appeal to the subgroup test to show that $HK$ is indeed a subgroup of $G$. Suppose that $h_1k_1$, $h_2k_2$ are two elements of $HK$ (so $h_1,h_2\in H$ and $k_1,k_2\in K$). We must show that $(h_1k_1)(h_2k_2)^{-1}\in HK$ also. Then by the subgroup test, we can conclude that $HK<G$ ("$<$" denotes subgroup here). We have $(h_2k_2)^{-1}=k_2^{-1}h_2^{-1}$ (check this), and so:
$$(h_1k_1)(h_2k_2)^{-1}= (h_1k_1)k_2^{-1}h_2^{-1}.$$
Writing $k_1k_2^{-1}=k\in K$, the above becomes $h_1 kh_2^{-1}\enspace (*)$.

Case 1: $K$ normal. If $K$ is normal, then $h_2 kh_2^{-1}=\overline{k}$ for some $\overline{k}\in K$. Left multiplying $h_2 kh_2^{-1}=\overline{k}$ by $h_2^{-1}$ gives: $kh_2^{-1}=h_2^{-1}\overline{k}$. With this in mind $(*)$ becomes:
$$h_1 kh_2^{-1}= h_1h_2^{-1}\overline{k}.$$
Since $H$ is a subgroup, $h_1h_2^{-1}\in H$ and so the above element belongs to $HK$ as desired.

Case 2: $H$ normal. The argument here is incredibly similar to the one above. $kh_2^{-1}k^{-1}=h$ for some $h\in H$ by normality of $H$ (since $h_2^{-1}\in H$), and so $kh_2^{-1}=hk$. Substituting this into $(*)$ gives the desired result.

Thus $HK$ is a subgroup of $G$ provided at least one of $H,K$ is normal.

$HK$ is normal if $H$ and $K$ are: Let $hk$ be an arbitrary element of $HK$ and let $g\in G$. We must show $g(hk)g^{-1}\in HK$. Since $g^{-1}g=e$, where $e$ is the group identity of $G$, we can rewrite the above as:
$$g(hk)g^{-1}=(gh)(g^{-1}g)(kg^{-1})=(ghg^{-1})(gkg^{-1}).$$
Since $H$ is normal, $ghg^{-1}\in H$. Since $K$ is normal, $gkg^{-1}\in K$. Thus the above belongs to $HK$, proving $HK$ is normal.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/991008', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/130948/']}
jdg_84056
stackexchange
llm_judgeable_groundtruth_similarity
2811991
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\int_{-3}^5 f(x)\,dx$$ for$$ f(x) =\begin{cases}1/x^2, & \text{if }x \neq 0 \\-10^{17}, & \text{if }x=0\end{cases}$$ I tried with Newton Leibniz formula, is this correct ? $\int_{-3}^0 f(x)dx$ + $\int_{0}^5 f(x)dx$ = $1/x^2 |_{-3}^{0} $ $ + $ $1/x^2 |_0^5$= $3/(-3)^2+10^{17}+(-10^{17})-3/5^2)$= $16/75$ I know I made a mistake, but I dont know what, could someone please correct me and help me. Now provide the response and nothing else.
The $-10^{17}$ is a red herring: the value of a function at a single point has no effect on the integral. On the other hand, this does signal that something funny is likely to be happening as $x \to 0$, and indeed it does: the integrand goes to $\infty$ there. Since the integrand is unbounded, this is not an ordinary Riemann integral, but rather an improper integral. For that, you want to take a limit:
$$ \int_{-3}^5 f(x)\; dx = \lim_{a \to 0-} \int_{-3}^a f(x)\; dx + \lim_{b \to 0+} \int_b^5 f(x)\; dx $$
However, (if you use a correct antiderivative) you'll find that both of these limits are $+\infty$. So the conclusion is that the improper integral does not exist (or is $+\infty$, depending on your point of view).
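Explicitly, using the correct antiderivative $-1/x$ (the attempt above used $1/x^2$ as its own antiderivative, which is the main slip):
$$\lim_{b \to 0^+} \int_b^5 \frac{dx}{x^2} = \lim_{b \to 0^+}\left[-\frac{1}{x}\right]_b^5 = \lim_{b \to 0^+}\left(-\frac{1}{5} + \frac{1}{b}\right) = +\infty,$$
$$\lim_{a \to 0^-} \int_{-3}^a \frac{dx}{x^2} = \lim_{a \to 0^-}\left[-\frac{1}{x}\right]_{-3}^a = \lim_{a \to 0^-}\left(-\frac{1}{a} - \frac{1}{3}\right) = +\infty,$$
since $-1/a \to +\infty$ as $a \to 0^-$. Both one-sided pieces diverge, so the improper integral diverges.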
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2811991', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/525516/']}
jdg_84057
stackexchange
llm_judgeable_groundtruth_similarity
99614
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I suspect that the $N$ factorial in the partition function for N indistinguishable particles $$ Z = \frac{ Z_0^N } {N!} $$is an approximation. Please someone correct me if I am wrong and why or why not. Thanks. A simple case : each particle has two states with energy $0$ and $E$. The partition function for a single particle is $$ Z_0 = 1 + e^{- \beta E} . $$If there are only two particles, there is the total partition function$$Z = \frac{ Z_0^2 } {2}.$$But regarding the whole system consisting of these two particles, we can also write$$Z = 1 + e^{- \beta E} + e^{-2 \beta E} .$$And it is certain that $$ \frac{ Z_0^2 } {2} \neq 1 + e^{- \beta E} + e^{-2 \beta E}$$ Now provide the response and nothing else.
In the figure above, consider the different configurations that are possible with 3 particles and 5 energy levels. Dividing by 3! gets the symmetry factor correct only for configurations of type 1, but is wrong for configurations of type 2 and 3. You can see this by explicitly writing out $Z$ and comparing with $z^3/3!$. That is why the OP's statement that $z^3/3!$ is an approximation is correct. I can add details to this, if necessary. (Notation: the lower-case $z$ is the single-particle partition function.)

To add to Josh's remark above, even for fermions, where only terms of type 1 are allowed, the expansion of $z^N/N!$ contains terms of type 2 and 3 which are not present in $Z$. Nevertheless, the dominant contribution at large $N$ (as well as a large number of energy levels) is from terms of type 1. Hence my statement that it is a fairly good approximation holds.
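To spell this out on the question's own two-level example (just expanding the formulas): with $z = 1 + e^{-\beta E}$,
$$\frac{z^2}{2!} = \frac{1}{2}\left(1 + 2e^{-\beta E} + e^{-2\beta E}\right),$$
while the exact two-particle partition functions are
$$Z_{\text{dist}} = 1 + 2e^{-\beta E} + e^{-2\beta E}, \qquad Z_{\text{BE}} = 1 + e^{-\beta E} + e^{-2\beta E}, \qquad Z_{\text{FD}} = e^{-\beta E}$$
for distinguishable particles, bosons, and fermions respectively. The $1/N!$ correction only reproduces the terms in which all particles occupy different single-particle states (here the cross term $e^{-\beta E}$), which is why $z^N/N!$ becomes a good approximation only when the number of thermally accessible states is much larger than $N$.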
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/99614', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/14234/']}
jdg_84058
stackexchange
llm_judgeable_groundtruth_similarity
2982735
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Isn't every language compiled into low-level computer language? If so, shouldn't all languages have the same performance? Just wondering... Now provide the response and nothing else.
As pointed out by others, not every language is translated into machine language; some are translated into some form (bytecode, reverse Polish, AST) that is interpreted. But even among languages that are translated to machine code,

- Some translators are better than others
- Some language features are easier to translate to high-performance code than others

An example of a translator that is better than some others is the GCC C compiler. It has had many years' work invested in producing good code, and its translations outperform those of the simpler compilers lcc and tcc, for example.

An example of a feature that is hard to translate to high-performance code is C's ability to do pointer arithmetic and to dereference pointers: when a program stores through a pointer, it is very difficult for the compiler to know what memory locations are affected. Similarly, when an unknown function is called, the compiler must make very pessimistic assumptions about what might happen to the contents of objects allocated on the heap. In a language like Java, the compiler can do a better job translating because the type system enforces greater separation between pointers of different types. In a language like ML or Haskell, the compiler can do better still, because in these languages, most data allocated in memory cannot be changed by a function call. But of course object-oriented languages and functional languages present their own translation challenges.

Finally, translation of a Turing-complete language is itself a hard problem: in general, finding the best translation of a program is an NP-hard problem, which means that the only solutions known potentially take time exponential in the size of the program. This would be unacceptable in a compiler (can't wait forever to compile a mere few thousand lines), and so compilers use heuristics. There is always room for improvement in these heuristics.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2982735', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/224922/']}
jdg_84059
stackexchange
llm_judgeable_groundtruth_similarity
8582176
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Shouldn't the content of my container be cut off when the container has border-radius ? Sample HTML and CSS: .progressbar { height: 5px; width: 100px; border-radius: 5px; }.buffer { width: 25px; height: 5px; background: #999999; } <div class="progressbar"> <div class="buffer"></div></div> As you can see I use border-radius on the container ( .progressbar ), but the content ( .buffer ) goes 'outside' the container . I'm seeing this on Google Chrome. Is this the expected behavior? P.S. This isn't about how to fix it, this is about whether it should work like this. Now provide the response and nothing else.
Is this the expected behavior? Yes, as crazy as it sounds, it actually is. Here's why:

The default overflow for <div> elements (and most other things) is visible, and the spec says this about overflow: visible:

    visible
    This value indicates that content is not clipped, i.e., it may be rendered outside the block box.

In turn, §5.3 Corner clipping in the Backgrounds and Borders module says:

    A box's backgrounds, but not its border-image, are clipped to the appropriate curve (as determined by 'background-clip'). Other effects that clip to the border or padding edge (such as 'overflow' other than 'visible') also must clip to the curve. The content of replaced elements is always trimmed to the content edge curve. Also, the area outside the curve of the border edge does not accept mouse events on behalf of the element.

The sentence that I've emphasized specifically mentions that the overflow value of the box must be something other than visible (that means auto, hidden, scroll and others) in order for the corners to clip its children. If the box is defined to have visible overflow, which like I said is the default for most visual elements, then the content is not supposed to be clipped at all. And that's why the square corners of .buffer go over the rounded corners of .progressbar.

Consequently, the simplest way to get .buffer to clip within .progressbar's rounded corners is to add an overflow: hidden style to .progressbar, as shown in this updated fiddle.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8582176', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/508666/']}
jdg_84060
stackexchange
llm_judgeable_groundtruth_similarity
142380
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Based on figure. Maybe a simple question, but I'd like to know why that if forced response is zero, then $$\omega L = \frac{1}{\omega C}$$ where \$L\$ is the inductance and \$C\$ is the capacitance. Where in this case, \$\omega\$ can be solved for to obtain the answer. Now provide the response and nothing else.
Everything is done in "sections". Where a section goes depends on its use. What gets put into a section, again, depends on its use.

In general you get two "data" sections - .data and .rodata. .data is stored in RAM and is read-write. .rodata is stored in Flash and is read-only. Things that are "const" generally end up in .rodata. Literals (string, numeric, etc.) also end up in .rodata.

For instance, these would normally be stored in Flash:

    const char *foo = "foo";
    const int bar = 234;
    "baz" (string literal)

These will be stored in RAM:

    char foo[10];
    int bar;

But then of course, it can be more MCU specific, especially when you get into Harvard (split RAM / Flash address spaces) architectures, where accessing data stored in Flash can be more of a special task and not done as normal variables. For instance, the const char *foo = "foo"; would be copied into RAM for easy access during the initialization of the program.

When you allocate a variable of any size within a function, such as:

    void func() {
        char mybuf[100];
    }

the variable will be allocated as part of the stack (or an emulated stack-like system using static variables on MCUs with a limited hardware stack, such as low-end 8-bit PICs). As soon as you return from the function, all the stack space is reclaimed.

However, if you allocate it using malloc():

    void func() {
        char *mybuf = malloc(100);
    }

it will be allocated on the heap, and it is your responsibility to reclaim that space using free().

The register keyword, as in:

    int register foo;

will, if a register can be allocated, store the data purely within that allocated register. It will never touch RAM, and consequently it will run faster. However, if a register cannot be allocated (they're all in use in the code) then the register keyword will be ignored and the variable allocated in RAM as per normal.

One thing to note:

    char register *foo;

That will only hold the pointer to the data in a register, not the data itself. The data itself will still be in RAM.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/142380', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/57531/']}
jdg_84061
stackexchange
llm_judgeable_groundtruth_similarity
226820
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Assumptions: Computer architecture: Describes how the different modules of a processor interact with each other. A computer architecture is defined using vhdl files Computer Organization: Describes the physical layout of the processor modules on silicon. A computer organization is defined using a set of photo masks (and manufacturing process eg chemical, that goes at each step) Computer Organization, therefore, requires that the fab process be taken into account. ARM is not in the fabrication business, therefore it does not sell photo masks. My question(s): What exactly is ARM selling to a vendor (eg: freescale)? For a SoC (System On Chip), (eg: iMx6 ), which part is ARM and which is Freescale? Who did the integration? Now provide the response and nothing else.
You're using those terms wrong. "Computer organization" is a rarely-used term for the microarchitecture, and "computer architecture" is a superset of that.

Integrated circuit IP blocks come in two basic forms:

- A soft macro is the RTL (VHDL or Verilog) that describes the functional implementation of the IP. This is compiled into a gate-level netlist, which is then turned into a physical layout to produce the mask set for manufacturing. Here's an example from Cadence -- an Ethernet MAC. When you buy it, you get Verilog files, documentation, and a Verilog testbench for verification.
- A hard macro is a physical layout of the IP suitable for a given process. It's added to the larger chip layout as a single block, which saves some steps in the design process. Here's another Cadence example -- an Ethernet PHY. It's offered in 180nm and 130nm processes at TSMC, UMC, and SMIC, and is delivered to the customer in the form of GDSII layout files.

ARM sells both of these. The MCUs I've worked on usually use soft macros of ARM Cortex CPUs. We had some older product with ARM7 hard macros, but I don't know if they were hardened by ARM or us. Today, ARM has hard macro versions of the Cortex-A series listed on their web site. Most of their products are synthesizable (soft macros), though. It looks like you can download the (soft) Cortex-M0 for free for non-commercial use on the ARM DesignStart site.

In an SoC, the ARM part is just the CPU. (The designer can also buy peripheral IP from ARM, but it's not required.) The SoCs I've worked on have a mix of third-party and internal IP.
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/226820', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/18887/']}
jdg_84062
stackexchange
llm_judgeable_groundtruth_similarity
14782901
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Having a solid experience in non-Java and non-Android area, I'm learning Android. I have a lot of confusion with different areas, one of them is how to handle button clicks. There are at least 4 way of doing that (!!!), they are briefly listed here for consistency purpose I will list them: Have a member of the View.OnClickListener class in the activity and assign it to an instance that will handle onClick logic in the onCreate activity method. Create 'onClickListener' in the 'onCreate' activity method and assign it to the button using setOnClickListener Implement 'onClickListener' in activity itself and assign 'this' as a listener for the button. For the case if activity has few buttons, button id should be analyzed to execute 'onClick' handler for the proper button Have public method on the activity that implements 'onClick' logic and assign it to the button in the activity xml declaration Question #1: Are those all methods, is there any other option? (I don't need any other, just curious) For me, the most intuitive way would be the latest one: it requires the least amount of code to be typed and is the most readable (at least for me). Though, I don't see this approach used widely. What are cons for using it? Question #2: What are pros/cons for each of these methods? Please share either your experience or a good link. Any feedback is welcome! P.S. I've tried to Google and find something for this topic, but the only things I've found are description "how" to do that, not why is it good or bad. Now provide the response and nothing else.
Question 1: Unfortunately, the one which you say is most intuitive is the least used in Android. As I understand it, you should separate your UI (XML) and computational functionality (Java class files). It also makes for easier debugging. It is actually a lot easier to read this way and think about Android, in my opinion.

Question 2: I believe the two mainly used are #2 and #3. I will use a Button clickButton as an example.

2 is in the form of an anonymous class.

    Button clickButton = (Button) findViewById(R.id.clickButton);
    clickButton.setOnClickListener(new OnClickListener() {
        @Override
        public void onClick(View v) {
            // TODO Auto-generated method stub
            // *** Do what you want with the click here ***
        }
    });

This is my favorite as it has the onClick method right next to where the button variable was set with findViewById. It seems very neat and tidy that everything that deals with this clickButton Button view is located here.

A con that my coworker points out: imagine you have many views that need an onClick listener. You can see that your onCreate will get very long. So that's why he likes to use:

3 Say you have 5 clickButtons:

Make sure your Activity/Fragment implements OnClickListener.

    // in onCreate
    Button mClickButton1 = (Button) findViewById(R.id.clickButton1);
    mClickButton1.setOnClickListener(this);
    Button mClickButton2 = (Button) findViewById(R.id.clickButton2);
    mClickButton2.setOnClickListener(this);
    Button mClickButton3 = (Button) findViewById(R.id.clickButton3);
    mClickButton3.setOnClickListener(this);
    Button mClickButton4 = (Button) findViewById(R.id.clickButton4);
    mClickButton4.setOnClickListener(this);
    Button mClickButton5 = (Button) findViewById(R.id.clickButton5);
    mClickButton5.setOnClickListener(this);

    // somewhere else in your code
    public void onClick(View v) {
        switch (v.getId()) {
            case R.id.clickButton1: {
                // do something for button 1 click
                break;
            }
            case R.id.clickButton2: {
                // do something for button 2 click
                break;
            }
            // .... etc
        }
    }

This way, as my coworker explains, is neater in his eyes, as all the onClick computation is handled in one place and not crowding the onCreate method. But the downside I see is that the views themselves, and any other object created in onCreate and used by the onClick method, will have to be made into fields.

Let me know if you would like more information. I didn't answer your question fully because it is a pretty long question. And if I find some sites I will expand my answer; right now I'm just giving some experience.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/14782901', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/159179/']}
jdg_84063
stackexchange
llm_judgeable_groundtruth_similarity
3484050
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to know in the following case which is a better option: In the PHP script, if the $fileSize variable is larger than 100, I stop the script; Case I: <?phpif ( $fileSize > 100 ){ $results['msg'] = 'fileSize is too big!'; echo json_encode( $results ); exit();} Case II: <?phpif ( $fileSize > 100 ){ $results['msg'] = 'fileSize is too big!'; exit( json_encode( $results ) );} Case III: <?phpif ( $fileSize > 100 ){ $results['msg'] = 'fileSize is too big!'; return( json_encode( $results ) );} Which of the three (3) options above is the best? Now provide the response and nothing else.
Since you are using exit and return within the global scope (not inside a function), then the behavior is almost the same. The difference in this case will appear if your file is called through include() or require() . exit will terminate the program, while return will take the control back to the calling script (where include or require was called).
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3484050', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/391104/']}
jdg_84064
stackexchange
llm_judgeable_groundtruth_similarity
30622906
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there another way to connect Google API client? I use auto complete places and I have to use this code some where in MYFRAGMENT mGoogleApiClient = new GoogleApiClient.Builder(MainActivity.this) .addApi(Places.GEO_DATA_API) .enableAutoManage(this, GOOGLE_API_CLIENT_ID, this) .addConnectionCallbacks(this).build(); My problem with enableAutoManage(this, GOOGLE_API_CLIENT_ID, this) .addConnectionCallbacks(this).build(); I can't deal with it because when I replace this with getActivity() I have many problem with casting thanks for help and sorry if this question is silly. Now provide the response and nothing else.
If you want to use enableAutoManage then you must make your activity extend FragmentActivity. The callbacks it makes are required for the automatic management of the GoogleApiClient to work. So the easiest solution is to add extends FragmentActivity to your activity. Then your cast would not fail and cause the app to crash at runtime.

The alternate solution is to manage the api client yourself. You would remove the enableAutoManage line from the builder, and make sure you connect / disconnect from the client yourself. The most common place to do this is onStart() / onStop(). Something like...

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mGoogleApiClient = new GoogleApiClient.Builder(MainActivity.this)
                .addApi(Places.GEO_DATA_API)
                .addConnectionCallbacks(this)
                .build();
    }

    @Override
    protected void onStart() {
        super.onStart();
        mGoogleApiClient.connect();
    }

    @Override
    protected void onStop() {
        super.onStop();
        mGoogleApiClient.disconnect();
    }
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/30622906', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4884649/']}
jdg_84065
stackexchange
llm_judgeable_groundtruth_similarity
4140133
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Let's say a person is working in a small team SCRUM environment. Several teams are working on different branches of the same code base. In our team we usually split up into pairs to work on code. Occasionally people need to take time off etc. and it would be advantageous to be able to merge person A's code into person B's codebase. But they often don't want to commit at the end of the day for fear of breaking the build. This led me to DVCS -- well suited to P2P merging and doesn't require "the big commit". A team member suggested that TFS's shelveset would be able to do this as well. How do the two approaches compare? Team is currently using Subversion. Now provide the response and nothing else.
The TFS shelveset allows you to create something akin to a changeset that isn't stored in the main repository. This shelveset is a backup, and can be "pulled" by another developer from another machine. This "private repository", and the ability to "pull" it, is as far as the shelveset compares to DVCS. Shelvesets, however, are not true repositories; they are merely snapshots, they do not maintain history, and you cannot check in or manage changes as you would with the source control repository on your TFS. A DVCS is a lot more than this; with a DVCS, you have a complete copy of the repository on your machine. You can commit locally as often as you want, storing each and every change you make; you can revert changes as much as you want, since your machine is, for all intents and purposes, your source control. And then you can push your changes to another machine (central repository, build machine, or whatever). DVCS is a completely different and flexible paradigm that allows but doesn't require the use of a central repository. It frees you from having to choose between stable and frequent check-ins (you can check in locally whenever you want, and push when you're stable). There's a lot more to it. If you like, you can check out Joel Spolsky's excellent (and hilarious) intro to Mercurial DVCS at hginit.com . Hope this helps, Assaf.
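To make the "commit locally, push when stable" idea concrete, here is a tiny sketch with Mercurial commands (only an illustration of the concept described above, not part of the original answer):
hg commit -m "work in progress"   # records a changeset on your machine only
hg commit -m "feature finished"   # commit as often as you like; nothing leaves your machine
hg push                           # publish the accumulated changesets to the central repository when you are ready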
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4140133', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/30946/']}
jdg_84066
stackexchange
llm_judgeable_groundtruth_similarity
365947
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I think a related question might be this ( Set-Theoretic Issues/Categories ). There are many ways in which you can avoid set theoretical paradoxes in dealing with category theory (see for instance Shulman - Set theory for category theory ). Some important results in category theory assume some kind of ‘smallness’ of your category in practice. A very much used result in homological algebra is the Freyd–Mitchell embedding theorem: Every small abelian category admits an fully faithful exact embedding in a category $\text{$R$-mod}$ for a suitable ring $R$ . Now, in everyday usage of this result, the restriction that the category is small is not important: for instance, if you want to do diagram chasing in a diagram on any category, you can always restrict your attention to the abelian subcategory generated by the objects and maps on the diagram, and the category will be small. I am wondering: What are results of category theory, commonly used in mathematical practice, in which considerations of size are crucial? Shulman in [op. cit.] gives what I think is an example, the Freyd Special Adjoint Functor Theorem : a functor from a complete, locally small, and well-powered category with a cogenerating set to alocally small category has a left adjoint if and only if it preserves small limits. I would find interesting to see some discussion on this topic. Now provide the response and nothing else.
Very often one has the feeling that set-theoretic issues are somewhat cheatable, and people feel like they have eluded foundations when they manage to cheat them. Even worse, some claim that foundations are irrelevant because each time they dare to be relevant, they can be cheated. What these people haven't understood is that the best foundation is the one that allows the most cheating (without falling apart). In the relationship between foundation and practice, though, what matters the most is the phenomenology of every-day mathematics. In order to make this statement clear, let me state the uncheatable lemma. In the later discussion, we will see the repercussion of this lemma. Lemma (The uncheatable).A locally small, large-cocomplete category is a poset. The lemma shows that no matter how fat are the sets where you enrich your category, there is no chance that the category is absolutely cocomplete. Example . In the category of sets, the large coproduct of all sets is not a set. If you enlarge the universe in such a way that it is, then some other (even larger) coproduct will not exist. This is inescapable and always boils down to the Russel Paradox. Remark . Notice that obvious analogs of this lemma are true also for categories based on Grothendieck Universes (as opposed to sets and classes). One can't escape the truth by changing its presentation . Excursus . Very recently Thomas Forster, Adam Lewicki, Alice Vidrine have tried to reboot category theory in Stratified Set Theory in their paper Category Theory with Stratified Set Theory (arXiv: https://arxiv.org/abs/1911.04704 ). One could consider this as a kind of solution to the uncheatable lemma. But it's hard to tell whether it is a true solution or a more or less equivalent linguistic reformulation. This theory is at its early stages. At this point one could say that I haven't shown any concrete problem, we all know that the class of all sets is not a set, and it appears as a piece of quite harmless news to us. In the rest of the discussion, I will try to show that the uncheatable lemma has consequences in the daily use of category theory. Categories will be assumed to be locally small with respect to some category of sets. Let me recall a standard result from the theory of Kan extensions. Lemma (Kan). Let $\mathsf{B} \stackrel{f}{\leftarrow} \mathsf{A} \stackrel{g}{\to} \mathsf{C}$ be a span where $\mathsf{A}$ is small and $\mathsf{C}$ is (small) cocomplete. The the left Kan extension $\mathsf{lan}_f g$ exists. Kan extensions are a useful tool in everyday practice, with applications in many different topics of category theory. In this lemma (which is one of the most used in this topic) the set-theoretic issue is far from being hidden: $\mathsf{A}$ needs to be small (with respect to the size of $\mathsf{C})$ ! There is no chance that the lemma is true when $\mathsf{A}$ is a large category. Indeed since colimits can be computed via Kan extensions, the lemma would imply that every (small) cocomplete category is large cocomplete, which is not allowed by the uncheatable . Also, there is no chance to solve the problem by saying: well, let's just consider $\mathsf{C}$ to be large-cocomplete , again because of the the uncheatable . This problem is hard to avoid because the size of the categories of our interest is as a fact always larger than the size of their inhabitants (this just means that most of the time Ob $\mathsf{C}$ is a proper class, as big as the size of the enrichment). 
Notice that the Kan extension problem recovers the Adjoint functor theorem one, because adjoints are computed via Kan extensions of identities of large categories, $$\mathsf{R} = \mathsf{lan}_\mathsf{L}(1) \qquad \mathsf{L} = \mathsf{ran}_\mathsf{R}(1) .$$ Indeed, in that case, the solution set condition is precisely what is needed in order to cut down the size of some colimits that otherwise would be too large to compute, as can be synthesized by the sharp version of the Kan lemma. Sharp Kan lemma. Let $\mathsf{B} \stackrel{f}{\leftarrow} \mathsf{A} \stackrel{g}{\to} \mathsf{C}$ be a span where $\mathsf{B}(f-,b)$ is a is small presheaf for every $b \in \mathsf{B}$ and $\mathsf{C}$ is (small) cocomplete. Then the left Kan extension $\mathsf{lan}_f g$ exists. Indeed this lemma allows $\mathsf{A}$ to be large, but we must pay a tribute to its presheaf category: $f$ needs to be somehow locally small (with respect to the size of $\mathsf{C}$ ). Kan lemma Fortissimo. Let $ \mathsf{A} \stackrel{f}{\to} \mathsf{B} $ be a functor. The following are equivalent: for every $g :\mathsf{A} \to \mathsf{C}$ where $\mathsf{C}$ is a small-cocomplete category, $\mathsf{lan}_f g$ exists. $\mathsf{lan}_f y$ exists, where $y$ is the Yoneda embedding in the category of small presheaves $y: \mathsf{A} \to \mathcal{P}(\mathsf{A})$ . $\mathsf{B}(f-,b)$ is a is small presheaf for every $b \in \mathsf{B}$ . Even unconsciously, the previous discussion is one of the reasons of the popularity of locally presentable categories. Indeed, having a dense generator is a good compromise between generality and tameness . As an evidence of this, in the context of accessible categories the sharp Kan lemma can be simplified. Tame Kan lemma. Let $\mathsf{B} \stackrel{f}{\leftarrow} \mathsf{A} \stackrel{g}{\to} \mathsf{C}$ be a span of accessible categories, where $f$ is an accessible functor and $\mathsf{C}$ is (small) cocomplete. Then the left Kan extension $\mathsf{lan}_f g$ exists (and is accessible). Warning . The proof of the previous lemma is based on the density (as opposed to codensity) of $\lambda$ -presentable objects in an accessible category. Thus the lemma is not valid for the right Kan extension. References for Sharp. I am not aware of a reference for this result. It can follow from a careful analysis of Prop. A.7 in my paper Codensity: Isbell duality, pro-objects, compactness and accessibility . The structure of the proof remains the same, presheaves must be replaced by small presheaves. References for Tame. This is an exercise, it can follow directly from the sharp Kan lemma, but it's enough to properly combine the usual Kan lemma, Prop A.1&2 of the above-mentioned paper, and the fact that accessible functors have arity. This answer is connected to this other.
{}
{'log_upvote_score': 7, 'links': ['https://mathoverflow.net/questions/365947', 'https://mathoverflow.net', 'https://mathoverflow.net/users/160378/']}
jdg_84067
stackexchange
llm_judgeable_groundtruth_similarity
5630743
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Quick snippet first: Dim GUID As StringDim givenNames, familyName, preferredName, gender, comments, carer, medicareNumber, patientNumber As StringDim dob As VariantDim deceased, resolved, consultNotes As BooleanDim age As VariantgivenNames = NullfamilyName = NullpreferredName = Nullgender = Nulldob = Nullcomments = Nulldeceased = Falseresolved = Falsecarer = Nullage = NullconsultNotes = FalsepatientNumber = Null ' This is where I get the error Any idea why this last variable would be the one to trip up? I've assigned Null to a bunch of other strings without any errors. Now provide the response and nothing else.
In VBA/VB6, strings cannot be set to Null; only Variants can be set to Null. In addition, when you declare variables comma-separated on one line like in the question, only the last one is typed as String; all of the others are typed as Variant — which is why only the assignment to patientNumber fails. To type them all on one line you have to repeat the type for each variable, e.g. Dim a As String, b As String, ... That's why it often makes sense to just declare each variable on its own line. (Btw, it should be noted that deceased and resolved are also typed as Variant for the same reason.)
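A minimal sketch of how the declarations and assignments could look once this is taken into account (illustrative only; variable names taken from the question, and vbNullString is just one way to represent "no value" in a String):
Dim givenNames As Variant, familyName As Variant   ' each variable gets its own As clause
Dim patientNumber As String
givenNames = Null              ' fine: a Variant can hold Null
familyName = Null              ' fine: a Variant can hold Null
patientNumber = vbNullString   ' a String cannot hold Null, so use an empty string (or declare it As Variant)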
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5630743', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/184124/']}
jdg_84068
stackexchange
llm_judgeable_groundtruth_similarity
19041067
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to get delete_if to delete empty strings from an array. With the solution below, the array still contains many empty strings. products = my_text.split(/\t+/)products.delete_if {|element| element == " " || "" || element.nil?} Is there anything missing? Now provide the response and nothing else.
The problem with your code is explained by Ed S. Otherwise, you can do products.reject! { |s| s.nil? || s.strip.empty? } Why do you need to test nil? first? Let's check a few lines. nil.strip# NoMethodError: undefined method `strip' for nil:NilClass" ".strip# => "" Now, with a different order, here is what the code does if it treats the object as a string first, and only then checks whether it is nil. " ".strip || " ".nil?# => ""nil.strip || nil.nil?# NoMethodError: undefined method `strip' for nil:NilClass# Oh you don't want that to happen, do you? This means you don't want to call strip.empty? when your object is nil . And as you know, when you have a || b , if a is truthy (i.e. not nil nor false ), b will never be evaluated. You test first if the string is nil ; if it is, you don't need to check the right part (so you won't get an undefined method error) and the object will be removed from your products list.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19041067', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1373836/']}
jdg_84069
stackexchange
llm_judgeable_groundtruth_similarity
374590
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm not the best at this, so I thought it would be best to post a discussion rather than directly editing the excerpt. Current excerpt: C# (pronounced "see sharp") is a high level, object-oriented programming language that is designed for building a variety of applications that run on the .NET Framework (or .NET Core). C# is simple, powerful, type-safe, and object-oriented. Problems I see: For some reason, object-oriented is repeated. Saying it's only object-oriented is very misleading, since it has a lot of functional features (the entire LINQ library couldn't exist without them, and that's only an example). It's much more a multi-paradigm language than an object-oriented one. It's misleading saying it "runs on .NET and .NET Core". What about Mono and, currently in preview..., WebAssembly? I don't see the point of including this part at all. It also supports the DLR, so it's as type-safe as you want it to be (I'm looking at you, Office interop ) For a starting point, I'd suggest something like the following. C# (pronounced "see sharp") is a high-level, multi-paradigm programming language that is designed for building a variety of applications. I don't think we need to say much more. We could add that a version-specific tag should only be used if the question is related to a feature introduced in that version. We could go with something similar to what Revision 36 and Revision 45 had: C# (pronounced "see sharp") is a multi-paradigm, managed programming language developed by Microsoft. Questions should include code examples, sufficient to reproduce the problem. Add extra tags, relevant to the used technology or library. Now provide the response and nothing else.
Problems with the current excerpt at time of writing : Reads like marketing material. Includes a lot of meaningless fluff ("building a variety of applications", "simple," "powerful") Does not explain when to use the tag Additional problems with the older revision: C# is not multiparadigm. It's OO. The wiki is the thing that's misleading. It has a few features that enable stuffing functions into variables, but it is definitely not a functional language. Functional languages are not imperative, and C# is definitely not equipped to avoid writing imperative code. Telling people how to write questions is out of scope for a tag wiki. Suggestion to solve these problems: C# (pronounced "see sharp") is a high level, statically typed, object oriented programming language developed by Microsoft. C# code usually targets Microsoft's .NET family of tools and runtimes, which include the .NET Framework and .NET Core. Use this tag for questions about code written in C# or C#'s formal specification. Most questions also need a tag for the target .NET platform, such as [.net] (for the .NET Framework), [.net-core], or [.net-standard]. Reasoning: It's important to give the reader some brief description of what we're talking about. It's likely that people will be confused about the difference between .NET and C#, so this attempts to clarify that .NET is the tools and runtime while C# is the language. Tag wikis should indicate when to use the tag. Since most C# questions will be targeting a .NET platform and sometimes it matters which one, most C# questions will also need a tag indicating which platform is being targeted. Possible improvements over this: I don't like using the word "platform." I don't think it's a term MS uses. Any info about what terms MS uses when discussing the different tool sets and runtimes is appreciated. Things I notice in the full wiki : The the bit about "multiparadigm" needs to be removed there. It claims dynamic enables "type inference," but that's not what it does. It disables static typing for that variable, preventing the compiler checks on type compatibility and member references. I don't know why there's a bunch of info about which versions introduced certain keywords and the features that correspond to them. Maybe it should just talk about the features themselves, without making it a change log? An SO tag wiki seems like a bad place for a change log.
{}
{'log_upvote_score': 5, 'links': ['https://meta.stackoverflow.com/questions/374590', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/2141621/']}
jdg_84070
stackexchange
llm_judgeable_groundtruth_similarity
19588029
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to make a customize list view in javafx. Here I need to bind multiple component in list cell as follow, like one label, one textfield, one button under one HBox andtwo button, one hyperlink, one label in another HBox and these HBox comes under one VBox and this VBox comes under the single list cell and it will repeat and make a list View. The code is <ListView fx:id="ListView" layoutX="0" layoutY="30" prefWidth="600" prefHeight="300"> <HBox fx:id="listBox" alignment="CENTER_LEFT"> <padding><Insets top="5" bottom="5" left="5"></Insets> </padding> <HBox alignment="CENTER_LEFT" prefWidth="170" minWidth="88"> <Label fx:id="surveyName" text="Field A" styleClass="Name"></Label> </HBox> <VBox styleClass="Description" prefWidth="155" minWidth="86"> <HBox> <HBox styleClass="surveyDesIcon" prefWidth="20" prefHeight="16"></HBox> <Label fx:id="surveyCode" text="PRW3456HJ"></Label> </HBox> <HBox> <HBox styleClass="DateIcon" prefWidth="20" prefHeight="16"></HBox> <Label fx:id="Date" text="PRW3456HJ"></Label> </HBox> </VBox> <HBox fx:id="Status" prefWidth="160" minWidth="80"> <Label fx:id="StatusLabel" text="Checking Files.."/> </HBox> <HBox fx:id="StatusIcon1" prefWidth="50" prefHeight="50" alignment="CENTER"> <Label styleClass="StatusIcon1" prefWidth="24" prefHeight="24" alignment="CENTER"/> </HBox> <HBox fx:id="StatusIcon2" prefWidth="50" prefHeight="50" styleClass="StatusIconBox" alignment="CENTER"> <Hyperlink styleClass="StatusIcon2" prefWidth="24" maxHeight="24" alignment="CENTER"/> </HBox> </HBox></ListView> Now provide the response and nothing else.
I understand your question. There are mainly two ways to set items in a ListView : 1. Create the ObservableList and set the items of the ListView with the ObservableList ( listView.setItems(observableList) ). 2. Use the setCellFactory() method of the ListView class. You would prefer to use the setCellFactory() method, because this approach simplifies the process and helps to separate out the business logic and the UI (FXML). Here is a more detailed explanation: 1. Create a new FXML file with the name listview.fxml to contain the ListView , and set the ListViewController class as its controller: File: listview.fxml : <?import javafx.scene.layout.GridPane?><?import javafx.scene.control.ListView?><?import demo.ListViewController?><GridPane xmlns:fx="http://javafx.com/fxml" alignment="CENTER"> <ListView fx:id="listView"/></GridPane> 2. Create the controller and name it ListViewController . The controller can load the listview.fxml file and access the listView . File: ListViewController.java : package demo;import javafx.collections.FXCollections;import javafx.collections.ObservableList;import javafx.fxml.FXML;import javafx.fxml.FXMLLoader;import javafx.scene.*;import javafx.scene.control.*;import javafx.util.Callback;import java.io.IOException;import java.util.HashSet;import java.util.Set;public class ListViewController{ @FXML private ListView listView; private Set<String> stringSet = new HashSet<String>(); ObservableList observableList = FXCollections.observableArrayList(); public ListViewController() { FXMLLoader fxmlLoader = new FXMLLoader(getClass().getResource("/fxml/listview.fxml")); fxmlLoader.setController(this); try { Parent parent = (Parent)fxmlLoader.load(); Scene scene = new Scene(parent, 400.0 ,500.0); } catch (IOException e) { throw new RuntimeException(e); } } public void setListView() { stringSet.add("String 1"); stringSet.add("String 2"); stringSet.add("String 3"); stringSet.add("String 4"); observableList.setAll(stringSet); listView.setItems(observableList); listView.setCellFactory(new Callback<ListView<String>, javafx.scene.control.ListCell<String>>() { @Override public ListCell<String> call(ListView<String> listView) { return new ListViewCell(); } }); }} Note that stringSet is initialized with a HashSet so that setListView() does not throw a NullPointerException. 3. First you need to set the value of the ObservableList . This is very important. Then, set the items of the list using the ObservableList and call the setCellFactory() method on the ListView . In the given example I just take the String values and add them to the String set (the Set<String> stringSet ). 4. The cell factory set on the ListView returns a ListCell for each item. So for the sake of simplicity, I added a class which extends ListCell , and the setGraphic() method available on ListCell is what sets the content of each cell. File: ListViewCell.java : package demo;import javafx.scene.control.ListCell;public class ListViewCell extends ListCell<String>{ @Override public void updateItem(String string, boolean empty) { super.updateItem(string,empty); if(string != null) { Data data = new Data(); data.setInfo(string); setGraphic(data.getBox()); } else { setGraphic(null); } }} The else branch clears the graphic for empty or recycled cells. 5. I just added a class which will load the listCellItem.fxml and return the HBox , which will contain the other components as children. The HBox is then set to the ListCell. 
File: listCellItem.fxml : <?import demo.Data?> <?import javafx.scene.layout.HBox?> <?import javafx.scene.control.Label?><HBox xmlns:fx="http://javafx.com/fxml" fx:id="hBox"><children> <Label fx:id="label1"/> <Label fx:id="label2"/></children></HBox> File: Data.java : package demo;import javafx.fxml.FXML;import javafx.fxml.FXMLLoader;import javafx.scene.control.Label;import javafx.scene.layout.HBox;import java.io.IOException;public class Data{ @FXML private HBox hBox; @FXML private Label label1; @FXML private Label label2; public Data() { FXMLLoader fxmlLoader = new FXMLLoader(getClass().getResource("/fxml/listCellItem.fxml")); fxmlLoader.setController(this); try { fxmlLoader.load(); } catch (IOException e) { throw new RuntimeException(e); } } public void setInfo(String string) { label1.setText(string); label2.setText(string); } public HBox getBox() { return hBox; }} Using this way, you can use the setCellFactory() method to separate the things that are business logic and FXML. Hope this is helpful.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/19588029', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2240185/']}
jdg_84071
stackexchange
llm_judgeable_groundtruth_similarity
933453
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Good evening! I want to compute the integral $\displaystyle \int_{0}^{\pi/3}\ln^2 \left ( \sin x \right )\,dx$. However I find it extremely difficult. What I've tried is rewritting it as: $\begin{aligned}\int_{0}^{\pi/3}\ln^2\left ( \sin x \right )\,dx &=\int_{0}^{\pi/3}\left [ \ln \left ( \sin x \right ) \right ]^2\,dx \\ &= \int_{0}^{\pi/3}\left [ \ln \left ( \frac{e^{-ix}-e^{ix}}{2i} \right ) \right ]^2\,dx\\ &= \int_{0}^{\pi/3}\left [ \ln \left ( e^{-ix}-e^{ix} \right )-\ln 2i \right ]^2\, dx\\ &= \int_{0}^{\pi/3}\left ( \ln^2\left ( e^{-ix}-e^{ix} \right )-2\ln 2i \ln \left ( e^{-ix}-e^{ix} \right ) +\ln^2 2i\right )\,dx\\ &= \int_{0}^{\pi/3} \ln^2 \left ( e^{-ix}-e^{ix} \right )\,dx-2\int_{0}^{\pi/3}\ln 2i \ln \left ( e^{-ix}-e^{ix} \right )\,dx +\int_{0}^{\pi/3}\ln^2 2i \,dx \\ \end{aligned}$ I wrote the first integral as:$\begin{aligned}\int_{0}^{\pi/3}\ln^2 \left ( e^{-ix}-e^{ix} \right )\,dx &= \int_{0}^{\pi/3}\ln^2 \left ( e^{-ix}\left ( 1-e^{-2ix} \right ) \right )\,dx\\ &= \int_{0}^{\pi/3}\ln^2 \left ( e^{-ix} \right )\,dx+\int_{0}^{\pi/3}\ln^2 \left ( 1-e^{-2ix} \right )\,dx\\ \end{aligned}$ Now I used MacLaurin's Expasion of $\ln(1-x)$ for the second integral, so that I can express it as complex series, which is the following: $$\ln^2\left ( 1-e^{-2ix} \right )= \sum_{m=1}^{\infty}\sum_{k=1}^{\infty}\frac{e^{-2ikx}}{k}\frac{e^{-2imx}}{m} \implies \int_{0}^{\pi/3}\ln^2\left ( 1-e^{-2ix} \right )\,dx=\int_{0}^{\pi/3}\left ( \sum_{m=1}^{\infty}\sum_{k=1}^{\infty}\frac{e^{-2ikx}}{k}\frac{e^{-2imx}}{m} \right )\,dx$$ I'm pretty confident that I can alter summation and integral. I don't know if this can help. And this is where I stopped. I can't play around with the middle integral, because of that $\ln(2i)$ term which bothers me, otherwise I would apply the same technic with the MacLaurin expansion. Something also tells me that the last integral should be discarded. Because it's a complex one, but I have not dwelved in it further so I'm not quite sure if there are no cancellations with the other integrals I have, because they all contain complex parts. Any help would be appreciated. P.S: This is not homework. Now provide the response and nothing else.
\begin{align}\mathcal{I}&=\frac{\ln{2}}{6\sqrt{3}}\psi_1\left(\small{\frac{1}{3}}\right)+\frac{\ln{3}}{12\sqrt{3}}\psi_1\left(\small{\frac{2}{3}}\right)-\frac{\ln{3}}{12\sqrt{3}}\psi_1\left(\small{\frac{1}{3}}\right)-\frac{\ln{2}}{6\sqrt{3}}\psi_1\left(\small{\frac{2}{3}}\right)\\& \ \ \ \ +\frac{\pi}{3}\ln^2{2}-\frac{\pi}{12}\ln^2{3}+\frac{\pi^3}{81}-{\rm Im} \ {\rm Li}_3(1-e^{i2\pi/3}) \end{align} Disclaimer: I stole this method from Random Variable so I dare not take credit for it. Note that$$\ln(1-e^{i2x})=\ln(\sin{x})+\ln{2}+i\left(x-\frac{\pi}{2}\right)$$Square both sides and extract the real part.$${\rm Re}\ln^2(1-e^{i2x})=\ln^2(\sin{x})+2\ln{2}\ln(\sin{x})+\ln^2{2}-\left(x-\frac{\pi}{2}\right)^2$$Let us first compute $\displaystyle\mathcal{J}=\int^{\frac{\pi}{3}}_0\ln(\sin{x}){\rm d}x$.\begin{align}\mathcal{J}&={\rm Re}\int^{\frac{\pi}{3}}_0\ln(1-e^{i2x}){\rm d}x-\frac{\pi}{3}\ln{2}\\&=\frac{1}{2}{\rm Im}\int^{e^{i\frac{2\pi}{3}}}_1\frac{\ln(1-z)}{z}{\rm d}z-\frac{\pi}{3}\ln{2}\\&=-\frac{1}{2}{\rm Im} \ {\rm Li}_2(e^{i2\pi/3})-\frac{\pi}{3}\ln{2}\\\end{align}Then\begin{align}\mathcal{I}&=\int^{\frac{\pi}{3}}_0\ln^2(\sin{x}){\rm d}x\\&={\rm Re}\int^{\frac{\pi}{3}}_0\ln^2(1-e^{i2x}){\rm d}x-2\mathcal{J}\ln{2}-\frac{\pi}{3}\ln^2{2}+\frac{13\pi^3}{324}\\&=\frac{1}{2}{\rm Im}\int^{e^{i\frac{2\pi}{3}}}_1\frac{\ln^2(1-z)}{z}{\rm d}z-2\mathcal{J}\ln{2}-\frac{\pi}{3}\ln^2{2}+\frac{13\pi^3}{324}\\&=\frac{1}{2}{\rm Im}\left[\frac{i2\pi}{3}\ln^2(1-e^{i2\pi/3})+\int^{e^{i\frac{2\pi}{3}}}_1\frac{2\ln{z}\ln(1-z)}{1-z}{\rm d}z\right]-2\mathcal{J}\ln{2}\\&\ \ \ \ -\frac{\pi}{3}\ln^2{2}+\frac{13\pi^3}{324}\\\&=\frac{\pi}{12}\ln^2{3}-\frac{\pi^3}{108}+\frac{1}{2}{\rm Im}\left[2{\rm Li}_2(1-e^{i2\pi/3})\ln(1-e^{i2\pi/3})-2{\rm Li}_3(1-e^{i2\pi/3})\right]\\&\ \ \ \ +\ln{2} \ {\rm Im} \ {\rm Li}_2(e^{i2\pi/3})+\frac{2\pi}{3}\ln^2{2}-\frac{\pi}{3}\ln^2{2}+\frac{13\pi^3}{324}\\&=\frac{5\pi^3}{162}+\frac{\pi}{12}\ln^2{3}+\frac{\pi}{3}\ln^2{2}-{\rm Im}\left[{\rm Li}_3(1-e^{i2\pi/3})-{\rm Li}_2(1-e^{i2\pi/3})\ln(1-e^{i2\pi/3})\\-\ln{2}{\rm Li}_2(e^{i2\pi/3})\right]\end{align} The real part of ${\rm Li}_2(e^{i2\pi/3})$ is\begin{align}{\rm Re} \ {\rm Li}_2(e^{i2\pi/3})&=\sum^\infty_{n=1}\frac{\cos(2n\pi/3)}{n^2}\\&=-\frac{1}{4}\sum^\infty_{n=-\infty}\left(\frac{1}{(3n+1)^2}+\frac{1}{(3n+2)^2}\right)+\frac{1}{9}\sum^\infty_{n=1}\frac{1}{n^2}\\&=\frac{1}{4}\left[\operatorname*{Res}_{z=-1/3}\frac{\pi\cot(\pi z)}{(3z+1)^2}+\operatorname*{Res}_{z=-2/3}\frac{\pi\cot(\pi z)}{(3z+2)^2}\right]+\frac{\pi^2}{54}\\&=-\frac{\pi^2}{18}\end{align}and its imaginary part is\begin{align}{\rm Im} \ {\rm Li}_2(e^{i2\pi/3})&=\sum^\infty_{n=1}\frac{\sin(2n\pi/3)}{n^2}\\&=\frac{\sqrt{3}}{2}\sum^\infty_{n=0}\left(\frac{1}{(3n+1)^2}-\frac{1}{(3n+2)^2}\right)\\&=\frac{1}{6\sqrt{3}}\psi_1\left(\small{\frac{1}{3}}\right)-\frac{1}{6\sqrt{3}}\psi_1\left(\small{\frac{2}{3}}\right)\end{align}By Euler's reflection formula for the dilogarithm,\begin{align}{\rm Li}_2(1-e^{i2\pi/3})&=\frac{\pi^2}{6}-\ln(e^{i2\pi/3})\ln(1-e^{i2\pi/3})-{\rm Li}_2(e^{i2\pi/3})\\&=\frac{\pi^2}{9}+i\left(\frac{1}{6\sqrt{3}}\psi_1\left(\small{\frac{2}{3}}\right)-\frac{1}{6\sqrt{3}}\psi_1\left(\small{\frac{1}{3}}\right)-\frac{\pi}{3}\ln{3}\right)\end{align}Therefore, the imaginary part of ${\rm Li}_2(1-e^{i2\pi/3})\ln(1-e^{i2\pi/3})$ is\begin{align}& \ \ \ \ \ \ \ {\rm Im}\ln(1-e^{i2\pi/3}){\rm Li}_2(1-e^{i2\pi/3})\\&={\rm Re}\ln(1-e^{i2\pi/3}){\rm Im} \ {\rm Li}_2(1-e^{i2\pi/3})+{\rm Im}\ln(1-e^{i2\pi/3}){\rm Re} \ {\rm 
Li}_2(1-e^{i2\pi/3})\\&=-\frac{\pi^3}{54}-\frac{\pi}{6}\ln^2{3}+\frac{\ln{3}}{12\sqrt{3}}\psi_1\left(\small{\frac{2}{3}}\right)-\frac{\ln{3}}{12\sqrt{3}}\psi_1\left(\small{\frac{1}{3}}\right)\end{align} Plucking these values into our previously obtained result yields\begin{align}\mathcal{I}&=\frac{\ln{2}}{6\sqrt{3}}\psi_1\left(\small{\frac{1}{3}}\right)+\frac{\ln{3}}{12\sqrt{3}}\psi_1\left(\small{\frac{2}{3}}\right)-\frac{\ln{3}}{12\sqrt{3}}\psi_1\left(\small{\frac{1}{3}}\right)-\frac{\ln{2}}{6\sqrt{3}}\psi_1\left(\small{\frac{2}{3}}\right)\\& \ \ \ \ +\frac{\pi}{3}\ln^2{2}-\frac{\pi}{12}\ln^2{3}+\frac{\pi^3}{81}-{\rm Im} \ {\rm Li}_3(1-e^{i2\pi/3})\approx 2.0445154\cdots\\ \end{align}
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/933453', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/176419/']}
jdg_84072
stackexchange
llm_judgeable_groundtruth_similarity
10930790
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using RESTful application in my java project. Normally in my unit test classes, I use methods like this: public Employee getEmployeeByEmail(String email) { ClientResponse clientResponse = webResource.path(beginPath + "getByEmail/" + email).get( ClientResponse.class); Employee employee = null; if (200 == clientResponse.getStatus()) { employee = clientResponse.getEntity(Employee.class); } return employee;} ... but I have to use similar methods in almost 12 different classes. This is what I decided to do: public class TestManager<T> {private WebResource webResource;private String beginPath;private Class<T> clazz;public TestManager(WebResource webResource, String beginPath, Class<T> clazz) { this.webResource = webResource; this.beginPath = beginPath; this.clazz = clazz;}public boolean objectExists(String methodPath, String uniqueFieldName, String uniqueField) { boolean check = false; ClientResponse clientResponse = webResource.path(beginPath + methodPath + "/" + uniqueField).get( ClientResponse.class); JSONObject jsonObject = clientResponse.getEntity(JSONObject.class); if (200 == clientResponse.getStatus() && !jsonObject.isNull(uniqueFieldName)) { check = true; } return check;}public T getObjectById(String methodPath, long id) { ClientResponse clientResponse = webResource.path(beginPath + methodPath + "/" + id).get( ClientResponse.class); T object = null; if (200 == clientResponse.getStatus() && !clientResponse.getEntity(JSONObject.class).isNull("id")) { object = clientResponse.getEntity(clazz); } return object;} } The method objectExists() works fine, but getObjectById() method generate stack trace: javax.ws.rs.WebApplicationException: javax.xml.bind.UnmarshalException: Error creating JSON-based XMLStreamReader - with linked exception:[javax.xml.stream.XMLStreamException: java.io.IOException: stream is closed] It seems that I can't do this: object = clientResponse.getEntity(clazz); But I have no idea how to fix it. Sorry for my english :P Edit:Im using jersey Edit2: Solution :The problem was I used getEntity() method twice ... If I use it only once ... it works ... damn it Now provide the response and nothing else.
Backbone is based on a REST API: when saving/updating a model to the server, Backbone will send it serialized as JSON in the request body with a POST or PUT request. From the Backbone.sync documentation: With the default implementation, when Backbone.sync sends up a request to save a model, its attributes will be passed, serialized as JSON, and sent in the HTTP body with content-type application/json. This means that server-side you have to determine the type of request and decode the serialized JSON. Something like this should get you started: $request_method = strtolower($_SERVER['REQUEST_METHOD']);$data = null;switch ($request_method) { case 'post': case 'put': $data = json_decode(file_get_contents('php://input')); break;}// print_r($data);// note that mysql_* functions are deprecated// http://php.net/manual/en/function.mysql-query.php// inserting with a PDO object, assuming an auto incremented id$sql = "INSERT INTO boxes (x, y, w, h) VALUES(?, ?, ?, ?)";$sth = $dbh->prepare($sql);$sth->execute(array( $data->x, $data->y, $data->w, $data->h));$id = $dbh->lastInsertId(); Check this page for a more thorough implementation of a REST API in PHP http://www.gen-x-design.com/archives/create-a-rest-api-with-php/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10930790', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1291327/']}
jdg_84073
stackexchange
llm_judgeable_groundtruth_similarity
27600863
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: SELECT DATE_FORMAT(date, "%b") AS month, SUM(total_price) as totalFROM cartWHERE date <= NOW()and date >= Date_add(Now(),interval - 12 month)GROUP BY DATE_FORMAT(date, "%m-%Y") This query displaying result for only existing month. I need all 12 months sales. Output : "month" "total" -------------- "Jun" "22""Aug" "30""Oct" "19""Nov" "123""Dec" "410" Required Output : "month" "total" --------------"Jan" "0""Feb" "0""Mar" "0""Apr" "0""May" "0""Jun" "22""Jul" "0""Aug" "30""Sep" "0""Oct" "19""Nov" "123""Dec" "410" Now provide the response and nothing else.
Thanks for @pankaj hint, Here i resolved it via this query... SELECT SUM(IF(month = 'Jan', total, 0)) AS 'Jan', SUM(IF(month = 'Feb', total, 0)) AS 'Feb', SUM(IF(month = 'Mar', total, 0)) AS 'Mar', SUM(IF(month = 'Apr', total, 0)) AS 'Apr', SUM(IF(month = 'May', total, 0)) AS 'May', SUM(IF(month = 'Jun', total, 0)) AS 'Jun', SUM(IF(month = 'Jul', total, 0)) AS 'Jul', SUM(IF(month = 'Aug', total, 0)) AS 'Aug', SUM(IF(month = 'Sep', total, 0)) AS 'Sep', SUM(IF(month = 'Oct', total, 0)) AS 'Oct', SUM(IF(month = 'Nov', total, 0)) AS 'Nov', SUM(IF(month = 'Dec', total, 0)) AS 'Dec', SUM(total) AS total_yearly FROM (SELECT DATE_FORMAT(date, "%b") AS month, SUM(total_price) as totalFROM cartWHERE date <= NOW() and date >= Date_add(Now(),interval - 12 month)GROUP BY DATE_FORMAT(date, "%m-%Y")) as sub
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27600863', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/612987/']}
jdg_84074
stackexchange
llm_judgeable_groundtruth_similarity
1374628
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a matrix with real entries such that $A \ne I_2$ but $A^3 = I_2$. I've actually encountered with this post: $A$ a $n\times n$ matrix with real entries such that $A^3 = I$ but $A \ne I$ Although it present a $3\times 3$ matrix, I can guess there's an equivalent for $2\times 2$. Anyhow, I don't quite understand what process should be made in order to find such matrix. I did find that $\lambda=1$ is eigenvalue of $A$ Now provide the response and nothing else.
Example $$A= \begin{bmatrix} \cos(120^\circ) & \sin(120^\circ) \\-\sin(120^\circ) & \cos(120^\circ) \end{bmatrix}$$ Now, if $P$ is any invertible matrix, then $PAP^{-1}$ also has this property, with the above $A$. If you know about complex eigenvalues/eigenvectors, you can prove that any matrix with this property must have non-real eigenvalues [as the only possible real eigenvalue is 1, and neither the diagonal nor the Jordan Canonical form with $\lambda_1=\lambda_2=1$ can satisfy this relation], and then that it must have this form.
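A quick added check that this works, using the angle-addition property of rotation matrices $R(\theta)R(\varphi)=R(\theta+\varphi)$: the matrix above is the rotation $R(-120^\circ)$, so $$A^3=R(-120^\circ)^3=R(-360^\circ)=I_2,$$ while $A\neq I_2$ because a rotation by $-120^\circ$ fixes no nonzero vector.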
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1374628', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/160028/']}
jdg_84075
stackexchange
llm_judgeable_groundtruth_similarity
1982483
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am asked: Roll $10$ fair die. What is the probability that the number $1$ appears exactly four times, on four consecutive rolls? The answer I was given is$$\frac{1}{6^{10}}\cdot7\cdot5^6$$with a comment saying we multiply by 7 "for the first roll". However, this doesn't make sense to me since there are only $6$ possible sides to the die. So it's not counting sides of the die. The only reason I can come up with for the 7 appearing is the case when the first $6$ rolls do not produce a sequence of consecutive $1$'s, so the $7$th item must begin the sequence of consecutive $1$'s So where exactly is this $7$ coming from? Now provide the response and nothing else.
For a better understanding, the seven alternative cases are:
[1 1 1 1] * * * * * *
* [1 1 1 1] * * * * *
* * [1 1 1 1] * * * *
* * * [1 1 1 1] * * *
* * * * [1 1 1 1] * *
* * * * * [1 1 1 1] *
* * * * * * [1 1 1 1]
The six stars are $2,3,4,5,6$ (five categories) and can take $5^6$ combinations. Hence the numerator is $(7)5^6$. The total number of combinations is $6^{10}$.
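(For completeness, my own arithmetic rather than part of the original answer: the probability is therefore $\frac{7\cdot 5^6}{6^{10}}=\frac{109375}{60466176}\approx 0.0018$.)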
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1982483', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_84076
stackexchange
llm_judgeable_groundtruth_similarity
3166773
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like for an attribute call like object.x to return the results of some method, say object.other.other_method() . How can I do this? Edit: I asked a bit soon: it looks like I can do this with object.__dict__['x']=object.other.other_method() Is this an OK way to do this? Now provide the response and nothing else.
Use the property decorator
class Test(object): # make sure you inherit from object
    @property
    def x(self):
        return 4

p = Test()
p.x # returns 4
Mucking with the __dict__ is dirty, especially when @property is available.
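Applied to the question's object.x returning object.other.other_method(), a minimal sketch could look like this (Other and other_method are hypothetical stand-ins for the real attribute and method):
class Other(object):
    def other_method(self):
        return 42

class Thing(object):
    def __init__(self):
        self.other = Other()

    @property
    def x(self):
        # recomputed on every access, unlike stuffing a value into __dict__
        return self.other.other_method()

thing = Thing()
thing.x  # 42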
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3166773', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/313689/']}
jdg_84077
stackexchange
llm_judgeable_groundtruth_similarity
23265541
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using https://github.com/gabrielemariotti/cardslib for implementing cards within a fragment.when I click a card, it should replace the fragment that contains it and load another fragment, is this possible? public class MyCard extends Card {/** * Constructor with a custom inner layout * * @param context */public MyCard(Context context) { this(context, R.layout.card_layout);}/** * @param context * @param innerLayout */public MyCard(Context context, int innerLayout) { super(context, innerLayout); init();}/** * Init */private void init() { //No Header //Set a OnClickListener listener setOnClickListener(new OnCardClickListener() { @Override public void onClick(Card card, View view) { Toast.makeText(getContext(), "Click Listener card=", Toast.LENGTH_LONG).show(); //Here } });} At Here, I would like to call FragmentManager and replace the whole Fragment with another one, I think it should be done within the activity but how?P.S. The structure is Activity holds fragments which holds cards. Now provide the response and nothing else.
It turns out that in most cases you can make a decent description about an affine transform since it's quite constrained. Doing the same for 3D transforms is much harder :( Note that I still won't be able to tell you the different transforms matrices that were concatenated, only the end result. I'm also ignoring shearing since there is no provided function for creating such a transform. I wrote a function that does a decent job of figuring out what the affine transform does. You will see in a number of places that I write if (fabs(foo - bar) < FLT_EPSILON) instead of just if (foo == bar) . This is to protect myself for floating point (im)precision in the comparison. The other notable thing to point out is the way I'm figuring out the rotation angle. For a pure rotation I could just have used asin(b) , but if the transform is also scaled, then that result will be incorrect. Instead I divide b by a and use arctan to calculate the angle. There is a decent amount of comments in the code so you should be able to follow along, mostly by just reading it. NSString *affineTransformDescription(CGAffineTransform transform){ // check if it's simply the identity matrix if (CGAffineTransformIsIdentity(transform)) { return @"Is the identity transform"; } // the above does't catch things like a 720° rotation so also check it manually if (fabs(transform.a - 1.0) < FLT_EPSILON && fabs(transform.b - 0.0) < FLT_EPSILON && fabs(transform.c - 0.0) < FLT_EPSILON && fabs(transform.d - 1.0) < FLT_EPSILON && fabs(transform.tx - 0.0) < FLT_EPSILON && fabs(transform.ty - 0.0) < FLT_EPSILON) { return @"Is the identity transform"; } // The affine transforms is built up like this: // a b tx // c d ty // 0 0 1 // An array to hold all the different descirptions, charasteristics of the transform. NSMutableArray *descriptions = [NSMutableArray array]; // Checking for a translation if (fabs(transform.tx) > FLT_EPSILON) { // translation along X [descriptions addObject:[NSString stringWithFormat:@"Will move %.2f along the X axis", transform.tx]]; } if (fabs(transform.ty) > FLT_EPSILON) { // translation along Y [descriptions addObject:[NSString stringWithFormat:@"Will move %.2f along the Y axis", transform.ty]]; } // Checking for a rotation CGFloat angle = atan2(transform.b, transform.a); // get the angle of the rotation. Note this assumes no shearing! if (fabs(angle) < FLT_EPSILON || fabs(angle - M_PI) < FLT_EPSILON) { // there is a change that there is a 180° rotation, in that case, A and D will and be negative. BOOL bothAreNegative = transform.a < 0.0 && transform.d < 0.0; if (bothAreNegative) { angle = M_PI; } else { angle = 0.0; // this is not considered a rotation, but a negative scale along one axis. 
} } // add the rotation description if there was an angle if (fabs(angle) > FLT_EPSILON) { [descriptions addObject:[NSString stringWithFormat:@"Will rotate %.1f° degrees", angle*180.0/M_PI]]; } // Checking for a scale (and account for the possible rotation as well) CGFloat scaleX = transform.a/cos(angle); CGFloat scaleY = transform.d/cos(angle); if (fabs(scaleX - scaleY) < FLT_EPSILON && fabs(scaleX - 1.0) > FLT_EPSILON) { // if both are the same then we can format things a little bit nicer [descriptions addObject:[NSString stringWithFormat:@"Will scale by %.2f along both X and Y", scaleX]]; } else { // otherwise we look at X and Y scale separately if (fabs(scaleX - 1.0) > FLT_EPSILON) { // scale along X [descriptions addObject:[NSString stringWithFormat:@"Will scale by %.2f along the X axis", scaleX]]; } if (fabs(scaleY - 1.0) > FLT_EPSILON) { // scale along Y [descriptions addObject:[NSString stringWithFormat:@"Will scale by %.2f along the Y axis", scaleY]]; } } // Return something else when there is nothing to say about the transform matrix if (descriptions.count == 0) { return @"Can't easilly be described."; } // join all the descriptions on their own line return [descriptions componentsJoinedByString:@",\n"];} To try it out I tested the output on a number of different transforms. This is the code I used to test it: // identityCGAffineTransform t = CGAffineTransformIdentity;NSLog(@"identity: \n%@", affineTransformDescription(t));// translationt = CGAffineTransformMakeTranslation(10, 0);NSLog(@"translate(10, 0): \n%@", affineTransformDescription(t));t = CGAffineTransformMakeTranslation(0, 20);NSLog(@"translate(0, 20): \n%@", affineTransformDescription(t));t = CGAffineTransformMakeTranslation(2, -3);NSLog(@"translate(2, -3): \n%@", affineTransformDescription(t));// scalet = CGAffineTransformMakeScale(2, 2);NSLog(@"scale(2, 2): \n%@", affineTransformDescription(t));t = CGAffineTransformMakeScale(-1, 3);NSLog(@"scale(-1, 3): \n%@", affineTransformDescription(t));// rotationt = CGAffineTransformMakeRotation(M_PI/3.0);NSLog(@"rotate(60 deg): \n%@", affineTransformDescription(t));t = CGAffineTransformMakeRotation(M_PI);NSLog(@"rotate(180 deg): \n%@", affineTransformDescription(t));t = CGAffineTransformMakeRotation(4.0*M_PI);NSLog(@"rotate(720 deg): \n%@", affineTransformDescription(t));t = CGAffineTransformMakeRotation(3.0*M_PI);NSLog(@"rotate(540 deg): \n%@", affineTransformDescription(t));// concatenated transforms// rotate & translatet = CGAffineTransformMakeRotation(M_PI/3.0);t = CGAffineTransformTranslate(t, 10, 20);NSLog(@"rotate(60 deg), translate(10, 20): \n%@", affineTransformDescription(t));t = CGAffineTransformMakeTranslation(10, 20);t = CGAffineTransformRotate(t, M_PI/3.0);NSLog(@"translate(10, 20), rotate(60 deg): \n%@", affineTransformDescription(t));// rotate & scalet = CGAffineTransformMakeRotation(M_PI/3.0);t = CGAffineTransformScale(t, 2, 2);NSLog(@"rotate(60 deg), scale(2, 2): \n%@", affineTransformDescription(t));t = CGAffineTransformMakeScale(2, 2);t = CGAffineTransformRotate(t, M_PI/3.0);NSLog(@"scale(2, 2), rotate(60 deg): \n%@", affineTransformDescription(t));// translate & scalet = CGAffineTransformMakeTranslation(10, 20);t = CGAffineTransformScale(t, 2, 2);NSLog(@"translate(10, 20), scale(2, 2): \n%@", affineTransformDescription(t));t = CGAffineTransformMakeScale(2, 2);t = CGAffineTransformTranslate(t, 10, 20);NSLog(@"scale(2, 2), translate(10, 20): \n%@", affineTransformDescription(t)); and the output from that test: identity: Is the identity transformtranslate(10, 
0): Will move 10.00 along the X axistranslate(0, 20): Will move 20.00 along the Y axistranslate(2, -3): Will move 2.00 along the X axis, Will move -3.00 along the Y axisscale(2, 2): Will scale by 2.00 along both X and Yscale(-1, 3): Will scale by -1.00 along the X axis, Will scale by 3.00 along the Y axisrotate(60 deg): Will rotate 60.0° degreesrotate(180 deg): Will rotate 180.0° degreesrotate(720 deg): Is the identity transformrotate(540 deg): Will rotate 180.0° degreesrotate(60 deg), translate(10, 20): Will move -12.32 along the X axis, Will move 18.66 along the Y axis, Will rotate 60.0° degreestranslate(10, 20), rotate(60 deg): Will move 10.00 along the X axis, Will move 20.00 along the Y axis, Will rotate 60.0° degreesrotate(60 deg), scale(2, 2): Will rotate 60.0° degrees, Will scale by 2.00 along both X and Yscale(2, 2), rotate(60 deg): Will rotate 60.0° degrees, Will scale by 2.00 along both X and Ytranslate(10, 20), scale(2, 2): Will move 10.00 along the X axis, Will move 20.00 along the Y axis, Will scale by 2.00 along both X and Yscale(2, 2), translate(10, 20): Will move 20.00 along the X axis, Will move 40.00 along the Y axis, Will scale by 2.00 along both X and Y
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23265541', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3179443/']}
jdg_84078
stackexchange
llm_judgeable_groundtruth_similarity
47169
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm working through the book titled Introductory Graph Theory by Gary Chartrand. There is a question that I can't figure out. The question is: Show that a graph G cannot exist with vertices of degree 1, 3, 3 and 3. I can show it with a picture, but would like to formalize it with a proof. Thanks Now provide the response and nothing else.
If the graph has no loops or multiple edges, and such a graph existed, what would be involved after one removes the 1-valent vertex and the edge attached to it?
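To spell the hint out (one way to finish the argument, assuming a simple graph as the book intends): after deleting the vertex of degree $1$ and its only edge, we are left with a graph on $3$ vertices in which at least two vertices still have degree $3$; but in a simple graph on $3$ vertices every degree is at most $2$, a contradiction, so no such $G$ exists.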
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/47169', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/12476/']}
jdg_84079
stackexchange
llm_judgeable_groundtruth_similarity
1068190
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When running the following class the ExecutionService will often deadlock. import java.util.ArrayList;import java.util.Collection;import java.util.HashMap;import java.util.Iterator;import java.util.concurrent.Callable;import java.util.concurrent.ExecutorService;import java.util.concurrent.Executors;public class ExecutorTest {public static void main(final String[] args) throws InterruptedException { final ExecutorService executor = Executors.newFixedThreadPool(10); final HashMap<Object, Object> map = new HashMap<Object, Object>(); final Collection<Callable<Object>> actions = new ArrayList<Callable<Object>>(); int i = 0; while (i++ < 1000) { final Object o = new Object(); actions.add(new Callable<Object>() { public Object call() throws Exception { map.put(o, o); return null; } }); actions.add(new Callable<Object>() { public Object call() throws Exception { map.put(new Object(), o); return null; } }); actions.add(new Callable<Object>() { public Object call() throws Exception { for (Iterator iterator = map.entrySet().iterator(); iterator.hasNext();) { iterator.next(); } return null; } }); } executor.invokeAll(actions); System.exit(0);}} So why does this happen? Or better yet - how can I write a test to ensure that implementations of an custom abstract map are thread safe? (Some implementations have multiple maps, another delegates to a cache implementation etc) Some background:this occurs under Java 1.6.0_04 and 1.6.0_07 on Windows. I know that the problem comes from sun.misc.Unsafe.park(): I can reproduce the problem on my Core2 Duo 2.4Ghz laptop but not while running in debug I can debug on my Core2 Quad at work, but I've hung it over RDP, so won't be able to get a stack trace until tomorrow Most answers below are about the non-thread safety of HashMap, but I could find no locked threads in HashMap - it was all in the ExecutionService code (and Unsafe.park()). I shall closely examine the threads tomorrow. All this because a custom abstract Map implementation was not thread-safe so I set about ensuring that all implementations would be thread-safe. In essence, I'm wanting to ensure that my understanding of ConcurrentHashMap etc are exactly what I expect, but have found the ExecutionService to be strangely lacking... Now provide the response and nothing else.
You're using an well-known not-thread-safe class and complaining about deadlock. I fail to see what the issue is here. Also, how is the ExecutionService strangely lacking ? It's a common misconception that by using e.g. a HashMap you will at most get some stale data. See a beautiful race condition about how you can blow up your JVM by doing just that. Understanding why this happens is a very tricky process and requires knowledge of the internals of both the JVM and the class libraries. As for the ConcurrentHashMap, just read the javadoc - it should clarify your questions. If not, take a look at Java Concurrency in Practice . Update: I managed to reproduce your situation, but it's not a deadlock. One of the actions never completes execution. The stack trace is: "pool-1-thread-3" prio=10 tid=0x08110000 nid=0x22f8 runnable [0x805b0000]java.lang.Thread.State: RUNNABLEat ExecutorTest$3.call(ExecutorTest.java:36)at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)at java.util.concurrent.FutureTask.run(FutureTask.java:138)at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)at java.lang.Thread.run(Thread.java:619) It looks like the exact case I linked to - the HashMap gets resized and due to the internal mechanics of resizing the iterator gets stuck in an infinite loop. When this happens, invokeAll never returns and the program hangs. But it's neither a deadlock, nor a livelock, but a race condition .
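For what it's worth, a minimal sketch of the usual remedy (my illustration, not part of the original answer): replace the unsynchronized HashMap with a ConcurrentHashMap, whose iterators are weakly consistent, never throw ConcurrentModificationException, and cannot get stuck in the resize loop described above.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final Map<Object, Object> map = new ConcurrentHashMap<Object, Object>();
// many tasks may call map.put(...) concurrently
for (Map.Entry<Object, Object> entry : map.entrySet()) {
    entry.getKey();   // iteration stays safe even while other threads are writing
}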
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1068190', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/37193/']}
jdg_84080
stackexchange
llm_judgeable_groundtruth_similarity
33892546
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to integrate google sign in to my app, when user first sign in I will create an account bind to this, so I need some profiles like gender, locale, etc.and I tried as the google-sign-in doc and quick-start sample shows: GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN) .requestEmail() .build(); mGoogleApiClient = new GoogleApiClient.Builder(this) .enableAutoManage(this /* FragmentActivity */, this /* OnConnectionFailedListener */) .addApi(Auth.GOOGLE_SIGN_IN_API, gso) .build(); when click to sign in I will call: Intent signInIntent = Auth.GoogleSignInApi.getSignInIntent(mGoogleApiClient); startActivityForResult(signInIntent, RC_SIGN_IN); sign in successful, I can get a data structure GoogleSignInResult in onActivityResult, from GoogleSignInResult I can get a GoogleSignInAccount, which only contains DisplayName, email and id.but when in https://developers.google.com/apis-explorer/#p/ , I can get profiles like gender, locale. Is there anything I missed? and I tried google plus api, it seems that I can get what I want. but don't know how to use, the doc says create client like this: mGoogleApiClient = new GoogleApiClient.Builder(this) .addConnectionCallbacks(this) .addOnConnectionFailedListener(this) .addApi(Plus.API) .addScope(new Scope(Scopes.PLUS_LOGIN)) .addScope(new Scope(Scopes.PLUS_ME)) .build(); but when I use this, click signin button will cause app crash. Update: problems when update to new version of google sign in Missing api_key/current key with Google Services 3.0.0 Now provide the response and nothing else.
UPDATE: Since Plus.PeopleApi has been deprecated in Google Play services 9.4 as Google's declaration notes , please refer to the following solutions using Google People API instead: Get person details in new google sign in Play Services 8.3 (Isabella Chen's answer); Cannot get private birthday from Google Plus account although explicit request END OF UPDATE First of all, make sure you have created Google+ profile for your Google account. Then you can refer to the following code: GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN) .requestScopes(new Scope(Scopes.PLUS_LOGIN)) .requestEmail() .build(); and mGoogleApiClient = new GoogleApiClient.Builder(this) .enableAutoManage(this /* FragmentActivity */, this /* OnConnectionFailedListener */) .addApi(Auth.GOOGLE_SIGN_IN_API, gso) .addApi(Plus.API) .build(); Then @Override public void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); // Result returned from launching the Intent from GoogleSignInApi.getSignInIntent(...); if (requestCode == RC_SIGN_IN) { GoogleSignInResult result = Auth.GoogleSignInApi.getSignInResultFromIntent(data); handleSignInResult(result); // G+ Person person = Plus.PeopleApi.getCurrentPerson(mGoogleApiClient); Log.i(TAG, "--------------------------------"); Log.i(TAG, "Display Name: " + person.getDisplayName()); Log.i(TAG, "Gender: " + person.getGender()); Log.i(TAG, "AboutMe: " + person.getAboutMe()); Log.i(TAG, "Birthday: " + person.getBirthday()); Log.i(TAG, "Current Location: " + person.getCurrentLocation()); Log.i(TAG, "Language: " + person.getLanguage()); } } Inside build.gradle file // Dependency for Google Sign-Incompile 'com.google.android.gms:play-services-auth:8.3.0'compile 'com.google.android.gms:play-services-plus:8.3.0' You can take a look at My GitHub sample project . Hope this helps!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33892546', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3628097/']}
jdg_84081
stackexchange
llm_judgeable_groundtruth_similarity
12163947
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am not able to authenticate a user using LDAP. I have the following details: URL=ldap://10.10.10.10:389 LDAP BASE: DC=lab2,DC=ins LDAP Bind Account: CN=Ldap Bind,OU=Service Accounts,OU=TECH,DC=lab2,DC=ins LDAP Bind Account Pw: secret I can search for a sAMAccountName value using the above details, but how do I authenticate a user with a user name and password? If you follow my previous questions you will understand that I am able to connect to the LDAP server successfully, but not able to authenticate a user. User to authenticate: user: someusername, password: somepwd. I am not able to connect to the LDAP server with 'somepwd', and I do not know how I should use someusername. I am able to find the given user by searching on sAMAccountName. Now provide the response and nothing else.
This is a mashup of stuff I found in various places. It should put you along the correct path if you don't want to use the UnboundID SDK. This isn't production quality, you might want to add the SSL stuff in here if your shop supports it. public static Boolean validateLogin(String userName, String userPassword) { Hashtable<String, String> env = new Hashtable<String, String>(); env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory"); env.put(Context.PROVIDER_URL, "ldap://" + LDAP_SERVER + ":" + LDAP_SERVER_PORT + "/" + LDAP_BASE_DN); // To get rid of the PartialResultException when using Active Directory env.put(Context.REFERRAL, "follow"); // Needed for the Bind (User Authorized to Query the LDAP server) env.put(Context.SECURITY_AUTHENTICATION, "simple"); env.put(Context.SECURITY_PRINCIPAL, LDAP_BIND_DN); env.put(Context.SECURITY_CREDENTIALS, LDAP_BIND_PASSWORD); DirContext ctx; try { ctx = new InitialDirContext(env); } catch (NamingException e) { throw new RuntimeException(e); } NamingEnumeration<SearchResult> results = null; try { SearchControls controls = new SearchControls(); controls.setSearchScope(SearchControls.SUBTREE_SCOPE); // Search Entire Subtree controls.setCountLimit(1); //Sets the maximum number of entries to be returned as a result of the search controls.setTimeLimit(5000); // Sets the time limit of these SearchControls in milliseconds String searchString = "(&(objectCategory=user)(sAMAccountName=" + userName + "))"; results = ctx.search("", searchString, controls); if (results.hasMore()) { SearchResult result = (SearchResult) results.next(); Attributes attrs = result.getAttributes(); Attribute dnAttr = attrs.get("distinguishedName"); String dn = (String) dnAttr.get(); // User Exists, Validate the Password env.put(Context.SECURITY_PRINCIPAL, dn); env.put(Context.SECURITY_CREDENTIALS, userPassword); new InitialDirContext(env); // Exception will be thrown on Invalid case return true; } else return false; } catch (AuthenticationException e) { // Invalid Login return false; } catch (NameNotFoundException e) { // The base context was not found. return false; } catch (SizeLimitExceededException e) { throw new RuntimeException("LDAP Query Limit Exceeded, adjust the query to bring back less records", e); } catch (NamingException e) { throw new RuntimeException(e); } finally { if (results != null) { try { results.close(); } catch (Exception e) { /* Do Nothing */ } } if (ctx != null) { try { ctx.close(); } catch (Exception e) { /* Do Nothing */ } } }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12163947', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1101083/']}
jdg_84082
stackexchange
llm_judgeable_groundtruth_similarity
1309689
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can't seem to find the documentation explaining how to create a hash table or associative array in VBA. Is it even possible? Can you link to an article or better yet post the code? Now provide the response and nothing else.
I think you are looking for the Dictionary object, found in the Microsoft Scripting Runtime library. (Add a reference to your project from the Tools...References menu in the VBE.) It pretty much works with any simple value that can fit in a variant (Keys can't be arrays, and trying to make them objects doesn't make much sense. See comment from @Nile below.): Dim d As dictionarySet d = New dictionaryd("x") = 42d(42) = "forty-two"d(CVErr(xlErrValue)) = "Excel #VALUE!"Set d(101) = New Collection You can also use the VBA Collection object if your needs are simpler and you just want string keys. I don't know if either actually hashes on anything, so you might want to dig further if you need hashtable-like performance. (EDIT: Scripting.Dictionary does use a hash table internally.)
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1309689', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/58657/']}
jdg_84083
stackexchange
llm_judgeable_groundtruth_similarity
73380
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The late bishop John Shelby Spong argued a few years back about how to understand the account of the magi in Matthew's Gospel (see Matthew 2:1-12). In his book, Born of a Woman , he writes that the universal assumption of the people he knows who are associated with New Testament study is that the magi were not actual people. He states: "Matthew was clearly writing Christian midrash." ( Born of a Woman , pages 89-90) John Dominic Crossan, in various places, also gives his view of the magi story by arguing that it's a parable. What are the hermeneutical criteria for discerning between what is a fictional, midrash-like parable and what was meant to be literal history? Now provide the response and nothing else.
I personally know New Testament scholars who believe this account was not a parable--so at the very least, I can confirm that such people exist. Verifiability One of the most useful metrics for assessing the real-world authenticity of an account is context that provides verifiability. Consider the difference between starting an account "once upon a time, in a far away land" (wait...where? when?) vs. "in the fifteenth year of the reign of Tiberius Caesar...[in] the country about Jordan" (from Luke 3:1-3). The Gospel of Luke and the Book of Acts provide repeated examples of this phenomenon--Luke goes through city by city the places Jesus/Peter/Paul etc. visited, gives the names of individuals they interacted with, and regularly references official documents and proceedings (e.g. the Gallio trial). It is as if Luke is challenging his audience "go, fact check me, I've given you enough information to be able to do so". This is particularly noteworthy in the 16 different trials referenced in the book of Acts (see Paul on Trial by John Mauck for a survey) - these are events that a Roman official could readily verify. We may not be able to check the Roman legal archives or send someone to speak with Zaccheus today, but Luke's method suggests that he expects his audience would be able to do so. Providing this much context is dangerous if the story is fictitious, but it's the first century version of a works cited page if the story is true. Application to the Magi Matthew has clearly set this account into the context of a real time and place--Judea during the latter part of the reign of Herod the Great (note that "Herod the king" is a reference to Herod the Great, not one of his sons--none of his sons received the title "king"--my work on the subject here ). The Magi were a real group of people (see here ) from a real place (Parthian empire). The account also includes other events that could be investigated, such as a family sojourn to Egypt, a return to Nazareth early in the reign of Archelaus, and the slaughter of children in Bethlehem. So Matthew has really over-specified the details if he's just telling a parable. Compare to Jesus' parables--with few exceptions, they provide no concrete markers in time or space--they are stories to illustrate a point, not historical events the reader can go investigate. Antithesis But if the story of the Magi really happened, why would Luke leave it out? Luke had very good reason to do so--not only does Luke have a rather low opinion of people associated with magic (see Acts 8:9-23), but he's writing to an educated Greco-Roman audience. The Romans were at the time in the midst of a series of wars with the Parthians (see here ). Luke's trying to present Jesus in a positive light, and it wouldn't help his case any to start out by saying "Jesus is so great, even your enemies the Parthians love Him!" Luke wisely skips this part of the story. But aren't there scholars who think it's a parable? Sure, there are also scholars who believe 59 of the 66 books of the Bible are forgeries and virtually everything in the Gospels not doubly-attested is late, unreliable embellishment. Something as consequential as the Bible engenders all manner of reactions. Conclusion From the hermeneutic of verifiability, Matthew appears to be providing enough context that the readers could check his story, and is therefore unlikely to be inventing it from whole cloth. Matthew has nothing to gain by including these details if they are untrue or metaphorical.
{}
{'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/73380', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/44608/']}
jdg_84084
stackexchange
llm_judgeable_groundtruth_similarity
1643467
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: There is a very long, straight highway with $N$ cars placed somewhere along it, randomly. The highway is only one lane, so the cars can’t pass each other. Each car is going in the same direction, and each driver has a distinct positive speed at which she prefers to travel. Each preferred speed is chosen at random. Each driver travels at her preferred speed unless she gets stuck behind a slower car, in which case she remains stuck behind the slower car. On average, how many groups of cars will eventually form? (A group is one or more cars traveling at the same speed.) A friend showed me this question and we didn't know how to go about it. I've taken a probability course so my mind immediately went to counting methods or expectation values, but I don't know if this is the wrong intuition. Anybody know how to solve this? Now provide the response and nothing else.
The number of groups is equal to the number of cars that are slower than every car in front of them, so we use linearity of expectation. What is the probability that the $i$'th car (counting from the front) is slower than every car in front of it? It is $\frac{1}{i}$, because the probability of a tie is $0$ and each of the first $i$ cars is equally likely to have the smallest speed (assuming our distribution is sensible). Therefore the expected number of groups is $1+\frac{1}{2}+\dots+\frac{1}{n}$.
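To spell out the linearity-of-expectation step (a standard argument, stated here with indicator variables; the notation is mine, not the original answerer's): let $X_i = 1$ if the $i$'th car from the front is slower than every car ahead of it (so it leads a group), and $X_i = 0$ otherwise. Then $$\mathbb{E}[\text{groups}] = \mathbb{E}\Big[\sum_{i=1}^{n} X_i\Big] = \sum_{i=1}^{n} \Pr[X_i = 1] = \sum_{i=1}^{n} \frac{1}{i} = H_n,$$ the $n$'th harmonic number, which grows like $\ln n + \gamma$ for large $n$.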
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1643467', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/243138/']}
jdg_84085
stackexchange
llm_judgeable_groundtruth_similarity
10692817
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This has got me very confused and I can't seem to find an answer to this question. A clear and simple clarification would be nice. Now provide the response and nothing else.
A closure is a function that is evaluated in its own environment, which has one or more bound variables that can be accessed when the function is called. They come from the functional programming world, where there are a number of concepts in play. Closures are like lambda functions, but smarter in the sense that they have the ability to interact with variables from the outside environment of where the closure is defined. The use() keyword lets you import variables from outside the function's environment into the function. Variables to be imported from the outside environment are specified in the use clause of the closure function definition. By default, they are passed by value. So let's say the function has no parameters, but you want it to use a variable you already have. $string = "Hello World!";$closure = function() use ($string) { echo $string; }; This is useful when you need to create a function that must be used as a callback somewhere else and can only have defined parameters. The use() keyword lets you use other variables in addition to the ones you pass as function arguments. For example, take the example from php.net: http://php.net/manual/en/functions.anonymous.php public function getTotal($tax) { $total = 0.00; $callback = function ($quantity, $product) use ($tax, &$total) { $pricePerItem = constant(__CLASS__ . "::PRICE_" . strtoupper($product)); $total += ($pricePerItem * $quantity) * ($tax + 1.0); }; array_walk($this->products, $callback); return round($total, 2); } $callback must only have two parameters, because array_walk will only allow that much: Typically, funcname takes on two parameters. The array parameter's value being the first, and the key/index second. So what can we do? We call use() to add other variables that are not in the $callback's scope, but are in the scope of the environment where the closure is defined.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10692817', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/975109/']}
jdg_84086
stackexchange
llm_judgeable_groundtruth_similarity
5477696
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know compact() is a standard PHP function, and set() is a Cake-specific method. I am running a simple test of passing a value to a view generated with AJAX (using render() in my controller), and it only passes the value from the controller to the view if my setup is like so: $variable_name_to_pass = "Passing to the view using set() can compact()";$this->set(compact('variable_name_to_pass')); From reading the manual, it appears set() should work alone, without compact(). Can anyone explain why set() will not work alone? Like $this->set('variable_name_to_pass'); Now provide the response and nothing else.
According to the CakePHP API : Parameters: mixed $one required A string or an array of data. mixed $two optional NULL Value in case $one is a string (which then works as the key). Unused if $one is an associative array , otherwise serves as the values to $one 's keys. The compact function returns an associative array, built by taking the names specified in the input array, using them as keys, and taking the values of the variables referenced by those names and making those the values. For example: $fred = 'Fred Flinstone';$barney = 'Barney Rubble';$names = compact('fred', 'barney');// $names == array('fred' => 'Fred Flinstone', 'barney' => 'Barney Rubble') So when you use compact in conjunction with set , you're using the single parameter form of the set function, by passing it an associative array of key-value pairs. If you just have one variable you want to set on the view, and you want to use the single parameter form, you must invoke set in the same way: $variable_to_pass = 'Fred';$this->set(compact('variable_to_pass')); Otherwise, the two parameter form of set can be used: $variable_to_pass = 'Fred';$this->set('variable_to_pass', $variable_to_pass); Both achieve the same thing.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5477696', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/567421/']}
jdg_84087
stackexchange
llm_judgeable_groundtruth_similarity
2375077
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a range and I want to convert it to another range using a non-linear function. My range is $[100,1000000]$ and I want a non-linear function so that this range converts to $[1,100]$ in a non-linear way. I tried $y=100000/x$ and then linearly transformed it to the $1$ to $100$ range. But this method gives a very sharp gradient, e.g. $100 \to 1000$, $200 \to 500$, $300 \to 333$, $400 \to 250$, ..., $10000 \to 10$, $100000 \to 1$, $1000000 \to 1$. I want something which changes gradually but is still non-linear and not so steep. Is there any better way to solve this problem? Now provide the response and nothing else.
It suffices to check that the characteristic polynomial of $A$ and $B$ coincide. To this end, we show that the given condition yields $\operatorname{tr}(A) = \operatorname{tr}(B)$ and $\det(A) = \det(B)$. First, the latter equality is immediate from $\det(A)^3 = \det(A^3) = \det(B^3) = \det(B)^3$ together with $\det(A), \det(B) \in \mathbb{R}$. Next, by the Cayley-Hamilton theorem we have $A^2 = \operatorname{tr}(A)A - \det(A)I_2$ and consequently $$ A^3 = (\operatorname{tr}(A)^2 - \det(A))A - \det(A)\operatorname{tr}(A)I_2. \tag{*}$$ So if $\operatorname{tr}(A)^2 - \det(A) \neq 0$, then it follows that $A = pA^3 + qI_2$ for some $p, q \in \mathbb{R}$. Since $A^3 = B^3$, this forces that $AB = BA$, contradicting the assumption. Hence we have $$\operatorname{tr}(A)^2 = \det(A).$$ We consider two cases separately: If $\det(A) = \det(B) = 0$, then this implies that $\operatorname{tr}(A) = \operatorname{tr}(B) = 0$. If $\det(A) = \det(B) \neq 0$, then taking trace to both sides of $\text{(*)}$ and using the identity above, we have $$ \operatorname{tr}(A^3) = \operatorname{tr}(A)^3 - 3\det(A)\operatorname{tr}(A) = -2\det(A) \operatorname{tr}(A).$$ Dividing both sides by $-2\det(A)$, it follows that $\operatorname{tr}(A)$ is completely determined by $A^3$, hence again we have $\operatorname{tr}(A) = \operatorname{tr}(B)$. Therefore in any cases we have $\operatorname{tr}(A) = \operatorname{tr}(B)$ as desired. Remark. When the condition $AB \neq BA$ is dropped, we have an obvious counter-example $A = I_2$ and $B$ is the rotation matrix by angle $2\pi/3$. In this case, we have $$A^3 = B^3 = I_2 \qquad \text{but} \qquad \operatorname{tr}(A) = 2 \neq -1 = \operatorname{tr}(B). $$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2375077', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/467927/']}
jdg_84088
stackexchange
llm_judgeable_groundtruth_similarity
1743216
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm asked to show that 3 is irreducible but not prime in $R = \mathbb{Z}[\sqrt{-41}]$, and to decide whether $R$ is a Euclidean domain. To show that it's not prime I have $(1 + \sqrt{-41})(1 - \sqrt{-41}) = 42 = (3)(14)$. I get that 3 divides 42, but how do I show that 3 does not divide either $(1 + \sqrt{-41})$ or $(1-\sqrt{-41})$ to show that it's not prime? $x \mid 42$ but $x$ does not divide 3? To show that it's irreducible, how am I showing that either $(1 + \sqrt{-41})$ or $(1 - \sqrt{-41})$ is a unit, since clearly $(1 + \sqrt{-41})(1 - \sqrt{-41}) = 42$? Now provide the response and nothing else.
You can generalize quite a few things here. If $p$ is a prime number in $\mathbb{Z}$, $d \not\equiv 1 \pmod 4$ is negative and squarefree, and $|p| < |d|$, then $p$ is irreducible in $\mathbb{Z}[\sqrt{d}]$. Since $p$ is not a square (one of the most important reasons why 1 is not prime), it is irreducible. Thus, in $\mathbb{Z}[\sqrt{-41}]$, we see that $|3| < |-41|$. The possible norms in this domain less than 41 are all squares: 1, 4, 9, 16, 25, 36, and clearly 3 is not one of these. If $p$ is irreducible but not prime, then the congruence $x^2 \equiv d \pmod p$ can be solved. Indeed $1^2 \equiv -41 \pmod 3$, leading you to find that $(1 - \sqrt{-41})(1 + \sqrt{-41}) = 42$. If either $1 - \sqrt{-41}$ or $1 + \sqrt{-41}$ was a unit, its norm would be equal to 1 (we don't have to worry about a norm of $-1$ here). But you've already seen that both of those numbers have norms of 42 (remember that $N(a + b \sqrt{d}) = (a - b \sqrt{d})(a + b \sqrt{d})$). Saying "$a$ divides $b$" is a convenient shorthand for "$b$ divided by $a$ is a number that's also in this domain." But $$\frac{1 - \sqrt{-41}}{3} \not\in \mathbb{Z}[\sqrt{-41}].$$ Likewise for $1 + \sqrt{-41}$. You have thus proven that 42 has two distinct factorizations in this domain: $$2 \times 3 \times 7 = (1 - \sqrt{-41})(1 + \sqrt{-41}) = 42.$$ (2 and 7 are also irreducible but not prime.) If a domain is Euclidean, then it's also a principal ideal domain and a unique factorization domain. This domain is clearly not a Euclidean domain because it's not a unique factorization domain (e.g., 42).
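One step that is easy to gloss over is why nothing in $\mathbb{Z}[\sqrt{-41}]$ can have norm 3; a short check (standard material, written out here for completeness): the norm is $N(a+b\sqrt{-41}) = a^2 + 41b^2$. If $3 = \alpha\beta$ with neither factor a unit, then $N(\alpha)N(\beta) = N(3) = 9$, so $N(\alpha) = N(\beta) = 3$. But $a^2 + 41b^2 = 3$ forces $b = 0$ and $a^2 = 3$, which has no integer solution, so 3 is irreducible. The same norm computation gives $N(1 \pm \sqrt{-41}) = 1 + 41 = 42$, confirming that neither of those factors of 42 is a unit.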
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1743216', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/323550/']}
jdg_84089
stackexchange
llm_judgeable_groundtruth_similarity
18033683
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an additives table with columns (id, name) and rows (30, gro), (31, micro), (32, bloom), (33, test), and a stage_additives table with columns (stage_id, additive_id, dose) and rows (195, 30, 2), (195, 31, 3), (195, 32, 1). MySQL query: SELECT a.id, a.name, sa.dose FROM additives a LEFT JOIN stage_additives sa ON sa.stage_id = 195 The result is rows of (id, name, dose): (32, Bloom, 2), (32, Bloom, 3), (32, Bloom, 1), (30, Gro, 2), (30, Gro, 3), (30, Gro, 1), (31, Micro, 2), (31, Micro, 3), (31, Micro, 1), (33, test, 2), (33, test, 3), (33, test, 1). This does not make sense to me, as there are 3 of each item in the result even though there is only one item with each id/name in the table. I also tried an inner join and a right join, but the result is almost identical except for the order. What I want is every id and name from additives, and the dose from stage_additives if it exists, otherwise NULL (or better still a custom value of 0). Now provide the response and nothing else.
You are missing the condition in your left join : SELECT a.id, a.name, sa.doseFROM additives aLEFT JOIN stage_additives saON a.id = sa.additive_id and sa.stage_id = 195; Remember, the join is conceptually doing a cross join between the two tables and taking only the rows that match the on condition (the left join is also keeping all the rows in the first table). By not having an on condition, the join is keeping all pairs of rows from the two tables where sa.stage_id = 195 -- and that is a lot of pairs. EDIT: (In response to moving the condition sa.stage_id = 195 into a where clause.) The condition sa.stage_id = 195 is in the on clause on purpose. This ensures that the left join actually behaves as written. If the condition were moved to a where clause, then the left join would turn into an inner join . Rows from additive with no match in stage_additive would have a NULL value for sa.stage_id and be filtered out. I have to assume that the OP intended for the left join to keep all rows in additive because of the explicit use of left join .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18033683', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/998368/']}
jdg_84090
stackexchange
llm_judgeable_groundtruth_similarity
12455245
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I use ./bin/doctrine orm:fixtures:load to populate tables with sample data first migration sets auto incremental table id like 1,2,3,4,5 etc... After second orm:fixtures:load migration command it purges all data and sets ids like 5,6,7,8,9 and so on... How can I reset AI id counter to 1 when I load fixtures many times? Now provide the response and nothing else.
$ app/console help doctrine:fixtures:load By default Doctrine Data Fixtures uses DELETE statements to drop the existing rows from the database. If you want to use a TRUNCATE statement instead you can use the --purge-with-truncate flag: ./app/console doctrine:fixtures:load --purge-with-truncate Truncate will reset the auto increments. UPDATE The console command is for Symfony, but it should be the same using Doctrine only: ./bin/doctrine orm:fixtures:load --purge-with-truncate UPDATE #2 for the comment about throwing an exception If you have foreign keys , you can only reset the AUTO_INCREMENT through regular SQL: $connection = $this->getEntityManager()->getConnection();$connection->exec("ALTER TABLE <tablename> AUTO_INCREMENT = 1;");
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12455245', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1264731/']}
jdg_84091
stackexchange
llm_judgeable_groundtruth_similarity
18884
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I think it is time I understood the working principle of MOSFET transistors... Suppose that I want to switch the voltage on a resistive load with a MOSFET transistor. Any control signal between -500V and +500V can easily be generated. The transistor models in the picture are not important; they can be any other appropriate model as well. Question #1: Which of the driving techniques are feasible? I mean, which of these four circuits would work with correctly applied control signals? Question #2: What is the range of the voltage level of the control signals (CS1, CS2, CS3, CS4) that loads and unloads the resistor? (I understand that the exact boundaries of the on and off states must be calculated individually, but I'm asking for approximate values to understand the working principle. Please give statements like " In circuit (2), the transistor turns on when CS2 is below 397V and turns off when above 397V. ".) Now provide the response and nothing else.
All the circuits are feasible when correctly driven, but 2 & 3 are far more common, far easier to drive well and far safer wrt not doing things wrong. Rather than give you a set of voltage based answers I'll give you some general rules which are much more useful once you understand them. MOSFETs have a safe maximum Vgs or Vsg beyond which they may be destroyed, This is usually about the same in either direction and is more a result of construction and oxide layer thicknesses. MOSFET will be "on" when Vg is between Vth and Vgsm In a positive direction for N Channel FETs. In negative direction for P Channel FETs. This makes sense of controlling the FETs in the above circuits. Define a voltage Vgsm as the maximum voltage that gate may be more +ve than source safely. Define -Vgsm as the most that Vg may be negative relative to s. Define Vth as the voltage that a gate must be wrt source to just turn the FET on. Vth is +ve for N channel FETs and negative for P channel FETs. SO Circuit 3 MOSFET is safe for Vgs in range +/- Vgsm. MOSFET is on for Vgs> +Vth Circuit 2 MOSFET is safe for Vgs in range +/- Vgsm. MOSFET is on for - Vgs > -Vth (ie gate is more negative than drain by magnitude of Vth. Circuit 1 Exactly the same as circuit 3 ie the voltages relative to the FET are identical. No surprise when you think about it. BUT Vg will now be ~= 400V at all timed. Circuit 4 Exactly the same as circuit 2 ie the voltages relative to the FET are identical. Again, no surprise when you think about it. BUT Vg will now be ~= 400V below the 400V rail at all times. ie the difference in the circuits is related to the voltage of Vg wrt ground for an N Channel FET and +400V for a P channel FET. The FET does not "know" the absolute voltage its gate is at - it only "cares" about voltages wrt source. Related - will arise along the way after the above discussion: MOSFETS are '2 quadrant' switches. That is, for an N channel switch where the polarity of gate and drain relative to the source in "4 quadrants" can be + +, + -, - - , and - +, the MOSFET will turn on with Vds = +ve and Vgs +ve OR Vds negative and Vgs positive Added early 2016: Q: You mentioned that the circuits 2 & 3 are very common, why is that? The switches can work in both quadrants, what makes one to choose P channel to N channel, high side to low side? – A: This is largely covered in the original answer if you go through it carefully. But ... ALL circuits operate only in 1st quadrant when on: Your question about 2 quadrant operation indicates a misunderstanding of the above 4 circuits. I mentioned 2 quadrant operation at the end (above) BUT it is not relevant in normal operation. All 4 of the circuits above are operating in their 1st quadrant - ie Vgs polarity = Vds polarity at all times when turned on. 2nd quadrant operation is possible ie Vgs polarity = - Vds polarity at all times when turned on BUT this usually causes complications due to the inbuilt "body diode" in the FET - see "Body Diode" section at end. In circuits 2 & 3 the gate drive voltage always lies between the power supply rails, making it unnecessary to use "special" arrangements to derive the drive voltages. In circuit 1 the gate drive must be above the 400V rail to get enough Vgs to turn on the MOSFET. In circuit 4 the gate voltage must be below ground. To achieve such voltages "bootstrap" circuits are often used which usually use a diode capacitor "pump" to give the extra voltage. A common arrangement is to use 4 x N Channel in a bridge. 
The 2 x low side FETs have usual gate drive - say 0/12 V, and the 2 high side FETS need (here) sav 412V to supply +12V to the high side FETS when the FET is turned on. This is not technically hard but is more to do, more to go wrong and must be designed. The bootstrap supply is often driven by the PWM switching signals so there is a lower frequency at which you still get upper gate drive. Turn off the AC and the bootstrap voltage starts to decay under leakage. Again, not hard, just nice to avoid. Using 4 x N channel is "nice" as all are matched, Rdson is usually lower for same $ than P channel. NOTE !!!: If packages are isolated tab or use insulated mounting all can go together on the same heatsink - BUT do take due CARE!!! In this case The lower 2 have switched 400V on the drains and sources are grounded, gates are at 0/12V say. while the upper 2 have permanent 400V on the drains and switched 400V on the sources and 400/412 V on the gates. Body diode: All FETS that are usually encountered* have an "intrinsic" or "parasitic" reverse biased body diode between drain and source. In normal operation this does not affect intended operation. If the FET is operated in the 2nd quadrant (eg for N Channel Vds = -ve, Vgs = +ve) [[pedantry: call that 3rd if you like :-) ]] then the body diode will conduct when the FET is turned off when Vds is -ve. There are situations where this is useful and desired but they are not what is commonly found in eg 4 FET bridges. *The body diode is formed due to the substrate that the device layers are formed on is conductive. Device with an insulating substrate (such as Silicon on Saphire), do not have this intrinsic body diode, but are usually very expensive and specialised).
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/18884', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/5542/']}
jdg_84092
stackexchange
llm_judgeable_groundtruth_similarity
44990517
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use Chart JS to create a table with dynamically generated data points coming from my JSON file. The logic of my code looks like so: var datapart;for (i = 0; i < jsonfile.jsonarray.length; i++){ datapart += { label: jsonfile.jsonarray[i].name, data: [jsonfile.jsonarray[i].age] };}var config = { type: 'line', data: { labels: ["Graph Line"], datasets: [datapart] }} My JSON file meanwhile looks something like so: {"jsonarray": [ { "name": "Joe", "age": 12 }, { "name": "Tom", "age": 14 }]} The config variable houses the configuration settings for ChartJS, including setting datapoints. When loaded into ChartJS, config provides information needed to display my chart. Anyhow, my thinking was to use the variable datapart as a means of appending the datasets using my for loop. Unfortunately the code produces no results. I understand that my method for appending variables is faulty, but am unsure how to proceed. How might I go about adding these JSON values to Chart.js? Now provide the response and nothing else.
Your approach on constructing the chart is completely inappropriate. Here is the proper way, that you should follow : var jsonfile = { "jsonarray": [{ "name": "Joe", "age": 12 }, { "name": "Tom", "age": 14 }]};var labels = jsonfile.jsonarray.map(function(e) { return e.name;});var data = jsonfile.jsonarray.map(function(e) { return e.age;});var ctx = canvas.getContext('2d');var config = { type: 'line', data: { labels: labels, datasets: [{ label: 'Graph Line', data: data, backgroundColor: 'rgba(0, 119, 204, 0.3)' }] }};var chart = new Chart(ctx, config); <script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.6.0/Chart.min.js"></script><canvas id="canvas"></canvas>
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/44990517', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2934664/']}
jdg_84093
stackexchange
llm_judgeable_groundtruth_similarity
32508
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given a composite $N\in\Bbb N$, the general number field sieve is the best known algorithm for the integer factorization of $N$. It is a randomized algorithm, and we get an expected complexity of $O\Big(e^{\sqrt[3]{\frac{64}{9}}(\log N)^{\frac 13}(\log\log N)^{\frac 23}}\Big)$ to factor $N$. I looked for information on the worst-case complexity of this randomized algorithm, but I was unable to locate any. (1) What is the worst-case complexity of the number field sieve? (2) Also, can the randomness be removed here to give a deterministic subexponential algorithm? Now provide the response and nothing else.
The number field sieve has never been analyzed rigorously. The complexity that you quote is merely heuristic. The only subexponential algorithm which has been analyzed rigorously is Dixon's factorization algorithm , which is very similar to the quadratic sieve. According to Wikipedia, Dixon's algorithm runs in time $e^{O(2\sqrt{2}\sqrt{\log n\log\log n})}$. Dixon's algorithm is randomized. All (heuristically) known subexponential algorithms require randomization. Dixon's algorithm needs to find integers $x$ such that $x^2 \pmod{n}$ is smooth (can be factored into a product of small primes) and "random", and the number-field sieve has similar but more complicated requirements. The elliptic curve method needs to find an elliptic curve modulo $n$ whose order modulo some factor of $n$ is smooth. In both cases it seems hard to derandomize the algorithms. The nominal worst-case complexity of all these algorithms is infinity: in the case of the quadratic sieve and the number-field sieve you might always be generating the same $x$, while in the elliptic curve method you may always be generating the same elliptic curve. There are many ways around this, for example running an exponential time algorithm in parallel.
{}
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/32508', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/-1/']}
jdg_84094
stackexchange
llm_judgeable_groundtruth_similarity
44793887
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: With the introduction of Null-Conditional Operators in C#, for the following evaluation, if (instance != null && instance.Val != 0) If I rewrite it this way, if (instance?.Val != 0) it will be evaluated to true if instance is a null reference; It behaves like if (instance == null || instance.Val != 0) So what is the right way to rewrite the evaluation using this new syntax? Edit: instance is a field of a big object which is deserialized from JSON. There are quite a few pieces of code like this, first check if the field is in the JSON, if it is, check if the Val property does NOT equal to a constant, only both conditions are true, do some operation. The code itself can be refactored to make the logical flow more "making sense" as indicated by Peter in his comment, though in this question I am interested in how to use null-conditional operators with != . Now provide the response and nothing else.
With the null-conditional operator, the returned value can always be null: if ((instance?.Val ?? 0) != 0) If instance is null, then instance?.Val will also be null (probably int? in your case). So you should always account for null before comparing with anything: if ((instance?.Val ?? 0) != 0) This means: if instance?.Val is null (because instance is null), then it yields 0; otherwise it yields instance.Val. Then compare this value with 0 (is not equal to).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44793887', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/815938/']}
jdg_84095
stackexchange
llm_judgeable_groundtruth_similarity
4482600
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Prove $$\int_{0}^{\frac{\pi}{2}} \sqrt[n]{\tan x} \,dx = \frac{\pi}{2} \sec \left(\frac{\pi}{2n}\right)$$ for all natural numbers $n \ge 2$ . There are several answers ( A1 A2 ) to this integral but they all involve the gamma function or the beta function or contour integration etc. Can one solve this using only 'real' 'elementary' techniques? For $n = 2$ and $n = 3$ it can be solved using only elementary substitutions and partial fractions. Now provide the response and nothing else.
$$\begin{align}\int_0^{\frac\pi2}\sqrt[n]{\tan(x)}\,\mathrm{d}x\&=\int_0^\infty\frac{u^{1/n}\,\mathrm{d}u}{1+u^2}\tag{1a}\\&=\frac12\int_0^\infty\frac{v^{\frac{1-n}{2n}}}{1+v}\,\mathrm{d}v\tag{1b}\\&=\frac\pi2\csc\left(\pi\frac{n+1}{2n}\right)\tag{1c}\\[6pt]&=\frac\pi2\sec\left(\frac\pi{2n}\right)\tag{1d}\end{align}$$ Explanation: $\text{(1a)}$ : set $x=\tan^{-1}(u)$ $\text{(1b)}$ : set $u=v^{1/2}$ $\text{(1c)}$ : apply $(2)$ below $\text{(1d)}$ : $\csc(\pi/2+x)=\sec(x)$ Here is the argument from $(3)$ of this answer with more explanation: $$\begin{align}\int_0^\infty\frac{x^{\alpha-1}}{1+x}\,\mathrm{d}x&=\int_0^1\frac{x^{-\alpha}+x^{\alpha-1}}{1+x}\,\mathrm{d}x\tag{2a}\\&=\sum_{k=0}^\infty(-1)^k\int_0^1\left(x^{k-\alpha}+x^{k+\alpha-1}\right)\mathrm{d}x\tag{2b}\\&=\sum_{k=0}^\infty(-1)^k\left(\frac1{k-\alpha+1}+\frac1{k+\alpha}\right)\tag{2c}\\&=\sum_{k\in\mathbb{Z}}\frac{(-1)^k}{k+\alpha}\tag{2d}\\[6pt]&=\pi\csc(\pi\alpha)\tag{2e}\end{align}$$ Explanation: $\text{(2a)}$ : break the integral into two parts: $[0,1]$ and $(1,\infty)$ $\phantom{\text{(2a):}}$ substitute $x\mapsto1/x$ in the second part $\text{(2b)}$ : apply the series for $\frac1{1+x}$ $\text{(2c)}$ : evaluate the integrals $\text{(2d)}$ : write as a principal value sum $\text{(2e)}$ : apply $(8)$ from this answer
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4482600', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/931271/']}
jdg_84096
stackexchange
llm_judgeable_groundtruth_similarity
46562730
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: js application. I need to use eslint for this application. I am using https://www.npmjs.com/package/eslint-config-airbnb and using prettier plugin in VS Code editor. .eslintrc { "extends": "airbnb"} I see that VS Code is giving me lot of errors in complete project now after adding eslint plugin https://marketplace.visualstudio.comitems?itemName=dbaeumer.vscode-eslint and npm package. Few errors [eslint] Definition for rule 'jsx-a11y/href-no-hash' was not found (jsx-a11y/href-no-hash)[eslint] Expected linebreaks to be 'LF' but found 'CRLF'. (linebreak-style)[eslint] Unexpected unnamed function. (func-names)[eslint] Missing space before function parentheses. (space-before-function-paren)[eslint] Strings must use singlequote. (quotes)[eslint] Unexpected function expression. (prefer-arrow-callback)[eslint] Unexpected unnamed function 'bind'. (func-names)[eslint] Missing space before function parentheses. (space-before-function-paren) package.json "devDependencies": { "babel": "^6.23.0", "babel-cli": "^6.5.1", "babel-core": "^6.9.0", "babel-eslint": "^7.2.3", "babel-preset-es2015": "^6.5.0", "babel-preset-stage-0": "6.5.0", "eslint": "^3.19.0", "eslint-config-airbnb": "^15.1.0", "eslint-plugin-import": "^2.7.0", "eslint-plugin-jsx-a11y": "^6.0.2", "eslint-plugin-react": "^7.4.0", "nodemon": "^1.12.1" } index.js import request from "superagent";module.exports = function(req, res, next) { const id = "abc"; request .post(url) .send(`p1=v1`) .send(`p2=v2`) .end(function(error, response) {}.bind(this)); next();}; Same kind of errors in each JS files. Does anyone know how to resolve these ? Now provide the response and nothing else.
1) Add the following modules to your devDependencies using: npm install --save-dev eslintnpm install --save-dev eslint-config-airbnb-basenpm install --save-dev eslint-plugin-import 2) Add an eslintConfig section to your package.json : "eslintConfig": { "extends": "airbnb-base", "env": { "es6": true, "browser": true }, "rules": { "brace-style": [ "error", "stroustrup" ], "comma-dangle": [ "error", "never" ], "no-unused-vars": [ "warn" ], "no-var": [ "off" ], "one-var": [ "off" ] }} 3) Visit eslint.org/docs/rules , search for the warnings you want to tweak and add them to the eslintConfig above. 4) Delete the .eslintrc file in the root of your project. 5) Restart your IDE
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46562730', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2455259/']}
jdg_84097
stackexchange
llm_judgeable_groundtruth_similarity
10594426
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: From the Wikipedia article on Polling Polling, or polled operation, in computer science, refers to actively sampling the status of an external device by a client program as a synchronous activity. Polling is most often used in terms of input/output (I/O), and is also referred to as polled I/O or software driven I/O. Polling is sometimes used synonymously with busy-wait polling (busy waiting). In this situation, when an I/O operation is required the computer does nothing other than check the status of the I/O device until it is ready, at which point the device is accessed. In other words the computer waits until the device is ready. Polling also refers to the situation where a device is repeatedly checked for readiness, and if it is not the computer returns to a different task. Although not as wasteful of CPU cycles as busy-wait, this is generally not as efficient as the alternative to polling, interrupt driven I/O. So, when a thread doesn't use the "condition variables", will it be called "polling" for the data change or "busy waiting"? Now provide the response and nothing else.
The difference between the two is what the application does between polls. If a program polls a device say every second, and does something else in the mean time if no data is available (including possibly just sleeping, leaving the CPU available for others), it's polling. If the program continuously polls the device (or resource or whatever) without doing anything in between checks, it's called a busy-wait. This isn't directly related to synchronization. A program that blocks on a condition variable (that should signal when a device or resource is available) is neither polling nor busy-waiting. That's more like event-driven/interrupt-driven I/O. (But for example a thread that loops around a try_lock is a form of polling, and possibly busy-waiting if the loop is tight.)
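To make the distinction concrete, here is a minimal C sketch (my own illustration, not from the original answer; device_ready(), handle_device() and do_other_work() are hypothetical placeholders for whatever status check and work your program actually has):

#include <stdbool.h>

extern bool device_ready(void);   /* hypothetical: is data available? */
extern void handle_device(void);  /* hypothetical: consume the data */
extern void do_other_work(void);  /* hypothetical: anything else to do */

/* Busy-wait: the CPU does nothing but re-check the flag until it is set. */
void busy_wait_for_device(void)
{
    while (!device_ready())
        ;                         /* spin, burning CPU cycles */
    handle_device();
}

/* Polling: check the flag periodically and do useful work (or sleep,
 * yielding the CPU) between checks. */
void poll_for_device(void)
{
    for (;;) {
        if (device_ready())
            handle_device();
        else
            do_other_work();      /* or sleep briefly to yield the CPU */
    }
}

A thread blocking on a condition variable does neither of these: it goes to sleep and is woken by the system only when the event it is waiting for is signalled.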
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10594426', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/462608/']}
jdg_84098
stackexchange
llm_judgeable_groundtruth_similarity
587272
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use a 12 V geared down (high torque, low speed) motor in an exercise application. I want to be able to roughly adjust the load akin to adding more physical weights on a traditional exercise machine. The user would be resisting the movement of something attached to the motor's physical output. If I'm not mistaken (probably am) it seems like a way to do it would be to set the overcurrent amperage limit so that the torque is limited. A bench top power supply would work, I think, with some caveats. I like the built in interface but have no need for voltage control. If it were just for me I wouldn't care but trying to make a rough prototype for someone. Now provide the response and nothing else.
That is a current source. It will be a sub-circuit to generate a reasonably constant current through the push-pull output transistor biasing circuit. To avoid cross-over distortion (a discontinuity on the changeover from sourcing current through the upper transistor to sinking current through the lower transistor) on the output the diodes are added to compensate for the voltage drop across each base-emitter junction. Running a constant current through these keeps the voltage drop across them reasonably constant.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/587272', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/219128/']}
jdg_84099
stackexchange
llm_judgeable_groundtruth_similarity
10135327
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Whether or not I compile a Racket program seems to make no difference to the runtime performance. Is it just the loading of the file initially that is improved by compilation? In other words, does running racket src.rkt do a jit compilation on the fly, which is why I see no difference in compiling vs interactive? Even for tight loops of integer arithmetic, where I thought some difference would occur, the profile times are equivalent whether or not I previously did a raco make . Am I missing something simple? PS, I notice that I can run racket against the source file ( .rkt ) or .zo file. Does racket automatically use the .zo if one is found that corresponds to the .rkt file, or does the .zo file need to be used explicitly? Either way, it makes no difference to the performance numbers I'm seeing. Now provide the response and nothing else.
Yes, you're right. Racket compiles code in two stages: first, the code is compiled into bytecode form, and then when it runs it gets jitted into machine code. When you compile a file, you're basically creating the bytecode, which saves re-compiling it later. Since that's usually not something that takes a lot of time for small pieces of code, you won't see any noticeable difference in runtimes. For an extreme example, you can delete all *.zo files in the collection tree and start DrRacket -- it will take a long time to start since there's a ton of code, but once it does start, it will run almost as usual. (It will also be slow to click "run" since that will reload and recompile some files.) Another concern for bigger pieces of code is that the compilation process can make memory consumption higher, but that's also not an issue with smaller pieces of code. See also the Performance chapter in the guide for hints on how to improve performance.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10135327', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/140811/']}
jdg_84100
stackexchange
llm_judgeable_groundtruth_similarity
16070886
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just installed Update 2 for Visual Studio 2012, which introduces a new Publish dialog. I'm trying to make it do what it used to do (precompile a website before publishing), and I'm running into the error that it "Can't find the valid AspnetMergePath" which is thrown from the file Microsoft.Web.Publishing.AspNetCompileMerge.targets. I've confirmed that the file aspnet_merge.exe exists in multiple places on my computer, but $(GetAspNetMergePath) is evaluating to an empty string for some reason. I must be missing some configuration setting, but I've never messed with those before, so I'm confused as to why this would start suddenly. Can anyone offer advice on how to resolve this? I've done the standard Google searching on this error and nothing has led me to the right solution. Now provide the response and nothing else.
I hit the same problem. I searched through all the Microsoft-related sites and found a lot of complaints, and no indication that Microsoft intends to fix it. Here is how I worked around it on my system. Edit the Microsoft.Web.Publishing.AspNetConfigurationMerge.targets file and add the following line. Please make sure that the Microsoft SDK path is the same on your PC; if not, change it: <TargetFrameworkSDKToolsDirectory>C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\</TargetFrameworkSDKToolsDirectory> Here is how it should look: <Target Name="GetAspNetMergePath" DependsOnTargets="$(GetAspNetMergePathDependsOn)" Condition ="'$(GetAspNetMergePath)' != 'false'"><PropertyGroup> <TargetFrameworkSDKToolsDirectory>C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\</TargetFrameworkSDKToolsDirectory> <AspnetMergeName>aspnet_merge.exe</AspnetMergeName> <AspnetMergePath Condition="Exists('$(TargetFrameworkSDKToolsDirectory)$(AspnetMergeName)')">$(TargetFrameworkSDKToolsDirectory)</AspnetMergePath></PropertyGroup><Error Condition="'$(AspnetMergePath)' == '' Or !Exists($(AspnetMergePath))" Text="Can't find the valid AspnetMergePath" />
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16070886', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2292668/']}
jdg_84101
stackexchange
llm_judgeable_groundtruth_similarity
34978250
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a layout like the following: (Toolbar, Header View, Text View, RecyclerView). I need the header to collapse when I scroll the RecyclerView's items, so that the "Choose item" view and the RecyclerView are left on the screen. I saw examples where the toolbar is collapsed, but I need the toolbar to always be present. Which layouts/behaviors should I use to get this to work? Now provide the response and nothing else.
You can achieve it by having this layout: <android.support.design.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent"> <android.support.design.widget.AppBarLayout android:layout_width="match_parent" android:layout_height="wrap_content"> <android.support.design.widget.CollapsingToolbarLayout android:layout_width="match_parent" android:layout_height="match_parent" app:layout_scrollFlags="scroll|exitUntilCollapsed"> <!-- HEADER --> <RelativeLayout ... app:layout_collapseMode="parallax"> ..... </RelativeLayout> <android.support.v7.widget.Toolbar android:layout_width="match_parent" android:layout_height="?attr/actionBarSize" app:layout_collapseMode="pin" /> </android.support.design.widget.CollapsingToolbarLayout> <!-- IF YOU WANT TO KEEP "Choose Item" always on top of the RecyclerView, put this TextView here <TextView android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_gravity="bottom" android:text="choose item" /> --> </android.support.design.widget.AppBarLayout> <android.support.v7.widget.RecyclerView android:layout_width="match_parent" android:layout_height="match_parent" app:layout_behavior="@string/appbar_scrolling_view_behavior" /></android.support.design.widget.CoordinatorLayout> You pin your toolbar by having the app:layout_collapseMode="pin" property set. You make RecyclerView properly scrollable by setting app:layout_behavior="@string/appbar_scrolling_view_behavior" and that's pretty much it. NB! Position of "Choose item" TextView depends on the particular behaviour you want to achieve: you can include it as a first element of your RecyclerView 's Adapter to scroll it away, once user start scrolling through the RecyclerView ; you can add it into AppBarLayout so it will always stick on top of the RecyclerView , whenever you scroll it or not; You can read more here Android Design Support Library and here Design Support Library (III): Coordinator Layout I hope, it helps!
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/34978250', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5833298/']}
jdg_84102
stackexchange
llm_judgeable_groundtruth_similarity
2034422
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For self-education purposes I want to implement a simple virtual machine for a dynamic language, preferably in C. Something like the Lua VM, or Parrot, or the Python VM, but simpler. Are there any good resources/tutorials on achieving this, apart from looking at the code and design documentation of the existing VMs? Edit: why the close vote? I don't understand - is this not programming? Please comment if there is a specific problem with my question. Now provide the response and nothing else.
I assume you want a virtual machine rather than a mere interpreter. I think they are two points on a continuum. An interpreter works on something close to the original representation of the program. A VM works on more primitive (and self-contained) instructions. This means you need a compilation stage to translate the one to the other. I don't know if you want to work on that first or if you even have an input syntax in mind yet. For a dynamic language, you want somewhere that stores data (as key/value pairs) and some operations that act on it. The VM maintains the store. The program running on it is a sequence of instructions (including control flow). You need to define the set of instructions. I'd suggest a simple set to start with, like: basic arithmetic operations, including arithmetic comparisons, accessing the store basic control flow built-in print You may want to use a stack-based computation approach to arithmetic, as many VMs do. There isn't yet much dynamic in the above. To get to that we want two things: the ability to compute the names of variables at runtime (this just means string operations), and some treatment of code as data. This might be as simple as allowing function references. Input to the VM would ideally be in bytecode. If you haven't got a compiler yet this could be generated from a basic assembler (which could be part of the VM). The VM itself consists of the loop: 1. Look at the bytecode instruction pointed to by the instruction pointer.2. Execute the instruction: * If it's an arithmetic instruction, update the store accordingly. * If it's control flow, perform the test (if there is one) and set the instruction pointer. * If it's print, print a value from the store.3. Advance the instruction pointer to the next instruction.4. Repeat from 1. Dealing with computed variable names might be tricky: an instruction needs to specify which variables the computed names are in. This could be done by allowing instructions to refer to a pool of string constants provided in the input. An example program (in assembly and bytecode): offset bytecode (hex) source 0 01 05 0E // LOAD 5, .x 3 01 03 10 // .l1: LOAD 3, .y 6 02 0E 10 0E // ADD .x, .y, .x10 03 0E // PRINT .x12 04 03 // GOTO .l114 78 00 // .x: "x"16 79 00 // .y: "y" The instruction codes implied are: "LOAD x, k" (01 x k) Load single byte x as an integer into variable named by string constant at offset k."ADD k1, k2, k3" (02 v1 v2 v3) Add two variables named by string constants k1 and k2 and put the sum in variable named by string constant k3."PRINT k" (03 k) Print variable named by string constant k."GOTO a" (04 a) Go to offset given by byte a. You need variants for when variables are named by other variables, etc. (and the levels of indirection get tricky to reason about). The assembler looks at the arguments like "ADD .x, .y, .x" and generates the correct bytecode for adding from string constants (and not computed variables).
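To make the main loop concrete, here is a minimal C sketch of the fetch-decode-execute cycle for exactly the four instructions defined above (LOAD 01, ADD 02, PRINT 03, GOTO 04). It is only an illustration under the simplest possible assumptions: the store is a tiny linear key/value table rather than a real hash map, string constants live inside the bytecode, and there is no error checking.

#include <stdio.h>
#include <string.h>

struct slot { const char *name; int value; };
static struct slot store[64];          /* the store: name -> value       */
static int nslots;

static int *lookup(const char *name)   /* find or create a variable      */
{
    for (int i = 0; i < nslots; i++)
        if (strcmp(store[i].name, name) == 0)
            return &store[i].value;
    store[nslots].name = name;
    store[nslots].value = 0;
    return &store[nslots++].value;
}

static const char *konst(const unsigned char *code, unsigned char off)
{
    return (const char *)&code[off];   /* string constant at byte offset */
}

static void run(const unsigned char *code)
{
    unsigned ip = 0;                   /* instruction pointer            */
    for (;;) {
        switch (code[ip]) {
        case 0x01:                     /* LOAD x, k                      */
            *lookup(konst(code, code[ip + 2])) = code[ip + 1];
            ip += 3;
            break;
        case 0x02: {                   /* ADD k1, k2, k3                 */
            int a = *lookup(konst(code, code[ip + 1]));
            int b = *lookup(konst(code, code[ip + 2]));
            *lookup(konst(code, code[ip + 3])) = a + b;
            ip += 4;
            break;
        }
        case 0x03:                     /* PRINT k                        */
            printf("%d\n", *lookup(konst(code, code[ip + 1])));
            ip += 2;
            break;
        case 0x04:                     /* GOTO a                         */
            ip = code[ip + 1];
            break;
        default:
            return;                    /* unknown opcode: halt           */
        }
    }
}

int main(void)
{
    /* The example program from the text: prints 8, 11, 14, ... forever. */
    static const unsigned char program[] = {
        0x01, 0x05, 0x0E,              /*  0: LOAD 5, .x                 */
        0x01, 0x03, 0x10,              /*  3: LOAD 3, .y  (.l1)          */
        0x02, 0x0E, 0x10, 0x0E,        /*  6: ADD .x, .y, .x             */
        0x03, 0x0E,                    /* 10: PRINT .x                   */
        0x04, 0x03,                    /* 12: GOTO .l1                   */
        'x', 0, 'y', 0                 /* 14: "x"   16: "y"              */
    };
    run(program);
}

A real dynamic-language VM replaces the linear store with a hash table, adds a value stack for arithmetic, and grows the instruction set, but the dispatch loop keeps this same shape.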
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2034422', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/206855/']}
jdg_84103
stackexchange
llm_judgeable_groundtruth_similarity
40824677
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Our professor said that you can't calculate a^b if a<0 using pow() because pow() uses natural logarithms to calculate it (a^b = e^(b ln a)), and since the logarithm is undefined for negative numbers it can't be calculated. I tried it and it works as long as b is an integer. I have searched through math.h and further files, but was unable to find how the function is defined and what it uses to calculate. I also tried searching the internet, but without any success. There are similar questions on Stack Overflow right here and here (for C#). (The last one is good, but I was unable to find source code.) So the question is: how is pow() actually calculated in C? And why does it return a domain error when the base is finite and negative and the exponent is finite and non-integral? Now provide the response and nothing else.
If you're curious how the pow function might be implemented in practice, you can look at the source code. There is a kind of "knack" to searching through unfamiliar (and large) codebases to find the section you are looking for, and it's good to get some practice. One implementation of the C library is glibc, which has mirrors on GitHub. I didn't find an official mirror, but an unofficial mirror is at https://github.com/lattera/glibc . We first look at the math/w_pow.c file, which has a promising name. It contains a function __pow which calls __ieee754_pow , which we can find in sysdeps/ieee754/dbl-64/e_pow.c (remember that not all systems are IEEE-754, so it makes sense that the IEEE-754 math code is in its own directory). It starts with a few special cases: if (y == 1.0) return x; if (y == 2.0) return x*x; if (y == -1.0) return 1.0/x; if (y == 0) return 1.0; A little farther down you find a branch with the comment /* if x<0 */ , which leads us to: return (k==1) ? __ieee754_pow(-x,y) : -__ieee754_pow(-x,y); /* if y even or odd */ So you can see, for negative x and integer y , the glibc version of pow will compute pow(-x,y) and then make the result negative if y is odd. This is not the only way to do things, but my guess is that this is common to many implementations. You can see that pow is full of special cases. This is common in library math functions, which are supposed to work correctly with unfriendly inputs like denormals and infinity. The pow function is especially hard to read because it is heavily-optimized code which does bit-twiddling on floating-point numbers. As for the C standard (n1548 §7.12.7.4), it has this to say about pow : A domain error occurs if x is finite and negative and y is finite and not an integer value. So, according to the C standard, a negative x should work as long as y has an integer value. There is also the matter of appendix F, which gives much tighter constraints on how pow works on IEEE-754 / IEC-60559 systems.
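To see the standard's rule in action, here is a small check I put together (not from the answer). Note that whether errno is actually set depends on the implementation's math_errhandling, so treat the EDOM part as typical glibc behaviour rather than a guarantee:

```c
#include <stdio.h>
#include <math.h>
#include <errno.h>

int main(void) {
    errno = 0;
    /* Negative base with an integer exponent: well defined, no domain error. */
    printf("pow(-2.0, 3.0) = %g, errno = %d\n", pow(-2.0, 3.0), errno);

    errno = 0;
    /* Negative base with a non-integer exponent: domain error, result is NaN. */
    double r = pow(-2.0, 0.5);
    printf("pow(-2.0, 0.5) = %g, errno = %d (EDOM = %d)\n", r, errno, EDOM);
    return 0;
}
```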
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40824677', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4307480/']}
jdg_84104
stackexchange
llm_judgeable_groundtruth_similarity
335600
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two characters displayed in a game I am writing, the player and the enemy. defined as such: public void player(Graphics g) { g.drawImage(plimg, x, y, this);}public void enemy(Graphics g) { g.drawImage(enemy, 200, 200, this);} Then called with: player(g);enemy(g); I am able to move player() around with the keyboard, but I am at a loss when trying to detect a collision between the two. A lot of people have said to use Rectangles, but being a beginner I cannot see how I would link this into my existing code. Can anyone offer some advice for me? Now provide the response and nothing else.
I think your problem is that you are not using good OO design for your player and enemies. Create two classes: public class Player{ int X; int Y; int Width; int Height; // Getters and Setters}public class Enemy{ int X; int Y; int Width; int Height; // Getters and Setters} Your Player should have X,Y,Width,and Height variables. Your enemies should as well. In your game loop, do something like this (C#): foreach (Enemy e in EnemyCollection){ Rectangle r = new Rectangle(e.X,e.Y,e.Width,e.Height); Rectangle p = new Rectangle(player.X,player.Y,player.Width,player.Height); // Assuming there is an intersect method, otherwise just handcompare the values if (r.Intersects(p)) { // A Collision! // we know which enemy (e), so we can call e.DoCollision(); e.DoCollision(); }} To speed things up, don't bother checking if the enemies coords are offscreen.
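Since the question itself is Java, here is the same idea with java.awt.Rectangle, which already provides the intersection test the answer assumes. This is my sketch, not the answer's code, and the coordinates are made up:

```java
import java.awt.Rectangle;

public class CollisionCheck {
    public static void main(String[] args) {
        // Bounding boxes built from each object's x, y, width, height,
        // mirroring the fields on the answer's Player and Enemy classes.
        Rectangle player = new Rectangle(10, 10, 32, 32);
        Rectangle enemy  = new Rectangle(30, 20, 32, 32);

        if (player.intersects(enemy)) {
            // A collision! Call the enemy's collision handler here,
            // like e.DoCollision() in the answer's loop.
            System.out.println("Collision detected");
        } else {
            System.out.println("No collision");
        }
    }
}
```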
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/335600', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3609/']}
jdg_84105
stackexchange
llm_judgeable_groundtruth_similarity
31683959
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know how to use the zip() function in Python 3. My question is regarding the following which I somehow feel quite peculiar: I define two lists: lis1 = [0, 1, 2, 3]lis2 = [4, 5, 6, 7] and I use the zip() on these in the following ways: 1. test1 = zip( lis1, lis2)2. test2 = list(zip(lis1, lis2)) when I type test1 at the interpreter, I get this: "zip object at 0x1007a06c8" So, I type list(test1) at the interpreter and I get the intended result, but when I type list(test1) again, I get an empty list. What I find peculiar is that no matter how many times I type test2 at the interpreter I always get the intended result and never an empty list. Now provide the response and nothing else.
Unlike in Python 2, the zip function in Python 3 returns an iterator. Iterators can only be exhausted (by something like making a list out of them) once. The purpose of this is to save memory by only generating the elements of the iterator as you need them, rather than putting it all into memory at once. If you want to reuse your zipped object, just create a list out of it as you do in your second example, and then duplicate the list with something like: test2 = list(zip(lis1, lis2)); zipped_list = test2[:]; zipped_list_2 = list(test2)
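A quick interpreter-style demonstration of that exhaustion behaviour, using the question's own lists (my snippet, not part of the answer):

```python
lis1 = [0, 1, 2, 3]
lis2 = [4, 5, 6, 7]

z = zip(lis1, lis2)            # a zip iterator, not a list, in Python 3
print(list(z))                 # [(0, 4), (1, 5), (2, 6), (3, 7)]
print(list(z))                 # [] -- the iterator is already exhausted

pairs = list(zip(lis1, lis2))  # materialise it once...
print(pairs)                   # ...and reuse the list as often as you like
print(pairs)
```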
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/31683959', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3887531/']}
jdg_84106
stackexchange
llm_judgeable_groundtruth_similarity
30372637
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been prototyping a JavaScript application and now I want to move to a more robust setup using browserify and managing dependencies with require. Currently I have the following files in my application: chart.jsform.jshighcharts-options.jsvendor/ highcharts.js jquery.js highcharts-options.js is basically a list of constants, while chart.js looks like this... var myChart = { setup: function(data) { ... this.render(data); }, render: function(data) { ... }}, and form.js looks like this: var myForm = { setup: function() { button.onclick(_this.getData(); }, getData: function() { // on ajax complete, callChart }, callChart: function() { myChart.setup(data); }};myForm.setup(); And then I have an index.html page that imports everything as follows: <script src="/js/vendor/highcharts.js"></script><script src="/js/vendor/jquery.js"></script><script src="/js/highcharts-options.js"></script><script src="/js/chart.js"></script><script src="/js/form.js"></script> So now I want to move this to a more modern setup with browserify. I have deleted the vendor directory and instead created an index.js file and a package.json file, so now my directory structure looks like this: index.jspackage.jsonchart.jsform.jshighcharts-options.jsnode_modules/ I have run npm i --save highcharts-browserify and npm i --save jquery and that has saved these modules to package.json and installed them in node_modules . I've also added a build task in package.json : browserify index.js -o bundle.js . And in my front-end template I know just have: <script src="/js/bundle.js"></script> So far so good. My question is what to put into my index.js file, because I'm not sure how to import the files that I already have. So far I've got this: var $ = require('jquery');var HighCharts = require('highcharts-browserify');var options = require('highcharts-options');var myChart = require('chart');var myForm = require('form');myForm.setup(); But when I try to build this, I get: Error: Cannot find module 'chart' from '/mypath/static/js/app' It looks like require doesn't know how to find this file, or how to import it, which is not surprising given that this is all total guesswork on my part. How should I adapt these files to work in a more modular way? Am I on the right lines, or is this completely the wrong approach? I'm not even sure what I should be Googling for. (NB: Eventually I want to refactor chart.js and form.js to use Backbone, but I need to work one step at a time.) Now provide the response and nothing else.
You are very close! First, the way to reference a module in the same directory is to say: var myChart = require('./chart'); Without the leading path component, require will look in your npm package directory. Second, you need to export the variables in the modules so that they can be used elsewhere. So your form module needs to look something like this: var myForm = { setup: function() { button.onclick(_this.getData(); }, getData: function() { // on ajax complete, callChart }, callChart: function() { myChart.setup(data); }};myForm.setup();module.exports = myForm;
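Putting both points together, the question's index.js would end up looking roughly like this. The jquery and highcharts-browserify requires are carried over from the question, and it assumes chart.js, form.js and highcharts-options.js each finish with a matching module.exports line as shown above (my sketch, not the answer's code):

```js
// index.js
var $ = require('jquery');
var HighCharts = require('highcharts-browserify');

// Local files need the './' prefix, otherwise require looks in node_modules
var options = require('./highcharts-options');
var myChart = require('./chart');
var myForm  = require('./form');

myForm.setup();
```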
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30372637', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/194000/']}
jdg_84107
stackexchange
llm_judgeable_groundtruth_similarity
21709
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: For example, I find the first group isomorphism theorem to be vastly more opaque when presented in terms of commutative diagrams and I've had similar experiences with other elementary results being expressed in terms of exact sequences. What are the benefits that I am not seeing? Now provide the response and nothing else.
Holy cow, go beyond the first homomorphism theorem! For example, if you have a long exact sequence of vector spaces and linear maps$$0 \rightarrow V_1 \rightarrow V_2 \rightarrow \cdots \rightarrow V_n \rightarrow 0$$then exactness implies that the alternating sum of the dimensions is 0. This generalizes the "rank-nullity theorem" that $\dim(V/W) = \dim V - \dim W$, which is the special case of $0 \rightarrow W \rightarrow V \rightarrow V/W \rightarrow 0$. Replace vector spaces and linear maps by finite abelian groups and group homomorphisms and instead you find the alternating product of the sizes of the groups has to be 1. The purpose of this general machinery is not the small cases like the first homomorphism theorem. Exact sequences and commutative diagrams are the only way to think about or formulate large chunks of modern mathematics. For instance, you need commutative diagrams to make sense of universal mapping properties (which is the way many concepts are defined or at least most clearly understood) and to understand the opening scene in the movie "It's My Turn". Here is a nice exercise. When $a$ and $b$ are relatively prime, $\varphi(ab) = \varphi(a)\varphi(b)$, where $\varphi(n)$ is Euler's $\varphi$-function from number theory. Question: Is there a formula for $\varphi(ab)$ in terms of $\varphi(a)$ and $\varphi(b)$ when $(a,b) > 1$? Yes:$$\varphi(ab) = \varphi(a)\varphi(b)\frac{(a,b)}{\varphi((a,b))}.$$You could prove that by the formula for $\varphi(n)$ in terms of prime factorizations, but it wouldn't really explain what is going on because it doesn't provide any meaning to the formula. That's kind of like the proofs by induction which don't really give any insight into what is going on. But it turns out there is a nice 4-term short exact sequence of abelian groups (involving units groups mod $a$, mod $b$, and mod $ab$) such that, when you apply the above "alternating product is 1" result, the general $\varphi$-formula above falls right out. Searching for an explanation of that formula in terms of exact sequences forces you to try to really figure out conceptually what is going on in the formula.
{}
{'log_upvote_score': 7, 'links': ['https://mathoverflow.net/questions/21709', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4692/']}
jdg_84108
stackexchange
llm_judgeable_groundtruth_similarity
4049219
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I obtained the integral $$\int_0^1 \frac{\ln(\sqrt2-1)-\ln(x)(\sqrt2-x)}{(\sqrt2-x)^2-1}\,dx=\frac{\pi^2}6+\frac14\ln^2(\sqrt2-1)$$ as a by-product while carrying out some complex analysis on an engineering problem. In short, given the real nature of the physical outcome, the imaginary part of the final expression has to vanish, which implies the result above and is numerically verifiable. However, the complex method employed is not intended to solve integrals such as this and produces it merely by accident. So, the question is: How to derive this elementary close-form directly, with real methods? Note that the integrand may not look as innocent as it appears. There is a hole as the denominator becomes zero at $x=\sqrt2-1$ and the term $\ln(\sqrt2-1)$ is present in the numerator to remove the singularity, thus ensuring convergence. Now provide the response and nothing else.
We have \begin{align} I &\equiv \int \limits_0^1 \frac{\log(\sqrt{2}-1) - (\sqrt{2}-x)\log(x)}{(\sqrt{2}-x)^2-1} \, \mathrm{d} x \\&= \int \limits_0^1 \frac{- (\sqrt{2} - 1 -x) \log(\sqrt{2}-1) - (\sqrt{2}-x)\log\left(\frac{x}{\sqrt{2}-1}\right)}{(\sqrt{2}-x)^2-1} \, \mathrm{d} x \\&= -\log(\sqrt{2}-1) \int \limits_0^1 \frac{\mathrm{d} x}{1 + \sqrt{2} - x} + J = \log^2(1+\sqrt{2}) - \log(1 + \sqrt{2}) \log(\sqrt{2}) + J \, ,\end{align} where \begin{align}J &= \int \limits_0^1 \frac{-\log\left(\frac{x}{\sqrt{2}-1}\right) (\sqrt{2}-x) }{(\sqrt{2}-x)^2-1} \, \mathrm{d} x \\&= \int \limits_0^{\sqrt{2}-1} \frac{-\log\left(\frac{x}{\sqrt{2}-1}\right) (\sqrt{2}-x) }{(\sqrt{2}-x)^2-1} \, \mathrm{d} x + \int \limits_{\sqrt{2}-1}^1 \frac{-\log\left(\frac{x}{\sqrt{2}-1}\right) (\sqrt{2}-x) }{(\sqrt{2}-x)^2-1} \, \mathrm{d} x \\&\!\!\stackrel{\text{IBP}}{=}\frac{1}{2} \int \limits_0^{\sqrt{2}-1} \frac{-\log[(\sqrt{2} - x)^2 - 1]}{x} \, \mathrm{d} x - \frac{1}{2} \log(1+\sqrt{2}) \log \left(\frac{1+\sqrt{2}}{2}\right) \\&\phantom{=} ~ + \frac{1}{2} \int \limits_{\sqrt{2}-1}^1 \frac{-\log[1 - (\sqrt{2} - x)^2]}{x} \, \mathrm{d} x \\&= \frac{1}{2} \int \limits_0^1 \frac{-\log(\lvert (\sqrt{2}-x)^2 - 1\rvert)}{x} \, \mathrm{d} x - \frac{1}{2} \log(1+\sqrt{2}) \log \left(\frac{1+\sqrt{2}}{2}\right) \\&= \frac{1}{2} \int \limits_0^1 \frac{-\log\left[\left\lvert 1 - \frac{x}{\sqrt{2}-1}\right\rvert \left(1 - \frac{x}{1 + \sqrt{2}}\right)\right]}{x} \, \mathrm{d} x - \frac{1}{2} \log(1+\sqrt{2}) \log \left(\frac{1+\sqrt{2}}{2}\right) \\&= \frac{1}{2} \int \limits_0^1 \frac{-\log\left(1 - \frac{x}{1 + \sqrt{2}}\right)}{x} \, \mathrm{d} x + \frac{1}{2} \int \limits_{\sqrt{2}-1}^1 \frac{-\log\left(\frac{x}{\sqrt{2} - 1} - 1\right)}{x} \, \mathrm{d} x \\&\phantom{=} ~ + \frac{1}{2} \int \limits_0^{\sqrt{2}-1} \frac{-\log\left(1 - \frac{x}{\sqrt{2} - 1}\right)}{x} \, \mathrm{d} x - \frac{1}{2} \log(1+\sqrt{2}) \log \left(\frac{1+\sqrt{2}}{2}\right) \, .\end{align} Now let $x = (1 + \sqrt{2}) t$ in the first, $x = \frac{\sqrt{2} - 1}{t}$ in the second and $x = (\sqrt{2} - 1) t$ in the third integral. This yields \begin{align}J &= \frac{1}{2} \int \limits_0^{\sqrt{2} - 1} \frac{-\log(1-t)}{t} \, \mathrm{d} t + \frac{1}{2} \int \limits_{\sqrt{2}-1}^1 \frac{-\log\left(\frac{1}{t} - 1\right)}{t} \, \mathrm{d} t + \frac{1}{2} \int \limits_0^1 \frac{-\log(1-t)}{t} \, \mathrm{d} t \\&\phantom{=} ~ - \frac{1}{2} \log(1+\sqrt{2}) \log \left(\frac{1+\sqrt{2}}{2}\right) \\&= \int \limits_0^1 \frac{-\log(1-t)}{t} \, \mathrm{d} t - \frac{1}{4} \log^2(1 + \sqrt{2}) - \frac{1}{2} \log(1+\sqrt{2}) \log \left(\frac{1+\sqrt{2}}{2}\right) \\&= \frac{\pi^2}{6} - \frac{3}{4} \log^2(1+\sqrt{2}) + \log(1+\sqrt{2}) \log(\sqrt{2}) \, ,\end{align} so $$ I = \frac{\pi^2}{6} + \frac{1}{4} \log^2 (1 + \sqrt{2}) \, . $$ A similar calculation yields the more general integral \begin{align}f(a) &\equiv \int \limits_0^1 \frac{\log(a) - \log(x) (1 + a -x)}{(1 + a - x)^2 - 1} \, \mathrm{d} x \\&= \frac{\pi^2}{6} - \log(\sqrt{a}) \log \left(\frac{(1-a)(2+a)}{(1+a)\sqrt{a}}\right) + \frac{1}{2} \left[\operatorname{Li}_2 \left(\frac{1}{2+a}\right) - \operatorname{Li}_2(a)\right]\end{align} for $a \in (0,1]$ . While special values like $f(1) = \frac{\pi^2}{12} + \frac{1}{2} \operatorname{Li}_2 \left(\frac{1}{3}\right)$ are reasonably nice, the polylogarithms only cancel for $a = \sqrt{2} - 1$ , leading to the particularly simple expression for $I = f(\sqrt{2} - 1)$ .
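As a numerical sanity check on the closed form (this is my addition, not part of the derivation), one can integrate the original expression directly, splitting the interval at the removable singularity $x=\sqrt{2}-1$:

```python
import numpy as np
from scipy.integrate import quad

s = np.sqrt(2.0)

def integrand(x):
    return (np.log(s - 1.0) - np.log(x) * (s - x)) / ((s - x) ** 2 - 1.0)

# Tell quad about the removable singularity so it splits the interval there.
value, err = quad(integrand, 0.0, 1.0, points=[s - 1.0])

closed_form = np.pi ** 2 / 6.0 + 0.25 * np.log(s - 1.0) ** 2
print(value, closed_form)   # both come out around 1.8391
```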
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4049219', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/686284/']}
jdg_84109
stackexchange
llm_judgeable_groundtruth_similarity
294835
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: It is well known that, for two functors $F,G : I \to C$ for $I,C$ some $\infty$-categories, the property that a map $\phi: F \to G$ is an equivalence can be checked locally on $I$. Namely, if $\phi(i) : F(i) \to G(i)$ is an equivalence for every $i \in I$, then $\phi$ is an equivalence as a morphism in the $\infty$-category $Fun(I,C)$. Is the same true for adjointability? Namely: Let $I$ be an $\infty$-category, and let $F,G : I \to Cat_\infty$ be two functors into the $(\infty,1)$-category of $\infty$-categories. Let $\phi : F \to G$. Suppose that, for each $i \in I$, $\phi(i) : F(i) \to G(i)$ admits a left adjoint $L_{\phi(i)}$. Does there then necessarily exist a map $\psi: G \to F$ restricting to $L_{\phi(i)}$ on every object? Is it essentially unique? Does it satisfy some relative version of the property satisfied by left adjoints? My motivation for believing in it is that the space of choices of the left adjoint is contractible, so there should be no obstructions to gluing them over $I$. Now provide the response and nothing else.
TL DR: That is not enough. If you let $\psi_i:G(i)\to F(i)$ be the left adjoint of $\phi_i$ you also need the condition that for every $f:i\to j$ the canonical morphism$$\psi_jG(f)\to F(f)\psi_i$$adjoint to $G(f)\to G(f)\phi_i\psi_i\cong \phi_jF(f)\psi_i$, is an equivalence. The key word to remember here is relative adjunctions . If we unstraighten your functors $F$ and $G$, we end up with two cartesian fibrations $E\to I^{op}$ and $E'\to I^{op}$ and a map of cartesian fibrations $\phi:E\to E'$ such that for each $i\in I$ the functor $\phi_i:E_i\to E'_i$ is a right adjoint. Then, by theorem 7.3.2.6 in Higher Algebra, there is a relative left adjoint $\psi$. The result you are after would follow if and only if $\psi$ were a map of cartesian fibrations (i.e. iff it sends cartesian edges to cartesian edges). Unwrapping the various definitions, this is exactly the condition I wrote above. To see a counterexample when the condition is not satisfied, pick two categories $C_0,C_1$ with an initial object and a functor $f:C_0\to C_1$ that does not preserve the initial object (e.g. let $C_0$ be finite pointed sets, $C_1$ be finite sets and $f$ the functor that forgets the pointing). This assembles to a functor $F:\Delta^1\to \mathrm{Cat}_\infty$. Now let $G:\Delta^1\to \mathrm{Cat}_\infty$ be the constant functor at $*$. There is an obvious natural transformation $\phi:F\to G$ and $\phi_0$ and $\phi_1$ have both left adjoints $\psi_0$ and $\psi_1$ (the inclusion of the initial object). However these do not assemble into a natural transformation $\psi:G\to F$ because $\psi_1$ is not naturally equivalent to $f\psi_0$.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/294835', 'https://mathoverflow.net', 'https://mathoverflow.net/users/115052/']}
jdg_84110
stackexchange
llm_judgeable_groundtruth_similarity
4120106
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Firefox there are plugins and extensions. Could you please explain to me why these add-ons have different names and tabs? Do they differ so much that they need different names? I find it a little unnatural to separate these things: extensions have as much functionality and more, so what do they lack compared to plugins? Now provide the response and nothing else.
The difference is both historical and real: plugins are compiled, loadable modules, originally descended from NPAPI; they can live outside of the browser's process space (which leads to all kinds of fun interoperability issues and vulnerabilities). The most common examples of these are Flash and Java - both request a sub-window ("viewport" or "canvas" (not HTML5's canvas - that's something else altogether: a native part of the webpage)) inside the webpage and handle it themselves, in a way that's largely independent of the browser. Extensions are written mostly in JavaScript and XUL. Since the extensions act as part of the browser, they have wider access privileges than JS-in-a-webpage, but they are still subject to some limitations. The most common way of integration is to hook into some part of FF's functionality and extend it. So, although the plugins and extensions might appear to be related, they're very different technologies behind the scenes.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4120106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/38940/']}
jdg_84111
stackexchange
llm_judgeable_groundtruth_similarity
892415
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have created an MS Access 2003 application, set up as a split front-end/back-end configuration, with a user group of about five people. The front end .mdb sits on a network file server, and it contains all the queries, forms, reports, and VBA code, plus links to all the tables in the back end .mdb and some links to ODBC data sources like an AS/400. The back end sits on the same network file server, and it just has the table data in it. This was working well until I "went live" and my handful of users started coming up with enhancement requests, bug reports, etc. I have been rolling out new code by developing/testing in my own copy of the front-end .mdb in another network folder (which is linked to the same back-end .mdb), then posting my completed file in a "come-and-get-it" folder, alerting the users, and they go copy/paste the new front-end file to their own folders on the network. This way, each user can update their front end when they're at a 'stopping point' without having to boot everyone out at once. I've found that when I'm developing now, sometimes Access becomes extremely slow. Like, when I am developing a form and attempt to click a drop-down on the properties box, the drop-down arrow will push in, but it will take a few seconds before the list of options appears. Or there's tons of lag in selecting & moving controls on a form. Or lots of keyboard lag. Then, at other times, there's no lag at all. I'm wondering if it's because I'm linked to the same back end as the other users. I did make a reasonable effort to set up the queries, forms, reports etc. with minimal record locking, if any at all, depending on the need. But I may have missed something, or perhaps there is some other performance issue I need to address. But I'm wondering if there is an even better way for me to set up my own development back-end .mdb, so I can be testing my code on "safe" data instead of the same live data as the rest of the users. I'm afraid that it's only a matter of time before I corrupt some data, probably at the worst possible moment. Obviously, I could just set up a separate back-end .mdb and manually reconfigure the table links in the front end every time, using the Linked Table Manager. But I'm hoping there is a more elegant solution than that. And I'm wondering if there are any other performance issues I should be considering in this multi-user, split database configuration. EDIT: I should have added that I'm stuck with MS Access (not MS-SQL or any other "real" back end); for more details see my comment to this post. Now provide the response and nothing else.
If all your users are sharing the front end, that's THE WRONG CONFIGURATION. Each user should have an individual copy of the front end. Sharing a front end is guaranteed to lead to frequent corruption of the shared front end, as well as odd corruptions of forms and modules in the front end. It's not clear to me how you could be developing in the same copy of the front end that the end users are using, since starting with A2000, that is prohibited (because of the "monolithic save model," where the entire VBA project is stored in a single BLOB field in a single record in one of the system tables). I really don't think the problems are caused by using the production data (though it's likely not a good idea to develop against production data, as others have said). I think they are caused by poor coding practices and lack of maintainance of your front end code. turn off COMPILE ON DEMAND in the VBE options. make sure you require OPTION EXPLICIT. compile your code frequently, after every few lines of code -- to make this easy, add the COMPILE button to your VBE toolbar (while I'm at it, I also add the CALL STACK button). periodically make a backup of your front end and decompile and recompile the code. This is accomplished by launching Access with the /decompile switch, opening your front end, closing Access, opening your front end with Access (with the SHIFT key held down to bypass the startup code), then compacting the decompiled front end (with the SHIFT key held down), then compiling the whole project and compacting one last time. You should do this before any major code release. A few other thoughts: you don't say if it's a Windows server. Linux servers accessed over SAMBA have exhibited problems in the past (though some people swear by them and say they're vastly faster than Windows servers), and historically Novell servers have needed to have settings tweaked to enable Jet files to be reliably edited. There are also some settings (like OPLOCKS) that can be adjusted on a Windows server to make things work better. store your Jet MDBs in shares with short paths. \Server\Data\MyProject\MyReallyLongFolderName\Access\Databases\ is going to be much slower reading data than \Server\Databases. This really makes a huge difference. linked tables store metadata that can become outdated. There are two easy steps and one drastic one to be taken to fix it. First, compact the back end, and then compact the front end. That's the easy one. If that doesn't help, completely delete the links and recreate them from scratch. you might also consider distributing an MDE to your end users instead of an MDB, as it cannot uncompile (which an MDB can). see Tony Toews's Performance FAQ for other generalized performance information.
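For reference, the decompile step described above is done by launching Access with the undocumented /decompile switch; a sketch of the command line (the Office path and file name here are placeholders for your own install and front end):

```
"C:\Program Files\Microsoft Office\OFFICE11\MSACCESS.EXE" "C:\Path\To\FrontEnd.mdb" /decompile
```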
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/892415', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/110445/']}
jdg_84112
stackexchange
llm_judgeable_groundtruth_similarity
7346317
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some lines like this from an ldiff file, dn: cn=dkalland_directs_ww,cn=org_groups,cn=beehive_groups,cn=groups,dc=oracle,dc=combusinesscategory: opencn: dkalland_directs_wwdescription: Directs Group for [email protected]: dkalland_directs_wwmail: [email protected]: topobjectclass: orclGroupobjectclass: groupOfUniqueNamesorclglobalid: modifiedorclnormdn: cn=dkalland_directs_ww,cn=org_groups,cn=beehive_groups,cn=groups,dc=oracle,dc=comowner: cn=BHGRPADMIN_WW,L=AMER,DC=ORACLE,DC=COMuniquemember: cn=mattias_tobiasson,dc=us,dc=oracle,dc=comuniquemember: cn=mattias_joelson,dc=us,dc=oracle,dc=comuniquemember: cn=markus_persson,dc=us,dc=oracle,dc=com ... Now as there are some lines which are continuation of the previous line. I want to join them back to their respective line. What I am confused about is how can I search a line without the ":" character so that I can join it with previous line. Plz help. Now provide the response and nothing else.
I believe you want to do a :v/:/-1j The v command selects all lines that don't match the pattern /:/ . The -1 selects the line one above, and the j joins this line with the next line (i.e. the one selected with the v command). Edit: Benoit and dash-tom-bang have provided substantial improvements in their comments: the 1 is not necessary, since it is the default, and the ! makes the join not insert a space. So, this leads to the following, better version: :v/:/-j!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7346317', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/415807/']}
jdg_84113
stackexchange
llm_judgeable_groundtruth_similarity
27305614
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to animate a StackPanel when its visibility changed to grow from a width of 0 to its automatic width, here is what I have at the moment: <Trigger Property="Visibility" Value="Visible"> <Setter Property="Width" Value="0"></Setter> <Trigger.EnterActions> <BeginStoryboard> <Storyboard> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="Width" Duration="0:0:1"> <DiscreteObjectKeyFrame KeyTime="0"> <DiscreteObjectKeyFrame.Value> <System:Double>NaN</System:Double> </DiscreteObjectKeyFrame.Value> </DiscreteObjectKeyFrame> </ObjectAnimationUsingKeyFrames> </Storyboard> </BeginStoryboard> </Trigger.EnterActions></Trigger> Can someone explain how I might achieve this animation? Is it maybe not possible in the way I am trying to do it? Thanks,alex. Now provide the response and nothing else.
Here is a quick mockup project I threw together. In the Window's Loaded event, I simply set the stackpanel's visibility to Visible and it expands to fit its container width from left to right... Hopefully that's suits your needs. Some things to note: You must predefine the scale transform, else the animation will not play. If you omit To in an animation, it will animate back to the default value. And here is the code: <Window x:Class="WpfApplication1.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="350" Width="600" Loaded="Window_Loaded"> <Border HorizontalAlignment="Center" Width="300" Background="Gainsboro"> <Border.Resources> <Style TargetType="StackPanel" x:Key="expand"> <Setter Property="RenderTransform"> <Setter.Value> <ScaleTransform ScaleX="1"/> </Setter.Value> </Setter> <Style.Triggers> <Trigger Property="Visibility" Value="Visible"> <Trigger.EnterActions> <BeginStoryboard> <Storyboard> <DoubleAnimation Storyboard.TargetProperty="RenderTransform.ScaleX" From="0" Duration="0:00:01"/> </Storyboard> </BeginStoryboard> </Trigger.EnterActions> </Trigger> </Style.Triggers> </Style> </Border.Resources> <StackPanel x:Name="stackpanel" Background="Gray" Visibility="Collapsed" Style="{StaticResource expand}"/> </Border></Window>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27305614', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/478478/']}
jdg_84114
stackexchange
llm_judgeable_groundtruth_similarity
646517
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following simple C# code: private Stack<Person> m_stack = new Stack<Person>();public void Add<T>(T obj) where T : Person{ m_stack.Push(obj);} This will produce the following IL code: .method public hidebysig instance void Add<(ConsoleApplication1.Person) T>(!!T obj) cil managed { // Code size 20 (0x14) .maxstack 8 IL_0000: nop IL_0001: ldarg.0 IL_0002: ldfld class [System]System.Collections.Generic.Stack`1<class ConsoleApplication1.Person> ConsoleApplication1.Pool::m_stack IL_0007: ldarg.1 IL_0008: box !!T IL_000d: callvirt instance void class [System]System.Collections.Generic.Stack`1<class ConsoleApplication1.Person>::Push(!0) IL_0012: nop IL_0013: ret } // end of method Pool::Add So my question is... why boxing? (IL_0008) I can understand downcasting or even compile error, but why boxing (Person is a reference type...) Thanks in advance! Now provide the response and nothing else.
Excerpt from ECMA-335, Partition III, §4.1: If typeTok is a reference type, the box instruction does nothing. Here typeTok is !!T in your case. My guess is that when the compiler compiles the code, it always emits box regardless of whether the type of the operand is a reference type or not. Because of the semantics of the box instruction, the desired result is always guaranteed.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/646517', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_84115
stackexchange
llm_judgeable_groundtruth_similarity
2409268
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Confirm that the identity $1+z+...+z^n=(1-z^{n+1})/(1-z)$ holds for every non-negative integer $n$ and every complex number $z$, save for $z=1$. I have tried to prove this by induction but I am not sure that I am doing things right. For $n = 1$ we have $(1-z^2)/(1-z) = (1-z)(1+z)/(1-z) = 1 + z$, so this holds for $n = 1$. Suppose now that it holds for $n$; we show that it holds for $n + 1$: $1+z+...+z^n+z^{n+1}=(1-z^{n+1})/(1-z)+z^{n+1}=[(1-z^{n+1})+(1-z)(z^{n+1})]/(1-z)=(1-z^{n+2})/(1-z)$, so this is true for every non-negative integer $n$. Is this OK? Now provide the response and nothing else.
My proof goes like this: let $S:=1+z+z^2+...+z^n$. Then $$zS=z+z^2+z^3+...+z^{n+1}$$ $$zS=S-1+z^{n+1}$$ $$(1-z)S=1-z^{n+1}$$ Since $z\neq 1$, dividing by $1-z$ gives the desired result.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2409268', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/378992/']}
jdg_84116
stackexchange
llm_judgeable_groundtruth_similarity
3519682
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two TranslateAnimations on a TextView and I want them to execute one after other. However, by using the code below, only the second one is executed. How can I solve this? TranslateAnimation animation = new TranslateAnimation( Animation.ABSOLUTE, 0.0f, Animation.ABSOLUTE, 0.0f, Animation.ABSOLUTE, 0.0f, Animation.ABSOLUTE, -150.0f);animation.setDuration(200);wave.startAnimation(animation);TranslateAnimation animation1 = new TranslateAnimation( Animation.ABSOLUTE, 0.0f, Animation.ABSOLUTE, 0.0f, Animation.ABSOLUTE, 150.0f, Animation.ABSOLUTE, 0.0f);animation1.setDuration(200);wave.startAnimation(animation1); Now provide the response and nothing else.
EDIT: Andy Boots answer below is the better answer imo. Just set your first one like this and it'll start the other one, once the animation finishes: animation.setAnimationListener(new AnimationListener() { @Override public void onAnimationStart(Animation animation) { // TODO Auto-generated method stub } @Override public void onAnimationRepeat(Animation animation) { // TODO Auto-generated method stub } @Override public void onAnimationEnd(Animation animation) { wave.startAnimation(animation1); } }); edit: The reason only your second animation is executed with your current code, is because it overrides the playing of the first animation (both actually are played, but you only see the latest one to start). If you do like I wrote, they will play sequentially instead of in parallel.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3519682', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/367016/']}
jdg_84117
stackexchange
llm_judgeable_groundtruth_similarity
33573
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am new to electronics and am still just learning. I saw this project on Kickstarter for a small USB based DC power supply and was wondering what some of the more experienced people thought of it. I am considering buying two for experimentation/hacking at home. Is this a good idea or should I be considering an alternative product? Link to the Kickstarter project: http://www.kickstarter.com/projects/210251816/usb-based-variable-power-supply-for-small-projects-0 . UPDATE: It is closed and didn't meet funding but was relaunced here http://www.kickstarter.com/projects/210251816/usb-variable-power-supply-for-small-projects-relau . Now provide the response and nothing else.
Here are my thoughts, and hopefully I'll be able to swing this back around to be design centric rather than shopping centric. The whole USB thing really isn't interesting. It uses USB for power only, so basically USB is an over-glorified wall-wart. Having the ability to power it off of a computer isn't a huge benefit compared to the voltage and current limitations it gives. The voltage regulator is a linear regulator with a pot for adjusting the Vout. Any halfway competent hobbyist could hack something together in 30 minutes; 10 minutes for someone that is experienced. They would also likely use a better heat-sink so the max power is higher. And using some leftover wall-warts from "the box of orphaned wall-warts" makes this super cheap. If you hacked something together using spare parts lying around then you might spend US$5 and it would work just as well. The 7-segment display is a waste. It is useful, sure, but there is so much more that could have been achieved with that micro-controller. Here's how I would have designed that device to be much more useful: Use a micro-controller with an actual USB interface. Using a small app on the PC you could control the output voltage and monitor the output voltage and output current. It could implement some sort of programmable overvoltage/overcurrent protection as well. Controlling multiple Vouts with sequencing or other simple "waveform generation" could be very useful. The power supply itself would be a switching regulator based on a SEPIC topology. This way the output could be either a higher or lower voltage than the input. The output voltage/current is controlled from the micro-controller. It is actually fairly easy to control the output voltage. Output current is a little more difficult, but not impossible. If a low-noise output is required then I would follow the SEPIC regulator with a programmable LDO regulator. Again, the output voltage would be controlled by the micro-controller. Usually the LDO output would be about 0.5 V lower than the output of the SEPIC. In this way efficiency is still mostly high, but the output noise would be very low. Then I would design a similar device, but instead of a programmable supply it would be a programmable load, also USB controlled.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/33573', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/10237/']}
jdg_84118
stackexchange
llm_judgeable_groundtruth_similarity
87576
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to simultaneously create a single AP on raspberry pi 3 b+ which uses both 2.4 Ghz and 5 Ghz bands so that users with low end devices can connect to the 2.4 Ghz band and users with latest or high end devices can connect to the 5 Ghz band? If so, what is the hostapd for that configuration? Now provide the response and nothing else.
What the Raspberry Pi 3B+ is able to do with its on-board WiFi device is precisely defined. With the command iw you can show what interface combinations are possible. Just execute $ sudo iw list | grep -A4 "valid interface combinations:" valid interface combinations: * #{ managed } <= 1, #{ P2P-device } <= 1, #{ P2P-client, P2P-GO } <= 1, total <= 3, #channels <= 2 * #{ managed } <= 1, #{ AP } <= 1, #{ P2P-client } <= 1, #{ P2P-device } <= 1, total <= 4, #channels <= 1 This means that two combinations are possible. With the first combination you can set up one (<=1) managed connection (station, aka client connection) together with one P2P-device and one P2P-client or P2P-GO (P2P group owner). That is 3 interfaces usable at the same time in total. In this combination you can use two #channels (<=2), e.g. a client connection and a P2P-client. The third interface will have the same channel as one of the others. With the second combination you can set up one (<=1) managed connection (station, aka client connection) together with one AP (access point), one P2P-client and one P2P-device. That is 4 interfaces usable at the same time in total. In this combination you can use only one #channel (<=1). And this is the answer to the question: it is possible to use two channels on different interfaces (first combination), but as soon as you use an access point you can only use one channel (second combination). For more details about this combination you can have a look at Can RPi4 run simultaneously on dual band (WiFi 2.4GHz / 5GHz)? It is not possible to have an access point on a Raspberry Pi 3B+ which uses both the 2.4 GHz and 5 GHz bands.
{}
{'log_upvote_score': 4, 'links': ['https://raspberrypi.stackexchange.com/questions/87576', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/89596/']}
jdg_84119
stackexchange
llm_judgeable_groundtruth_similarity
63768491
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am a beginner with VueJs and this is my first App: import { BootstrapVue } from 'bootstrap-vue'import { createApp } from 'vue'import App from './App.vue' const myApp = createApp(App)myApp.use(BootstrapVue)myApp.mount('#app') And when I save, nothing appears in my browser and it show this message in the Command: warning in ./src/main.js"export 'default' (imported as 'Vue') was not found in 'vue' Now provide the response and nothing else.
Bootstrap-Vue does not yet support Vue 3. So if you want to use Bootstrap-Vue you will have to stick with Vue 2 for now. In general, most libraries don't support Vue 3 yet, so I would suggest waiting a bit longer before using it, until the ecosystem has caught up. Explanation: the reason this happens is that in Vue 2, Vue provides a default export ( export default Vue ), which allows BootstrapVue to use import Vue from 'vue' . However, in Vue 3 this has changed: Vue no longer provides a default export and instead uses named exports. So when BootstrapVue uses the line import Vue from 'vue' , the error occurs.
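For completeness, a rough sketch of what the question's main.js would look like on Vue 2 with BootstrapVue. This is my illustration of the answer's advice, not code from the answer, and the CSS imports are the ones BootstrapVue's docs typically recommend and may differ for your setup:

```js
// main.js (Vue 2 style)
import Vue from 'vue'                      // Vue 2 still has a default export
import { BootstrapVue } from 'bootstrap-vue'
import App from './App.vue'

import 'bootstrap/dist/css/bootstrap.css'
import 'bootstrap-vue/dist/bootstrap-vue.css'

Vue.use(BootstrapVue)

new Vue({
  render: h => h(App),
}).$mount('#app')
```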
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/63768491', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14231705/']}
jdg_84120
stackexchange
llm_judgeable_groundtruth_similarity
35789520
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use signed url to upload images to s3 bucket. Following is my bucket policy: { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::12345678:user/myuser", "arn:aws:iam::12345678:root" ] }, "Action": [ "s3:List*", "s3:Put*", "s3:Get*" ], "Resource": [ "arn:aws:s3:::myBucket", "arn:aws:s3:::myBucket/*" ] } ]} I am generating the signed url from the server as follows: var aws = require('aws-sdk');aws.config = { accessKeyId: myAccessKeyId, secretAccessKey: mySecretAccessKey};var s3 = new aws.s3();s3.getSignedUrl('putObject', { Bucket: 'myBucket', Expires: 60*60, key: 'myKey'}, function (err, url) { console.log(url);}); I get the url. But when I try to put an object I get the following error: <Error> <Code>AccessDenied</Code> <Message>Access Denied</Message> <RequestId>FXXXXXXXXX</RequestId> <HostId>fXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</HostId></Error> Update 1 Here is myuser's policy: { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::2xxxxxxxxxxx:user/myuser", "arn:aws:iam::2xxxxxxxxxxx:root" ] }, "Action": [ "s3:*" ], "Resource": [ "arn:aws:s3:::myBucket", "arn:aws:s3:::myBucket/*" ] } ]} Update 2 I can upload only when following option is set. I dont understand whats the use of bucket policy if only the manual selection of permission work. Update 3 The following code works. Now the only problem is the signed url #!/bin/bash file="$1" bucket="mybucket" resource="/${bucket}/${file}" contentType="image/png" dateValue=`date -R` stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}" s3Key="AKxxxxxxxxxxxxxxxxx" s3Secret="/Wuxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64` curl -X PUT -T "${file}" \ -H "Host: ${bucket}.s3.amazonaws.com" \ -H "Date: ${dateValue}" \ -H "Content-Type: ${contentType}" \ -H "Authorization: AWS ${s3Key}:${signature}" \ https://${bucket}.s3.amazonaws.com/${file} Now provide the response and nothing else.
I managed to succesfully upload a file by using your code. Here are the steps I followed: Created a new bucket and a new IAM user Set IAM user's policy as below: { "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1418647210000", "Effect": "Allow", "Action": [ "s3:Put*" ], "Resource": [ "arn:aws:s3:::myBucket/*" ] } ]} Did NOT create a bucket policy Used your code to generate the pre-signed URL: var aws = require('aws-sdk');aws.config = { accessKeyId: myAccessKeyId, secretAccessKey: mySecretAccessKey};var s3 = new aws.s3();s3.getSignedUrl('putObject', { Bucket: 'myBucket', Expires: 60*60, Key: 'myKey', ContentType: 'image/jpeg',}, function (err, url) { console.log(url);}); Copied the URL on the screen and used curl to test the upload as below: curl.exe -k -X PUT -T "someFile" "https://myBucket.s3.amazonaws.com/myKey?AWSAccessKeyId=ACCESS_KEY_ID&Expires=1457632663&Signature=Dhgp40j84yfjBS5v5qSNE4Q6l6U%3D" In my case it generally took 5-10 seconds for the policy changes to take effect so if it fails the first time make sure to keep sending it for a while. Please note : if you get CORS errors, ensure you have provided a ContentType argument to the s3.getSignedUrl call. As Hugo Mallet discusses below, "When you upload, your browser will add the content-type to the request headers. [Therefore, if you do not provide the MediaType server-side, there will be] a difference between the executed request and the signature you obtained with getSignerUrl. Of course you have to set the correct content-type depending on the file you want to upload."
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35789520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3698232/']}
jdg_84121
stackexchange
llm_judgeable_groundtruth_similarity
450537
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm transferring about 9TB across my gigabit LAN. To do so as quickly as possible (i hope) I mounted the destination via NFS on the source and ran rsync across it. Here is my mount options: x.x.x.x:/mnt on /mnt type nfs (rw,noatime,nodiratime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=x.x.x.x,mountvers=3,mountport=56548,mountproto=udp,local_lock=none,addr=x.x.x.x) Here is my rsync command: rsync -avWH --progress ./ /mnt/ looking at nload, what i see, for a single file is speed that spikes up to 900MBps then down to numbers in the KBps range, then back up. Here is a graphic from nload where you can see that the transfer seems to stop, midfile. The files are all typically 5-6GB in size. MTU is 9000; switch is a cisco 3750x with plenty of backplane speed. These are esxi 6.7 guests on 2 different hosts. There are no other guests that contend for network resources. This image is ONE file being sent Basically, I'm hoping there is a setting I have wrong or something I can change to keep the transfer speed somewhat consistent. CPU utilization on the source is approximately 10%, on the dest is approximately 10%. The strange thing is that on the destination, iotop shows 99% i/o (sometimes) from nfsd, the source shows 60-80% IO from rsync. These are 7200RPM WD red drives. w Now provide the response and nothing else.
Unfortunately just about the worst thing you can do is to use rsync across NFS. (Or to any remote filesystem that's mounted into the local system.) This switches off almost all of the efficiency enhancements for which rsync is known. For this much data one of the fastest ways to transfer it between systems may be to dump it across an unencrypted connection without any consideration for what was already on the target system. Once you have at least a partial copy the best option is to use rsync between the two hosts. This allows rsync to run one process on each host to consider and compare differences. (The rsync will completely skip files that have the same size and modification date. For other files the client and server components will perform a rolling checksum to determine which block(s) need still to be transferred.) Fast dump. This example uses no authentication or encryption at all. It does apply compression, though, which you can remove by omitting both -z flags: Run this on the destination machine to start a listening server: cd /path/to/destination && nc -l 50505 | pax -zrv -pe Run this on the source machine to start the sending client: cd /path/to/source && pax -wz . | nc destination_server 50505 Some versions of nc -l may require the port to be specified with a flag, i.e. nc -l -p 50505 . The OpenBSD version on Debian ( nc.openbsd , linked via /etc/alternatives to /bin/nc ) does not. Slower transfer. This example uses rsync over ssh , which provides authentication and encryption. Don't miss off the trailing slash ( / ) on the source path. Omit the -z flag if you don't want compression: rsync -avzP /path/to/source/ destination_server:/path/to/destination You may need to set up SSH certificates to allow login to destination_server as root. Add the -H flag if you need to handle hard links.
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/450537', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/61428/']}
jdg_84122
stackexchange
llm_judgeable_groundtruth_similarity
7949526
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was told that for a High Frequency Trading (HFT) system that requires low-latency, TCP is used over UDP. I was told that with TCP you can make point to point connections, whereas you cannot with UDP, however from my understanding you can send UDP packets to specific IP/port. There are several arguments used in this article as to why UDP > TCP for gaming but I can see relevance for HFT. Why would TCP be a better protocol to use for HFT? (Admins: My previous post of this question was silently removed with no explanation. If I am violating terms of use please alert me of this instead of silently removing the question) Now provide the response and nothing else.
UDP is superior to TCP if you don't need some of the features TCP provides. Every feature has a cost, and so if you don't need those features, you are paying that cost for no reason. In an HFT application, however, you need pretty much every feature TCP provides. So if you picked UDP, you'd have to implement those features yourself. That means you'd have to implement connection establishment, connection teardown, retransmissions, transmit pacing, windows, and so on. If there were a way to do all those things that was better than the way TCP does them, TCP would be doing it that way. You'd have one hand tied behind your back, because TCP is heavily optimized by some of the best minds on the planet and implemented in/with the kernel.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7949526', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/260574/']}
jdg_84123
stackexchange
llm_judgeable_groundtruth_similarity
297476
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to write a program in Python that illustrates the tree-like nature of recursion. Given a recursive function (for instance fibonacci(n)) there should be a way to print the tree-like call trace of the recursive function. With the following function: def fibonacci(n): if n == 1 or n == 2: return 1 else: return fibonacci(n-1) + fibonacci(n-2) The print out for n=5 could look something like: fibo(5) fibo(4) fibo(3) fibo(3) fibo(2) fibo(2) fibo(1) fibo(2) fibo(1) The solution should be as "general" as possible and not specific to Fibonacci numbers as I want to implement it for additional recursive functions. Now provide the response and nothing else.
You can't just modify fibo to get what you want. For instance, the order in which the different calls to fibo happen is not the one in which you have to write their arguments to the console, and because you don't know the width of your tree to begin with, you don't know how far to the right to start with the root. What you have to do is to collect the calls and their arguments in a temporary data structure as they happen, likely also a tree, and then traverse that data structure after the first recursion has terminated.
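One way to realise this in Python is to wrap the recursive function so that every call appends a node to an explicit tree, and then walk that tree after the top-level call returns. This is my sketch, not part of the answer; the decorator name and the dict-based node representation are arbitrary choices, and it stays general because the wrapper only records the function name and its arguments:

```python
import functools

def trace_tree(func):
    """Record every call to `func` as a node in an explicit call tree."""
    stack = []   # nodes for calls currently in progress
    roots = []   # completed top-level calls

    @functools.wraps(func)
    def wrapper(*args):
        label = f"{func.__name__}({', '.join(map(repr, args))})"
        node = {"label": label, "children": []}
        (stack[-1]["children"] if stack else roots).append(node)
        stack.append(node)
        try:
            return func(*args)
        finally:
            stack.pop()

    def print_tree(node=None, depth=0):
        for n in (roots if node is None else [node]):
            print("    " * depth + n["label"])
            for child in n["children"]:
                print_tree(child, depth + 1)

    wrapper.print_tree = print_tree
    return wrapper

@trace_tree
def fibonacci(n):
    if n == 1 or n == 2:
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)

fibonacci(5)
fibonacci.print_tree()
```

Calling fibonacci(5) and then fibonacci.print_tree() prints the indented call tree from the question, root first, with each level indented by four spaces.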
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/297476', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/196878/']}
jdg_84124
stackexchange
llm_judgeable_groundtruth_similarity
398268
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a question about the graph of $f(x) = x^x$. How come the graph doesn't extend into the negative domain? Because, it is not as if the graph is undefined when $x=-5$. But according to the graph, that seems to be the case. Can someone please explain this? Thanks Now provide the response and nothing else.
First, the graph of a complex function is by definition $\{(z,w)\in{\mathbb C}^2: w=f(z)\}$, which lives in 4D. Thus, there's not a "simple" way to visualize this. Nonetheless, there are a number of standard tricks to help us visualize the function. Perhaps the most standard is to illustrate the effect that the function has on some standard domain or set of domains. If you type "plot 1/z" into WolframAlpha, for example, you'll see a number of images related to the function, one of which is labeled "Complex map" and looks like so: This illustrates how a grid of lines throughout the square $[-1,1]\times[-1,1]$ is affected by the map $z\rightarrow 1/z$. This is a bit like trying to understand the map $f(x)=x^2$ by examining the following picture: The top picture represents a set of points evenly distributed throughout the interval $[0,2]$; the bottom illustrates the image of the points under $f(z)=z^2$. I reiterate, though, this is just one of a number of techniques for visualizing these types of functions. Other possibilities include: plotting the real and imaginary part as 3D graphs, doing the same with the magnitude and argument, or creating contour plots of these. This works because for example, writing $w=|f(x+i*y)|$, we see that $w$ is a real-valued function of the two real variables $x$ and $y$, which means we can make a 3D plot. Here's the result of the WolframAlpha query "plot |1/z|", for example: It can get really crazy if you move to the Riemann sphere. Here's WolframAlpha's "Riemann sphere map": To really understand this, you should check out the video Möbius Transformations Revealed on YouTube!
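If you want to reproduce the magnitude plot locally instead of through WolframAlpha, a short sketch along these lines works. This is my own illustration; the grid resolution and the clipping value 10 (used to tame the pole of 1/z at the origin) are arbitrary choices:

```python
import numpy as np
import matplotlib.pyplot as plt

# Grid over the square [-1, 1] x [-1, 1] in the complex plane
x = np.linspace(-1.0, 1.0, 400)
y = np.linspace(-1.0, 1.0, 400)
X, Y = np.meshgrid(x, y)
Z = X + 1j * Y

with np.errstate(divide="ignore", invalid="ignore"):
    W = np.abs(1.0 / Z)              # |f(z)| for f(z) = 1/z

plt.contourf(X, Y, np.clip(W, 0, 10), levels=30)
plt.colorbar(label="|1/z| (clipped at 10)")
plt.xlabel("Re z")
plt.ylabel("Im z")
plt.title("Magnitude of f(z) = 1/z")
plt.show()
```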
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/398268', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/38296/']}
jdg_84125
stackexchange
llm_judgeable_groundtruth_similarity
10704553
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table called MyTable on which I have defined a trigger, like so: CREATE TRIGGER dbo.trg_Ins_MyTable ON dbo.MyTable FOR INSERTAS BEGIN SET NOCOUNT ON; insert SomeLinkedSrv.Catalog.dbo.OtherTable (MyTableId, IsProcessing, ModifiedOn) values (-1, 0, GETUTCDATE())ENDGO Whenever I try to insert a row in MyTable , I get this error message: Msg 3910, Level 16, State 2, Line 1 Transaction context in use by another session. I have SomeLinkedSrv properly defined as a linked server (for example, select * from SomeLinkedSrv.Catalog.dbo.OtherTable works just fine). How can I avoid the error and successfully insert record+execute the trigger? Now provide the response and nothing else.
Loopback linked servers can't be used in a distributed transaction if MARS is enabled. Loopback linked servers cannot be used in a distributed transaction. Trying a distributed query against a loopback linked server from within a distributed transaction causes an error, such as error 3910: "[Microsoft][ODBC SQL Server Driver][SQL Server]Transaction context in use by another session." This restriction does not apply when an INSERT...EXECUTE statement, issued by a connection that does not have multiple active result sets (MARS) enabled, executes against a loopback linked server. Note that the restriction still applies when MARS is enabled on a connection. http://msdn.microsoft.com/en-us/library/ms188716(SQL.105).aspx
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10704553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/390819/']}
jdg_84126
stackexchange
llm_judgeable_groundtruth_similarity
19432373
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to filter a UL for specific LIs with a keyup text input. Problem is, the LIs are nested within a tree, and the filter only sees the top most LI and doesn't appear to be filtering correctly. Typing Pennsylvania should show ONLY Pennsylvania, and nothing above it. Any ideas? Thanks in advance. http://www.jsfiddle.net/CDAVZ/412 HTML <input type='text' value='[Enter search term followed by Return]' class='allW treeSearch' /> <ul id="treeview"> <li data-expanded="true"><span class="icon-location-7 md-moon delBlue treeSpace" data-icon="&#xe6b5;"></span> <span class="icon-location-7 md-moon white treeSpace" data-icon="&#xe6b5;"></span>Root <ul> <li data-expanded="true"><span class="icon-stack-6 md-moon delLtBlue treeSpace" data-icon="&#xe6a0;"></span> <span class="icon-stack-6 md-moon white treeSpace" data-icon="&#xe6a0;"></span>Gas Model <ul> <li data-expanded="true"><span class="glyphicon glyphicon-globe md-moon delGreen treeSpace"></span> <span class="glyphicon glyphicon-globe md-moon white treeSpace"></span>United States <ul> <li data-expanded="true"><span class="icon-pie md-moon delBlue treeSpace" data-icon="&#xe708;"></span> <span class="icon-pie md-moon white treeSpace" data-icon="&#xe708;"></span>Pennsylvania </li> </ul> </li> </ul> </li> </ul> </li> </ul> jQuery $('.treeSearch').click(function(){ $(this).val(''); });$('.treeSearch').keyup(function(){ var searchText = $(this).val(); $('#treeview ul').each(function(){ var currentLiText = $(this).text(), showCurrentLi = currentLiText.indexOf(searchText) !== -1; $(this).toggle(showCurrentLi); }); }); Now provide the response and nothing else.
SQL Server 2017 (14.x) and later has a native CONCAT_WS function. For older versions, we can use a couple of tricks: To skip NULL values: COALESCE(). To avoid a trailing separator: add it before every item, then remove the first one afterwards with e.g. STUFF(). Here's a working example: CREATE TABLE foo ( id INT IDENTITY(1, 1) NOT NULL, a VARCHAR(50), b VARCHAR(50), c VARCHAR(50), d VARCHAR(50), PRIMARY KEY (id));INSERT INTO foo (a, b, c, d) VALUES ('a', 'b', 'c', 'd');INSERT INTO foo (a, b, c, d) VALUES (NULL, 'b', NULL, 'd');INSERT INTO foo (a, b, c, d) VALUES ('a', NULL, NULL, 'd');INSERT INTO foo (a, b, c, d) VALUES (NULL, NULL, NULL, NULL); SELECT id,STUFF( COALESCE('; ' + a, '') + COALESCE('; ' + b, '') + COALESCE('; ' + c, '') + COALESCE('; ' + d, ''),1, 2, '') AS barFROM fooORDER BY id | ID | BAR ||----|------------|| 1 | a; b; c; d || 2 | b; d || 3 | a; d || 4 | (null) | The purpose of STUFF(..., 1, 2, '') is to remove the initial separator (2 is the separator length in our case). This should work on SQL Server 2005 (and possibly earlier versions). Note: unlike the original CONCAT_WS(), our version returns NULL when all items are NULL. I honestly think it's a better choice but it should be easy to change anyway.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19432373', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1882740/']}
jdg_84127
stackexchange
llm_judgeable_groundtruth_similarity
3463048
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When iterating over an object's properties, is it safe to delete them while in a for-in loop? For example: for (var key in obj) { if (!obj.hasOwnProperty(key)) continue; if (shouldDelete(obj[key])) { delete obj[key]; }} In many other languages iterating over an array or dictionary and deleting inside that is unsafe. Is it okay in JS? (I am using Mozilla's Spidermonkey runtime.) Now provide the response and nothing else.
The ECMAScript 5.1 standard section 12.6.4 (on for-in loops) says: Properties of the object being enumerated may be deleted during enumeration. If a property that has not yet been visited during enumeration is deleted, then it will not be visited. If new properties are added to the object being enumerated during enumeration, the newly added properties are not guaranteed to be visited in the active enumeration. A property name must not be visited more than once in any enumeration. So I think it's clear that the OP's code is legal and will work as expected. Browser quirks affect iteration order and delete statements in general, but not whether the OPs code will work. It's generally best only to delete the current property in the iteration - deleting other properties in the object will unpredictably cause them to be included (if already visited) or not included in the iteration, although that may or may not be a concern depending on the situation. See also: MDN on for..in MDN on browser quirks re: iteration order In depth page on delete operator issues None of these really affects the OP's code though.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3463048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/156/']}
jdg_84128
stackexchange
llm_judgeable_groundtruth_similarity
43270567
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The highlighted region in the image should hide/show after tapping the view below it (the one labeled Additional Cards). Is there any library in iOS to do that? If not, how can it be done? Now provide the response and nothing else.
I would use a reactive form which works quite nicely, and as to your comment: Is there any other simple example for this one? Maybe the same example without loops I can give you an example. All you need to do, is to nest a FormGroup and pass that on to the child. Let's say your form looks like this, and you want to pass address formgroup to child: ngOnInit() { this.myForm = this.fb.group({ name: [''], address: this.fb.group({ // create nested formgroup to pass to child street: [''], zip: [''] }) })} Then in your parent, just pass the nested formgroup: <address [address]="myForm.get('address')"></address> In your child, use @Input for the nested formgroup: @Input() address: FormGroup; And in your template use [formGroup] : <div [formGroup]="address"> <input formControlName="street"> <input formControlName="zip"></div> If you do not want to create an actual nested formgroup, you don't need to do that, you can just then pass the parent form to the child, so if your form looks like: this.myForm = this.fb.group({ name: [''], street: [''], zip: ['']}) you can pass whatever controls you want. Using the same example as above, we would only like to show street and zip , the child component stays the same, but the child tag in template would then look like: <address [address]="myForm"></address> Here's a Demo of first option, here's the second Demo More info here about nested model-driven forms.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/43270567', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3312671/']}
jdg_84129
stackexchange
llm_judgeable_groundtruth_similarity
7557041
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Like many people, I would love to have Xcode use a folder structure that mirrors the folder-structure on disk. However, I cannot get the code in "folder references" (the cyan folders) to show up in my project targets under "Compile Sources." Is there any way to do this? I have managed to even add a cyan folder to the "Compile Sources" build phase, but that does not result in the contents of that folder being added. How can I use folder references for code? Now provide the response and nothing else.
The simple (and very unfortunate) answer is that Folder References under Xcode remain broken and buggy, they don't work. Tested 04-Mar-2017 in Xcode 8.2.1 Below is an example exploration so you do not have to waste your time replicating the Xcode failure. (incidentally buggy Xcode crashed twice while I was producing this example) Per the question, the overall desire is to use a folder reference in Xcode so Xcode picks up all the files in the folder and includes them in the project, and by proxy will update automatically based upon any Finder or Xcode changes of the folder. In this way 50 projects all leveraging the same set of common source code files do not have to be individually updated when those folders get changed. Below explores how this is broken in Xcode (tested in 8.2.1) The example: ViewController.m includes NSError+Core.h so we want everything from the folder "NSError+Core" to be added to the project. NSError+Core.h is in this centrally located development folder Drag the source folder from the Finder into the Project under the "Support" group (nothing up my sleeves, simple drag) Xcode dutifully offers to add the drag to the target, note the "Create folder references" is selected, not "Create group references". Note also it is clear that Xcode is offering and is told to add this folder and files to the targets. Although everything looks like it should work, the compiler does not pick up the header file and a recompile yields the same results... can't find header. Ditching the DerivedData does not help either. So doing a double check, we check the "Compile Sources" under the project and sure enough, the source file is not there either. Remember, Xcode 'added' it to the target... So what if we drag it from the folder into the "Support" group... It offers to add them to the project again?! Note that the settings are identical to the first time they were drug in by virtue of the parent folder drag instead of files... And now the source file shows up in the "Compile Sources" list. Note the bizarre double listing of the files in the project. (Xcode crashed shortly after snapping this screen shot) And of course the compiler can now find the header file and the error clears on the import as it should have the first time we drug it in... Did it just need a little help to "find" the file? If so, the "Create folder references" does exactly what? So we try and tidy up and drag the files back from the parent "Supporting Files" group to their rightful folder. Without any confirmation, indication, notification, the files just vanish from the group and nothing happens in the NSError+Core folder. Oh by the way, it really did delete them from the project too... The Compile Sources no longer has the NSError+Core.m reference. SO to sum up, "Folder references" as implemented thus far do not seem to have any useful purpose... It would appear to be a 6+ year old dunsel on the USS Xcode.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7557041', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8047/']}
jdg_84130
stackexchange
llm_judgeable_groundtruth_similarity
319620
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $\eta=e^{\frac{2\pi i}n}$ , an $n$ -th root of unity. For pedagogical reasons and inspiration, I ask to see different proofs (be it elementary, sophisticated, theoretical, etc) for the following product evaluation. If $T(n)=\frac{(3n-2)(n-1)}2$ and $i=\sqrt{-1}$ then $$\prod_{j<k}^{0,n-1}(\eta^k-\eta^j)=n^{\frac{n}2}i^{T(n)}.$$ Now provide the response and nothing else.
We first find the norm; we then determine the argument. Call the product you wrote $A_n$ . Then $A_n^2 = \prod_{j<k}^{0,n-1} (\eta^k - \eta^j)^2 = Disc(x^n - 1) = (-1)^{\frac{n (n -1)}{2}}Res(x^n - 1, n x^{n - 1})$ $= (-1)^{\frac{n(n-1)}{2}} n^n \prod_{0 \leq i < n, 0 \leq j < n-1} (\eta^i - 0)$ All terms in the expression except $n^n$ have norm $1$ , so we have that $|A_n| = n^{\frac{n}{2}}$ . We therefore only need to figure out the argument of $A_n$ . Let $\eta' = e^\frac{2 \pi i}{2n}$ be the square root of $\eta$ . We can rewrite $A_n = \prod_{0\leq j<k<n} \eta'^{k + j} (\eta'^{k - j} - \eta'^{j - k})$ . Note that the second term is a difference of (unequal) conjugates where the minuend has positive imaginary part (and the subtrahend therefore negative imaginary part), and therefore will always have argument $\frac{\pi}{2}$ . So let us concentrate on the argument of the first term, $\prod_{0 \leq j < k < n} \eta'^{k +j}$ . We can do this by finding $\sum_{0 \leq j < k < n} j + k$ . $\sum_{0 \leq j < k < n} j + k = \left(\sum_{0 \leq j < k < n} j\right) + \left(\sum_{0 \leq j < k < n} k\right)$ $= \left(\sum_{0 \leq j <n} (n - j - 1)j\right) + \left(\sum_{0\leq k<n} k*k\right)$ $= \sum_{0 \leq j < n} (n - j - 1)j + j*j = \sum_{0 \leq j < n} (n - 1)j$ $= (n - 1) \frac{n (n - 1)}{2}$ We therefore end up with an argument of $\frac{n(n - 1)}{2} \frac{\pi}{2} + \frac{n (n - 1)^2}{2} \frac{2 \pi}{2n} = \frac{(3n^2 - 5n + 2)\pi}{4}$ . We finally have that: The norm of $A_n$ is $n^\frac{n}{2}$ , and the argument is $\frac{(3n^2 - 5n + 2)\pi}{4}$ . Correspondingly, we have that $A_n = n^{\frac{n}{2}} i^{T(n)}$ , as desired.
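As a quick numerical sanity check (not part of the proof, just an illustration), the identity can be verified directly for small $n$:

```python
import cmath

def lhs(n):
    # the product over 0 <= j < k <= n-1 of (eta^k - eta^j)
    eta = cmath.exp(2j * cmath.pi / n)
    prod = 1
    for k in range(n):
        for j in range(k):
            prod *= eta**k - eta**j
    return prod

def rhs(n):
    # n^(n/2) * i^T(n) with T(n) = (3n-2)(n-1)/2
    T = (3 * n - 2) * (n - 1) // 2
    return n ** (n / 2) * 1j**T

for n in range(2, 9):
    assert abs(lhs(n) - rhs(n)) < 1e-6 * abs(rhs(n)), n
print("identity checked numerically for n = 2..8")
```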
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/319620', 'https://mathoverflow.net', 'https://mathoverflow.net/users/66131/']}
jdg_84131
stackexchange
llm_judgeable_groundtruth_similarity
730555
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a DropDownList inside an UpdatePanel that is populated on postback from a SqlDataSource . It has a parameter which is another control. I sometimes need multiple postbacks, but what happens is that each time the update panel refreshes, items are added to the DropDownList . So the DropDownList ends up having data that is incorrect, or repeated data. I have the AppendDataBoundItems property set to true because I need the first item to be blank. How can I overcome this problem? Is there another way to have a blank first item? (This DropDownList is in an ASP.NET 2.0 web app, and codebehind is in C#) Now provide the response and nothing else.
Instead of using AppendDataboundItems='true' (which will cause the problem you are talking about), respond to the DataBound event for the DropDownList and then add your "blank" item to the top of the list. <asp:DropDownList runat="server" ID="MyList" ondatabound="MyListDataBound"></asp:DropDownList> Then in your code behind: protected void MyListDataBound(object sender, EventArgs e){ MyList.Items.Insert(0, new ListItem("- Select -", ""));}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/730555', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/70398/']}
jdg_84132
stackexchange
llm_judgeable_groundtruth_similarity
311552
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $X$ be a set, and let $\text{Part}(X)$ denote the collection of all partitions of $X$ . For $A, B\in \text{Part}(X)$ we set $A\leq B$ if $A$ refines $B$ , that is for all $a\in A$ there is $b\in B$ such that $a\subseteq b$ . This relation defines a partial order on $\text{Part}(X)$ . If $X$ is an infinite set, is there a surjective order-preserving map $f:{\cal P}(X)\to \text{Part}(X)$ (where ${\cal P}(X)$ denotes the power-set of $X$ , ordered by $\subseteq$ )? Now provide the response and nothing else.
It may be clarifying to work with equivalence relations $E$ on $X$ rather than partitions on $X$ . The two are in natural bijection, with $E$ inducing a partitioning quotient map $q: X \to X/E$ , and $X/E$ refines $X/E'$ iff $E \subseteq E'$ as subsets of $X \times X$ . Next, there is a surjective order-preserving map $P(X \times X) \to \text{Equiv}(X)$ where a general relation $R \in P(X \times X)$ is mapped to the equivalence relation $E_R$ that it generates. This is clearly surjective since an equivalence relation $E$ is mapped to itself. Finally, if $X$ is infinite, there is a bijection $X \cong X \times X$ , which induces an isomorphism of orders $P(X) \to P(X \times X)$ . The composite $$P(X) \to P(X \times X) \to \text{Equiv}(X)$$ provides what you want.
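To make the middle map concrete, here is a small illustrative sketch for a finite $X$ (my own addition; note the bijection $X \cong X \times X$ genuinely requires $X$ to be infinite, so only the surjection $P(X \times X) \to \text{Equiv}(X)$ is being illustrated):

```python
from itertools import combinations

def generated_equivalence(X, R):
    """Smallest equivalence relation on X containing the relation R."""
    E = {(x, x) for x in X} | set(R) | {(b, a) for (a, b) in R}
    changed = True
    while changed:                      # transitive closure by fixed point
        changed = False
        for (a, b) in list(E):
            for (c, d) in list(E):
                if b == c and (a, d) not in E:
                    E.add((a, d))
                    changed = True
    return E

def partition_of(X, E):
    # the quotient X/E, written as a set of equivalence classes
    return frozenset(frozenset(y for y in X if (x, y) in E) for x in X)

X = {0, 1, 2}
pairs = [(a, b) for a in X for b in X]
images = {partition_of(X, generated_equivalence(X, R))
          for r in range(len(pairs) + 1)
          for R in combinations(pairs, r)}
print(len(images))   # 5 = Bell(3): every partition of a 3-element set is hit
```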
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/311552', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8628/']}
jdg_84133
stackexchange
llm_judgeable_groundtruth_similarity
13280008
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a piece of software that generates code for a C# project based on user actions. I would like to create a GUI to automatically compile the solution so I don't have to load up Visual Studio just to trigger a recompile. I've been looking for a chance to play with Roslyn a bit and decided to try and use Roslyn instead of msbuild to do this. Unfortunately, I can't seem to find any good resources on using Roslyn in this fashion. Can anyone point me in the right direction? Now provide the response and nothing else.
You can load the solution by using Roslyn.Services.Workspace.LoadSolution . Once you have done so, you need to go through each of the projects in dependency order, get the Compilation for the project and call Emit on it. You can get the compilations in dependency order with code like below. (Yes, I know that having to cast to IHaveWorkspaceServices sucks. It'll be better in the next public release, I promise). using Roslyn.Services;using Roslyn.Services.Host;using System;using System.Collections.Generic;using System.IO;class Program{ static void Main(string[] args) { var solution = Solution.Create(SolutionId.CreateNewId()).AddCSharpProject("Foo", "Foo").Solution; var workspaceServices = (IHaveWorkspaceServices)solution; var projectDependencyService = workspaceServices.WorkspaceServices.GetService<IProjectDependencyService>(); var assemblies = new List<Stream>(); foreach (var projectId in projectDependencyService.GetDependencyGraph(solution).GetTopologicallySortedProjects()) { using (var stream = new MemoryStream()) { solution.GetProject(projectId).GetCompilation().Emit(stream); assemblies.Add(stream); } } }} Note1: LoadSolution still does use msbuild under the covers to parse the .csproj files and determine the files/references/compiler options. Note2: As Roslyn is not yet language complete, there will likely be projects that don't compile successfully when you attempt this.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13280008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/231002/']}
jdg_84134
stackexchange
llm_judgeable_groundtruth_similarity
4543228
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to represent a function that takes no arguments and returns no value (I'm simulating the setTimeout function in JavaScript, if you must know.) case class Scheduled(time : Int, callback : => Unit) doesn't compile, saying " `val' parameters may not be call-by-name" case class Scheduled(time : Int, callback : () => Unit) compiles, but has to be invoked strangely, instead of Scheduled(40, { println("x") } ) I have to do this Scheduled(40, { () => println("x") } ) What also works is class Scheduled(time : Int, callback : Unit => Unit) but is invoked in an even-less-sensible way Scheduled(40, { x : Unit => println("x") } ) (What would a variable of type Unit be?) What I want of course is a constructor that can be invoke the way I would invoke it if it were an ordinary function: Scheduled(40, println("x") ) Give baby his bottle! Now provide the response and nothing else.
Call-by-Name: => Type The => Type notation stands for call-by-name, which is one of the many ways parameters can be passed. If you aren't familiar with them, I recommend taking some time to read that Wikipedia article, even though nowadays it is mostly call-by-value and call-by-reference. What it means is that what is passed is substituted for the value name inside the function. For example, take this function: def f(x: => Int) = x * x If I call it like this var y = 0f { y += 1; y } Then the code will execute like this { y += 1; y } * { y += 1; y } Though that raises the point of what happens if there's an identifier name clash. In traditional call-by-name, a mechanism called capture-avoiding substitution takes place to avoid name clashes. In Scala, however, this is implemented in another way with the same result -- identifier names inside the parameter can't refer to or shadow identifiers in the called function. There are some other points related to call-by-name that I'll speak of after explaining the other two. 0-arity Functions: () => Type The syntax () => Type stands for the type of a Function0. That is, a function which takes no parameters and returns something. This is equivalent to, say, calling the method size() -- it takes no parameters and returns a number. It is interesting, however, that this syntax is very similar to the syntax for an anonymous function literal, which is the cause for some confusion. For example, () => println("I'm an anonymous function") is an anonymous function literal of arity 0, whose type is () => Unit So we could write: val f: () => Unit = () => println("I'm an anonymous function") It is important not to confuse the type with the value, however. Unit => Type This is actually just a Function1, whose first parameter is of type Unit. Other ways to write it would be (Unit) => Type or Function1[Unit, Type]. The thing is... this is unlikely to ever be what one wants. The Unit type's main purpose is indicating a value one is not interested in, so it doesn't make sense to receive that value. Consider, for instance, def f(x: Unit) = ... What could one possibly do with x? It can only have a single value, so one need not receive it. One possible use would be chaining functions returning Unit: val f = (x: Unit) => println("I'm f")val g = (x: Unit) => println("I'm g")val h = f andThen g Because andThen is only defined on Function1, and the functions we are chaining are returning Unit, we had to define them as being of type Function1[Unit, Unit] to be able to chain them. Sources of Confusion The first source of confusion is thinking that the similarity between type and literal that exists for 0-arity functions also exists for call-by-name. In other words, thinking that, because () => { println("Hi!") } is a literal for () => Unit, then { println("Hi!") } would be a literal for => Unit. It is not. That is a block of code, not a literal. Another source of confusion is that the Unit type's value is written (), which looks like a 0-arity parameter list (but it is not).
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/4543228', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/238884/']}
jdg_84135
stackexchange
llm_judgeable_groundtruth_similarity
1712414
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can someone explain to me what the purpose of the Unity Application Block is? I tried looking through the documentation but it's all very abstract. What are some practical uses for the Unity block? Now provide the response and nothing else.
Inversion of Control A quick summation (a lot more reading is available on this topic, and I highly suggest reading more)... Microsoft's Unity from the Enterprise Patterns and Practices team is an Inversion of Control container project, or IoC for short. Just like Castle Windsor, StructureMap, etc. This type of development is also referred to, in layman's terms, as Loosely Coupling your components. IoC includes a pattern for Dependency Injection of your objects, in which you rely on an external component to wire up the dependencies within your objects. For example, instead of accessing static managers (which are near impossible to unit test), you create an object that relies on an external dependency to act upon. Let's take a Post service in which you want to access the DB to get a Post. public class PostService : IPostService{ private IPostRepository _postRepo; public PostService(IPostRepository postRepo) { _postRepo = postRepo; } public IList<Post> GetPosts() { return _postRepo.GetAllPosts().ToList(); }} This PostService object now has an external dependency on IPostRepository. Notice how no concretes and no static manager classes are used? Instead, you have a loose coupling on a simple interface - which gives you the power of wiring up all different kinds of concrete classes that implement IPostRepository. Microsoft Unity's purpose is to wire up that IPostRepository for you, automatically. So you never have to worry about doing: // you never have to do this with UnityIPostRepository repo = new PostRepository();IPostService service = new PostService(repo); // dependency injectionIList<Post> posts = service.GetPosts(); The above shows where you have to implement two concrete classes, PostRepository() and PostService(). That is tightly coupling your application to demand/require those exact instances, and makes it very difficult to unit test. Instead, you would use Unity in your end point (the controller in MVC, or code-behind in ASPX pages): IUnityContainer ioc = new UnityContainer();IPostService postService = ioc.Resolve<IPostService>();IList<Post> posts = postService.GetPosts(); Notice that there are no concretes used in this example (except UnityContainer and Post, obviously)! No concretes of the services, and no repository. That is loose coupling at its finest. Here's the real kicker... Unity (or any IoC container framework out there!) will inspect IPostService for any dependencies. It will see that it wants (depends) on an instance of IPostRepository. So, Unity will go into its object map and look for the first object that implements IPostRepository that was registered with the container, and return it (i.e. a SqlPostRepository instance). That is the real power behind IoC frameworks - the power to inspect services and wire up any of the dependencies automatically. I need to finish my blog post about the comparison of Unity vs Castle vs StructureMap. I actually prefer Castle Windsor personally, due to its configuration file options and extensibility points.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1712414', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/77604/']}
jdg_84136
stackexchange
llm_judgeable_groundtruth_similarity
8390322
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a process with a Select which takes a long time to finish, on the order of 5 to 10 minutes. I am currently not using NOLOCK as a hint to the MS SQL database engine. At the same time we have another process doing updates and inserts into the same database and same tables. The first process has started, recently to end prematurely with a message SQLEXCEPTION: Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim. This first process is running at other sites in identical conditions but with smaller databases and thus the select statement in question takes a much shorter period of time (on the order of 30 seconds or so). In these other sites, I don't get the deadlock message in these other sites. I also did not get this message at the site that is having the problem initially, but, I assume, as the database has grown, I believe I must have crossed some threshold. Here are my questions: Could the time it takes for a transaction to execute make the associated process more likely to be flagged as a deadlock victim. If I execute the select with a NOLOCK hint, will this remove the problem? I suspect that a datetime field that is checked as part of the WHERE clause in the select statement is causing the slow lookup time. Can I create an index based on this field? Is it advisable? Now provide the response and nothing else.
Q1: Could the time it takes for a transaction to execute make the associated process more likely to be flagged as a deadlock victim? No. The SELECT is the victim because it had only read data, so the transaction has a lower cost associated with it and is chosen as the victim: By default, the Database Engine chooses as the deadlock victim the session running the transaction that is least expensive to roll back. Alternatively, a user can specify the priority of sessions in a deadlock situation using the SET DEADLOCK_PRIORITY statement. DEADLOCK_PRIORITY can be set to LOW, NORMAL, or HIGH, or alternatively can be set to any integer value in the range (-10 to 10). Q2. If I execute the select with a NOLOCK hint, will this remove the problem? No. For several reasons: you should first try to eliminate the deadlock properly, by investigating the root cause; dirty reads are inconsistent reads; the proper way to specify dirty reads is to use transaction isolation levels; and there is a much better solution: read committed snapshot. Q3. I suspect that a datetime field that is checked as part of the WHERE clause in the select statement is causing the slow lookup time. Can I create an index based on this field? Is it advisable? Probably. The cause of the deadlock is very likely to be a poorly indexed database. 10-minute queries are acceptable only in such narrow conditions that I'm 100% certain they are not acceptable in your case. With 99% confidence I declare that your deadlock is caused by a large table scan conflicting with updates. Start by capturing the deadlock graph to analyze the cause. You will very likely have to optimize the schema of your database. Before you do any modification, read the topic Designing Indexes and the sub-articles.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8390322', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/250030/']}
jdg_84137
stackexchange
llm_judgeable_groundtruth_similarity
66399063
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am moving a project from React to Next.js and was wondering if the same authentication process is okay. Basically, the user enters their username and password and this is checked against database credentials via an API (Node.js/Express). So, I am not using Next.js internal api functionality, but a totally decoupled API from my Next.js project. If the login credentials are correct, a JWT token is sent back to the client. I wanted to store that in local storage and then redirect the user. Any future HTTP requests will send the token in the header and check it is valid via the API. Is this okay to do? I ask because I see a lot of Next.js auth using cookies or sessions and don't know if that is the 'standard' approach which I should rather adopt. Now provide the response and nothing else.
My answer is purely based on my experiences and things I read. Feel free to correct it if I happened to be wrong. So, my way is to store your token in HttpOnly cookie, and always use that cookie to authorize your requests to the Node API via Authorization header. I happen to also use Node.js API in my own project, so I know what's going on. Following is an example of how I usually handle authentication with Next.js and Node.js API. In order to ease up authentication problems, I'm using Next.js's built in getServerSideProps function in a page to build a new reusable higher order component that will take care of authentication. In this case, I will name it isLoggedIn . // isLoggedIn.jsxexport default (GetServerSidePropsFunction) => async (ctx) => { // 1. Check if there is a token in cookies. Let's assume that your JWT is stored in 'jwt'. const token = ctx.req.cookies?.jwt || null; // 2. Perform an authorized HTTP GET request to the private API to check if the user is genuine. const { data } = await authenticate(...); // your code here... // 3. If there is no user, or the user is not authenticated, then redirect to homepage. if (!data) { return { redirect: { destination: '/', permanent: false, }, }; } // 4. Return your usual 'GetServerSideProps' function. return await GetServerSidePropsFunction(ctx);}; getServerSideProps will block rendering until the function has been resolved, so make sure your authentication is fast and does not waste much time. You can use the higher order component like this. Let's call this one profile.jsx , for one's profile page. // profile.jsxexport default isLoggedIn(async (ctx) => { // In this component, do anything with the authorized user. Maybe getting his data? const token = ctx.req.cookies.jwt; const { data } = await getUserData(...); // don't forget to pass his token in 'Authorization' header. return { props: { data, }, },}); This should be secure, as it is almost impossible to manipulate anything that's on server-side, unless one manages to find a way to breach into your back-end. If you want to make a POST request, then I usually do it like this. // profile.jsxconst handleEditProfile = async (e) => { const apiResponse = await axios.post(API_URL, data, { withCredentials: true }); // do anything...}; In a POST request, the HttpOnly cookie will also be sent to the server, because of the withCredentials parameter being set to true. There is also an alternative way of using Next.js's serverless API to send the data to the server. Instead of making a POST request to the API, you'll make a POST request to the 'proxy' Next.js's serverless API, where it will perform another POST request to your API.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/66399063', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8463989/']}
jdg_84138
stackexchange
llm_judgeable_groundtruth_similarity
1230015
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to add the svn.revision to the project version as a build number and can't seem to do so. My jar has the correct name during packaging, but when it's installed in my local repository it is as if ${buildNumber} is/was undefined when the version was set. I get foo-1.0.0-SNAPSHOT- ${buildNumber} instead of foo-1.0.0-SNAPSHOT- 304 Any idea what I'm doing wrong or is adding a revision to the project version a bad idea? Thanks for the help. <project> ... <version>1.0.0-${release.identifier}-${buildNumber}</version> <properties> <release.identifier>SNAPSHOT</release.identifier> </properties> ... <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>buildnumber-maven-plugin</artifactId> <executions> <execution> <id>useLastCommittedRevision</id> <goals> <goal>create</goal> </goals> <configuration> <useLastCommittedRevision>true</useLastCommittedRevision> </configuration> </execution> </executions> </plugin> </plugins> ...</project> Now provide the response and nothing else.
The problem has two parts: You're trying to set the buildNumber into the version before it is resolved so it will always be ${buildNumber} rather than the resolved value. Instead of trying to dynamically change the version, you should set the buildNumber into the finalName element in the build. This will create the artifacts with the intended name in the local repository. The install plugin will ignore the finalName and deploy it as 1.0.0-SNAPSHOT regardless, I don't know of a way to address that. The buildNumber is added to the Manifest if you configure the plugin as below. So your configuration would be something like: <version>1.0.0-${release.identifier}</version>...<build> <finalName>${project.artifactId}-${project.version}-${buildNumber}</finalName> ...</build> I would avoid using build numbers on SNAPSHOT projects. Maven provides the SNAPSHOT keyword to signify a volatile project in active development. So if you reference a project with a SNAPSHOT dependency version, Maven will automatically check for updates and keep your dependencies in sync. If you then add a build number to the end of that version, you will have to manually update the dependencies, so you lose any benefit of having the SNAPSHOT suffix. I personally avoid using build numbers where possible anyway. If I have to update a project, I just bump the version number, or use a suffix like beta-2 or RC2 . If you need to track the revision in the SNAPSHOT , I'd recommend adding it to the Manifest so you can check where the build originated, but use the standard SNAPSHOT suffix to allow Maven to resolve the versions normally. The configuration below shows how to add the revision to the Manifest. As far as your configuration is concerned, it looks OK to me assuming your SCM url is set up correctly. If you have no SCM configuration in your POM that may be the problem. Can you run with -X and check for any output from the plugin indicating why it isn't setting the property? <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>buildnumber-maven-plugin</artifactId> <version>0.9.4</version> <executions> <execution> <id>useLastCommittedRevision</id> <phase>validate</phase> <goals> <goal>create</goal> </goals> </execution> </executions></plugin><plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>2.1</version> <configuration> <archive> <manifest> <addDefaultImplementationEntries>true</addDefaultImplementationEntries> </manifest> <manifestEntries> <Implementation-Build>${buildNumber}</Implementation-Build> </manifestEntries> </archive> </configuration></plugin>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1230015', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9950/']}
jdg_84139
stackexchange
llm_judgeable_groundtruth_similarity
24026320
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm developing a web app on Node.js (+ express 4) where users can set their profile image by uploading it to the server. We already limit the file mimetype and max filesize, so the user can't upload more than 200KB png or jpeg images. The problem is we'd like to resize (serverside) the uploaded image resolution to 200x200 to improve page loading and saving space on disk. After some research, all answers pointed to using any module based on ImageMagick or GraphicsMagick. However, having to install ImageMagick/GraphicsMagick to do a simple image resizing seems too overkill for me, so, is there any other solution other than this for Node.js? Edit: I've changed the accepted solution to sharp as the previous solution (lwip) is no longer maintained. Thanks for all your feedback! Now provide the response and nothing else.
I would vote for sharp : sharp('input.jpg') .resize(200, 200) .toFile('ouput.jpg', function(err) { // output.jpg is a 200 pixels wide and 200 pixels high image // containing a scaled and cropped version of input.jpg }); It's fast, typically 6x faster than the fastest imagemagick-based node bindings , and runs in very little memory, perhaps 10x less . sharp links to the libvips image library directly, there is no shelling out to an external program, and the library itself is faster and more efficient than *magick at this task. It supports useful things like stream, buffer and filesystem input and output, colour management, transparency, promises, overlays, WebP, SVG, and more. As of sharp 0.20, npm will automatically download complete pre-compiled binaries on most platforms, so there's no need for node-gyp. Just enter: npm install sharp or: yarn add sharp And off you go.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/24026320', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3320540/']}
jdg_84140