Dataset schema: source (string, 1 distinct value); task_type (string, 1 distinct value); in_source_id (string, length 1 to 8); prompt (string, length 209 to 40.4k); gold_standard_solution (string, length 0 to 56.7k); verification_info (string, 1 distinct value); metadata (string, length 138 to 225); problem_id (string, length 9 to 10). Each record below lists these fields in order.
stackexchange
llm_judgeable_groundtruth_similarity
13248593
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to delete a specific number of entries from the database? I did something like this:

EntriesToDelete = Statusmessages.objects.filter(time__lt=date)[:30000]
EntriesToDelete.delete()

But I get an error which says: AssertionError: Cannot use 'limit' or 'offset' with delete. How can I specify the number of entries to be deleted? Now provide the response and nothing else.
You could do it like this: Statusmessages.objects.filter(pk__in=Statusmessages.objects.filter(time__lt=date).values_list('pk')[:30000]).delete()
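If the table is very large, the same idea can be wrapped in a loop that deletes in batches until nothing old is left. The sketch below is only an illustration under the question's assumptions (a Statusmessages model with a time field and a date variable in scope); the batch size and the helper name delete_old_entries are made up for the example:

def delete_old_entries(date, batch_size=30000):
    while True:
        # Materialise one batch of primary keys, then delete exactly those rows.
        pks = list(
            Statusmessages.objects
            .filter(time__lt=date)
            .values_list('pk', flat=True)[:batch_size]
        )
        if not pks:
            break
        Statusmessages.objects.filter(pk__in=pks).delete()

Deleting by an explicit pk list keeps each DELETE statement bounded, which tends to be gentler on the database than one huge statement.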
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13248593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1602561/']}
jdg_83741
stackexchange
llm_judgeable_groundtruth_similarity
250698
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How does one derive using, say, the operator formula for reflections $$ R(r) = (I - 2nn^*)(r),$$ the reflection representation of a vector $$ R(r) = R(x\hat{i} + y\hat{j} + z\hat{k}) = xR(\hat{i}) + yR(\hat{j}) + zR(\hat{k}) = xs_x + ys_y + zs_z \\ = x \left[ \begin{array}{ c c } 0 & 1 \\ 1 & 0 \end{array} \right] + y\left[ \begin{array}{ c c } 0 & -i \\ i & 0 \end{array} \right] + z \left[ \begin{array}{ c c } 1 & 0 \\ 0 & - 1 \end{array} \right] = \left[ \begin{array}{ c c } z & x - iy \\ x+iy & - z \end{array} \right] $$ that comes up when dealing with spinors in 3-D ? Intuitively I can see the matrices are supposed to come from the following geometric picture: The first Pauli matrix is like a reflection about the "y=x" line. The third Pauli matrix is like a reflection about the "x axis". The second Pauli matrix is like a 90° counterclockwise rotation and scalar multiplication by the imaginary unit https://en.wiktionary.org/wiki/Pauli_matrix but why and how did we make these choices? I know we're doing it to end up using a basis of $su(2)$, but assuming you didn't know anything about $su(2)$, how could you set this up so that it becomes obvious that what we end up calling $su(2)$ is the right way to represent reflections? The usual ways basically postulates them or show they work through isomorphism or say the come from the fact a vector is associated with the matrix I've written above without explaining where that came from. The closest thing to an explanation is that they come from the quaternionic product whose link to all this, especially something as simple as reflections through lines, escapes me. Now provide the response and nothing else.
The fact is that, in the general case, $$\vec{E} = -\vec{\nabla}V - \frac{\partial\vec{A}}{\partial t};$$ (signs depend on the conventions used) where $\vec{A}$ is called the vector potential . You can consult for example Wikipedia . Let us consider the homogeneous Maxwell equations: $$\begin{cases}\vec{\nabla}\cdot\vec{B} = 0,\\\vec{\nabla}\times\vec{E} + \frac{\partial\vec{B}}{\partial t} = 0;\end{cases}$$ It is well known that every divergenceless field on $\mathbb{R}^3$ can be written as the curl of another vector field, just as we know that a curl-free field can be written as the gradient of a scalar function on $\mathbb{R}^3$ . Thus from the first equation, $$\vec{B} = \vec{\nabla}\times\vec{A},$$ and substituting this into the second equation, $$\vec\nabla\times\left(\vec{E} + \frac{\partial\vec{A}}{\partial t}\right)=0,$$ since one can exchange the curl with the derivative w.r.t. time. The quantity in parentheses is therefore curl-free, and so one can set $$\vec{E} + \frac{\partial\vec{A}}{\partial t} = -\vec\nabla V,$$ from which $$\vec{E} = -\vec{\nabla}V - \frac{\partial\vec{A}}{\partial t}.$$ Note that if the vector potential is time-independent (as you may take it to be for a static magnetic field), you recover the well-known formula $$\vec{E} = -\vec\nabla V.$$
{}
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/250698', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/25851/']}
jdg_83742
stackexchange
llm_judgeable_groundtruth_similarity
1032041
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a calendar page that allows a user to click on a day, and enter an entry for that day with a form that pops up. I'm no stranger to DOM manipulation with jQuery, but this is something I've done before and I'm beginning to wonder if there's a more efficient way to do this? Would building the HTML manually within JavaScript be the most efficient way performancewise (I assume this is true, over using functions like appendTo() etc) or would creating a hidden construct within the DOM and then cloning it be better? Ideally I want to know the optimal method of doing this to provide a balance between code neatness and performance. Thanks, Will Now provide the response and nothing else.
Working with huge chunks of HTML strings in JavaScript can become very ugly very quickly. For this reason it might be worth considering a JavaScript "template engine". They needn't be complicated - Check out this one by Resig. If you're not up for that then it's probably fine to just continue as you are. Just remember that DOM manipulation is generally quite slow when compared to string manipulation. An example of this performance gap:

// DOM manipulation... slow
var items = ['list item 1', 'list item 2', 'list item 3'];
var UL = $('<ul/>');
$.each(items, function(){
    UL.append('<li>' + this + '</li>');
});

// String manipulation... fast
var items = ['list item 1', 'list item 2', 'list item 3'];
var UL = $('<ul/>').append(
    '<li>' + items.join('</li><li>') + '</li>'
);
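For a sense of what such a "template engine" can look like without pulling in a library, here is a minimal placeholder-substitution helper. This is only a sketch in the spirit of the suggestion above (it is not Resig's actual micro-template), and the #entries selector and the field names are made up for the example:

// Replace {key} placeholders in a template string with values from a data object.
function render(template, data) {
    return template.replace(/\{(\w+)\}/g, function (match, key) {
        return key in data ? data[key] : match;
    });
}

// Usage: build the markup as one string, then hand it to jQuery in a single append.
var html = render('<li><strong>{date}</strong>: {text}</li>',
                  { date: '2009-05-21', text: 'Dentist appointment' });
$('#entries').append(html);

Keeping the markup in a template string like this preserves the string-concatenation speed advantage while keeping the HTML readable in one place.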
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1032041', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/127522/']}
jdg_83743
stackexchange
llm_judgeable_groundtruth_similarity
24686732
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can't seem to find a way to save leading zeros in my VBA code. The zeros are necessary since they correspond to unique IDs. I've tried changing the number format to text and 0000.... in excel and the same approach in my actual code: ActiveSheet.Cells(i, j).NumberFormat = "00000" Any other suggestions? (If I manually type these numbers into VBE it strips the leading zeros too.) EDIT:

Sheets.Add.Name = "Temp"
Sheets("Sheet1").Select
ActiveSheet.Cells(2, 2).NumberFormat = "000"
cid = Cells(2, 2)
MsgBox cid
Sheets("Sheet2").Select
ActiveSheet.Cells(6, 1).NumberFormat = "00000"
sid = Cells(6, 1)
Sheets("Temp").Select
Url = _
"URL;" & _
"http......asp?" & _
"X1=" & cid & "&" & _
"X2=" & sid & "&"

This is inside a loop ultimately but I'm debugging as individual iterations. Now provide the response and nothing else.
All below deliberations were about writing to a cell, not reading from it. Changing the Numberformat doesn't work that way. See if you can figure out how this code works:

Dim v As Integer
v = Val(Sheets("Sheet1").Cells(2, 2))
Dim cid As String
cid = Format(v, "000")

You should never use Select and ActiveSheet that way, it is not necessary; and never use Cells without the sheet it should be referring to. ( Val can be omitted if the cell is really already numeric and not text). Earlier answer... : This code works perfectly fine for me

Worksheets(1).Columns("A").NumberFormat = "@"
Worksheets(1).Cells(1, "A").Value = "00023"

And also does

Worksheets(1).Columns("A").NumberFormat = "000000"
Worksheets(1).Cells(1, "A").Value = "0023"

(Here the string is converted to a number and displayed with 6 digits) EDIT: If that fails - although I could not explain that - I'd still bet that the ' should really work: (No numberformat required.)

Dim s As String
s = "0002"
Worksheets(1).Cells(1, "A").Value = "'" & s
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24686732', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3827228/']}
jdg_83744
stackexchange
llm_judgeable_groundtruth_similarity
14947
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is obvious is that the US has the right to prevent foreigners from entering the country. But the situation this morning is that it is not clear what would happen after the immigrant or visitor lands in the US. In such case if the immigrant or visitor is willing to take full responsibility after he reaches and give it a try, why do the foreign government and airline companies have the right to stop him? In my view, the immigrant has the clearance to leave his country, then it should only be the business of the agency in the US deciding whether let him in or not. Now provide the response and nothing else.
Because the law [1] says that companies must check the visa required by the passenger at the destination country. If they do not do that, they may get fined [2]. This is not even a law particular to the USA, but is based on the Chicago Convention on International Civil Aviation ( https://en.wikipedia.org/wiki/Chicago_Convention_on_International_Civil_Aviation ), which specifies: Article 13: (Entry and Clearance Regulations) A state's laws and regulations regarding the admission and departure of passengers, crew or cargo from aircraft shall be complied with on arrival, upon departure and whilst within the territory of that state. Also, in the best practices: 5.9 The aircraft operator shall be responsible for the cost of custody and care of an improperly documented person from the moment that person is found inadmissible and returned to the aircraft operator for removal from the State. and 5.14 Contracting States shall not fine aircraft operators in the event that arriving and in-transit persons are found to be improperly documented where aircraft operators can demonstrate that they have taken adequate precautions to ensure that these persons had complied with the documentary requirements for entry into the receiving State. This kind of question appears regularly at Travel Stackexchange , with people complaining that they have been denied boarding (not only towards the USA, but to any other country [3]) due to visa issues. [1] BTW, this had nothing to do with 9/11. It works this way in all the world, and AFAIK it worked that way well before 9/11. IIRC, post-9/11, there were additional requirements so that any passenger boarding a plane that went over the USA had to fill in a form before the flight to get clearance, even if he had a visa or if the destination of the flight was not the USA. [2] And, on top of that, the airline will have to provide transportation back to the country of origin for the rejected passenger (although they may try to sue him for the value of the travel). [3] Check this example where the destination was Brazil . Or this one with a Vietnam destination . Or this one with a Schengen one . If you got curious, search the site for "boarding denied visa" and you will get lots of questions and answers.
{}
{'log_upvote_score': 5, 'links': ['https://politics.stackexchange.com/questions/14947', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/6330/']}
jdg_83745
stackexchange
llm_judgeable_groundtruth_similarity
1904870
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: PROBLEM Here is a surprisingly intriguing challenge posed on Euclidea, a mobile app for Euclidean constructions. I have a geometric construction that achieves the goal but not within the constraints. Could anyone advise on a construction that would fit within these constraints? Given A point $P$ on a circle $O$ whose center is not given. Goal Inscribe an equilateral triangle $\Delta{PQR}$ in $\bigcirc{O}$. Constraints Construct all three sides of the $\Delta{PQR}$ to achieve the goal. Only an unmarked straightedge and a non-rusty collapsible compass ( i.e. , a compass that, while on paper, can have any radius but which cannot maintain said radius when not on paper) can be used. Use only from the construction steps below a number of steps that together require just five lines in total. Construct a point: 0 lines (L). Mark the intersection of two curves with a point: 0L. Construct a new line (*or line segment or ray): 1L. Extend a given line segment (*or ray): 1L. Construct a circle ( non-rusty collapsible compass): 1L. Construct the perpendicular bisector of a line segment: 1L. Construct a new line perpendicular to an old line: 1L. Construct an angle bisector: 1L. Hints from Euclidea A known solution employs five constructions in the order: Circle, circle, perpendicular bisector, perpendicular bisector, and line. PREVIOUS WORK As mentioned above, I have a construction that matches neither the 5L-constraint nor the hint. The following figure accompanies the six-line construction below, where the given and goal are shown as an inset: Construct a point $A \ne P$ on circle $\bigcirc{O}$ [0L running total] . Construct $\bigcirc{A(P)}$ centered on point $A$ and with radius $AP$ [1L running total] . $\bigcirc{A(P)}$ also intersects $\bigcirc{O}$ at another point $B \ne P$. Construct $\bigcirc{P(B)}$ [2L running total] . $\bigcirc{P(B)}$ also intersects $\bigcirc{A(P)}$ at another point $C \ne B$. Therefore, $\bigcirc{P(C)} = \bigcirc{P(B)}$. Also, $\overline{CP}$ is tangent to $\bigcirc{O}$ at point $P \Longrightarrow m\angle{CPO} = 90^\circ{}$. Construct $\bigcirc{C(P)}$ [3L running total] . $\bigcirc{C(P)} \cap \bigcirc{P(C)} = \lbrace D, E\rbrace$. $\bigcirc{C(P)} \cong \bigcirc{P(C)} \Longrightarrow m\angle{CPD} = m\angle{CPE} = 60^\circ{} \Longrightarrow m\angle{DPE} = 120^\circ{}$. Without actually constructing anything: Let $\overrightarrow{EP} \cap \bigcirc{P(C)} = E' \Longrightarrow$ $m\angle{DPE'} = m\angle{EPE'} - m\angle{DPE} = 180^\circ{} - 120^\circ{} = 60^\circ{}$. Therefore, $\angle{DPE'}$ is the first corner of the desired triangle. Construct $\overline{DP}$ [4L running total] . $\overline{DP} \cap \bigcirc{O} \cap \neg P = Q$. $\overline{PQ}$ is the first side of the desired triangle. Construct $\overline{DP}$ [5L running total] . $\overline{EP} \cap \bigcirc{O} \cap \neg P = R$. $\overline{PR}$ is the second side of the desired triangle. Construct $\overline{QR}$ [6L running total] . $\overline{QR}$ is the last side of the desired triangle. $\blacksquare$ Now provide the response and nothing else.
Using Euclidea's hint, I think this works (barring possible pathological placement of $A$ that I haven't checked) ... Let the given point be $P$ and the given circle be $p$. Let $A\neq P$ be a point inside circle $p$. [0 lines] Construct circle $a$ about $P$ through $A$. [1 line] Let circles $p$ and $a$ meet at points $B_1$ and $B_2$. Construct circle $b$ about $B_1$ through $P$. [2 lines] Let circles $a$ and $b$ meet at points $C_1$ and $C_2$. Construct line $c_1$ as the perpendicular bisector of $\overline{C_1B_2}$. [3 lines] Let circle $p$ and line $c_1$ meet at $Q$. Note that $P$ lies on $c_1$. Construct line $c_2$ as the perpendicular bisector of $\overline{C_2B_2}$. [4 lines] Let circle $p$ and line $c_2$ meet at $R$. Note that $P$ lies on $c_2$. Construct $\overleftrightarrow{QR}$. [5 lines] Note that $\triangle PQR$ is equilateral, as desired. For proof, consider this (differently-tinted) sub-figure: From congruent equilateral triangles $\triangle B_1 P C_1$ and $\triangle B_1 P C_2$, we deduce that $\stackrel{\frown}{B_1 C_1} \cong \stackrel{\frown}{B_1C_2}$, and thus also that $\angle B_1 B_2 C_1 \cong \angle B_1 B_2 C_2$ (as inscribed angles subtending congruent arcs) with common measure $30^\circ$. A trivial angle chase through right triangles $\triangle B_2 M_1 D_1$ and $\triangle B_2 M_2 D_2$ (and $\triangle B_2 M_2 N$, if you like) shows, then, that $\triangle PD_1D_2$ is (equiangular, and therefore) equilateral, so that the bisector of $\angle D_1 P D_2$ is perpendicular to the common chord $\overline{B_1B_2}$ of (in the notation of the construction) circles $p$ and $a$. We have shown that lines $c_1$ and $c_2$ (1) meet at a $60^\circ$ angle at $P$, and (2) have an angle bisector that passes through the center of circle $p$. Consequently, they are necessarily side-lines of an equilateral triangle inscribed in $p$ with vertex $P$. $\square$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1904870', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/300754/']}
jdg_83746
stackexchange
llm_judgeable_groundtruth_similarity
130969
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a server component which runs over Zeroc-ICE. When I wanted to load test it, I thought that using parallel library to create multiple requests would do it. But it dint end up that way. Using Parallel(Parallel.For) library from C# apparently was easier but it doesn't seem to be exactly generating everything parallel in the same instant. So it cannot be the definition for creating N requests per second. How should I do it ? I guess anyone who wants to do load testing first would actually think about this . What's the efficient way to actually create N requests in really per second ? Another myth is about the parallel programming. Please enlighten us , if you have used parallel programming patterns in C# or .Net in general . Imagine I have 5 processes. How will start all the five processes at the same time. What does it mean to my consumption of resources ? I have tried reading into many of the material available over the net but I get more and more questions than them being the answer to my questions. I used Parallel.For and created N threads and measured time. Then I tried the same thing using Task.Factory.start for enumeration of tasks. The time measured was different. So What exactly is the different between using these ? When I should use the corresponding classes and for what purposes exactly ? we often have lots of riches but its just we exactly don't know how to differentiate one from the another. This is one such case for me, not being able to find why I should not use one from the another. I used the stopwatch class to measure these times which claims to be the best. In the scenario of me load testing a component, what would be the way to measure the time of response. Stopwatch seems to be the best solution for me . Any opinions are welcome. ps: There are many load testing tools for web applications. Mine is a customised case of server components. And my question is more pertaining to creating N threads per second. All opinions are welcome. Just don't think its so much not a programming question. It ofcourse is. It should ring bells for any programmer who wants to QE stuff by himself to know the performance of his product , first hand by himself.I have tried many options and then had to fall back on how should I actually do it ? Now provide the response and nothing else.
I don't have all the answers. Hopefully I can shed some light on it. To simplify my previous statements about .NET's threading models, just know that Parallel Library uses Tasks, and the default TaskScheduler for Tasks, uses the ThreadPool. The higher you go in the hierarchy (ThreadPool is at the bottom), the more overhead you have when creating the items. That extra overhead certainly doesn't mean it's slower, but it's good to know that it's there. Ultimately the performance of your algorithm in a multi-threaded environment comes down to its design. What performs well sequentially may not perform as well in parallel. There are too many factors involved to give you hard and fast rules, they change depending on what you're trying to do. Since you're dealing with network requests, I'll try and give a small example. Let me state that I am no expert with sockets, and I know next to nothing about Zeroc-Ice. I do know about bit about asynchronous operations, and this is where it will really help you. If you send a synchronous request via a socket, when you call Socket.Receive() , your thread will block until a request is received. This isn't good. Your thread can't make any more requests since it's blocked. Using Socket.Beginxxxxxx(), the I/O request will be made and put in the IRP queue for the socket, and your thread will keep going. This means, that your thread could actually make thousands of requests in a loop without any blocking at all! If I'm understanding you correctly, you are using calls via Zeroc-Ice in your testing code, not actually trying to reach an http endpoint. If that's the case, I can admit that I don't know how Zeroc-Ice works. I would, however, suggest following the advice listed here , particularly the part: Consider Asynchronous Method Invocation (AMI) . The page shows this: By using AMI, the client regains the thread of control as soon as the invocation has been sent (or, if it cannot be sent immediately, has been queued), allowing the client to use that thread to perform other useful work in the mean time. Which seems to be the equivalent of what I described above using .NET sockets. There may be other ways to improve the performance when trying to do a lot of sends, but I would start here or with any other suggestion listed on that page. You've been very vague about the design of your application, so I can be more specific than I have been above. Just remember, do not use more threads than absolutely necessary to get what you need done, otherwise you'll likely find your application running far slower than you want. Some examples in pseudocode (tried to make it as close to ice as possible without me actually having to learn it): var iterations = 100000;for (int i = 0; i < iterations; i++){ // The thread blocks here waiting for the response. // That slows down your loop and you're just wasting // CPU cycles that could instead be sending/receiving more objects MyObjectPrx obj = iceComm.stringToProxy("whateverissupposedtogohere"); obj.DoStuff();} A better way: public interface MyObjectPrx : Ice.ObjectPrx{ Ice.AsyncResult GetObject(int obj, Ice.AsyncCallback cb, object cookie); // other functions}public static void Finished(Ice.AsyncResult result){ MyObjectPrx obj = (MyObjectPrx)result.GetProxy(); obj.DoStuff();}static void Main(string[] args){ // threaded code... 
var iterations = 100000; for (int i = 0; i < iterations; i++) { int num = //whatever MyObjectPrx prx = //whatever Ice.AsyncCallback cb = new Ice.AsyncCallback(Finished); // This function immediately gets called, and the loop continues // it doesn't wait for a response, it just continually sends out socket // requests as fast as your CPU can handle them. The response from the // server will be handled in the callback function when the request // completes. Hopefully you can see how this is much faster when // sending sockets. If your server does not use an Async model // like this, however, it's quite possible that your server won't // be able to handle the requests prx.GetObject(num, cb, null); }} Keep in mind that more threads != better performance when trying to send sockets (or really doing anything). Threads are not magic in that they will automatically solve whatever problem you're working on. Ideally, you want 1 thread per core, unless a thread is spending much of its time waiting, then you can justify having more. Running each request in its own thread is a bad idea, since context switches will occur and resource waste. (If you want to see everything I wrote about that, click edit and look at the past revisions of this post. I removed it since it only seemed to cloud the main issue at hand.) You can definitely make these request in threads, if you want to make a large number of requests per second. However, don't go overboard with the thread creation. Find a balance and stick with it. You'll get better performance if you use an asynchronous model vs a synchronous one. I hope that helps.
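To make the "N requests per second" part of the question concrete, here is one way it is often approached in C# with async/await. This is only a sketch and is not Ice-specific: SendRequestAsync is a placeholder for whatever asynchronous call your component exposes, and the pacing via Task.Delay is approximate (timer resolution and scheduling jitter mean the rate will not be exact).

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

static class LoadTest
{
    // Placeholder for the real asynchronous call under test.
    static Task SendRequestAsync()
    {
        return Task.Delay(10); // simulated latency; replace with the real call
    }

    public static async Task RunAsync(int requestsPerSecond, int totalRequests)
    {
        var interval = TimeSpan.FromSeconds(1.0 / requestsPerSecond);
        var pending = new List<Task>();

        for (int i = 0; i < totalRequests; i++)
        {
            pending.Add(Task.Run(async () =>
            {
                var sw = Stopwatch.StartNew();   // time each individual request
                await SendRequestAsync();
                sw.Stop();
                Console.WriteLine(sw.ElapsedMilliseconds);
            }));

            await Task.Delay(interval);          // roughly pace the send rate
        }

        await Task.WhenAll(pending);             // wait for every response
    }
}

The point is the same one the answer above makes: the sending loop is never blocked waiting on a response, so the request rate is limited by the pacing you choose rather than by per-request latency.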
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/130969', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/44139/']}
jdg_83747
stackexchange
llm_judgeable_groundtruth_similarity
218005
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: This is a Canonical Question about Server Security - Responding to Breach Events (Hacking) See Also: Tips for Securing a LAMP Server Reinstall after a Root Compromise? Canonical Version I suspect that one or more of my servers is compromised by a hacker, virus, or other mechanism: What are my first steps? When I arrive on site should I disconnect the server, preserve "evidence", are there other initial considerations? How do I go about getting services back online? How do I prevent the same thing from happening immediately again? Are there best practices or methodologies for learning from this incident? If I wanted to put a Incident Response Plan together, where would I start? Should this be part of my Disaster Recovery or Business Continuity Planning? Original Version 2011.01.02 - I'm on my way into work at 9.30 p.m. on a Sunday because our server has been compromised somehow and was resulting in a DOS attack on our provider. The servers access to the Internet has been shut down which means over 5-600 of our clients sites are now down. Now this could be an FTP hack, or some weakness in code somewhere. I'm not sure till I get there. How can I track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help is appreciated. We are running Open SUSE 11.0. 2011.01.03 - Thanks to everyone for your help. Luckily I WASN'T the only person responsible for this server, just the nearest. We managed to resolve this problem, although it may not apply to many others in a different situation. I'll detail what we did. We unplugged the server from the net. It was performing (attempting to perform) a Denial Of Service attack on another server in Indonesia, and the guilty party was also based there. We firstly tried to identify where on the server this was coming from, considering we have over 500 sites on the server, we expected to be moonlighting for some time. However, with SSH access still, we ran a command to find all files edited or created in the time the attacks started. Luckily, the offending file was created over the winter holidays which meant that not many other files were created on the server at that time. We were then able to identify the offending file which was inside the uploaded images folder within a ZenCart website. After a short cigarette break we concluded that, due to the files location, it must have been uploaded via a file upload facility that was inadequetly secured. After some googling, we found that there was a security vulnerability that allowed files to be uploaded, within the ZenCart admin panel, for a picture for a record company. (The section that it never really even used), posting this form just uploaded any file, it did not check the extension of the file, and didn't even check to see if the user was logged in. This meant that any files could be uploaded, including a PHP file for the attack. We secured the vulnerability with ZenCart on the infected site, and removed the offending files. The job was done, and I was home for 2 a.m. The Moral - Always apply security patches for ZenCart, or any other CMS system for that matter. As when security updates are released, the whole world is made aware of the vulnerability. - Always do backups, and backup your backups. - Employ or arrange for someone that will be there in times like these. 
To prevent anyone from relying on a panicky post on Server Fault. Now provide the response and nothing else.
It's hard to give specific advice from what you've posted here but I do have some generic advice based on a post I wrote ages ago back when I could still be bothered to blog. Don't Panic First things first, there are no "quick fixes" other than restoring your system from a backup taken prior to the intrusion, and this has at least two problems. It's difficult to pinpoint when the intrusion happened. It doesn't help you close the "hole" that allowed them to break in last time, nor deal with the consequences of any "data theft" that may also have taken place. This question keeps being asked repeatedly by the victims of hackers breaking into their web server. The answers very rarely change, but people keep asking the question. I'm not sure why. Perhaps people just don't like the answers they've seen when searching for help, or they can't find someone they trust to give them advice. Or perhaps people read an answer to this question and focus too much on the 5% of why their case is special and different from the answers they can find online and miss the 95% of the question and answer where their case is near enough the same as the one they read online. That brings me to the first important nugget of information. I really do appreciate that you are a special unique snowflake. I appreciate that your website is too, as it's a reflection of you and your business or at the very least, your hard work on behalf of an employer. But to someone on the outside looking in, whether a computer security person looking at the problem to try and help you or even the attacker himself, it is very likely that your problem will be at least 95% identical to every other case they've ever looked at. Don't take the attack personally, and don't take the recommendations that follow here or that you get from other people personally. If you are reading this after just becoming the victim of a website hack then I really am sorry, and I really hope you can find something helpful here, but this is not the time to let your ego get in the way of what you need to do. You have just found out that your server(s) got hacked. Now what? Do not panic. Absolutely do not act in haste, and absolutely do not try and pretend things never happened and not act at all. First: understand that the disaster has already happened. This is not the time for denial; it is the time to accept what has happened, to be realistic about it, and to take steps to manage the consequences of the impact. Some of these steps are going to hurt, and (unless your website holds a copy of my details) I really don't care if you ignore all or some of these steps, that's up to you. But following them properly will make things better in the end. The medicine might taste awful but sometimes you have to overlook that if you really want the cure to work. Stop the problem from becoming worse than it already is: The first thing you should do is disconnect the affected systems from the Internet. Whatever other problems you have, leaving the system connected to the web will only allow the attack to continue. I mean this quite literally; get someone to physically visit the server and unplug network cables if that is what it takes, but disconnect the victim from its muggers before you try to do anything else. Change all your passwords for all accounts on all computers that are on the same network as the compromised systems. No really. All accounts. All computers. Yes, you're right, this might be overkill; on the other hand, it might not. You don't know either way, do you? 
Check your other systems. Pay special attention to other Internet facing services, and to those that hold financial or other commercially sensitive data. If the system holds anyone's personal data, immediately inform the person responsible for data protection (if that's not you) and URGE a full disclosure. I know this one is tough. I know this one is going to hurt. I know that many businesses want to sweep this kind of problem under the carpet but the business is going to have to deal with it - and needs to do so with an eye on any and all relevant privacy laws. However annoyed your customers might be to have you tell them about a problem, they'll be far more annoyed if you don't tell them, and they only find out for themselves after someone charges $8,000 worth of goods using the credit card details they stole from your site. Remember what I said previously? The bad thing has already happened. The only question now is how well you deal with it. Understand the problem fully: Do NOT put the affected systems back online until this stage is fully complete, unless you want to be the person whose post was the tipping point for me actually deciding to write this article. I'm not going to link to that post so that people can get a cheap laugh, but the real tragedy is when people fail to learn from their mistakes. Examine the 'attacked' systems to understand how the attacks succeeded in compromising your security. Make every effort to find out where the attacks "came from", so that you understand what problems you have and need to address to make your system safe in the future. Examine the 'attacked' systems again, this time to understand where the attacks went, so that you understand what systems were compromised in the attack. Ensure you follow up any pointers that suggest compromised systems could become a springboard to attack your systems further. Ensure the "gateways" used in any and all attacks are fully understood, so that you may begin to close them properly. (e.g. if your systems were compromised by a SQL injection attack, then not only do you need to close the particular flawed line of code that they broke in by, you would want to audit all of your code to see if the same type of mistake was made elsewhere). Understand that attacks might succeed because of more than one flaw. Often, attacks succeed not through finding one major bug in a system but by stringing together several issues (sometimes minor and trivial by themselves) to compromise a system. For example, using SQL injection attacks to send commands to a database server, discovering the website/application you're attacking is running in the context of an administrative user and using the rights of that account as a stepping-stone to compromise other parts of a system. Or as hackers like to call it: "another day in the office taking advantage of common mistakes people make". Why not just "repair" the exploit or rootkit you've detected and put the system back online? In situations like this the problem is that you don't have control of that system any more. It's not your computer any more. The only way to be certain that you've got control of the system is to rebuild the system. 
While there's a lot of value in finding and fixing the exploit used to break into the system, you can't be sure about what else has been done to the system once the intruders gained control (indeed, its not unheard of for hackers that recruit systems into a botnet to patch the exploits they used themselves, to safeguard "their" new computer from other hackers, as well as installing their rootkit). Make a plan for recovery and to bring your website back online and stick to it: Nobody wants to be offline for longer than they have to be. That's a given. If this website is a revenue generating mechanism then the pressure to bring it back online quickly will be intense. Even if the only thing at stake is your / your company's reputation, this is still going generate a lot of pressure to put things back up quickly. However, don't give in to the temptation to go back online too quickly. Instead move with as fast as possible to understand what caused the problem and to solve it before you go back online or else you will almost certainly fall victim to an intrusion once again, and remember, "to get hacked once can be classed as misfortune; to get hacked again straight afterward looks like carelessness" (with apologies to Oscar Wilde). I'm assuming you've understood all the issues that led to the successful intrusion in the first place before you even start this section. I don't want to overstate the case but if you haven't done that first then you really do need to. Sorry. Never pay blackmail / protection money. This is the sign of an easy mark and you don't want that phrase ever used to describe you. Don't be tempted to put the same server(s) back online without a full rebuild. It should be far quicker to build a new box or "nuke the server from orbit and do a clean install" on the old hardware than it would be to audit every single corner of the old system to make sure it is clean before putting it back online again. If you disagree with that then you probably don't know what it really means to ensure a system is fully cleaned, or your website deployment procedures are an unholy mess. You presumably have backups and test deployments of your site that you can just use to build the live site, and if you don't then being hacked is not your biggest problem. Be very careful about re-using data that was "live" on the system at the time of the hack. I won't say "never ever do it" because you'll just ignore me, but frankly I think you do need to consider the consequences of keeping data around when you know you cannot guarantee its integrity. Ideally, you should restore this from a backup made prior to the intrusion. If you cannot or will not do that, you should be very careful with that data because it's tainted. You should especially be aware of the consequences to others if this data belongs to customers or site visitors rather than directly to you. Monitor the system(s) carefully. You should resolve to do this as an ongoing process in the future (more below) but you take extra pains to be vigilant during the period immediately following your site coming back online. The intruders will almost certainly be back, and if you can spot them trying to break in again you will certainly be able to see quickly if you really have closed all the holes they used before plus any they made for themselves, and you might gather useful information you can pass on to your local law enforcement. Reducing the risk in the future. 
The first thing you need to understand is that security is a process that you have to apply throughout the entire life-cycle of designing, deploying and maintaining an Internet-facing system, not something you can slap a few layers over your code afterwards like cheap paint. To be properly secure, a service and an application need to be designed from the start with this in mind as one of the major goals of the project. I realise that's boring and you've heard it all before and that I "just don't realise the pressure man" of getting your beta web2.0 (beta) service into beta status on the web, but the fact is that this keeps getting repeated because it was true the first time it was said and it hasn't yet become a lie. You can't eliminate risk. You shouldn't even try to do that. What you should do however is to understand which security risks are important to you, and understand how to manage and reduce both the impact of the risk and the probability that the risk will occur. What steps can you take to reduce the probability of an attack being successful? For example: Was the flaw that allowed people to break into your site a known bug in vendor code, for which a patch was available? If so, do you need to re-think your approach to how you patch applications on your Internet-facing servers? Was the flaw that allowed people to break into your site an unknown bug in vendor code, for which a patch was not available? I most certainly do not advocate changing suppliers whenever something like this bites you because they all have their problems and you'll run out of platforms in a year at the most if you take this approach. However, if a system constantly lets you down then you should either migrate to something more robust or at the very least, re-architect your system so that vulnerable components stay wrapped up in cotton wool and as far away as possible from hostile eyes. Was the flaw a bug in code developed by you (or a contractor working for you)? If so, do you need to re-think your approach to how you approve code for deployment to your live site? Could the bug have been caught with an improved test system, or with changes to your coding "standard" (for example, while technology is not a panacea, you can reduce the probability of a successful SQL injection attack by using well-documented coding techniques). Was the flaw due to a problem with how the server or application software was deployed? If so, are you using automated procedures to build and deploy servers where possible? These are a great help in maintaining a consistent "baseline" state on all your servers, minimising the amount of custom work that has to be done on each one and hence hopefully minimising the opportunity for a mistake to be made. Same goes with code deployment - if you require something "special" to be done to deploy the latest version of your web app then try hard to automate it and ensure it always is done in a consistent manner. Could the intrusion have been caught earlier with better monitoring of your systems? Of course, 24-hour monitoring or an "on call" system for your staff might not be cost effective, but there are companies out there who can monitor your web facing services for you and alert you in the event of a problem. You might decide you can't afford this or don't need it and that's just fine... just take it into consideration. Use tools such as tripwire and nessus where appropriate - but don't just use them blindly because I said so. 
Take the time to learn how to use a few good security tools that are appropriate to your environment, keep these tools updated and use them on a regular basis. Consider hiring security experts to 'audit' your website security on a regular basis. Again, you might decide you can't afford this or don't need it and that's just fine... just take it into consideration. What steps can you take to reduce the consequences of a successful attack? If you decide that the "risk" of the lower floor of your home flooding is high, but not high enough to warrant moving, you should at least move the irreplaceable family heirlooms upstairs. Right? Can you reduce the amount of services directly exposed to the Internet? Can you maintain some kind of gap between your internal services and your Internet-facing services? This ensures that even if your external systems are compromised the chances of using this as a springboard to attack your internal systems are limited. Are you storing information you don't need to store? Are you storing such information "online" when it could be archived somewhere else. There are two points to this part; the obvious one is that people cannot steal information from you that you don't have, and the second point is that the less you store, the less you need to maintain and code for, and so there are fewer chances for bugs to slip into your code or systems design. Are you using "least access" principles for your web app? If users only need to read from a database, then make sure the account the web app uses to service this only has read access, don't allow it write access and certainly not system-level access. If you're not very experienced at something and it is not central to your business, consider outsourcing it. In other words, if you run a small website talking about writing desktop application code and decide to start selling small desktop applications from the site then consider "outsourcing" your credit card order system to someone like Paypal. If at all possible, make practicing recovery from compromised systems part of your Disaster Recovery plan. This is arguably just another "disaster scenario" that you could encounter, simply one with its own set of problems and issues that are distinct from the usual 'server room caught fire'/'was invaded by giant server eating furbies' kind of thing. ... And finally I've probably left out no end of stuff that others consider important, but the steps above should at least help you start sorting things out if you are unlucky enough to fall victim to hackers. Above all: Don't panic. Think before you act. Act firmly once you've made a decision, and leave a comment below if you have something to add to my list of steps.
{}
{'log_upvote_score': 11, 'links': ['https://serverfault.com/questions/218005', 'https://serverfault.com', 'https://serverfault.com/users/65500/']}
jdg_83748
stackexchange
llm_judgeable_groundtruth_similarity
1931
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: An article on the website of Heterosexuals Organized for a Moral Environment ( H.O.M.E. ) claims that sexual abuse has been shown to lead to Homosexuality: It is a well-documented fact that many many homosexuals were sexually abused when young... In other words, there is an abundance of evidence that many many homosexuals were born heterosexual but were disoriented by sexual abuse. Conservapedia attempts to back up this claim with several studies they say show that sexual abuse is a cause for Homosexuality, including: In 1999, Dr. Jeffrey Satinover, M.D. wrote the following regarding the position of sexual abuse being a contributing factor for homosexuality: “ Many studies demonstrate a sadly disproportionate extent of sexual abuse in the childhoods of homosexual men, suggesting at the least that both homosexual unhappiness and homosexuality itself derive from common causes...[30] ” In 2001, the journal Archives of Sexual Behavior published a study entitled Comparative data of childhood and adolescence molestation in heterosexual and homosexual persons. The abstract for this article states the following: “ In research with 942 nonclinical adult participants, gay men and lesbian women reported a significantly higher rate of childhood molestation than did heterosexual men and women. Forty-six percent of the homosexual men in contrast to 7% of the heterosexual men reported homosexual molestation. Twenty-two percent of lesbian women in contrast to 1% of heterosexual women reported homosexual molestation. This research is apparently the first survey that has reported substantial homosexual molestation of girls.[31] ” In 1998, Dr. William C. Holmes, M.D. and Dr. Gail B. Slap, M.D. reported in the medical journal JAMA the following: “ Adolescent boys, particularly those victimized by males, were up to 7 times more likely to identify themselves as homosexual or bisexual than peers who had not been abused (P<.001). This reasoning seems to stand counter to the idea that homosexuality has a biological origin rather than a social one. Is there any truth to the claims that sexual abuse can lead to homosexuality? Now provide the response and nothing else.
Childhood Sexual Abuse Does Not Cause Homosexuality Is there any truth to the claims that sexual abuse can lead to homosexuality? Obviously there is not a causal relationship between childhood sexual abuse and homosexuality. If childhood sexual abused caused homosexuality then we could safely conclude that anyone who was sexually abused as a child would be homosexual. Such a statement can be disproved by finding a single person who was sexually abused as a child and who is not a homosexual. I am confident that somewhere in the seven or so billion people that populate this earth there exists at least one person who was sexually abused as a child yet identifies as a heterosexual. Ergo, sexual childhood abuse does not cause homosexuality. Granted, the quotations you cited did not claim that childhood sexual abuse leads to homosexuality. Rather, they note that there is a strong link between the two. So let's talk about whether there exists a correlation between childhood sexual abuse and homosexuality. The Correlation Between Childhood Sexual Abuse and Homosexuality Whether there is a higher incidence of childhood sexual abuse among homosexuals than heterosexuals depends on which studies you read. For instance, in The prevalence of child sexual abuse: Integrative review adjustment for potential response and measurement biases , authors Kevin Gorey and Donald Leslie note: Female and male child sexual abuse prevalence estimates adjusted for response rates ... and adjusted for operational definitions ... were 14.5% and 7.2%. In the study History of Childhood Sexual Abuse and HIV Risk Behaviors in Homosexual and Bisexual Men , David Brennan et al find that "childhood sexual abuse was reported by 15.5% of the survey respondents." (Their survey was given to men, 99.5% of which identified as gay or bisexual.) If we take these numbers at face value then we are led to the conclusion that the incidence of childhood sexual abuse in homosexuals and bisexuals is nearly twice of that of the general population. The History of Childhood Sexual Abuse and HIV Risk Behaviors in Homosexual and Bisexual Men study notes that victims of childhood sexual abuse are much more likely to engage in risky behaviors, including HIV status, STDs, drug use, and prostitution. And Wikipedia has a rather detailed look at the psychological and physiological harms caused by such abuse: Psychological, emotional, physical, and social effects include depression, post-traumatic stress disorder, anxiety, eating disorders, poor self-esteem, dissociative and anxiety disorders; general psychological distress and disorders such as somatization, neurosis, chronic pain, sexualized behavior, school/learning problems; and behavior problems including substance abuse, self-destructive behavior, animal cruelty, crime in adulthood and suicide. If we agree that there exists a higher incidence of childhood sexual abuse among homosexuals that implies a positive correlation, but one is left wondering whether the correlation can be described by other factors. Men vs. Women If there is a strong correlation between childhood sexual abuse and homosexuality and if factors such as the genders of the abuser and abused are irrelevant then we would expect to see a higher percentage of lesbians than gay males in society, given the fact that women are nearly twice as likely to be the victims of sexual abuse than men. 
Yet according to The prevalence of homosexual behavior and attraction in the United States, the United Kingdom and France : Examination of homosexual behavior separately finds that 6.2 ... and 3.6 ... of females in the United States ... report having had sexual contact with someone of the same sex in the previous 5 years. So twice as many women suffer from childhood sexual abuse, yet only half as many report homosexual experiences. This would lead one to conclude that the correlation lies more between same-sex abuse and homosexuality. Presumably, most sexual abuse perpetrators are men, so girls are suffering from heterosexual abuse while boys are suffering from homosexual abuse.
{}
{'log_upvote_score': 4, 'links': ['https://skeptics.stackexchange.com/questions/1931', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/49/']}
jdg_83749
stackexchange
llm_judgeable_groundtruth_similarity
314289
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose $\lim_{n\to \infty }z_n=z$. Let $w_n=\sum_{k=0}^n {2^{-n}{n \choose k}z_k}$ Prove $\lim_{n\to \infty }w_n=z$. I'm pretty sure I need to use $\sum_{k=0}^\infty{n \choose k}$ = $2^{n}$ in the proof. Help? Thoughts? Now provide the response and nothing else.
This is not a real difficulty. If $X$ is a class then there is some formula $\varphi(x,p)$ such that for some fixed parameter $p$ we have $X=\{x\mid\varphi(x,p)\}$. Now what is the intersection of a class, any class, even a set? It is the collection of all elements which belong to all members of the class. So we write: $$\bigcap X=\{y\mid\forall x(\varphi(x,p)\rightarrow y\in x)\}$$ "Every set which satisfies $\varphi(x,p)$ contains $y$" (note that this is still our fixed $p$ from before). One caveat is that if $X=\varnothing$ then this is not well-defined, at all. Vacuously $\bigcap\varnothing$ includes every element of the universe, but some authors require the elements in $\bigcap X$ to be elements from $\bigcup X$, in which case $\bigcap\varnothing=\varnothing$. So as long as you know that there is at least one inductive set, the class $X$ of all inductive sets is non-empty and we can talk about its intersection well. As for your difficulty, think of $\forall x\in X:\psi(x)$ as a shorthand for $\forall x(\varphi(x,p)\rightarrow\psi(x))$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/314289', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/63782/']}
jdg_83750
stackexchange
llm_judgeable_groundtruth_similarity
30350738
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Trying to define preprocessor directives in Visual Studio 2012. #define FLAG ... #endif But I'm not sure where to include this FLAG in Visual Studio - C#. I remember specifying something like this in C++ projects. Any thoughts? Now provide the response and nothing else.
You have two options as to where to define it: Code file level - In the beginning of the file write #define FLAG . You cannot place anything else (other than comments and blank lines) before define directives. As Ron Beyer points out, a directive defined in a file exists only for that file . Project level - Right click in the project in Solution Explorer, select Properties , then the Build tab, then look at Conditional compilation symbols . Then one can define several comma-separated symbols there such as: FLAG,FOO,BAR . Note that this symbols list is project configuration dependent (there is a configuration selector in the same tab). Note as mentioned in the comments, define does not work in C# the same way that it does in C. In C# you just declare that the symbol exists, but you can't assign a value to it. Hence the only use for these symbols is for #if FLAG directives and for the Conditional attribute .
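To make both options concrete, here is a small self-contained sketch of how such a symbol is consumed; FLAG and DebugLog are made-up names for the example, and the same #if block works identically when FLAG is instead declared under the project's Conditional compilation symbols:

#define FLAG   // file-level definition; must come before any other code in the file

using System;
using System.Diagnostics;

class Demo
{
    // Calls to this method are removed by the compiler when FLAG is not defined.
    [Conditional("FLAG")]
    static void DebugLog(string message)
    {
        Console.WriteLine(message);
    }

    static void Main()
    {
#if FLAG
        Console.WriteLine("FLAG is defined");
#else
        Console.WriteLine("FLAG is not defined");
#endif
        DebugLog("This call only survives compilation when FLAG is defined.");
    }
}

Remember that the file-level #define only affects that one file, while the project-level symbol applies to every file in the selected build configuration.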
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30350738', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1360251/']}
jdg_83751
stackexchange
llm_judgeable_groundtruth_similarity
41030025
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to make a simple contact form using React. Eventually I will send the data collected from the state to a database, but for right now I am trying to just get it to console log the correct values. Right now, the email field overrides the name field and when I console log both states, name shows up and email is undefined. Here is my React Component import React, { Component, PropTypes } from 'react';import ContactData from '../data/ContactData.js';class FormContact extends Component { constructor(props) { super(props) this.state = { name: '', email: '', textArea: '' } } handleChange(event) { event.preventDefault(); this.setState({ name: event.target.value, email: event.target.email }) } handleSubmit(event) { event.preventDefault(); console.log(this.state.name + ' ' + this.state.email); } render() { return ( <form onSubmit={this.handleSubmit.bind(this)}> <label> Name: <input type="text" placeholder="Name" value={this.state.name} onChange={this.handleChange.bind(this)} /> </label><br /> <label> Email: <input type="text" placeholder="Email" value={this.state.email} onChange={this.handleChange.bind(this)}/> </label><br /> <input className="btn btn-primary" type="submit" value="Submit" /> </form> ) }}FormContact.PropTypes = { subName: PropTypes.string, subEmail: PropTypes.string}FormContact.defaultProps = { subName: 'Sam', subEmail: ''}class Contact extends Component { render() { return ( <div> <h1>CONTACT PAGE</h1> <FormContact /> </div> ) }}export default Contact; Now provide the response and nothing else.
If I understand what you want, you could do it as follows : Add an empty object in your state for the form values formValues: {} Add the name attribute to your fields <input name="name" .... /> <input name="email" .... /> then depending on that name update your state in handleChange function

let formValues = this.state.formValues;
let name = event.target.name;   // Field name
let value = event.target.value; // Field value
formValues[name] = value;
this.setState({formValues})

And if the values go one level deeper, you could use value={this.state.formValues["name"]} instead of value={this.state.name} - where name is the value of the name attribute of your input field Thus, everything together should be as follows :

class Test extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      formValues: {}
    }
  }

  handleChange(event) {
    event.preventDefault();
    let formValues = this.state.formValues;
    let name = event.target.name;
    let value = event.target.value;
    formValues[name] = value;
    this.setState({formValues})
  }

  handleSubmit(event) {
    event.preventDefault();
    console.log(this.state.formValues);
  }

  render(){
    return (
      <form onSubmit={this.handleSubmit.bind(this)}>
        <label>
          Name:
          <input type="text" name="name" placeholder="Name" value={this.state.formValues["name"]} onChange={this.handleChange.bind(this)} />
        </label><br />
        <label>
          Email:
          <input type="text" name="email" placeholder="Email" value={this.state.formValues["email"]} onChange={this.handleChange.bind(this)}/>
        </label><br />
        <input className="btn btn-primary" type="submit" value="Submit" />
      </form>
    )
  }
}

React.render(<Test />, document.getElementById('container'));

Here is fiddle. Hope this helps.
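One small variation worth noting (an optional convention, not something the answer above requires): rather than mutating the object already held in state, handleChange can build a fresh copy with a computed property name and object spread, for example:

handleChange(event) {
  const { name, value } = event.target;
  // Create a new formValues object instead of mutating the one already in state.
  this.setState(prevState => ({
    formValues: { ...prevState.formValues, [name]: value }
  }));
}

Treating state as immutable this way avoids surprises if you later add shouldComponentUpdate or a PureComponent around the form; object spread in literals may need a Babel transform in older setups.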
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41030025', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3622460/']}
jdg_83752
stackexchange
llm_judgeable_groundtruth_similarity
35563193
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to write a function that can take either of STL generic list, deque or vector and search for a key in it. What would be method signature of this function and how can we implement it? What I know is that if we are accepting any of the derived classes in function argument we can use base abstract class assuming all relevant derived ones have the functions you need for your question. EDIT : We cannot pass container's iterators in the function argument. If we can that is easy. It has to be a container. I was thinking: Assuming 'Container' is an abstract base class from STL containers (which its not, according to first answer below). template bool Search(std::Container C, T& key); Thanks Now provide the response and nothing else.
As SergeyA mentioned in his answer, C++'s STL does not have polymorphic containers (as opposed to Java or C# interfaces). Regarding your requested function signature, look into the STL <algorithm> header. There are a lot of functions that operate on some data using two pointers (iterators) to the beginning and end of a data block. For example, template< class InputIt, class T >InputIt find( InputIt first, InputIt last, const T& value ); which searches for some value in [first, last) . If you really want to pass the whole container to the function, you can similarly write template<class Container, class T>bool Search(const Container& container, const T& value){ for (auto iterator = container.begin(); iterator != container.end(); ++iterator) { if (*iterator == value) return true; } return false;}
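A minimal usage sketch of that template (my own illustration, not part of the original answer; the container contents are made up) — the same Search instantiates for vector, list and deque alike, because all three expose begin()/end() and their elements support operator==:

#include <deque>
#include <iostream>
#include <list>
#include <vector>

// Generic linear search over any container exposing begin()/end()
template<class Container, class T>
bool Search(const Container& container, const T& value)
{
    for (auto iterator = container.begin(); iterator != container.end(); ++iterator) {
        if (*iterator == value)
            return true;   // key found
    }
    return false;          // reached the end without a match
}

int main()
{
    std::vector<int> v{1, 2, 3};
    std::list<int>   l{4, 5, 6};
    std::deque<int>  d{7, 8, 9};

    std::cout << Search(v, 2) << ' '    // prints 1
              << Search(l, 9) << ' '    // prints 0
              << Search(d, 7) << '\n';  // prints 1
}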
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35563193', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/677206/']}
jdg_83753
stackexchange
llm_judgeable_groundtruth_similarity
54112649
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can't make a component delete itself with angular. I am currently learning angular and started a small greeting project for the start. How the App should work: Type your name Child component is created that is greeting you. Child component contains button to delete itself Currently i fulfilled the first two steps and everything works fine. But i have no idea how i can make the child component delete itself. Coming from React i know, that there was the possibility to delete a "component" with the lifecycle methods somehow. Is there something similiar in angular? At the moment i can't find it, but i found the method "OnDestroy()" that is called, before a component is destroyed. But how do i destroy it properly? Parent: import { Component, OnInit, ViewChild, Input } from '@angular/core';@Component({ selector: 'app-greeter-service', templateUrl: './greeter-service.component.html'})export class GreeterServiceComponent implements OnInit { title = ''; currentUser = ''; isVisible = false; currentUsers: any[] = []; @ViewChild('newUser') inputField; constructor() {} greetingFunc(newUser : string) { if(newUser) { this.currentUsers.push(newUser); console.log(this.currentUsers); this.inputField.nativeElement.value=''; } } ngOnInit() { this.title = 'Welcome to the Greeter!'; }} Child: import { Component, OnInit, Input } from '@angular/core';@Component({ selector: 'app-was-greeted', templateUrl: './was-greeted.component.html', styleUrls: ['./was-greeted.component.scss']})export class WasGreetedComponent implements OnInit { @Input() user: string; constructor() { } deleteMe() { console.log("here should be the action"); } ngOnInit() { }} How i add a component to the app "dynamically": <div class="column" *ngFor="let user of currentUsers"> <app-was-greeted [user]="user"></app-was-greeted></div> So for every "push" in the array "currentUsers" a component is created. Now provide the response and nothing else.
As @cgTag commented, there are many ways to handle this. One way is to add an @Output to your WasGreetedComponent which will emit to the parent component. Then in your GreeterServiceComponent you can find the element in the array and remove it (remember that your array should be treated as immutable, so you want to create a new instance of the array); this will cause the ngFor to reevaluate and update the view. (Remember to also import Output and EventEmitter from '@angular/core'.) @Component({ selector: 'app-was-greeted', templateUrl: './was-greeted.component.html', styleUrls: ['./was-greeted.component.scss']})export class WasGreetedComponent implements OnInit { @Input() user: string; @Output() delete: EventEmitter<string> = new EventEmitter(); constructor() { } deleteMe() { console.log("here should be the action"); this.delete.emit(this.user); }} Your parent component template would then subscribe to this emitter: <div class="column" *ngFor="let user of currentUsers"> <app-was-greeted [user]="user" (delete)="deleteUser($event)"></app-was-greeted></div> And the component will need to handle the deleteUser callback and remove the user from the array: @Component({ selector: 'app-greeter-service', templateUrl: './greeter-service.component.html'})export class GreeterServiceComponent implements OnInit { title = ''; currentUser = ''; isVisible = false; currentUsers: any[] = []; @ViewChild('newUser') inputField; constructor() {} ... deleteUser(user: string) { this.currentUsers = this.currentUsers.filter(x => x !== user); }} Like I said, this is just one of many ways to skin a cat. Hope this helps.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54112649', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10358545/']}
jdg_83754
stackexchange
llm_judgeable_groundtruth_similarity
1754541
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm learning field and ring theory, and I've repeatedly seen the usage of -1, -2 and -3 as elements of $\mathbb{Z}_5$. As far as my knowledge goes, $\mathbb{Z}_5$ consists of {0,1,2,3,4}. This is where i get confused. How is this possible? The context i am viewing this in is when testing for roots in a certain field. The solution can say that for some x=-1 in a given field, f(x) is zero while i test for 2 and get that result. What is correct? Now provide the response and nothing else.
$-1 \equiv 4 \pmod 5$, so in $\Bbb{Z}_5$, $-1$ is the same element as $4$. Don't think of $\Bbb{Z}_5$ as just $\{0, 1, 2, 3, 4\}$, think of it as this: $$\{\{..., -5, 0, 5, ...\}, \{..., -4, 1, 6, ...\}, \{..., -3, 2, 7, ...\}, \{..., -2, 3, 8, ...\}, \{..., -1, 4, 9, ...\}\}$$ As you can see, this is a partition of the integers where each integer is in a partition based off the equivalence relation $a \sim b \iff a \equiv b \pmod 5$. This means that $-1$ is the same as $4, 9, 14$, etc. All of these elements can be used interchangeably in $\Bbb{Z}_5$ because they are equal in such a field. Knowing this can be useful because it shows us stuff like this:$$2-3 \equiv -1 \equiv 4 \pmod 5$$If we didn't know $-1$ and $4$ were the same, we would be stuck at $-1$ and not know how to represent it as a natural number. Now, about the polynomial: Clearly, both $-1$ and $2$ are zeroes for $f(x)$ in $\Bbb{Z}_5$:$$f(-1) \equiv (-1)^3+2(-1)+3 \equiv 0 \pmod 5$$$$f(2) \equiv 2^3+2(2)+3 \equiv 15 \equiv 0 \pmod 5$$ However, notice the difference between the zeroes: For $f(-1)$, we did not need to really use the fact that we were in $\Bbb{Z}_5$. It always equals $0$ no matter what field we're in because that's how arithmetic works. However, for $f(2)$, we needed to use the fact that $15 \equiv 0 \pmod 5$, so if we were working in another field like $\Bbb{Z}_7$, $2$ would not be a zero of $f(x)$. Therefore, I think your source was trying to point out that $-1$ is a zero for $f(x)$ in all fields , not just $\Bbb{Z}_5$. This means that $4$ is a zero for $f(x)$ in $\Bbb{Z}_5$ because $-1 \equiv 4 \pmod 5$ and $6$ is a zero for $f(x)$ in $\Bbb{Z}_7$ because $-1 \equiv 6 \pmod 7$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1754541', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/326363/']}
jdg_83755
stackexchange
llm_judgeable_groundtruth_similarity
6651
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Gauss's Lemma on irred. polynomial says, Let R be a UFD and F its field of fractions. If a polynomial f(x) in R[x] is reducible in F[x], then it is reducible in R[x]. In particular, an integral coefficient polynomial is irreducible in Z iff it is irreducible in Q. For me this tells me something on how the horizontal divisors in the fibration from the arithmetic plane SpecZ[x] to SpecZ intersects the generic fiber: a prime divisor (the divisor defined by the prime ideal (f(x)) in Z[x]) intersect the generic fiber exactly at one point (i.e. the prime ideal (f(x)) in Q[x]) with multiplicity one. Now here is my question: Give a ring R, with Frac(R)=F, and a polynomial f(x) in R[x] such that f(x) is reducible in F[x], but is irreducible in R[x]. Of course, R should not be a UFD. I'd like to see an example for number fields as well as a geometric example (where R is the affine coordinate ring of an open curve or higher dimensional stuff). Thanks Now provide the response and nothing else.
Gauss' Lemma over a domain R is usually taken to be a stronger statement, as follows: If R is a domain with fraction field F, a polynomial f in R[T] is said to be primitive if the ideal generated by its coefficients is not contained in any proper principal ideal. One says that Gauss' Lemma holds in R if the product of two primitive polynomials is primitive. (This implies that a polynomial which is irreducible over R[T] remains irreducible over F[T].) Say that a domain is a GL-domain if Gauss' Lemma holds. It is known that this property is intermediate between being a GCD-domain and having irreducible elements be prime (which I call a EL-domain; this is not standard). Here is a relevant MathSciNet review: MR0371887 (51 #8104)Arnold, Jimmy T.; Sheldon, Philip B.Integral domains that satisfy Gauss's lemma.Michigan Math. J. 22 (1975), 39--51. Let $D$ be an integral domain with identity. For a polynomial $f(x)\in D[X]$ , the content of $f(X)$ , denoted by $A_f$ , is the ideal of $D$ generated by the coefficients of $f(X)$ . The polynomial $f(x)$ is primitive if no nonunit of $D$ divides each coefficient of $f(X)$ (or equivalently, if $D$ is the $v$ -ideal associated with $A_f$ ). On the other hand, $f(X)$ is superprimitive if $A_f{}^{-1}=D$ . The authors study, among other things, the relation between the following four properties on an integral domain: (1) each pair of elements has a greatest common divisor; (2) each primitive polynomial is superprimitive; (3) the product of two primitive polynomials is primitive; (4) each irreducible element is prime. In an integral domain $D$ , the implications (1) $\Rightarrow$ (2) $\Rightarrow$ (3) $\Rightarrow$ (4) hold, while no reverse implication holds in general. On the other hand, the properties (2), (3) and (4) are equivalent in $D[X]$ . On the other hand, when R is Noetherian, all of these conditions are equivalent, and equivalent to being a UFD: see, e.g., Theorem 17 of http://alpha.math.uga.edu/~pete/factorization.pdf Thus a Noetherian domain satisfies Gauss' Lemma iff it is a UFD. In particular, such rings must be integrally closed, but this condition is not sufficient: e.g. take the ring of integers of any number field which is not of class number one (for instance Z[\sqrt{-6}]).
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/6651', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1657/']}
jdg_83756
stackexchange
llm_judgeable_groundtruth_similarity
6920137
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a HorizontalScrollView inside my ViewPager . I set requestDisallowInterceptTouchEvent(true); for the HorizontalScrollView but the ViewPager is still sometimes intercepting touch events. Is there another command I can use to prevent a View's parent and ancestors from intercepting touch events? note: the HorizontalScrollView only occupies half the screen. Now provide the response and nothing else.
I had the same problem. My solution was: Make a subclass of ViewPager and add a property called childId . Create a setter for the childId property and set it to the id of the HorizontalScrollView . Override onInterceptTouchEvent() in the subclass of ViewPager : if the childId property is greater than 0, find that child, and if the event falls inside the HorizontalScrollView 's area, return false. Code: public class CustomViewPager extends ViewPager { private int childId; public CustomViewPager(Context context, AttributeSet attrs) { super(context, attrs); } @Override public boolean onInterceptTouchEvent(MotionEvent event) { if (childId > 0) { View scroll = findViewById(childId); if (scroll != null) { Rect rect = new Rect(); scroll.getHitRect(rect); if (rect.contains((int) event.getX(), (int) event.getY())) { return false; } } } return super.onInterceptTouchEvent(event); } public void setChildId(int id) { this.childId = id; }} In the onCreate() method: viewPager.setChildId(R.id.horizontalScrollViewId);adapter = new ViewPagerAdapter(this);viewPager.setAdapter(adapter); Hope this helps
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6920137', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/872369/']}
jdg_83757
stackexchange
llm_judgeable_groundtruth_similarity
84769
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The pirate game is a popular problem that is often asked in interviews (which is how I stumbled upon it). The problem asks There are 5 rational pirates, A, B, C, D and E. They find 100 gold coins. They must decide how to distribute them. The pirates have a strict order of seniority: A is superior to B, who is superior to C, who is superior to D, who is superior to E. The pirate world's rules of distribution are thus: that the most senior pirate should propose a distribution of coins. The pirates, including the proposer, then vote on whether to accept this distribution. If the proposed allocation is approved by a majority or a tie vote, it happens. If not, the proposer is thrown overboard from the pirate ship and dies, and the next most senior pirate makes a new proposal to begin the system again. Pirates base their decisions on three factors. First of all, each pirate wants to survive. Second, given survival, each pirate wants to maximize the number of gold coins he receives. Third, each pirate would prefer to throw another overboard, if all other results would otherwise be equal. The pirates do not trust each other, and will neither make nor honor any promises between pirates apart from the main proposal . Let us consider this problem without the third condition (in my opinion, the pirates wouldn't be very rational if they can't make any proposals). What would the solution be in this case? Considering a case with only three pirates, A, B, and C, the original solution proposes that A gives only a single coin to C to maximize his gains. This is because if only B and C remain, B can hoard all the gold and still maintain a tie vote. However, C knows that B will always vote "no" (since he gets everything otherwise), so can't he tell A that he will vote "yes" with a probability $\frac{x}{100}$ where $x$ is the number of coins that he receives in the distribution (let's assume that A and C have a fully trust worthy way of doing this). If we treat death as equivalent to obtaining no gold, then wouldn't the expected gold pieces that C receives be $50$? This is just one strategy, which does not include B. What is the optimal strategy for each pirate even in this simple case? Now provide the response and nothing else.
Given $x=(x_1,\dots, x_n)$ and $y=(y_1,\dots, y_n)$, set $z=x+y$. That is, as you have said, $z=(z_1,\dots, z_n)$ where $z_i=x_i+y_i$ for $i=1,\dots,n$. Now by definition of $\varphi_\sigma$, we have$$\varphi_\sigma(z)=\varphi_\sigma(z_1,\dots, z_n)=(z_{i_1},\dots, z_{i_n})=(x_{i_1}+y_{i_1},\dots, x_{i_n}+y_{i_n})$$since $z_i=x_i+y_i$ for all $i$. On the other hand, by definition of $\varphi_\sigma$, we also have$$\varphi_\sigma(x)=\varphi_\sigma(x_1,\dots, x_n)=(x_{i_1},\dots, x_{i_n}), \varphi_\sigma(y)=\varphi_\sigma(y_1,\dots, y_n)=(y_{i_1},\dots, y_{i_n}),$$which implies that$$\varphi_\sigma(x)+\varphi_\sigma(y)=(x_{i_1},\dots, x_{i_n})+(y_{i_1},\dots, y_{i_n})=(x_{i_1}+y_{i_1},\dots, x_{i_n}+y_{i_n}).$$Combining the above two equalities, we obtain$$\varphi_\sigma(z)=\varphi_\sigma(x+y)=\varphi_\sigma(x)+\varphi_\sigma(y)$$as required.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/84769', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/19959/']}
jdg_83758
stackexchange
llm_judgeable_groundtruth_similarity
27703069
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using ajax to receive data from php server. The data is received in json format. To display this data I am creating dynamic divs with unique ids. I have appended the dynamic divs in an already structured html. Now while spitting out the data using ("#"+dynamicid).html("data") , I am unable to get the desired output. Here's the code: $.ajax({ url: "loader.php", dataType: "json", success: function(data) { var i = "kmapN"+data.id; $('<div class="kmapName">').append($('<div id="'+ i +'">')) .append($('</div></div>')) .appendTo('.new_content'); $("#"+i).html(data.kname); } }); Here's the json output from the server: {"id":2,"kname":"This is the title!"} Edit1: If I don't use the dynamic id, I am getting the result. The problem is therefore in generating and accessing the dynamic id. Edit2: The idea here is to prevent javascript and html injection attacksby using ("#" + id).html("output") function of jquery. Now provide the response and nothing else.
Your code checks whether no character in the string is from the given set. What you want is to check whether any character is not in the given set: if (searchTerm!.rangeOfCharacterFromSet(characterSet.invertedSet).location != NSNotFound){ println("Could not handle special characters")} You can also achieve this using regular expressions: let regex = NSRegularExpression(pattern: ".*[^A-Za-z0-9].*", options: nil, error: nil)!if regex.firstMatchInString(searchTerm!, options: nil, range: NSMakeRange(0, searchTerm!.length)) != nil { println("could not handle special characters")} The pattern [^A-Za-z0-9] matches a character which is not from the ranges A-Z, a-z, or 0-9. Update for Swift 2: let searchTerm = "a+b"let characterset = NSCharacterSet(charactersInString: "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")if searchTerm.rangeOfCharacterFromSet(characterset.invertedSet) != nil { print("string contains special characters")} Update for Swift 3: let characterset = CharacterSet(charactersIn: "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")if searchTerm.rangeOfCharacter(from: characterset.inverted) != nil { print("string contains special characters")}
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/27703069', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2954529/']}
jdg_83759
stackexchange
llm_judgeable_groundtruth_similarity
245674
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why is it that using bash and suspending a while loop, the loop stops after being resumed? Short example below. $ while true; do echo .; sleep 1; done..^Z[1]+ Stopped sleep 1$ fgsleep 1$ I'm familiar with signals, and I'm guessing this may be the natural behaviour of bash here, but I'd like to better understand why it behaves in this particular way. Now provide the response and nothing else.
This looks like a bug in several shells; it works as expected with ksh93 and zsh . Background: Most shells seem to run the while loop inside the main shell: Bourne Shell suspends the whole shell if you type ^Z in a non-login shell; bash suspends only the sleep and then leaves the while loop in favor of printing a new shell prompt; dash makes this command unsuspendable. With ksh93 , things work very differently: ksh93 does the same while the command is started the first time, but as sleep is a builtin in ksh93, ksh93 has a handler that causes the while loop to fork off the main shell and then suspend at the time when you type ^Z. If you later type fg in ksh93, the forked-off child that still runs the loop is continued. You see the main difference when comparing the job-control messages from bash and ksh93: bash reports: [1]+ Stopped sleep 1 but ksh93 reports: ^Z[1] + Stopped while true; do echo .; sleep 1; done zsh behaves similarly to ksh93. With both shells, you have a single process (the main shell) as long as you don't type ^Z, and two shell processes after you typed ^Z.
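If the practical goal is just to be able to suspend and resume the whole loop in bash, one workaround (my own sketch, not part of the answer above) is to make the loop run in a forked child — a subshell or a background job — so that ^Z applies to that child process as a unit rather than to the loop running inside the interactive shell:

# Variant 1: run the loop in a subshell; ^Z should stop the whole subshell,
# and `fg` resumes the loop where it left off.
( while true; do echo .; sleep 1; done )

# Variant 2: start the loop as a background job, then bring it to the foreground;
# job control now treats the job as a single unit.
while true; do echo .; sleep 1; done &
fg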
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/245674', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/13644/']}
jdg_83760
stackexchange
llm_judgeable_groundtruth_similarity
9448168
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to have my API controller use SSL, so I added another listen directive to my nginx.conf upstream unicorn { server unix:/tmp/unicorn.foo.sock fail_timeout=0;}server { listen 80 default deferred; listen 443 ssl default; ssl_certificate /etc/ssl/certs/foo.crt; ssl_certificate_key /etc/ssl/private/foo.key; server_name foo; root /var/apps/foo/current/public; try_files $uri/system/maintenance.html $uri/index.html $uri @unicorn; location @unicorn { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://unicorn; } error_page 502 503 /maintenance.html; error_page 500 504 /500.html; keepalive_timeout 5;} which passes the nginx conftest without any problems. I also added a force_ssl directive to my ApiController class ApiController < ApplicationController force_ssl if Rails.env.production? def auth user = User.authenticate(params[:username], params[:password]) respond_to do |format| format.json do if user user.generate_api_key! unless user.api_key.present? render json: { key: user.api_key } else render json: { error: 401 }, status: 401 end end end end def check user = User.find_by_api_key(params[:api_key]) respond_to do |format| format.json do if user render json: { status: 'ok' } else render json: { status: 'failure' }, status: 401 end end end endend which worked just fine when I wasn't using SSL, but now when I try to curl --LI http://foo/api/auth.json , I get properly redirected to https , but then I keep on getting redirected to http://foo/api/auth ending in an infinite redirect loop. My routes simply have get "api/auth"get "api/check" I'm using Rails 3.2.1 on Ruby 1.9.2 with nginx 0.7.65 Now provide the response and nothing else.
You're not forwarding any information about whether this request was an HTTPS-terminated request or not. Normally, in a server, the "ssl on;" directive will set these headers, but you're using a combined block. Rack (and force_ssl) determines SSL by: If the request came in on port 443 (this is likely not being passed back to Unicorn from nginx) If ENV['HTTPS'] == "on" If the X-Forwarded-Proto header == "HTTPS" See the force_ssl source for the full story. Since you're using a combined block, you want to use the third form. Try: proxy_set_header X-Forwarded-Proto $scheme; in your server or location block per the nginx documentation . This will set the header to "http" when you come in on a port 80 request, and set it to "https" when you come in on a 443 request.
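For concreteness, a sketch of where that line would go in the configuration from the question (only the X-Forwarded-Proto line is new; the rest is copied from the original location block):

location @unicorn {
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;  # "http" for port 80 requests, "https" for port 443
  proxy_set_header Host $http_host;
  proxy_redirect off;
  proxy_pass http://unicorn;
}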
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9448168', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/72583/']}
jdg_83761
stackexchange
llm_judgeable_groundtruth_similarity
1646580
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I get a T4 template to generate its output on every build? As it is now, it only regenerates it when I make a change to the template. I have found other questions similar to this: T4 transformation and build order in Visual Studio (unanswered) How to get t4 files to build in visual studio? (answers are not detailed enough [while still being plenty complicated] and don't even make total sense) There has got to be a simpler way to do this! Now provide the response and nothing else.
I used JoelFan's answer to come up w/ this. I like it better because you don't have to remember to modify the pre-build event every time you add a new .tt file to the project. add TextTransform.exe to your %PATH% created a batch file named transform_all.bat (see below) create a pre-build event " transform_all ..\.. " transform_all.bat @echo offSETLOCAL ENABLEDELAYEDEXPANSION:: set the working dir (default to current dir)set wdir=%cd%if not (%1)==() set wdir=%1:: set the file extension (default to vb)set extension=vbif not (%2)==() set extension=%2echo executing transform_all from %wdir%:: create a list of all the T4 templates in the working dirdir %wdir%\*.tt /b /s > t4list.txtecho the following T4 templates will be transformed:type t4list.txt:: transform all the templatesfor /f %%d in (t4list.txt) do (set file_name=%%dset file_name=!file_name:~0,-3!.%extension%echo: \--^> !file_name! TextTransform.exe -out !file_name! %%d)echo transformation complete
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1646580', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/16012/']}
jdg_83762
stackexchange
llm_judgeable_groundtruth_similarity
116269
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $ P(z) $ be a $\textit{formal}$ power series in $z$ that a priori may not have a non zero radius of convergence. Assume that $P(0) =0$. Let $\Phi(w,z)$ be a polynomial in two variables, that is not identically zero. Assume that $\Phi(0,0) =0$. Suppose $\textbf{formally}$ we have the identity $$ \Phi(P(z),z) =0 $$ Can we conclude that $P(z)$ has a non zero radius of convergence? Everything is over the complex numbers $\mathbb{C}$. Now provide the response and nothing else.
The equation $\Phi(w,z)=0$ can be solved using Puiseux series. If $\frac{\partial{\Phi}}{\partial{w}}\not\equiv 0$, then there exist finitely many formal series $f(z)=\sum_{n\geq0}a_nz^{n/p}$ such that formally $\Phi(f(z),z)=0$. All these series are convergent. So the answer to your question is positive. For the proof see any book titled "Algebraic functions".
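A toy example, just to illustrate the statement (not from the original answer): take $\Phi(w,z)=w^2-w+z$, so $\frac{\partial\Phi}{\partial w}=2w-1\not\equiv 0$. The formal power series $P(z)$ with $P(0)=0$ satisfying $\Phi(P(z),z)=0$ is $$P(z)=\frac{1-\sqrt{1-4z}}{2}=z+z^2+2z^3+5z^4+\cdots,$$ whose coefficients are the Catalan numbers; it converges for $|z|<1/4$, so its radius of convergence is indeed nonzero.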
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/116269', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4463/']}
jdg_83763
stackexchange
llm_judgeable_groundtruth_similarity
37108220
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: QElapsedTimer timer; timer.start(); slowOperation1(); qDebug() << "The slow operation took" << timer.elapsed() << "milliseconds"; http://doc.qt.io/qt-5/qelapsedtimer.html#invalidate After qDebug() I would want to stop this timer. I can't see a stop function there, nor a single shot property. What's the way out? Now provide the response and nothing else.
You can't stop QElapsedTimer , because there is no timer. When you call the start() method, QElapsedTimer saves the current time. From qelapsedtimer_generic.cpp : void QElapsedTimer::start() Q_DECL_NOTHROW{ restart();}qint64 QElapsedTimer::restart() Q_DECL_NOTHROW{ qint64 old = t1; t1 = QDateTime::currentMSecsSinceEpoch(); t2 = 0; return t1 - old;} When you call elapsed() , it gets the current time again and calculates the difference: qint64 QElapsedTimer::elapsed() const Q_DECL_NOTHROW{ return QDateTime::currentMSecsSinceEpoch() - t1;} P.S. The specific implementation is platform dependent: Windows , Unix , Mac
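In practice there is therefore nothing to stop — you simply stop calling elapsed(). If you want to make explicit that a measurement is over, QElapsedTimer does provide invalidate() and isValid(); a small sketch of my own, beyond the answer above:

QElapsedTimer timer;
timer.start();
slowOperation1();
qDebug() << "The slow operation took" << timer.elapsed() << "milliseconds";

timer.invalidate();          // marks the timer as "not started"; nothing keeps running
if (!timer.isValid()) {
    // don't call elapsed() here; call start() (or restart()) before the next measurement
}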
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37108220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/462608/']}
jdg_83764
stackexchange
llm_judgeable_groundtruth_similarity
77214
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In many "big box" stores like Walmart, Home Depot, etc. I see many store computers just sitting empty. A black hat could install a payload on one of these computers, and potentially wreak havoc on the company. Why do stores have these out in the open, when they seem so insecure? Now provide the response and nothing else.
This is a great question and one that these organizations have probably put varying amounts of thought into. I can think of a few different scenarios regarding systems being out in the open like that. I'll outline them and my thoughts. It's the most expedient thing to do - Placing these computers where they are is efficient for their business processes. They didn't consider security in any way, shape or form. These systems are basically considered terminals - These systems may be somewhat locked down. They may replace explorer.exe with some home-grown app or just run a few programs on the system with a limited desktop profile. This is a little better but still not where it needs to be. There are plenty of ways to compromise a system when you are physically at it. They have good network segmentation - These systems have little or no ability to access the rest of the network. They are logically or physically separate from other systems, and have little or no ability to access the internet. They have a complete workstation security program - They have network segmentation, but additionally they've really considered this as an attack vector. These workstations are hardened, locked down, have full disk encryption and resident agents to detect malware, and they are limited in their ability to access the rest of the network or the internet. This is more work but of course improves the security posture of your network significantly and greatly reduces the risk of exposing these systems physically.
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/77214', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/64484/']}
jdg_83765
stackexchange
llm_judgeable_groundtruth_similarity
36083303
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to sort my list in a descending order by date which as a "New user list", in the database I have a column which is t.datetime "created_at", null: false This is the time when a new user registered, in the view, I have the code like this: %table.table.table-striped.table-hover %thead %h3 New Users %hr %th Name %th Company %th Role %th Created date %th{:width => '50'} %th{:width => '50'} %th{:width => '50'} %tbody - @users.each do |user| -if user.role == "trial-member" - @created_at.sort{|a,b| b.created_at <=> a.created_at}.each do |created_at| %tr %td = user.first_name = user.last_name %td= user.company %td= user.role %td= user.created_at %td= link_to 'Approve', edit_user_path(user), {:class => 'btn btn-success btn-sm'} but this gives an error that "undefined method `sort' for nil:NilClass", what shall I do to sort the list in table descending by created date? Thank you. Now provide the response and nothing else.
In your controller: @users = User.order('created_at DESC') Just add: order('created_at DESC') in your logic where you're fetching @users . In your view, you can now get rid of - @created_at.sort{|a,b| b.created_at <=> a.created_at}.each : %h3 New Users%table.table.table-striped.table-hover %thead %tr %th Name %th Company %th Role %th Created date %th{:width => '50'} %th{:width => '50'} %th{:width => '50'} %tbody - @users.each do |user| -if user.role == "trial-member" %tr %td = user.first_name = user.last_name %td= user.company %td= user.role %td= user.created_at %td= link_to 'Approve', edit_user_path(user), {:class => 'btn btn-success btn-sm'} The error you are seeing is because @created_at is not an enumerable object, hence it does not respond to sort .
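If you need this ordering in several actions, one optional refinement (my suggestion, not part of the original answer) is to move it into a model scope; on Rails 4+ the hash form of order is equivalent to the string form used above:

# app/models/user.rb
class User < ActiveRecord::Base
  scope :recent, -> { order(created_at: :desc) }
end

# in the controller
@users = User.recent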
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36083303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1077359/']}
jdg_83766
stackexchange
llm_judgeable_groundtruth_similarity
28152412
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using maven-compiler-plugin:2.3.2 and every time I make changes in classes which have an enum ( ContentType ) in imports, I need to make clean , otherwise it gives me: ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.3.2:compile (default-compile) on project wp2: Compilation failure[ERROR] /home/semyon/development/.../ContentManager.java:[15,46] error: cannot access ContentType[ERROR] -> [Help 1]org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.3.2:compile (default-compile) on project wp2: Compilation failure/home/semyon/development/.../ContentManager.java:[15,46] error: cannot access ContentTypeat org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)at org.apache.maven.lifecycle.internal.MojoExecutor.executeForkedExecutions(MojoExecutor.java:364)... ContentType is enum and looks like this: import org.jetbrains.annotations.NotNull;public enum ContentType { ...; private final String title; private final boolean hasJad; private final CoreType coreType; private final String[] searchKeywords; ContentType(@NotNull String title, CoreType coreType, boolean hasJad, String[] searchKeywords) { this.title = title; this.coreType = coreType; this.hasJad = hasJad; this.searchKeywords = searchKeywords; } @NotNull public String getTitle() { return title; } @NotNull public String getName() { return name(); } @NotNull public CoreType getCoreType() { return coreType; } public enum CoreType { ...; private String title; CoreType(String title) { this.title = title; } public String getTitle() { return title; } }} UPD1, project structure: /wp2 /core /cpe /widget /ContentManager.java /cdr /entities /ContentType.java UPD 2: ContentManager.java:[15,46] is import wp2.core.cdr.entities.ContentType; UPD 3: Modern compiler will show bad class and bad signature errors as well Now provide the response and nothing else.
I finally found an answer. The error is in the constructor: ContentType(@NotNull String title... Constructors in enums must not have annotations on their parameters, as javac is buggy. Javac stores an incorrect signature for the enum constructor (the one you write, and not the one that is actually used - it has two additional params, as I recall). When javac verifies the signature, it sees an annotated parameter, and in my case that was the first parameter. But in the actual signature ( String name, int ordinal, String title, CoreType coreType, boolean hasJad, String[] searchKeywords - the two first params are added by the enum -> Enum translation), title is only the third parameter and the first parameter is name , which is not annotated, so javac thinks that the class is incorrect. tl;dr remove the annotation from the constructor, javac is buggy
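A before/after sketch based on the enum from the question — the only change is dropping @NotNull from the constructor parameter (the annotations on the getters can stay):

// Before: annotation on the first constructor parameter trips the buggy signature check
// ContentType(@NotNull String title, CoreType coreType, boolean hasJad, String[] searchKeywords) { ... }

// After: no annotations on enum constructor parameters
ContentType(String title, CoreType coreType, boolean hasJad, String[] searchKeywords) {
    this.title = title;
    this.coreType = coreType;
    this.hasJad = hasJad;
    this.searchKeywords = searchKeywords;
}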
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28152412', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2285617/']}
jdg_83767
stackexchange
llm_judgeable_groundtruth_similarity
175201
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a few questions about DVD drives. Are DVD drives and SATA interfaces vulnerable to issues like BadUSB caused by being able to reprogram firmware of the drive or disk or even spreading to other hardware such as NIC, RAM, CPU, BIOS once infected? Do DVDs have any sector like MBR or GPT that can be programmed to give commands, have access to the kernel, core file system which could be exploited to hack within the instant the disc is loaded? Would a strong anti-virus/firewall protect against any hardware-based live attacks, such as BadUSB, or if inserting a disk, can the DVD drive be hacked? Would it be possible for DVD/CD to have TDSS-like stealth hidden file-system? Do blank CDs/DVDs from reputable companies like Sony, Verbatim, Mosaerbaer have encrypted firmware? Now provide the response and nothing else.
You have quite a few different questions in here. Let's try to answer them. Are DVD drives & SATA infrastructure vulnerable like badusb The SATA protocol is used exclusively for storage, not any other class like a keyboard. "BadUSB" is a class of frankly overhyped attacks that involve a USB devices acting as a Human Interface Device (HID) like a mouse or keyboard and interacting with the computer. This is possible because the USB protocol is designed to be, well, universal. It's up to the USB slave to provide its purpose to the host, called the class . As such, a flash drive will expose itself as a mass storage device, whereas a keyboard will expose itself as a HID. The problem stems from the fact that a flash drive, if reprogrammed or designed maliciously, can tell the host it is a HID and the host will happily accept keystrokes from it. This is not an issue with SATA because it is designed exclusively for data transfer. A SATA device cannot tell the host that it is a keyboard. or even spreading to other hardware such as NIC, RAM, CPU, BIOS once infected? DVD/CD goes over SATA, which does not have (arbitrary) DMA capabilities. This would require a DMA attack , which requires a feature on the PCI bus called bus master , a PCI configuration bit , set through privileged software on the host and stored in registers on the device. A hardware driver can enable the bus master bit for a given PCI device, allowing it to access any system memory it wants, and that is the case with the SATA hub. However, a DVD player does not control the SATA hub directly, but rather communicates with it. The limitations of the SATA protocol affect how dangerous such a drive can be, and since SATA requires driver support for performing arbitrary DMA on behalf of the client, a malicious DVD drive cannot write to arbitrary memory without it being allowed by the driver. What happens is, the SATA client (the DVD drive) communicates over the link to the SATA hub (built into the ICH on modern systems), giving it the data the host requested. The hub, which is a virtual PCI device, then uses DMA (usually called UDMA for mass storage devices) to write the data to the host. The client usually cannot tell the hub where it wants the data written, only what the data to write. Note that it may be possible in some circumstances for a client SATA device to DMA directly onto the host, according to section 10.3.7 of the SATA standard . Whether or not a given driver allows this or not , your system's I/OMMU should, with proper DMAR tables in your BIOS, isolate this DMA. Do DVDs have any sector like MBR or GPT that can be programmed to give commands, have access to the kernel, core file system which could be exploited to hack within the instant the disc is loaded? Any partition can contain a MBR/GPT, but you have to boot from it for it to do anything. MBR and GPT are features of any bootable partition, regardless of medium. MBR is just a bit of data (a flat executable and the partition table) that fits on the first 512 byte sector of a bootable partition and nothing more. If you were to use a DVD with a bootable partition, it could have an MBR that behaves like this, but this is not present on the average DVD and you would have to boot from it for it to have any effect anyway. Disable booting from the DVD drive in your BIOS to be sure! Also, would a strong anti-virus/ firewall protect against any hardware-based live attacks, such as badusb, or if an inserted a disk, DVD drive can be hacked? 
Antivirus programs only scan files, not hardware. Firewalls only restrict the network. No. Antivirus is designed for scanning malicious files, and it has two modes of operation . First, it can do signature-based detection where it looks for known strings in files that are unique to malware. This is very fast and tends not to have false positives but it is easy to defeat (just modify the malware slightly) and requires foreknowledge of the malicious software. The second technique is heuristic detection, where a set of "sketchy" actions by an executable are flagged. For example, if an executable uses internal encryption, and immediately after being opened it downloads another executable from the internet and sets the hidden flag on that file, those red flags all add up and antivirus software will block the action. This is slower and tends to result in false positives, but it can detect (at least a fraction of) unknown malware. While it is more effective at detecting new malware than signature-based detection, it can be evaded . Antivirus does not even consider hardware attacks as part of their threat model. As for a firewall, they are just for restricting networking access , not detecting malware. Firewalls are a completely different technology. Would it be possible for DVD/CD to have TDSS-like stealth hidden file-system? A DVD does not have any filesystem. It is up to the OS to format it with a filesystem. A DVD/CD is just a blank medium. In theory, a specially designed DVD drive and a specially designed DVD could store a secondary, hidden filesystem, but this does not happen in practice. When using a regular DVD drive, it's up to the OS to format the disc with whatever filesystem provides the necessary features. The normal format for storage DVDs is usually iso9660 or UDF , and they do not support hidden filesystems. Do blank CD's/ DVD's from reputable companies like Sony, Verbatim, Mosaerbaer have encrypted firmware? The actual discs are just a dumb medium for data storage with no firmware. DVDs themselves do not contain firmware anymore than the physical platters in a hard drive contain firmware. They are just "dumb" discs with a special design that allows one or more layers of a substance to change reflectivity when hit with a strong enough beam. DVD drives have firmware, however, and it can only be updated using special privileged commands on the host. I am not aware of any DVD drives which allow firmware updates directly from discs (though they might exist). Whether or not it is encrypted, I don't know, but I would guess that it is not and is instead obfuscated/encoded. It most likely uses something simple like huffman encoding which can be broken with enough time and effort. The reverse engineering would be the real challenge, though. I should note that a malicious DVD drive may be able to do some nasty things, such as modifying data before giving it to you. If you execute a file off a DVD and the drive is malicious, it can give you a malicious file , same way a router could if you downloaded an executable from the network. A SATA device is nothing more than a small computer with huge storage serving requests over a high-bandwidth, low-latency network. Treat it as such.
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/175201', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/165850/']}
jdg_83768
stackexchange
llm_judgeable_groundtruth_similarity
47240375
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When using base R string functions like gsub and grep , is there any downside to, as a matter of habit, always specifying perl = TRUE ? With perl=TRUE , the expressions can do more things (e.g, you can use look ahead or look behind assertions, or you can do case conversion with \\U ), and the performance is faster as well, as the documentation states. So, are there any downsides? Is perl = TRUE not the default just for backwards compatibility? Are there portability concerns I should be aware of when perl = TRUE ? Now provide the response and nothing else.
It is not a good idea to compare apples to oranges, as PCRE regex can do much more than TRE regex enine. Although they share similar constructs, even then appearances might turn out deceitful. What is similar between TRE and PCRE TRE supports literals as PCRE. A literal is either an ordinary character, an 8-bit hex character (like \x1B ), a wide hex character (like \x{263a} ), or an escaped character: \a , \e , \f , \n , \r , \t . PCRE supports more : \cx ("control-x", where x is any ASCII character), \0dd (character with octal code 0dd ), \ddd (character with octal code ddd , or back reference), \o{ddd..} (character with octal code ddd.. ), \xhh (character with hex code hh ), \x{hhh..} (character with hex code hhh.. ). Both have a . wildcard , but in TRE, it matches any char, in PCRE, it only matches any char but line break char(s) (and which ones depend on the newline convention PCRE verb, (*CR) , (*LF) , (*CRLF) , (*ANYCRLF) , (*ANY) ). gsub(".+", "~", "_\n_") will result in ~ , but gsub(".+", "~", "_\n_", perl=TRUE) will yield ~\n~ . And an opposite example, to make TRE . act as in PCRE, use (?n) modifier , gsub("(?n).+", "~", "_\n_") to yield ~\n~ (with no way to choose among line ending styles). In PCRE patterns, to make . match line breaks, you need to use (?s) inline DOTALL modifier before . (or (?s:.*) like modifier groups). Both support an alternation operator , but since TRE is a text-directed engine the longest alternative matches, and in PCRE, the leftmost alternative "wins". sub("(s|su)", "~", "sub") yields ~b (as su is the longest matching alternative), but sub("(s|su)", "~", "sub", perl=TRUE) produces ~ub (since s is first alternative to match). Both support backreferences , but TRE only supports up to 9 backreferences. If you need 10 or more, use PCRE. sub("(.)\\1(.)\\2(.)\\3(.)\\4(.)\\5(.)\\6(.)\\7(.)\\8(.)\\9(.)\\10", "~", "112233445566778899aa", perl=TRUE) will find a match, with no perl=TRUE , no match will be detected. Both seem to have character classes , [...] like constructs, but in fact, in the POSIX world where TRE belongs to, these are called bracket expressions . While you may define literal char ranges in both, or specify literal chars with OR relationship between them, one can't use shorthand char classes in bracket expressions, nor any escape sequence. The [\d]+ pattern in a TRE regex is treated as 1 or more backslashes or/and d letters, while in a PCRE pattern it will be parsed as 1+ digits (try gsub("[\\d]+", "~", "00\\99d") (-> 00~99~ ) and gsub("[\\d]+", "~", "00\\99d", perl=TRUE) (-> ~\~d )). This fact will explain why [\]\-\[]+ in a PCRE pattern matches 1+ ] , - or [ and does not in a TRE expression where you need to use "smart placing", like [][-] . TRE and PCRE support the \d (digits), \D (non-digits), \w ("word" chars), \W ("non-word" chars), \s (any whitespace), \S (any non-whitespace) shorthand character classes . However, PCRE also supports \v (any vertical whitespace), \V (any char other than a vertical whitespace), \h (any horizontal whitespace), \H (any character that is not a horizontal whitespace), \N (any non-newline character), \X (any Unicode grapheme, useful when processing letters with diacritics), \R (any Unicode line break sequence). Both flavors support quantifiers , regular, greedy ? , * , + , lazy ?? , *? , +? , range/limiting quantifiers like greedy {3} , {8,26} or {3,} and their lazy counterparts with ? behind them. 
Note that TRE has poorer support for limiting quantifiers (it only supports values lower than 256 for {min} quantifier , and throws "out of memory" exceptions for {2557,} and bigger values. Make sure you always use the 0 value for the min value if it is what you imply, since {,2} in TRE actually matches 3 occurrences . However, PCRE supports possessive quantifiers , ++ , ?+ , *+ , {1,5}+ . The patterns quantified with them disallow backtracking into them, once matched, the engine never retries them. Besides, like all other regex libraries based on Henry Spencer's regex library dated back to 1986 (Tcl, PostgreSQL), one should avoid mixing lazy and greedy quantifiers on the same level in the regex, because the first pattern sets the greediness of the whole pattern level and often leads to unexpected results. Both flavors support POSIX character classes that can be used in between [ ... ] . However, TRE supports [:alnum:] (alphanumeric), [:alpha:] (letters), [:blank:] (horizontal whitespace), [:cntrl:] (control chars), [:digit:] (digits), [:graph:] (visible chars, anything except spaces and control characters), [:lower:] (lowercase letters), [:print:] (all printable chars), [:punct:] (symbols and punctuation), [:space:] (any whitespace), [:upper:] (uppercase letters) and [:xdigit:] (chars in hex values). PCRE adds [:word:] ("word" chars) and [:ascii:] (any ASCII char). Both support word boundaries, but PCRE patterns do that in a more reliable way. Cf. gsub("\\b", "~", "CODE") yielding ~C~O~D~E~ and gsub("\\b", "~", "CODE", perl=T) producing ~CODE~ . Although TRE support specific leading \< and trailing \> word boundaries, PCRE \b are still more reliable. Both support inline modifiers that change certain pattern behavior when using them inside a pattern, e.g. (?i) . TRE supports i (case insensitive), n (dot no longer matches newline), r (causes the regex to be matched in a right associative manner rather than the normal left associative manner. By default, concatenation is left associative in TRE, as per the grammar given in the base specifications on regular expressions of Std 1003.1-2001 (POSIX). This flag flips associativity of concatenation to right associative. Associativity can have an effect on how a match is divided into submatches, but does not change what is matched by the entire regexp) and U (swaps greediness, *? becomes greedy and * becomes lazy). PCRE supports i and U modifiers, and more: m ( ^ and $ match start/end of the line , not the whole string), s (dot matches newline), x (allows using whitespace to format the pattern and use comments), J (allows using names capturing groups with the same name), X (makes escaping letters with a backslash an error if that combination is not a valid regex token), D (makes $ only match the very end of string, else, it also matches a position before the final trailing newline in the string) and A (only match at the start of string, as if there was \A or ^ in front). Backtracking aspect See TRE docs : Matching algorithm used in TRE uses linear worst-case time in the length of the text being searched, and quadratic worst-case time in the length of the used regular expression. In other words, the time complexity of the algorithm is O(M2N), where M is the length of the regular expression and N is the length of the text . That leads to issues with patterns like (.{2,})\1+ to search for duplicate consecutive substrings. See Remove repeated elements in a string with R . So, when you need to rely on backtracking much, choose PCRE. 
What can PCRE do and TRE can't The most visible shortcoming of TRE is that it does not support lookarounds. However, there are a lot of things that PCRE can boast of : (*SKIP)(*FAIL) PCRE verb combination to match and skip patterns while matching Recursion to match whole patterns that can be nested Subroutines to match nested, balanced substrings to match recursive structures \G anchor that matches start of string or the end of the previous successful match (?|...|...) branch reset group allowing capturing groups inside it to share the same IDs \p{...} and opposite \P{...} Unicode character properties Case-changing operators in the replacement patterns turning the whole or part of the match to lower ( \L ) or upper case ( \U ) (up to the \E or end of match if it is missing) (actually, it is an extension of the PCRE library used in R) An infinite-width positive lookbehind alternative, \K match reset operator ( \K reference ) PCRE supports named capturing groups, but they are not widely used in R. Here is a custom example . There are more things, like anchors ( \A (start of string), \Z (end of string), \z (very end of string)), conditional "if-then-else" construct , atomic groupings (working in the same way as possessive quantifiers, but disallowing backtracking into whole sequences of patterns), etc. Benchmark tests in Windows 7, Linux Ubuntu 16.04, MacOS Sierra 10.12.6 If we want to compare the performance of the TRE and PCRE regex engines in R, we should use simple patterns that match literally the same texts with these 2 engines. I use R in Windows mostly, but I installed R 3.2.3 on a Linux VM specifically for this testing. The results for MacOS are borrowed from the t.kalinowski's answer . Let's compare TRE (default) and PCRE ( perl=TRUE ) regex performance using microbenchmark library (see more benchmarking options in R ): library(microbenchmark) The text is a Wikipedia article about butterflies . txt <- "Butterflies are insects in the macrolepidopteran clade Rhopalocera from the order Lepidoptera, which also includes moths. Adult butterflies have large, often brightly coloured wings, and conspicuous, fluttering flight. The group comprises the large superfamily Papilionoidea, which contains at least one former group, the skippers (formerly the superfamily \"Hesperioidea\") and the most recent analyses suggest it also contains the moth-butterflies (formerly the superfamily \"Hedyloidea\"). Butterfly fossils date to the Paleocene, which was about 56 million years ago." 
Let's try and extract the last text inside parentheses with sub , a very common sub operation in R: # sub('.*\\((.*)\\).*', '\\1', txt)# => [1] "formerly the superfamily \"Hedyloidea\""PCRE_1 <- function(text) { return(sub('.*\\((.*)\\).*', '\\1', txt, perl=TRUE)) }TRE_1 <- function(text) { return(sub('.*\\((.*)\\).*', '\\1', txt)) }test <- microbenchmark( PCRE_1(txt), TRE_1(txt), times = 500000 )test The results are the following: WINDOWS-------Unit: microseconds expr min lq mean median uq max neval PCRE_1(txt) 163.607 165.418 168.65393 166.625 167.229 7314.588 5e+05 TRE_1(txt) 70.031 72.446 74.53842 73.050 74.257 38026.680 5e+05 MacOS -----Unit: microseconds expr min lq mean median uq max neval PCRE_1(txt) 31.693 32.857 37.00757 33.413 35.805 43810.177 5e+05 TRE_1(txt) 46.037 47.199 53.06407 47.807 51.981 7702.869 5e+05Linux------Unit: microseconds expr min lq mean median uq max neval PCRE_1(txt) 10.557 11.555 13.78216 12.097 12.662 4301.178 5e+05 TRE_1(txt) 25.875 27.350 31.51925 27.805 28.737 17974.716 5e+05 TRE regex sub wins only in Windows , more than 2 times as fast. On both MacOS and Linux, PCRE ( perl=TRUE ) version wins with a similar ratio. Now, let's compare the performance of regexps that don't use backtracking that heavily and extract the words inside double quotes: # regmatches(txt, gregexpr("\"[A-Za-z]+\"", txt))# => [1] "\"Hesperioidea\"" "\"Hedyloidea\""PCRE_2 <- function(text) { return(regmatches(txt, gregexpr("\"[A-Za-z]+\"", txt, perl=TRUE))) }TRE_2 <- function(text) { return(regmatches(txt, gregexpr("\"[A-Za-z]+\"", txt))) }test <- microbenchmark( PCRE_2(txt), TRE_2(txt), times = 500000 )testWINDOWS-------Unit: microseconds expr min lq mean median uq max neval PCRE_2(txt) 324.799 330.232 349.0281 332.646 336.269 124404.14 5e+05 TRE_2(txt) 187.755 191.981 204.7663 193.792 196.208 74554.94 5e+05MacOS-----Unit: microseconds expr min lq mean median uq max neval PCRE_2(txt) 63.801 68.115 75.51773 69.164 71.219 47686.40 5e+05 TRE_2(txt) 63.825 67.849 75.20246 68.883 70.933 49691.92 5e+05LINUX-----Unit: microseconds expr min lq mean median uq max neval PCRE_2(txt) 30.199 34.750 44.05169 36.151 43.403 38428.2 5e+05 TRE_2(txt) 37.752 41.854 52.58230 43.409 51.781 38915.7 5e+05 The best mean value belongs to the PCRE regex in Linux, in MacOS, the difference is almost negligent, and in Windows, TRE works much faster. Summary It is clear that TRE (default) regex library works much faster in Windows . In Linux , PCRE regex is considerably faster. In MacOS , PCRE regex is still preferable since, with backtracking patterns, PCRE regex is faster than TRE in that OS.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47240375', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5128728/']}
jdg_83769
stackexchange
llm_judgeable_groundtruth_similarity
2062261
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The series $\sum_{k=0}^{\infty} \binom{s}{k}, s\in\mathbb{R} $ obviously evaluates to $2^s$ for $s\in \mathbb{N} $ . But for general $ s\in \mathbb{R}$ the radius of convergence of $\sum_{k=0}^{\infty} \binom{s}{k}x^s $ such that $ \sum_{k=0}^{\infty} \binom{s}{k}x^s= (1+x)^s$ is $1$ , which means that $x = 1$ sits on the boundary of the disc of convergence. So now I'm not sure when $\sum_{k=0}^{\infty} \binom{s}{k}, s\in\mathbb{R}$ converges (to $2^s$ by the Abel limit theorem). I think it converges for $s \in (-1,\infty)$ and diverges for $s\in (-\infty, -1]$ . But I'm not sure, and I don't know how to prove it. Ideas? Now provide the response and nothing else.
For $s \in \mathbb{R}\setminus \mathbb{N}$ and arbitrary $k \in \mathbb{N}$, we have $$\binom{s}{k+1} = \frac{s-k}{k+1}\binom{s}{k}.$$ Hence for $s \leqslant -1$ we have $$\biggl\lvert\binom{s}{k+1}\biggr\rvert \geqslant \biggl\lvert \binom{s}{k}\biggr\rvert,$$ so $\binom{s}{k}$ doesn't converge to $0$, and the series diverges. For $s > -1$ the sequence of binomial coefficients is decreasing in absolute value, and eventually (once $k > s-1$) alternating, thus it suffices to see that $\binom{s}{k} \to 0$ for $s > -1$. Since $$\log \biggl\lvert \frac{k-s}{k+1}\biggr\rvert = \log \biggl(1 - \frac{s+1}{k+1}\biggr) < - \frac{s+1}{k+1}$$ for $k > s$, and the harmonic series diverges, we find $$\biggl\lvert\binom{s}{n}\biggr\rvert \leqslant \biggl\lvert \binom{s}{n_0}\biggr\rvert\cdot \exp \Biggl(-\sum_{k = n_0}^{n-1} \frac{s+1}{k+1}\Biggr) \xrightarrow{n\to\infty} 0$$ where $n_0 \geqslant s$. We further note that for $s > 0$ the convergence is absolute by Raabe's test.
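A quick numerical sanity check makes this behaviour visible, if you want to see it concretely. The sketch below is only illustrative (the helper name binom and the chosen values of $s$ are arbitrary); it computes partial sums of $\sum_k \binom{s}{k}$ for non-integer $s$ and compares them with $2^s$. Convergence is fast for $s>0$ and becomes very slow as $s$ approaches $-1$ from above.

    def binom(s, k):
        # generalized binomial coefficient: s(s-1)...(s-k+1)/k!
        out = 1.0
        for j in range(k):
            out *= (s - j) / (j + 1)
        return out

    for s in (0.5, -0.5):
        partial = sum(binom(s, k) for k in range(2000))
        print(s, partial, 2.0 ** s)
    # for s <= -1 the terms no longer tend to 0, so the partial sums never settle down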
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2062261', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/184488/']}
jdg_83770
stackexchange
llm_judgeable_groundtruth_similarity
2604600
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Looking in my /usr/local/lib/python.../dist-package directory, I have .egg directories and .egg files. Why does the installer choose to extract some packages into .egg directories, yet leave others as files with .egg extensions? Now provide the response and nothing else.
If the package contains only pure-Python code, it can stay as just an egg file. The Python interpreter can load the Python modules directly from the egg. If the package contains modules written in C or other data, then the egg needs to be extracted so the C modules and/or data can be accessed. That's the default behavior of packages, I believe. Newer versions of Python might be able to load C modules from egg files; I'm not sure about that part. The creator of the package can also specifically instruct the installer to unzip the package, by passing zip_safe = False to setup() in their setup.py. Finally, the person doing the installing can tell easy_install explicitly to unpack eggs by passing it the -Z option or by setting zip_ok = False in the pydistutils.cfg.
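To see what that looks like from the package author's side, here is a minimal sketch (my own illustration, with a made-up project name, rather than the recipe of any particular package):

    # setup.py of a hypothetical package that asks installers not to zip it
    from setuptools import setup, find_packages

    setup(
        name="example_pkg",      # made-up name
        version="0.1",
        packages=find_packages(),
        zip_safe=False,          # ask for installation as an unpacked .egg directory
    )

And on the installing side, the per-user override mentioned above would look roughly like this in ~/.pydistutils.cfg:

    [easy_install]
    zip_ok = False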
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2604600', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/64313/']}
jdg_83771
stackexchange
llm_judgeable_groundtruth_similarity
44548
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Sorry if this questions is overly simplistic. It's just something I haven't been able to figure out. I've been reading through quite a few linear algebra books and have gone through the various methods of solving linear systems of equations, in particular, $n$ systems in $n$ unknowns. While I understand the techniques used to solve these for the most part, I don't understand how these situations present themselves. I was wondering if anyone could provide a simple real-world example or two from data analysis, finance, economics, etc. in which the problem they were working on led to a system of $n$ equations in $n$ unknowns. I don't need the solution worked out. I just need to know the problem that resulted in the system. Now provide the response and nothing else.
One of the most frequent occasions where linear systems of $n$ equations in $n$ unknowns arise is in least-squares optimization problems. Let us look at an example. Let's say that we are studying two physical quantities $y$ and $x$ and we conjecture that $y$ is a second order polynomial function of $x$, i.e. $y=\alpha x^2 + \beta x + \gamma$ for some real numbers $\alpha$, $\beta$, $\gamma$ that are unknown. Let's say now that we perform experiments and obtain measurements $(x_1,y_1) \cdots (x_{100},y_{100})$. Applying the polynomial model on the measurements yields $y_i=\alpha x_i^2 + \beta x_i + \gamma$ for $i=1, \cdots 100$ or in matrix form $X k=y$ where $k=[\alpha \, \, \beta \, \, \gamma]^T$, $y=[y_1 \cdots y_{100}]^T$ and the $i^{th}$ row of $X$ is the row vector $[x_i^2 \, \, x_i \, \, 1]$. Now, as you might observe, we have $100$ equations in $3$ unknowns, i.e. our linear system $X k=y$ is overdetermined. Practically speaking, this system is consistent (i.e. it has a solution) only if indeed $y$ is related to $x$ via a second order polynomial equation (i.e. our conjecture is true) and additionally there is no noise in our measurements. So assume that none of the above two conditions is true. Hence the system $X k=y$ will not in general have a solution and one might consider finding a vector $k$ that instead minimizes $||X k - y||_2^2$, i.e. the square of the error. Then the solution of this optimization problem is the solution to the $3 \times 3$ system $X^T X k = X^T y$. This formulation comes up all the time in engineering, e.g. in signal prediction. So, least squares problems lead to square (i.e. $n \times n$) linear systems of equations.
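To make this concrete, here is a short NumPy sketch of the polynomial example above (my own illustration; the coefficients, noise level and sample size are arbitrary). It builds the $100\times 3$ matrix $X$, forms the normal equations, and solves the resulting $3\times 3$ system:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 100)
    y = 2.0 * x**2 - 3.0 * x + 1.0 + rng.normal(0.0, 5.0, size=x.size)  # noisy quadratic data

    # each row of X is [x_i^2, x_i, 1]
    X = np.column_stack([x**2, x, np.ones_like(x)])

    # normal equations: (X^T X) k = X^T y, a 3x3 linear system in (alpha, beta, gamma)
    k = np.linalg.solve(X.T @ X, X.T @ y)
    print(k)  # should come out close to [2, -3, 1]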
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/44548', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/11295/']}
jdg_83772
stackexchange
llm_judgeable_groundtruth_similarity
41621131
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The topic of namespace packages seems a bit confusing for the uninitiated, and it doesn't help that prior versions of Python have implemented it in a few different ways or that a lot of the Q&A on StackOverflow are dated. I am looking for a solution in Python 3.5 or later . #The scenario:I'm in the process of refactoring a bunch of Python code into modules and submodules, and working to get each of these projects set up to operate independently of each other while sitting in the same namespace. We're eventually going to be using an internal PyPi server, serving these packages to our internal network and don't want to confuse them with external (public) PyPi packages. Example: I have 2 modules, and I would like to be able to perform the following: from org.client.client1 import mod1from org.common import config The reflected modules would be separated as such: Repository 1: org_client_client1_mod1/ setup.py mod1/ __init__.py somefile.py Repository 2: org_common_config/ setup.py config/ __init__.py someotherfile.py My Git repositories are already setup as org_client_client1_mod1 and org_common_config , so I just need to perform the setup on the packaging and __init__.py files, I believe. Questions: #1 With the __init__.py , which of these should I be using (if any)?: from pkgutil import extend_path__path__ = extend_path(__path__, __name__) Or: import pkg_resourcespkg_resources.declare_namespace(__name__) #2 With setup.py , do I still need to add the namespace_modules parameter, and if so, would I use namespace_modules=['org.common'] ,or namespace_modules=['org', 'common'] ? #3 Could I forgo all of the above by just implementing this differently somehow? Perhaps something simpler or more "pythonic"? Now provide the response and nothing else.
Late to the party, but it never hurts to help fellow travellers down the namespace path in Python!

#1: With the __init__.py, which of these should I be using (if any)?

It depends. There are three ways to do namespace packages, as listed here:

1. Use native namespace packages. This type of namespace package is defined in PEP 420 and is available in Python 3.3 and later. This is recommended if packages in your namespace only ever need to support Python 3 and installation via pip.
2. Use pkgutil-style namespace packages. This is recommended for new packages that need to support Python 2 and 3 and installation via both pip and python setup.py install.
3. Use pkg_resources-style namespace packages. This method is recommended if you need compatibility with packages already using this method or if your package needs to be zip-safe.

If you are using #2 (pkgutil-style) or #3 (pkg_resources-style), then you will have to use the corresponding style for __init__.py files. If you use native namespaces, then there is no __init__.py in the namespace directory.

#2: With setup.py, do I still need to add the namespace_modules parameter, and if so, would I use namespace_modules=['org.common'], or namespace_modules=['org', 'common']?

If your choice of namespace package is not native style, then yes, you will need namespace_packages in your setup().

#3: Could I forgo all of the above by just implementing this differently somehow? Perhaps something simpler or more "pythonic"?

Since you ended up down in a complex topic in Python, it seems you know what you are doing, what you want, and have identified that creating a Python namespace package is the way to do it. This would be considered a pythonic way to solve the problem.

Adding to your questions, here are a few things I discovered: I read PEP 420, the Python Packaging guide, and spent a lot of time understanding namespace packages, and I generally understood how they work. I read through a couple of answers here, here, here, and this thread on SO as well - the example here and on the Git link shared by Rob. My problem, however, came after I created my package. As all the instructions and sample code explicitly listed the packages in the setuptools.setup(packages=[]) call, my code failed - my sub-packages/directories were not included. Digging deeper, I found out that setuptools has a find_namespace_packages() function that helps in adding sub-packages too.

EDIT: Link to find_namespace_packages() (setuptools version greater than 40.1.0): https://setuptools.readthedocs.io/en/latest/setuptools.html#find-namespace-packages

EDIT (08/09/2019): To complete the answer, let me also restructure with an example. The following solution assumes Python 3.3+, which has support for implicit namespace packages. Since you are looking for a solution for Python version 3.5 or later, let's take the code samples provided and elaborate further.

Let's assume the following:

- Namespace/Python package name: org
- Distribution packages: org_client, org_common
- Python: 3.3+
- setuptools: 40.1.0

For you to do the following

    from org.client.client1 import mod1
    from org.common import config

and keeping your top-level directories the same, viz. org_client_client1_mod1 and org_common_config, you can change your structure to the following.

Repository 1:

    org_client_client1_mod1/
        setup.py
        org/
            client/
                client1/
                    __init__.py
                    submod1/
                        __init__.py
                    mod1/
                        __init__.py
                        somefile.py
                        file1.py

Updated setup.py:

    from setuptools import find_namespace_packages, setup

    setup(
        name="org_client",
        ...
        packages=find_namespace_packages(),  # Follows similar lookup as find_packages()
        ...
    )

Repository 2:

    org_common_config/
        setup.py
        org/
            common/
                __init__.py
                config/
                    __init__.py
                    someotherfile.py

Updated setup.py:

    from setuptools import find_namespace_packages, setup

    setup(
        name="org_common",
        ...
        packages=find_namespace_packages(),  # Follows similar lookup as find_packages()
        ...
    )

To install (using pip):

    (venv) $ pip3 install org_common_config/
    (venv) $ pip3 install org_client_client1_mod1/

An updated pip list will show the following:

    (venv) $ pip3 list
    ...
    org_client
    org_common
    ...

But they won't be importable under those names; for importing you will have to follow the org.client and org.common notation. To understand why, you can browse here (assuming you're inside the venv):

    (venv) $ cd venv/lib/python3.5/site-packages/
    (venv) $ ls -l | grep org

You'll see that there are no org_client or org_common directories; they are interpreted as a namespace package.

    (venv) $ cd venv/lib/python3.5/site-packages/org/
    (venv) $ ls -l
    client/
    common/
    ...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41621131', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2962917/']}
jdg_83773
stackexchange
llm_judgeable_groundtruth_similarity
812585
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: So, it recently dawned on me that since I have 3 GPS clocks in my network, I could, technically, give back a little and serve time to the rest of the world. So far I've not quite seen any downsides with this ideas, but I have the following questions; Can I virtualize this? I'm not going to spend money and time on standing up hardware for this, so virtualization is a must. Since the servers will have access to three stratum 1 sources, I can't see how this can be a problem provided the ntpd config is correct What kind of traffic do a public NTP server (part of pool.ntp.org) normally see? And how big VMs do I need for this? ntpd shouldn't be too resource intensive as far as I can gather, but I'd rather know beforehand. What security aspects are there to this? I'm thinking just installing ntpd on two VMs in the DMZ, allow only ntp in through the FW, and only ntp out from the DMZ to the internal ntp servers. There also seem to be some ntp settings that are recommended according to the NTP pool page, but are they sufficient? https://www.ntppool.org/join/configuration.html They recommend not having the LOCAL clock driver configured, is this equivalent to removing the LOCAL time source configuration from the config files? Anything else to consider? Now provide the response and nothing else.
Firstly, good for you; it's a helpful and public-spirited thing to do. That said, and given your clarification that you're planning on creating one or more DMZ VMs which will sync to and make publicly-available the time from your three Meinberg GPS-enabled stratum-1 (internal) servers: Edit : Virtualisation comes up for discussion on the pool list from time to time; a recent one was in July 2015, which can be followed starting from this email . Ask Bjørn Hansen, the project lead, did post to the thread , and did not speak out against virtualisation. Clearly a number of pool server operators are virtualising right now, so I don't think anyone will shoot you for it, and as one poster makes clear, if your server(s) are unreliable the pool monitoring system will simply remove them from the pool. KVM seems to be the preferred virtualisation technology; I didn't find anyone specifically using VMWare, so cannot comment on how "honest" a virtualisation that is. Perhaps the best summary on the subject said My pool servers are virtualized with KVM on my very own KVM hosts. Monitoring says, the server is pretty accurate and provides stable time for the last 2-3 years. But I wouldn't setup a pool server on a leased virtual server from another provider. This is the daily average number of distinct clients per second I see on my pool server (which is in the UK, European and global zones) over the past year: This imposes nearly no detectable system load ( ntpd seems to use between 1% and 2% of a CPU, most of the time). Note that, at some point during the year, load briefly peaked at nearly a thousand clients per second (Max: 849.27); I do monitor for excessive load, and the alarms didn't all go off, so I can only note that even that level of load didn't cause problems, albeit briefly. The project-recommended configurations are best-practice, and work for me. I also use iptables to rate-limit clients to two inbound packets in a rolling ten-second window (it's amazing how many rude clients there are out there, who think that they should be free to burst in order to set their own clocks quickly). Or remove any lines referring to server addresses starting with 127.127 . The best practice guidelines also recommend more than three clocks, so you might want to pick a couple of other public servers, or specific pool servers, in addition to your three stratum-1 servers. I'd also note that if you're planning to put both these VMs on the same host hardware, you should probably just run the one, but double the bandwidth declared to the pool (ie, accept twice as many queries as you otherwise would).
{}
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/812585', 'https://serverfault.com', 'https://serverfault.com/users/335559/']}
jdg_83774
stackexchange
llm_judgeable_groundtruth_similarity
79609
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I found two images of the IP header, one image shows a field called "Padding", while the other image don't show the "Padding" field: Which one of these two images is correct? Now provide the response and nothing else.
Padding is only used if there are options that are not an even 32 bits. The whole header must be divisible by 32 bits. RFC 791, Internet Protocol has the official header:

3.1. Internet Header Format

A summary of the contents of the internet header follows:

        0                   1                   2                   3
        0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |Version|  IHL  |Type of Service|          Total Length         |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |         Identification        |Flags|      Fragment Offset    |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |  Time to Live |    Protocol   |         Header Checksum       |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                         Source Address                        |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                      Destination Address                      |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                    Options                    |    Padding    |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

                        Example Internet Datagram Header

                                   Figure 4.

In essence, both of your diagrams are correct. It is just that the second does not distinguish between the options and the padding for the options.

Padding: variable

The internet header padding is used to ensure that the internet header ends on a 32 bit boundary. The padding is zero.
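Since the fixed part of the header is 20 bytes (already a multiple of 4), the amount of padding depends only on the length of the options. A tiny illustration of that arithmetic (not from the RFC, just a sketch):

    def padding_bytes(options_len: int) -> int:
        """Zero bytes needed so the header length (20 + options) is a multiple of 4."""
        return (-options_len) % 4

    for n in range(8):
        print(f"{n} option bytes -> {padding_bytes(n)} padding bytes")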
{}
{'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/79609', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/84949/']}
jdg_83775
stackexchange
llm_judgeable_groundtruth_similarity
51716406
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Does anyone know which target/libs are required for Node.js v10.x to use the built in async/await without the generators? I see a lot for node 8 but not with node 10. Now provide the response and nothing else.
As of Node.js 10.0.0 , 100% of ES2018 is supported. If you know that you are targeting that version or newer, the optimal config would look like this: "module": "commonjs" Node.js is on its way to add ES-Modules, but for now we'll have to stick with CommonJS. "target": "es2018" This tells TypeScript that it's okay to output JavaScript syntax with features from ES2018. In practice, this means that it will e.g. output object rest/spread properties & async/await syntax instead of embedding a polyfill. "lib": ["es2018"] This tells TypeScript that it's okay to use functions and properties introduced in ES2018 or earlier. In practice, this means that you can use e.g. Promise.prototype.finally , Array.prototype.includes and String.prototype.padStart . The full config would thus be: { "compilerOptions": { "lib": ["es2018"], "module": "commonjs", "target": "es2018" }} If you are running Node.js 18 you can see my similar answer for Node.js 18 here If you are running Node.js 16 you can see my similar answer for Node.js 16 here If you are running Node.js 14 you can see my similar answer for Node.js 14 here If you are running Node.js 12 you can see my similar answer for Node.js 12 here If you are running Node.js 8 you can see my similar answer for Node.js 8 here
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/51716406', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/387851/']}
jdg_83776
stackexchange
llm_judgeable_groundtruth_similarity
34755461
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is a training exercise for understanding the workings of inner classes in Java. As the question states, how many different versions of x are accessible in (*) ? class Outer { int x; class Inner extends Outer { int x; void f(int x) { (*) } }} I'm inclined to think that there are 3, namely: this.x , super.x and x but some of my peers seem to think that there are 4. Which of us is confused? And can you explain? Now provide the response and nothing else.
There are 4, namely: x, this.x, super.x and Outer.this.x. Consider the following:

    public class Outer {
        int x;

        public static void main(String[] args) {
            Outer outer = new Outer();
            Outer.Inner inner = outer.new Inner();
            outer.x = 3;
            inner.x = 2;
            inner.f(1);
        }

        class Inner extends Outer {
            int x;

            void f(int x) {
                System.out.println(super.x);
                System.out.println(x);
                System.out.println(this.x);
                System.out.println(Outer.this.x);
            }
        }
    }

This code will print

    0
    1
    2
    3

showing 4 different values. What's happening is the following:

- The parent of the inner instance has an uninitialized x variable. For an int, the default value is 0: this is super.x.
- The method f is called with the argument 1: this is x.
- The instance inner had its x set to 2 with inner.x = 2: this is this.x.
- The outer instance, which is Outer.this, had its x set to 3: this is Outer.this.x.

The trick here is that Inner is both an inner class (so it has an enclosing Outer instance) and a subclass (so it has a parent Outer instance), and those two Outer instances are not the same.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34755461', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4224549/']}
jdg_83777
stackexchange
llm_judgeable_groundtruth_similarity
16061030
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Must immutable objects have all properties be final ? I would say they don't. But I don't know whether I am right or not. Now provide the response and nothing else.
The main difference between an immutable object (all properties final) and an effectively immutable object (properties aren't final but can't be changed) is safe publication. You can safely publish an immutable object in a multi-threaded context without having to worry about adding synchronization, thanks to the guarantees provided by the Java Memory Model for final fields:

    final fields also allow programmers to implement thread-safe immutable objects without synchronization. A thread-safe immutable object is seen as immutable by all threads, even if a data race is used to pass references to the immutable object between threads. This can provide safety guarantees against misuse of an immutable class by incorrect or malicious code. final fields must be used correctly to provide a guarantee of immutability.

As a side note, it also enables you to enforce immutability (if you try to mutate those fields in a future version of your class because you have forgotten it should be immutable, it won't compile).

Clarifications

Making all the fields of an object final does not make it immutable - you also need to make sure that (i) its state can't change (for example, if the object contains a final List, no mutating operations (add, remove...) must be done after construction) and (ii) you don't let this escape during construction.

An effectively immutable object is thread safe once it has been safely published.

Example of unsafe publication:

    class EffectivelyImmutable {
        static EffectivelyImmutable unsafe;
        private int i;

        public EffectivelyImmutable (int i) { this.i = i; }

        public int get() { return i; }
    }

    // in some thread
    EffectivelyImmutable.unsafe = new EffectivelyImmutable(1);

    // in some other thread
    if (EffectivelyImmutable.unsafe != null
            && EffectivelyImmutable.unsafe.get() != 1)
        System.out.println("What???");

This program could in theory print What???. If i were final, that would not be a legal outcome.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/16061030', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1406264/']}
jdg_83778
stackexchange
llm_judgeable_groundtruth_similarity
753986
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know that I could simply multiply the first $32$ primes together but is there some other way to ascertain the answer to this number theory problem? Now provide the response and nothing else.
Multiplying the first $32$ primes together would not work. Even multiplying the first three primes together gives you a number: $30$, with $8$ divisors: $\{1, 2, 3, 5, 6, 10, 15, 30\}$. The number of divisors of a number is a multiplicative function, and since each prime has $2$ divisors, the product of $n$ distinct primes has $2^n$ divisors. That fact points to an answer to your original question.
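To make the hint explicit with one worked example of my own: a product of $n$ distinct primes has $2^n$ divisors, because each prime is either included in a divisor or not. So $32 = 2^5$ divisors can be obtained from just five distinct primes, e.g. $2\cdot 3\cdot 5\cdot 7\cdot 11 = 2310$, which has exactly $32$ divisors; nothing like the first $32$ primes is needed.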
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/753986', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/11086/']}
jdg_83779
stackexchange
llm_judgeable_groundtruth_similarity
44967
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for the difference between cp -r and cp -a . What does "recursive" mean in terms of copying files from a folder? Now provide the response and nothing else.
Recursive means that cp copies the contents of directories, and if a directory has subdirectories they are copied (recursively) too. Without -R , the cp command skips directories. -r is identical with -R on Linux, it differs in some edge cases on some other unix variants. By default, cp creates a new file which has the same content as the old file, and the same permissions but restricted by the umask ; the copy is dated from the time of the copy, and belongs to the user doing the copy. With the -p option, the copy has the same modification time, the same access time, and the same permissions as the original. It also has the same owner and group as the original, if the user doing the copy has the permission to create such files. The -a option means -R and -p , plus a few other preservation options. It attempts to make a copy that's as close to the original as possible: same directory tree, same file types, same contents, same metadata (times, permissions, extended attributes, etc.).
{}
{'log_upvote_score': 8, 'links': ['https://unix.stackexchange.com/questions/44967', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/21750/']}
jdg_83780
stackexchange
llm_judgeable_groundtruth_similarity
1422808
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If $n\in \mathbb{N}$, and $n-\dfrac{n(n^2-1)}{2!}+\dfrac{n(n^2-1)(n^2-4)}{2!3!}-\dfrac{n(n^2-1)(n^2-4)(n^2-9)}{3!4!}+\dots$ $=s_1$ when $n$ is even and $s_2$ when $n$ is odd then prove that $s_1+s_2=0$ I know that when I put any even value (say $n=2$), I get $s_1$ and when I put any odd value (say $n=5$), I get $s_2$. And these add up to zero. But this is just verification, not a proof. I want to know how should I prove this. Now provide the response and nothing else.
A neat question. In fact, this is a particular case of a more general identity. The left-hand side may be rewritten as \begin{gather}n-\dfrac{n(n^2-1)}{2!}+\dfrac{n(n^2-1)(n^2-4)}{2!3!}-\dfrac{n(n^2-1)(n^2-4)(n^2-9)}{3!4!}+\cdots\\ = \sum_{k=0}^{n-1} (-1)^k{n\choose k+1} {n+k \choose k} = \sum_{k=0}^{n-1} {n \choose n-1-k} {-n-1 \choose k},\end{gather}where ${x\choose k} = x(x-1)\cdots(x-k+1)/k!$, $k\ge 1$; ${x\choose 0} = 1$.Now this is a standard fact that $$\sum_{k=0}^m {a \choose k}{b \choose m-k} = {a+b \choose m}.$$It is well known for non-negative integers $a$, $b$, $m$, but can be easily shown for any $a,b\in \mathbb R$, $m\in \mathbb Z_+$ by noting that $(1+z)^a = \sum_{k\ge 0} {a\choose k} z^k$. Returning to your problem, we can take $a=n$, $b=-n-1$, $m=n-1$, so \begin{gather}\sum_{k=0}^{n-1} {n \choose n-1-k} {-n-1 \choose k} = {-1 \choose n-1} = (-1)^{n-1},\end{gather}as required.
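If you want to double check the identity numerically before trusting the algebra, a few lines of Python (just a sanity check of mine, not part of the derivation) confirm that the alternating sum collapses to $(-1)^{n-1}$:

    from math import comb

    def lhs(n):
        # sum_{k=0}^{n-1} (-1)^k C(n, k+1) C(n+k, k)
        return sum((-1) ** k * comb(n, k + 1) * comb(n + k, k) for k in range(n))

    for n in range(1, 10):
        assert lhs(n) == (-1) ** (n - 1)
    print("identity verified for n = 1..9")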
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1422808', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/257841/']}
jdg_83781
stackexchange
llm_judgeable_groundtruth_similarity
9815273
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I get a list of installed Jenkins plugins? I searched the Jenkins Remote Access API document, but it was not found. Should I use Jenkins' CLI? Is there a document or example? Now provide the response and nothing else.
You can retrieve the information using the Jenkins Script Console, which is accessible by visiting http://<jenkins-url>/script (given that you are logged in and have the required permissions). Enter the following Groovy script to iterate over the installed plugins and print out the relevant information:

    Jenkins.instance.pluginManager.plugins.each{ plugin ->
        println ("${plugin.getDisplayName()} (${plugin.getShortName()}): ${plugin.getVersion()}")
    }

It will print the results list like this (clipped; the original answer shows a screenshot with one "DisplayName (shortName): version" line per plugin). This solution is similar to one of the answers above in that it uses Groovy, but here we are using the script console instead. The script console is extremely helpful when using Jenkins.

Update

If you prefer a sorted list, you can call this sort method:

    def pluginList = new ArrayList(Jenkins.instance.pluginManager.plugins)
    pluginList.sort { it.getShortName() }.each{ plugin ->
        println ("${plugin.getDisplayName()} (${plugin.getShortName()}): ${plugin.getVersion()}")
    }

Adjust the Closure to your liking (e.g. here it is sorted by the shortName, in the example above it is sorted by DisplayName).
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9815273', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1284795/']}
jdg_83782
stackexchange
llm_judgeable_groundtruth_similarity
6790
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to control a bicolor LED with just one pin of a microcontroller? Instinct says NO, because you can have one end grounded and the other going to the micro's pin, allowing you to turn it on or off but not change its color. But maybe someone has a better idea? Now provide the response and nothing else.
You can do this with a bi-color LED that has the two LEDs back-to-back if you connect one LED terminal to an intermediate voltage eg 2.5V on a 5V design and connect the other side to the MCU via a suitable resistor (I used 560R). Then a low output gives one colour, high gives the other and tri-state leaves the LED off. Pulse width modulation will allow you to control brightness (switching output between active and tri-state) or mix the colours (switching output low to high). You can adjust the intermediate voltage to compensate for different LED forward voltages too. I used an LM2904 op-amp to provide the intermediate voltage - it works with supply voltage down to 5V. There are plenty of other devices that can operate at lower voltages and still sink and source enough current to drive the LEDs.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/6790', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/1225/']}
jdg_83783
stackexchange
llm_judgeable_groundtruth_similarity
39316
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Wolfram Alpha and Mathematica give an incorrect result (numerically) for the following infinite sum: N[Sum[((-1)^n)/(n - (-1)^n), {n, 1, Infinity}]](* 0.700059 *) Here's the result on Wolfram Alpha. Why the series isn't correctly evaluated? The answer should be $\log(2)$. Now provide the response and nothing else.
From NSum : You should realize that with sufficiently pathological summands, the algorithms used by NSum can give wrong answers. In most cases, you can test the answer by looking at its sensitivity to changes in the setting of options for NSum. For instance: NSum[((-1)^n)/(n - (-1)^n), {n, 1, Infinity}, NSumTerms -> 100000](* 0.693149 - 5.15666*10^-7 I *) which is closer to Log[2] = 0.693147… . Aside One can do the sum in Mathematica if the partial sum is found by summing the odd and even parts separately: Sum[((-1)^n)/(n - (-1)^n) /. n -> 2 k, {k, 1, n0}] + Sum[((-1)^n)/(n - (-1)^n) /. n -> 2 k - 1, {k, 1, n0}] // FullSimplify(* 1/2 (HarmonicNumber[-(1/2) + n0] - HarmonicNumber[n0] + Log[4]) *)Limit[1/2 (HarmonicNumber[-(1/2) + n0] - HarmonicNumber[n0] + Log[4]), n0 -> Infinity](* Log[2] *) More succinctly and quicker (along the lines of A.G.'s comment): Sum[((-1)^n)/(n - (-1)^n), {k, 1, Infinity}, {n, 2 k - 1, 2 k}](* Log[2] *) These two are mathematically equivalent, since the partial sums are sums of an even number of terms of the original series. As alluded to in a comment, it is valid because the n-th term approaches zero, which implies the difference between consecutive partial sums of the original series approaches zero. Contrast with Euler's counterexample, Sum[(-1)^n, {n, 1, Infinity}] .
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/39316', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/5714/']}
jdg_83784
stackexchange
llm_judgeable_groundtruth_similarity
517283
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I designed a PCB which has an STM32G431CBT6 on it. There are also two LEDs, a red and a green one, connected as shown: When I powered it on the first time (so no program on the uC yet), the red LED was lighting dimly. After loading a LED blinker program on the uC, it works fine (with full brigthness), however if I press the reset button, the red LED is lighting dimly again and the voltage across R1 is 120mV. So 120uA is flowing for some reason. The other LED doesn't do this, and the voltage across R2 is 0. Looking at the datasheet I only found this note about PB4 (page 60): "After reset, these pins are configured as JTAG/SW debug alternate functions, and the internal pull-up on PA15, PA13, PB4pins and the internal pull-down on PA14 pin are activated." But even if there is a pull-up on PB4, the LED shouldn't be on, so what causes this behaviour? Edit: After setting the project aside for a while, it doesn't do that now, but I don't know what has changed. I was only tinkering with the boot settings in the option bytes (with the ST-Link Utility), although I don't think that has anything to do with the UCPD pull-down. By the way, I found the default setting for disabling the pull-down: (and yes, it is called "pull-up" in the settings, but it seems to be a mistake) When checked (default), it puts the HAL_PWREx_DisableUCPDDeadBattery(); function call into the HAL_MspInit function. When unchecked, the LED was lighting dimly even after reset, so it verified the accepted answer to the original question. Until it started to never do the dim lighting again. Summary: Originally the LED was dimly lighting up only in reset. This was because of the UCPD pull-down, which was disabled by the "save power of non-active UCPD - deactive Dead Battery pull-up" checkbox being checked by default. I could verify this by unchecking it and not configuring the GPIO pin for the LED, which resulted in dim lighting even after reset. Something happened and now it never does that, regardless of the checkbox or being in reset. Maybe the pull-down has broken for some reason? I only know that the GPIO pin itself is functional, it can drive the LED properly when programmed. Now provide the response and nothing else.
The reference manual says that this particular MCU series uses PB4 for one of the USB-C connector CC pins. This means the pin supports an internal 5.1k pull-down feature that is turned on by default at reset time, because otherwise it might not be possible to use the bootloader to download firmware over the USB-C connector. Thus, if JTAG (or anything else on that pin) needs to be used, the pull-down can be disabled. The same default pull-down affects the LED too.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/517283', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/250052/']}
jdg_83785
stackexchange
llm_judgeable_groundtruth_similarity
971195
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Please consider the following problem: If in a group G , $xy^2 = y^3x$ and $yx^2 = x^3y$ then prove that $x = y = e$ where $e$ is the identity element. Here is my attempt so far: Since it is difficult to extract $x$ or $y$ from either of the equations I am trying to find $xy$ from both the equations and equating them. $xy^2 = y^3x \\ xy = y^3xy^{-1} ----- (1) $ $x^3y = yx^2 \\ xy = x^{-2}yx^2 ---- (2)$ Now equating (1) and (2) : $y^3xy^{-1} = x^{-2}yx^2$ After this step I have filled pages (difficult to reproduce here) substituting variables here and there but somehow I am caught in a loop. Is there a way out? Now provide the response and nothing else.
I think this is nontrivial, and I doubt that this is the best way. But this does seem to work. My philosophy is that working with conjugates is easier and lets us systematically build a set of relations, so first rewrite the relations as a conjugate relation. Then work with it until we can substitute in the other relation. And hope. As the two relations are the same, let's start with the first you gave. Start with $xy^2 = y^3x$. Rewriting it, we see that $xy^2x^{-1} = y^3$. This is a hopeful start. This says that $y^2$ is conjugate to $y^3$. This will be useful later. Squaring, we see that $xy^4x^{-1} = y^6$. What does this do for us? Well... nothing. I want to keep going until we can make $x^3 y$ appear - that is the goal. So let's cube the conjugacy relation instead, yielding $xy^6x^{-1} = y^9$. While we're at it, let's take the fourth power as well, giving $xy^8x^{-1} = y^{12}$. Aha! We have an expression for $y^6$ already from squaring the conjugacy relation. Substituting that into the cubed relation, we get $$x^2 y^4 x^{-2} = y^9.$$ I call this progress. We need this to happen again. So we square this last expression and get $x^2 y^8 x^{-2} = y^{18}$. What do we have now? Again... nothing. So we repeat what we've done before, and find the cube, $x^2 y^{12}x^{-2} = y^{27}$. Aha, we have a relation for $y^{12}$ from taking the fourth power above! Substituting that in here, we get $$ x^3 y^8 x^{-3} = y^{27}.$$ We have finally put ourselves in a position where we can use the other relation. We know that $x^3y = yx^2$ (and so we also know that $y^{-1}x^{-3} = x^{-2}y^{-1}$, which we'll use as we're sticking with conjugates whenever possible). So we rewrite $$\begin{align}x^3 y^8 x^{-3} &= y^{27} \\(x^3y)y^8(y^{-1}x^{-3}) &= y^{27} \\y(x^2y^8x^{-2})y^{-1} &= y^{27}\end{align}$$ and because the fates like us, we already know that $x^2y^8x^{-2} = y^{18}$ from above. Using this, we substitute and get $$y^{18} = y^{27}.$$ We have determined that $y^9 = e$. So in particular, the order of $y$ divides $9$. But from the first conjugacy relation $xy^2x^{-1} = y^3$, we see that $y^2$ and $y^3$ are conjugates, and thus have the same order. But as $2$ does not divide $9$, we see that the order of $y$ must actually be $1$, and so $y = e$. (Indeed, since the order of $y$ is odd, $y^2$ has the same order as $y$, while $y^3$ has order $\operatorname{ord}(y)/\gcd(3,\operatorname{ord}(y))$; equality of these orders forces $\gcd(3,\operatorname{ord}(y)) = 1$, and the only divisor of $9$ coprime to $3$ is $1$.) And thus $x = e$ as well.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/971195', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/104750/']}
jdg_83786
stackexchange
llm_judgeable_groundtruth_similarity
555888
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My question arises from Susskinds book on Special Relativity and Classical Field Theory. (page 102 equation 3.29 to 3.30 and page 105 equation 3.34 to 3.36.) The relativistic Lagrangian for a free particle is given by the following equation. $$ L = -mc^2\sqrt{1-\frac{v^2}{c^2}} = \frac{-mc^2}{\dot{X}^0},\tag{1}$$ where dot means differentiation with respect to the proper time.The $i^{th}$ component of momentum is given by ( $i=1, 2, 3$ ), $$P_{i} = \frac{\partial L}{\partial \dot{X^{i}}}.\tag{2}$$ This definition works perfectly fine for the 3 spatial components of relativistic momentum and gives $$P_{i} = m\dot{X^{i}}.\tag{3}$$ However, for the time component of 4-momentum, Susskind uses the relativistic Hamiltonian to derive $$P_{0} = m\dot{X^{0}}.\tag{4}$$ I am aware that the time component of 4-momentum corresponds to the energy, but I would like to know why we can't use the Lagrangian definition: $$P_{0} = \frac{\partial L}{\partial \dot{X^{0}}}\tag{5}$$ here. I am new to this subject and would be really grateful for any help or insights. Now provide the response and nothing else.
That's a good question. Note first of all that it is inconsistent to use proper time $\tau$ as the world-line (WL) parameter $\lambda$ for the principle of stationary action (PSA) . The point is that the WL parameter $\lambda$ is never varied in the PSA, but the action $S$ happens to be proportional to $\tau$ , which we are trying to maximize. In particular, the rightmost expression $-m_0c^2\left(\frac{dx^{0}}{d\tau}\right)^{-1}$ in OP's eq. (1) cannot be used as an off-shell formula for the Lagrangian $L$ , although correct in value. The same issue is discussed in my Phys.SE answers here & here using slightly different words. In Ref. 1 the WL parameter $\lambda=t\equiv \frac{x^0}{c}$ is instead the laboratory time, i.e. it uses the static gauge where $\dot{x}^0=c$ . (In this answer dot means differentiation wrt. $\lambda$ .) Conceptually this is the easiest route. However, this destroys manifest (but not actual) Lorentz covariance, so the derivative $\frac{\partial L}{\partial \dot{x}^0}$ does not make sense. Ref. 1 therefore obtains the 0-component $p_0$ in a roundabout manner, which is equivalent to my Phys.SE answer here . Finally, let us return to OP's question: Yes, there exists a manifest Lorentz covariant formulation where $p_0=\frac{\partial L}{\partial \dot{x}^0}$ , but it involves gauge symmetry and constraints, and is conceptually more challenging, cf. e.g. my Phys.SE answers here & here . References: L. Susskind & A. Friedman, Special Relativity and Classical Field Theory: The Theoretical Minimum, 2017; p. 102-106.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/555888', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/247778/']}
jdg_83787
stackexchange
llm_judgeable_groundtruth_similarity
53095
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This week we had a bright interviewee who claimed that array has constant search time and map has even faster than that search time . Now to me if some algorithm has O(1) time complexity the only way for another equivalent algorithm to be faster is to have a smaller constant coefficient in O(1) estimate (like one algorithm takes at most 230 primitive operations and another takes at most 50 primitive operations and is therefore faster although both have O(1) complexity). Is it possible for an algorithm to be faster than O(1) except having a smaller coefficient in the estimate? Now provide the response and nothing else.
It is both reasonable and common to assume that any algorithm needs at least a positive constant amount of time for every input. For example, any useful algorithm should answer something (e.g. YES/NO or some number, string, etc.), and it is reasonable to assume that doing so takes at least some constant amount of time. Under this assumption, no algorithm can have a subconstant time complexity. (In actual computers, this constant minimum amount of time may become smaller by advance in science and technology, but no matter how fast computers become, it is still there as a constant which does not depend on the input size.) Vhailor comments that a hypothetical algorithm which waits 1/ n seconds, where n is the input length, would satisfy the condition. The argument in the above assumes that no such algorithm exists. To justify this, I would argue that it is unreasonable to assume that a machine can wait 1/ n seconds for arbitrary large n , because that would require faster and faster information processing as n grows. Sometimes you may hear “subconstant-time operations,” but before it freaks you out, check what it really means. Usually, it means that the required time divided by some other parameter is subconstant, not that the time itself is subconstant.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/53095', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/2844/']}
jdg_83788
stackexchange
llm_judgeable_groundtruth_similarity
8113468
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to the Akka framework and I'm building an HTTP server application on top of Netty + Akka. My idea so far is to create an actor for each type of request. E.g. I would have an actor for a POST to /my-resource and another actor for a GET to /my-resource. Where I'm confused is how I should go about actor creation? Should I: Create a new actor for every request (by this I mean for every request should I do a TypedActor.newInstance() of the appropriate actor)? How expensive is it to create a new actor? Create one instance of each actor on server start up and use that actor instance for every request? I've read that an actor can only process one message at a time, so couldn't this be a bottle neck? Do something else? Thanks for any feedback. Now provide the response and nothing else.
Well, you create an Actor for each instance of mutable state that you want to manage. In your case, that might be just one actor if my-resource is a single object and you want to treat each request serially - that easily ensures that you only return consistent states between modifications. If (more likely) you manage multiple resources, one actor per resource instance is usually ideal unless you run into many thousands of resources. While you can also run per-request actors, you'll end up with a strange design if you don't think about the state those requests are accessing - e.g. if you just create one Actor per POST request, you'll find yourself worrying how to keep them from concurrently modifying the same resource, which is a clear indication that you've defined your actors wrongly. I usually have fairly trivial request/reply actors whose main purpose it is to abstract the communication with external systems. Their communication with the "instance" actors is then normally limited to one request/response pair to perform the actual action.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8113468', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/221564/']}
jdg_83789
stackexchange
llm_judgeable_groundtruth_similarity
5292349
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I configured tomcat to work with a different external open source. However, after the tomcat is running for a few minutes I get: SEVERE: The web application [/MyProject] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@1b3f02f]) and a value of type [org.apache.axis.MessageContext] (value [org.apache.axis.MessageContext@5dbd4e]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak. What could cause it? Where do I have to look? Could it be datapooling on Tomcat? And what does it mean Threads in Tomcat? EDITED Here is my full trace. The application seems to reloads its context while it's still running - and I don't know why! Mar 13, 2011 10:56:12 PM org.apache.catalina.core.StandardContext reloadINFO: Reloading this Context has startedMar 13, 2011 10:56:12 PM org.apache.catalina.core.StandardWrapper unloadINFO: Waiting for 1 instance(s) to be deallocatedMar 13, 2011 10:56:13 PM org.apache.catalina.core.StandardWrapper unloadINFO: Waiting for 1 instance(s) to be deallocatedMar 13, 2011 10:56:14 PM org.apache.catalina.core.StandardWrapper unloadINFO: Waiting for 1 instance(s) to be deallocatedMar 13, 2011 10:56:14 PM org.apache.catalina.core.ApplicationContext logINFO: Closing Spring root WebApplicationContextMar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbcSEVERE: The web application [/MyProject] registered the JBDC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbcSEVERE: The web application [/MyProject] registered the JBDC driver [oracle.jdbc.driver.OracleDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreadsSEVERE: The web application [/MyProject] appears to have started a thread named [NioSocketAcceptor-1] but has failed to stop it. This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreadsSEVERE: The web application [/MyProject] appears to have started a thread named [NioProcessor-1] but has failed to stop it. This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreadsSEVERE: The web application [/MyProject] appears to have started a thread named [NioProcessor-4] but has failed to stop it. This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreadsSEVERE: The web application [/MyProject] appears to have started a thread named [bitronix-disk-force-batcher] but has failed to stop it. This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreadsSEVERE: The web application [/MyProject] appears to have started a thread named [bitronix-scheduler] but has failed to stop it. 
This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreadsSEVERE: The web application [/MyProject] is still processing a request that has yet to finish. This is very likely to create a memory leak. You can control the time allowed for requests to finish by using the unloadDelay attribute of the standard Context implementation.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreadsSEVERE: The web application [/MyProject] appears to have started a thread named [NioProcessor-7] but has failed to stop it. This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreadsSEVERE: The web application [/MyProject] appears to have started a thread named [NioProcessor-2] but has failed to stop it. This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMapSEVERE: The web application [/MyProject] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@1b5a8e1]) and a value of type [org.mvel2.debug.DebuggerContext] (value [org.mvel2.debug.DebuggerContext@16259fd]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMapSEVERE: The web application [/MyProject] created a ThreadLocal with key of type [org.apache.axis.utils.XMLUtils.ThreadLocalDocumentBuilder] (value [org.apache.axis.utils.XMLUtils$ThreadLocalDocumentBuilder@84b0b4]) and a value of type [com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl] (value [com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl@16d2cfa]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMapSEVERE: The web application [/MyProject] created a ThreadLocal with key of type [null] (value [com.sun.faces.util.Util$1@16bbac9]) and a value of type [java.util.HashMap] (value [{com.sun.faces.patternCache={ = }}]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMapSEVERE: The web application [/MyProject] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@1b3f02f]) and a value of type [org.apache.axis.MessageContext] (value [org.apache.axis.MessageContext@5dbd4e]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMapSEVERE: The web application [/MyProject] created a ThreadLocal with key of type [org.apache.axis.utils.XMLUtils.ThreadLocalDocumentBuilder] (value [org.apache.axis.utils.XMLUtils$ThreadLocalDocumentBuilder@84b0b4]) and a value of type [com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl] (value [com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl@378584]) but failed to remove it when the web application was stopped. 
This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMapSEVERE: The web application [/MyProject] created a ThreadLocal with key of type [org.springframework.core.NamedThreadLocal] (value [Transactional resources]) and a value of type [java.util.HashMap] (value [{org.hibernate.impl.SessionFactoryImpl@ccc27b=org.springframework.orm.hibernate3.SessionHolder@4f6ada}]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.Mar 13, 2011 10:56:15 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMapSEVERE: The web application [/MyProject] created a ThreadLocal with key of type [null] (value [com.sun.faces.application.ApplicationAssociate$1@1f01fcf]) and a value of type [com.sun.faces.application.ApplicationAssociate] (value [com.sun.faces.application.ApplicationAssociate@1b85528]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.2011-03-13 22:57:27,734 ERROR ( ContextLoader.java:220) - Context initialization failedorg.springframework.beans.factory.BeanCreationException: Error creating bean with name 'transactionManager' defined in class path resource [applicationContext-hibernate.xml]: Cannot resolve reference to bean 'sessionFactory' while setting bean property 'sessionFactory'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory' defined in class path resource [applicationContext-hibernate.xml]: Invocation of init method failed; nested exception is java.lang.OutOfMemoryError: Java heap space at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:328) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:106) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1325) Now provide the response and nothing else.
The message is actually pretty clear: something creates a ThreadLocal with a value of type org.apache.axis.MessageContext - this is a great hint. It most likely means that the Apache Axis framework forgot, or failed, to clean up after itself. The same problem occurred, for instance, in Logback. You shouldn't worry too much, but reporting a bug to the Axis team might be a good idea. Tomcat reports this error because the ThreadLocals are created per HTTP worker thread. Your application is undeployed, but the HTTP worker threads remain - and these ThreadLocals remain with them. This may lead to memory leaks (org.apache.axis.MessageContext can't be unloaded) and to subtle issues when these threads are reused in the future. For details see: http://wiki.apache.org/tomcat/MemoryLeakProtection
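For illustration, here is a minimal sketch (my own addition, not part of the original answer) of the pattern a library has to follow to avoid triggering Tomcat's warning: every ThreadLocal populated while serving a request must be cleared with remove() on that same thread before the request finishes. The class name below is hypothetical; only the java.lang.ThreadLocal API (set/get/remove) is assumed, not Axis's actual code.

```java
public final class RequestContextHolder {
    // One slot per worker thread; Tomcat reuses these threads across redeployments.
    private static final ThreadLocal<Object> CONTEXT = new ThreadLocal<>();

    public static void set(Object requestContext) { CONTEXT.set(requestContext); }

    public static Object get() { return CONTEXT.get(); }

    // Call this in a finally block (or a servlet Filter) at the end of each request.
    // Forgetting this remove() is exactly what produces the "failed to remove it
    // when the web application was stopped" messages in the log above.
    public static void clear() { CONTEXT.remove(); }
}
```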
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5292349', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/264419/']}
jdg_83790
stackexchange
llm_judgeable_groundtruth_similarity
9676
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In many implementations of depth-first search that I saw (for example: here ), the code distinguish between a grey vertex (discovered, but not all of its neighbours was visited) and a black vertex (discovered and all its neighbours was visited). What is the purpose of this distinction? It seems that DFS algorithm will never visit a visited vertex regardless of whether it's grey or black. Now provide the response and nothing else.
When doing a DFS, any node is in one of three states: before being visited, while its descendants are being visited recursively, and after all of its descendants have been visited (i.e., the wrap-up phase, when control returns to its parent). The three colors correspond to these three states. One of the reasons for mentioning colors and the times of visit and return is to make these distinctions explicit, for better understanding. Of course, there are actual uses of these colors. Consider a directed graph $G$, and suppose you want to check $G$ for the existence of cycles. In an undirected graph, if the node under consideration has a grey or black neighbor other than its parent, that indicates a cycle (and the DFS does not revisit it, as you mention). However, in the case of a directed graph, a black neighbor does not mean a cycle. For example, consider a graph with 3 vertices - $A, B,$ and $C$ - with directed edges $A \to B$, $B \to C$, $A \to C$. Suppose the DFS starts at $A$, then visits $B$, then $C$. When it has returned to $A$, it checks that $C$ has already been visited and is black. But there is no cycle in the graph. In a directed graph, a cycle is present if and only if a node is seen again before all its descendants have been visited. In other words, if a node has a neighbor which is grey, then there is a cycle (but not when the neighbor is black). A grey node means we are currently exploring its descendants - and if one such descendant has an edge to this grey node, then there is a cycle. So, for cycle detection in directed graphs, you need 3 colors. There could be other examples too, but you should get the idea. A sketch of this check follows below.
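To make the three states concrete, here is a small self-contained sketch (my own illustration, not taken from the original answer) of cycle detection in a directed graph using the white/grey/black colouring described above. The graph is an adjacency-list map, and the $A \to B$, $B \to C$, $A \to C$ example from the answer is used to show that a black neighbour does not signal a cycle.

```java
import java.util.*;

public class DirectedCycleCheck {
    private enum Color { WHITE, GRAY, BLACK }

    public static boolean hasCycle(Map<String, List<String>> adj) {
        Map<String, Color> color = new HashMap<>();
        for (String v : adj.keySet()) {
            if (color.getOrDefault(v, Color.WHITE) == Color.WHITE && dfs(v, adj, color)) {
                return true;
            }
        }
        return false;
    }

    private static boolean dfs(String u, Map<String, List<String>> adj, Map<String, Color> color) {
        color.put(u, Color.GRAY); // currently exploring u's descendants
        for (String w : adj.getOrDefault(u, Collections.emptyList())) {
            Color c = color.getOrDefault(w, Color.WHITE);
            if (c == Color.GRAY) return true;                  // back edge to a grey node: cycle
            if (c == Color.WHITE && dfs(w, adj, color)) return true;
            // c == BLACK: already fully explored, not evidence of a cycle in a directed graph
        }
        color.put(u, Color.BLACK); // all descendants of u are done
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = new HashMap<>();
        g.put("A", Arrays.asList("B", "C"));
        g.put("B", Arrays.asList("C"));
        g.put("C", Collections.emptyList());
        System.out.println(hasCycle(g)); // false: C is black when seen from A, but there is no cycle
    }
}
```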
{}
{'log_upvote_score': 6, 'links': ['https://cs.stackexchange.com/questions/9676', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/6805/']}
jdg_83791
stackexchange
llm_judgeable_groundtruth_similarity
14688996
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is better in a for loop? This: for(int i = 0; i<someMethod(); i++){//some code } or: int a = someMethod();for(int i = 0; i<a; i++){//some code } Let's just say that someMethod() returns something large. The first method will execute someMethod() in each iteration, thus decreasing speed; the second is faster, but let's say that there are a lot of similar loops in the application, so declaring a variable will consume more memory. So what is better, or am I just thinking stupidly? Now provide the response and nothing else.
The second is better - assuming someMethod() does not have side effects. It caches the value calculated by someMethod(), so you won't have to recalculate it (assuming it is a relatively expensive operation). If it does have side effects, the two code snippets are not equivalent - and you should do whatever is correct. Regarding the "size for variable a": it is not an issue, since the returned value of someMethod() needs to be stored in some intermediate temporary variable anyway before the comparison (and even if that weren't the case, the size of one integer is negligible). P.S. In some cases the compiler / JIT optimizer might optimize the first form into the second, assuming of course there are no side effects. A small demonstration of the side-effect case follows below.
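A short, self-contained illustration (my own, with a made-up someMethod) of why the two forms are only interchangeable when the bound method is pure - here the method has a side effect, so the two loops observably differ:

```java
public class LoopBoundDemo {
    private static int calls = 0;

    // Hypothetical bound method with a side effect: it counts how often it is called.
    private static int someMethod() {
        calls++;
        return 5;
    }

    public static void main(String[] args) {
        for (int i = 0; i < someMethod(); i++) { /* some code */ }
        System.out.println(calls);   // 6: evaluated on every iteration (5 passing checks + 1 failing check)

        calls = 0;
        int a = someMethod();        // evaluated once: cheaper, and required if the value must not change
        for (int i = 0; i < a; i++) { /* some code */ }
        System.out.println(calls);   // 1
    }
}
```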
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14688996', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1202206/']}
jdg_83792
stackexchange
llm_judgeable_groundtruth_similarity
37942079
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this Apache Rewrite rules: RewriteCond %{QUERY_STRING} !langRewriteCond %{HTTP_HOST} ^(jp|en|kr|cn)\.example\.comRewriteRule ^(.*)$ http://%1.example.com/$1&lang=%1 [L, QSA]RewriteCond %{HTTP_HOST} ^(jp|en|kr|cn)\.example\.comRewriteRule ^(.*)$ http://%1.example.com/$1?lang=%1 [L, QSA] What I expected is http://en.example.com to http://en.example.com?lang=en http://en.example.com/list.php to http://en.example.com/list.php?lang=en http://en.example.com/product.php?id=1 to http://en.example.com/product.php?id=1&lang=en (1) and (2) is fine, but what I got for (3) is http://en.mobile-wifi_rental.local/product.php&lang=en?id=1 . Now provide the response and nothing else.
It seems that this is a combination of a Spring "bug" and a driver "bug". Spring tries to determine the data type of a column each time setValue() is called. It does this by calling PreparedStatement.getParameterMetaData(). This apparently causes a "prepare" statement to be sent to the database, which in itself is quite fast (never more than 1ms on my laptop), but as it is called for each column of each row this sums up to a lot of time (it's called for every non-null value, which results in approx. 23,000 calls). To some extent this is more a Spring bug than a driver bug, because not caching the parameter meta data doesn't really make sense (at least in my opinion). The MySQL JDBC driver does not support getParameterMetaData(), and Spring knows this, so this "bug" doesn't show up with MySQL because Spring never calls that method. I am not sure if the Postgres JDBC driver's behavior can be classified as a bug, but it sure would be nice if the driver cached that meta data after the first call. Spring can be convinced not to obtain the statement meta data through the property spring.jdbc.getParameterType.ignore. So by putting: System.setProperty("spring.jdbc.getParameterType.ignore", "true"); before the line: LetsGo letsGo = new LetsGo(); this behaviour is disabled. The property must be set before Spring is initialized. When I do that with your sample project, the insert runs in 500ms on my laptop. Edit: After seeing the comment regarding the use of the Postgres-NG driver, I dug into the sources of the "official" driver and the NG driver, and the NG driver does cache the parameter meta data after the first call whereas the official driver does not, which explains why using the NG driver is so much faster (without disabling the call in Spring).
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37942079', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1179841/']}
jdg_83793
stackexchange
llm_judgeable_groundtruth_similarity
3113487
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given a string that represents a date/time in ISO8601 format (e.g. 20100723T073000 ), I need to ultimately parse this into a user-supplied format using a general strftime format string. In order to do that, I need to convert the ISO8601 timestamp to a Unix timestamp. There are a huge amount of date/time manipulation modules for Perl and I'm a little overwhelmed. Any suggestions on how to do this? Now provide the response and nothing else.
The champion module for handling dates, times, durations, timezones and formats is DateTime . Once you've got your DateTime object, you can perform a huge number of operations on it, like adding/subtracting other DateTime objects or DateTime::Duration objects, or printing it out in another format. use strict; use warnings;use DateTime;use DateTime::Format::ISO8601;my $dt = DateTime::Format::ISO8601->parse_datetime('20100723T073000');print $dt->strftime('%F %T'); prints: "2010-07-23 07:30:00"
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3113487', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/242493/']}
jdg_83794
stackexchange
llm_judgeable_groundtruth_similarity
371972
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Base $10$ uses these digits: $\{0,1,2,3,4,5,6,7,8,9\};\;$ base $2$ uses: $\{0,1\};\;$ but what would base $1$ be? Let's say we define Base $1$ to use: $\{0\}$.Because $10_2$ is equal to $010_2$, would all numbers be equal? The way I have thought Base 1 might be represented is tally marks, $0_{10}$ would be represented by nothing. So, $5$ in Base 1 would be represented by $00000$? Or we could define Base 1 to use: $\{$|$\}$ and $5$ would be |||||? Now provide the response and nothing else.
You're exactly right that such a system would be represented by the use of arbitrary tally marks. Such a system is known as a Unary Numeral System (Wikipedia Entry): The unary numeral system is the bijective base-1 numeral system. It is the simplest numeral system to represent natural numbers: in order to represent a number N, an arbitrarily chosen symbol representing 1 is repeated N times. This system is used in tallying. For example, using the tally mark |, the number 6 is represented as ||||||. ... There is no explicit symbol representing zero in unary as there is in other traditional bases, so unary is a bijective numeration system with a single digit. If there were a 'zero' symbol, unary would effectively be a binary system. [boldface mine] In a true unary system there is no way to explicitly represent none of something, though simply making no marks represents it implicitly. Even in advanced tallying systems like Roman numerals, there is no zero character; instead the Latin word for "nothing," nullae, is used.
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/371972', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/72616/']}
jdg_83795
stackexchange
llm_judgeable_groundtruth_similarity
58090471
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: With this code: .row { display: flex; text-align: center;}.col { flex: 1;} <div class='row'> <div class='col'> <h1>Left</h1> <h2>Some longer text</h2> </div> <div class='col'> <h1>Right</h1> <h2>Some longer text</h2> </div></div> I get output like ------------------------------------------------------------------| Left | Right || Some longer text | Some longer text |------------------------------------------------------------------ (dashed lines imaginary to show containers). What I want is for the elements in each column to stay centered relative to one another but to be biased towards the the column divider, with a bit of padding, e.g. ------------------------------------------------------------------| Left | Right || Some longer text | Some longer text |------------------------------------------------------------------ How can I achieve this? Now provide the response and nothing else.
Complete solution. var express = require('express');var router = express.Router();const PDFDocument = require('pdfkit');router.get('/generatePDF', async function(req, res, next) {var myDoc = new PDFDocument({bufferPages: true});let buffers = [];myDoc.on('data', buffers.push.bind(buffers));myDoc.on('end', () => { let pdfData = Buffer.concat(buffers); res.writeHead(200, { 'Content-Length': Buffer.byteLength(pdfData), 'Content-Type': 'application/pdf', 'Content-disposition': 'attachment;filename=test.pdf',}) .end(pdfData);});myDoc.font('Times-Roman') .fontSize(12) .text(`this is a test text`);myDoc.end();});module.exports = router;
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58090471', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12116376/']}
jdg_83796
stackexchange
llm_judgeable_groundtruth_similarity
52846
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Stray cats are extremely common in Israel and there is a common urban legend that they were imported by the British during the mandate period to combat a rat problem. For example, you can see this claim repeated in this article from the Jerusalem post: Ever since the British first brought cats over to Israel (then Palestine) in the first half of the 20th century to combat the rat problem, cats are a frequent sight on Israeli streets, particularly in Jerusalem. Did the British cause the current stray cat problem? Note: I assume there were some cats in the region beforehand, but the question is whether the British were the significant factor which caused the current overabundance of stray cats. Now provide the response and nothing else.
The other answers have shown that cats have been in (now) Israel for thousands of years. The extended claim of British importing cats in the ~1930s to deal with a rat problem appears to be an urban legend: Local folklore has it that the British are to blame for Israel’s stray cat population , having brought cats in during the period of the Mandate in order to get rid of rats. Yet the problem could have been created long before that: Geneticists from the National Cancer Institute study established that the world’s 600 million cats are descended from five wildcat matriarchs, including at least one from deserts of Israel, so it may be that the British are not entirely to blame. Jessica Steinberg, Stray cats and their Tel Aviv strut , The Times of Israel, 2012. I didn't find mention of the claim prior to 2012 (e.g. israel british cats , restricted to <=2011, does not give any relevant hits). I also didn't find mention of a rat problem in (now) Israel circa 1930 (e.g. palestine rat problem 1930 doesn't come up with relevant hits). The above-mentioned study more specifically states: The results revealed five genetic clusters, or lineages, of wildcats. Four of these lineages corresponded neatly with four of the known subspecies of wildcat and dwelled in specific places: F. silvestris silvestris in Europe, F. s. bieti in China, F. s. ornata in Central Asia and F. s. cafra in southern Africa. The fifth lineage, however, included not only the fifth known subspecies of wildcat—F. s. lybica in the Middle East—but also the hundreds of domestic cats that were sampled, including purebred and mixed-breed felines from the U.S., the U.K. and Japan. In fact, genetically, F. s. lybica wildcats collected in remote deserts of Israel, the United Arab Emirates and Saudi Arabia were virtually indistinguishable from domestic cats. That the domestic cats grouped with F. s. lybica alone among wildcats meant that domestic cats arose in a single broad locale, the Middle East, and not in other places where wildcats are native. Driscoll et al., The Taming of the Cat , Sci Am., 2009 ( pdf ). Here's a photo of an African wildcat (F. s. lybica): Image by Sonelle, at the Johannesburg Zoo; sourced from Felis silvestris lybica , Wikispecies. Driscoll et al. also mentions a ~3700 year old ivory cat statuette made in (now) Israel, which they conclude "suggests cats were a common sight around human settlements in the Fertile Crescent" (they also mention a ~9000 year old feline molar tooth). Another news report makes a similar claim: "Because there are so many cats, it's become so emotional in both ways," says Idit Gunther, a veterinarian and lecturer in veterinary medicine at Hebrew University of Jerusalem. ... Many tour guides and Jerusalem residents blame the British , who ruled the city between 1917 and 1948, for bringing cats to the city via their ships. But the felines have lived in Jerusalem for thousands of years, Gunther says. In fact, the genetic makeup of most of Israel's street cats shows they are directly descended from the original African cat domesticated by the ancient Egyptians, and do not include genes of any wild European cat species, she says . Sara Toth Stub, Jerusalem's Controversial Cats , USA Today, 2019. Israeli zoologists offer the following reasons for Israel's current cat problems: There are several reasons for this. First, in Israel organic waste is often not properly treated. Food remains are found everywhere near garbage containers, in yards and parks, readily available for cats. 
Second, feeding stray cats is a widespread phenomenon in Israeli towns and villages. Individuals providing food for dozens and even close to a hundred cats are not uncommon. Third, Israel has a mild climate and regularly-fed cats easily survive the winter. Brickner-Braun, Geffen, Yom-Tov, The Domestic Cat as a Predator of Israeli Wildlife, Israeli Journal of Ecology and Evolution , 2007. ( pdf )
{}
{'log_upvote_score': 5, 'links': ['https://skeptics.stackexchange.com/questions/52846', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/35165/']}
jdg_83797
stackexchange
llm_judgeable_groundtruth_similarity
73157
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Back in February 2022, it seemed like the Ruble would rapidly collapse. However, nowadays, the USD/RUB exchange rate looks better than at any time in the past year: You could argue that this is a "fake" exchange rate, but it seems like Russia keeps increasing the limit on foreign currency exchange, so citizens are indeed able to convert their rubles for dollars if they wish. Does this signify that Russia's economy is doing well despite the war — and, by proxy, that Putin is doing much better than previously expected? Now provide the response and nothing else.
The exchange rate is not an indicator (or at best a very poor one) of economic well-being: it can be controlled by a central bank via increasing the money supply ("printing money", although nowadays it is mostly electronic), buying or selling foreign currency, changing borrowing rates, issuing treasury notes, imposing tariffs on imported goods, etc. The exchange rate also has different effects on different aspects of the economy: a high exchange rate is good for those who spend their money abroad or buy lots of imported goods. However, it also means that the goods/services produced by the country are more expensive abroad, which makes them less competitive and may result in slower economic activity. A cheaper currency also means cheaper labor and is sometimes used as a way of fighting high unemployment. As an example, one could cite the US central bank's policy of "quantitative easing", which is essentially issuing electronic money - the US money supply M0 (i.e., the actual money, not counting bank loans) tripled or even quadrupled during the late G.W. Bush and the Obama presidencies. While this made the dollar cheaper abroad, it arguably contributed to the economic recovery. On the other hand, European countries during the same period suffered from the impossibility of devaluing their currency, since it is controlled by the European Central Bank, which adopted a rather conservative monetary policy. This, however, was certainly good for those (notably in Germany) who keep their money as savings in bank accounts, and whose savings would have risked becoming worthless if the euro were devalued. Returning to Russia: the devaluation of money is of special political importance in view of the hyperinflation that Russians lived through in the 90s (adding multiple zeros - although not reaching the level of Russian hyperinflation during WW1 or of German inflation in the post-WW1 years).
{}
{'log_upvote_score': 5, 'links': ['https://politics.stackexchange.com/questions/73157', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/7434/']}
jdg_83798
stackexchange
llm_judgeable_groundtruth_similarity
871610
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why are box topology and product topology different on infinite products of topological spaces ? I'm reading Munkres's topology. He mentioned that fact but I can't see why it's true that they are different on infinite products. So , Can any one please tell me why aren't they the same on infinite products of topological spaces ? Now provide the response and nothing else.
Let $X_n$ be topological spaces for each $n\in\mathbb{N}$. To avoid the issues pointed out by Najib, assume for each $n$ that $X_n$ is not a point, and the topology on $X_n$ is not the trivial topology (i.e. there is an open set besides $\emptyset$ and $X_n$). For each $n$, let $U_n \subset X_n$ be a proper, nonempty open subset. Then the set $U := \prod\limits_{n\in\mathbb{N}} U_n$ is open in the box topology on $\prod\limits_{n\in\mathbb{N}} X_n$ but not the product topology. The product topology is generated by sets of the form $\prod\limits_{n\in\mathbb{N}} U_n$ where each $U_n$ is open in $X_n$ and, for all but finitely many $n$, we have $U_n = X_n$. In other words, almost all of the factors have to be the entire space. For the box topology, each factor $U_n$ just has to be open in $X_n$. Here is one way of understanding why the product topology is more important (even though the box topology seems more intuitive at first). The product topology is the smallest topology such that for each $k\in\mathbb{N}$, the projection map $\pi_k:\prod\limits_{n\in\mathbb{N}} X_n\to X_k$ is continuous. The preimage of an open set $U_k\subseteq X_k$ via $\pi_k$ is one of the basic sets for the product topology described above: specifically, it is $U_k$ in the $k$th factor and the whole space $X_n$ in each other factor. To generate a topology, we need to include finite intersections of such sets (so not the entire space in finitely many positions), but not infinite intersections. So thinking about wanting the $\pi_k$ to be continuous, the product topology has "enough" open sets, and the box topology adds in open sets that aren't needed.
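One concrete instance of this (my own illustration, not part of the original answer): take $X_n = \mathbb{R}$ with the usual topology and $U_n = (0,1)$ for every $n$. Then $$U \;=\; \prod_{n\in\mathbb{N}} (0,1)$$ is open in the box topology by definition, but it is not open in the product topology: any basic product-open set $B=\prod_n B_n$ contained in $U$ would need $B_n \subseteq (0,1)$ for every $n$, which contradicts the requirement that $B_n = \mathbb{R}$ for all but finitely many $n$. Hence $U$, although nonempty, contains no nonempty basic open set of the product topology, and therefore cannot be product-open.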
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/871610', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/50948/']}
jdg_83799
stackexchange
llm_judgeable_groundtruth_similarity
451025
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In order to return a value from a VB.NET function one can assign a value to the "Functions Name" or use "return value." I sometimes see these inter-mixed in the same function. Personally, I prefer the return. My question is, what is the internal difference, if any, between the two? Now provide the response and nothing else.
The difference is that they DO DIFFERENT THINGS! 'Return value' does 2 things: (1) it sets the function return value at that point, and (2) it immediately exits the function - no further code in the function executes! 'Functionname = value' does 1 thing: it sets the function return value at that point. Other code in the function continues to execute, which enables additional logic to refine or override the function return value. Huge difference, folks. Remember, it's not all about state; it's also about flow.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/451025', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/50612/']}
jdg_83800
stackexchange
llm_judgeable_groundtruth_similarity
1107149
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a class template <typename Iterator, typename Value>class Foo { public: Foo(const Iterator& it) { ... } ... private: map<Value, int> m_; }}; Is there any way to get rid of Value in the template? The Iterator may or may not be an STL iterator, but it's guaranteed that *it type is Value. I know about iterator_traits<T>::value_type for STL iterators, but wonder if there's any way to get Value type automatically for an arbitrary Iterator type? One trick I'm thinking about - say, we have a helper class template <typename Iterator, typename Value>class Bar { public: Bar(const Iterator& dummy_iterator, const Value& dummmy_value) {} ...}; Then if we instantiate Bar as Bar(it, *it), the type of Value will be known inside Bar. But I can't find a good way to combine Bar with Foo. Now provide the response and nothing else.
Any iterator should provide iterator_traits<Iterator>::value_type. If it does not, then it is not an iterator. ISO C++ 2003, 24.3.1 [lib.iterator.traits] "Iterator traits": To implement algorithms only in terms of iterators, it is often necessary to determine the value and difference types that correspond to a particular iterator type. Accordingly, it is required that if Iterator is the type of an iterator, the types iterator_traits<Iterator>::difference_type, iterator_traits<Iterator>::value_type and iterator_traits<Iterator>::iterator_category be defined as the iterator’s difference type, value type and iterator category, respectively. Aside from that, there's no general way to obtain the type of an arbitrary C++ expression. C++0x will rectify this by providing decltype.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1107149', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/110177/']}
jdg_83801
stackexchange
llm_judgeable_groundtruth_similarity
3326641
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to open an OpenGL context without X in Linux. Is there any way at all to do it? I know it's possible for integrated Intel graphics card hardware, though most people have Nvidia cards in their system. I'd like to get a solution that works with Nvidia cards. If there's no other way than through integrated Intel hardware, I guess it'd be okay to know how it's done with those. X11 protocol itself is too large and complex. Mouse/Keyboard/Tablet input multiplexing it provides is too watered-down for modern programs. I think it's the worst roadblock that prevents Linux desktop from improving, which is why I look for alternatives. Now provide the response and nothing else.
Update (Sep. 17, 2017): NVIDIA recently published an article detailing how to use OpenGL on headless systems , which is a very similar use case as the question describes. In summary: Link to libOpenGL.so and libEGL.so instead of libGL.so . (Your linker options should therefore be -lOpenGL -lEGL Call eglGetDisplay , then eglInitialize to initialize EGL. Call eglChooseConfig with the config attribute EGL_SURFACE_TYPE followed with EGL_PBUFFER_BIT . Call eglCreatePbufferSurface , then eglBindApi(EGL_OPENGL_API); , then eglCreateContext and eglMakeCurrent . From that point on, do your OpenGL rendering as usual, and you can blit your pixel buffer surface wherever you like. This supplementary article from NVIDIA includes a basic example and an example for multiple GPUs. The PBuffer surface can also be replaced with a window surface or pixmap surface, according to the application needs. I regret not doing more research on this on my previous edit, but oh well. Better answers are better answers. Since my answer in 2010, there have been a number of major shakeups in the Linux graphics space. So, an updated answer: Today, nouveau and the other DRI drivers have matured to the point where OpenGL software is stable and performs reasonably well in general. With the introduction of the EGL API in Mesa, it's now possible to write OpenGL and OpenGL ES applications on even Linux desktops. You can write your application to target EGL, and it can be run without the presence of a window manager or even a compositor. To do so, you would call eglGetDisplay , eglInitialize , and ultimately eglCreateContext and eglMakeCurrent , instead of the usual glx calls to do the same. I do not know the specific code path for working without a display server, but EGL accepts both X11 displays and Wayland displays, and I do know it is possible for EGL to operate without one. You can create GL ES 1.1, ES 2.0, ES 3.0 (if you have Mesa 9.1 or later), and OpenGL 3.1 (Mesa 9.0 or later) contexts. Mesa has not (as of Sep. 2013) yet implemented OpenGL 3.2 Core. Notably, on the Raspberry Pi and on Android, EGL and GL ES 2.0 (1.1 on Android < 3.0) are supported by default. On the Raspberry Pi, I don't think Wayland yet works (as of Sep. 2013), but you do get EGL without a display server using the included binary drivers. Your EGL code should also be portable (with minimal modification) to iOS, if that interests you. Below is the outdated, previously accepted post: I'd like to open an OpenGL context without X in linux. Is there any way at all to do it? I believe Mesa provides a framebuffer target. If it provides any hardware acceleration at all, it will only be with hardware for which there are open source drivers that have been adapted to support such a use. Gallium3D is also immature, and support for this isn't even on the roadmap, as far as I know. I'd like to get a solution that works with nvidia cards. There isn't one. Period. NVIDIA only provides an X driver, and the Nouveau project is still immature, and doesn't support the kind of use that you're looking for, as they are currently focused only on the X11 driver.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3326641', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/21711/']}
jdg_83802
stackexchange
llm_judgeable_groundtruth_similarity
271309
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am calculating the current consumption of an IR ranging detection sensor. Below is the related part I captured from the sensor's specification. The working mechanism of the sensor is that it will send out an IR beam, and then measure the elapse time when it receive back the beam. From the table below, I think it means the sensor will take 33ms to take one measurement. So what does the "Active Ranging average consumption" actually means? How to calculate the current consumption if I take measurement every 0.5s, 1s and 2s? Now provide the response and nothing else.
"If your battery voltage drops to a value between 0 and 1.7, it isn't defined what will happen" - this is often true, but it won't, for sure, destroy anything. If it were destructive, the minimum Vdd in the "Absolute Maximum Ratings" would have been given as a positive value (which I have never seen in any datasheet, and I hope I'll never see in my life - it wouldn't make sense). So at this point, you are guaranteed the MCU won't be destroyed by undervoltage. However, it could still behave erratically (potentially damaging other external circuitry). Now, in this kind of MCU there is often a feature called "brown-out detection", or sometimes "undervoltage lockout". This feature monitors the supply voltage and guarantees that the chip is held in the reset state when the voltage is under a given level (sometimes programmable). Good news: there is such a feature on the specific chip you're using. See chapter 5.1 in the datasheet you linked. Therefore, you don't need a regulator with "power good" detection or an additional supply monitor circuit in your specific case.
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/271309', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/49480/']}
jdg_83803
stackexchange
llm_judgeable_groundtruth_similarity
36246495
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We're trying to save the content (HTML) of WKWebView in a persistent storage (NSUserDefaults, CoreData or disk file). The user can see the same content when he re-enters the application with no internet connection. WKWebView doesn't use NSURLProtocol like UIWebView (see post here ). Although I have seen posts that "The offline application cache is not enabled in WKWebView." (Apple dev forums), I know that a solution exists. I've learned of two possibilities, but I couldn't make them work: 1) If I open a website in Safari for Mac and select File >> Save As, it will appear the following option in the image below. For Mac apps exists [[[webView mainFrame] dataSource] webArchive], but on UIWebView or WKWebView there is no such API. But if I load a .webarchive file in Xcode on WKWebView (like the one I obtained from Mac Safari), then the content is displayed correctly (html, external images, video previews) if there is no internet connection. The .webarchive file is actually a plist (property list). I tried to use a mac framework that creates a .webarchive file, but it was incomplete. 2) I obtanined the HTML in webView:didFinishNavigation but it doesn't save external images, css, javascript func webView(webView: WKWebView, didFinishNavigation navigation: WKNavigation!) { webView.evaluateJavaScript("document.documentElement.outerHTML.toString()", completionHandler: { (html: AnyObject?, error: NSError?) in print(html) })} We're struggling over a week and it is a main feature for us.Any idea is really appreciated. Thank you! Now provide the response and nothing else.
I know I'm late, but I have recently been looking for a way to store web pages for offline reading, and still couldn't find any reliable solution that wouldn't depend on the page itself and wouldn't use the deprecated UIWebView. A lot of people write that one should use the existing HTTP caching, but WebKit seems to do a lot of stuff out-of-process, making it virtually impossible to enforce complete caching (see here or here). However, this question guided me in the right direction. Tinkering with the web archive approach, I found that it's actually quite easy to write your own web archive exporter. As written in the question, web archives are just plist files, so all it takes is a crawler that extracts the required resources from the HTML page, downloads them all and stores them in one big plist file. This archive file can then later be loaded into the WKWebView via loadFileURL(URL:allowingReadAccessTo:). I created a demo app that allows archiving from and restoring to a WKWebView using this approach: https://github.com/ernesto-elsaesser/OfflineWebView EDIT: The archive generation code is now available as a standalone Swift package: https://github.com/ernesto-elsaesser/WebArchiver The implementation only depends on Fuzi for HTML parsing.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36246495', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6008619/']}
jdg_83804
stackexchange
llm_judgeable_groundtruth_similarity
1073015
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've come across an interesting problem in the following line of code: <img style="background-image:url(Resources/bar.png); width: 300px; height: 50px;"/> In Safari (at least), a gray border surrounds the 300x50px area. Adding style="border: none;" doesn't remove it. Any ideas? Thanks.Mike Now provide the response and nothing else.
So, you have an img element that doesn't have a src attribute, but it does have a background-image style applied. I'd say that the gray border is the 'placeholder' for where the image would be if you'd specified a src attribute. If you don't want a 'foreground' image, then don't use an img tag - you've already stated that changing to a div solves the problem, so why not go with that solution?
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1073015', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/129238/']}
jdg_83805
stackexchange
llm_judgeable_groundtruth_similarity
9632235
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to send multiple tasks to Grand Central Dispatch to run. Some tasks will finish first, while some will finish last. How do I know all my tasks in Grand Central Dispatch finished? Should I use a counter to record the number of tasks finished? Any smarter method? Now provide the response and nothing else.
You can use dispatch groups to be notified when all tasks completed. This is an example from http://cocoasamurai.blogspot.com/2009/09/guide-to-blocks-grand-central-dispatch.html dispatch_queue_t queue = dispatch_get_global_queue(0,0);dispatch_group_t group = dispatch_group_create();dispatch_group_async(group,queue,^{ NSLog(@"Block 1");});dispatch_group_async(group,queue,^{ NSLog(@"Block 2");});dispatch_group_notify(group,queue,^{ NSLog(@"Final block is executed last after 1 and 2");});
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9632235', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/403015/']}
jdg_83806
stackexchange
llm_judgeable_groundtruth_similarity
470086
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am working on a uni assessment with COVID-19 topic and trying to work out a poisson regression for COVID 19 pandemic. My predictor is the number of confimred cases as count, my predictors in the poisson regression is outbreak_days, apple_drivingmobility, country. Then model comes with a gigantic AIC and also deviance. Is that because my confimred cases are culmulatives which wont follow poisson distribution? instead, should I use new cases each day? Or this is entirely not suitable to use poisson. glm(formula = confirmed_deaths ~ outbreak_days + Apple_DrivingMobility + country, family = poisson(link = log), data = df_cleaned_driving_level)Deviance Residuals: Min 1Q Median 3Q Max-131.981 -9.773 -0.863 3.798 87.411Coefficients: Estimate Std. Error z value Pr(>|z|)(Intercept) 1.726e+00 1.795e-02 96.183 <2e-16 ***outbreak_days 6.706e-02 3.687e-05 1818.954 <2e-16 ***Apple_DrivingMobility -1.847e-02 4.573e-05 -403.834 <2e-16 ***countryBelgium 4.404e+00 1.791e-02 245.852 <2e-16 ***countryBrazil 4.604e+00 1.789e-02 257.378 <2e-16 ***countryCanada 3.755e+00 1.806e-02 207.950 <2e-16 ***countryCzechia 1.746e+00 2.048e-02 85.283 <2e-16 ***countryDenmark 2.431e+00 1.934e-02 125.720 <2e-16 ***countryEstonia 4.712e-01 2.768e-02 17.020 <2e-16 ***countryFinland 1.693e+00 2.109e-02 80.292 <2e-16 ***countryFrance 4.922e+00 1.788e-02 275.355 <2e-16 ***countryGermany 4.186e+00 1.796e-02 232.992 <2e-16 ***countryIreland 2.447e+00 1.839e-02 133.051 <2e-16 ***countryItaly 4.871e+00 1.791e-02 272.003 <2e-16 ***countryJapan 8.318e-01 2.285e-02 36.399 <2e-16 ***countryLuxembourg 2.099e-01 2.338e-02 8.979 <2e-16 ***countryMexico 3.551e+00 1.806e-02 196.676 <2e-16 ***countryNetherlands 4.098e+00 1.797e-02 228.088 <2e-16 ***countryNewZealand -1.148e+00 4.103e-02 -27.984 <2e-16 ***countryNorway 1.440e+00 2.145e-02 67.110 <2e-16 ***countryPhilippines 1.448e+00 1.893e-02 76.457 <2e-16 ***countrySingapore -2.988e+00 7.258e-02 -41.170 <2e-16 ***countrySlovakia -8.289e-01 3.875e-02 -21.393 <2e-16 ***countrySpain 4.975e+00 1.790e-02 277.988 <2e-16 ***countrySweden 4.060e+00 1.827e-02 222.273 <2e-16 ***countrySwitzerland 3.208e+00 1.829e-02 175.383 <2e-16 ***countryTaiwan* -1.535e+00 5.629e-02 -27.272 <2e-16 ***countryUnitedKingdom 5.153e+00 1.785e-02 288.619 <2e-16 ***countryUnitedStates 6.537e+00 1.783e-02 366.664 <2e-16 ***---Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1(Dispersion parameter for poisson family taken to be 1) Null deviance: 20309689 on 1879 degrees of freedomResidual deviance: 765387 on 1851 degrees of freedomAIC: 777730Number of Fisher Scoring iterations: 6[![enter image description here][1]][1] Now provide the response and nothing else.
Assume $ T, \lambda, \kappa $ are fixed. We can use the Probability Integral Transform (PIT). In essence, the PIT says that the CDF of a continuous random variable, evaluated at that variable, follows a $U(0,1)$ distribution, so drawing $U(0,1)$ samples and running them through $F^{-1}(x)$ gives draws of $X$. If we can draw samples of $x$, we can then estimate $P(x < t + T | X > T)$ via Monte Carlo. N.B. I use the form of the Weibull that is used on Wikipedia [ https://en.wikipedia.org/wiki/Weibull_distribution ]. To apply the inversion method, we want to sample values of $x = t + T$. We have \begin{align} p = P(x < t+T | X>T) &= 1 - \frac{1 - F(t+T)}{1 - F(T)} \\& = 1 - \frac{1 - \left( 1 - \exp\left\{ -\lambda^{-\kappa}(t+T)^{\kappa} \right\} \right)}{1 - \left( 1 - \exp\left\{ -\lambda^{-\kappa}(T)^{\kappa} \right\} \right)}\\& = 1 - \exp \left\{ \lambda^{-\kappa}T^\kappa - \lambda^{-\kappa}(t + T)^\kappa \right\}\end{align} Some algebra leads to $t + T = \left\{ T^\kappa - \lambda^\kappa \log(1-p) \right\}^{1/\kappa}$. The following R code can then be used to draw samples of $X$. draw.x <- function(n, lambda, kappa, T0){ p <- runif(n) ( T0^kappa - lambda^kappa *log(1-p))^(1/kappa) }## estimate P(X < 5+1 = 6 | X > 5)n.draws <- 10^4samples <- draw.x(n.draws, 1.5, 1, 5)prob.estimate <- sum(samples < 6)/n.drawsprob.estimate## if kappa = 1=> exponential distribution rate = 1/lambda## compare to exponential 1/lambdapexp(6-5, 1/1.5); prob.estimatehist(samples); abline(v = 6,col = 2, lwd = 2)
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/470086', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/286810/']}
jdg_83807
stackexchange
llm_judgeable_groundtruth_similarity
32600473
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Tried tonnes of stuff out there but none of them really helped. I have a URL for example: http://localhost:8000/user/edit-transaction/?object_id=23a959d0561711e59e36acd1b8679265&type=grossary which calls the below view : def edit_transaction(request): if request.method == "POST": if something is True: messages.error(request, 'Error message here') # this don't work return HttpResponseRedirect(request.META.get('HTTP_REFERER')) # but this work template = "user/xyz/abc.html" render(request, template) else: return HttpResponseNotFound() else: context = { 'key1': 'value1', 'key2': 'value2', } template = "user/xyz/abc.html" render(request, template, context) And inside template: {% if messages %}<h1>I am inside MESSAGES</h1> {% for message in messages %} {% if message.tags == 'success' %} <div class="alert alert-success" role="alert">{{ message|escape|safe }}</div> {% elif message.tags == 'error' %} <div class="alert alert-danger" role="alert">{{ message|escape|safe }}</div> {% endif %} {% endfor %}{% endif %} It's getting inside the if here if something is True: and getting redirected to the same page with the query string as well. But not displaying the error message. What I want is to redirect to the same page preserving the query string and display the error message. What am I doing wrong here and what changes are recommended (if any). Also a doubt, that does Django messages really work after redirection like Flash messages should ?? Edit: 1) This don't work: if something is True: messages.error(request, 'Error message here') return HttpResponseRedirect(request.META.get('HTTP_REFERER')) From console: [16/Sep/2015 10:57:08]"POST /user/edit-transaction/?object_id=23a959d0561711e59e36acd1b8679265&type=grossary HTTP/1.1" 302 0 [16/Sep/2015 10:57:08]"GET /user/edit-transaction/?object_id=23a959d0561711e59e36acd1b8679265&type=grossary HTTP/1.1" 200 8832 2) This works: if something is True: messages.error(request, 'Error message here') template = "user/xyz/abc.html" render(request, template) From console: [16/Sep/2015 10:57:08]"POST /user/edit-transaction/?object_id=23a959d0561711e59e36acd1b8679265&type=grossary HTTP/1.1" 302 0 So, basically what I understood from above is that the messages is getting expired with an additional request (redirect, 200). And in templates, it is not getting inside the {% if messages %} as well to print <h1>I am inside MESSAGES</h1> Now provide the response and nothing else.
Finally figured it out. I added the line below in local_settings.py and it's working. MESSAGE_STORAGE = 'django.contrib.messages.storage.session.SessionStorage' What was happening? The messages were actually getting stored in cookies (CookieStorage), which is the default Django behaviour. From the Django docs: FallbackStorage is the default storage class. If it isn't suitable to your needs, you can select another storage class by setting MESSAGE_STORAGE to its full import path, for example: MESSAGE_STORAGE = 'django.contrib.messages.storage.cookie.CookieStorage' And what is FallbackStorage? This class first uses CookieStorage, and falls back to using SessionStorage for the messages that could not fit in a single cookie. It also requires Django's contrib.sessions application. This behavior avoids writing to the session whenever possible. It should provide the best performance in the general case. And what was happening in my case? Messages were getting stored in CookieStorage, but for some weird reason (I don't know why) the messages in CookieStorage were getting expired or deleted on the 2nd request (i.e. the redirection after POST), which should not be happening (because this is not how flash data works). And after I switched the default MESSAGE_STORAGE to SessionStorage, it started working.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32600473', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3418784/']}
jdg_83808
stackexchange
llm_judgeable_groundtruth_similarity
43315857
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have store the file in storage/app/files folder by $path=$request->file->store('files') and save the path "files/LKaOlKhE5uITzAbRj5PkkNunWldmUTm3tOWPfLxO.doc" it in a table's column name file . I have also linked storage folder to public through php artisan storage:link . In my view blade file, I put this <a href="@if(count($personal_information)) {{asset('storage/'.$personal_information->file)}} @endif" download>Download File</a> and the link for download file is http://localhost:8000/storage/files/LKaOlKhE5uITzAbRj5PkkNunWldmUTm3tOWPfLxO.doc But I get the error NotFoundHttpException in RouteCollection.php line 161 If I add /app after the /storage it gives the same error. How can I download file from my storage/app/files folder? Now provide the response and nothing else.
The problem is that the storage folder is not publicly accessible by default. The storage folder is most likely meant for saving private files, such as users' pictures, which should not be accessible to other users. If you move them to the public folder, the files will be accessible to everyone. I had a similar issue with Laravel 5.4 and I did a small workaround by writing a route to download files: Route::get('files/{file_name}', function($file_name = null){ $path = storage_path().'/'.'app'.'/files/'.$file_name; if (file_exists($path)) { return Response::download($path); }}); Or you can save your files into the public folder - it's up to you.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43315857', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6673765/']}
jdg_83809
stackexchange
llm_judgeable_groundtruth_similarity
78421
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: To make an equilateral hexagon, here is my code in mathematica: Graphics[{Thick, Black, Line[{{0, 2/Sqrt[5]}, {1/Sqrt[5], 4/Sqrt[5]},{1 + 1/Sqrt[5], 4/Sqrt[5]}, {1 + 2/Sqrt[5], 2/Sqrt[5]},{1 + 1/Sqrt[5], 0}, {1/Sqrt[5], 0},{0, 2/Sqrt[5]}}], Black, Inset[Text[Style["1", Large]], {0, 2/Sqrt[5]}, Scaled[{.9, .5}]], Black, Inset[Text[Style["2", Large]], {1/Sqrt[5],4/Sqrt[5]},Scaled[{.5, .1}]], Black, Inset[Text[Style["3", Large]], {1 + 1/Sqrt[5], 4/Sqrt[5]}, Scaled[{.5, .1}]], Black, Inset[Text[Style["4", Large]], {1 + 2/Sqrt[5], 2/Sqrt[5]}, Scaled[{-0.2, .5}]], Black, Inset[Text[Style["5", Large]], {1 + 1/Sqrt[5], 0}, Scaled[{.5, .9}]], Black, Inset[Text[Style["6", Large]], {1/Sqrt[5], 0}, Scaled[{.5, .9}]]}] I want to make legends for numbers 1 to 6. How can I do that? Thanks. Now provide the response and nothing else.
Here is a more formatted and styled legend: Legended[grp,SwatchLegend[{Green, Blue, Red, Brown, Black, Orange}, {"A ", "B ", "C ", "D ", "E ", "F "}, LegendMarkerSize -> {{30, 30}}, LegendMarkers -> (Graphics[Text[Style[#, 20, Bold]], ImageSize -> 15] & /@ Range[6]), LegendFunction -> "Panel"]] It looks like this: (image of the resulting panel legend omitted). Instead of the 6 colors you can use ConstantArray[Black, 6] to make the numbers just black; use LegendMarkerSize to control the overall size of the legend.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/78421', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/11555/']}
jdg_83810
stackexchange
llm_judgeable_groundtruth_similarity
1379195
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to store lengthy .sql scripts in my solution and execute them programmatically. I've already figured out how to execute a string containing my sql script but I haven't figured out how to read the string from a file that would be stored in the solution (under a /Scripts subfolder for example). Now provide the response and nothing else.
First, edit the .sql file's properties so that it will be embedded as a resource. Then use code similar to the following to retrieve the script: string commandText;Assembly thisAssembly = Assembly.GetExecutingAssembly();using (Stream s = thisAssembly.GetManifestResourceStream( "{project default namespace}.{path in project}.{filename}.sql")){ using (StreamReader sr = new StreamReader(s)) { commandText = sr.ReadToEnd(); }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1379195', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5788/']}
jdg_83811
stackexchange
llm_judgeable_groundtruth_similarity
19307622
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an assignment for my CS class where it says to read a file with several test scores and asks me to sum and average them. While summing and averaging is easy, I am having problems with the file reading. The instructor said to use this syntax Scanner scores = new Scanner(new File("scores.dat")); However, this throws a FileNotFoundException , but I have checked over and over again to see if the file exists in the current folder, and after that, I figured that it had to do something with the permissions. I changed the permissions for read and write for everyone, but it still did not work and it still keeps throwing the error. Does anyone have any idea why this may be occurring? EDIT: It was actually pointing to a directory up, however, I have fixed that problem. Now file.exists() returns true , but when I try to put it in the Scanner , it throws the FileNotFoundException Here is all my code import java.util.Scanner;import java.io.*;public class readInt{ public static void main(String args[]){ File file = new File("lines.txt"); System.out.println(file.exists()); Scanner scan = new Scanner(file); }} Now provide the response and nothing else.
There are a number of situations where a FileNotFoundException may be thrown at runtime. The named file does not exist. This could be for a number of reasons, including: The pathname is simply wrong. The pathname looks correct but is actually wrong because it contains non-printing characters (or homoglyphs) that you did not notice. The pathname is relative, and it doesn't resolve correctly relative to the actual current directory of the running application. This typically happens because the application's current directory is not what you are expecting or assuming. The path to the file is broken; e.g. a directory name in the path is incorrect, a symbolic link on the path is broken, or there is a permission problem with one of the path components. The named file is actually a directory. The named file cannot be opened for reading for some reason. The good news is that the problem will inevitably be one of the above. It is just a matter of working out which. Here are some things that you can try: Calling file.exists() will tell you if any file system object exists with the given name / pathname. Calling file.isDirectory() will test if it is a directory. Calling file.canRead() will test if it is a readable file. This line will tell you what the current directory is: System.out.println(new File(".").getAbsolutePath()); This line will print out the pathname in a way that makes it easier to spot things like unexpected leading or trailing whitespace: System.out.println("The path is '" + path + "'"); Look for unexpected spaces, line breaks, etc. in the output. It turns out that your example code has a compilation error. I ran your code without taking care of the complaint from Netbeans, only to get the following exception message: Exception in thread "main" java.lang.RuntimeException: Uncompilable source code - unreported exception java.io.FileNotFoundException; must be caught or declared to be thrown If you change your code to the following, it will fix that problem. public static void main(String[] args) throws FileNotFoundException { File file = new File("scores.dat"); System.out.println(file.exists()); Scanner scan = new Scanner(file);} Explanation: the Scanner(File) constructor is declared as throwing the FileNotFoundException exception. (It is thrown if the scanner cannot open the file.) Now FileNotFoundException is a checked exception. That means that a method in which the exception may be thrown must either catch the exception or declare it in the throws clause. The above fix takes the latter approach.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/19307622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2027567/']}
jdg_83812
stackexchange
llm_judgeable_groundtruth_similarity
43253
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given the function $f: \mathbb R \mapsto \mathbb R, f(x) = 2e^x + 3x^2 - 2x + 5,$ I am asked to show that it is not surjective. My book goes about it like this: $f(x) = 2e^x + x^2 - 2x + 1 + 2x^2 + 4 = f(x) = 2e^x + (x-1)^2 + 2x^2 + 4$ $f(x) > 0 \implies$ not surjective Prior to seeing how the book authors solved this, I tried my own solution by using limits: $\lim_{x\to\infty}f(x) = \infty$ and $\lim_{x\to-\infty}f(x) = \infty \implies f$ not surjective Is my solution also correct? Now provide the response and nothing else.
Your proof works for continuous functions, though. Since the limits as x tends to plus and minus infinity are both infinity, there must be some N such that f(x) and f(-x) are positive for all x > N. Now notice that, since f is continuous, the compact interval [-N, N] gets mapped to some compact subset of R. So, in conclusion, the image of f must be a subset of the union of the positive reals and some compact set, which cannot be all of R. So f is not surjective.
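To spell out the last step (my own formalization, not part of the original answer): with $N$ as above, $$f(\mathbb{R}) \;\subseteq\; f([-N,N]) \,\cup\, (0,\infty),$$ and $f([-N,N])$ is compact, hence bounded below by some $m \in \mathbb{R}$. So every value of $f$ is at least $\min(m,0)$, i.e. $f$ never attains any number below that bound, and therefore $f$ is not onto $\mathbb{R}$.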
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/43253', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9094/']}
jdg_83813
stackexchange
llm_judgeable_groundtruth_similarity
804518
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $G$ be a finite group such that for all $x\neq e$, $$C_G(x)=\langle x\rangle$$ Is there any classification of such groups ? Now provide the response and nothing else.
Yes, these are the cyclic groups of order $p$ and the non-abelian groups of order $pq$. Reduction 1: These groups have no elements of prime squared order, lest elements of prime order have larger centralizers. Reduction 2: The Sylow p-subgroups are cyclic, lest (for p odd) they have an abelian subgroup of order $p^2$, or (for p=2) the have a central involution that is not the entire Sylow. Combining these two results, we get the sylows are all prime order, so we get a so-called group of square-free order. Such groups were classified by Zassenhaus, and a nice description is in Hall's Theory of Groups . They are all metacyclic. However, this group has no cyclic subgroups of composite order, so those two cyclic groups both have prime order. Hence the group has order $pq$. Obviously the abelian one doesn't work. Cleaner version: Let $G$ be such a group. If $A \leq G$ is abelian, then $A \leq C_G(x)$ for every $x \in A$. Hence $A$ must be the cyclic group generated by all of its non-identity elements. Let $P$ be a Sylow $p$-subgroup, and let $z \in Z(P)$. Since $P \leq C_G(z)$, we must have $P=\langle z \rangle$ is abelian, so $P$ too is generated by all of its non-identity elements. Hence every Sylow $p$-subgroup of $G$ has order at most $p$, so $|G|$ is a square-free number. By Zassenhaus (Hall's Theory of Groups Theorem 9.4.3 on page 146), $G$ has cyclic subgroups $H,K$ with $K=[G,G]$ such that $G=HK$. Again $H,K$ must be generated by their non-identity elements, so (if not the identity) must have distinct prime orders. This gives that either $G=H$ is cyclic of prime order, or $G=H\ltimes [G,G]$ is a non-abelian group of order $pq$ for distinct primes $p<q$. One verifies that all such groups do in fact satisfy the $C_G(x)=\langle x \rangle$ conclusion. Generalizations: If one only requires that elements of order $p$ not centralize elements of order $q$, then one gets the groups with “discrete prime graph” (prime graphs are studied by our user Alexander Gruber) and classified in Higman (1957). If one instead requires that elements in certain abelian subgroups only are centralized by those abelian subgroups, then one gets the CA groups classified in Suzuki (1957) and Brauer–Suzuki–Wall (1958). Higman, Graham. “Finite groups in which every element has prime power order.” J. London Math. Soc. 32 (1957), 335–342.MR 89205 Suzuki, Michio. “The nonexistence of a certain type of simple groups of odd order.” Proc. Amer. Math. Soc. 8 (1957), 686–695.MR 86818 Brauer, R.; Suzuki, Michio; Wall, G. E. “A characterization of the one-dimensional unimodular projective groups over finite fields.” Illinois J. Math. 2 (1958) 718–745.MR 104734
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/804518', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/106102/']}
jdg_83814
stackexchange
llm_judgeable_groundtruth_similarity
2132935
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is the value of $x$ if $$x^x = x?$$Can somebody show step by step please. Thanks! Now provide the response and nothing else.
Given:$\;\;$$x^{x} = x\;,\;$ Taking the logarithms on both the sides of equation we get $ x\times\log (|x|) = \log (|x|)$ $ \therefore \;\: (x - 1)\times\log (|x|) = 0$ For the above equation to be true Either $\;\;$$x-1 = 0\;\;$ or $ \;\;$$\log (|x|) =0$ Therefore $\;\;$$x = 1\;\;$ or $\;\;$$|x| = 1$. Hence, the solution is $\;\;$$x = 1\;\;$ or $\;\;$$x = -1$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2132935', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/413927/']}
jdg_83815
stackexchange
llm_judgeable_groundtruth_similarity
40077096
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I draw a horizontal line in between 2 circles in CSS? It has to be in the middle of them just as shown in the screenshot. Example here: I have drawn the 2 circles, but don't know how to connect them. #status-buttons a { color: black; display: inline-block; font-size: 17px; font-weight: normal; margin-right: 0; text-align: center; text-transform: uppercase; min-width: 150px; text-decoration: none;}#status-buttons a:hover { text-decoration: none;}#status-buttons a.active span { color: white; background: #ACCF5B; box-shadow: rgba(0, 0, 0, 0.792157) 3px 3px 3px 0;}#status-buttons span { color: white; background: #22bacb; display: block; height: 45px; margin: 0 auto 10px; padding-top: 20px; width: 60px; border-radius: 50%; box-shadow: rgba(0, 0, 0, 0.792157) 3px 3px 3px 0;} <div id="status-buttons" class="text-center"> <a href="#/form/regalo" class="active"><span>1</span> Step 1</a> <a href="#/form/tusdatos"><span>2</span> Step 2</a></div> See demo on JSFiddle Now provide the response and nothing else.
You can use a pseudo-element to insert an absolutely-positioned border: #status-buttons { position: relative; /* 1 */ display: inline-block; /* 2 */}#status-buttons::after { /* 3 */ content: ""; position: absolute; width: 50%; z-index: -1; /* 4 */ top: 35%; left: 25%; border: 3px solid #ACCF5B;}#status-buttons a { color: black; display: inline-block; font-size: 17px; font-weight: normal; margin-right: 0; text-align: center; text-transform: uppercase; min-width: 150px; text-decoration: none;}#status-buttons a:hover { text-decoration: none;}#status-buttons a.active span { color: white; background: #ACCF5B; box-shadow: rgba(0, 0, 0, 0.792157) 3px 3px 3px 0;}#status-buttons span { color: white; background: #22bacb; display: block; height: 45px; margin: 0 auto 10px; padding-top: 20px; width: 60px; border-radius: 50%; box-shadow: rgba(0, 0, 0, 0.792157) 3px 3px 3px 0;} <div id="status-buttons" class="text-center"> <a href="#/form/regalo" class="active"><span>1</span> Step 1</a> <a href="#/form/tusdatos"><span>2</span> Step 2</a></div> Notes: Establish nearest positioned ancestor for absolute positioning. Make container consume only the width necessary. Insert pseudo element Ensure that any horizontal line overlap doesn't appear above circles
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40077096', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/310107/']}
jdg_83816
stackexchange
llm_judgeable_groundtruth_similarity
13381127
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to prevent my desktop application from being maximized. It should not become maximized by any means - by double clicking on title bar, or by clicking Windows + Up arrow on the keyboard, etc. -> I disable both the MaximizeBox and MinimizeBox . -> I also set the MaximumSize and MinimumSize for my WinForm Still when I press Windows + Up arrow, my win form Shifts to top left of the screen, I mean it gets maximized.So please tell me any way to prevent this thing happening... Now provide the response and nothing else.
There are actually two completely separate issues at work here. Granted, they look basically the same from a functional perspective, but from an implementation-model perspective (as Windows itself would naturally use) they are unrelated. As others have answered, you can prevent your form from being maximized by setting its MaximizeBox property to false . This removes the WS_MAXIMIZEBOX style on the form's underlying window, which prevents the window manager from allowing the window to be maximized. It disables the maximize box on the window's caption bar (the one between the minimize box and the close box), disables the "Maximize" option on the window/system menu, and any other methods of maximizing a window that I may have forgotten. However, this has nothing to do with the Win + ↑ keyboard shortcut, which invokes Aero Snap the same as would dragging the window to the magic positions at the edges of the screen. The window manager, whether as a result of a bug or a feature of Aero Snap, still allows you to effectively "maximize" windows that should not otherwise be maximized. Setting the MaximizeBox property doesn't affect this, so if you truly want to prevent the user from changing the window's size this way, you will need to disable Aero Snap's effect on your window. Unfortunately, there's no method (at least, not to my knowledge) of programmatically disabling Aero Snap on a per-window or per-process basis (also see this related question ). If the user has Aero Snap enabled, you can assume that they want that behavior and applications aren't supposed to tamper with it. The only thing you can do is completely disable resizing your window. In WinForms, you do that by setting the FormBorderStyle property to one of the following: FormBorderStyle.FixedSingle , FormBorderStyle.Fixed3D , or FormBorderStyle.FixedDialog . If you still want your window to be resizable in other ways, you will need to write the code to handle that manually, which is not a particularly easy task. Thus, I encourage you to very carefully consider whether this is really necessary. All other non-maximizable windows accomplish this simply by setting the MaximizeBox property (or doing the equivalent in their UI framework), which means that they can still be effectively maximized via Aero Snap. If this works for everyone else's windows, including those that are part of Windows itself, it should probably work for you.
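To make the two settings concrete, here is a minimal WinForms sketch (the form class name is a placeholder, and whether you prefer FixedSingle over Fixed3D or FixedDialog is purely a style choice):
using System.Windows.Forms;
public class MainForm : Form
{
    public MainForm()
    {
        // Removes the caption-bar maximize box and the "Maximize" system-menu item
        MaximizeBox = false;
        // Makes the window non-resizable, which also takes Aero Snap's
        // snap-to-top "maximize" out of play
        FormBorderStyle = FormBorderStyle.FixedSingle;
    }
}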
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13381127', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1820880/']}
jdg_83817
stackexchange
llm_judgeable_groundtruth_similarity
17288497
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am currently working with 2 separate JQuery datepickers that I have modified the format to be able to pass through to store in a backend MySQL db. <script>$(function() {$( ".datepicker" ).datepicker({ inline: true, showOtherMonths: true, dateFormat: 'yy-mm-dd', dayNamesMin: ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'], });});</script> My date pickers are called here <input type="text" class='datepicker' name="datepicker1" /><input type="text" class='datepicker' name="datepicker2" /> I have been trying to get the selected date to pass to php via ajax. I have tried using the getDate function but have not had much luck. $(".datepicker").datepicker( 'getDate' ); Now provide the response and nothing else.
Since you have two elements with the class .datepicker , the selector won't know which element to choose from. So, you'll have to specify the name of the input you're trying to get the date from. first = $(".datepicker[name=datepicker1]").datepicker('getDate');second = $(".datepicker[name=datepicker2]").datepicker('getDate'); Demo : http://jsfiddle.net/hungerpain/TWmcD/ Note : Won't val() be easier than using getDate ? You could format the date in MySQL itself and save yourself work on the client side. first = $(".datepicker[name=datepicker1]").val();second = $(".datepicker[name=datepicker2]").val();
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17288497', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/808765/']}
jdg_83818
stackexchange
llm_judgeable_groundtruth_similarity
67185817
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a build error saying that all my SPM packages are missing. I decided to remove and re-add each package to the project, but each attempt at adding a package gives me the following error: Package.resolved file is corrupted or malformed; fix or delete the file to continue I've tried deleting the Package.resolved file and letting it regenerate (it didn't), but I get the same error message. Any ideas on how to fix this? Now provide the response and nothing else.
I tried the solution suggested by Maxwell above but it didn't resolve the problem for me. The error appeared with Xcode 12 but upgrading to Xcode 13 didn't fix it either. Instead, I decided to take Xcode at its word and delete the Package.resolved file. But where is it? I did this: In Finder, tap Shift+Cmd+. to reveal hidden files and folders. The Package.resolved file is inside your .xcodeproj directory at [appName].xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved Right click on .xcodeproj and project.xcworkspace to show package contents. Move the Package.resolved file to the bin, and then empty the bin. Reopen Xcode and open your project again. This gave me another error: the package at '/' cannot be accessed (Couldn’t read '4.5.0'). To fix that: in Xcode, File / Packages / Reset Package Caches. The Swift Package Manager starts working on this. Rebuild the project. The error had gone and my project rebuilt successfully. Good luck!
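If you prefer the terminal over Finder, the same deletion can be done with a single command (a sketch — "MyApp" is a placeholder for your actual project name):
rm MyApp.xcodeproj/project.xcworkspace/xcshareddata/swiftpm/Package.resolved
Then reopen the project and use File / Packages / Reset Package Caches as described above.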
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/67185817', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1102687/']}
jdg_83819
stackexchange
llm_judgeable_groundtruth_similarity
6103
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm relatively new to motorcycles, my flat mate taught me how to drive his two Royal Enfields around New Delhi, and for the past year and a half I've been taking care of them as they break down ;) Was just curious what the 'cc' actually means when you say a Hero Honda has 150 cc, or an Enfield as 350 cc, or a really beast Enfield has 500 cc. I know vaguely that more cc is more powerful, but am curious as to how one could actually measure cc. Now provide the response and nothing else.
cc is the size of the engine, in cubic centimeters - literally the volume of the cylinders. A larger cylinder can ingest more air (and more fuel), thus converting more energy per cycle than a smaller one, so making more power - assuming all other factors are the same, and there are many factors that affect power output. You can measure it by a simple volume calculation - area of the piston (pi x radius squared) x stroke x number of cylinders.
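As a rough worked example (illustrative figures only — approximately those of a single-cylinder 350-class Enfield, assuming a bore of about 70 mm, so a radius of 3.5 cm, and a stroke of about 90 mm): $$V = \pi r^2 \times \text{stroke} \times n_{\text{cyl}} = \pi \times (3.5\ \text{cm})^2 \times 9\ \text{cm} \times 1 \approx 346\ \text{cm}^3 \approx 350\ \text{cc}$$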
{}
{'log_upvote_score': 6, 'links': ['https://mechanics.stackexchange.com/questions/6103', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/3153/']}
jdg_83820
stackexchange
llm_judgeable_groundtruth_similarity
2091311
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Abbott's Understanding Analysis, while proving 'If $\lim (a_n)\rightarrow a$ and $\lim (b_n)\rightarrow b$ then $\lim (a_nb_n)\rightarrow ab$ (where both the sequences and their limits are in $\Bbb R$)', leaves the case of $a=0$ to exercise, and for case $a\ne 0$, goes thus: Let $\epsilon>0$ be arbitrary. If $a\ne 0$, then we can choose $N_1$ so that $n\ge N_1$ implies $|b_n-b|<\frac1{|a|}\frac{\epsilon}2$. Since $b_n$ is convergent, there exists $M>0$, such that $|b_n|\le M$ for all $n\in N$. Now we can choose $N_2$ so that $|a_n-a|<\frac1M\frac{\epsilon}2$ whenever $n\ge N_2$. Now pick $N=\max \{N_1,N_2\}$ and observe that if $n\ge N$ then $$|a_nb_n-ab|\le |b_n||a_n-a|+|a||b_n-b|\le M|a_n-a|+|a||b_n-b|<M\frac {\epsilon}{M2}+|a|\frac {\epsilon}{|a|2}=\epsilon.$$ My question is for case $a=0$: Instead of suggesting to go this way: $$|a_nb_n-ab|\le |b_n||a_n-a|+|a||b_n-b|=|b_n||a_n-a|\le M|a_n-a|<M\frac {\epsilon}{M2}<\epsilon,$$ the book gives a three part exercise for this case: Why the 'easy' way is (perhaps) wrong way? Now provide the response and nothing else.
$$\begin{align*}\sum_{k=0}^n\binom{n}k\frac{(-1)^k}{2k+2}&=\frac{1}2\sum_{k=0}^n\frac{1}{k+1}\binom{n}k(-1)^k\\&=\frac{1}2\sum_{k=0}^n\frac{1}{n+1}\binom{n+1}{k+1}(-1)^k\\&=\frac{1}{2n+2}\sum_{k=1}^{n+1}\binom{n+1}k(-1)^{k-1}\\&=\frac{-1}{2n+2}\sum_{k=1}^{n+1}\binom{n+1}k(-1)^k1^{n+1-k}\\&=\frac{-1}{2n+2}\left(\sum_{k=0}^{n+1}\binom{n+1}k(-1)^k1^{n+1-k}-\binom{n+1}0(-1)^01^{n+1}\right)\\&=\frac{-1}{2n+2}\left((-1+1)^{n+1}-1\right)\\&=\frac{1}{2n+2}\end{align*}$$ by the binomial theorem.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2091311', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/94817/']}
jdg_83821
stackexchange
llm_judgeable_groundtruth_similarity
13705328
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a app that is deployed to Heroku, and I'd like to be able to run the test suite post-deployment on the target environment. I am using the Heroku Postgres add-on, which means that I have access to a single database only. I have no rights to create new databases, which in turn means that the standard Django test command fails, as it can't create the test_* database. $ heroku run python manage.py testRunning `python manage.py test` attached to terminal... up, run.9362Creating test database for alias 'default'...Got an error creating the test database: permission denied to create database Is there any way around this? Now provide the response and nothing else.
Turns out I was in the wrong . I was not testing what I thought was being tested... Since Heroku's Routing Mesh was sending requests to different servers, the LiveServerTestCase was starting a web server on one machine and Selenium was connecting to other machines altogether. By updating the Heroku Procfile to: web: python src/manage.py test --liveserver=0.0.0.0:$PORT overriding the DATABASES setting to point to the test database, and customizing the test suite runner linked to below (the same idea still holds: override setup_databases so that it only drops/re-creates tables, not the entire database), I was able to run remote tests. But this is even more hacky/painful/inelegant. Still looking for something better! Sorry about the confusion. (updated answer below) Here are the steps that worked for me: Create an additional, free Postgres database using the Heroku toolbelt heroku addons:add heroku-postgresql:dev Use the HerokuTestSuiteRunner class which you'll find here . This custom test runner requires that you define a TEST_DATABASES setting which follows the typical DATABASES format. For instance: TEST_DATABASES = { 'default': dj_database_url.config(env='TEST_DATABASE_URL')} Then, have the TEST_RUNNER setting be a Python path to wherever HerokuTestSuiteRunner can be found. You should now be able to run Django tests on Heroku using the given database. This is very much a quick hack... Let me know how it could be improved / made less hackish. Enjoy! (original answer below) A few relevant solutions have been discussed here . As you can read in the Django docs , "[w]hen using the SQLite database engine, the tests will by default use an in-memory database". Although this doesn't thoroughly test the database engine you're using on Heroku (I'm still on the lookout for a solution that does that), setting the database engine to SQLite will at least allow you to run your tests. See the above-linked StackOverflow question for some pointers. There are at least two ways out: testing if 'test' in sys.argv before forcing SQLite as the database engine, or having a dedicated settings file used in testing, which you can then pass to django manage.py test using the --settings option.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13705328', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/45698/']}
jdg_83822
stackexchange
llm_judgeable_groundtruth_similarity
52571450
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having some troubles loading markdown files ( .md ) into my react native (non-detached expo project). Found this awesome package that allows me to render it. But can't figure out how to load the local .md file as a string. import react from 'react';import {PureComponent} from 'react-native';import Markdown from 'react-native-markdown-renderer';const copy = `# h1 Heading 8-)| Option | Description || ------ | ----------- || data | path to data files to supply the data that will be passed into templates. || engine | engine to be used for processing templates. Handlebars is the default. || ext | extension to be used for dest files. |`;export default class Page extends PureComponent { static propTypes = {}; static defaultProps = {}; render() { return ( <Markdown>{copy}</Markdown> ); }} BTW: I tried googling, but can't get the suggestions to work https://forums.expo.io/t/loading-non-media-assets-markdown/522/2?u=norfeldtconsulting I tried the suggested answers for reactjs on SO, but the problem seems to be that it only accepts .js and .json files Now provide the response and nothing else.
Thanks to @Filipe's response, I got some guidance and got a working example that will fit your needs. In my case, I had a .md file on the assets/markdown/ folder, the file is called test-1.md The trick is to get a local url for the file, and then use the fetch API to get its content as a string . import React from 'react';import { StyleSheet, Text, View } from 'react-native';import Markdown from 'react-native-markdown-renderer';const copy = `# h1 Heading 8-)| Option | Description || ------ | ----------- || data | path to data files to supply the data that will be passed into templates. || engine | engine to be used for processing templates. Handlebars is the default. || ext | extension to be used for dest files. |`;export default class App extends React.Component { constructor(props) { super(props) this.state = { copy: copy } } componentDidMount() { this.fetchLocalFile(); } fetchLocalFile = async () => { let file = Expo.Asset.fromModule(require("./assets/markdown/test-1.md")) await file.downloadAsync() // Optional, saves file into cache file = await fetch(file.uri) file = await file.text() this.setState({copy: file}); } render() { return ( <Markdown>{this.state.copy}</Markdown> ); }} EDIT: In order to get get rid of the error Unable to resolve "./assets/markdown/test-1.md" from "App.js" you would need to add the packagerOpts part of @Filipe's snippet into your app.json file. app.json { "expo": { ... "assetBundlePatterns": [ "**/*" ], "packagerOpts": { "assetExts": ["md"] }, ... }} EDIT 2:Answering to @Norfeldt's comment:Although I use react-native init when working on my own projects, and I'm therefore not very familiar with Expo, I got this Expo Snack that might have some answers for you: https://snack.expo.io/Hk8Ghxoqm . It won't work on the expo snack because of the issues reading non-JSON files, but you can test it locally if you wish. Using file.downloadAsync() will prevent the app making XHR calls to a server where your file is hosted within that app session (as long as the user does not close and re-open the app). If you change the file or modify the file (simulated with a call to Expo.FileSystem.writeAsStringAsync() ), it should display the updated as long as your component re-renders and re-downloads the file. This will happen every time your app is closed and re-open, as the file.localUri is not persisted per sessions as far as I'm concerned, so your app will always call file.downloadAsync() at least once every time it's opened. So you should have no problems displaying an updated file. I also took some time to test the speed of using fetch versus using Expo.FileSystem.readAsStringAsync() , and they were on average the same. Often times Expo.FileSystem.readAsStringAsync was ~200 ms faster, but it 's not a deal breaker in my opinion. I created three different methods for fetching the same file. export default class MarkdownRenderer extends React.Component { constructor(props) { super(props) this.state = { copy: "" } } componentDidMount() { this.fetch() } fetch = () => { if (this.state.copy) { // Clear current state, then refetch data this.setState({copy: ""}, this.fetch) return; } let asset = Expo.Asset.fromModule(md) const id = Math.floor(Math.random() * 100) % 40; console.log(`[${id}] Started fetching data`, asset.localUri) let start = new Date(), end; const save = (res) => { this.setState({copy: res}) let end = new Date(); console.info(`[${id}] Completed fetching data in ${(end - start) / 1000} seconds`) } // Using Expo.FileSystem.readAsStringAsync. 
// Makes it a single asynchronous call, but must always use localUri // Therefore, downloadAsync is required let method1 = () => { if (!asset.localUri) { asset.downloadAsync().then(()=>{ Expo.FileSystem.readAsStringAsync(asset.localUri).then(save) }) } else { Expo.FileSystem.readAsStringAsync(asset.localUri).then(save) } } // Use fetch ensuring the usage of a localUri let method2 = () => { if (!asset.localUri) { asset.downloadAsync().then(()=>{ fetch(asset.localUri).then(res => res.text()).then(save) }) } else { fetch(asset.localUri).then(res => res.text()).then(save) } } // Use fetch but using `asset.uri` (not the local file) let method3 = () => { fetch(asset.uri).then(res => res.text()).then(save) } // method1() // method2() method3() } changeText = () => { let asset = Expo.Asset.fromModule(md) Expo.FileSystem.writeAsStringAsync(asset.localUri, "Hello World"); } render() { return ( <ScrollView style={{maxHeight: "90%"}}> <Button onPress={this.fetch} title="Refetch"/> <Button onPress={this.changeText} title="Change Text"/> <Markdown>{this.state.copy}</Markdown> </ScrollView> ); }} Just alternate between the three to see the difference in the logs.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52571450', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/618099/']}
jdg_83823
stackexchange
llm_judgeable_groundtruth_similarity
14417235
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to convert the JSON string "{ \"birthday\": \"1988-03-18\", \"address\": { \"state\": 24, \"city\": 8341, \"country\": 1 } }" to "{ \"address\": { \"city\": 8341, \"country\": 1, \"state\": 24 }, \"birthday\": \"1988-03-18\" }" NOTE: I'm not using the sorted version for communication (because the key order doesn't really matter), I need a sorted version to perform local tests (by comparing JSON strings). EDIT: I4V pointed a solution that uses Json.Net , I would rather use a solution that doesn't need to include any 3rd party library (actually I'm using the built in System.Json in my application) I posted a gist with the solution provided by I4V + some testing here . Thank you all. Now provide the response and nothing else.
I will use Json.Net for this string json = @"{ ""birthday"": ""1988-03-18"", ""address"": { ""state"": 24, ""city"": 8341, ""country"": 1 } }";var jObj = (JObject)JsonConvert.DeserializeObject(json);Sort(jObj);string newJson = jObj.ToString(); void Sort(JObject jObj){ var props = jObj.Properties().ToList(); foreach (var prop in props) { prop.Remove(); } foreach (var prop in props.OrderBy(p=>p.Name)) { jObj.Add(prop); if(prop.Value is JObject) Sort((JObject)prop.Value); }} EDIT A try with System.Json but I am not sure about OrderByDescending ( or OrderBy ). var jObj = (System.Json.JsonObject)System.Json.JsonObject.Parse(json);Sort2(jObj);var newJson = jObj.ToString(); void Sort2(System.Json.JsonObject jObj){ var props = jObj.ToList(); foreach (var prop in props) { jObj.Remove(prop.Key); } foreach (var prop in props.OrderByDescending(p => p.Key)) { jObj.Add(prop); if (prop.Value is System.Json.JsonObject) Sort2((System.Json.JsonObject)prop.Value); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14417235', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/151883/']}
jdg_83824
stackexchange
llm_judgeable_groundtruth_similarity
7908
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is NOT a homework, I am over homework stage ages ago :-) What function draws semicircle?$$y=f(x) = \text{something}$$ and I end up with a unit circle. Now provide the response and nothing else.
The only "unit semicircles" that can be expressed as $y=f(x)$ are the upper half of the unit circle and the lower half (any other half circle fails the "vertical line test", so it cannot be expressed as an explicit function of $x$). Since the unit circle consists of all points $(x,y)$ that satisfy $x^2+y^2=1$, solving for $y$ gives you that $y=\sqrt{1-x^2}$ is the upper unit semicircle, and $y=-\sqrt{1-x^2}$ is the lower unit semicircle. If you are willing to use parametric equations , you can cover more possibilities. The unit circle can be parametrized as$$\begin{array}{rcl}x & = & \cos t,\\y & = & \sin t;\end{array}\qquad 0\leq t\leq 2\pi;$$this "traces" the unit circle exactly once, counterclockwise, starting at $0$ radians (the point $(1,0)$). If you want the unit semicircle that starts at $\theta$ radians from the positive $x$-axis (and ends at $\theta+\pi$ radians), then you can obtain it parametrically by $x=\cos(t+\theta)$ and $y=\sin(t+\theta)$, $0\leq t\leq \pi$ (or simply $x=\cos t$, $y=\sin t$ with $\theta\leq t\leq \theta+\pi$).
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/7908', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/2743/']}
jdg_83825
stackexchange
llm_judgeable_groundtruth_similarity
4440793
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm a bit puzzled by the answer I'm getting in a quite simple problem. $4$ athletes simultaneously shoot at the target. It is known that the probability of hitting the first hunter is $0.4$ , second - $0.6$ , third - $0.7$ , fourth - $0.8$ . As a result, one of the athletes did not hit the target. What is the probability that it was the second athlete? I'll give my solution: Event $A$ - the second athlete did not hit the target Event $B$ - $3$ out of $4$ athletes hit the target Based on the condition of the problem, I thought that we need to find $P(A|B)$ . So, $P(A|B)=\frac{P(B|A) \cdot P(A)}{P(B)} =\frac{(0.4 \cdot 0.4 \cdot 0.7 \cdot 0.8) \cdot 0.4}{0.4 \cdot 0.6 \cdot 0.7 \cdot 0.2 + 0.4 \cdot 0.6 \cdot 0.3 \cdot 0.8 + 0.4 \cdot 0.4 \cdot 0.7 \cdot 0.8 + 0.6 \cdot 0.6 \cdot 0.7 \cdot 0.8 } = 0.0937$ It seems that the found probability turned out to be extremely small. Have I correctly defined what is asked to be found in this problem in terms of elementary events? It also confuses me that the sum of the posterior probabilities of not hitting the target of each athlete comes out to be significantly less than one. A posteriori knowledge that only one of the shooters did not hit the target, in my opinion, should greatly increase the probability of not hitting for the "weakest" shooters - the first and second. But the two of them together will account for about 40%. Now provide the response and nothing else.
Let's look at two cases depending on whether the given condition is that exactly one athlete missed, versus at least one athlete missed. Case 1 : Exactly one athlete missed. Then according to your event definitions, we have $$\Pr[A \mid B] = \frac{\Pr[B \mid A]\Pr[A]}{\Pr[B]}.$$ Then $\Pr[B \mid A]$ , the probability that exactly one athlete missed given that the second athlete missed, is equivalent to the probability that the first, third and fourth athlete hits: $$\Pr[B \mid A] = (0.4)(0.7)(0.8) = 0.224.$$ The marginal probability that exactly one athlete missed is $$\begin{align}\Pr[B] &= (1 - 0.4)(0.6)(0.7)(0.8) + (0.4)(1 - 0.6)(0.7)(0.8) \\&\quad+ (0.4)(0.6)(1 - 0.7)(0.8) + (0.4)(0.6)(0.7)(1 - 0.8) \\&= 0.3824.\end{align}$$ Therefore, the desired posterior probability is $$\Pr[A \mid B] = \frac{(0.224)(0.4)}{0.3824} = 0.23431.$$ You have an extra factor of $0.4$ in the numerator. I suspect this is because you mistakenly computed $\Pr[B \cap A]\Pr[A]$ instead of $\Pr[B \mid A]\Pr[A]$ . Case 2 : At least one athlete missed; call this event $B'$ . Then $\Pr[B' \mid A] = 1$ , since given that the second missed, at least one missed. The computation of the denominator is also simplified, since the unconditional probability that at least one athlete missed is the complement of the probability that no athlete missed, which is simply $(0.4)(0.6)(0.7)(0.8) = 0.1344$ . Therefore, the desired posterior probability is $$\Pr[B'] = 1 - 0.1344 = 0.8656$$ and $$\Pr[A \mid B'] = \frac{\Pr[B' \mid A]\Pr[A]}{\Pr[B']} = \frac{0.4}{0.8656} = 0.462107.$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4440793', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/755562/']}
jdg_83826
stackexchange
llm_judgeable_groundtruth_similarity
48130247
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Android app widgets. I'm creating a PendingIntent object and use it in the method RemoteViews#setOnClickPendingIntent() . This is the pending intent: // The below code is called in the onUpdate(Context, AppWidgetManager, int[]) method of the AppWidgetProvider class.// Explicit intentIntent intent = new Intent(context, MyService.class);intent.setAction(MY_ACTION);intent.putExtra(EXTRA_KEY, VALUE);// Create the pending intentPendingIntent pendingIntent = PendingIntent.getService(context, appWidgetId, intent, 0);// 'views' is a RemoteViews object provided by onUpdate in the AppWidgetProvider class.views.setOnClickPendingIntent(R.id.root_layout, pendingIntent); The above code works as expected before Android Oreo. However, in Android Oreo it no longer starts the service if the app is swiped away from recents. (no longer active). Aren't PendingIntent s excluded in Oreo's background execution limits? For testing purposes I replaced getService with getForegroundService but the Service is still not started. Both methods show the below message in the log: W/ActivityManager: Background start not allowed: service Intent { act=com.example.myapp.MY_ACTION flg=0x10000000 cmp=com.example.myapp/com.example.myapp.MyService bnds=[607,716][833,942] (has extras) } to com.example.myapp/com.example.myapp.MyService from pid=-1 uid=10105 pkg=com.example.myapp Why is the Service not started, even when using getForegroundService ? I tested this on a Nexus 6P running Android 8.1.0. Now provide the response and nothing else.
As you note, an app widget does not count for the PendingIntent background whitelist. I do not know why — it would seem to be about on par with a PendingIntent started by a Notification . Perhaps it's an issue that the Notification is a system thing, whereas the app widget is a home screen thing. Regardless, you could: Use getForegroundService() , as you have, or Try getBroadcast() with an explicit Intent , and with the BroadcastReceiver starting a JobIntentService , if you do not want to raise a Notification (as a foreground service requires) Based on your symptoms, you appear to have tripped over what I would consider to be a bug: there should be a way to switch from getService() to getForegroundService() without having to reboot. :-) I'll try to run some experiments and will file an issue if I can reproduce the problem.
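A minimal sketch of the second option (getBroadcast() plus a JobIntentService); MyClickReceiver, MyJobService and JOB_ID are hypothetical names I'm introducing for illustration, not part of your project:
// Inside onUpdate(): point the widget click at a BroadcastReceiver instead of the service
Intent intent = new Intent(context, MyClickReceiver.class);
intent.setAction(MY_ACTION);
PendingIntent pendingIntent = PendingIntent.getBroadcast(context, appWidgetId, intent, 0);
views.setOnClickPendingIntent(R.id.root_layout, pendingIntent);

// The receiver runs briefly as part of the broadcast and hands the work off
public class MyClickReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // enqueueWork is allowed under Oreo's background limits; the extras travel with the intent
        JobIntentService.enqueueWork(context, MyJobService.class, JOB_ID, new Intent(intent));
    }
}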
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48130247', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4583267/']}
jdg_83827
stackexchange
llm_judgeable_groundtruth_similarity
441645
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have written one simple function in shell that returns 0 or 1 based on some condition.Let me call that function name foo foo(){......} Now i am trying to call foo in if condition as follow:- if ( foo $1 )..... It works fine.But when i used follow approach to call ,then i get error if [ foo $1 ]...... Why does it throws error as "Unary operator expected"? Now provide the response and nothing else.
When you use: if ( foo $1 ) You are simply executing foo $1 in a subshell and if is acting on its exit status. When you use: if [ foo $1 ] You are attempting to use the shell test and foo is not a valid test operator. You can find the valid test operators here . It's not necessarily relevant for your issue but you should also always quote variables, especially inside the shell test brackets. The shell test will succeed simply with the presence of something. So even when using a valid test operator you could get unwanted results: $ unset var$ [ -n $var ] && echo yesyes$ [ -n "$var" ] && echo yes$ [ -n "" ] && echo yes$ [ -n ] && echo yesyes$ [ foo ] && echo yesyes$ [ foo bar ] && echo yes-bash: [: foo: unary operator expected The presence of a single string inside the shell test will evaluate to true where the presence of two or more strings expects that one of them be a valid test operator.
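A short sketch of the idiomatic way to call such a function from if, assuming foo signals success or failure purely through its exit status (as in the question):
# No brackets or parentheses needed; if acts on foo's exit status directly
if foo "$1"; then
    echo "foo reported success"
else
    echo "foo reported failure"
fi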
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/441645', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/286048/']}
jdg_83828
stackexchange
llm_judgeable_groundtruth_similarity
14206
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am using SVM to predict diabetes. I am using the BRFSS data set for this purpose. The data set has the dimensions of $432607 \times 136$ and is skewed. The percentage of Y s in the target variable is $11\%$ while the N s constitute the remaining $89\%$. I am using only 15 out of 136 independent variables from the data set. One of the reasons for reducing the data set was to have more training samples when rows containing NA s are omitted. These 15 variables were selected after running statistical methods such as random trees, logistic regression and finding out which variables are significant from the resulting models. For example, after running logistic regression we used p-value to order the most significant variables. Is my method of doing variable selection correct? Any suggestions to is greatly welcome. The following is my R implementation. library(e1071) # Support Vector Machines#--------------------------------------------------------------------# read brfss file (huge 135 MB file)#--------------------------------------------------------------------y <- read.csv("http://www.hofroe.net/stat579/brfss%2009/brfss-2009-clean.csv")indicator <- c("DIABETE2", "GENHLTH", "PERSDOC2", "SEX", "FLUSHOT3", "PNEUVAC3", "X_RFHYPE5", "X_RFCHOL", "RACE2", "X_SMOKER3", "X_AGE_G", "X_BMI4CAT", "X_INCOMG", "X_RFDRHV3", "X_RFDRHV3", "X_STATE");target <- "DIABETE2";diabetes <- y[, indicator];#--------------------------------------------------------------------# recode DIABETE2#--------------------------------------------------------------------x <- diabetes$DIABETE2;x[x > 1] <- 'N';x[x != 'N'] <- 'Y';diabetes$DIABETE2 <- x; rm(x);#--------------------------------------------------------------------# remove NA#--------------------------------------------------------------------x <- na.omit(diabetes);diabetes <- x;rm(x);#--------------------------------------------------------------------# reproducible research #--------------------------------------------------------------------set.seed(1612);nsamples <- 1000; sample.diabetes <- diabetes[sample(nrow(diabetes), nsamples), ]; #--------------------------------------------------------------------# split the dataset into training and test#--------------------------------------------------------------------ratio <- 0.7;train.samples <- ratio*nsamples;train.rows <- c(sample(nrow(sample.diabetes), trunc(train.samples)));train.set <- sample.diabetes[train.rows, ];test.set <- sample.diabetes[-train.rows, ];train.result <- train.set[ , which(names(train.set) == target)];test.result <- test.set[ , which(names(test.set) == target)];#--------------------------------------------------------------------# SVM #--------------------------------------------------------------------formula <- as.formula(factor(DIABETE2) ~ . 
);svm.tune <- tune.svm(formula, data = train.set, gamma = 10^(-3:0), cost = 10^(-1:1));svm.model <- svm(formula, data = train.set, kernel = "linear", gamma = svm.tune$best.parameters$gamma, cost = svm.tune$best.parameters$cost);#--------------------------------------------------------------------# Confusion matrix#--------------------------------------------------------------------train.pred <- predict(svm.model, train.set);test.pred <- predict(svm.model, test.set);svm.table <- table(pred = test.pred, true = test.result);print(svm.table); I ran with $1000$ (training = $700$ and test = $300$) samples since it is faster in my laptop. The confusion matrix for the test data ($300$ samples) I get is quite bad. truepred N Y N 262 38 Y 0 0 I need to improve my prediction for the Y class. In fact, I need to be as accurate as possible with Y even if I perform poorly with N . Any suggestions to improve the accuracy of classification would be greatly appreciated. Now provide the response and nothing else.
I have 4 suggestions: How are you choosing the variables to include in your model? Maybe you are missing some of the key indicators from the larger dataset. Almost all of the indicators you are using (such as sex, smoker, etc.) should be treated as factors. Treating these variables as numeric is wrong, and is probably contributing to the error in your model. Why are you using an SVM? Did you try any simpler methods, such as linear discriminant analysis or even linear regression? Maybe a simple approach on a larger dataset will yield a better result. Try the caret package. It will help you cross-validate model accuracy, it is parallelized which will let you work faster, and it makes it easy to explore different types of models. Here is some example code for caret:
library(caret)
#Parallelize
library(doSMP)
w <- startWorkers()
registerDoSMP(w)
#Build model
X <- train.set[,-1]
Y <- factor(train.set[,1],levels=c('N','Y'))
model <- train(X,Y,method='lda')
#Evaluate model on test set
print(model)
predY <- predict(model,test.set[,-1])
confusionMatrix(predY,test.set[,1])
stopWorkers(w)
This LDA model beats your SVM, and I didn't even fix your factors. I'm sure if you recode Sex, Smoker, etc. as factors, you will get better results.
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/14206', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/2513/']}
jdg_83829
stackexchange
llm_judgeable_groundtruth_similarity
24808777
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I followed the insttructions on https://developer.android.com/training/app-indexing/deep-linking.html , but when I want to trigger the intent through adb with: adb shell am start -W -a android.intent.action.BROWSEABLE -d "http://example.com/gizmos" com.myapp.android I just get Error: Activity not started, unable to resolve Intent { act=android.intent.action.VIEW dat=example://gizmos flg=0x10000000 pkg=com.myapp.android } <activity android:name=".activities.DeepLinkActivity" android:label="@string/title_activity_deep_link"> <meta-data android:name="android.app.searchable" android:resource="@xml/searchable" /> <intent-filter> <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.BROWSABLE" /> <data android:scheme="http" android:host="example.com" android:pathPrefix="/gizmos" /> </intent-filter> </activity> Have I made any obvious mistakes? Now provide the response and nothing else.
EDIT: Ok first make sure that your package is reachable by adb: adb shell am start -n com.example.simon.test/.activities.MainActivity Then to accept multiple data tags you need different intent filters ( that's the way it worked for me unlike all the other examples I've seen on the net ). E.g.: <intent-filter> ... <data android:scheme="http" android:host="example.com"/></intent-filter><intent-filter> ... <data android:scheme="http" android:host="example.com" android:pathPrefix="/gizmos"/></intent-filter> NOTE that in the above example the pathPrefix starts with a forward slash ! I am not sure why Google's Docs are so misleading or maybe that was for some different version of adb, but the above changes worked perfectly for me. This helped: Source This is how I made the Chrome browser route specific links to my app: <activity android:name=".activities.DeepLinkActivity" android:label="@string/app_name"> <!-- Accept chrome links --> <intent-filter> <action android:name="android.intent.action.VIEW"/> <category android:name="android.intent.category.DEFAULT"/> <category android:name="android.intent.category.BROWSABLE"/> <data android:scheme="http" android:host="example.com" android:pathPrefix="/"/> </intent-filter> <!-- Accept adb data flag --> <intent-filter> <action android:name="android.intent.action.VIEW"/> <category android:name="android.intent.category.DEFAULT"/> <category android:name="android.intent.category.BROWSABLE" /> <data android:scheme="http" android:host="example.com"/> </intent-filter></activity> NOTE The 1st filter works on Google Chrome while the 2nd one works on the ADB. NOTE2 The app choice menu won't be shown if the link is entered into the browser's address bar. It has to be a <a href="http://example.com"></a> link in side some page. In my opinion everything here is rather blurry and really not how I expected it all to work. But that's how it works on my device. Hope this helps (and works) for you too.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/24808777', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1336532/']}
jdg_83830
stackexchange
llm_judgeable_groundtruth_similarity
2220393
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the text for linear algebra by David C Lay following is given. Definition: A pivot position in a matrix A is a location in A that corresponds to a leading 1 in the reduced echelon form of A. After this an example is given in which a matrix in its initial form is shown and a 1 at first row first column is marked as "pivot". We still haven't reduced the matrix so how can we know which position is pivot? Now provide the response and nothing else.
$$\begin{bmatrix} 1 & 1 & 1 \\ 1 & a & 1 \\ a & b & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 0\end{bmatrix}$$ $$\begin{bmatrix} 1 & 1 & 1 \\ 0 & a-1 & 0 \\ 0 & b-a & 1-a \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ -a\end{bmatrix}$$ The determinant of the matrix is $-(a-1)^2$, to have no solution, the determinant has to be $0$, hence $a=1$. $$\begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & b-1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ -1\end{bmatrix}$$ To have no solution $b$ has to be $1$. Hence $S=\{1 \}$. Remark: I interpret the question as $S=\{b: \exists a \text{ such that the system has no solution.} \}$ If we interpret $S=\{b: \forall a \text{ such that the system has no solution.} \}$, then $S = \emptyset$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2220393', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/11309/']}
jdg_83831
stackexchange
llm_judgeable_groundtruth_similarity
452362
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need to convert 220 V AC to 12 V DC in my circuit that can provide minimum 100 mA at 12 V DC. It should always be on 24 hours a day and 7 days a week. I don't want to use a transformer because my current should be small. I want to use this to power a microcontroller. Can I use this circuit? What is the output current of this circuit? Now provide the response and nothing else.
You could use that circuit. It claims to provide 12V at 100mA - that would meet your requirements. But, I wouldn't trust it. It is not well regulated. The 12V output could be as high as 15V because the only regulation in the whole thing is a 15V zener diode. I wouldn't want to use it. It will provide 12V between the output pins, but if you measure between the output pins and the ground beneath your feet you will find the full AC line voltage. It can deliver enough current to kill you if you touch the output while standing on the ground. It will also destroy any grounded device you connect it to. I wouldn't want to have that thing plugged in to an outlet in my house 24-7. It has no fuse, and there are resistors in it which will get hot. It doesn't mention that you need a safety rated capacitor for C1. You'd want a Y rated capacitor there. If a Y rated capacitor fails, it turns into an open circuit. A regular capacitor or an X capacitor will fail shorted, putting the full 220 VAC on the rest of the circuit - which could also fail and catch fire or quietly wait for you to touch it so it can grill you with 220VAC. If you use that thing, you must enclose it in an insulated box. There must be no metallic parts of the circuit in a position that you can touch the circuit while it is operating. The only connection to that circuit is the power connection - no other devices can be connected or controlled from it because those devices would then be at line voltage. I would NOT use that circuit. There's too many ways to hurt yourself or someone else with it. When I was a kid, it was common for a first project to be a powersupply. Wall warts weren't common, so if you wanted to move away from batteries you had to buy a lab power supply or build something yourself. The powersupplies were all the classic "transformer and rectifier" type. The line voltage part was carefully built so as to be safe (and usually enclosed inside a box.) The low voltage part was isolated by the transformer and perfectly safe to touch - no concerns about getting zapped at line voltage by your 12VDC power supply. Safe enough. I built my first one at about 15, in a grounded metal housing. Plans were commonly available, and most always mentioned how to build the thing safely. You could do the same today if you want to learn about building power supplies. If you only want one because you need to power a project, then you should consider buying one. Or just using an adapter you have laying around the house. If you need power for a microcontroller, use a 5V USB adapter. Many microcontrollers run at 5V Just don't build that death trap of a circuit you found.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/452362', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/228330/']}
jdg_83832
stackexchange
llm_judgeable_groundtruth_similarity
5005319
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I declare an enum like this ... public enum MyEnum : byte { Val1, Val2} ... it's working. If I declare an enum like this ... public enum MyEnum : System.Byte { Val1, Val2} ... it's not working. The compiler throw : error CS1008: Type byte, sbyte, short, ushort, int, uint, long, or ulong expected As byte is an alias to the actual type, System.Byte , why can't I use the second declaration ? Now provide the response and nothing else.
Well, it's according to the specification (§14.1). The grammar specifies that the production for enum-declaration is enum-declaration: attributes_opt enum-modifiers_opt enum identifier enum-base_opt enum-body ;_opt where enum-base is :integral-type and integral-type:sbytebyteshortushortintuintlongulongchar As for the why the specification is this way, it is not clear. Note that char is listed as a terminal for integral-type but the specification explicitly states that Note that char cannot be used as an underlying type. By the way, I really think best practice here is to use the aliases. I only use the .NET name instead of the C# keyword for these primitive types (and string ) when I want to invoke a static method. So Int32.TryParse instead of int.TryParse. Otherwise, I say, e.g., typeof(int) and not typeof(Int32) .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5005319', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/588868/']}
jdg_83833
stackexchange
llm_judgeable_groundtruth_similarity
416693
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to read the temperature from my Avalon which has the Avalon Firmware: 20170603 I'm using the command: cgminer-api -o stats It brings me a lot of information that I don´t need. root@OpenWrt:/etc# cgminer-api -o statsSTATUS=S,When=1482343577,Code=70,Msg=CGMiner stats,Description=cgminer 4.10.0|STATS=0,ID=AV70,Elapsed=30789,Calls=0,Wait=0.000000,Max=0.000000,Min=99999999.000000,MM ID1=Ver[7411612-6cf14b0] DNA[01313edbc5efabe3] Elapsed[30792] MW[340560 340538 340538 340538] LW[1362174] MH[180 238 259 216] HW[893] DH[2.749%] Temp[30] TMax[77] Fan[5430] FanR[90%] Vi[1201 1201 1202 1202] Vo[4438 4406 4443 4438] GHSmm[8063.47] WU[108083.46] Freq[715.86] PG[15] Led[0] MW0[1170 1302 1206 1250 1197 1312 1331 1262 1300 1216 1230 1281 1265 1273 1327 1291 1232 1231 1267 1292 1286 1203] MW1[1312 1189 1237 1251 1212 1247 1264 1275 1196 1256 1283 1257 1190 1247 1243 1282 1330 1315 1292 1273 1261 1271] MW2[1213 1262 1310 1202 1285 1220 1291 1267 1309 1307 1164 1212 1290 1289 1308 1174 1230 1276 1252 1189 1192 1242] MW3[1302 1275 1209 1307 1217 1294 1328 1273 1237 1256 1227 1239 1268 1242 1308 1314 1296 1314 1331 1324 1297 1190] TA[88] ECHU[512 0 0 0] ECMM[0] FM[1] CRC[974 0 0 0] PAIRS[0 0 0] PVT_T[4-70/0-76/72 0-69/11-76/70 2-70/0-77/74 20-67/0-75/70],MM ID2=Ver[7411612-6cf14b0] DNA[0132c3d0691693b9] Elapsed[30791] MW[340551 340551 340538 340538] LW[1362178] MH[2067 188 222 215] HW[2692] DH[3.629%] Temp[29] TMax[80] Fan[5490] FanR[90%] Vi[1204 1202 1201 1201] Vo[4461 4447 4420 4443] GHSmm[7887.76] WU[103670.36] Freq[700.26] PG[15] Led[0] MW0[1264 1270 1229 1313 1296 1184 1239 1237 1266 1247 1252 1242 1202 1266 1266 1317 1255 1272 1309 1230 1301 1243] MW1[1155 1159 1213 1196 1214 1154 1152 1213 1180 1180 1152 1193 1118 1122 1159 1173 1185 1193 1180 1161 1170 1175] MW2[1269 1138 1285 1180 1256 1210 1170 1299 1223 1185 1164 1132 1140 1225 1246 1173 1237 1212 1192 1284 1215 1205] MW3[762 1268 1187 1271 1277 1150 1202 1208 1172 1170 1176 1249 1177 1154 1197 1250 1176 1227 1268 1218 1262 1251] TA[88] ECHU[0 512 0 0] ECMM[0] FM[1] CRC[0 0 0 0] PAIRS[0 0 0] PVT_T[0-68/10-80/70 19-67/0-76/70 0-70/11-78/72 19-68/0-77/71],MM Count=2,Smart Speed=1,Connecter=AUC,AUC VER=AUC-20151208,AUC I2C Speed=400000,AUC I2C XDelay=19200,AUC Sensor=15483,AUC Temperature=28.17,Connection Overloaded=false,Voltage Offset=0,Nonce Mask=29,USB Pipe=0,USB Delay=r0 0.000000 w0 0.000000,USB tmo=0 0|STATS=1,ID=POOL0,Elapsed=30789,Calls=0,Wait=0.000000,Max=0.000000,Min=99999999.000000,Pool Calls=0,Pool Attempts=0,Pool Wait=0.000000,Pool Max=0.000000,Pool Min=99999999.000000,Pool Av=0.000000,Work Had Roll Time=false,Work Can Roll=false,Work Had Expire=false,Work Roll Time=0,Work Diff=65536.00000000,Min Diff=1.00000000,Max Diff=131072.00000000,Min Diff Count=12,Max Diff Count=18313,Times Sent=1531,Bytes Sent=228345,Times Recv=2668,Bytes Recv=1379612,Net Bytes Sent=228345,Net Bytes Recv=1379612|STATS=2,ID=POOL1,Elapsed=30789,Calls=0,Wait=0.000000,Max=0.000000,Min=99999999.000000,Pool Calls=0,Pool Attempts=0,Pool Wait=0.000000,Pool Max=0.000000,Pool Min=99999999.000000,Pool Av=0.000000,Work Had Roll Time=false,Work Can Roll=false,Work Had Expire=false,Work Roll Time=0,Work Diff=16384.00000000,Min Diff=4096.00000000,Max Diff=16384.00000000,Min Diff Count=374,Max Diff Count=993,Times Sent=109,Bytes Sent=12038,Times Recv=119,Bytes Recv=12214,Net Bytes Sent=12038,Net Bytes 
Recv=12214|STATS=3,ID=POOL2,Elapsed=30789,Calls=0,Wait=0.000000,Max=0.000000,Min=99999999.000000,Pool Calls=0,Pool Attempts=0,Pool Wait=0.000000,Pool Max=0.000000,Pool Min=99999999.000000,Pool Av=0.000000,Work Had Roll Time=false,Work Can Roll=false,Work Had Expire=false,Work Roll Time=0,Work Diff=0.00000000,Min Diff=0.00000000,Max Diff=0.00000000,Min Diff Count=0,Max Diff Count=0,Times Sent=2,Bytes Sent=151,Times Recv=3,Bytes Recv=244,Net Bytes Sent=151,Net Bytes Recv=244| But I just need this values: Temp[29] TMax[80] Fan[5490] Temp[29] TMax[80] Fan[5490] I tried with this two commands but they didn´t work 1.- cgminer-api stats | grep "^ *\[temp_avg]"2.- cgminer-api stats | grep temp Now provide the response and nothing else.
Try: $ grep -oE 'Temp[^F]*Fan\[[[:digit:]]+\]' text
Temp[30] TMax[77] Fan[5430]
Temp[29] TMax[80] Fan[5490]
How it works: -o tells grep to print only the matching text and not the rest of the line. -E tells grep to use extended regular expressions. (The default basic regular expressions are archaic.) The pattern Temp[^F]*Fan\[[[:digit:]]+\] matches any string that starts with Temp, followed by any number of characters that don't include F, followed by Fan, followed by a literal [, followed by one or more digits, followed by a literal ].
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/416693', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/270279/']}
jdg_83834
stackexchange
llm_judgeable_groundtruth_similarity
17613888
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: To retrieve fused location in background, I have created a library which is very similar to cwac-locpoll library created by Commonsguy. Inside PollerThread , I am trying to connect, request and retrieve the locations using LocationClient . I am able to get connected by receiving callback on onConnected method but I am not able to get callback on onLocationChanged method.so my onTimeout thread executes as per decided interval. NOTE: This issue happens only when screen light goes off.otherwise it works completely fine. I suspect there might be bug in new Location Api. Here is the implementation of my PollerThread , private class PollerThread extends WakefulThread implements GooglePlayServicesClient.ConnectionCallbacks, GooglePlayServicesClient.OnConnectionFailedListener,LocationListener{ private static final String TAG = "PollerThread"; //context private Context mContext=null; private LocationClient mLocationClient=null; private LocationRequest mLocationRequest=null; private LocationManager locMgr=null; private Intent intentTemplate=null; private Handler handler=new Handler(); private Runnable onTimeout = new Runnable() { @Override public void run() { Log.e(TAG, "onTimeout"); //prepare broadcast intent Intent toBroadcast=new Intent(intentTemplate); toBroadcast.putExtra(FusedPoller.EXTRA_ERROR, "Timeout!"); toBroadcast.putExtra( FusedPoller.EXTRA_ERROR_PROVIDER_DISABLED, false); toBroadcast.putExtra(FusedPoller.EXTRA_LASTKNOWN, mLocationClient.getLastLocation()); sendBroadcast(toBroadcast); //stop the thread quit(); } }; PollerThread(Context mContext,LocationRequest mLocationRequest,PowerManager.WakeLock lock, LocationManager locMgr, Intent intentTemplate) { super(lock, "LocationPoller-PollerThread"); Log.e(TAG, "PollerThread"); this.mContext=mContext; this.mLocationRequest=mLocationRequest; this.locMgr=locMgr; this.intentTemplate=intentTemplate; } @Override protected void onPreExecute() { super.onPreExecute(); Log.e(TAG, "onPreExecute"); //setup timeout setTimeoutAlarm(); //initiate connection initiateConnection(); } @Override protected void onPostExecute() { super.onPostExecute(); Log.e(TAG, "onPostExecute"); //remove timeout removeTimeoutAlarm(); //disconnect initiateDisconnection(); } /** * Called when the WakeLock is completely unlocked. * Stops the service, so everything shuts down. 
*/ @Override protected void onUnlocked() { Log.e(TAG, "onUnlocked"); stopSelf(); } private void setTimeoutAlarm() { Log.e(TAG, "setTimeoutAlarm"); handler.postDelayed(onTimeout, FusedLocationUtils.DEFAULT_TIMEOUT); } private void removeTimeoutAlarm() { Log.e(TAG, "removeTimeoutAlarm"); handler.removeCallbacks(onTimeout); } private void initiateConnection() { Log.e(TAG, "initiateConnection"); mLocationClient = new LocationClient(this.mContext, this, this); mLocationClient.connect(); } private void initiateDisconnection() { Log.e(TAG, "initiateDisconnection"); if(mLocationClient.isConnected()) { mLocationClient.disconnect(); } } @Override public void onConnected(Bundle arg0) { Log.e(TAG, "onConnected"); Log.e(TAG, "provider: GPS-"+locMgr.isProviderEnabled(LocationManager.GPS_PROVIDER)+" NETWORK-"+locMgr.isProviderEnabled(LocationManager.NETWORK_PROVIDER)); if (!(locMgr.isProviderEnabled(LocationManager.GPS_PROVIDER)) && !(locMgr.isProviderEnabled(LocationManager.NETWORK_PROVIDER))) { Log.e(TAG, "both disabled"); //get last location and broadcast it getLastLocationAndBroadcast(); //stop the thread quit(); } else { Log.e(TAG, "provider enabled"); //get latest location and broadcast it getLatestLocationAndBroadcast(); //don't quit from here,quit from onLocationChanged } } @Override public void onDisconnected() { Log.e(TAG, "onDisconnected"); // TODO Auto-generated method stub } @Override public void onConnectionFailed(ConnectionResult arg0) { Log.e(TAG, "onConnectionFailed"); // TODO Auto-generated method stub } @Override public void onLocationChanged(Location location) { Log.e(TAG, "onLocationChanged"); //prepare broadcast intent Intent toBroadcast=new Intent(intentTemplate); toBroadcast.putExtra(FusedPoller.EXTRA_LOCATION, location); sendBroadcast(toBroadcast); //stop further updates stopUpdates(); //stop the thread quit(); } private void getLatestLocationAndBroadcast() { Log.e(TAG, "getLatestLocationAndBroadcast"); if(mLocationClient.isConnected() && servicesConnected()) { Log.e(TAG, "going to request updates"); Log.e(TAG, "lockStatic.isHeld(): "+lockStatic.isHeld()); mLocationClient.requestLocationUpdates(mLocationRequest, this); } else { Log.e(TAG, "not going to request updates"); } } private void stopUpdates() { Log.e(TAG, "stopUpdates"); if(servicesConnected()) { Log.e(TAG,getString(R.string.location_updates_stopped)); mLocationClient.removeLocationUpdates(this); } else { Log.e(TAG,"can't do:"+getString(R.string.location_updates_stopped)); } } private void getLastLocationAndBroadcast() { Log.e(TAG, "getLastLocationAndBroadcast"); if(mLocationClient.isConnected() && servicesConnected()) { Log.e(TAG, "going to get last location: "+mLocationClient.getLastLocation()); Intent toBroadcast = new Intent(intentTemplate); toBroadcast.putExtra(FusedPoller.EXTRA_ERROR, "Location Provider disabled!"); toBroadcast.putExtra( FusedPoller.EXTRA_ERROR_PROVIDER_DISABLED, true); toBroadcast.putExtra(FusedPoller.EXTRA_LASTKNOWN, mLocationClient.getLastLocation()); sendBroadcast(toBroadcast); } else { Log.e(TAG, "not going to get last location"); } } } and servicesConnected method implementation, /** * Verify that Google Play services is available before making a request. 
* * @return true if Google Play services is available, otherwise false */ private boolean servicesConnected() { Log.e(TAG, "servicesConnected"); // Check that Google Play services is available int resultCode = GooglePlayServicesUtil.isGooglePlayServicesAvailable(this); // If Google Play services is available if (ConnectionResult.SUCCESS == resultCode) { // In debug mode, log the status Log.d(FusedLocationUtils.APPTAG, getString(R.string.play_services_available)); // Continue return true; // Google Play services was not available for some reason } else { // Display an error dialog Log.d(FusedLocationUtils.APPTAG, getString(R.string.play_services_unavailable)); Toast.makeText(this, getString(R.string.play_services_unavailable), Toast.LENGTH_SHORT).show(); return false; } } Now provide the response and nothing else.
If you want to listen to frequent location updates in the background (e.g., every second), you should be running your code inside a Service : http://developer.android.com/reference/android/app/Service.html Activities can be ended by the Android platform at any point in time in which they are not in the foreground. When using a Service, I would recommend having the Service implement the LocationListener directly, and not a Thread inside the Service. For example, use: public class LocListener extends Service implements com.google.android.gms.location.LocationListener, ...{ I've used this design of implementing the LocationListener directly on the Service with the LocationClient and fused location provider in my GPS Benchmark app and I can confirm that this works even when the screen is off and the app is running in the background. If you want to listen to occasional location updates in the background (e.g., every minute) using the fused location provider, a better design is to use PendingIntents, using the LocationClient.requestLocationUpdates(Location Request, PendingIntent callbackIntent) method: https://developer.android.com/reference/com/google/android/gms/location/LocationClient.html#requestLocationUpdates(com.google.android.gms.location.LocationRequest,%20android.app.PendingIntent) From the above Android doc: This method is suited for the background use cases, more specifically for receiving location updates, even when the app has been killed by the system. In order to do so, use a PendingIntent for a started service. For foreground use cases, the LocationListener version of the method is recommended, see requestLocationUpdates(LocationRequest, LocationListener). Any previous LocationRequests registered on this PendingIntent will be replaced. Location updates are sent with a key of KEY_LOCATION_CHANGED and a Location value on the intent. See the Activity Recognition example for a more detailed description of using PendingIntents to get updates while running in the background: https://developer.android.com/training/location/activity-recognition.html Modified excerpts from this documentation are below, changed by me to be specific to location updates. First declare the Intent: public class MainActivity extends FragmentActivity implements ConnectionCallbacks, OnConnectionFailedListener { ... ... /* * Store the PendingIntent used to send location updates * back to the app */ private PendingIntent mLocationPendingIntent; // Store the current location client private LocationClient mLocationClient; ...} Request updates as you currently are, but this time pass in the pending intent: /* * Create the PendingIntent that Location Services uses * to send location updates back to this app. */Intent intent = new Intent( mContext, LocationIntentService.class);...//Set up LocationRequest with desired parameter here.../* * Request a PendingIntent that starts the IntentService. */mLocationPendingIntent = PendingIntent.getService(mContext, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);/* * Request location updates */mLocationClient.requestLocationUpdates(mLocationRequest, callbackIntent); Handle Location Updates To handle the Intent that Location Services sends for each update interval, define an IntentService and its required method onHandleIntent(). Location Services sends out ... updates as Intent objects, using the the PendingIntent you provided when you called requestLocationUpdates(). 
Since you provided an explicit intent for the PendingIntent, the only component that receives the intent is the IntentService you're defining. Define the class and the required method onHandleIntent(): /** * Service that receives Location updates. It receives * updates in the background, even if the main Activity is not visible. */public class LocationIntentService extends IntentService { ... /** * Called when a new location update is available. */ @Override protected void onHandleIntent(Intent intent) { Bundle b = intent.getExtras(); Location loc = (Location) b.get(LocationClient.KEY_LOCATION_CHANGED); Log.d(TAG, "Updated location: " + loc.toString()); } ...} IMPORTANT - to be as efficient as possible, your code in onHandleIntent() should return as quickly as possible to allow the IntentService to shut down. From IntentService docs: http://developer.android.com/reference/android/app/IntentService.html#onHandleIntent(android.content.Intent) This method is invoked on the worker thread with a request to process. Only one Intent is processed at a time, but the processing happens on a worker thread that runs independently from other application logic. So, if this code takes a long time, it will hold up other requests to the same IntentService, but it will not hold up anything else. When all requests have been handled, the IntentService stops itself, so you should not call stopSelf(). My understanding of the IntentService design is that you can spawn Threads inside onHandleIntent() to avoid blocking other location updates via platform calls to onHandleIntent() , just be aware that the Service will continue to run until all the running threads terminate.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17613888', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1288725/']}
jdg_83835
stackexchange
llm_judgeable_groundtruth_similarity
4892166
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What exactly is going on in the background that makes it so SQLParameter prevents SQL Injection attacks in a .NET Parameterized query? Is it just stripping out any suspect characters or is there something more to it? Has anyone out there checked to see what actually gets to SQL Server when you pass malicious input? Related: Can you use a SQLParameter in the SQL FROM statement? Now provide the response and nothing else.
Basically, when you perform a SQLCommand using SQLParameters , the parameters are never inserted directly into the statement. Instead, a system stored procedure called sp_executesql is called and given the SQL string and the array of parameters. When used as such, the parameters are isolated and treated as data, instead of having to be parsed out of the statement (and thus possibly changing it), so what the parameters contain can never be "executed". You'll just get a big fat error that the parameter value is invalid in some way.
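For illustration, here is a minimal C# sketch of the idea (an added example, not from the original answer: it assumes the classic System.Data.SqlClient provider, and the Users table, column name, and connection string are hypothetical). The untrusted value travels as a typed parameter rather than being spliced into the SQL text, so a would-be injection string is only ever treated as data:

using System;
using System.Data;
using System.Data.SqlClient;

class ParameterizedQueryDemo
{
    static void Main()
    {
        // Hypothetical connection string and table; adjust for your environment.
        var connectionString = "Server=.;Database=Demo;Integrated Security=true";
        // A would-be injection attempt supplied by the user.
        var userInput = "x'; DROP TABLE Users;--";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Users WHERE Name = @name", conn))
        {
            // The value is attached as a typed parameter; it is never spliced
            // into the command text. The server effectively receives something like
            //   EXEC sp_executesql N'SELECT COUNT(*) FROM Users WHERE Name = @name',
            //        N'@name nvarchar(128)', @name = N'x''; DROP TABLE Users;--'
            // so the malicious string is only ever compared against Name.
            cmd.Parameters.Add("@name", SqlDbType.NVarChar, 128).Value = userInput;

            conn.Open();
            var matches = (int)cmd.ExecuteScalar();
            Console.WriteLine($"Rows matching the literal input: {matches}");
        }
    }
}

Tracing the call in SQL Server Profiler (or an Extended Events session) is one way to confirm that the statement text never changes no matter what the user types; only the parameter values differ between executions.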
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4892166', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/226897/']}
jdg_83836
stackexchange
llm_judgeable_groundtruth_similarity
1279165
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm interested in integrals of the form$$I(a,b)=\int_0^\infty\operatorname{arccot}(x)\cdot\operatorname{arccot}(a\,x)\cdot\operatorname{arccot}(b\,x)\ dx,\color{#808080}{\text{ for }a>0,\,b>0}\tag1$$It's known$\require{action}\require{enclose}\texttip{{}^\dagger}{Gradshteyn & Ryzhik, Table of Integrals, Series, and Products, 7th edition, page 599, (4.511)}$ that$$I(a,0)=\frac{\pi^2}4\left[\ln\left(1+\frac1a\right)+\frac{\ln(1+a)}a\right].\tag2$$ Maple and Mathematica are also able to evaluate$$I(1,1)=\frac{3\pi^2}4\ln2-\frac{21}8\zeta(3).\tag3$$ Is it possible to find a general closed form for $I(a,1)$? Or, at least, for $I(2,1)$ or $I(3,1)$? Now provide the response and nothing else.
$$\begin{align}I(2,1)&=\frac{\pi^2}3\ln2-\frac{\pi^2}6\ln3+2\ln^22\cdot\ln3-3\ln2\cdot\ln^23+\frac{29}{24}\ln^33\\&+\frac{73}{16}\zeta(3)-2\ln2\cdot\operatorname{Li}_2\left(\tfrac13\right)-\frac{13}4\operatorname{Li}_3\left(\tfrac13\right)-4\operatorname{Li}_3\left(\tfrac23\right)\end{align}$$ $$\begin{align}I(3,1)&=\frac{13\,\pi^2}{12}\ln2-\frac{4\,\pi^2}9\ln3-\frac13\ln2\cdot\ln^23+\frac7{18}\ln^33\\&-\frac{13}8\zeta(3)+\ln3\cdot\operatorname{Li}_2\left(\tfrac13\right)+\frac43\operatorname{Li}_3\left(\tfrac13\right)-\frac23\operatorname{Li}_3\left(\tfrac23\right)\end{align}$$ Update (in response to a comment):$$\begin{align}&I(\phi,1)=\frac32\ln^32+\frac{\pi^2}{12}\Big[\left(6-3\sqrt5\right)\ln2+\left(3\sqrt5-4\right)\ln\left(1+\sqrt5\right)\Big]+\frac{51-21\sqrt5}{48}\zeta(3)\\&-\frac{\ln\left(1+\sqrt5\right)}2\Bigg[15\ln^22-15\ln\left(1+\sqrt5\right)\ln2+4\ln^2\left(1+\sqrt5\right)+2\operatorname{Li}_2\left(\frac{1-\sqrt5}4\right)\Bigg]\\&-\ln\left(3+\sqrt5\right)\operatorname{Li}_2\left(\sqrt5-2\right)+\frac{11+3\sqrt5}{48}\operatorname{Li}_3\left(9-4\sqrt5\right)-\frac{13+3\sqrt5}6\operatorname{Li}_3\left(\sqrt5-2\right)\end{align}$$
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1279165', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/19661/']}
jdg_83837
stackexchange
llm_judgeable_groundtruth_similarity
153298
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For a smooth curve $C$ on a smooth, projective surface $S$ over $\mathbb{C}$, we have the genus formula: $g(C) = 1 + \frac12(C^2 + C \cdot K_S)$ where $K_S$ is the canonical divisor. Is this formula still true for singular (e.g. reducible) curves on $S$ if one uses the arithmetic genus in the left hand side instead of the geometric genus? Now provide the response and nothing else.
Yes, the formula is still true if $C\subset S$ is reduced, irreducible but not smooth. The arithmetic genus is to be defined as $p_a(C)=\dim_{\mathbb C}H^1(C,\mathcal O_C),$ and we then have $$p_a(C)= 1+\frac {\deg[(\mathcal K_S\otimes \mathcal O_S(C))\mid C]}{2}. $$ You can find a proof in Chapter II of Compact complex surfaces by Wolf Barth, Chris Peters and Antonius Van de Ven.
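To connect this with the formula in the question (a short check added here, using the standard facts $\deg(\mathcal O_S(C)\mid C)=C^2$ and $\deg(\mathcal K_S\mid C)=K_S\cdot C$ for a curve $C$ on the surface $S$): $$\deg\big[(\mathcal K_S\otimes\mathcal O_S(C))\mid C\big]=K_S\cdot C+C^2,\qquad\text{hence}\qquad p_a(C)=1+\frac{C^2+C\cdot K_S}{2},$$ which is exactly the adjunction-type expression asked about, with the arithmetic genus in place of the geometric one.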
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/153298', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/11792/']}
jdg_83838
stackexchange
llm_judgeable_groundtruth_similarity
5939142
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know how to remove ^M in my files ( %s/^M//g ), but this one is just one line; I'd like to replace ^M with enter... what's the enter character in VIM (to use in command-line mode). Now provide the response and nothing else.
To replace the carriage return character (which is <C-m> ) with the line feed character (which is the Unix line break character) you should run a somewhat strange-looking command: %s/\r/\r/g It looks as if it does nothing, but in regular expressions and double-quoted strings carriage returns are represented by \r and line feeds by \n , while in the replacement part of the :s command and the substitute() function they mean the opposite. Note that in the terminal, Enter produces <C-m> , so your initial request is not valid.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5939142', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/29253/']}
jdg_83839
stackexchange
llm_judgeable_groundtruth_similarity
1538511
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have this problem I cannot figure out: Problem: True or false? Give a short proof or a counterexample: If a function $f$ is analytic in $\mathbb{C} \setminus \left\{0\right\}$ and $$ \int_{C(0,1)} f(z) dz = 0,$$ then $f$ is analytic in the entire complex plane. Here $C(0,1)$ denotes a closed loop centered around $z = 0$ with radius $1$. Attempt: I'm confused because it seems to me that these two conditions are contradictory. I know that if $\Gamma$ is any positively-oriented loop in the complex plane and $z_0$ is a point not on $\Gamma$, then \begin{align*} \int_{\Gamma} \frac{dz}{z-z_0} = 2 \pi i \end{align*} if $z_0$ lies in the interior of $\Gamma$, and otherwise the integral is zero, since then $\Gamma$ only encloses an analytic region. But here in this problem it is given that $f(z)$ is not analytic at $z_0 = 0$, and still the contour integral vanishes. Any ideas whether this statement is true or false, and why? Now provide the response and nothing else.
Yes, I've seen this one before. Assuming exactly one penny is allowed to be placed per turn: Go first and place a penny in the dead center of the table. From then on, after any move your opponent makes, place a penny in the mirror opposite location (i.e. rotated 180 degrees). It stands to reason that if your opponent's move was valid, yours will be too. Hence, you will always have an available move if your opponent does. Since the table is only finitely large, there can only be finitely many turns, hence you will eventually win. A more complete proof: Suppose the table is described using polar coordinates with the center of the table as the origin ($r=0$). My first move is to place at $r=0$. When my opponent makes a legal move at $(r,\theta)$ I attempt to place a coin at $(r,\theta+180^\circ)$. Claim: I am always allowed to do so and such a move will always be valid. Proof: Suppose otherwise. Then either the target location is not on the table (in which case my opponent's previous move would also not have been on the table and therefore was also invalid), or the target location would overlap with another previously placed coin. As it could not have been the coin that my opponent has just placed on his last turn (as it is $180^\circ$ away), those overlapping coins must have been placed previously. However, since my moves always play $180^\circ$ away from my opponent's, the same situation must exist on the other side of the table: my opponent's coin would also overlap the corresponding mirrored coins, and therefore my opponent's move was invalid. Either way, we reach a contradiction, implying that if my opponent's move was valid, my move is guaranteed to be valid too.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1538511', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/203030/']}
jdg_83840