source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths, 1-8) | prompt (stringlengths, 209-40.4k) | gold_standard_solution (stringlengths, 0-56.7k) | verification_info (stringclasses, 1 value) | metadata (stringlengths, 138-225) | problem_id (stringlengths, 9-10) |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 29971762 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to select my data by date - from a date until another date, so I have this query, SELECT * FROM mytalbe WHERE date BETWEEN '2014-10-09' AND '2014-10-10' But this query only return the data in '2014-10-09' , excluding the data in '2014-10-10' , unless I change the query to this below, SELECT * FROM mytalbe WHERE date BETWEEN '2014-10-09' AND '2014-10-11' This is not an ideal solution. How can I select the data including the data in '2014-10-10' ? NOTE: I think my problem is different from other duplicate questions becos, My date type is TEXT . I need to select the date's data without its time. It is an SQLite database. My sample data: sid nid timestamp date 1 20748 5 1412881193 2014-10-09 14:59:53 2 20749 5 1412881300 2014-10-09 15:01:40 3 20750 5 1412881360 2014-10-09 15:02:40
Now provide the response and nothing else.
| IF date is a timestamp, you'll need to do like: SELECT * FROM mytalbe WHERE date BETWEEN '2014-10-09 00:00:00' AND '2014-10-10 23:59:59' Or you can do, I believe: SELECT * FROM mytalbe WHERE DATE(date) BETWEEN '2014-10-09' AND '2014-10-10' Or, since it is a text field: SELECT * FROM mytalbe WHERE DATE_FORMAT(date,'%Y-%m-%d') BETWEEN '2014-10-09' AND '2014-10-10' | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29971762', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/413225/']} | jdg_73041 |
stackexchange | llm_judgeable_groundtruth_similarity | 2485 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While studying for Cisco and Microsoft certifications I read a lot about NIC Teaming, Etherchannels and similar stuff. I know you can do static teaming in MS Windows Server or creating a port-channel in Cisco or use Dynamic protocols such as LACP and PAgP. But they all offer load-balancing using source or destination MAC address or IP address, a combination XORed together or similar things, right? And it seems there's no way to increase your actual throughput if you have a long long file transfer from one server to another (ex. having 4 GigE bonded together to increase it to 4 Gbps). So I wonder if it is true? Is there a way to use all the bandwidth of bonded interfaces? I heard from few sources that there's something like SLB (Switch-Assisted Load Balancing w/ Fault Tolerance) which supports full load balancing and allows to use full bandwidth (like 4 Gbps from my example between two servers in one transaction). I've heard so many contradictory things about that, and I am totally confused. Help me guys!
Now provide the response and nothing else.
| At layer 2, all load balancing is, at best, done by an XOR or hash of the source and destination MAC, and if you're lucky, it may even read into layer 3 and hash that data too. At layer 3, however, where we're basically talking about multiple gateways (so, effectively, two physical links with a unique next-hop across each) you can max out the bandwidth across the links IF you're prepared to do per-packet balancing. Before I go on, per-packet balancing is generally a bad thing due to the fact that it can result in out-of-order packet delivery, this can be especially terrible with TCP connections, but that of course comes down to the implementation and most modern stacks can tolerate this relatively well. In order to do per-packet balancing, obviously one requirement is that the source and destination IP addresses are not at all on-link to the devices that have the multiple paths since they need to be routed in order for balancing to be possible. Redundancy can be achieved via a routing protocol such as BGP, OSPF, ISIS, RIP, or alternatively, BFD or simple link-state detection. Finally, there is of course a transport layer solution - protocols like SCTP support connecting to multiple endpoints, and TCP already has drafts in the making that will add options to do similar things. Or... you can just make your application open multiple sockets. | {} | {'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/2485', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/2093/']} | jdg_73042 |
stackexchange | llm_judgeable_groundtruth_similarity | 21314893 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am porting some code over to windows and my cmake checks for the package Libavahi using find_package(Libavahi) I have the headers, dll, etc. but I'm not sure where to place these such that cmake will find them. Where can I put these files to be found by cmake? They're in a folder called usr . I see that the module path is specified using: set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake/Modules/") but I'm wondering if there is a default location that will be searched as well
Now provide the response and nothing else.
| The CMake manual fully specifies the rather complicated search order for the different find_* commands. Unfortunately, since Windows lacks a default directory structure à la /usr/local/lib , it is hard to come up with reasonable defaults here. One of the most reliable ways of managing directories is through environment variable hints. You simply add an $ENV{MY_VAR} to the HINTS section of the find command and then document that environment variable in your project's readme. Most users that are capable of compiling a C++ program know how to use environment variables, and it is way more convenient than having to give the path on the command line every time (although it never hurts to leave that as an additional option). For find_package CMake offers a special mechanism on Windows called the package registry . CMake maintains a list of package information in the Windows registry under HKEY_CURRENT_USER\Software\Kitware\CMake\Packages\ . Packages build from source can register there using the export command. Other projects build later on the same machine will then be able to find that package without additional configuration. This is quite powerful if you need to build a lot of interdependent projects from source on the same machine. Update: Starting with version 3.12, CMake now implicitly considers the <PackageName>_Root environment variable a HINT for every find_package call. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21314893', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1370927/']} | jdg_73043 |
stackexchange | llm_judgeable_groundtruth_similarity | 221018 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've seen a lot of people talk about how to pentest and how NOT to get caught during engagements but have a hard time finding "How to behave when caught during a Red Team engagement". Red Teams are to simulate adversaries attacking systems. Many actions can't be done (or at least very hard to) with just some computers and Red Teams often have to go on-site and break in (legally). What I've seen so far is people succeeding in not getting caught. However, I haven't seen anyone talk about what to do when caught. It may just be some suspicion or even being chased by security (possibly armed). In cases where a Red Teamer is caught during an engagement, what should he/she do? Say "I'm a security tester. You've caught me so I'll just leave." Run away like a criminal with their stolen data (which sounds fun but dangerous) to be more like an actual criminal attacker Contact the employer to report it and get a "just continue" pass Quietly come along for some possible interrogation (I think this would be the safest) Update: I've made another question here which covers the 3rd parties not discussed in this question.
Now provide the response and nothing else.
| Always have your slip with you! This is the golden rule of Red Teaming! If you don't have your Permission to Attack with you, it's like driving without a driver's license. That said, if you are caught during an engagement, I recommend the following: Present a forged Permission to Attack. This way, you can see if criminals could possibly trick a security guard to letting them do their thing with a fake Permission to Attack. Present the real Permission to Attack. If a guard has not bought your fake slip, then it's time to hand in the real slip. If the guard believes you, it's time to pick up and leave the perimeter. A real attacker would have been stopped at this point. If the guard did not believe you, ask them kindly to talk to their supervisor. If they insist on not believing you and calling the police, so be it. You're not a criminal, so don't worry about it. Follow the police's orders. They'll take you with them to the station, where you can explain to the police that you are part of a Red Team Engagement, and that you have a permission to break into the company. They will double-check that, calling whoever is listed as the person who signed your Permission to Attack. In the happy case, they'll pick up the phone, explain that you are really hired to do that, and you'll be free to go. In the not-so-happy case, they won't pick up because it's 4 in the morning and their phone has no battery. Should this happen, you will probably spend the night in the police station. Worse things have happened. Call your employer in the morning, and they will reach the contact at the customer's company for you. What about the other options? Saying "I'm a security researcher. You've caught me so I'll just leave" will not be very helpful. In the eyes of a security guard, you're a criminal, caught in the middle of a crime. You will not have the choice of "just leaving". Run away like a criminal. A very bad idea. Probably the worst you could do. If the guard calls the police (they likely will), the costs could rise quite a lot and it would not make the customer happy to know they now have to pay the police for an unnecessary manhunt as well. However, you should absolutely include in your report if getting away from the perimeter after getting caught would have been a trivial effort or not. Contact the employer to get a "Just continue pass". That would miss the point of a Red Team Engagement. Once you have a "Just continue"-pass, you are not simulating how a real attacker would act. You would just go through the stuff of the company with their permission. | {} | {'log_upvote_score': 9, 'links': ['https://security.stackexchange.com/questions/221018', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/206331/']} | jdg_73044 |
stackexchange | llm_judgeable_groundtruth_similarity | 54353221 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I try to create a constructor in dart like Student(this._name) it doesn't work with private variables. I have already tried using setters but it doesn't work either. class Student{ var _id; var _name; Student(this.id, this.name); void set id(int id) => _id = id; void set name(String name) => _name = name; }
Now provide the response and nothing else.
| This is not supported because it would expose private implementation to the outside. If you'd rename var _id; to var _userId; you would break code that uses your class just by renaming a private field. See instead the comment below my answer. class Student{ var _id; var _name; Student({this._id, this._name}); // error void set id(int id) => _id = id; void set name(String name) => _name = name; } The alternative class Student{ var _id; var _name; Student({int id, String name}) : _id = id, _name = name; void set id(int id) => _id = id; void set name(String name) => _name = name; } | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/54353221', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9008011/']} | jdg_73045 |
stackexchange | llm_judgeable_groundtruth_similarity | 40226063 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does the set function call wipe out the dupes, but parsing a set literal does not? >>> x = Decimal('0')>>> y = complex(0,0)>>> set([0, x, y]){0}>>> {0, x, y}{Decimal('0'), 0j} (Python 2.7.12. Possibly same root cause as for this similar question)
Now provide the response and nothing else.
| Sets test for equality, and until there are new Python releases, the order in which they do this can differ based on the form you hand the values to the set being constructed, as I'll show below. Since 0 == x is true and 0 == y is true, but x == y is false , the behaviour here is really undefined , as the set assumes that x == y must be true if the first two tests were true too. If you reverse the list passed to set() , then you get the same output as using a literal, because the order of equality tests changes: >>> set([y, x, 0])set([0j, Decimal('0')]) and the same for reversing the literal: >>> {y, x, 0}set([0]) What's happening is that the set literal loads the values onto the stack and then the stack values are added to the new set object in reverse order. As long as 0 is loaded first , the other two objects are then tested against 0 already in the set. The moment one of the other two objects is loaded first, the equality test fails and you get two objects added: >>> {y, 0, x}set([Decimal('0'), 0j])>>> {x, 0, y}set([0j, Decimal('0')]) That set literals add elements in reverse is a bug present in all versions of Python that support the syntax, all the way until Python 2.7.12 and 3.5.2. It was recently fixed, see issue 26020 (part of 2.7.13, 3.5.3 and 3.6, none of which have been released yet). If you look at 2.7.12, you can see that BUILD_SET in ceval.c reads the stack from the top down: # oparg is the number of elements to take from the stack to addfor (; --oparg >= 0;) { w = POP(); if (err == 0) err = PySet_Add(x, w); Py_DECREF(w);} while the bytecode adds elements to the stack in reverse order (pushing 0 on the stack first): >>> from dis import dis>>> dis(compile('{0, x, y}', '', 'eval')) 2 0 LOAD_CONST 1 (0) 3 LOAD_GLOBAL 0 (x) 6 LOAD_GLOBAL 1 (y) 9 BUILD_SET 3 12 RETURN_VALUE The fix is to read the elements from the stack in reverse order; the Python 2.7.13 version uses PEEK() instead of POP() (and a STACKADJ() to remove the elements from the stack afterwards): for (i = oparg; i > 0; i--) { w = PEEK(i); if (err == 0) err = PySet_Add(x, w); Py_DECREF(w);}STACKADJ(-oparg); The equality testing issue has the same root cause as the other question; the Decimal() class is having some equality issues with complex here, which was fixed in Python 3.2 (by making Decimal() support comparisons to complex and a few other numeric types it didn't support before ). | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/40226063', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/674039/']} | jdg_73046 |
stackexchange | llm_judgeable_groundtruth_similarity | 37683 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A candidate to the French presidency tweeted The fall in industrial production in Spain, Italy and France of course has a link with the euro. However I have a couple of doubts on its accuracy because of its source and the arbitrary use of gauges and percentages. Is the data accurate? The graph shows an "euro effect" according to the OP. Is this an artifact of the way the graph is made? What happened to countries outside of the euro?
Now provide the response and nothing else.
| The raw statistics can be mined from http://stats.oecd.org which I have done using the following parameters (Sorry, it doesnt appear to be linkable!) Selection: Production of total Industry Frequency: Annual, last 30 years Countries: France, Germany, Italy & Spain The oecd stats system seems to put the datum point at 2010, which differs from the original question, but as all are compared equally this should not make too much of a difference. The resulting data looks like this: Producing the following chart Which differs from your original, but not vastly. I suspect there is some cherry-picking of data. As for the second part, how non-Euro nations compare, there are nine member states who do not use the Euro (Bulgaria, Croatia, Czech Republic, Denmark, Hungary, Poland, Romania, Sweden, and the United Kingdom). Not all of these countries are included in the OECD data, but most are and they can be compared to the wider Euro-zone. The chart looks as below, which does not look materially different between euro-zone and non-euro-zone nations. | {} | {'log_upvote_score': 6, 'links': ['https://skeptics.stackexchange.com/questions/37683', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/96/']} | jdg_73047 |
stackexchange | llm_judgeable_groundtruth_similarity | 2703640 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I get running process list using Python on Linux?
Now provide the response and nothing else.
| IMO looking at the /proc filesystem is less nasty than hacking the text output of ps . import ospids = [pid for pid in os.listdir('/proc') if pid.isdigit()]for pid in pids: try: print open(os.path.join('/proc', pid, 'cmdline'), 'rb').read().split('\0') except IOError: # proc has already terminated continue | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2703640', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/324814/']} | jdg_73048 |
stackexchange | llm_judgeable_groundtruth_similarity | 322642 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $f:X \to Y$ be a finite morphism of schemes.Let $\mathcal{F}$ be a sheaf of abelian groups on the the etale site of $X$ then we know that $R^{i}f_{*} \mathcal{F} = 0$ . Is this statement also true when $\mathcal{F}$ is sheaf of abelian groups on the fppf site?
Now provide the response and nothing else.
| No, it is not true. Let $k$ be an algebraically closed field of characteristic $p > 0$ and set $k' := k[x]/(x^2)$ . Let $f \colon \mathrm{Spec}(k') \rightarrow \mathrm{Spec}(k)$ be the corresponding map. Then the sequence $0 \rightarrow \mu_p \rightarrow \mathbb{G}_m \rightarrow \mathbb{G}_m \rightarrow 0$ will show you that $(R^1f_*(\mu_p))(k) \cong k'^{\times}/k'^{\times p} \neq 0$ . As far as I know, it is an open question whether such vanishing is true when $f$ is a closed immersion. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/322642', 'https://mathoverflow.net', 'https://mathoverflow.net/users/124202/']} | jdg_73049 |
stackexchange | llm_judgeable_groundtruth_similarity | 371 |
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to be able to run emacs on one computer: server $ emacs --daemon And then connect to it from another: local $ emacsclient -c server Is this possible? If so, how?
Now provide the response and nothing else.
| You cannot use emacsclient to connect to an Emacs instance running on a remote computer. This client-server concept is related to (local) processes, not network nodes. However, you can use various network technologies to log in to the remote computer, attach to an emacs server running there and display the emacsclient's screen locally. Depending on the operating system that is used the networking could be done via SSH (terminal/X-Forwarding), VNC, RDP etc. UPDATE : As some people pointed out, emacsclient actually does have an option to connect to the server via TCP. However, emacsclient was never meant to be used remotely, the TCP socket option is required for compatibility with non-UNIX systems (i.e. operating systems where UNIX domain sockets are not available, like Windows). | {} | {'log_upvote_score': 6, 'links': ['https://emacs.stackexchange.com/questions/371', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/278/']} | jdg_73050 |
stackexchange | llm_judgeable_groundtruth_similarity | 25205845 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'am using EGit and I commited a change to my local git repository. I'd like to push this change to a remote.When doing that, a dialog screen pops up which shows "rejected-master-master-non-fast-forward" . The answer of this linked question states that I have to pull first. When doing the pull, an EGit exception is thrown: org.eclipse.jgit.api.errors.TransportException: Nothing to fetch. at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:139) at org.eclipse.jgit.api.PullCommand.call(PullCommand.java:253) at org.eclipse.egit.core.op.PullOperation$1.run(PullOperation.java:97) at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:2344) at org.eclipse.egit.core.op.PullOperation.execute(PullOperation.java:128) at org.eclipse.egit.ui.internal.pull.PullOperationUI.execute(PullOperationUI.java:139) at org.eclipse.egit.ui.internal.pull.PullOperationUI$1.runInWorkspace(PullOperationUI.java:114) at org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:38) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)Caused by: org.eclipse.jgit.errors.TransportException: Nothing to fetch. at org.eclipse.jgit.transport.Transport.fetch(Transport.java:1087) at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:130) ... 8 more It seems that I'm stuck. Who can help me out? UPDATE The .git/config file in my repository contains (remote URL hid): [core] repositoryformatversion = 0 filemode = true bare = false logallrefupdates = true ignorecase = true[branch "master"] remote = origin merge = refs/heads/master[remote "origin"] url = <URL_HIDDEN> I'm using Eclipse Git Team Provider 3.4.1.201406201815
Now provide the response and nothing else.
| The configuration of the repository seems to miss this line in the [remote "origin"] section: fetch = +refs/heads/*:refs/remotes/origin/* It tells git to fetch all refs starting with refs/heads/ (all branches) and store them under refs/remotes/origin/ locally. E.g. the refs/heads/master in the remote repository will become refs/remotes/origin/master locally (or origin/master in short). Can you add that and try if it makes pull work? Also, it would be very interesting to know in which way you first created/cloned this repository. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25205845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/324152/']} | jdg_73051 |
stackexchange | llm_judgeable_groundtruth_similarity | 455651 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Many of us have seen the evaluation of the integral $$\int^{\infty}_0 \frac{dx}{x^p(1+x)}\, dx \,\,\, 0<\Re(p)<1$$ It can be solved using contour integration or beta function . I thought of how to solve the integral $$\int^{\infty}_0 \frac{\log(1+x)}{x^p(1+x)}\, dx \,\,\, 0<\Re(p)<1$$ It can be solved using real methods as follows consider the following integral $$\int^{\infty}_0 x^{-p}(1+x)^{s-1} dx= \frac{\Gamma(1-p)\Gamma(p-s)}{\Gamma(1-s)}$$ Differentiating with respect to $s$ we get $$\int^{\infty}_0 x^{-p}(1+x)^{s-1}\log(1+x) dx=\frac{\Gamma(1-p)\Gamma(p-s)}{\Gamma(1-s)} \left(\psi_0 (1-s)- \psi_0(p-s)\right)$$ at $s =0$ we get $$\int^{\infty}_0 x^{-p}\frac{\log(1+x)}{1+x} dx=\frac{\pi}{\sin(\pi p)} \left(\psi_0 (1)- \psi_0(p)\right)$$ where i used the reflection formula . Statement of question How to solve the following integral using contour integration $$\int^{\infty}_0 \frac{\log(1+x)}{x^p(1+x)}\, dx \,\,\, 0<\Re(p)<1$$ I thought we can use the following contour So the function $$F(z) = \frac{e^{-p \log(z)}\log(1+z)}{(1+z)} $$ is analytic in and on the contour by choosing the branch cut of $e^{-p \log(z)}$ as $0\leq \text{Arg}(z)<2\pi$ and the branch cut of $\log(1+z)$ as $0\leq \text{Arg}(z+1)<2\pi$ so the function $F(z)$ is analytic everywhere except at $z\geq -1$ . I am finding difficulty finding the integral on the branch point $z=-1$ it seems there is a contribution of the branch point and the pole . Please don't make any substitutions or simplifications for the integral. Feel free to use another contour if my choice was wrong .
Now provide the response and nothing else.
| To evaluate the integral using contour integration, consider the integral $$\oint_C dz \frac{z^{-p} \log{(1+z)}}{1+z} $$ where $C$ is the following contour: The magnitude of the integral about the large arc of radius $R$ behaves as $\frac{\log{R}}{R^p}$ as $R \to \infty$ and thus vanishes. Let the radius of the small circular arcs be $\epsilon$. The contour integral is then equal to, in this limit, $$\left (1-e^{-i 2 \pi p} \right ) \int_0^{\infty} dx \frac{x^{-p} \log{(1+x)}}{1+x} - e^{-i \pi p}\int_{\infty}^{1+\epsilon} dx \frac{x^{-p} [\log{(x-1)}+i \pi]}{1-x} \\ - e^{-i \pi p}\int_{1+\epsilon}^{\infty} dx \frac{x^{-p} [\log{(x-1)}-i \pi]}{1-x}+i \epsilon \int_{\pi}^{-\pi} d\phi \,e^{i \phi} \frac{(e^{i \pi}+\epsilon e^{i \phi})^{-p} \log{(\epsilon e^{i \phi})}}{\epsilon e^{i \phi}}$$ The first integral represents the integral about the branch cut along the positive real axis, which concerns the $z^{-p}$ term only. Note that the integral about the origin vanishes as $\epsilon \to 0$. The second and third integrals represent the integrals along each side of the branch cut on the negative axis. Note that, along this branch cut, the argument of $z^{-p}$ is $-\pi p$ on either side of the branch cut, as the branch cut there concerns the log term only. The fourth integral is the integral about the branch point $z=-1$. By Cauchy's theorem, the contour integral is zero. Thus, we have $$\left (1-e^{-i 2 \pi p} \right ) \int_0^{\infty} dx \frac{x^{-p} \log{(1+x)}}{1+x} = i 2 \pi \, e^{-i \pi p} \int_{1+\epsilon}^{\infty} dx \frac{x^{-p}}{x-1} + i e^{-i \pi p} \int_{-\pi}^{\pi} d\phi \left ( \log{\epsilon} + i \phi \right )$$ Note that $$\begin{align}\int_{1+\epsilon}^{\infty} dx \frac{x^{-p}}{x-1} &= \int_0^{1-\epsilon} dx \frac{x^{p-1}}{1-x} + O(\epsilon) \end{align} $$ and $$\begin{align}\int_0^{1-\epsilon} dx \frac{x^{p-1}}{1-x} &= \int_0^{1-\epsilon} dx \, x^p \left (\frac1x + \frac1{1-x} \right ) \\ &= \frac1p (1-\epsilon)^p - \left [\log{(1-x)} x^p \right ]_0^{1-\epsilon} + p \int_0^{1-\epsilon} dx \, x^{p-1} \log{(1-x)}\\ &= \frac1p - \log{\epsilon} + p \int_0^1 dx \, x^{p-1} \log{(1-x)} + O(\epsilon)\\ &= \frac1p - \log{\epsilon} - p \sum_{k=1}^{\infty} \frac1{k (k+p)}+ O(\epsilon) \\ &= \frac1p - \log{\epsilon}-\gamma -\psi(1+p)+ O(\epsilon) \\ &= - \log{\epsilon} - (\gamma + \psi(p))+ O(\epsilon) \end{align} $$ where $\psi$ is the digamma function. Note that the singular $\log{\epsilon}$ pieces cancel. The second piece of the second integral on the RHS vanishes as it is an odd function over a symmetric interval. Thus, we may take the limit as $\epsilon \to 0$ and we get $$\int_0^{\infty} dx \frac{x^{-p} \log{(1+x)}}{1+x} = -\frac{i 2 \pi \, e^{-i \pi p}}{1-e^{-i 2 \pi p}} (\gamma + \psi(p)) = -\frac{\pi}{\sin{\pi p}} (\gamma + \psi(p))$$ Alternatively, we may express this so that it is clear that the integral takes a positive value: $$\int_0^{\infty} dx \frac{x^{-p} \log{(1+x)}}{1+x} = \frac{\pi}{\sin{\pi p}} \left (\frac1p - H_p \right ) $$ where $H_p$ is the analytically continued harmonic number at $p$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/455651', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/87813/']} | jdg_73052 |
stackexchange | llm_judgeable_groundtruth_similarity | 5069293 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using the @AspectJ style for writing aspects, to handle logging in our application. Basically I have a pointcut set up like so: @Pointcut("call(public * com.example..*(..))")public void logging() {} and then a before and after advice like so: @Before("logging()")public void entering() {...}...@After("logging()")public void exiting() {...} I want to create a log in these methods in the following format: logger.trace("ENTERING/EXITING [" className + "." + methodName "()]"); The problem is I don't know how to get a reference to the class and method names. I have tried: joinPoint.getThis().getClass() but this seems to return the caller's class name. class A { public void a() { B.b(); }}class B { public void b() { ... }} would result in the following log ENTERING [A.b()] can someone give some help on how to get the actual joinpoint class and method name
Now provide the response and nothing else.
| You need to use joinPoint.getTarget().getClass() . Since you are using advising a call join point, the object of your interest is the target of the call. Please note the API specification states : Returns the target object. This will always be the same object as that matched by the target pointcut designator. Unless you specifically need this reflective access, you should use the target pointcut designator to get at this object for better static typing and performance. Returns null when there is no target object. Blindly using joinPoint.getTarget().getClass() can result in a NullPointerException . Consider using the join point's signature, such as: final Signature signature = joinPoint.getSignature(); Then: final Class clazz = signature.getDeclaringType(); Or if all you need is the class name: final String clazz = signature.getDeclaringTypeName(); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5069293', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/274560/']} | jdg_73053 |
stackexchange | llm_judgeable_groundtruth_similarity | 8375059 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Ruby array with account ids. I would like to store the Ruby array of account ids in a Javascript array. I was wondering the best way to do this? In addition, when I've been trying to do this it seems that Javascript think if there is only one account id entered then that should be the size of the array. Is there a way around this? I've tried putting it in quotes but that does not seem to work.
Now provide the response and nothing else.
| Let's assume you are using erb . A first approach: <%= javascript_tag "account_ids = #{account_ids.to_json.html_safe};" %> The problem is that this creates a global variable without context (who uses it?). That's why I'd rather call a function defined somewhere in your JS code: <%= javascript_tag "setAccounts(#{account_ids.to_json.html_safe});" %> | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8375059', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/393440/']} | jdg_73054 |
stackexchange | llm_judgeable_groundtruth_similarity | 4167867 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $\dfrac{1-\sin x}{1+\sin x}=4$ , what is the value of $\tan \frac{x}2$ ? $1)-3\qquad\qquad2)2\qquad\qquad3)-2\qquad\qquad4)3$ Here is my method: $$1-\sin x=4+4\sin x\quad\Rightarrow\sin x=-\frac{3}5$$ We have $\quad\sin x=\dfrac{2\tan(\frac x2)}{1+\tan^2(\frac{x}2)}=-\frac35\quad$ . by testing the options we can find out $\tan(\frac x2)=-3$ works (although by solving the quadratic I get $\tan(\frac x2)=-\frac13$ too. $-3$ isn't the only possible value.) I wonder is it possible to solve the question with other (quick) approaches?
Now provide the response and nothing else.
| This is the same way to go as in the OP, maybe combining the arguments looks simpler. Let $t$ be $t=\tan(x/2)$ for the "good $x$ " satisfying the given relation. Then $\displaystyle \sin x=\frac {2t}{1+t^2}$ , so $$4=\frac{1-\sin x}{1+\sin x}=\frac{(1+t^2)-2t}{(1+t^2)+2t}=\left(\frac{1-t}{1+t}\right)^2\ .$$ This gives for $(1-t)/(1+t)$ the values $\pm 2$ , leading to the two solutions $-3$ and $-1/3$ mentioned in the OP. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4167867', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/717650/']} | jdg_73055 |
stackexchange | llm_judgeable_groundtruth_similarity | 25894495 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
var songs = MPMediaQuery()var localSongs = songs.itemssongList = NSMutableArray(array: localSongs)tableView.reloadData()var song = MPMediaItem(coder: songList[0] as NSCoder)var currentItem = AVPlayerItem(URL: song.valueForProperty(MPMediaItemPropertyAssetURL) as NSURL)player.replaceCurrentItemWithPlayerItem(currentItem)player.play()var songTitle: AnyObject! = song.valueForProperty(MPMediaItemPropertyTitle)songName.text = songTitle as? StringsliderOutlet.value = Float(player.currentTime()) // <<-Error here I'm building a music player and I want a slider to show the duration of the song, but I get this error Could not find an overload for 'init' that accepts the supplied arguments I think the problem is converting CMTime to Float.
Now provide the response and nothing else.
| CMTime is a structure , containing a value , timescale and other fields, so you cannot just "cast" it to a floating point value. Fortunately, there is a conversion function CMTimeGetSeconds() : let cmTime = player.currentTime()let floatTime = Float(CMTimeGetSeconds(player.currentTime())) Update: As of Swift 3, player.currentTime returns a TimeInterval which is a type alias for Double .Therefore the conversion to Float simplifies to let floatTime = Float(player.currentTime) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25894495', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2628367/']} | jdg_73056 |
stackexchange | llm_judgeable_groundtruth_similarity | 2043947 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an application where part of the inner loop was basically: double sum = 0;for (int i = 0; i != N; ++i, ++data, ++x) sum += *data * x; If x is an unsigned int, then the code takes 3 times as long as with int! This was part of a larger code-base, but I got it down to the essentials: #include <iostream> #include <cstdlib> #include <vector>#include <time.h>typedef unsigned char uint8;template<typename T>double moments(const uint8* data, int N, T wrap) { T pos = 0; double sum = 0.; for (int i = 0; i != N; ++i, ++data) { sum += *data * pos; ++pos; if (pos == wrap) pos = 0; } return sum;}template<typename T>const char* name() { return "unknown"; }template<>const char* name<int>() { return "int"; }template<>const char* name<unsigned int>() { return "unsigned int"; }const int Nr_Samples = 10 * 1000;template<typename T>void measure(const std::vector<uint8>& data) { const uint8* dataptr = &data[0]; double moments_results[Nr_Samples]; time_t start, end; time(&start); for (int i = 0; i != Nr_Samples; ++i) { moments_results[i] = moments<T>(dataptr, data.size(), 128); } time(&end); double avg = 0.0; for (int i = 0; i != Nr_Samples; ++i) avg += moments_results[i]; avg /= Nr_Samples; std::cout << "With " << name<T>() << ": " << avg << " in " << (end - start) << "secs" << std::endl;}int main() { std::vector<uint8> data(128*1024); for (int i = 0; i != data.size(); ++i) data[i] = std::rand(); measure<int>(data); measure<unsigned int>(data); measure<int>(data); return 0;} Compiling with no optimization: luispedro@oakeshott:/home/luispedro/tmp/so §g++ test.cpp luispedro@oakeshott:/home/luispedro/tmp/so §./a.outWith int: 1.06353e+09 in 9secsWith unsigned int: 1.06353e+09 in 14secsWith int: 1.06353e+09 in 9secs With optimization: luispedro@oakeshott:/home/luispedro/tmp/so §g++ -O3 test.cppluispedro@oakeshott:/home/luispedro/tmp/so §./a.outWith int: 1.06353e+09 in 3secsWith unsigned int: 1.06353e+09 in 12secsWith int: 1.06353e+09 in 4secs I don't understand why such a large difference in speed. I tried figuring it out from the generated assembly, but I got nowhere. Anyone have any thoughts? Is this something to do with the hardware or is it a limitation of gcc's optimisation machinery? I'm betting the second. My machine is an Intel 32 bit running Ubuntu 9.10. Edit : Since Stephen asked, here is the de-compiled source (from a -O3 compilation). 
I believe I got the main loops: int version: 40: 0f b6 14 0b movzbl (%ebx,%ecx,1),%edx sum += *data * pos;44: 0f b6 d2 movzbl %dl,%edx47: 0f af d0 imul %eax,%edx ++pos;4a: 83 c0 01 add $0x1,%eax sum += *data * pos;4d: 89 95 54 c7 fe ff mov %edx,-0x138ac(%ebp) ++pos; if (pos == wrap) pos = 0;53: 31 d2 xor %edx,%edx55: 3d 80 00 00 00 cmp $0x80,%eax5a: 0f 94 c2 sete %dl T pos = 0; double sum = 0.; for (int i = 0; i != N; ++i, ++data) {5d: 83 c1 01 add $0x1,%ecx sum += *data * pos;60: db 85 54 c7 fe ff fildl -0x138ac(%ebp) ++pos; if (pos == wrap) pos = 0;66: 83 ea 01 sub $0x1,%edx69: 21 d0 and %edx,%eax T pos = 0; double sum = 0.; for (int i = 0; i != N; ++i, ++data) {6b: 39 f1 cmp %esi,%ecx sum += *data * pos;6d: de c1 faddp %st,%st(1) T pos = 0; double sum = 0.; for (int i = 0; i != N; ++i, ++data) {6f: 75 cf jne 40 unsigned version: 50: 0f b6 34 13 movzbl (%ebx,%edx,1),%esi sum += *data * pos;54: 81 e6 ff 00 00 00 and $0xff,%esi5a: 31 ff xor %edi,%edi5c: 0f af f0 imul %eax,%esi ++pos;5f: 83 c0 01 add $0x1,%eax if (pos == wrap) pos = 0;62: 3d 80 00 00 00 cmp $0x80,%eax67: 0f 94 c1 sete %cl T pos = 0; double sum = 0.; for (int i = 0; i != N; ++i, ++data) {6a: 83 c2 01 add $0x1,%edx sum += *data * pos;6d: 89 bd 54 c7 fe ff mov %edi,-0x138ac(%ebp)73: 89 b5 50 c7 fe ff mov %esi,-0x138b0(%ebp) ++pos; if (pos == wrap) pos = 0;79: 89 ce mov %ecx,%esi7b: 81 e6 ff 00 00 00 and $0xff,%esi sum += *data * pos;81: df ad 50 c7 fe ff fildll -0x138b0(%ebp) ++pos; if (pos == wrap) pos = 0;87: 83 ee 01 sub $0x1,%esi8a: 21 f0 and %esi,%eax for (int i = 0; i != N; ++i, ++data) {8c: 3b 95 34 c7 fe ff cmp -0x138cc(%ebp),%edx sum += *data * pos;92: de c1 faddp %st,%st(1) for (int i = 0; i != N; ++i, ++data) {94: 75 ba jne 50 This is the -O3 version, which is why the source lines jump up and down.Thank you.
Now provide the response and nothing else.
| Here's why: many common architectures (including x86) have a hardware instruction to convert signed int to doubles, but do not have a hardware conversion from unsigned to double, so the compiler needs to synthesize the conversion in software. Furthermore, the only unsigned multiply on Intel is a full width multiply, whereas signed multiplies can use the signed multiply low instruction. GCC's software conversion from unsigned int to double may very well be suboptimal (it almost certainly is, given the magnitude of the slowdown that you observed), but it is expected behavior for the code to be faster when using signed integers. Assuming a smart compiler, the difference should be much smaller on a 64-bit system, because a 64-bit signed integer -> double conversion can be used to efficiently do a 32-bit unsigned conversion. Edit: to illustrate, this: sum += *data * x; if the integer variables are signed, should compile into something along these lines: mov (data), %eaximul %ecx, %eaxcvtsi2sd %eax, %xmm1addsd %xmm1, %xmm0 on the other hand, if the integer variables are unsigned, cvtsi2sd can't be used to do the conversion, so a software workaround is required. I would expect to see something like this: mov (data), %eax mul %ecx // might be slower than imul cvtsi2sd %eax, %xmm1 // convert as though signed integer test %eax, %eax // check if high bit was set jge 1f // if it was, we need to adjust the converted addsd (2^32), %xmm1 // value by adding 2^321: addsd %xmm1, %xmm0 That would be "acceptable" codegen for the unsigned -> double conversion; it could easily be worse. All of this is assuming floating-point code generation to SSE (I believe this is the default on the Ubuntu tools, but I could be wrong). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2043947', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/248279/']} | jdg_73057 |
stackexchange | llm_judgeable_groundtruth_similarity | 48078042 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Could you please explain what this strange code does? expression.Compile()(); Why are there 2 pairs of parentheses here? I didn't find anything in google. The full method is public Validator NotEmpty(Expression<Func<IEnumerable<T>>> expression){ var member = (MemberExpression)expression.Body; string propertyName = member.Member.Name; IEnumerable<T> value = expression.Compile()(); if (value == null || !value.Any()) { ValidationResult.AddError(propertyName, "Shouldn't be empty"); } return this;} It is used like this: _validator.NotEmpty(() => request.PersonIds); // request.PersonIds is List<int> This method checks if a collection is empty or null. Everything works fine but I am a little bit confused with that code. I have never seen using 2 pairs of parentheses before in C#. What does it mean?
Now provide the response and nothing else.
| Well, you pass list of int into the method as expression tree. This expression produces the value of IEnumerable<T> (in this case IEnumerable<int> ). To get value of expression you need to compile this expression into a delegate Func<IEnumerable<T>> and then invoke the delegate. In fact, I can write two separate lines of code instead of the shorter syntax used above: Func<IEnumerable<T>> del = expression.Compile();IEnumerable<T> value = del(); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/48078042', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9168227/']} | jdg_73058 |
stackexchange | llm_judgeable_groundtruth_similarity | 274135 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently studying Statistical Inference class on Coursera. In one of the assignments, the following question comes up. | Suppose you rolled the fair die twice. What is the probability of rolling the same number two times in a row?1: 2/62: 1/363: 04: 1/6Selection: 2| You're close...I can feel it! Try it again.| Since we don't care what the outcome of the first roll is, its probability is 1. The second roll of the dice has to match the outcome of the first, so that has a probability of 1/6. The probability of both events occurring is 1 * 1/6. I do not understand this bit. I understand that the two die rolls are independent events and their probabilities can be multiplied, so the outcome should be 1/36. Can you please explain, why I am wrong?
Now provide the response and nothing else.
| The probability of rolling a specific number twice in a row is indeed 1/36, because you have a 1/6 chance of getting that number on each of two rolls (1/6 x 1/6). The probability of rolling any number twice in a row is 1/6, because there are six ways to roll a specific number twice in a row (6 x 1/36). Another way to think about it is that you don't care what the first number is, you just need the second number to match it (with probability 1/6 ). | {} | {'log_upvote_score': 7, 'links': ['https://stats.stackexchange.com/questions/274135', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/157594/']} | jdg_73059 |
stackexchange | llm_judgeable_groundtruth_similarity | 35713 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I believe there is a straightforward formula for the abelianization of a semi-direct product: if $G$ acts on $H$, and we form the semi-direct product of $G$ and $H$ in the usual way, and the abelianization of this semi-direct product is the product $G^{ab}\times (H^{ab})_{G}$. (Here the subscript $G$ denotes taking the coinvariants with respect to $G$. That is, $(H^{ab})_{G}$ is a the quotient of $H^{ab}$ by the subgroup generated by elements of the form $h^g-h$ for $h$ in $H$ and $g$ in $G$, and where the superscript $g$ denotes the action of $G$ on $H^{ab}$ induced by the action of $G$ on $H$.) Does anyone happen to know a good reference for this?
Now provide the response and nothing else.
| I agree with Ryan and Victor, except that you don't need presentations. The subgroup $[G \ltimes H,G \ltimes H]$ is generated by $[H,H] \cup [G,H] \cup [G,G]$, so you can write$$(G \ltimes H)^{ab} = (G \ltimes H) / \langle [H,H] \cup [G,H] \cup [G,G] \rangle.$$If you apply the relators $[H,H]$, you get $G \ltimes H^{ab}$; then if you apply the relators $[G,H]$, you get $G \times (H^{ab})_G$; then finally if you apply $[G,G]$, you get $G^{ab} \times (H^{ab})_G$. You can add this as an extra half-paragraph or footnote rather than giving a citation. I don't think that the referee has the right to demand a longer explanation than this, unless maybe you are writing a textbook. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/35713', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3513/']} | jdg_73060 |
stackexchange | llm_judgeable_groundtruth_similarity | 1746413 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have to show that: for all vectors $v\in \Bbb R^n$ : $\lim_{p\to \infty}||v||_p = \max_{1\le i \le n}|v_i|$ with the $||\cdot ||_p$ norm defined as $$ ||\cdot ||_p: (v_1, \dots ,v_n) \to (\sum^n_{i=i} |v_i|^p)^{1/p} $$ I think I once read something about mixing the root and the same power with the power going to infinity but i can't really remember anything concrete.Any Ideas? Thanks in advance
Now provide the response and nothing else.
| Hint: For the upper bound, observe that $$\left(\sum_{i=1}^n |v_i|^p\right)^{1/p}\leq\left(\sum_{i=1}^n \max|v_i|^p\right)^{1/p}=n^{1/p}\max|v_i|.$$ For the lower bound, observe that $$\left(\sum_{i=1}^n |v_i|^p\right)^{1/p}\geq\left( \max|v_i|^p\right)^{1/p}=\max|v_i|.$$ Now, take limits. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1746413', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/206905/']} | jdg_73061 |
stackexchange | llm_judgeable_groundtruth_similarity | 679896 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I installed NordVPN from the AUR (nordvpn-bin package) around a week or two weeks ago. After installing and getting logged in it worked as it was supposed to. However, after rebooting my computer, every time I try to connect, no matter what server I try to connect to, I get the following message: at 07:44:37 ❯❯❯ nordvpn connect chicagoConnecting to United States #8798 (us8798.nordvpn.com)Whoops! We couldn't connect you to 'chicago'. Please try again. If the problem persists, contact our customer support. I tried logging out and back in, restarting nordvpnd, and running as sudo. All of my packages are up to date. I'm not sure what else to try. Any ideas?
Now provide the response and nothing else.
| The reason is that the shell from which you call the script expands the globbing pattern ./* before it is passed to the script. That means, that if your globbing pattern matches e.g. file1.txt to file4.txt , calling the script as ./my_script.sh ./* will actually be interpreted as ./my_script.sh file1.txt file2.txt file3.txt file4.txt and these will be the arguments that the shell script sees. For further reading, have a look at the section on shell expansion order in the Bash Reference manual. There are two possibilities to overcome the problem: If you are sure that you always want to iterate over all files in a given directory, pass the directory as argument, and iterate over for f in "$1"/*do # operations on "$f"done Alternative, if you are sure that you will only pass file names to operate on, iterate over the entire argument list, as in for f in "$@"do # operations on "$f"done If you want to do it by passing a glob pattern into the script - which certainly is an interesting exercise - this is also possible (see comment by @ilkkachu). As mentioned by @fra-san in a comment, the approach has advantages - it can add more flexibility to the script usage, and it circumvents the limitation on shell command-line parameters (cf. "argument list too long"; though RAM will still limit the length of the resulting filename list) - but requires you to be extra careful. You can prevent the shell from expanding the glob by enclosing the argument in quotes (single or double), or escaping the glob character with a backslash: ./my_script.sh "./*"./my_script.sh './*'./my_script.sh ./\* Inside the script, you would refer to the positional parameter $1 unquoted so that it actually is interpreted by the shell (something that we often want to avoid). Since that "interpretation" not only involves expansion (see above), but also word splitting you need to set the input field separator IFS to the empty string to ensure that no word-splitting occurs. The for loop would then look like IFS=for f in $1do # Operations on "$f"done A few general notes on your script: Always quote shell variables , in particular when they contain filenames, as otherwise your script will stumble on filenames with spaces or other, even more exotic characters in them - remember, even the newline is an allowed character for filenames (yuck)! Parsing the output of ls is highly discouraged for similar reasons. If you want to identify attributes of a particular file, the stat tool is a better choice. For determining the size of a file, e.g., you could use size=$(stat --printf="%s" "$f") It is recommended to use the "new" $( ... ) style for command substitutions rather then the old "backtick-style" ` ... ` . It is a good habit to check your shell scripts with shellcheck , also available as standalone tool in many Linux distributions, to guard against this (and other) possible error sources. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/679896', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/85047/']} | jdg_73062 |
stackexchange | llm_judgeable_groundtruth_similarity | 6022715 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The Challenge : I have an Linux hand-held device, which records data and stores it to the disc. It should exchange these data with a Windows application via USB. When this data is accessible by the user - e.g. via USB-mass-storage - it has to be encrypted. It should work out-of-the-box, with a variety of OS, also for citrix terminal sessions, etc. The Plan : I create a file-system in user-space with FUSE , and offer it to windows via mass-storage. Whenever Windows accesses one file, I get a callback and encrypt the data on the fly. Further more, we can have some dynamic content - e.g. when some password is written into a file, more content is shown. The Problem : When using the mass-storage gadget (e.g. g_file_storage ) it only accepts files or block-devices - but no file-systems (directories). Why? [...] it provides a simple interface to read and write sectors of data—much like the low-level interface used to access any hard drive [...]. Operating systems may treat the USB drive like a hard drive, and can format it with any file system they like. (from wikipedia ) So there is no chance to have a dynamic file-system via mass-storage... and this seems to be the reason, why my android mobile phone un-mounts all data on the phone, when I connect it to the PC. Options : Create a ' block-device in userspace ' - similar to FUSE (need a reverse-FAT-driver, when I want to offer files dynamically) Implement my own nbd-server to create a block-device (need a reverse-FAT-driver as well?) I save encrypted files to a partition, and pass this partition to the mass-storage gadget (Problem would be the performance and the lack of dynamic interaction) Do not offer a mass-storrage device and watch out for other ideas (eth over USB) At the moment, only the last option seems to be realistic - or do you have another tip for me? I would be grateful! Charly
Now provide the response and nothing else.
| The USB mass storage protocol is a block device protocol; it does not operate at the level of files or directories. The Windows host expects to see a raw VFAT filesystem exposed by the g_mass_storage driver, and will make writes and reads to VFAT metadata as appropriate to figure out how directories are structured. Because of this, it's nontrivial to expose a FUSE filesystem to the windows host. You'd have to emulate VFAT, assigning blocks in the virtual filesystem to metadata and to data, and since the windows host is free to cache any data or metadata it reads, once you assign some metadata or data it cannot change (so changes to your FUSE data could not be reflected in the windows filesystem). The windows host can also delay and reorder writes to both metadata and data - it's all real mess if you try to emulate. Now, there are some things you can do: You can write a custom IFS driver on the windows side to interact with your linux device over a custom protocol that works at the file/directory level. You can treat the USB device as a virtual ethernet port, and speak CIFS to the windows host You can somehow create a static VFAT layout on connection time to expose to the windows host; not-yet-decryptable data can return I/O errors in order to avoid the Windows host caching raw encrypted data. You can just encrypt a raw block device using dm-crypt, and expose this entire block device (encrypted as one chunk) to windows. You could implement a MTP gadget. These approaches come with their own problems: Requires a windows driver to be installed, and to be signed by microsoft etc. Can't be used on a machine without administrative access to install the driver. Won't autoplay; the user would need to browse through the network browser to get access to the files. Firewall settings may interfere. May have significant overhead. Very complex. Handling metadata updates on the backend may be extremely difficult. Surprise unplug events may be devastating. Windows behavior on receiving an IO error may be a problem if the user tries to access a locked file. No file-level encryption is available, but otherwise should work well. I'm unsure exactly how much support for non-media files MTP has; support is not as widespread as mass storage support. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6022715', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/494725/']} | jdg_73063 |
stackexchange | llm_judgeable_groundtruth_similarity | 7093360 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does this declaration, public enum ECountry : long{ None, Canada, UnitedStates} require a cast for any of its values? long ID = ECountry.Canada;// Error Cannot implicitly convert type 'ECountry' to 'long'.// An explicit conversion exists (are you missing a cast?) And is there a way to get a long value directly from the enum, besides casting? This would not work either, for example: public enum ECountry : long{ None = 0L, Canada = 1L, UnitedStates=2L}
Now provide the response and nothing else.
| The issue is not that the underlying type is still int; the underlying type here really is long, and you can assign long values to the members. However, you can never just assign an enum value to an integral type without an explicit cast. This should work: public enum ECountry : long{ None, Canada, UnitedStates = (long)int.MaxValue + 1}// val will be equal to the *long* value int.MaxValue + 1long val = (long)ECountry.UnitedStates; | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7093360', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/444469/']} | jdg_73064
stackexchange | llm_judgeable_groundtruth_similarity | 6401039 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a project in which I have to achieve fast search, insert and delete operations on data ranging from megabytes to terabytes. I have been studying data structures of late and analyzing them. To be specific, I want to introduce 3 cases and ask questions about them: The data is much more than what the memory can handle (sample ranges of 10-15 terabytes) at one go. In this case, I would store the data structure on disk. The data is relatively small compared to the memory of the system and thus can be stored and operated on in memory itself for speed. The data is more than the free memory; assume it is less than the size of a possible contiguous chunk of data in the paging file. Thus I store the data structure in a file on disk and memory-map the file. The conclusions I have drawn are: For case 1, I should use a B-Tree for faster access as it saves on the lag produced by disk rotation. For case 2, I should use a Red-Black Tree for faster access as the data is in memory and the number of elements that need to be scanned in the worst case would be less than what I would have to scan if I used a B-Tree. For case 3, I am doubtful on this one; the page file is on disk and uses native OS I/O to operate on files, so should a B-Tree be the better option or a Red-Black tree? I want to know where the above three conclusions go right, where they go wrong, and how I can improve performance in the three separate cases. I am using the C++ language, with a red-black tree and a B-tree, both of which I have designed from scratch. I am using the Boost library for file mapping. Update 1: I was reading through this post on Stack Overflow and got some really good insights, which make me feel that the type of comparisons I have done in the cases may be faulty. A link was posted in the most-voted-for answer: http://idlebox.net/2007/stx-btree/stx-btree-0.8.3/doxygen-html/speedtest.html
Now provide the response and nothing else.
| A red/black tree is more or less equivalent to a 2-3-4 tree, which is just a type of B-tree. The worst-case performance is identical, provided you do a binary search of the B-tree node values. The obvious disadvantage of a B-tree is wasted space, but depending on the language/memory allocator used, you may find that a 2-3-4 tree uses less space than a red-black tree on average. For instance, in 32-bit Java, there's approximately an 8-byte overhead per object. (It also depends a lot on the allocator; IIRC phkmalloc rounds up small allocations to a power-of-2 size.) To answer your cases, Disk latency is roughly evenly split between seek time and waiting for the disk to rotate. A B-tree should be able to outperform a red-black tree if you're doing it right (in particular, a B-tree should be faster if nodes fit into a cacheline.) It doesn't need to be contiguous in the page file; it merely needs to be contiguous in the process's virtual address space. For sane OSes, it's also pretty much identical to case 1, unless your data is small enough that it mostly fits into memory and the memcpy overhead is significant. For simplicity, I'd go with a B-tree and run some benchmarks on various node sizes. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6401039', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/805122/']} | jdg_73065 |
stackexchange | llm_judgeable_groundtruth_similarity | 49682644 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have found a lot of information from the past saying that LDAP authentication isn't enabled yet but you can get around that using third party packages. However, it seems that LDAP authentication WAS implemented back in January . I can't seem to find any information on HOW to implement it. I already have custom authentication set up in my project, I just need the logic to fill in the HandleAuthenticateAsync method. I have tried using other examples , but they don't seem to work with .NET Core 2.0. Here is the only relevant code that I have that I can think of posting protected override Task<AuthenticateResult> HandleAuthenticateAsync(){ // Get Authorization header value if (!Request.Headers.TryGetValue(HeaderNames.Authorization, out var authorization)) { return Task.FromResult(AuthenticateResult.Fail("Cannot read authorization header.")); } // TODO: Authenticate user // Create authenticated user ticket var identities = new List<ClaimsIdentity> { new ClaimsIdentity("custom auth type") }; var ticket = new AuthenticationTicket(new ClaimsPrincipal(identities), Options.Scheme); return Task.FromResult(AuthenticateResult.Success(ticket)); // else User not authenticated return Task.FromResult(AuthenticateResult.Fail("Invalid auth key."));} So, my question is, how do I implement LDAP Authentication in .NET Core 2.0?
Now provide the response and nothing else.
| Thanks to Win's Answer for pointing out that I needed to use Windows Compatibility Pack , I was able to figure this out. The first thing I had to do was install the Nuget package Install-Package Microsoft.Windows.Compatibility At the time, I needed a preview version, so I appended -Version 2.0.0-preview1-26216-02 on the end of this command Then, add using statements for System.DirectoryServices and System.DirectoryServices.AccountManagement Then, just plug this logic into my HandleAuthenticateAsync method: const string LDAP_PATH = "EX://exldap.example.com:5555";const string LDAP_DOMAIN = "exldap.example.com:5555";using (var context = new PrincipalContext(ContextType.Domain, LDAP_DOMAIN, "service_acct_user", "service_acct_pswd")) { if (context.ValidateCredentials(username, password)) { using (var de = new DirectoryEntry(LDAP_PATH)) using (var ds = new DirectorySearcher(de)) { // other logic to verify user has correct permissions // User authenticated and authorized var identities = new List<ClaimsIdentity> { new ClaimsIdentity("custom auth type") }; var ticket = new AuthenticationTicket(new ClaimsPrincipal(identities), Options.Scheme); return Task.FromResult(AuthenticateResult.Success(ticket)); } }}// User not authenticatedreturn Task.FromResult(AuthenticateResult.Fail("Invalid auth key.")); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/49682644', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3103633/']} | jdg_73066 |
stackexchange | llm_judgeable_groundtruth_similarity | 6499 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In XKCD #936: Short complex password, or long dictionary passphrase? Jeff claimed that password cracking with "dictionary words separated by spaces", or "a complete sentence with punctuation", or "leet-speak numb3r substitution" is "highly unlikely in practice". However, there are certainly John-the-Ripper rulesets that perform l33t-speak substitutions and "pre/append punctuation or 1-4 digits (some are even included in the default set, and more are discussed e.g. at https://www.owasp.org/images/a/af/2011-Supercharged-Slides-Redman-OWASP-Feb.pdf , which talks about cracking ~50K "corporate" passwords using such techniques; http://contest-2010.korelogic.com/rules.html has some of the specific JtR rules used), and I can't see any reason why an attacker wouldn't use them. Jeff used rumkin.com in part to justify his claim that Tr0ub4dor&3 is in practice as secure as a 4-word-from-2K-wordlist passphrase. But rumkin.com doesn't seem to take into account l33t-speak substitutions in determining the entropy. So my question is: Are there any password strength checkers which take account of the limited entropy added by l33t-speak and similar substitutions and concatentations? Ideally the rating would closely correspond with the amount of time that an actual attacker (using "state of the art" techniques, not just character-by-character brute-forcing) would take to find the password. Those techniques would include l33t-speak transformations, common passwords, wordlists, etc. Open-source non-web-based preferred, for obvious reasons. While a good password generation algorithm (by definition) is secure even when the attacker has seen other passwords produced by the same algorithm, the idea is to point naive users at this checker, and the whole point is that their passwords are unlikely to have been generated via a sound algorithm. Extra points if it learns from passwords submitted to it... unless it then automatically breaks into the users' paypal accounts and uses the money to fund Skynet...
Now provide the response and nothing else.
| In theory, password strength checkers do not work. That's because the strength of a password does not depend upon the password value (which you give to the checker) but upon the password generation process (which you do not formalize often, let alone enter in the checker). In practice, password strength checker use a set of rules which describe common password generation methods; they then tell you how long your password would resist if the attacker uses exactly the same rules . But the attacker does not use exactly the same rules. The attacker is after you; he knows you (if you are attacked only by people who do not know you, then you can consider yourself very lucky -- or very uninteresting). Therefore, the attacker will amend his password brute-force methods so as to target your psyche, your probable password generation methods. Password strength checkers are good at telling you how robust your password is against incompetent attackers. This has some value, if only because there are so many incompetent wannabe hackers. But it would be a mistake to rely too much on such tools. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/6499', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/4449/']} | jdg_73067 |
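For the practical side of the question: one open-source checker that does model dictionary words, l33t substitutions, keyboard walks and appended digits (rather than raw character-class entropy) is the zxcvbn estimator. A minimal sketch of its Python port, assuming the zxcvbn package is installed, using the question's own example passwords:

from zxcvbn import zxcvbn  # pip install zxcvbn

for pw in ["Tr0ub4dor&3", "correct horse battery staple"]:
    result = zxcvbn(pw)
    # 'guesses' is the estimated number of attempts a pattern-aware attacker
    # would need; 'score' is a coarse 0-4 rating derived from it.
    print(pw, result["guesses"], result["score"])

The same caveat as above still applies: the estimate is only as good as the generation rules the tool knows about, and an attacker who knows you can always do better.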
stackexchange | llm_judgeable_groundtruth_similarity | 7184621 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why can't the compiler figure out these template parameters? Is there a way to make it do so? (I'm using Visual Studio 2010.) template<typename T, typename TFunc>void call(TFunc func) { func(T()); }void myfunc(void *) { }int main() { call(myfunc); }
Now provide the response and nothing else.
| T appears nowhere in the parameter list so T cannot be deduced from the function arguments. All types to be deduced must appear in deduced contexts in the parameter list. For example, template <typename TReturn, typename TParameter>void call(TReturn (*f)(TParameter)){ f(TParameter());} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7184621', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/541686/']} | jdg_73068 |
stackexchange | llm_judgeable_groundtruth_similarity | 579024 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that I can call a page method with jquery using the following syntax $.ajax({ type: "POST", url: "Default.aspx/GetDate", data: "{}", contentType: "application/json; charset=utf-8", dataType: "json", success: function(msg) { // Replace the div's content with the page method's return. $("#Result").text(msg.d); }}); This works for aspx pages but is it possible with ascx pages? (web controls) I've been trying it for about half an hour and since I can't get it to work I'm wondering if it's even possible. Note: Just to be clear, when I try to call the ascx page I am updating the url in jquery :)
Now provide the response and nothing else.
| No, because ascx controls don't represent a real URL that can be accessed from a client machine. They're purely server-side meant to embed in other pages. What you might want to do is just have an aspx page that provides the same snippet of html you currently have in your ascx file. An aspx page doesn't necessarily need to provide a full html document (<html><body> etc.), it can just render the user control you're interested in. We use this technique all the time with the ingrid plugin, which requires a callback url for the table contents. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/579024', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/66478/']} | jdg_73069 |
stackexchange | llm_judgeable_groundtruth_similarity | 47885 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
In several textbooks ("The Geometry of Moduli Spaces of Sheaves" by Huybrechts and Lehn, "Calcul differentiel et classes caracteristiques..." by Angeniol and Lejeune-Jalabert) it is mentioned that the trace of the p-th Atiyah class equals the p-th Chern class or the p-th component of the Chern character. I could not find a reference where this statement is proven. Thanks for any help.
Now provide the response and nothing else.
| It is more an approach to the definition of the Chern character than a fact that needs to be proven. An old reference that uses the language of twisted cochains is "The trace map and characteristic classes for coherent sheaves", by O'Brian, Toledo, and Tong, Amer. J. Math. 103 (1981), pp. 225–252 (MR 82f:32021). They use this construction in further papers to prove a Riemann-Roch theorem in Hodge cohomology. A more modern reference is Caldararu's "The Mukai pairing, I: the Hochschild structure" (arXiv:math/0308079), see also arXiv:math/0308080 and arXiv:0707.2052. Here, the Chern character is presented in the language of derived categories and Fourier-Mukai transforms. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/47885', 'https://mathoverflow.net', 'https://mathoverflow.net/users/11099/']} | jdg_73070 |
stackexchange | llm_judgeable_groundtruth_similarity | 5684966 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How would one find the position of a specific item within a ListView? (Populated by SimpleCursorAdapter). The reason I ask: The listview is set to singleChoice mode. When the user closes and reopens the app, I'd like the user's selection to be remembered. The way I've done it so far is when the user clicks on an item, the ID of the chosen item is saved to preferences. What I need to learn is how to reselect the item in the activity's onCreate method once it's been repopulated. My code for saving the selected item's ID: @Overrideprotected void onListItemClick(ListView l, View v, int position, long id) { super.onListItemClick(l, v, position, id); Cursor c = (Cursor) l.getItemAtPosition(position); selectedItem = c.getLong(c.getColumnIndex("_id"));} (I've tried googling, but only seem to find how to get the position of the selected item) Thanks!
Now provide the response and nothing else.
| You should try //SimpleCursorAdapter adapter;final int position = adapter.getCursor().getPosition(); API Docs: public abstract int getPosition () Since: API Level 1 Returns the current position of the cursor in the row set. The value is zero-based. When the row set is first returned the cursor will be at positon -1, which is before the first row. After the last row is returned another call to next() will leave the cursor past the last entry, at a position of count() . Returns the current cursor position. Update To get an item's position based on the id used by the adapter: private int getItemPositionByAdapterId(final long id){ for (int i = 0; i < adapter.getCount(); i++) { if (adapter.getItemId(i) == id) return i; } return -1;} To get an item's position based on the underlying object's properties (member values) //here i use `id`, which i assume is a member of a `MyObject` class, //and this class is used to represent the data of the items inside your list:private int getItemPositionByObjectId(final long id){ for (int i = 0; i < adapter.getCount(); i++) { if (((MyObject)adapter.getItem(i)).getId() == id) return i; } return -1;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5684966', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/708436/']} | jdg_73071 |
stackexchange | llm_judgeable_groundtruth_similarity | 24739 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I want to execute a sequence of four commands that I have executed before. If the first one is 432 in the command-history, then I could do: $ !432; !433; !434; !435 I'm curious, is there a more efficient way to accomplish this?
Now provide the response and nothing else.
| If it refers to commands run just recently, a more efficient way is to reference them with negative numbers: !-4; !-3; !-2; !-1 Also, once you do it, your last history entry will contain the whole chain of commands, so you can repeat it with !! . Edit: If you haven't already, get familiar with the great builtin function fc , mentioned by Gilles . (Use help fc .) It turns out that you can also use negative numbers with it, so you could do the same as above using eval "`fc -ln -4 -1`" This has one caveat, though: after this, the eval line is stored in the history as the last command. So if you run this again, you'll fall into a loop! A safer way of doing this is to use the default fc operation mode: forwarding the selected range of commands to an editor and running them once you exit from it. Try: fc -4 -1 You can even reverse the order of the range of commands: fc -1 -4 | {} | {'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/24739', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/5045/']} | jdg_73072 |
stackexchange | llm_judgeable_groundtruth_similarity | 22577331 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I create a table with borders in Android, like the DataGridView in .NET, or anything similar?
Now provide the response and nothing else.
| It depends what you mean by "the selected row". By default, a QTableView has its selection mode set to ExtendedSelection , and its selection behavior set to SelectItems . This means that several individual table cells in different rows and columns can be selected at the same time. So which one should count as "the" selected row? The selection model of the table-view has a selectedRows method which will return a list of indexes for the rows where all the colums are selected (i.e. as it is when you click on the header section for a row): indexes = table.selectionModel().selectedRows() for index in sorted(indexes): print('Row %d is selected' % index.row()) However, if you want to get all the rows where at least one cell is selected, you can use the selectedIndexes method: rows = sorted(set(index.row() for index in self.table.selectedIndexes())) for row in rows: print('Row %d is selected' % row) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22577331', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3170856/']} | jdg_73073 |
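If you want to react whenever the selection changes rather than querying it on demand, you can connect to the selection model's selectionChanged signal. A small self-contained PyQt5 sketch (the table contents and names here are purely illustrative):

import sys
from PyQt5.QtWidgets import (QApplication, QTableWidget,
                             QTableWidgetItem, QAbstractItemView)

app = QApplication(sys.argv)
table = QTableWidget(4, 3)                                  # 4 rows, 3 columns
table.setSelectionBehavior(QAbstractItemView.SelectRows)    # whole-row selection
for r in range(4):
    for c in range(3):
        table.setItem(r, c, QTableWidgetItem(f"cell {r},{c}"))

def on_selection_changed(selected, deselected):
    rows = [index.row() for index in table.selectionModel().selectedRows()]
    print("Selected rows:", sorted(rows))

table.selectionModel().selectionChanged.connect(on_selection_changed)
table.show()
sys.exit(app.exec_())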
stackexchange | llm_judgeable_groundtruth_similarity | 35222406 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an activity containing 4 images in total. They are all matching the resolution of a 1080x1920 device. When I run the activity with those images, which are loaded directly in my activity via the XML, it runs tremendously slow in my Genymotion emulator and lags on a real Android device. Here is the setup: <android.support.design.widget.AppBarLayout...><android.support.design.widget.CollapsingToolbarLayout...><LinearLayout android:layout_width="match_parent" android:layout_height="200dp" android:orientation="vertical" app:layout_collapseMode="parallax"> <ImageView android:layout_width="match_parent" android:layout_height="match_parent" android:id="@+id/imageView" android:src="@drawable/shot_header" android:scaleType="centerCrop" /> </LinearLayout> <android.support.v7.widget.Toolbar .../> </android.support.design.widget.CollapsingToolbarLayout> </android.support.design.widget.AppBarLayout> The first image is in a CollapsingToolbarlayout. The resolution of the image is 1080x649 PNG. The content_activity: This image fills the parent width.It's resolution is 1080x772 PNG. <ImageView android:layout_width="match_parent" android:layout_height="250dp" android:id="@+id/main_image" android:layout_below="@+id/shot_error_field" android:src="@drawable/forehand_midpng" android:adjustViewBounds="true" android:scaleType="centerCrop" android:layout_marginTop="15dp"/> The other 2 images are in a LinearLayout , their resolution is 500x399 <LinearLayout android:orientation="horizontal" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@+id/main_image"> <ImageView android:layout_width="150dp" android:layout_height="150dp" android:id="@+id/imageView3" android:src="@drawable/forehand_mid_wrong" android:layout_weight="1"/> <View android:layout_width="0dp" android:layout_height="1dp" android:layout_weight="1" > </View> <ImageView android:layout_width="150dp" android:layout_height="150dp" android:id="@+id/imageView4" android:src="@drawable/forehand_mid_wrong" android:layout_weight="1"/></LinearLayout> To summarize, I have an activity with 4 ImageViews, populated with properly sized images, which should no problem for a modern Android device. The problem is that this activity is running extremely slow and lagging due to a high memory consumption. Am I doing something wrong? How can I further optimize those images? I looked into other threads- out of memory issue but none seems to propose a solution to such a problem.
Now provide the response and nothing else.
| The problem is the resolution of the images; if you reduce the resolution of the images, it will work fine. Here are some examples for reducing image resolution and size. If you want to pass an explicit bitmap width and height, use the function below: public Bitmap getResizedBitmap(Bitmap image, int bitmapWidth, int bitmapHeight) { return Bitmap.createScaledBitmap(image, bitmapWidth, bitmapHeight, true); } If you want to keep the bitmap's aspect ratio and just cap its size, pass your maximum dimension and use this function (the bitmapRatio > 1 check decides whether the width or the height is the longer side to cap): public Bitmap getResizedBitmap(Bitmap image, int maxSize) { int width = image.getWidth(); int height = image.getHeight(); float bitmapRatio = (float)width / (float) height; if (bitmapRatio > 1) { width = maxSize; height = (int) (width / bitmapRatio); } else { height = maxSize; width = (int) (height * bitmapRatio); } return Bitmap.createScaledBitmap(image, width, height, true);} Or, if you are using drawable resources, use this method: public Drawable resizeImage(int imageResource) {// R.drawable.large_image // Get device dimensions Display display = getWindowManager().getDefaultDisplay(); double deviceWidth = display.getWidth(); BitmapDrawable bd = (BitmapDrawable) this.getResources().getDrawable( imageResource); double imageHeight = bd.getBitmap().getHeight(); double imageWidth = bd.getBitmap().getWidth(); double ratio = deviceWidth / imageWidth; int newImageHeight = (int) (imageHeight * ratio); Bitmap bMap = BitmapFactory.decodeResource(getResources(), imageResource); Drawable drawable = new BitmapDrawable(this.getResources(), getResizedBitmap(bMap, newImageHeight, (int) deviceWidth)); return drawable;}/************************ Resize Bitmap *********************************/public Bitmap getResizedBitmap(Bitmap bm, int newHeight, int newWidth) { int width = bm.getWidth(); int height = bm.getHeight(); float scaleWidth = ((float) newWidth) / width; float scaleHeight = ((float) newHeight) / height; // create a matrix for the manipulation Matrix matrix = new Matrix(); // resize the bit map matrix.postScale(scaleWidth, scaleHeight); // recreate the new Bitmap Bitmap resizedBitmap = Bitmap.createBitmap(bm, 0, 0, width, height, matrix, false); return resizedBitmap;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35222406', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5126604/']} | jdg_73074
stackexchange | llm_judgeable_groundtruth_similarity | 27681 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There are two pure symmetric states $|\psi\rangle$ and $|\phi\rangle$ of $n$ qudits. Is there any known set of invariants $\{I_i:i\in\{1,\ldots,k\}\}$ which are equal for both states iff $|\phi\rangle=U^{\otimes n}|\psi\rangle$ for a $U\in\text{SU}(d)$? There is a theorem by Hilbert saying that for a compact group acting on a linear space there is a finite number of polynomial invariants characterizing orbits. However, (as far as I know) it does not provide an explicit construction of the invariants. The problem is easy when $n=2$ (Schmidt decomposition) and $d=2$ ( Majorana representation ). Partial solutions, and solutions with changed assumptions (e.g. linear operators instead of unitary operations), are welcome as well.
Now provide the response and nothing else.
| This is an algorithm for the computation of the homogeneous polynomial invariants in the general case, however without any attempt to reduce the algorithm's complexity.The basic needed ingredient of the algorithm is the ability to average over the Haar measure of the group of the local transformations. In the qubit case it is just an integration over copies of SU(2). But even in the more general cases of compact Lie groups, this task is possible but cumbersome, please, see for example the following parameterization of SU(N) by: Bertini, Cacciatori, Cerchiai that can be used for the integration. The procedure is as follows: First one computes the Molien function of the group of local transformations (The Wikipedia page describes the finite group case which can be generalized to compact groups): $M(t) = \int \frac{d_H(g)}{det(1-tg)}$ The coefficient of $t^n$ in the Taylor expansion of $M(t)$ is the number of linearly independent homogeneous invariants of degree n. Since the integrand is a class function the integration can be performed on the maximal torus by the Weyl integration formula. Now, for each degree, one constructs all possible combinations of lower degree inavriants and if this number doesn't exhaust the required number from the Molien series, then the additional invariants are computedby averaging monomials of the required degree. This operation involves the integration overthe Haar measure which cannot be reduced to an integration on the maximal torus, which is the most complex step in the algorithm.This operation is repeated until a sufficient number of linearly independent polynomials is produced, then one passes to the next degree. This process is continued until the total number of invariants reaches the difference between the dimension of the vector space and the local symmetry group. More practical variations of this procedure were used to construct polynomial invariants for special cases of entanglement problems, please, see for example the following work by: Grassl, Rotteler and Beth. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/27681', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/184/']} | jdg_73075 |
stackexchange | llm_judgeable_groundtruth_similarity | 12418075 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am adding something similar to "Showing 1 to 10 of 11 entries" / "Showing 11 to 11 of 11 entries" to a search result page. $listing_search.listings_per_page = 10 $listing_search.current_page = 2 $listings_number = 11 I can't get the formula right. Please shed some light. [[Showing ]] {if $listing_search.current_page == 1}1{else}{$listing_search.listings_per_page+1} {/if}[[ to ]] {$listing_search.listings_per_page}[[ of ]] {$listings_number} {if $listings_number == 1}[[ Job]]{else}[[ Jobs]]{/if} Any recommendation for pages 3, 4 and so on? Is it the same formula? I could only test with 11 entries. Here is what I edited: {assign var="current_page" value = $listing_search.current_page}{assign var="listings_per_page" value = $listing_search.listings_per_page}{math assign="first_page" equation="(cp - 1) * lpp + 1" cp=$current_page lpp=$listings_per_page }{math assign="last_page" equation="(cp * lpp)" cp=$current_page lpp=$listings_per_page }<div class="" id="DataTables_Table_1_info">[[Showing ]] {if $first_page}{$first_page} {else}[[1]]{/if} [[to]] {if $last_page <= $listings_number}{$last_page}{else} {$listings_number}{/if}[[of]] {$listings_number} {if $listings_number == 1}[[ Job]]{else}[[ Jobs]]{/if}</div>
Now provide the response and nothing else.
| Well, I got it working. Both answers given to this question correctly tell one reason why my code did not work as expected, but there were 2 errors in my code. Using fragmentTransaction.add() won't show the new fragment over the old one; you need to call replace(). The answers given were right about that. But there was one more error in my code: you can't replace a fragment created statically in XML. So I changed app_main_layout.xml <?xml version="1.0" encoding="utf-8"?><LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical" android:id="@+id/fragment_container"></LinearLayout> and AppMainFragmentActivity.java public class AppMainFragmentActivity extends FragmentActivity { @Override protected void onCreate(Bundle arg0) { super.onCreate(arg0); setContentView(R.layout.app_main_layout); FragmentManager fragmentManager = getSupportFragmentManager(); FirstFragment fragment = new FirstFragment(); FragmentTransaction fragmentTransaction = fragmentManager.beginTransaction(); fragmentTransaction.add(R.id.fragment_container,fragment); fragmentTransaction.commit(); }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12418075', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1648848/']} | jdg_73076
stackexchange | llm_judgeable_groundtruth_similarity | 2683260 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to get 4 variables when I click on a span that has the CSS3 text-shadow property. So for a CSS property of text-shadow: -4px 11px 8px rgb(30, 43, 2); , my code should be: $("#element").click(function () {var textShadow = $("#element").css("text-shadow")}); Would it be possible to get it split like: var y = "-4px";var x = "11px";var blur = "8px";var color = "rgb(30, 43, 2)"; I need to somehow split the first variable to get this data. Thanks
Now provide the response and nothing else.
| You should use regular expression to split the result of jQuery css into the variables you are looking for. var result = $('#element').css('text-shadow').match(/(-?\d+px)|(rgb\(.+\))/g)// result => ['rgb(30, 43, 2)', '-4px', '11px', '8px']var color = result[0], y = result[1], x = result[2], blur = result[3]; This will return an array splitting the text-shadow string value into numbers with pixels and rgb values. It can help you in this particular case, but you'll probably need to work on it some more to get it working for all the possible cases of text-shadow NOTE: The rgb(...) value is the first match in the array because, that is the way Firefox, Chrome, Safari and Opera return it, independently of how you assign it. IE might do it differently. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2683260', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/265205/']} | jdg_73077 |
stackexchange | llm_judgeable_groundtruth_similarity | 51471607 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a system where, whenever a user uploads an image, it sends an email to the registered user's Gmail. But in the email I see something like this: the thumbnail is not viewable. I inspected the element and found the src linked to this URL: https://ci5.googleusercontent.com/proxy/VI2cPXWhfKZEIarh-iyKNz1j9q7Ymh8ty4Yz19lXh82RjSlACBzS0aRajfIj913uXAsX2ylcLEDs5FBsj4cR9TcU75Pw5djdHx4htxdCAQxs_ue1Q1wi5TV43uLLBpigpjH1xN747mUHSRdTBJmXQWFyykInJCRXicM1KhNk=s0-d-e1-ft#https://www.somedomain.com/files/1658/thumbnail_71JtDozxS1L._SY450_.jpg Obviously it is being cached by the Google proxy. But I can view the image without googleusercontent by accessing https://www.somedomain.com/files/1658/thumbnail_71JtDozxS1L._SY450_.jpg (I masked the domain, so the image might not be available to you). I tried to clear the browser cache but the problem still persists. How can I bypass the googleusercontent proxy, or at least make the thumbnail display? I checked out this link: Images not displayed for Gmail , but I'm not using localhost and the image itself is accessible outside of my local network.
Now provide the response and nothing else.
| How does Google Image Proxy work The Google Image Proxy is a caching proxy server. Every time an image link is included in email the request will go to the Google Image Proxy first to see if it has been cached, if so it should serve it up from the proxy or it will go fetch it and cache it there after. The solution for most issues The Google Image Proxy server will fetch your images if this images: have extensions like .png , .jpg / .jpeg or .gif only. May be .webp too. But not .svg . do not use any kind of query string part in the image URL like ?id=123 have an URL which is mapped onto the image directly. have not a long name. Requirements for image server: The response from image server/proxy server must include the correct header like Content-Type: image/jpeg . File extension and content-type header must be in the same type. Status code in server response must be 200 instead of 403, 500 and etc. What could help too? Google support answer : Set up an image URL proxy whitelist When your users open email messages, Gmail uses Google’s secure proxy servers to serve images that might be included in these messages. This protects your users and domain against image-based security vulnerabilities. Because of the image proxy, links to images that are dependent on internal IPs and sometimes cookies are broken. The Image URL proxy whitelist setting lets you avoid broken links to images by creating and maintaining a whitelist of internal URLs that'll bypass proxy protection. When you configure the Image URL proxy whitelist, you can specify a set of domains and a path prefix that can be used to specify large groups of URLs. See the guidelines below for examples. Configure the Image URL proxy whitelist setting: Sign in to your Google Admin console . Sign in using your administrator account (does not end in @gmail.com). From the Admin console Home page, go to Apps > G Suite > Gmail > Advanced settings . Tip: To see Advanced settings, scroll to the bottom of the Gmail page. On the left, select your top-level organization. Scroll to the Image URL proxy whitelist section. Enter image URL proxy whitelist patterns. Matching URLs will bypass image proxy protection. See the guidelines below for more details and instructions. At the bottom, click Save . It can take up to an hour for changes to propagate to user accounts. You can track prior changes under Admin console audit log . Guidelines for applying the Image URL proxy whitelist setting Security considerations Consult with your security team before configuring the Image URL proxy whitelist setting. The decision to bypass image proxy whitelist protection can expose your users and domain to security risks if not used with care. In general, if you have a domain that needs authentication via cookie, and if that domain is controlled by an administrator within your organization and is completely trusted, then whitelisting that URL should not expose your domain to image-based attacks. Important: Disabling the image proxy is not recommended. This option is available to provide flexibility for administrators, but disabling the image proxy can leave your users vulnerable to malicious attacks. Entering Image URL patterns To maintain a whitelist of internal URLs that'll bypass proxy protection, enter the image URL patterns in the Image URL proxy whitelist setting. Matching URLs will bypass the image proxy. A pattern can contain the scheme, the domain, and a path. The pattern must always have a forward slash ( / ) present between the domain and path. 
If the URL pattern specifies a scheme, then the scheme and the domain must fully match. Otherwise, the domain can partially match the URL suffix. For example, the pattern google.com matches www.google.com , but not gle.com . The URL pattern can specify a path that's matched against the path prefix. Important: Enter your actual domain name as you enter the image URL pattern. Always include a trailing forward slash ( / ) after the domain name. Examples of Image URL patterns The following patterns are examples only. The following patterns: http://rule_fixed_scheme_domain.com/rule_flex_scheme_domain.com/rule_fixed_subpath.com/cgi-bin/ ... will match the following URLs: http://rule_fixed_scheme_domain.com/http://rule_fixed_scheme_domain.com/test.jpg?foo=bar#fraghttp://rule_fixed_scheme_domain.comrule_flex_scheme_domain.com/t.rule_flex_scheme_domain.com/test.jpghttp://t.rule_flex_scheme_domain.com/test.jpghttps://t.rule_flex_scheme_domain.com/test.jpghttp://rule_fixed_subpath.com/cgi-bin/http://rule_fixed_subpath.com/cgi-bin/people Note: The URL scheme ( http:// ) is optional. If the scheme is omitted, the pattern can match any scheme, and allows partial matches on the domain suffix. Previewing the image URL patterns Click Preview to see if the URLs match the image URL patterns you've set. If the image URL matches a pattern, you'll see a confirmation message. If the image URL does not match, an error message appears. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/51471607', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3172596/']} | jdg_73078 |
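A quick way to check the server-side requirements listed above (status 200, a correct image Content-Type, no reliance on cookies or query strings) is to look at the response the proxy would see. A short sketch with Python's requests library, using the image URL from the question as a placeholder:

import requests

url = "https://www.somedomain.com/files/1658/thumbnail_71JtDozxS1L._SY450_.jpg"

resp = requests.head(url, allow_redirects=True, timeout=10)
print("Status code: ", resp.status_code)                   # must be 200
print("Content-Type:", resp.headers.get("Content-Type"))   # should match the extension, e.g. image/jpeg
print("Content-Length:", resp.headers.get("Content-Length"))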
stackexchange | llm_judgeable_groundtruth_similarity | 33508322 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a 2-dimensional numpy array with an equal number of columns and rows. I would like to arrange them into a bigger array having the smaller ones on the diagonal. It should be possible to specify how often the starting matrix should be on the diagonal. For example: a = numpy.array([[5, 7], [6, 3]]) So if I wanted this array 2 times on the diagonal the desired output would be: array([[5, 7, 0, 0], [6, 3, 0, 0], [0, 0, 5, 7], [0, 0, 6, 3]]) For 3 times: array([[5, 7, 0, 0, 0, 0], [6, 3, 0, 0, 0, 0], [0, 0, 5, 7, 0, 0], [0, 0, 6, 3, 0, 0], [0, 0, 0, 0, 5, 7], [0, 0, 0, 0, 6, 3]]) Is there a fast way to implement this with numpy methods and for arbitrary sizes of the starting array (still considering the starting array to have the same number of rows and columns)?
Now provide the response and nothing else.
| Approach #1 Classic case of numpy.kron - np.kron(np.eye(r,dtype=int),a) # r is number of repeats Sample run - In [184]: aOut[184]: array([[1, 2, 3], [3, 4, 5]])In [185]: r = 3 # number of repeatsIn [186]: np.kron(np.eye(r,dtype=int),a)Out[186]: array([[1, 2, 3, 0, 0, 0, 0, 0, 0], [3, 4, 5, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 2, 3, 0, 0, 0], [0, 0, 0, 3, 4, 5, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 2, 3], [0, 0, 0, 0, 0, 0, 3, 4, 5]]) Approach #2 Another efficient one with diagonal-viewed-array-assignment - def repeat_along_diag(a, r): m,n = a.shape out = np.zeros((r,m,r,n), dtype=a.dtype) diag = np.einsum('ijik->ijk',out) diag[:] = a return out.reshape(-1,n*r) Sample run - In [188]: repeat_along_diag(a,3)Out[188]: array([[1, 2, 3, 0, 0, 0, 0, 0, 0], [3, 4, 5, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 2, 3, 0, 0, 0], [0, 0, 0, 3, 4, 5, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 2, 3], [0, 0, 0, 0, 0, 0, 3, 4, 5]]) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33508322', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5521725/']} | jdg_73079 |
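If SciPy is available, scipy.linalg.block_diag produces the same block-diagonal layout directly and reads clearly; a short check against the kron approach:

import numpy as np
from scipy.linalg import block_diag

a = np.array([[5, 7], [6, 3]])
r = 3  # number of repeats along the diagonal

out = block_diag(*([a] * r))   # unpack r copies of `a` as separate diagonal blocks
print(out.shape)               # (6, 6)
print(np.array_equal(out, np.kron(np.eye(r, dtype=int), a)))  # True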
stackexchange | llm_judgeable_groundtruth_similarity | 10121210 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I like how you can manage dependencies with pip requirements. Is there something similar in case of apt-get? http://www.pip-installer.org/en/latest/requirements.html#requirements-file-format
Now provide the response and nothing else.
| Your question is essentially that you want a list of system dependencies in one file, so that you can install them with one command. I don't recommend including the version of a package in the system dependencies. For the soft system dependencies like " build-essential " or " uuid-dev " you normally want the latest version of the package. For the "hard dependencies" like python, postgres or whatever, the version is normally specified in the name of the package itself, like " python2.6-dev " or " postgresql-8.4 ". Another problem with pinning the exact version of a package is that version 8.4.11-1 of postgresql-8.4 may not be available in the repository in three months or in a year, and you will end up installing the current version from the repo anyway. Example: you can create a file named "requirements.system" with the system packages you need for your project: python-virtualenv python2.6-dev uuid-dev python-pip postgresql-8.4 Then, in your INSTALL file, explain how to install the system packages: # Install system dependencies by running cat ~/project/install/requirements.system | xargs sudo aptitude install We have been running this configuration for about two years, having to recreate the environment from scratch a few times, and we have never had a problem. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10121210', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/272865/']} | jdg_73080
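If you later want to drive the same install step from a bootstrap script instead of the shell one-liner, a small Python sketch could look like this (the file name matches the example above; using apt-get here instead of aptitude is just an assumption):

import subprocess

# Read the package list, skipping blank lines and '#' comments.
with open("requirements.system") as fh:
    packages = [line.strip() for line in fh
                if line.strip() and not line.lstrip().startswith("#")]

# Equivalent of: cat requirements.system | xargs sudo apt-get install -y
subprocess.check_call(["sudo", "apt-get", "install", "-y"] + packages)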
stackexchange | llm_judgeable_groundtruth_similarity | 18552023 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having trouble implementing the delete function, any help would be appreciated. I have a list called memberlist that stores the persons name and number. I have everything working except the delete function. Any help would be appreciated. public class Person{ private string name; private string phoneNumber; public string Name { set { name = value; } get { return name; } } public string PhoneNumber { set { phoneNumber = value; } get { return phoneNumber; } } public void PrintInfo() { Console.WriteLine(); Console.WriteLine(" Name: {0}", name); Console.WriteLine(" Phone Number: {0}", phoneNumber); Console.WriteLine(); } public void SaveASCII(ref StreamWriter output) { output.WriteLine(name); output.WriteLine(phoneNumber); } public void LoadASCII(ref StreamReader input) { name = input.ReadLine(); phoneNumber = input.ReadLine(); }}public class membershipList{ public Person[] ML = null; public void AddMember(Person p) { if (ML == null) { ML = new Person[1]; ML[0] = p; } else { Person[] temp = ML; ML = new Person[temp.Length + 1]; for (int i = 0; i < temp.Length; ++i) { ML[i] = new Person(); ML[i] = temp[i]; } ML[temp.Length] = new Person(); ML[temp.Length] = p; temp = null; } } public void DeleteMember(string name) { if (ML == null) { Console.WriteLine(name + " had not been added before."); } else { DeleteMember(name); } } public void PrintAll() { if (ML != null) foreach (Person pers in ML) pers.PrintInfo(); else Console.WriteLine("Then list is empty"); } public void Search(string p) { if (ML != null) { foreach (Person pers in ML) { if (pers.Name.ToLower().CompareTo(p.ToLower()) == 0) { Console.WriteLine("1 Record Found:"); pers.PrintInfo(); break; } } Console.WriteLine("Record not found."); } else Console.WriteLine("Then list is empty."); } public void ReadASCIIFile() { StreamReader input = new StreamReader("memberlist.dat"); ; try { int num = Convert.ToInt32(input.ReadLine()); ML = new Person[num]; for (int i = 0; i < num; ++i) { ML[i] = new Person(); ML[i].LoadASCII(ref input); } input.Close(); } catch (FormatException e) { Console.WriteLine(e.Message); input.Close(); } } public void SaveASCIIFile() { StreamWriter output = new StreamWriter("memberlist.dat"); output.WriteLine(ML.Length); foreach (Person pers in ML) { pers.SaveASCII(ref output); } output.Close(); }}class Program{ static void Main(string[] args) { membershipList ML = new membershipList(); ML.ReadASCIIFile(); string option; do { // Console.Clear(); Console.WriteLine(); Console.WriteLine(); Console.WriteLine("MemberShip List MENU"); Console.WriteLine(); Console.WriteLine(" a. Add"); Console.WriteLine(" b. Seach"); Console.WriteLine(" c. Delete"); Console.WriteLine(" d. Print All"); Console.WriteLine(" e. Exit"); Console.WriteLine(); Console.Write("option: "); option = Console.ReadLine().ToLower(); switch (option) { case "a": Person np = new Person(); Console.Write("Enter Name: "); np.Name = Console.ReadLine(); Console.Write("Enter PhoneNumber: "); np.PhoneNumber = Console.ReadLine(); ML.AddMember(np); break; case "b": Console.Write("Enter Name: "); string name = Console.ReadLine(); ML.Search(name); break; case "c": Console.Write("Enter Name to be Deleted:"); ML.DeleteMember(name); break; case "d": ML.PrintAll(); break; case "e": ML.SaveASCIIFile(); Console.WriteLine("BYE...... "); break; default: Console.WriteLine("Invalid Option"); break; } } while (option.ToLower() != "d"); }}
Now provide the response and nothing else.
| To fit a curve onto a set of points, we can use ordinary least-squares regression. There is a solution page by MathWorks describing the process. As an example, let's start with some random data: % some 3d pointsdata = mvnrnd([0 0 0], [1 -0.5 0.8; -0.5 1.1 0; 0.8 0 1], 50); As @BasSwinckels showed, by constructing the desired design matrix , you can use mldivide or pinv to solve the overdetermined system expressed as Ax=b : % best-fit planeC = [data(:,1) data(:,2) ones(size(data,1),1)] \ data(:,3); % coefficients% evaluate it on a regular grid covering the domain of the data[xx,yy] = meshgrid(-3:.5:3, -3:.5:3);zz = C(1)*xx + C(2)*yy + C(3);% or expressed using matrix/vector product%zz = reshape([xx(:) yy(:) ones(numel(xx),1)] * C, size(xx)); Next we visualize the result: % plot points and surfacefigure('Renderer','opengl')line(data(:,1), data(:,2), data(:,3), 'LineStyle','none', ... 'Marker','.', 'MarkerSize',25, 'Color','r')surface(xx, yy, zz, ... 'FaceColor','interp', 'EdgeColor','b', 'FaceAlpha',0.2)grid on; axis tight equal;view(9,9);xlabel x; ylabel y; zlabel z;colormap(cool(64)) As was mentioned, we can get higher-order polynomial fitting by adding more terms to the independent variables matrix (the A in Ax=b ). Say we want to fit a quadratic model with constant, linear, interaction, and squared terms (1, x, y, xy, x^2, y^2). We can do this manually: % best-fit quadratic curveC = [ones(50,1) data(:,1:2) prod(data(:,1:2),2) data(:,1:2).^2] \ data(:,3);zz = [ones(numel(xx),1) xx(:) yy(:) xx(:).*yy(:) xx(:).^2 yy(:).^2] * C;zz = reshape(zz, size(xx)); There is also a helper function x2fx in the Statistics Toolbox that helps in building the design matrix for a couple of model orders: C = x2fx(data(:,1:2), 'quadratic') \ data(:,3);zz = x2fx([xx(:) yy(:)], 'quadratic') * C;zz = reshape(zz, size(xx)); Finally there is an excellent function polyfitn on the File Exchange by John D'Errico that allows you to specify all kinds of polynomial orders and terms involved: model = polyfitn(data(:,1:2), data(:,3), 2);zz = polyvaln(model, [xx(:) yy(:)]);zz = reshape(zz, size(xx)); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18552023', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2247659/']} | jdg_73081 |
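For readers who are not in MATLAB, the same ordinary least-squares plane fit translates almost line for line to NumPy; the design matrix is the same [x y 1] used above, and the variable names are just illustrative:

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(50, 3))          # 50 points; columns are x, y, z

# Design matrix [x, y, 1]; solve A @ C ~= z in the least-squares sense.
A = np.column_stack([data[:, 0], data[:, 1], np.ones(len(data))])
C, *_ = np.linalg.lstsq(A, data[:, 2], rcond=None)

# Evaluate the fitted plane z = C[0]*x + C[1]*y + C[2] on a regular grid.
xx, yy = np.meshgrid(np.linspace(-3, 3, 13), np.linspace(-3, 3, 13))
zz = C[0] * xx + C[1] * yy + C[2]
print("plane coefficients:", C)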
stackexchange | llm_judgeable_groundtruth_similarity | 495458 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The term “vector” is used heavily in both math and machine learning. In math/physics it’s a geometric object that has both magnitude and direction, while in machine learning it’s a data structure. In both cases they are essentially just a list of numbers that can be represented as pointing in vector space. In machine learning the vectors we speak of are “feature vectors” since the list of numbers we work with is a row in a dataset, and each row is a list of numbers whose elements are the values for each feature (column in the dataset). So in this case a “vector” is defined as a row in the dataset. But there are other situations where it seems to make more sense to consider a column as a vector. For example, if we wanted to know the correlation between 2 variables in a dataset we would be looking to compare 2 columns. Correlation can be interpreted geometrically using vectors in vector space. This means a column would be considered a vector. But how can the list of numbers in a column be a vector in vector space if each element in the list (each row value for that column) belongs to only one feature? In other words, if we were visualizing the vector in vector space, how do all the numbers in the list that make the vector get plotted? In the case of a feature vector this is obvious, since each column is a dimension in the plot, but in the case where a single column is itself a vector it isn’t obvious how this would be plotted in vector space.
Now provide the response and nothing else.
| The same information can be represented by both a column and a row vector. For example, consider the following vector: $$\mathbf{x} = [ x_1, x_2, x_3]$$ this vector can be transposed as: $$\mathbf{x}^T = \begin{bmatrix}x_1 \\x_2 \\ x_3\end{bmatrix}$$ Both of the above vectors contain the same information. One is just the transpose of the other. $$[ x_1, x_2, x_3]^T = \begin{bmatrix}x_1 \\x_2 \\ x_3\end{bmatrix}$$ The information is not changed by this transformation. If $x_1$ is the $x$ coordinate in $\mathbb{R}^3$ space, $x_2$ is the $y$ coordinate and $x_3$ is the $z$ coordinate, they still remain such after taking the transpose. Edit after the edit I am a bit confused about the question. If you have an array of numbers like: $$ \begin{bmatrix}x_{11} & x_{12} & x_{13} \\x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{bmatrix}$$ where rows could represent observations and columns represent different variables (e.g. column 1 could be age and row 1 person A, making $x_{11}$ the age of person A), then in such a situation you could subset the data from the array into both row and column vectors: a row vector $[x_{11}, x_{12} , x_{13}]$, which would be the vector of age and other characteristics of person A, as well as the column vector $ \begin{bmatrix}x_{11} \\x_{21} \\ x_{31}\end{bmatrix}$, which is in this case the age for the different observations. However, for some subsequent mathematical operations you can still transpose these vectors. So for example, the 'age' vector $ \begin{bmatrix}x_{11} \\x_{21} \\ x_{31}\end{bmatrix} = \mathbf{a}$, which contains the ages of all observations, would still contain the same information if you applied the transpose, turning it into the row vector: $$ \mathbf{a}^T = [x_{11},x_{21},x_{31}].$$ | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/495458', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/97026/']} | jdg_73082
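The point that transposing reorganizes but does not change the information is easy to check numerically; a small NumPy illustration with made-up numbers (rows are observations, columns are variables):

import numpy as np

X = np.array([[25, 170, 70],    # person A: age, height, weight
              [31, 160, 55],    # person B
              [47, 182, 90]])   # person C

age = X[:, 0]          # the 'age' column across all observations
person_a = X[0, :]     # all variables for the first observation

col = age.reshape(-1, 1)        # age as an explicit column vector
row = col.T                     # its transpose, a row vector
print(col.ravel(), row.ravel()) # same numbers either way
print(np.array_equal(X.T.T, X)) # True: transposing twice gives back the original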
stackexchange | llm_judgeable_groundtruth_similarity | 99835 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Yesterday I heard of the Bland-Altman plot for the first time. I have to compare two methods of measuring blood pressure, and I need to produce a Bland-Altman plot. I am not sure if I get everything about it right, so here's what I think I know: I have two sets of data. I calculate their mean (x value) and their difference (y value) and plot it around the axis y = mean of difference. Then, I calculate the standard deviation of the difference, and plot it as "limits of agreement". This is what I do not understand - limits of what agreement? What means 95% agreement in layman's terms? Is that supposed to tell me (provided that all points of the scatter graph are between the "limits of agreement") that the methods have 95% match?
Now provide the response and nothing else.
| Have you looked at the Wikipedia entry I linked in your question? You don't plot "the mean of the data", but for each data point measured in two ways, you plot the difference in the two measurements ($y$) against the average of the two measurements ($x$). Using R and some toy data: > set.seed(1)> measurements <- matrix(rnorm(20), ncol=2)> measurements [,1] [,2] [1,] -0.6264538 1.51178117 [2,] 0.1836433 0.38984324 [3,] -0.8356286 -0.62124058 [4,] 1.5952808 -2.21469989 [5,] 0.3295078 1.12493092 [6,] -0.8204684 -0.04493361 [7,] 0.4874291 -0.01619026 [8,] 0.7383247 0.94383621 [9,] 0.5757814 0.82122120[10,] -0.3053884 0.59390132> xx <- rowMeans(measurements) # x coordinate: row-wise average> yy <- apply(measurements, 1, diff) # y coordinate: row-wise difference> xx [1] 0.4426637 0.2867433 -0.7284346 -0.3097095 0.7272193 -0.4327010 0.2356194 0.8410805 0.6985013 0.1442565> yy [1] 2.1382350 0.2061999 0.2143880 -3.8099807 0.7954231 0.7755348 -0.5036193 0.2055115 0.2454398 0.8992897> plot(xx, yy, pch=19, xlab="Average", ylab="Difference") To get the limits of agreement (see under "Application" in the Wikipedia page), you calculate the mean and the standard deviation of the differences, i.e., the $y$ values, and plot horizontal lines at the mean $\pm 1.96$ standard deviations. > upper <- mean(yy) + 1.96*sd(yy)> lower <- mean(yy) - 1.96*sd(yy)> upper[1] 3.141753> lower[1] -2.908468> abline(h=c(upper,lower), lty=2) (You can't see the upper limit of agreement because the plot only goes up to $y\approx 2.1$.) As to the interpretation of the plot and the limits of agreement, again look to Wikipedia: If the differences within mean ± 1.96 SD are not clinically important, the two methods may be used interchangeably. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/99835', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/46069/']} | jdg_73083 |
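If Python rather than R is your environment, the same plot takes only a few lines with NumPy and matplotlib, mirroring the R code above (the data here are again just toy numbers):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
m1 = rng.normal(120, 10, 50)        # blood pressure by method 1
m2 = m1 + rng.normal(0, 5, 50)      # blood pressure by method 2

mean = (m1 + m2) / 2                # x: average of the two methods
diff = m2 - m1                      # y: difference between the methods
md, sd = diff.mean(), diff.std(ddof=1)

plt.scatter(mean, diff)
plt.axhline(md, linestyle='-')               # mean difference (bias)
plt.axhline(md + 1.96 * sd, linestyle='--')  # upper limit of agreement
plt.axhline(md - 1.96 * sd, linestyle='--')  # lower limit of agreement
plt.xlabel('Average'); plt.ylabel('Difference')
plt.show()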
stackexchange | llm_judgeable_groundtruth_similarity | 7048850 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I assumed lambda functions , delegates and anonymous functions with the same body would have the same "speed", however, running the following simple program: static void Main(string[] args){ List<int> items = new List<int>(); Random random = new Random(); for (int i = 0; i < 10000000; i++) { items.Add(random.Next()); } Stopwatch watch; IEnumerable<int> result; Func<int, bool> @delegate = delegate(int i) { return i < 500; }; watch = Stopwatch.StartNew(); result = items.Where(@delegate); watch.Stop(); Console.WriteLine("Delegate: {0}", watch.Elapsed.TotalMilliseconds); Func<int, bool> lambda = i => i < 500; watch = Stopwatch.StartNew(); result = items.Where(lambda); watch.Stop(); Console.WriteLine("Lambda: {0}", watch.Elapsed.TotalMilliseconds); watch = Stopwatch.StartNew(); result = items.Where(i => i < 500); watch.Stop(); Console.WriteLine("Inline: {0}", watch.Elapsed.TotalMilliseconds); Console.ReadLine();} I get: Delegate: 4.2948 ms Lambda: 0.0019 ms Anonymous: 0.0034 ms Although negligible, why are these three - apparently identical - methods running at different speeds? What's happening under the hood? Update: As suggested by the comments, the following "forces" the Where by calling ToList() on it. In addition, a loop is added to offer more run data: while (true) { List<int> items = new List<int>(); Random random = new Random(); for (int i = 0; i < 10000000; i++) { items.Add(random.Next()); } Stopwatch watch; IEnumerable<int> result; Func<int, bool> @delegate = delegate(int i) { return i < 500; }; watch = Stopwatch.StartNew(); result = items.Where(@delegate).ToList(); watch.Stop(); Console.WriteLine("Delegate: {0}", watch.Elapsed.TotalMilliseconds); Func<int, bool> lambda = i => i < 500; watch = Stopwatch.StartNew(); result = items.Where(lambda).ToList(); watch.Stop(); Console.WriteLine("Lambda: {0}", watch.Elapsed.TotalMilliseconds); watch = Stopwatch.StartNew(); result = items.Where(i => i < 500).ToList(); watch.Stop(); Console.WriteLine("Inline: {0}", watch.Elapsed.TotalMilliseconds); Console.WriteLine(new string('-', 12));} The above code results in ~120 ms for each function.
Now provide the response and nothing else.
| A lambda expression is an anonymous function. "Anonymous function" refers to either a lambda expression or an anonymous method (which is what you've called a "delegate" in your code). All three operations are using delegates. The second and third are both using lambda expressions. All three will execute in the same way, with the same performance characteristics. Note that there can be a difference in performance between: Func<int, int> func = x => ...;for (int i = 0; i < 10000; i++) { CallFunc(func);} and for (int i = 0; i < 10000; i++) { CallFunc(x => ...) // Same lambda as before} It depends on whether the compiler is able to cache the delegate created by the lambda expression. That will in turn depend on whether it captures variables etc. For example, consider this code: using System;using System.Diagnostics;class Test{ const int Iterations = 1000000000; static void Main() { AllocateOnce(); AllocateInLoop(); } static void AllocateOnce() { int x = 10; Stopwatch sw = Stopwatch.StartNew(); int sum = 0; Func<int, int> allocateOnce = y => y + x; for (int i = 0; i < Iterations; i++) { sum += Apply(i, allocateOnce); } sw.Stop(); Console.WriteLine("Allocated once: {0}ms", sw.ElapsedMilliseconds); } static void AllocateInLoop() { int x = 10; Stopwatch sw = Stopwatch.StartNew(); int sum = 0; for (int i = 0; i < Iterations; i++) { sum += Apply(i, y => y + x); } sw.Stop(); Console.WriteLine("Allocated in loop: {0}ms", sw.ElapsedMilliseconds); } static int Apply(int loopCounter, Func<int, int> func) { return func(loopCounter); }} The compiler is smart, but there's still a difference. Using Reflector, we can see that AllocateInLoop is effectively compiled to: private static void AllocateInLoop(){ Func<int, int> func = null; int x = 10; Stopwatch stopwatch = Stopwatch.StartNew(); int sum = 0; for (int i = 0; i < Iterations; i++) { if (func == null) { func = y => y + x; } sum += Apply(i, func); } stopwatch.Stop(); Console.WriteLine("Allocated in loop: {0}ms", stopwatch.ElapsedMilliseconds);} So still only a single delegate instance is created, but there's extra logic within the loop - an extra nullity test on each iteration, basically. On my machine that makes about a 15% difference in performance. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7048850', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/160823/']} | jdg_73084 |
stackexchange | llm_judgeable_groundtruth_similarity | 22039534 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My site crashes in the browser due to low memory on iOS. I'm repeating some action which consumes memory, and after several attempts the browser crashes. However, when I tested the same site on my desktop in Chrome using the Timeline from dev tools, I performed the same action, collected garbage, and all the additionally allocated memory was collected. Why does the browser crash if there are no memory leaks? Is there a way to force garbage collection?
Now provide the response and nothing else.
| Know iOS Resource Limits Your webpage performing well on the desktop is no guarantee that it will perform well on iOS. 1.Keep in mind that iOS uses EDGE (lower bandwidth, higher latency) 3G (higher bandwidth, higher latency) Wi-Fi (higher bandwidth, lower latency) to connect to the Internet. 2.You need to minimize the size of your webpage. Including unused or unnecessary images CSS JavaScript which adversely affects your site’s performance on iOS. 3.Because of the memory available on iOS, there are limits on the number of resources it can process: The maximum size for decoded GIF, PNG, and TIFF images 3 megapixels for devices with less than 256 MB RAM 5 megapixels for devices with greater or equal than 256 MB RAM That is ensure width * height ≤ 3 * 1024 * 1024 for devices with less than 256 MB RAM Note: that the decoded size is far larger than the encoded size of an image. The maximum decoded image size for JPEG is 32 megapixels using subsampling. JPEG images can be up to 32 megapixels due to subsampling, which allows JPEG images to decode to a size that has one sixteenth the number of pixels . JPEG images larger than 2 megapixels are subsampled—that is, decoded to a reduced size. JPEG subsampling allows the user to view images from the latest digital cameras. 4.The maximum size for a canvas element is 3 megapixels for devices with less than 256 MB RAM 5 megapixels for devices with greater or equal than 256 MB RAM. The height and width of a canvas object is 150 x 300 pixels if not specified. 5. JavaScript execution time limited to 10 seconds for each top-level entry point. If your script executes for more than 10 seconds, Safari on iOS stops executing the script at a random place in your code, so unintended consequences may result . 6.The maximum number of documents that can be open at once is eight on iPhone nine on iPad. Please refer Developing Web Content for Safari-Apple Documentation for more info. Garbage Collection Mobile safari javascript implementation doesn't have any command like CollectGarbage() in internet explorer for garbage collection. There are three events that will trigger garbage collection in mobile safari ( Reference ). A dedicated garbage collection timer expires An allocation occurs when all of a heap's CollectorBlocks are full. An object with sufficiently large associated storage is allocated. Its really a bad practice to trigger garbage collection.What we should be doing is to write codes that doesn't leak memory. Plese refer Memory management in Javascript | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22039534', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/150121/']} | jdg_73085 |
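As a rough illustration of the decoded-image limits quoted above, here is a small Python sketch (using Pillow) that a server-side pipeline could use to check whether an image stays under the width * height budget before serving it to a constrained iOS device. The helper name and the per-device split are assumptions made for illustration; the 3- and 5-megapixel figures come directly from the limits listed above.

```python
from PIL import Image

# Limits quoted above: decoded GIF/PNG/TIFF images should satisfy
# width * height <= 3 * 1024 * 1024 on devices with < 256 MB RAM,
# and <= 5 * 1024 * 1024 otherwise. JPEG is more forgiving due to subsampling.
LOW_RAM_LIMIT = 3 * 1024 * 1024
HIGH_RAM_LIMIT = 5 * 1024 * 1024

def fits_ios_decode_budget(path, low_ram_device=True):
    """Hypothetical helper: True if the image's pixel count stays under the limit."""
    with Image.open(path) as img:
        pixels = img.width * img.height
    limit = LOW_RAM_LIMIT if low_ram_device else HIGH_RAM_LIMIT
    return pixels <= limit

if __name__ == "__main__":
    print(fits_ios_decode_budget("banner.png"))  # example path
```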
stackexchange | llm_judgeable_groundtruth_similarity | 18299669 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't seem to figure out how to send back the entire ViewModel to the controller to the 'Validate and Save' function. Here is my controller: [HttpPost]public ActionResult Send(BitcoinTransactionViewModel transaction){} Here is the form in the view: <li class="check"> <h3>Transaction Id</h3> <p>@Html.DisplayFor(m => m.Transaction.TransactionId)</p></li><li class="money"> <h3>Deposited Amount</h3> <p>@Model.Transaction.Amount.ToString() BTC</p></li><li class="time"> <h3>Time</h3> <p>@Model.Transaction.Time.ToString()</p></li>@using (Html.BeginForm("Send", "DepositDetails", FormMethod.Post, new { transaction = Model })){@Html.HiddenFor(m => m.Token);@Html.HiddenFor(m => m.Transaction.TransactionId); @Html.TextBoxFor(m => m.WalletAddress, new { placeholder = "Wallet Address", maxlength = "34" }) <input type="submit" value="Send" /> @Html.ValidationMessage("walletAddress", new { @class = "validation" })} When i click on submit, the conroller contains the correct value of the walletAddress field but transaction.Transaction.Time , transaction.Transaction.Location , transaction.Transaction.TransactionId are empty. Is there a way i could pass the entire Model back to the controller? Edit: When i dont even receive the walletAddress in the controller. Everything gets nulled!When i remove this line alone: @Html.HiddenFor(m => m.Transaction.TransactionId); it works and i get the Token property on the controller, but when i add it back, all the properties of the transaction object on the controller are NULL. Here is the BitcoinTransactionViewModel: public class BitcoinTransactionViewModel { public string Token { get; set; } public string WalletAddress { get; set; } public BitcoinTransaction Transaction { get; set; } }public class BitcoinTransaction { public int Id { get; set; } public BitcoinTransactionStatusTypes Status { get; set; } public int TransactionId { get; set; } public decimal Amount { get; set; } public DateTime Time { get; set; } public string Location { get; set; } } Any ideas? EDIT: I figured it out, its in the marked answer below...
Now provide the response and nothing else.
| OK, I've been working on something else and bumped into the same issue all over again. Only this time I figured out how to make it work! Here's the answer for anyone who might be interested: Apparently, there is a naming convention. Pay attention: This doesn't work: // Controller[HttpPost]public ActionResult Send(BitcoinTransactionViewModel transaction){}// View@using (Html.BeginForm("Send", "DepositDetails", FormMethod.Post, new { transaction = Model })){@Html.HiddenFor(m => m.Token);@Html.HiddenFor(m => m.Transaction.TransactionId);.. This works: // Controller[HttpPost]public ActionResult Send(BitcoinTransactionViewModel RedeemTransaction){}// View@using (Html.BeginForm("Send", "DepositDetails", FormMethod.Post, new { RedeemTransaction = Model })){@Html.HiddenFor(m => m.Token);@Html.HiddenFor(m => m.Transaction.TransactionId);.. In other words - a naming convention error! There was a naming ambiguity between the Model.Transaction property and my transaction form field + controller parameter. Unbelievable. If you're experiencing the same problem, make sure that your controller parameter name is unique - try renaming it to MyTestParameter or something like this... In addition, if you want to send form values to the controller, you'll need to include them as hidden fields, and you're good to go. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18299669', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1176547/']} | jdg_73086
stackexchange | llm_judgeable_groundtruth_similarity | 7557968 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have never see a grammar in c++ like this before: typedef int (callback)(int); what really does this really mean?I just find that if I create a statement callback a; It's effect is very very similar to a forward function declaration. below is the code I had written #include<cstdio>int callbackfunc(int i){ printf("%d\n",i); return i*i;}// you can also use typedef int (callback)(int) here!typedef int (*callback)(int);void func(callback hook){ hook(hook(3));}int main(){ func(callbackfunc); getchar(); return 0;} You can use typedef int (*callback)(int);//this is very common to use in this code,but if we change it to typedef int (callback)(int); //I'm puzzled by this ! this will also get the same result! and I know typedef int (*callback)(int) and typedef int (callback)(int) are two completely different stuff.
Now provide the response and nothing else.
| Its because of the fact that in the parameter declaration , the function-type is adjusted to become a pointer-to-function-type . typedef int type(int); typedef int (*type)(int); The first typedef defines a type which is called function-type , while the second typedef defines a type which is called pointer-to-function-type . In the parameter declaration, function-type is adjusted to become a pointer to function type. §13.1/3 (C++03) says, Parameter declarations that differ only in that one is a function type and the other is a pointer to the same function type are equivalent . That is, the function type is adjusted to become a pointer to function type (8.3.5) . [Example: void h(int()); void h(int (*)()); // redeclaration of h(int()) void h(int x()) { } // definition of h(int()) void h(int (*x)()) { } // ill-formed: redefinition of h(int())] An interesting example of the exclusive usage of function-type Suppose you've a typedef, defined as: typedef void funtype(); then you can use this to define member-function as: struct A{ //member function declaration. funtype f; //equivalent to : void f();};void A::f() //definition{ std::cout << "haha" << std::endl;} Test code: int main() { A a; a.f(); //call member function} Output: haha Online demo: http://ideone.com/hhkeK | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7557968', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/965359/']} | jdg_73087 |
stackexchange | llm_judgeable_groundtruth_similarity | 28742137 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I will make one application that constantly collects real-time data at high speed, a second application that processes the data and creates new variables, and a third application that uses the data. I want all the variables/data to be kept in physical memory, as I'll have a server dedicated to this project, and I don't want any use of the hard drive. How can this be achieved? Many thanks
Now provide the response and nothing else.
| Let's break this down a bit: WHERE (ResTRRequest.RequestTime BETWEEN TO_CHAR(TRUNC(TO_DATE('2012-12-01 20:10:10', 'HH')), 'YYYY-MM-DD HH24:MI:SS') AND TO_CHAR(TRUNC(CURRENT_TIMESTAMP, 'HH') + INTERVAL '59:59' MINUTE TO SECOND, 'YYYY-MM-DD HH24:MI:SS')) In the first place, I don't think you mean this: TRUNC(TO_DATE('2012-12-01 20:10:10', 'HH')) , I think maybe you mean this: TRUNC(TO_DATE('2012-12-01 20:10:10'), 'HH') . The number from 1-12 error comes from the fact that you have an hour of 20 and are trying to convert it into a date with the mask of HH . But as I said I think that's a typo. You can also use a TIMESTAMP literal here rather than TO_DATE() : TRUNC(TIMESTAMP'2012-12-01 20:10:10', 'HH') Second, and just to get this out of the way, are you storing dates or timestamps as strings? That's not a good idea. Third, it's not a good idea to use BETWEEN in date comparisons because you can miss the edge cases. It might be better to rewrite this as follows: WHERE ( ResTRRequest.RequestTime >= TO_CHAR(TRUNC(TO_DATE('2012-12-01 20:10:10'), 'HH'), 'YYYY-MM-DD HH24:MI:SS') AND ResTRRequest.RequestTime < TO_CHAR(TRUNC(CURRENT_TIMESTAMP, 'HH') + INTERVAL '1' HOUR, 'YYYY-MM-DD HH24:MI:SS') ) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28742137', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4610157/']} | jdg_73088 |
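The underlying advice (prefer a half-open range from the start of the hour to the start of the next hour over BETWEEN) is language-agnostic. As an illustration only, here is a small Python sketch that builds such bounds for use as bind parameters in a query; the function name is made up and the exact SQL is left to the caller.

```python
from datetime import datetime, timedelta

def hour_range(ts):
    """Return (start, end): ts truncated to the hour and the start of the
    next hour -- a half-open interval [start, end)."""
    start = ts.replace(minute=0, second=0, microsecond=0)
    return start, start + timedelta(hours=1)

start, end = hour_range(datetime(2012, 12, 1, 20, 10, 10))
# Use as bind parameters:  WHERE RequestTime >= :start AND RequestTime < :end
print(start, end)   # 2012-12-01 20:00:00  2012-12-01 21:00:00
```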
stackexchange | llm_judgeable_groundtruth_similarity | 20449543 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the difference between = , == and -eq in shell scripting? Is there any difference between the following? [ $a = $b ][ $a == $b ][ $a -eq $b ] Is it simply that = and == are only used when the variables contain numbers?
Now provide the response and nothing else.
| = and == are for string comparisons -eq is for numeric comparisons -eq is in the same family as -lt , -le , -gt , -ge , and -ne == is specific to bash (not present in sh (Bourne shell), ...). Using POSIX = is preferred for compatibility. In bash the two are equivalent, and in sh = is the only one that will work. $ a=foo$ [ "$a" = foo ]; echo "$?" # POSIX sh0$ [ "$a" == foo ]; echo "$?" # bash-specific0$ [ "$a" -eq foo ]; echo "$?" # wrong-bash: [: foo: integer expression expected2 (Note: make sure to quote the variable expansions. Do not leave out the double-quotes above.) If you're writing a #!/bin/bash script then I recommend using [[ instead . The double square-brackets [[...]] form has more features, a more natural syntax, and fewer gotchas that will trip you up. For example, double quotes are no longer required around $a : $ [[ $a == foo ]]; echo "$?" # bash-specific0 See also: What's the difference between [ and [[ in Bash? | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/20449543', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2845038/']} | jdg_73089 |
stackexchange | llm_judgeable_groundtruth_similarity | 5713731 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm getting an ImportError with the following error message in Django debug mode: /usr/local/lib/python2.6/dist-packages/lxml-2.3-py2.6-win32.egg/lxml/objectify.pyd: invalid ELF header. What does this mean and how do I fix it? Google isn't revealing very much at the moment. The error is coming from this package I am using in my Django app. Update: I'm using Ubuntu and Python 2.6.
Now provide the response and nothing else.
| The error means that objectify.pyd is not a valid shared library. You seem to have lxml compiled for a different version of Python than the one you are running. Judging from the path, the fact that ELF is an object (shared and executable) format used on Unices, and the fact that the package has -win32 in its name, you are probably trying to use a Windows build on Linux. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5713731', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/131238/']} | jdg_73090
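If you want to confirm this diagnosis quickly, you can inspect the file's magic bytes: Linux shared objects start with \x7fELF, while a Windows .pyd is a PE/COFF DLL that starts with MZ. A minimal Python sketch (the helper name is made up):

```python
def describe_binary(path):
    """Rough check of a compiled extension's format from its magic bytes."""
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"\x7fELF":
        return "ELF (Linux shared object)"
    if magic[:2] == b"MZ":
        return "PE/COFF (Windows DLL) -- wrong platform for this interpreter"
    return "unknown format: %r" % magic

print(describe_binary(
    "/usr/local/lib/python2.6/dist-packages/"
    "lxml-2.3-py2.6-win32.egg/lxml/objectify.pyd"))
```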
stackexchange | llm_judgeable_groundtruth_similarity | 27380000 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
it's a simple example: <a ng-if="item.photo == ''" href="#/details/{{item.id}}"><img src="/img.jpg" class="img-responsive"></a><a ng-if="item.photo != ''" href="#/details/{{item.id}}"><img ng-src="/{{item.photo}}" class="img-responsive"></a> I see it always generates the item.photo != '' condition even if the value is empty. Why?
Now provide the response and nothing else.
| You don't need to explicitly use qualifiers like item.photo == '' or item.photo != '' . Like in JavaScript, an empty string will be evaluated as false. Your views will be much cleaner and readable as well. <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script><div ng-app init="item = {photo: ''}"> <div ng-if="item.photo"> show if photo is not empty</div> <div ng-if="!item.photo"> show if photo is empty</div> <input type=text ng-model="item.photo" placeholder="photo" /></div Updated to remove bug in Angular | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/27380000', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/106616/']} | jdg_73091 |
stackexchange | llm_judgeable_groundtruth_similarity | 6002772 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
XML file 1: <?xml version="1.0"?><rentalProperties> <property contact ="1"> <type>House </type> <price>420</price> <address> <streetNo>1</streetNo> <street>Wavell Street</street> <suburb>Box Hill</suburb> <state>VIC</state> <zipcode>3128</zipcode> </address> <numberOfBedrooms>3</numberOfBedrooms> <numberOfBathrooms>1</numberOfBathrooms> <garage>1</garage> </property> XML file 2: <?xml version="1.0"?><rentalProperties> <property contact ="1"> <type>House </type> <price>420</price> <address>1 wavell street,Box Hill,VIC,Australia</address> <numberOfBedrooms>3</numberOfBedrooms> <numberOfBathrooms>1</numberOfBathrooms> <garage>1</garage> </property> How should i convert xml file 1 to xml fle 2 using xslt?i want to represent the address as the single line and add a new attribute [country- Australia] to end of the line. i did the rest of it . i'm struggling with address line XSLT file: <?xml version="1.0" encoding="iso-8859-1"?><xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" type="text/css" href="style.css"> <xsl:template match="/"> <rentalProperties> <property> <xsl:attribute name="contact"><xsl:value-of select='@contact'/></xsl:attribute> <type><xsl:value-of select="type"/></type> <price><xsl:value-of select="price"/></price> <numberOfBedrooms><xsl:value-of select="numberOfBedrooms"/></numberOfBedrooms> <numberOfBathrooms><xsl:value-of select="numberOfBathrooms"/></numberOfBathrooms> <garage><xsl:value-of select="garage"/></garage> </property> </rentalProperties> </xsl:template></xsl:stylesheet>
Now provide the response and nothing else.
| This transformation : <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output omit-xml-declaration="yes" indent="yes"/> <xsl:strip-space elements="*"/> <xsl:template match="node()|@*"> <xsl:copy> <xsl:apply-templates select="node()|@*"/> </xsl:copy> </xsl:template> <xsl:template match="address"> <xsl:copy> <xsl:value-of select= "concat(streetNo, ' ', street, ',', suburb,',', state,', Australia') "/> </xsl:copy> </xsl:template> <xsl:template match="address/node()"/></xsl:stylesheet> when applied on the provided XML document : <rentalProperties> <property contact ="1"> <type>House </type> <price>420</price> <address> <streetNo>1</streetNo> <street>Wavell Street</street> <suburb>Box Hill</suburb> <state>VIC</state> <zipcode>3128</zipcode> </address> <numberOfBedrooms>3</numberOfBedrooms> <numberOfBathrooms>1</numberOfBathrooms> <garage>1</garage> </property></rentalProperties> produces the wanted, correct result : <rentalProperties> <property contact="1"> <type>House </type> <price>420</price> <address>1 Wavell Street,Box Hill,VIC, Australia</address> <numberOfBedrooms>3</numberOfBedrooms> <numberOfBathrooms>1</numberOfBathrooms> <garage>1</garage> </property></rentalProperties> Explanation : Using and overriding the identity rule . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6002772', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/649222/']} | jdg_73092 |
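If XSLT is not a hard requirement, the same flattening can be sketched in plain Python with xml.etree.ElementTree. This is only an illustration; the input and output file names are placeholders, and the output string follows the same concat pattern as the stylesheet above.

```python
import xml.etree.ElementTree as ET

tree = ET.parse("rental.xml")          # input file name is illustrative
for address in tree.getroot().iter("address"):
    parts = [address.findtext(tag, default="").strip()
             for tag in ("streetNo", "street", "suburb", "state")]
    flat = "{} {},{},{}, Australia".format(*parts)
    # Drop the child elements (including zipcode) and keep a single text node.
    for child in list(address):
        address.remove(child)
    address.text = flat

tree.write("rental-flat.xml", encoding="utf-8", xml_declaration=True)
```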
stackexchange | llm_judgeable_groundtruth_similarity | 37623191 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been using Google Sheets API, and following The Google Guide . However there is no example, even beyond the second page of google, to add a worksheet and write to a new sheet in .NET. There is plenty for js, but I have no idea on how to 1) add a sheet or 2) write to a new sheet. How can I do this? Right now I'm able to read like in the example with out any problems, and I've only found one other reference to v4 C#. I tried going back to v3, but all documents strongly suggest using v4. Has anyone been able to do this? Here's all I've been able to do so far: // Create Google Sheets API service. var service = new SheetsService(new BaseClientService.Initializer() { HttpClientInitializer = credential, ApplicationName = ApplicationName, }); // Define request parameters. // Add new Sheet string sheetName = string.Format("{0} {1}", DateTime.Now.Month, DateTime.Now.Day); AddSheetRequest addSheetRequest = new AddSheetRequest(); addSheetRequest.Properties.Title = sheetName; // How do I tell this to update??
Now provide the response and nothing else.
| To save someone in the future a headache to end all headaches. I figured out how to add a sheet after hours of trial and error. Still working on how to update values. Here's how I did it: // Create Google Sheets API service. var service = new SheetsService(new BaseClientService.Initializer() { HttpClientInitializer = credential, ApplicationName = ApplicationName, }); // Add new Sheet string sheetName = string.Format("{0} {1}", DateTime.Now.Month, DateTime.Now.Day); var addSheetRequest = new AddSheetRequest(); addSheetRequest.Properties = new SheetProperties(); addSheetRequest.Properties.Title = sheetName; BatchUpdateSpreadsheetRequest batchUpdateSpreadsheetRequest = new BatchUpdateSpreadsheetRequest(); batchUpdateSpreadsheetRequest.Requests = new List<Request>(); batchUpdateSpreadsheetRequest.Requests.Add(new Request { AddSheet = addSheetRequest }); var batchUpdateRequest = service.Spreadsheets.BatchUpdate(batchUpdateSpreadsheetRequest, spreadsheetId); batchUpdateRequest.Execute(); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37623191', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3169706/']} | jdg_73093 |
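For comparison, the same AddSheet batch update issued from Python would look roughly like the sketch below, assuming the google-api-python-client package and an already-authorised credentials object; the spreadsheet id and sheet title are supplied by the caller.

```python
from googleapiclient.discovery import build

def add_sheet(creds, spreadsheet_id, title):
    """Add a worksheet via a batchUpdate addSheet request.
    `creds` is an already-authorised Google credentials object;
    both arguments are placeholders supplied by the caller."""
    service = build("sheets", "v4", credentials=creds)
    body = {"requests": [{"addSheet": {"properties": {"title": title}}}]}
    response = service.spreadsheets().batchUpdate(
        spreadsheetId=spreadsheet_id, body=body).execute()
    # Return the id of the newly created sheet.
    return response["replies"][0]["addSheet"]["properties"]["sheetId"]
```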
stackexchange | llm_judgeable_groundtruth_similarity | 7629 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When we say that in a random graph we add an edge with a certain fixed probability, what do we actually mean? For example, if the probability is 0.5, does that mean that we can just add two edges to the graph because 0.5+0.5=1?
Now provide the response and nothing else.
| Suppose you wish to compute the random graph $G(n,p)$ that is the graph with $n$ vertices where each edge is added with probability $p.$ Suppose you have a coin that gives tails with probability $p$ and heads with probability $1-p.$ Then what you do you take $\{1,...,n\}$ to be the vertex set of your graph and for each pair $(i,j) \in { \{1,\ldots,n\} \choose 2}$ you flip your coin. If it comes tails you add the edge $(i,j)$ to your graph otherwise you don't. | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/7629', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/4624/']} | jdg_73094 |
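The coin-flipping description above translates almost literally into code. A minimal Python sketch of G(n, p):

```python
import random
from itertools import combinations

def gnp(n, p, seed=None):
    """Erdos-Renyi G(n, p): flip a p-biased coin once per vertex pair."""
    rng = random.Random(seed)
    vertices = range(1, n + 1)
    edges = [(i, j) for i, j in combinations(vertices, 2) if rng.random() < p]
    return list(vertices), edges

vertices, edges = gnp(5, 0.5, seed=42)
print(edges)   # each of the C(5,2) = 10 pairs was kept independently with prob. 0.5
```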
stackexchange | llm_judgeable_groundtruth_similarity | 15836199 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a Windows Service with accompanying "status tool." The service hosts a WCF named pipe endpoint for inter-process communication. Through the named pipe, the status tool can periodically query the service for the latest "status." On my development machine, I have multiple IP Addresses; one of them is a "local" network with a 192.168.1.XX address. The other is the "corporate" network, with a 10.0.X.XX address. The Windows Service collects UDP multicast traffic on a single IP Address. The Windows Service has, until now, worked fine as long as it uses the "192.168.1.XX," address. It consistently reports the status correctly to the client. As soon as I switched to the other, "corporate" IP Address (10.0.X.XX) and restarted the service, I get continuous "CommunicationExceptions" when retrieving the status: "There was an error reading from the pipe: The pipe has been ended. (109, 0x6d)." Now, I wouldn't think that the UDP Client's 'claimed' IP address should have anything to do with the functionality of the Named-Pipe interface; they are completely separate pieces of the application! Here are the relevant WCF config sections: //On the Client app:string myNamedPipe = "net.pipe://127.0.0.1/MyNamedPipe";ChannelFactory<IMyService> proxyFactory = new ChannelFactory<IMyService>( new NetNamedPipeBinding(), new EndpointAddress(myNamedPipe));//On the Windows Service:string myNamedPipe = "net.pipe://127.0.0.1/MyNamedPipe";myService = new MyService(myCustomArgs);serviceContractHost = new ServiceHost(myService );serviceContractHost.AddServiceEndpoint( typeof(IMyService), new NetNamedPipeBinding(), myNamedPipe);serviceContractHost.Open(); I wouldn't think this is a 'permissions' issue - I'm running the client with administrative privileges - but perhaps there's some domain-specific reason this broke?
Now provide the response and nothing else.
| The IP Address was, it turns out, a complete red herring. The real reason for the exception was invalid Enum values being returned by the WCF service. My enum was defined thusly: [DataContract]public enum MyEnumValues : Byte{ [EnumMember] Enum1 = 0x10, [EnumMember] Enum2 = 0x20, [EnumMember] Enum3 = 0x30, [EnumMember] Enum4 = 0x40,} It looks fine on the surface. But the raw status reported by the underlying service was a Byte value of "0," and there was no corresponding Enum value for which to cast it. Once I ensured that the Enum values were all valid, the tool lit up like a Christmas tree. When in doubt, assume your WCF data is invalid. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15836199', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/741988/']} | jdg_73095 |
stackexchange | llm_judgeable_groundtruth_similarity | 1934442 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to be able to draw on the root window in Linux, i.e. make an OSD (on-screen display). I'm using GNOME. Code samples or links to them would be appreciated.
Now provide the response and nothing else.
| It is possible, but you will not see anything in GNOME. Nautilus, GNOME's file manager, opens its own window on top of root X window to display icons. Because of that the root X window is fully covered... so there is no point in drawing on it. If you want to make OSD, either you should use a library like XOSD , or open your own X window and make it translucent. In fact, XOSD's source code should be a good example of how to do this. Whole library seems to be implemented in one file: xosd.c . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1934442', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/140786/']} | jdg_73096 |
stackexchange | llm_judgeable_groundtruth_similarity | 695785 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$f(x)=\frac{1}{1+x^2}$ is continuous on $\mathbb{R}$ if for all $a\in\mathbb{R}$ and all $\varepsilon>0$ there is a $\delta>0$ such that $$\left|\frac{1}{1+x^2}-\frac{1}{1+a^2}\right|<\varepsilon\text{ when }\left|x-a\right|<\delta\text{.}$$ All I can manage from this is that $\left|\frac{1}{1+x^2}-\frac{1}{1+a^2}\right|=\left|\frac{(a-x)(a+x)}{(1+x^2)(1+a^2)}\right|$, so I'm thinking I need to find a way to bound the factor $(a+x)$ in the numerator so that I'm left with something controlled by $|x-a|<\delta$. I don't know how to proceed. Any tips?
Now provide the response and nothing else.
| Note that $\left|\frac{(a-x)(a+x)}{(1+x^2)(1+a^2)}\right| \le | (a-x)(a+x)|$. Then choose $\delta$ so that at worst, $\delta \le 1$, then $|a+x| \le 2|a|+1$. Addendum : The idea is to show that for any $a$ and for any $\epsilon>0$ that we can find a $\delta>0$ such that if $|x-a|< \delta$, then $|f(x)-f(a)| < \epsilon$. So, pick some arbitrary $a$ and $\epsilon>0$. We can decide up front that we will pick a $\delta \le 1$, as this will simplify our life. The above estimate shows that we have$|f(x)-f(a)| \le (2|a|+1) |x-a|$. Now we just need to pick another bound on $\delta$ so that if $|x-a|< \delta$, then $|f(x)-f(a)| < \epsilon$. From the last estimate, we want to pick a $\delta$ so that $(2|a|+1) \delta < \epsilon$, so lets try that. Choose $\delta = {1 \over 2}\min(1, { \epsilon \over (2|a|+1)})$. (I am picking the ${1 \over 2}$ just to get the $<$ rather than a $\le$, any number $<1$ will do.) Then we have$|f(x)-f(a)| \le (2|a|+1) |x-a| \le (2|a|+1) \delta \le {1 \over 2} \epsilon < \epsilon$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/695785', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/132424/']} | jdg_73097 |
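For reference, the final choice of $\delta$ and the resulting chain of estimates from the argument above can be collected in a single display (this is only a restatement, using $|a+x|\le 2|a|+1$ whenever $|x-a|\le 1$):

$$\delta=\tfrac12\min\!\left(1,\ \frac{\epsilon}{2|a|+1}\right),\qquad |x-a|<\delta\ \Longrightarrow\ |f(x)-f(a)|=\left|\frac{(a-x)(a+x)}{(1+x^2)(1+a^2)}\right|\le(2|a|+1)\,|x-a|\le(2|a|+1)\,\delta\le\tfrac{\epsilon}{2}<\epsilon.$$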
stackexchange | llm_judgeable_groundtruth_similarity | 636294 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a transformer which outputs 0.33 A at 24 VAC. I would like to rectify this down to 5 VDC and minimal current, 50 mA being the maximum. I came across the following schematic which seems to be what I'm looking for. The only difference is that the output current in this schematic is ten times what I need. It would be all too easy to simply divide the capacitors by ten. Can anyone explain to me the formula for calculating the capacitance needed to achieve a final result of 5 V, 30 mA coming from 24 VAC, 0.33 A? By the way, D2 is a 5.1 V, 1.3 W Zener diode.
Now provide the response and nothing else.
| Unless you have a reason to limit the current it makes no difference if a power supply is capable of more current than you need at a given voltage, if a device draws 50 mA at 5 V it will draw that much at 5 V whether the supply is capable of 50 mA or 50 A. This goes back to Ohm's Law $$ I=\frac{E}{R}$$ The supply shown uses a capacitor to drop voltage. The output voltage will be partially dependent on the current drawn, so the voltage will vary with load, will not be well regulated, and will tend to have a lot of ripple. It's also not very efficient, somewhere around 10%. This is okay for something where the load is relatively constant and you don't need good regulation or low ripple. One thing you could do is modify the circuit for better regulation by first dropping to a higher voltage and then regulating from that, here I've done it with a simple zener and resistor but a 5V LDO regulator should work as well and give you closer to 5 V. I've reduced C1 to 15 uF to lower the dissipation of the 10 V Zener. You could also use a a more conventional rectifier, such as half wave, full wave or bridge, and a voltage regulator. This can be a simple zener, transistor type, three terminal integrated regulator, or a switching regulator. If you still need to limit the current to a certain maximum you can add a current limiting circuit. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/636294', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/294365/']} | jdg_73098 |
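As a back-of-the-envelope illustration of the two points above (the load, not the supply, fixes the current via I = E/R, and in the capacitive-dropper circuit the series capacitor's reactance limits the available current), here is a small Python sketch. The 50 Hz mains frequency is an assumption, the 15 uF value is taken from the discussion above, and the current estimate ignores the clamp voltage, so treat the numbers as rough.

```python
import math

# 1) Ohm's law: a 5 V load drawing 50 mA looks like a 100-ohm load,
#    no matter how much current the supply *could* deliver.
V_load, I_load = 5.0, 0.050
R_load = V_load / I_load
print(f"Equivalent load resistance: {R_load:.0f} ohm")

# 2) Capacitive dropper: the series capacitor's reactance roughly sets the
#    available current (illustrative values: 24 VAC, 50 Hz, 15 uF).
V_ac, f, C = 24.0, 50.0, 15e-6
Xc = 1.0 / (2 * math.pi * f * C)
I_available = V_ac / Xc
print(f"Reactance ~{Xc:.0f} ohm, available current ~{I_available * 1000:.0f} mA")
```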
stackexchange | llm_judgeable_groundtruth_similarity | 2395661 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've seen some chatter about this, but nothing definite. Is there a way to put the tabs in a TabWidget at the bottom of the screen? If so, how? I've tried the following, but it didn't work: a) setting the TabWidget below the FrameLayout, b) setting the TabWidget's gravity to "bottom". Thanks! llappall
Now provide the response and nothing else.
| Here's the simplest, most robust, and scalable solution to get tabs on the bottom of the screen. In your vertical LinearLayout, put the FrameLayout above the TabWidget Set layout_height to wrap_content on both FrameLayout and TabWidget Set FrameLayout's android:layout_weight="1" Set TabWidget's android:layout_weight="0" (0 is default, but for emphasis, readability, etc) Set TabWidget's android:layout_marginBottom="-4dp" (to remove the bottom divider) Full code: <?xml version="1.0" encoding="utf-8"?><TabHost xmlns:android="http://schemas.android.com/apk/res/android" android:id="@android:id/tabhost" android:layout_width="fill_parent" android:layout_height="fill_parent"> <LinearLayout android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" android:padding="5dp"> <FrameLayout android:id="@android:id/tabcontent" android:layout_width="fill_parent" android:layout_height="wrap_content" android:padding="5dp" android:layout_weight="1"/> <TabWidget android:id="@android:id/tabs" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_weight="0" android:layout_marginBottom="-4dp"/> </LinearLayout></TabHost> | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2395661', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/284556/']} | jdg_73099 |
stackexchange | llm_judgeable_groundtruth_similarity | 4086368 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $(X,\mathcal{A},\mu)$ be a measure space, where $X$ is a set, $\mathcal{A}$ is a $\sigma$-algebra and $\mu:\mathcal{A}\rightarrow[0,\infty]$ is a measure. Let $(\chi_{E_{n}})$ be a sequence of characteristic functions, that is, $$\chi_{E_{n}}(x)=\begin{cases}0,&x\notin E_n,\\1, &x\in E_n, \end{cases}$$ where $E_n\in \mathcal{A}$. Suppose that the sequence $(\chi_{E_{n}})$ is a Cauchy sequence in $L^p(X,\mathcal{A},\mu)$; that is, given $\epsilon>0$ there exists $n_0\in \mathbb{N}$ such that $$\|\chi_{E_{n}}-\chi_{E_{m}}\|_p=\left(\int|\chi_{E_{n}}-\chi_{E_{m}}|^p\,d\mu\right)^{1/p}<\epsilon,\quad\forall n,m\geq n_0.$$ I would like to prove that there exists a function $f\in L^p$ with $f=\chi_{E}$ for some $E\in \mathcal{A}$ such that the sequence $(\chi_{E_{n}})$ converges to $f$ in $L^p$; that is, given $\varepsilon>0$ there exists $n_1>0$ such that $$\|\chi_{E_{n}}-f\|_p<\varepsilon,\quad\forall n\geq n_1.$$ My attempt: I think the function is $f=\chi_{E}$ where $$E=\bigcup_{m=1}^{\infty}\left(\bigcap_{k=m}^{\infty}E_k\right).$$ The reason I believe this is that the identity $$|\chi_{E_{n}}-\chi_{E_{m}}|=\chi_{E_{n}\triangle E_{m}},$$ where $E_{n}\triangle E_{m}=(E_n\setminus E_m)\cup (E_m\setminus E_n)$, together with the Cauchy property implies that $$\mu(E_{n}\triangle E_{m})<\epsilon^p.$$ So my reasoning tells me that the "substance" of the functions is in their intersection.
Now provide the response and nothing else.
| Let $f$ be the limit of $\chi_{E_n}$ in $L^p$ . All you have to do is show that $f(x) = 0$ or $f(x) = 1$ almost everywhere. Then we see that $f = \chi_E$ almost everywhere, where $E = \{x \in X | f(x) = 1\}$ To do this, it is helpful to define $g(x) = \min(|x - 1|, |x|)$ . We will show that $g \circ f = 0$ almost everywhere. We first show that $\int g(f(x))^p \; dx = 0$ . Since $g(f(x))^p \geq 0$ , it suffices to show that the integral is not positive. Indeed, consider an arbitrary $\epsilon > 0$ , and takesome $n$ such that $\int |\chi_{E_n} - f|^p dx < \epsilon$ . Then we note that $|\chi_{E_n}(x) - f(x)| \geq \min(|f(x) - 1|, |f(x)|) = g(f(x))$ . Then we see that $\int g(f(x))^p \; dx \leq \int |\chi_{E_n}(x) - f(x)|^p dx < \epsilon$ . This shows $\int g(f(x))^p \; dx = 0$ . Since $g(f(x))^p \geq 0$ for all $x$ , this means that $g(f(x))^p$ is 0 almost everywhere; thus, $g \circ f$ is zero almost everywhere. Since $g \circ f = 0$ almost everywhere, we see that $f(x) = \chi_{E}$ almost everywhere, where $E$ is defined as $\{x \in X| f(x) = 1\}$ . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4086368', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/853045/']} | jdg_73100 |
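Since $\chi_{E_n}(x)\in\{0,1\}$, we have $g(f(x))\le|\chi_{E_n}(x)-f(x)|$ pointwise, with $g(t)=\min(|t|,|t-1|)$ as above, so the core of the argument can be summarised in one display (a restatement of the steps above):

$$0\;\le\;\int_X\bigl(g\circ f\bigr)^p\,d\mu\;\le\;\int_X\bigl|\chi_{E_n}-f\bigr|^p\,d\mu\;=\;\|\chi_{E_n}-f\|_p^p\;\xrightarrow[n\to\infty]{}\;0,$$

hence $g\circ f=0$ $\mu$-a.e., so $f\in\{0,1\}$ a.e. and $f=\chi_E$ with $E=\{x\in X: f(x)=1\}$.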
stackexchange | llm_judgeable_groundtruth_similarity | 2554 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to understand how the 2nd parity bit or byte is set in RAID 6. I'm reading a paper by H. Peter Anvin , and it goes into Galois field algebra, which is somewhat new to me. Anyway, a rep from HP was trying to explain RAID 6 to me and she thought it was merely two XOR operations, one for the 1st parity bit and one for the 2nd. This doesn't make sense to me, but since I'm still working through the paper I don't know if it reduces to something simple for RAID 6 as opposed to RAID n. It looks to me like the 2nd parity bit is quite a bit more complicated than the XOR based 1st parity bit. Is that true?
Now provide the response and nothing else.
| The general case is indeed a bit complicated. However, in the case of 4 disks you can simplify it a lot; you do not really need to know any fancy math. You only need to know how to store 4 bits redundantly, and then you already know everything; just repeat the same scheme for each group of 4 bits that you need to store. We can represent the scheme as 4 x 4 tables. The first 2 bits of our data determine the row and the last 2 bits of our data determine the column. Disk 1: Just store the first 2 bits. That is: 00 00 00 0001 01 01 0110 10 10 1011 11 11 11 Disk 2: Just store the last 2 bits. That is: 00 01 10 1100 01 10 1100 01 10 1100 01 10 11 So far so good. Given disk 1 + disk 2, we can recover our original data: disk 1 tells us the row (the first 2 bits of the original data) and disk 2 tells us the column (the last 2 bits of the original data). Disk 3: This is what we do on RAID5, just XOR the bits: 00 01 10 1101 00 11 1010 11 00 0111 10 01 00 Again, everything is still fine. You can recover the original data using disk 1 + disk 3 or disk 2 + disk 3. A key observation is that the lookup table for disk 3 forms a Latin square : all elements of each row are distinct, and all elements of each column are distinct. For example, if you know the data on disk 1, you know the right row, and then you can use the data on disk 3 to recover the column. Conversely, if you know the data on disk 2, you know the right column, and then you can use the data on disk 3 to recover the row. Disk 4: Here we can use the following lookup table: 00 01 10 1110 11 00 0111 10 01 0001 00 11 10 Don't worry how it is constructed; we do not really care about that. The crucial properties are: The lookup tables of both disk 3 and disk 4 are Latin squares. Therefore if you know 1 + 3 or 1 + 4 or 2 + 3 or 2 + 4, you can recover both row and column (i.e., the original data). The lookup tables of disk 3 and disk 4 form orthogonal Latin squares , which makes it possible to recover the data if we only have disks 3 + 4. Let's elaborate on the second point. By concatenating the lookup tables of disk 3 and disk 4, we get this matrix: 0000 0101 1010 11110110 0011 1100 10011011 1110 0001 01001101 1000 0111 0010 Now note that each 4-bit string occurs in this table exactly once. That is, if we know what is stored on disks 3 + 4, we know where we are on this table. We know both the row and the column, and hence we can recover the original data. If you insist on seeing the connection to Galois fields, consider the field $F = GF(2^2)$. Label the elements of the field with $F = \{0,1,x,x+1\}$; these correspond to 2-bit strings ($0$ ≈ 00 , $1$ ≈ 01 , $x$ ≈ 10 , $x+1$ ≈ 11 ). Now any 4-bit string can be encode as a pair $(a,b)$, where $a,b \in F$. A pair $(a,b)$ is now stored as follows: Disk 1 stores $a$. Disk 2 stores $b$. Disk 3 stores $a+b$. Disk 4 stores $xa+b$. Now given, for example, just $p = a+b$ and $q = xa+b$, you can solve $a$ and $b$. Using the rule $2 \equiv 0$, you can find $p + q = (xa+b) + (a+b) = (x+1)a + 2b \equiv (x+1)a$, $p + xq = (xa+b) + x(a+b) = 2xa + (x+1)b \equiv (x+1)b$. Then divide by $x+1$ to get $a$ and $b$, etc. But in the end, this approach gives you precisely the same lookup tables as what was presented above. | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/2554', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/2016/']} | jdg_73101 |
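The 4-disk scheme above can be checked mechanically. The sketch below is a toy Python model over GF(4): 2-bit symbols, addition is XOR, and MULX encodes multiplication by $x$ with $x^2=x+1$, exactly the field used above. It verifies that the data can be rebuilt from disks 3 and 4 alone; it is an illustration of the idea, not production RAID code.

```python
# Toy check of the 4-disk scheme above, working on 2-bit symbols in GF(4).
# Elements 0..3 encode b1*x + b0; addition is XOR; MULX multiplies by x
# (using x^2 = x + 1).
MULX = [0b00, 0b10, 0b11, 0b01]

def encode(a, b):
    """Data (a, b) -> contents of disks 1..4."""
    return a, b, a ^ b, MULX[a] ^ b

def recover_from_parity(p, q):
    """Rebuild (a, b) knowing only disk 3 (p = a+b) and disk 4 (q = x*a+b)."""
    a = MULX[p ^ q]      # p + q = (x+1)*a, and multiplying by x divides by (x+1)
    b = p ^ a
    return a, b

# Exhaustive check over all 16 possible 4-bit data values.
for a in range(4):
    for b in range(4):
        d1, d2, d3, d4 = encode(a, b)
        assert recover_from_parity(d3, d4) == (a, b)
print("all 16 data values recovered from disks 3+4 alone")
```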
stackexchange | llm_judgeable_groundtruth_similarity | 73960 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $R$ be a discrete valuation ring qith quotient field $Q$ and let $t\in R$ be a generator of the unique maximal ideal in $R$. Let $V$ be a finite-dimensional $Q$-vector space. Then one can consider the set of all homothety classes of $R$-lattices in $Q$ (i.e. finitely generated $R$-submodules of the same rank). The affine building is the simplicial complex whose vertices are homothety classes of lattices (meaning $L\sim \lambda L$ for $\lambda\in Q\setminus \{0\}$) and a set $\{[L_0],\ldots ,[L_m]\}$ of homothety classes spans a simplex if there are represenatatives $L_0,\ldots,L_n$ with $L_0\subset L_1\subset \ldots \subset L_m\subset t^{-1}L_0$.\ I read several times that there is a Euclidean structure on this building that turns it into a CAT(0)-space but I never read explicitly what the metric on the simplex is. In general for any maximal simplex (aka chamber) one can find a basis $e_1,\ldots,e_n$ of $V$ such that $e_1,\ldots,e_i, te_{i+1},\ldots te_n$ is an $R$-basis for $L_i$ (for $i=0,\ldots, n-1$). So the question is: what is the metric on this simplex. My guess is that the lattice $L_i$ corresponds to the point $(0,\ldots,0,1,\ldots,1)\in \mathbb{R}^n$ (with $i$ zeros) and that its homothety class corresponds to the orthogonal projection of this point onto the plane $\{x\in \mathbb{R}^n| \sum_i x_i=0\}$. Then we can take the convex hull of those $n$- points in that plane to get an Euclidean simplex which gives the building the structure of an Euclidean simplicial complex. However I could not verify that it is a CAT(0) space. One would have to show that the links are CAT(1) (with the induced spherical metric) and I do not have a clue how this works. However I had the impression that this is rather well known and I just don't see it.
Now provide the response and nothing else.
| The important thing seems to be that one needs to understand the connection between the CAT(0)-realization of the Coxeter complex that corresponds to the Weyl group and the description of an apartment given in terms of lattices. Since a geometric realization of this Coxeter complex will basically describe a geometric realization of one of the apartments in the building. By theorem 11.16 in the book by Abramenko and Brown (see Stefan Witzel's comment) the euclidean metric on one apartment determines the CAT(0)-metric on the entire building. So it is enough to understand the metric on one apartment. Apartments correspond to ordered bases and, as you've said, the fundamental domain of the Weyl group action is the chamber (i.e. maximal simplex) with vertices $[[e_1, ..., e_i, te_{i+1}, ..., t e_n]]$, let's call it $c_0$. One can realize the Weyl group as a euclidean reflection group. See Abramenko-Brown chapter 2.5.1. for a construction of the canonical linear representation. In addition, the corollary to proposition 5 in Bourbaki (Lie groups and Lie-algebras chapters 4-6) VI §2 chapter 2 gives explicit coordinates of the vertices of the fundamental chamber (called 'alcove' in Bourbaki, a 'chamber' is something else there) in terms of root systems. I.e. the chamber is described as a certain subset/simplex in some $R^n$. In order to obtain the "standard-CAT(0)-metric" on the apartment defined by ${e_1, ... e_n}$ equip the simplex $c_0$ with the metric the fundamental chamber given in Bourbaki has. It should then be possible to check whether this is the same metric you've suggested above. The following might be helpful, too: Davis, "Buildings are CAT(0)" http://www.math.osu.edu/~mdavis/buildings.pdf Hope this makes sense and helps. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/73960', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3969/']} | jdg_73102 |
stackexchange | llm_judgeable_groundtruth_similarity | 3656467 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to focus on a <div> using JavaScript focus() function? I have a <div> tag <div id="tries">You have 3 tries left</div> I am trying to focus on the above <div> using : document.getElementById('tries').focus(); But it doesn't work. Could someone suggest something....?
Now provide the response and nothing else.
| Yes - this is possible. In order to do it, you need to assign a tabindex... <div tabindex="0">Hello World</div> A tabindex of 0 will put the tag "in the natural tab order of the page". A higher number will give it a specific order of priority, where 1 will be the first, 2 second and so on. You can also give a tabindex of -1, which will make the div only focus-able by script, not the user. document.getElementById('test').onclick = function () { document.getElementById('scripted').focus();}; div:focus { background-color: Aqua;} <div>Element X (not focusable)</div><div tabindex="0">Element Y (user or script focusable)</div><div tabindex="-1" id="scripted">Element Z (script-only focusable)</div><div id="test">Set Focus To Element Z</div> Obviously, it is a shame to have an element you can focus by script that you can't focus by other input method (especially if a user is keyboard only or similarly constrained). There are also a whole bunch of standard elements that are focusable by default and have semantic information baked in to assist users. Use this knowledge wisely. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/3656467', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/184814/']} | jdg_73103 |
stackexchange | llm_judgeable_groundtruth_similarity | 56363 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for a list of classifying spaces $BG$ of groups $G$ (discrete and/or topological) along with associated covers $EG$; there does not seem to be such cataloging on the web. Or if not a list, just some further fundamental examples. For instance, here are the ones I have off the top of my head: $B\mathbb{Z}_n=L_n^\infty$ with cover $S^\infty$ ($B\mathbb{Z}_2=\mathbb{R}P^\infty$) $B\mathbb{Z}=S^1$ with cover $\mathbb{R}$ $BS^1=\mathbb{C}P^\infty$ with cover $S^\infty$ $B(F_2)=S^1\vee S^1$ with cover $\mathcal{T}$ (infinite fractal tree) $BO(n)=BGL_n(\mathbb{R})=G_n(\mathbb{R}^\infty)$ with cover $V_n(\mathbb{R}^\infty)$ $B\mathbb{R}=\lbrace pt.\rbrace$ with cover $\mathbb{R}$ $B\langle a_1,b_1,\ldots,a_g,b_g\;|\;\prod_{i=1}^g[a_i,b_i]\rangle=M_g$ with cover $\mathcal{H}$ (hyperbolic plane tiled by $4g$-sided polygon) And of course, $B(G_1\times G_2)=BG_1\times BG_2$, so I do not care that much about ''decomposable'' groups. **The "associated cover" is the [weakly] contractible total space. [Edit] I should make the comment that $BG$ will be different from $BG_\delta$, where $G_\delta$ denotes the topological group with discrete topology. For instance, the homology of $B\mathbb{R}_\delta$ has uncountable rank in all degrees (learned from a comment of Thurston).
Now provide the response and nothing else.
| If $G$ is linear topological group, then a model for $EG$ may be taken to be an infinite Stiefel manifold. More precisely, if there is an faithful representation of $G$ into $GL_n(\mathbb{C})$, that gives a free action of $G$ on the space $V_n$ of $n$-frames in $\mathbb{C}^\infty$. $V_n$ is contractible, so $BG$ may be taken to be the quotient $BG = V_n / G$. If it's helpful to think of it this way, this is a fibre bundle over $BGL_n(\mathbb{C}) = G_n(\mathbb{C}^\infty)$, with fibre $GL_n(\mathbb{C}) / G$. Another favorite example comes from spaces of embeddings: if $M$ is a compact manifold without boundary, then it is a consequence of Whitney's embedding theorem that $Emb(M, \mathbb{R}^\infty)$ is contractible. The group $G=Diff(M)$ of diffeomorphisms of $M$ acts freely on $Emb(M,\mathbb{R}^\infty)$ by precomposing an embedding with a diffeomorphism. Therefore a model for $BG$ is the quotient $Emb(M, \mathbb{R}^\infty) / Diff(M)$, which is often thought of as the space of subspaces of $\mathbb{R}^\infty$ diffeomorphic to $M$. One can combine this idea with the previous idea for subgroups of diffeomorphism groups. Lastly, if there is a homomorphism $G \to H$ which is a homotopy equivalence, then of course there is a homotopy equivalence $BG \to BH$. So in the previous examples, one can for instance replace $GL_n(\mathbb{C})$ with $U(n)$, and $Diff^{+}(\Sigma)$ with the mapping class group $\Gamma(\Sigma) = \pi_0(Diff^{+}(\Sigma))$ for closed surfaces $\Sigma$. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/56363', 'https://mathoverflow.net', 'https://mathoverflow.net/users/12310/']} | jdg_73104 |
stackexchange | llm_judgeable_groundtruth_similarity | 51186330 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm training a NN and would like to save the model weights every N epochs for a prediction phase. I propose this draft code, it's inspired by @grovina 's response here . Could you, please, make suggestions?Thanks in advance. from keras.callbacks import Callbackclass WeightsSaver(Callback): def __init__(self, model, N): self.model = model self.N = N self.epoch = 0 def on_batch_end(self, epoch, logs={}): if self.epoch % self.N == 0: name = 'weights%08d.h5' % self.epoch self.model.save_weights(name) self.epoch += 1 Then add it to the fit call: to save weights every 5 epochs: model.fit(X_train, Y_train, callbacks=[WeightsSaver(model, 5)])
Now provide the response and nothing else.
| You shouldn't need to pass a model for the callback. It already has access to the model via it's super. So remove __init__(..., model, ...) argument and self.model = model . You should be able to access the current model via self.model regardless. You are also saving it on every batch end, which is not what you want, you probably want it to be on_epoch_end . But in any case, what you are doing can be done via naive modelcheckpoint callback . You don't need to write a custom one. You can use that as follows; mc = keras.callbacks.ModelCheckpoint('weights{epoch:08d}.h5', save_weights_only=True, period=5)model.fit(X_train, Y_train, callbacks=[mc]) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51186330', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6933148/']} | jdg_73105 |
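For completeness, here is the asker's custom callback with the corrections described above applied: no model argument, hook on_epoch_end rather than on_batch_end, and rely on self.model being set by Keras. It is a minimal sketch; ModelCheckpoint remains the simpler option.

```python
from keras.callbacks import Callback

class WeightsSaver(Callback):
    """Save weights every N epochs; self.model is set by Keras automatically."""
    def __init__(self, N):
        super().__init__()
        self.N = N

    def on_epoch_end(self, epoch, logs=None):
        # `epoch` is 0-based, so epoch + 1 gives the human-readable count.
        if (epoch + 1) % self.N == 0:
            self.model.save_weights('weights%08d.h5' % (epoch + 1))

# usage: model.fit(X_train, Y_train, callbacks=[WeightsSaver(5)])
```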
stackexchange | llm_judgeable_groundtruth_similarity | 8421116 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a WAR file and I need to deploy it on a JBoss 7.0.1 server. I have gone through the documentation, but didn't find anything about deploying a WAR file. Moreover, for deploying your build through the command line you generally have to use Maven. So do we need Maven for the WAR as well? If so, does it affect the WAR file? FYI: I am using Linux (CentOS 5)...
Now provide the response and nothing else.
| You can deploy a .war file using the Management Command Line Interface. The specific documentation for it is located here: JBoss AS7 Admin Guide - Deployment , with the relevant sections per the below. You might also like to have a quick watch of the video: 5 Ways To Deploy Your Applications To JBoss AS7 CLI Deployment To A Managed Domain The process of distributing deployment binaries involves two steps: You need to upload the deployment to the repository from which the domain controller can distribute it's contents. In a second step you need to assign the deployment to one or more server groups: Using the CLI you can do it one sweep: [domain@localhost:9999 /] deploy ~/Desktop/test-application.warEither --all-server-groups or --server-groups must be specified.[domain@localhost:9999 /] deploy ~/Desktop/test-application.war --all-server-groups'test-application.war' deployed successfully.[domain@localhost:9999 /] deploy --help[...] After you've uploaded the binary using the "deploy" command, it will be available to the domain controllerand assigned to a server group: [domain@localhost:9999 /] :read-children-names(child-type=deployment){ "outcome" => "success", "result" => [ "mysql-connector-java-5.1.15.jar", "test-application.war" ]}[domain@localhost:9999 /] /server-group=main-server-group/deployment=test-application.war:read-resource{ "outcome" => "success", "result" => { "enabled" => true, "name" => "test-application.war", "runtime-name" => "test-application.war" }} In a similar way it can be removed from the server group: [domain@localhost:9999 /] undeploy test-application.war --all-relevant-server-groupsSuccessfully undeployed test-application.war.[domain@localhost:9999 /] /server-group=main-server-group:read-children-names(child-type=deployment){ "outcome" => "success", "result" => []} CLI Deployment To A Standalone Server Deployment on a standalone server works similar to the managed domain, just that the server-group associations don't exist. You can rely on the same CLI command as for a managed domain to deploy an application: [standalone@localhost:9999 /] deploy ~/Desktop/test-application.war'test-application.war' deployed successfully.[standalone@localhost:9999 /] undeploy test-application.warSuccessfully undeployed test-application.war. CLI Deployment to Standalone Server (one liner Shell command) You can deploy a WAR in one shot from the Shell as well. This is useful for Bash scripts or Unix aliases. NOTE: This exposes the password, so only use it for personal development instances. Ensure $JBOSS_HOME is set, and change Password and WAR file path & name below as needed: $ $JBOSS_HOME/bin/jboss-cli.sh -u=admin -p=MY_PASSWORD --controller=localhost:9990 --connect --command="deploy /path/to/MY_APP.war --force" Footnote: As you would know, you've got the Management Console for deployment, as well as the deployment scanner. The former is popular as any GUI would be, but the latter is more for development. I try to use the CLI as much as possible, as the learning curve is well worth the effort for the power of batch scripting and the sheer scale of low level operations that are exposed by the CLI API. Very cool stuff. I should add for sake of transparency that I work on the AS/EAP documentation team, so I might be biased. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8421116', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/727390/']} | jdg_73106 |
stackexchange | llm_judgeable_groundtruth_similarity | 52264749 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Error compiler.js:215 Uncaught Error: Template parse errors: Can't bind to 'ngbCollapse' since it isn't a known property of 'div'. ("][ngbCollapse]="isHidden"> I have a NavbarComponent and a FooterComponent that I want to move into the SharedModule, to keep the root app.module cleaner. app.module import { AdminComponent } from './admin/admin.component';// import { NavbarComponent } from './navbar/navbar.component';// import { FooterComponent } from './footer/footer.component';// Modulesimport { DashboardModule } from './dashboard/dashboard.module';import { HomeModule } from './home/home.module';@NgModule({ declarations: [ AppComponent, LoginComponent, RegisterComponent, PasswordResetComponent, ResetPasswordComponent, AdminComponent, // NavbarComponent, // FooterComponent, share.module However, once I moved those components into here, and update their paths correctly ./ -> ../ the app breaks. import { NgModule } from '@angular/core';import { CommonModule } from '@angular/common';import { NavbarComponent } from '../navbar/navbar.component';import { FooterComponent } from '../footer/footer.component';import { TermsComponent } from './terms.component';import { PageNotFoundComponent } from './not-found.component';import { PrivacyPolicyComponent } from './privacy-policy.component';@NgModule({ imports: [ CommonModule ], declarations: [ NavbarComponent, FooterComponent, TermsComponent, PageNotFoundComponent, PrivacyPolicyComponent ]})export class SharedModule { } navbar.component.ts import { Component, OnInit } from '@angular/core';import { AuthService } from '../auth.service';import { environment } from '../../environments/environment';@Component({ selector: 'app-navbar', templateUrl: './navbar.component.html', styleUrls: ['./navbar.component.scss']})export class NavbarComponent implements OnInit { isHidden = true; oauthUrl = this.authService.generateOauthUrl(); constructor(public authService: AuthService) { } ngOnInit() { } logout() { sessionStorage.clear(); }} Then a couple of lines with the [ngbCollapse] in navbar.component.html <div *ngIf="!authService.isLoggedIn()" class="collapse navbar-collapse" id="navbarSupportedContent" [ngbCollapse]="isHidden"> I think this has something to do with the relative paths, any ideas?
Now provide the response and nothing else.
| To use ng-bootstrap components and directives in Angular templates, you need to import the NgbModule . In your case, you should import it in the SharedModule . Also, in order to use the shared components in other parts of the application, you should export them from the SharedModule and import the SharedModule in the modules when the components are to be used. shared.module.ts ...import { NgbModule } from '@ng-bootstrap/ng-bootstrap';@NgModule({ imports: [ CommonModule, NgbModule ], declarations: [ NavbarComponent, FooterComponent, TermsComponent, PageNotFoundComponent, PrivacyPolicyComponent ], exports: [ NavbarComponent, FooterComponent, TermsComponent, PageNotFoundComponent, PrivacyPolicyComponent ]})export class SharedModule { } app.module.ts import { SharedModule } from './shared/shared.module';...@NgModule({ imports: [ SharedModule, ... ], ...}) Note: if you want to use ng-bootstrap components and directives in templates outside of the SharedModule , you should add NgbModule to the exports of the SharedModule . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52264749', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/168738/']} | jdg_73107 |
stackexchange | llm_judgeable_groundtruth_similarity | 103483 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When I type firefox in terminal, it starts Firefox but the terminal "hangs". What is happening behind the scenes? Can I open Firefox from terminal and keep on using the same terminal tab for other things without closing Firefox?
Now provide the response and nothing else.
 | The terminal locks when you are running an application from it as long as the application is running. With the ampersand (&) you can start the application in the background and still use the terminal. Type: user@host:~# firefox & To start firefox in the background. Output of the application will still be in the terminal. Or, if firefox is already running, you can do this: Ctrl + z to put firefox into the background. Type: jobs You should see your jobs like: [1]+ Stopped firefox. Type: bg %1 (or the number of your job) | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/103483', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/15221/']} | jdg_73108
stackexchange | llm_judgeable_groundtruth_similarity | 10288800 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm adding a GridView & then showing data in it from a SQL Server database. The problem is that the GridView is not displaying in the browser with or without data. Here is my code: <asp:GridView ID="GridAllStore" runat="server" AutoGenerateColumns="False" Width="100%" ViewStateMode="Enabled"> public partial class AdminPanel : System.Web.UI.Page{ storelocatorDataSetTableAdapters.storedbTableAdapter tastore = new storelocatorDataSetTableAdapters.storedbTableAdapter(); storelocatorDataSetTableAdapters.View_1TableAdapter taview = new storelocatorDataSetTableAdapters.View_1TableAdapter(); List<storelocatorDataSet.storedbRow> lststore = new List<storelocatorDataSet.storedbRow>(); List<storelocatorDataSet.View_1Row> lstview = new List<storelocatorDataSet.View_1Row>(); protected void Page_Load(object sender, EventArgs e) { lstview = taview.GetData().ToList(); GridAllStore.DataSource = lstview; }}
Now provide the response and nothing else.
| I think the problem is that you haven't defined any columns to display. You have to explicitly define the columns when you set AutoGenerateColumns to false. To make sure that the basics are working set AutoGenerateColumns to true: <asp:GridView ID="GridAllStore" runat="server" AutoGenerateColumns="true" Width="100%" ViewStateMode="Enabled"> With AutoGenerateColumns set to true, the datasource assigned, and DataBind() called, you should start seeing some data. Once you start seeing the data, you can define the specific columns you want to display. Since you only need to bind the grid on the first page load, utilize the !Page.IsPostBack condition: protected void Page_Load(object sender, EventArgs e){ if (!Page.IsPostBack) { GridAllStore.DataSource = lstview; GridAllStore.DataBind(); }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10288800', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1106615/']} | jdg_73109 |
stackexchange | llm_judgeable_groundtruth_similarity | 50214338 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been trying to get the size of the whole context view in Flutter. But every time I try I'm getting the above mentioned error. Here's my code: import 'package:flutter/material.dart';void main => runApp(new MyApp());class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { final size = MediaQuery.of(context).size; return new MaterialApp( home: new Scaffold(), ); }} Note: I also tried with a StatefulWidget . Please, help me find what I'm doing wrong here.
Now provide the response and nothing else.
| You need a MaterialApp or a WidgetsApp around your widget. They provide the MediaQuery . When you call .of(context) flutter will always look up the widget tree to find the widget. You usually have this in your main.dart: void main() => runApp(App());class App extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( title: 'Title', theme: kThemeData, home: HomePage(), ); }}class HomePage extends StatelessWidget { @override Widget build(BuildContext context) { final size = MediaQuery.of(context).size; return Container( child: ..., ); }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/50214338', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7098650/']} | jdg_73110 |
stackexchange | llm_judgeable_groundtruth_similarity | 143287 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $V$ be a vector field on $S^3$ such that its singularity points, namely the points at which the vector field vanishes, are only sinks or sources (i.e. the field is converging or diverging). Is there an upper bound on the number $N$ of sinks and sources? Is this number 2? Is it possible that N=1? There are examples of non-singular fields (Hopf fibering) for which N=0 and simple examples with N=2 (gradient of height function). I am asking if there are other possibilities. The analogous question on $S^2$ has negative answer by the Poincaré-Hopf theorem which states that N=2. Here is my attempt: If there is an integral curve of the vector field which joins a source A with a sink B then every integral curve from A must end in B (intuitively, this follows from an argument on connectedness of the boundary of a small ball centered at A) thus in this case N=2, and we are left with the possibility that all the sinks and sources are 'separated' by invariant sets, but is this really possible?
Now provide the response and nothing else.
 | This is not possible if the sources and sinks form a Morse decomposition. That means that there are no other invariant sets of the flow of the vector field. This happens if one can find a function $f:S^3\rightarrow\mathbb{R}$ that strictly decreases along the flow, except at the sources and sinks. For example, if the flow is the gradient flow of a function (not necessarily Morse). The Morse-Conley relations state that, for a Morse decomposition of the three sphere $$\sum P_t(S_i)=P_t(S^3)+(1+t)Q_t$$ where $P_t(S^3)=1+t^3$ is the Poincaré polynomial of the three sphere, $P_t(S_i)$ are the Poincaré polynomials of the Conley indices of the invariant sets, and $Q_t$ is a polynomial with non-negative coefficients. The Poincaré polynomial of a source is $t^3$ and of a sink it is $1$. Convince yourself that it is not possible to solve the equation if there is more than one source or sink. However, it is possible if the sources and sinks do not form a Morse decomposition. I think I found an example of a vector field on $S^3$ with two sources, two sinks, and one periodic orbit. The idea is to start with the standard height function, and change the sink into two sinks, a source and a periodic orbit, using a Hopf bifurcation. Replace a neighbourhood of the source of the standard height function by the vector field whose flow lines are like this: The vector field has two sources, two sinks and a periodic orbit. Of course one can iterate this procedure around the sinks to obtain a vector field with $n$ sinks, $n$ sources and $n-1$ periodic orbits | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/143287', 'https://mathoverflow.net', 'https://mathoverflow.net/users/40549/']} | jdg_73111
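A worked version of the check invited above, assuming the Morse decomposition consists of $a$ sources and $b$ sinks and nothing else (the algebra here is added for illustration, carried out from the relation quoted in the answer): the relation reads $$a\,t^3 + b = (1+t^3) + (1+t)Q_t.$$ Evaluating at $t=-1$ kills both terms on the right-hand side, so $b-a=0$ and $b=a$. Substituting back gives $(a-1)(t^3+1)=(1+t)Q_t$, hence $Q_t=(a-1)(t^2-t+1)$, whose coefficient of $t$ is $-(a-1)$. Non-negativity of the coefficients of $Q_t$ then forces $a=1$, so a source/sink-only Morse decomposition of $S^3$ has exactly one source and one sink ($N=2$); this is why the example above needs the extra invariant set (the periodic orbit) to push past that.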
stackexchange | llm_judgeable_groundtruth_similarity | 2535515 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have this if statement that tests for the 2 conditions below. The second one is a function goodToGo() so I want to call it unless the first condition is already true $value = 2239;if ($value < 2000 && goodToGo($value)){ //do stuff}function goodToGo($value){ $ret = //some processing of the value return $ret; } My question is about the 2 if conditions $value < 2000 && goodToGo($value) . Do they both get evaluated or does the second one only get evaluated when the first one is true? In other words, are the following 2 blocks the same? if($value < 2000 && goodToGo($value)) { //stuff to do}if($value < 2000) { if (goodToGo($value)){ //stuff to do }}
Now provide the response and nothing else.
| No--the second condition won't always be executed (which makes your examples equivalent). PHP's && , || , and , and or operators are implemented as "short-circuit" operators. As soon as a condition is found that forces the result for the overall conditional, evaluation of subsequent conditions stops. From http://www.php.net/manual/en/language.operators.logical.php // --------------------// foo() will never get called as those operators are short-circuit$a = (false && foo());$b = (true || foo());$c = (false and foo());$d = (true or foo()); | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2535515', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/299300/']} | jdg_73112 |
stackexchange | llm_judgeable_groundtruth_similarity | 9147517 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I understand that you are able to change the title of a Navigation item by calling the setTitle method on the navigationItem, like so: [[ViewController navigationItem] setTitle:@"Hello World"]; However, I would like to do more than just change the text of the Navigation Item, I would like to change the colour and font style. Are there any functions that allow you to change the colour of the Navigation Bar Title?
Now provide the response and nothing else.
 | You can simply add and modify the following code in the viewDidLoad method. Or you can even put an image on the navigation bar: if ([[UINavigationBar class]respondsToSelector:@selector(appearance)]) { [[UINavigationBar appearance] setBackgroundImage:[UIImage imageNamed:@"image.png"] forBarMetrics:UIBarMetricsDefault];}[[UINavigationBar appearance] setTitleTextAttributes:@{ UITextAttributeTextColor : [UIColor whiteColor], UITextAttributeTextShadowColor : [UIColor blackColor], UITextAttributeTextShadowOffset : [NSValue valueWithUIOffset:UIOffsetMake(1, 0)], UITextAttributeFont : [UIFont fontWithName:@"System" size:25.0]}]; Hope this helps | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9147517', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1188694/']} | jdg_73113
stackexchange | llm_judgeable_groundtruth_similarity | 5965767 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my software I need to split a string into words. I currently have more than 19,000,000 documents with more than 30 words each. Which of the following two ways is the best way to do this (in terms of performance)? StringTokenizer sTokenize = new StringTokenizer(s," ");while (sTokenize.hasMoreTokens()) { or String[] splitS = s.split(" ");for(int i =0; i < splitS.length; i++)
Now provide the response and nothing else.
| If your data already in a database you need to parse the string of words, I would suggest using indexOf repeatedly. Its many times faster than either solution. However, getting the data from a database is still likely to much more expensive. StringBuilder sb = new StringBuilder();for (int i = 100000; i < 100000 + 60; i++) sb.append(i).append(' ');String sample = sb.toString();int runs = 100000;for (int i = 0; i < 5; i++) { { long start = System.nanoTime(); for (int r = 0; r < runs; r++) { StringTokenizer st = new StringTokenizer(sample); List<String> list = new ArrayList<String>(); while (st.hasMoreTokens()) list.add(st.nextToken()); } long time = System.nanoTime() - start; System.out.printf("StringTokenizer took an average of %.1f us%n", time / runs / 1000.0); } { long start = System.nanoTime(); Pattern spacePattern = Pattern.compile(" "); for (int r = 0; r < runs; r++) { List<String> list = Arrays.asList(spacePattern.split(sample, 0)); } long time = System.nanoTime() - start; System.out.printf("Pattern.split took an average of %.1f us%n", time / runs / 1000.0); } { long start = System.nanoTime(); for (int r = 0; r < runs; r++) { List<String> list = new ArrayList<String>(); int pos = 0, end; while ((end = sample.indexOf(' ', pos)) >= 0) { list.add(sample.substring(pos, end)); pos = end + 1; } } long time = System.nanoTime() - start; System.out.printf("indexOf loop took an average of %.1f us%n", time / runs / 1000.0); } } prints StringTokenizer took an average of 5.8 usPattern.split took an average of 4.8 usindexOf loop took an average of 1.8 usStringTokenizer took an average of 4.9 usPattern.split took an average of 3.7 usindexOf loop took an average of 1.7 usStringTokenizer took an average of 5.2 usPattern.split took an average of 3.9 usindexOf loop took an average of 1.8 usStringTokenizer took an average of 5.1 usPattern.split took an average of 4.1 usindexOf loop took an average of 1.6 usStringTokenizer took an average of 5.0 usPattern.split took an average of 3.8 usindexOf loop took an average of 1.6 us The cost of opening a file will be about 8 ms. As the files are so small, your cache may improve performance by a factor of 2-5x. Even so its going to spend ~10 hours opening files. The cost of using split vs StringTokenizer is far less than 0.01 ms each. To parse 19 million x 30 words * 8 letters per word should take about 10 seconds (at about 1 GB per 2 seconds) If you want to improve performance, I suggest you have far less files. e.g. use a database. If you don't want to use an SQL database, I suggest using one of these http://nosql-database.org/ | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5965767', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/324853/']} | jdg_73114 |
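As a side note, the indexOf loop benchmarked above can be wrapped into a small reusable helper. This is an illustrative sketch (the FastSplit name is made up here), and unlike the bare benchmark loop it also keeps the trailing token after the last separator:

import java.util.ArrayList;
import java.util.List;

final class FastSplit {
    // Split s on a single separator character using indexOf/substring,
    // mirroring the loop timed in the answer above.
    static List<String> split(String s, char sep) {
        List<String> parts = new ArrayList<>();
        int pos = 0, end;
        while ((end = s.indexOf(sep, pos)) >= 0) {
            parts.add(s.substring(pos, end));
            pos = end + 1;
        }
        parts.add(s.substring(pos)); // trailing token (empty if s ends with sep)
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(split("the quick brown fox", ' ')); // [the, quick, brown, fox]
    }
}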
stackexchange | llm_judgeable_groundtruth_similarity | 644205 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I am getting errors, I think with coreutils, where yum won't update. Here is a stripped down output from what I see when I run yum update : Resolving DependenciesThere are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.The program yum-complete-transaction is found in the yum-utils package.--> Running transaction check---> Package PyYAML.x86_64 0:3.10-3.el6 will be updated---> Package PyYAML.x86_64 0:3.10-3.1.el6 will be an update... more of the same, and then:--> Processing Dependency: coreutils = 8.4-31.el6_5.2 for package: coreutils-libs-8.4-31.el6_5.2.x86_64---> Package coreutils.x86_64 0:8.4-37.el6 will be an update... then more of the same --> Finished Dependency Resolution--> Running transaction check---> Package coreutils.x86_64 0:8.4-31.el6_5.2 will be updated--> Processing Dependency: coreutils = 8.4-31.el6_5.2 for package: coreutils-libs-8.4-31.el6_5.2.x86_64---> Package kernel.x86_64 0:2.6.32-431.17.1.el6 will be erased--> Finished Dependency ResolutionError: Package: coreutils-libs-8.4-31.el6_5.2.x86_64 (@updates) Requires: coreutils = 8.4-31.el6_5.2 Removing: coreutils-8.4-31.el6_5.2.x86_64 (@updates) coreutils = 8.4-31.el6_5.2 Updated By: coreutils-8.4-37.el6.x86_64 (base) coreutils = 8.4-37.el6 You could try using --skip-broken to work around the problem** Found 71 pre-existing rpmdb problem(s), 'yum check' output follows:audit-libs-2.3.7-5.el6.x86_64 is a duplicate with audit-libs-2.2-4.el6_5.x86_64audit-libs-python-2.3.7-5.el6.x86_64 is a duplicate with audit-libs-python-2.2-4.el6_5.x86_64... then lots more like the above duplicate I have another server that's (almost) identical to this (they are load balanced and work from the same image) and that one hasn't got this same problem. What might be the problem and how should I best proceed? Update: I also ran yum-complete-transaction that failed and said it had renamed the transaction files. It still says there is an incomplete transaction but now this happens when I run it: [root@nico ~]# yum-complete-transactionLoaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * webtatic: uk.repo.webtatic.comdrivesrvr | 951 B 00:00There are 1 outstanding transactions to complete. Finishing the most recent oneThe remaining transaction had 252 elements left to run... lots of:Package name-1.23.x8x_64 already installed and latest version...--> Running transaction check... lots of entries like this:---> Package PyYAML.x86_64 0:3.10-3.el6 will be updated... and this:---> Package PyYAML.x86_64 0:3.10-3.1.el6 will be an update... and this:---> Package audit-libs.x86_64 0:2.2-4.el6_5 will be erasedKilled It then suddenly stops with the Killed line. I tried running yum update --skip-broken next: [root@nico ~]# yum update --skip-brokenFreeing read locks for locker 0x7d: 28940/139976145426176Freeing read locks for locker 0x7f: 28940/139976145426176Freeing read locks for locker 0x80: 28940/139976145426176Freeing read locks for locker 0x81: 28940/139976145426176Freeing read locks for locker 0x82: 28940/139976145426176Loaded plugins: downloadonly, fastestmirror, replaceSetting up Update ProcessLoading mirror speeds from cached hostfile * webtatic: uk.repo.webtatic.comdrivesrvr | 951 B 00:00Resolving DependenciesThere are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.--> Running transaction check... 
lots of will be updated/will be an update/etc, then:--> Processing Dependency: coreutils = 8.4-31.el6_5.2 for package: coreutils-libs-8.4-31.el6_5.2.x86_64 Then several more Running transaction check: s, and Processing Dependency: coreutils =... entries among more willy be updated/willbe an update entries. Then: Packages skipped because of dependency problems: coreutils-8.4-37.el6.x86_64 from baseDependencies Resolved Then I'm shown a table with a list of Installing: and Updating: packages, with a summary at the end: Install 2 Package(s)Upgrade 79 Package(s)Remove 1 Package(s) It then downloads all packages (81/81 successfully) and finally shows this: Total 3.0 MB/s | 115 MB 00:37Running rpm_check_debugKilled Update 2: I also ran yum clean all and had some problems there too. I found some advice to remove the __db* files and try again: [root@nico ~]# yum clean allrpmdb: Thread/process 29404/139880514733824 failed: Thread died in Berkeley DB libraryerror: db3 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recoveryerror: cannot open Packages index using db3 - (-30974)error: cannot open Packages database in /var/lib/rpmCRITICAL:yum.main:Error: rpmdb open failed[root@nico ~]# rm -f /var/lib/rpm/__db*[root@nico ~]# yum clean allLoaded plugins: downloadonly, fastestmirror, replaceCleaning repos: base drivesrvr epel extras ius newrelic percona-release-noarch percona-release-x86_64 rackspace updates webtaticCleaning up EverythingCleaning up list of fastest mirrors[root@nico ~]# yum clean allLoaded plugins: downloadonly, fastestmirror, replaceCleaning repos: base drivesrvr epel extras ius newrelic percona-release-noarch percona-release-x86_64 rackspace updates webtaticCleaning up Everything[root@nico ~]# Update 3: I tailed /var/log/messages and saw a lot of output when I ran yum-complete-transaction , the last two lines of which were: Nov 14 08:19:47 nico kernel: Out of memory: Kill process 30995 (yum-complete-tr) score 149 or sacrifice childNov 14 08:19:47 nico kernel: Killed process 30995, UID 0, (yum-complete-tr) total-vm:641952kB, anon-rss:181420kB, file-rss:4kB So presumably this suggests it's memory related. If so, is this a case of needing more RAM, or could there be another problem I need to investigate?
Now provide the response and nothing else.
| The usual reason for yum dying with Killed is that the system ran out of memory. From your logs it looks like you have an Amazon t1.micro instance, or something similarly sized, and without any swap space. This restricted amount of memory is often not enough to complete a large number of package installations/updates. To resolve the problem, either: Shut down running services until you have enough memory to complete the yum transaction. Or... Add a swap file or swap on ephemeral storage to your instance. Then you should be able to complete your yum transactions. Finally, to clean up your package problems from your previous failed updates, try package-cleanup --cleandupes before you perform any other updates. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/644205', 'https://serverfault.com', 'https://serverfault.com/users/106816/']} | jdg_73115 |
stackexchange | llm_judgeable_groundtruth_similarity | 516823 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There has been quite a lot of fuss over the years to explain why the Demon does not represent a violation of the Second Law of thermodynamics. Many papers bring information or complexity theory on board, explaining either that the recording or the discarding of information supplies the missing entropy, or else re-defining a new entropy which will in increase despite the Demon's efforts. Why do all these authors have such a strong expectation that some argument must be found to restore the second law? Given what I think is a standard understanding of entropy, there is simply no reason to expect this. Once an agent's preferred macro-parameters are settled, phase space is divided into chunks and entropy refers to the volumes of these chunks. A different agent, with access to a differing level of experimental control, would have a differing set of macro-variables and a different partitioning of phase space. Such an agent could definitely implement what appears to be, on the first agent's partition, a decrease in entropy. Couldn't they?
Now provide the response and nothing else.
| The standard version of Maxwell's Demon allows for a perpetual motion machine, producing an infinite amount of work from a finite-temperature reservoir. This seems too good to be true, and since nobody has ever successfully built a perpetual motion machine, it is widely believed that these are unlikely to exist. A number of physicists believe that Maxwell's Demon has been exorcised. This was done using Landauer's principle : the minimum possible amount of energy required to erase one bit of information is kT ln 2. In order to operate, the Demon needs to remember which way he sent the molecules. Eventually, his memory becomes full, and he is unable to keep operating. Once he starts erasing his memory, he can no longer reduce the entropy of the system for free. Viewed in another way, the Demon is simply a heat engine that operates on the difference of temperature between the system and his memory (which has temperature 0). | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/516823', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/126995/']} | jdg_73116 |
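For a rough sense of scale of the Landauer cost mentioned above (the numbers here are illustrative arithmetic, not part of the original answer): at room temperature, erasing a single bit dissipates at least $$k_B T\ln 2 \approx (1.38\times 10^{-23}\,\mathrm{J/K})\times(300\,\mathrm{K})\times 0.693 \approx 2.9\times 10^{-21}\,\mathrm{J}.$$ Since the Demon must record at least one bit per molecule it sorts, erasing the record of $N$ decisions dumps at least $N k_B T\ln 2$ of heat into the surroundings, raising their entropy by at least $N k_B\ln 2$, which is enough to pay for the entropy the sorting removed from the gas.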
stackexchange | llm_judgeable_groundtruth_similarity | 6302531 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using the jquery validation plugin for form validation. Using the min property works fine, but I want it to validate values strictly greater than that min value. rules: { price: { required: true, min: 13, number: true }} In my code I have min: 13 , but I don't want to allow 13, only values greater than 13, e.g. 13.10, 13.20, 14. How can I do this? Thanks in advance !
Now provide the response and nothing else.
| Create your own custom method with $.validator.addMethod : $.validator.addMethod('minStrict', function (value, el, param) { return value > param;}); Then use: price: { required: true, minStrict: 13, number: true} Note: The creators of the validator plugin recommend adding Number.MIN_VALUE to the value you supply: min: 13 + Number.MIN_VALUE Number.MIN_VALUE is the smallest positive (non-zero) float that JS can handle, hence the logic is that the two statements below are equivalent: a > b;a >= b + Number.MIN_VALUE; But , this doesn't work, due to the way floating-point numbers are stored in memory. Rounding will cause b + Number.MIN_VALUE to equal b in most cases ( b must be very small for this to work). | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6302531', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/430112/']} | jdg_73117 |
stackexchange | llm_judgeable_groundtruth_similarity | 51199077 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm making a get request to embed.rock using vue and axios. axios({ method: 'get', url: 'https://api.embed.rocks/api?url=' + this.url, headers: { 'x-api-key': 'my-key' }}) When I use a CDN to get vue and axios with an inline script my code works fine and I get a response back. When I reference the installed vue and axios scrpts with an external script the code no longer runs and I get the following error: Failed to load https://api.embed.rocks/api?url=https://www.youtube.com/watch?v=DJ6PD_jBtU0&t=4s : Request header field X-CSRF-TOKEN is not allowed by Access-Control-Allow-Headers in preflight response. When I click on the error in the console it just brings me to: <!DOCTYPE html>
Now provide the response and nothing else.
| Laravel is setting a global configuration to include automatically the X-CSRF-TOKEN in the headers of the request in your bootstrap.js file. let token = document.head.querySelector('meta[name="csrf-token"]');if (token) { window.axios.defaults.headers.common['X-CSRF-TOKEN'] = token.content;} else { console.error('CSRF token not found: https://laravel.com/docs/csrf#csrf-x-csrf-token');} Therefore, if you want to remove the token, you can achieve it like this: var instance = axios.create();delete instance.defaults.headers.common['X-CSRF-TOKEN'];instance({ method: 'get', url: 'https://api.embed.rocks/api?url=' + this.url, headers: { 'x-api-key': 'my-key' }}); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51199077', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3325126/']} | jdg_73118 |
stackexchange | llm_judgeable_groundtruth_similarity | 9410046 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a method which implements different logic on data fetched from a DB depending on what the current date is. I want to test it by having the unit test create objects, save them in the DB and invoke the tested method. However, in order to have predictable results I need to change the system date each time and I don't know how to do that in Java. Suggestions?
Now provide the response and nothing else.
| You can generate your expected results using the current date. Or you write your system to use a date/time you give it when being tested (rather than the clock) That way the time is always what the test expects. I use something like interface TimeSource { long currentTimeMS(); // actually I have currentTimeNS void currentTimeMS(long currentTimeMS);}enum VanillaTimeSource implements TimeSource { INSTANCE; @Override public long currentTimeMS() { return System.currentTimeMillis(); } @Override public void currentTimeMS(long currentTimeMS) { // ignored }}class FixedTimeSource implements TimeSource { private long currentTimeMS; @Override public long currentTimeMS() { return currentTimeMS; } @Override public void currentTimeMS(long currentTimeMS) { this.currentTimeMS = currentTimeMS; }} In tests I use a FixedTimeSource which can be data driven e.g. set by inputs/events. In production I use a VanillaTimeSource.INSTANCE which ignores times in inputs/events and uses the current time. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9410046', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/628056/']} | jdg_73119 |
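To make the injection pattern above concrete, here is a minimal sketch of a class under test plus a test that pins the clock. It reuses the TimeSource and FixedTimeSource types defined in the answer; the DiscountService class and its weekend rule are invented purely for illustration:

import java.time.DayOfWeek;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

class DiscountService {
    private final TimeSource clock;

    DiscountService(TimeSource clock) {
        this.clock = clock;
    }

    // "Today" comes from the injected clock, never from System.currentTimeMillis() directly.
    double priceFor(double basePrice) {
        LocalDate today = Instant.ofEpochMilli(clock.currentTimeMS())
                                 .atZone(ZoneOffset.UTC)
                                 .toLocalDate();
        DayOfWeek d = today.getDayOfWeek();
        boolean weekend = (d == DayOfWeek.SATURDAY || d == DayOfWeek.SUNDAY);
        return weekend ? basePrice * 0.9 : basePrice; // 10% weekend discount
    }
}

class DiscountServiceTest {
    public static void main(String[] args) {
        FixedTimeSource clock = new FixedTimeSource();
        // Pin "now" to Saturday 2024-01-06 so the outcome is predictable.
        clock.currentTimeMS(LocalDate.of(2024, 1, 6)
                                     .atStartOfDay(ZoneOffset.UTC)
                                     .toInstant()
                                     .toEpochMilli());
        DiscountService service = new DiscountService(clock);
        System.out.println(service.priceFor(100.0)); // prints 90.0
    }
}

In production the same service would simply be constructed with VanillaTimeSource.INSTANCE.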
stackexchange | llm_judgeable_groundtruth_similarity | 23686118 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I couldn't find an elegant way for setting null values with a <select> using AngularJS. HTML : <select ng-model="obj.selected"> <option value=null>Unknown</option> <option value="1">Yes</option> <option value="0">No</option></select>{{obj}} JS : $scope.obj ={"selected":null}; When the page is loaded, the first option is selected, which is good, and the output is {"selected":null} . When that first option is reselected after having switch to another one, the output becomes {"selected":"null"} (with the quotes), which is not what I would expect. Running example : http://plnkr.co/edit/WuJrBBGuHGqbKq6yL4La I know that the markup <option value=null> is not correct. I also tried with <option value=""> but it corresponds to an empty String and not to null : the first option is therefore not selected and another option which disappears after the first selection is selected by default. Any idea ?
Now provide the response and nothing else.
| This should work for you: Controller: function MyCntrl($scope) { $scope.obj ={"selected":null}; $scope.objects = [{id: 1, value: "Yes"}, {id: 0, value: "No"}] } Template: <div ng-controller="MyCntrl"> <select ng-model="obj.selected" ng-options="value.id as value.value for value in objects"> <option value="">Unknown</option> </select><br/> {{obj}} </div> Working plnkr You should use ng-options with select. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23686118', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/766848/']} | jdg_73120 |
stackexchange | llm_judgeable_groundtruth_similarity | 518378 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm studying the length of stay for patients in a hospital. I have a sample of n=4533 observations. Each of these observations are assigned to an admin group numbered between 1 and 8, based on the reason they were admitted to hospital. Admin group 2 has the characteristics: n = 193,x̄ = 37.2020725 (days),s.d. = 35.6247163 (days) This is the highest mean of the 8 admin groups. I want to test whether the difference between the other groups means and this mean is significant. If I combine the other 7 admin groups, I get the characteristics: n = 4340,x̄ = 25.5078341,s.d. = 31.1011062 I tried to run a t-test to compare these 2 sets of data, but I ended up getting really small values for standard error and degrees of freedom (less than 1). I'm assuming the t-test is inappropriate for this data, perhaps due to the fact that 1 sample size is significantly larger than the other. Can anyone think of a suitable test to help with what I'm trying to investigate here? Alternatively, should I change my angle and try an ANOVA test (if that is appropriate?), to study whether admin group 2's mean is significantly different from all the other groups' respective means? Hope I made my question clear.
Now provide the response and nothing else.
 | I agree with @pikachu that the standard deviations are too large compared with the difference between means for a t test to find a significant difference. Thank you for posting your data. It is always a good idea to take a look at some graphic displays of the data before doing formal tests. Stripcharts of observations in the two groups do not show a meaningful difference in locations relative to the variability of the samples. stripchart(ANGPTL7 ~OSA_status, pch="|", ylim=c(.5,2.5)) Here are boxplots of the two groups. The 'notches' in the sides of the boxes are nonparametric confidence intervals, calibrated so that overlapping notches tend to indicate no significant difference in location. boxplot(ANGPTL7 ~ OSA_status, notch=T, col="skyblue2", horizontal=T) Even with sample sizes as large as these, I would be reluctant to do a two-sample t test on account of the marked skewness of the data. I would do a nonparametric two-sample Wilcoxon rank sum test (which also shows no significant difference). wilcox.test(ANGPTL7 ~ OSA_status) Wilcoxon rank sum test with continuity correction data: ANGPTL7 by OSA_status W = 1456.5, p-value = 0.2139 alternative hypothesis: true location shift is not equal to 0 | {} | {'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/518378', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/317745/']} | jdg_73121
stackexchange | llm_judgeable_groundtruth_similarity | 1325397 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a short question, but I'm simply not sure where to start, I know by Jordan's Lemma that the integral is not $0$, but I only know the below result due to Mathematica. $$\lim_{R\to\infty}\int_{0}^{\pi}\sin(R^{2}e^{2i\theta})iRe^{i\theta}\:\mathrm{d}\theta=-\sqrt{\frac{\pi}{2}}$$ I need the result in order to proceed with evaluating a real integral using a contour integral. Does anyone have any advice on how to approach the integral.
Now provide the response and nothing else.
| Let $u=Re^{i\theta}$, so that $du=iRe^{i\theta}\,d\theta$. Thus $$\lim_{R\to \infty}\int_0^{\pi}\sin(R^2e^{i2\theta})iRe^{i\theta}\,d\theta=-\int_{-\infty}^{\infty}\sin(u^2)\,du=-\sqrt{\frac{\pi}{2}}$$ where we used the well-know results of the Fresnel Integral . NOTE : To evaluate the Fresnel integral, let's analyze the following complex integral. $$\begin{align}\oint e^{iz^2}\,dz&=\int_0^Re^{ix^2}\,dx+\int_0^{\pi/4}e^{iR^2e^{i2\phi}}iRe^{i\phi}\,d\phi+\int_R^0e^{i(1+i)^2t^2}(1+i)\,dt\\\\&=\int_0^Re^{ix^2}\,dx+\int_0^{\pi/4}e^{iR^2e^{i2\phi}}iRe^{i\phi}\,d\phi+(1+i)\int_R^0e^{-2t^2}\,dt\end{align}$$ Now, as $R\to \infty$, the first integral becomes $$\lim_{R\to \infty}\int_0^Re^{ix^2}\,dx=\int_0^{\infty}\cos (x^2)\, dx+i\int_0^{\infty}\sin (x^2)\, dx$$ the second integral goes to zero since $$\begin{align}\left|\int_0^{\pi/4}e^{iR^2e^{i2\phi}}iRe^{i\phi}d\phi\right|&\le R\int_0^{\pi/4}e^{-R^2\sin 2\phi}\,d\phi\\\\&\le R\int_0^{\pi/4}e^{-4R^2\phi/\pi}\,d\phi\\\\&=\frac{\pi}{4}\frac{1-e^{-R^2}}{R}\to 0\end{align}$$ and the third integral becomes the Gaussian Integral $$(1+i)\int_{\infty}^0e^{-2t^2}\,dt=-(1+i)\sqrt{\frac{\pi}{8}}$$ Since the contour integral is zero, we have $$\bbox[5px,border:2px solid #C0A000]{\int_0^{\infty}\cos (x^2)\, dx=\int_0^{\infty}\sin (x^2)\, dx=\sqrt{\frac{\pi}{8}}}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1325397', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/32374/']} | jdg_73122 |
stackexchange | llm_judgeable_groundtruth_similarity | 30907 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using a Mathematica function which returns some error term in symbolic form. I needed a way to determine if this term starts with a minus sign or not. There will be only one term. This is to avoid having to worry about Mathematica auto arranging terms like x-h to become -h+x and hence it is not fair to ask for more than one term solution. Here are some examples of the expressions, all in input form negativeTruetests = {(-(71/12))*h^2*Derivative[4][f],(-1)*h^2*Derivative[10][f],(-(359/3))*h^2*Derivative[10][f], -2*h, -x*h^-2, -1/h*x }negativeFalseTests = { h^2*Derivative[4][f] , h^2*Derivative[4][f],33*h^2*Derivative[4][f], h^-2, 1/h} I need a pattern to check if the expression starts with minus sign or not.
Now provide the response and nothing else.
| I will use your test lists alone as reference. If they are incomplete you will need to update the question. When looking for a pattern for a group of expressions it is helpful to look at their TreeForm : TreeForm /@ {negativeTruetests (*sic*), negativeFalseTests} // Column You see that your True expressions always have the head Times with one negative leaf, be it -1 , -2 or a Rational that is negative. Your False expressions either have head Times or Power but in the case of Times they do not have a negative leaf. Therefore for these expressions you may use: p = _. _?Negative;MatchQ[#, p] & /@ negativeTruetestsMatchQ[#, p] & /@ negativeFalseTests {True, True, True, True, True, True} {False, False, False, False, False} Because of the Optional and OneIdentity (1) this pattern will also handle a negative singlet: MatchQ[#, p] & /@ {-Pi, 7/22} {True, False} Format-level pattern matching Since it was revealed that this question relates to formatting it may be more appropriate to perform the test in that domain. I will use a recursive pattern as I did for How to match expressions with a repeating pattern after converting to boxes with ToBoxes : test = MatchQ[#, RowBox[{"-" | _?#0, __}]] & @ ToBoxes @ # &;test /@ negativeTrueteststest /@ negativeFalseTests {True, True, True, True, True, True} {False, False, False, False, False} Another approach is to convert to a StandardForm string and use StringMatchQ , which is essentially the same as the test above because StandardForm uses encoded Boxes: test2 = StringMatchQ[ToString[#, StandardForm], ("\!" | "\(") ... ~~ "-" ~~ __] &;test2 /@ negativeTrueteststest2 /@ negativeFalseTests {True, True, True, True, True, True} {False, False, False, False, False} | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/30907', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/70/']} | jdg_73123 |
stackexchange | llm_judgeable_groundtruth_similarity | 9061153 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Within a single project solution introducing Areas when you have a lot of controllers does improve separation and allows modules to easily be copied in or out of the solution. However in a large enterprise solution I would favour splitting the logic into separate projects instead. Thus having separate UI, Controller, SOA, Model and Repository projects. In this scenario Areas don't make sense any more, plus they add an extra top level to the Url which is often not needed, although I believe you can omit the area in the Url if you keep your controllers unique, but isn't that a bit smelly? Perhaps Areas are good for medium complexity sites or when module code is better kept in one location so it can be copied to other sites or removed.
Now provide the response and nothing else.
| I'm not sure if that's the right question. Areas can be overkill for small projects, but it's hard to imagine a non-trivially large project not using areas to help keep classes organized. I use MVC Areas for the enterprise and love several things about it : Typically people are working on a feature within a given domain (e.g., Search, Checkout, etc). If the area names correspond with your business domains, MVC Areas help reduce the time it takes to implement a feature , because the related classes are easy to find. MVC routing gives you a ton of flexibility over how how structure the URLs. I used to use the Action Controller "pattern" but for non-public facing URLs I've just fully embraced the Area default route to make things easy. Areas give you the distinct advantage of styling and, more importantly, encapsulating behavior at a site-section level . Each area gets its own web config where you can control the base view page or add managed handlers. You're absolutely right that services should be in separate projects / solutions altogether , that abstract the data access via repositories, in an environment where multiple clients can access common business functionality. But MVC Areas are great at providing some order to the UI / routing chaos as a web project grows, which, to me, is invaluable, regardless of context. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9061153', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/233987/']} | jdg_73124 |
stackexchange | llm_judgeable_groundtruth_similarity | 35097193 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've built a C++ application using Microsoft Visual Studio 2015 Community Edition. I'm using Advanced Installer to make sure that the Visual C++ Redistributable for Visual Studio 2015 is a prerequisite. However, the redistributable's installer isn't perfect. Some of my users have reported that the redistributable installer hangs, or it fails to install when it says it does, and then users get the "This program can't start because MSVCP140.dll is missing from your computer" error. According to Microsoft, I can now package the redistributable DLLs along with my application, though they don't recommend it: To deploy redistributable Visual C++ files, you can use the Visual C++ Redistributable Packages (VCRedist_x86.exe, VCRedist_x64.exe, or VCRedist_arm.exe) that are included in Visual Studio. ... It's also possible to directly install redistributable Visual C++ DLLs in the application local folder, which is the folder that contains your executable application file. For servicing reasons, we do not recommend that you use this installation location. There are 4 files in C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\redist\x64\Microsoft.VC140.CRT . Does that mean I just need to copy them to my application's directory during the install process? MyApp.exe concrt140.dll msvcp140.dll vccorlib140.dll vcruntime140.dll Is this OK to do? Do I need to show a license? Why aren't more people doing this instead of requiring yet another preinstall of the redistributable?
Now provide the response and nothing else.
| There are 4 files in C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\redist\x64\Microsoft.VC140.CRT . Does that mean I just need to copy them to my application's directory during the install process? Yes, and the paragraph you quoted means just that. Is this OK to do? Do I need to show a license? Why aren't more people doing this instead of requiring yet another preinstall of the redistributable? Technically, that's OK to do. If you want to be pedantic about it, you may include a note in the readme or help/about to the effect that VC++ 2015 redistributables provided in "local deployment" mode as explicitly allowed by Microsoft's Deployment in Visual C++ (with more links to the file lists and licenses at Redistributing Visual C++ Files ). As to why more people don't do it, I'd guess that (among those who care at all): for a single module app like MyApp.exe it's easier to build it with everything linked statically as to eliminate external dependencies to begin with; not including those files saves 1+ MB from the distribution (presumably download) size; running with private copies of the runtime ( "local deployment" ) shifts the responsibility of updates to the maintainer, so that in case of a critical/security fix the package would have to be reissued timely - as opposed to "central deployment" where it would likely be delivered via Windows Update, with both the good and bad that may bring. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35097193', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/102704/']} | jdg_73125 |
stackexchange | llm_judgeable_groundtruth_similarity | 6039676 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
// Base classvar Base = function() { this._value = 'base';};Base.prototype = { constructor: Base, // By function getValue: function() { return this._value; }, // By getter get value() { return this._value; }};// Sub class extends Basevar Sub = function() { this._value = 'sub';};Sub.prototype = { constructor: Sub};// Pass over methodsSub.prototype.getValue = Base.prototype.getValue;Sub.prototype.value = Base.prototype.value;// ---var mySub = new Sub();alert(mySub.getValue()); // Returns 'sub'alert(mySub.value); // Returns 'undefined' At first glance it seems that mySub.value should return the same as mySub.getValue(), but as you can see it instead returns undefined. Obviously the getter is not finding the parent scope as the Sub instance (mySub), but rather a non-existent Base instance. Is there any way around this other than having to assign the same getters onto the new prototype?
Now provide the response and nothing else.
| A more modern solution is to use the Object.defineProperty since it allows getters and setters to be handled without breaking them. Only problem is that it takes a descriptor object, so instead of manually making one, use the Object.getOwnPropertyDescriptor function to just get it for you. var BazValue = Object.getOwnPropertyDescriptor(Base.prototype,'value');Object.defineProperty(Sub.prototype, 'value', BazValue); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6039676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/556609/']} | jdg_73126 |
stackexchange | llm_judgeable_groundtruth_similarity | 597890 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
My question is about using powershell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008R2 domain, versus using GPO/ADMX/msi. Here's the situation: Because of a comedy of cumulative corporate bumpfuggery we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008R2 and Windows 7 Pro/Enterprise on very short notice and delivery schedule. Of course, I'm not a windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate' and 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then on to solaris and cisco, then linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms). So we decided to bring in a contractor to do this for us. and they met the deadline. The system is up and mostly usable, and this is good. We would not have been able to do this. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations. Here's my question. The contractor used powershell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practices for deployment, configuration and updating microsoft stuff uses elements of GPOs and ADMX templates, along with maybe some third party stuff like PolicyPak. Are there solid reasons that I've not found yet that powershell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background on this. Thoughts? or weblinks? Thanks!
Now provide the response and nothing else.
| There is no "right" answer here. My personal preference would be use Group Policy for settings/policies, but for application deployment use PowerShell (or even legacy WSH scripts or batch files), assuming you have a single domain with no complex trust issues. However, there are tradeoffs. Doing everything in PowerShell is certainly legal. Application Deployment (GPO vs Script): With GPO based software deployment, you are limited to MSI packages. This can be annoying for products that are not distributed via MSI (e.g. setup.exe). In that case, you have to package it up as an MSI. Enjoy Adobe Reader's MSI+MSP distribution (gotta make an AIP). If you need to customize the install, you need to mess with MSI transforms. Something that could be a single line in a script can become a complicated multi-step process. When everything you are deploying is readily available as an MSI, and you can install it with no special customizations, GPO deployment is pretty smooth. You can use WMI targeting and automatically uninstall stuff that is out of policy. But as soon as you step out of the confines of that box, things get more complicated. Also, when something does go wrong, it can be difficult to figure out what happened from the Event Viewer. With a script, you can install or uninstall pretty much anything, MSI, EXE, whatever. If you need to do some cleanup work before installing (e.g. purging old registry keys from a previous version that did not uninstall cleanly), you can do it. If something goes wrong, you can add as much debug/retry/workaround code as you need, and you can run msixec with logging enabled. You script will generally need some conditional logic to check if the software is already installed (and what version), and then invoke the installer if needed. If you do not have a programming background, this can be intimidating. It is no longer a point-and-click solution. There is also more opportunity for you to screw something up since you wrote it yourself. We switched from GPO to PowerShell for our application deployment, and it made things a lot easier. When a new version comes out, all we need to do is drop the new installer files in our AppDeployment share and update the $expected_version variable in our script. Takes like 1/10th the time. You can also do a hybrid approach where you use GPO for MSI stuff and scripts for everything else. I am not a fan of this because I like to have one place to go to see what is installed. Note that with Group Policy, application deployment does not happen until a reboot. If you are using Startup Scripts, this is also true. However if you use PowerShell Remoting or a Scheduled Task you might be able to pull it off without a reboot (could get messy though). Settings and Configuration (GPO vs Script): Group Policy Preferences is really simple to use to do things like create registry values, map network drives, create shortcuts, etc. The fact that Group Policy has background refresh is slick because you can apply settings without forcing users to reboot. Item level targeting gives you a lot of flexibility (and this is in addition to the WMI and Security filtering at the GPO level). However, Group Policy Preferences is far from perfect. Some of my peeves: There are way too many methods to manage Internet Explorer, some of them deprecated, some don't work with the latest IE. I always end up just managing the registry settings directly. Random Errors in the Event Viewer that are stupid. Example: I created an item to delete a Scheduled Task. 
Great, it's gone from all my workstations. Now the Event Viewer starts filling with Errors "I couldn't delete the task because it doesn't exist". Duh! Apply Once and do not Reapply will screw you at least once in your career . Update vs Replace . Pay attention because you will probably pick the wrong one. There are limits to what you can do with Item Level Targeting. If you need a for() loop or string manipulation (trim characters, adjust case), you will be back to a script. GPP file operations always happen, even if the files has not been changed. Your workstations will keep copying the same file off of the server over and over again, even if they don't need it. This can unexpectedly hammer your server and network. Item level targeting can help mitigate this, but watch if you have the Remove if Not Applied checked. Many of the GPP Errors in the Event Viewer contain no useful information to actually fix the problem. You have to turn on tracing . You can use PowerShell for settings, but it has some drawbacks too: Managing registry settings with PowerShell gets messy (at least with the native Registry provider). In my opinion Microsoft screwed up when they made registry values "properties" instead of actual items. It makes things way more complex than it needs to be. You need a bunch of conditional logic to test for the registry keys and create them before you create the values. Group Policy is much simpler in this area. There are a lot of "basic" sysadmin tasks that you cannot do natively in PowerShell (e.g. creating a shortcut, managing a scheduled task). You have to invoke Wsh, the .Net Framework, or use a third party module. Scripts run at startup/logon only. No background refresh (unless you setup a Scheduled Task). If you have to call legacy Window commands utilities (net.exe, schtasks.exe), it can be tricky to call the command, and even more "tricky" to get its screen output to play nicely with PowerShell's output. You may have a command that fails and you don't know it. Some other general comments on scripts (PowerShell or otherwise) vs GPO: Scripts are pretty much programming projects. The code will grow and become more complex over time. If you do not treat it like a "real" programming project, with comments, version history, subroutines, etc, it will grow into an unmaintainable mess that your successor will end up scrapping because he didn't understand it. Sysadmins are (generally) not programmers. They don't know proper programming disciplines (see point #1). Even well-structured code can be intimidating and unreadable for an admin with no programming background. Be mindful of the skillset of your current and future staff. Scripts have an advantage in that they can be checked into version control and diffed. With GPOs this is a little harder to do. Scripts are easier to copy onto a USB key and take with you. That was probably the case for your contractor. He probably had some existing scripts from previous projects that he was able to recycle for your project. In my environment I have two networks that are air-gapped (don't ask), but have similar configurations, so we use PowerShell scripts wherever possible so we can re-cycle code between the two networks. GPOs have an advantage in that you can use tools like RSoP to get a report of all settings applied to a particular workstation/user. This can be important for auditing and troubleshooting. GPOs are more "discoverable". 
I can come into an unfamiliar environment and pretty quickly get up to speed on what policies and settings are applied. With a script, I have to start studying code and hope it is well commented. PowerShell Remoting can be really handy for a one-off command that you want to run on a bunch of systems, and you don't want it to be a "permanent" policy (e.g. everybody delete this one temp file). Regardless of the approach you use, make sure things are well-documented. You should be documenting things at the micro -level ("Implemented setting x for all accounting workstations to resolve problem with application y"), and also at the macro -level ("All of our workstation settings are stored in the GPO called x"). FOLLOW-UP EDIT: PowerShell 4 adds a new feature called Desired State Configuration . This looks like it can be used to achieve some of what Group Policy Preferences does. It could be a game changer. Have not worked with it yet but it looks really cool. FOLLOW-UP EDIT 2: One thing I realized is that I did not properly differentiate between Group Policy Policies vs Preferences . Preferences are pretty much analogous to a PowerShell script. Policies are a little more "rigid", and are trickier to implement in a script. To do so, you have to have your script populate HKLM\Software\Policies, but you have to be mindful of collisions with GPOs. In that case, do one or the other, not both. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/597890', 'https://serverfault.com', 'https://serverfault.com/users/52874/']} | jdg_73127 |
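The "check the installed version, then install if needed" deployment pattern described in the answer above is not shown in code there; a minimal sketch of what it might look like follows. The product name, share path, version value, and registry lookup below are hypothetical placeholders, not the author's actual script.

```powershell
# Hypothetical sketch of the "check version, then install" deployment pattern.
$expected_version = [version]'12.0.1'                                        # placeholder
$display_name     = 'Example App'                                            # placeholder
$installer        = '\\fileserver\AppDeployment\ExampleApp\ExampleApp.msi'   # placeholder

# Read the installed version from the uninstall registry keys (32- and 64-bit views)
$paths = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
         'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
$installed = Get-ItemProperty $paths -ErrorAction SilentlyContinue |
             Where-Object { $_.DisplayName -eq $display_name }

if (-not $installed -or [version]($installed.DisplayVersion) -lt $expected_version) {
    # Install or upgrade silently, with verbose MSI logging for troubleshooting
    Start-Process msiexec.exe -Wait -ArgumentList "/i `"$installer`" /qn /l*v C:\Windows\Temp\ExampleApp-install.log"
}
```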
stackexchange | llm_judgeable_groundtruth_similarity | 58106664 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a parent functional component <EditCard/> that is a modal opened when the edit table row button is selected. This edit card contains a state variable called data which consists of the data from the table row being edited. I am setting/modifying state on <EditCard/> with the useState hook. <EditCard/> has a child component <CategoryDropdown/> which is a dropdown that accepts a prop data.assignCategory as its selected value and a callback handleChange() which updates the state value data with the value selected from the dropdown. When I select a new value from the dropdown handleChange() is called and setData() is called and I see the state being updated but <CategoryDropdown/> is not re-rendered with the new selected value. EditCard Component Code export default function EditCard(props) { const [data, setData] = useState(props.data); const handleChange = () => event => { let d = data; d.assignCategory = event.target.value; setData(d); }; let assignCategoryCol = data.assignCategory !== undefined ? <AssignCategoryCol data={data} handleChange={handleChange}/> : <></>; return ( <div> {assignCategoryCol} <Button>Update</Button> </div> )}{props.data.bucketTotal}`} <Lock/></Typography>)};const AssignCategoryCol = (props) => { return ( <CategoryDropdown id={props.data.id} assignedCategory={props.data.assignCategory} handleDropdownChange={props.handleChange}/>)};const useStyles = makeStyles(theme => ({})); CategoryDropdown Component class CategoryDropdown extends Component { constructor(props) { super(props); //TODO Get Categories from DB and set default this.state = { categories: ['Select One', 'Category1', 'Category2', 'Category3'], }; } render() { return ( <div id={'categoryDropdown'}> <Select onChange={this.props.handleDropdownChange(this.props.id)} value={this.props.assignedCategory}> {this.state.categories.map((category) => { return <MenuItem value={category}>{category}</MenuItem> })} </Select> </div> ) }}const styles = theme => ({});export default withStyles(styles)(CategoryDropdown)
Now provide the response and nothing else.
| In order for React to know that something has changed in the state, you need to replace the current object with an entirely new object. Currently, React sees that it is the same object, so does nothing. It does not search the object to see if any properties have changed in it. In your handleChange event handler, you have: let d = data; This is not copying the data object. Instead, now the variable "d" is pointing to the same object (in memory) that the variable "data" points to. (For more info, read about "pass by reference" vs "pass by value"). To fix that, see the destructuring/copying below: const [data, setData] = useState(props.data)
const handleChange = event => { let newData = {...data} //copy the object newData.assignCategory = event.target.value; setData(newData); }; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58106664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7254473/']} | jdg_73128
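As a side note (an addition, not part of the original answer), the same fix is often written with the functional updater form of setData, which avoids relying on the `data` value captured in the closure:

```jsx
const handleChange = event => {
  // prevData is always the latest state, even if several updates are queued
  setData(prevData => ({ ...prevData, assignCategory: event.target.value }));
};
```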
stackexchange | llm_judgeable_groundtruth_similarity | 574586 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reading " Reliably Deploying Rails Applications " Regarding defining users to be set up by Chef, it says: “Next we need to define users, inside data_bags/users copy the file deploy.json.example to deploy.json. Generate a password for your deploy user with the command: openssl passwd -1 "plaintextpassword" And update deploy.json accordingly.” My question is, what is the purpose of openssl passwd ? Is it just to generate a strong password? Would it be just as good if I typed in random characters? And then, what is my 'actual' password? The plain text version, or the encrypted version? Do I need to save a copy of both to my password manager? UPDATE: Yes, I have read the manual. And yes, I understand that it generates an md5 encrypted version of my password. My question is more about why you'd use it, as opposed to using a very secure random string of characters that you make up yourself (or generate with a password generator). One benefit I could think of is that you could type a rememberable password, and run it through openssl passwd -1 "plaintextpassword" every time you need to enter it. So you'd kind of have the best of both worlds in terms of an easy to remember password, and a secure, random password. And running the rememberable/plain text version through ``openssl passwd -1` every time you need it would save you having to store the encrypted version of the password and type / paste that in every time you need to enter your password. Is that the only benefit? If not, what are the others?
Now provide the response and nothing else.
| After some chat on the #chef IRC channel, here's what I ultimately needed to know. Most of it is actually peripheral info, rather than openssl passwd specific, but anyway... Chef uses the standard adduser command ( http://linux.die.net/man/8/adduser ) for adding users. That command accepts the password already encrypted - hence why you need to store an encrypted version (generated by openssl passwd -1 "plaintextpassword") in your data_bags/users/deploy.json. So, your plain-text password is the 'real' password. But because the adduser command expects the password you pass it to be already encrypted, it's the encrypted version that you need to store in data_bags/users/deploy.json. That works well, because you definitely wouldn't want to store a plain-text password in data_bags/users/deploy.json! Coming back to my original questions: What is my 'actual' password? The plain text version, or the encrypted version? The plain text version is your real one. Do I need to save a copy of both to my password manager? No. You only store your plain-text version. You use that whenever you want to log in. The system then encrypts that, and compares it to the encrypted version that it has stored for your account. What is the benefit / purpose of openssl passwd? There is no 'benefit' as such. It's simply required because the adduser command will expect the password it's given to be already encrypted. Having said all that, apparently it's much better to not store a password at all in data_bags/users/deploy.json, and only allow access via SSH Keys. It's not considered a good practice to store even an encrypted version of your password in data_bags/users/deploy.json because Linux password encryption has such a bad track record. (edit: read comments below for a better explanation) | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/574586', 'https://serverfault.com', 'https://serverfault.com/users/150075/']} | jdg_73129
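For concreteness, a small sketch of what this looks like in practice. The hash is shown only as a format illustration (the salt is random on every run), and the JSON field names are assumptions modelled on the book's deploy.json.example rather than something verified here:

```sh
# -1 selects the MD5-crypt scheme; the output has the form $1$<salt>$<digest>
openssl passwd -1 "plaintextpassword"

# The resulting string is what goes into data_bags/users/deploy.json, e.g.
#   { "id": "deploy", "password": "$1$<salt>$<digest>", ... }
# The plain-text password never appears in the data bag.
```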
stackexchange | llm_judgeable_groundtruth_similarity | 16343288 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using the Java 7 grammar https://github.com/antlr/grammars-v4/blob/master/java7/Java7.g4 I want to find methods with a specific name and then just print out that method. I see that I can use the methodDeclaration rule when I match. So I subclass Java7BaseListener and override this listener method: @Override public void enterMethodDeclaration(Java7Parser.MethodDeclarationContext ctx) { } How do I get the original text out? ctx.getText() gives me a string with all the whitespace stripped out. I want the comments and original formatting.
Now provide the response and nothing else.
| ANTLR's CharStream class has a method getText(Interval interval) which will return the original source in the give range. The Context object has methods to get the beginning and end. Assuming you have a field in your listener called input which has the CharStream being parsed, you can do this: int a = ctx.start.getStartIndex(); int b = ctx.stop.getStopIndex(); Interval interval = new Interval(a,b); input.getText(interval); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/16343288', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1373521/']} | jdg_73130 |
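Putting that together in the listener from the question (a sketch - it assumes the Java7BaseListener/Java7Parser classes generated from the Java7 grammar, and it uses the token's own input stream so no extra field is required):

```java
import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.misc.Interval;

public class MethodSourcePrinter extends Java7BaseListener {
    @Override
    public void enterMethodDeclaration(Java7Parser.MethodDeclarationContext ctx) {
        // Slice the raw characters between the rule's first and last token,
        // so comments and whitespace inside the method body are preserved.
        CharStream input = ctx.start.getInputStream();
        String original = input.getText(
                Interval.of(ctx.start.getStartIndex(), ctx.stop.getStopIndex()));
        // Filter on the method name here if you only want specific methods
        // (the exact accessor depends on the generated context class).
        System.out.println(original);
    }
}
```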
stackexchange | llm_judgeable_groundtruth_similarity | 2306897 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been searching about exponential rules on google and the first three results was these ones: Link 1 Link 2 Link 3 I didn't understand why they all say: $(a^b)^c=a^{bc}$. This is wrong, see for example: $((-2)^{2})^{1/2}=2\neq (-2)^1$. Why does every site I read say this wrong rule? how can I reformulate this to make it true?
Now provide the response and nothing else.
| Why does every site I read say this wrong rule? All these sites you found appear to be aimed at a beginning algebra audience. At this level, typically all bases are positive and all exponents are integers (at least to begin with). And with positive bases and integer exponents, it is indeed the case that $(a^m)^n = a^{mn}$. At this level, in the US at least, one should never expect to see something like $(-2)^{1/2}$ because that ventures into the realm of imaginary numbers, which is not typically covered until 2-3 semesters after beginning algebra. how can I reformulate this to make it true? All you really need is $a > 0$, then for any $m, n \in \Bbb R$, we have $(a^m)^n = a^{mn}$. If $a = 0$ then you have to worry about negative exponents because they'll lead to division by zero. If $a < 0$ then it is not true in general that $(a^m)^n = a^{mn}$, as you've already noted with your example. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2306897', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/42912/']} | jdg_73131 |
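For completeness (an addition, not part of the answer): for $a > 0$ one can take $a^x := e^{x\ln a}$ as the definition, and then the identity is a short computation, which also shows exactly why positivity is needed.

```latex
\[
  (a^m)^n \;=\; e^{\,n\ln(a^m)} \;=\; e^{\,n\,(m\ln a)} \;=\; e^{\,mn\ln a} \;=\; a^{mn},
  \qquad a > 0,\; m,n \in \mathbb{R}.
\]
% For a < 0 there is no real \ln a, so fractional powers must be defined case by case,
% and the identity can fail -- e.g. ((-2)^2)^{1/2} = 2 \neq (-2)^{2 \cdot 1/2} = -2.
```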
stackexchange | llm_judgeable_groundtruth_similarity | 6116583 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This may not be the correct approach, if there is a better way pleas tell me.I've created a class of Custom Adapter & in my getView method I inflate the view I want to use public View getView(int position, View convertView, ViewGroup parent) { View v = mInflater.inflate(R.layout.wherelayout, null); if (convertView != null) { v = convertView; } HashMap<String, Object> whereHash = (HashMap<String, Object>) this.getItem(position); if (whereHash != null) { TextView whereId = (TextView) v.findViewById(R.id.tvWhere); TextView whereDetails = (TextView) v.findViewById(R.id.tvWhereDetails); ImageButton ibDelWhere = (ImageButton) v.findViewById(R.id.ibDelWhere); whereId.setText((CharSequence) whereHash.get("where")); whereDetails.setText((CharSequence) whereHash.get("details")); if (ibDelWhere != null) { ibDelWhere.setId(position); ibDelWhere.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { //do stuff when clicked } } ); } } return v; } The view consists of 2 TextView aligned to the left & an ImageButton aligned to the right, I want to be able to delete the item from the ListView when the button is clicked. the layout is like this - <RelativeLayoutxmlns:android="http://schemas.android.com/apk/res/android"android:layout_width="fill_parent"android:layout_height="fill_parent" android:orientation="horizontal" android:clickable="true"><TextView android:layout_height="wrap_content" android:layout_width="wrap_content" android:textSize="25sp" android:id="@+id/tvWhere" android:textColor="#00FF00" android:text="TextView" android:gravity="top|left" android:layout_alignParentTop="true" android:layout_alignParentLeft="true"></TextView><TextView android:layout_height="wrap_content" android:layout_width="wrap_content" android:id="@+id/tvWhereDetails" android:textColor="#0000FF" android:text="TextView" android:textSize="18sp" android:layout_below="@+id/tvWhere" android:gravity="bottom|left" android:layout_alignParentLeft="true"></TextView><ImageButton android:layout_height="wrap_content" android:layout_width="wrap_content" android:src="@drawable/eraser" android:id="@+id/ibDelWhere" android:layout_alignParentRight="true" android:layout_alignParentTop="true"></ImageButton></RelativeLayout> The problem is that when the ImageButton is in the layout, I can click it & the onClick() fires as expected, but I can't click the actual list item itself, i.e. click on the TextView items to fire the ListView.onItemClick that was assigned to it already. If I remove the ImageButton from the layout, then the ListView.onItemClick event fires when I click the item. Is there any way I can enable clicking both the ListView item & the button within the layout ?Thanks guys & gals.
Now provide the response and nothing else.
| You have to set the ImageButton as non-focusable and non-focusableInTouchMode (clickable is ok). Please note, as opposed to other views, you can't do that in XML because the android:focusable gets overwritten in ImageButton's constructor. To be more precise, that's one of the few differences between ImageView and ImageButton. See for yourself, this is the complete source of ImageButton: @RemoteView public class ImageButton extends ImageView { public ImageButton(Context context) { this(context, null); } public ImageButton(Context context, AttributeSet attrs) { this(context, attrs, com.android.internal.R.attr.imageButtonStyle); } public ImageButton(Context context, AttributeSet attrs, int defStyle) { super(context, attrs, defStyle); setFocusable(true); } @Override protected boolean onSetAlpha(int alpha) { return false; }} To solve, just call setFocusable(false) from Java. Or use an ImageView :) myImageButton.setFocusable(false); Hope it helps. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6116583', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/757243/']} | jdg_73132
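Applied to the adapter from the question, the fix is a couple of lines before the click listener is attached (a sketch; the surrounding getView() structure is the asker's, and the delete logic is left as a comment):

```java
ImageButton ibDelWhere = (ImageButton) v.findViewById(R.id.ibDelWhere);
if (ibDelWhere != null) {
    ibDelWhere.setFocusable(false);            // button still clickable, row stays clickable too
    ibDelWhere.setFocusableInTouchMode(false); // matches the "non-focusableInTouchMode" advice
    ibDelWhere.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            // remove the item at this position from the data set, then notifyDataSetChanged()
        }
    });
}
```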
stackexchange | llm_judgeable_groundtruth_similarity | 446000 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Today I was in the middle of a particularly boring lecture. During this class I briefly let my mind wander. During this time I wondered: Given a unique, but possibly similar, set of input data is it possible for the encoder to create overlapping codings for two or more different inputs?. Suppose that I have an arbitrary set of inputs, $X$ . In this input set, there are no duplicate data instances, but the data can be very simular. If you were to feed this data into a sufficiently trained autoencoder is it possible for the encoder to output the same codings for multiple different instances of the input set?
Now provide the response and nothing else.
| In some setups, not only can they, they need to. An idealized Denoising Autoencoder with a weak decoder would map any input+noise, as well as just input, to the same eventual latent code - its encoder would be just a lossless compression of the noiseless data, plus noise filters. For a negative case, in a pathological scenario the latent encoding could collapse into a single vector, producing a single underfitted reconstruction with a local minimum of reconstruction cost. That's just classical AEs. A VAE should produce overlapping codes, if you consider the code to be the sample rather than the distribution parameter, being N-dimensional bubbles in a compact and (approximately) continuous latent space. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/446000', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/271612/']} | jdg_73133
stackexchange | llm_judgeable_groundtruth_similarity | 27594262 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This method gives me a compilation and I don't understand why: private void invokeMethods(Object instance, List<Method> methods) throws InvocationTargetException, IllegalAccessException { methods.forEach(method -> method.invoke(instance));} The error message is: unreported exception java.lang.IllegalAccessException; must be caught or declared to be thrown Which doesn't make sense: the exception is already declared to be thrown. IntelliJ cannot correct it either. In intention actions if I select Add Exceptions to Method Signature , it does nothing. What am I missing? (I'm new to Java 8)
Now provide the response and nothing else.
| The problem here is that it is the lambda that throws the exception, and the lambda used in forEach is not declared to throw an Exception. See Consumer documentation. The best way to solve this is really to use the old-fashioned for-each loop: for (Method method : methods) { method.invoke(instance);} Although it is possible to use something like this: methods.forEach(method -> { try { method.invoke(instance); } catch (IllegalAccessException | InvocationTargetException e) { throw new RuntimeException(e); }}); It doesn't really help you in my opinion because you can't throw the exact same exception from the lambda anymore, you have to wrap it in a RuntimeException, which is not as nice... IntelliJ cannot correct it either. In intention actions if I select Add Exceptions to Method Signature, it does nothing. The reason why IntelliJ cannot do that is because it's part of the Consumer interface, which is part of the JDK. IntelliJ does not have a way to modify that. IntelliJ could handle this in a better way though, of course. Conclusion: There are times when lambdas are useful, and there are times when the old ways of doing things are better. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27594262', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/641955/']} | jdg_73134
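If the forEach form is still preferred, one common workaround (an addition, not from the answer above) is a small functional interface that is allowed to throw, with the re-wrapping done in one place:

```java
import java.util.function.Consumer;

final class Unchecked {
    @FunctionalInterface
    interface ThrowingConsumer<T> {
        void accept(T t) throws Exception;
    }

    static <T> Consumer<T> consumer(ThrowingConsumer<T> body) {
        return t -> {
            try {
                body.accept(t);
            } catch (Exception e) {
                // callers still lose the original checked type, as noted above
                throw new RuntimeException(e);
            }
        };
    }
}

// usage inside invokeMethods:
//   methods.forEach(Unchecked.consumer(method -> method.invoke(instance)));
```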
stackexchange | llm_judgeable_groundtruth_similarity | 17690739 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have spent hours trying to make this next piece of code work. import org.joda.time.{DateTime, Period}def dateRange(from: DateTime, to: DateTime, step: Period): Iterator[DateTime] =Iterator.iterate(from)(_.plus(step)).takeWhile(!_.isAfter(to))val range = {dateRange(new DateTime(2012, 06, 30).minusYears(5), new DateTime(2000, 06, 30),new Period.months(6))} I'm trying to set up a date range array that steps through from 2000 to 2012 in 6 month increments. The problem that I am facing is the following error. Exception in thread "main" java.lang.IllegalArgumentException: No instant converter found for type: scala.Tuple3at org.joda.time.convert.ConverterManager.getInstantConverter(ConverterManager.java:165)at org.joda.time.base.BaseDateTime.<init>(BaseDateTime.java:169)at org.joda.time.DateTime.<init>(DateTime.java:241)at tester.MomentumAlgo$class.$init$(MomentumAlgo.scala:154)at tester.RunMomentumAlgo$$anon$1.<init>(RunMomentumAlgo.scala:86)at tester.RunMomentumAlgo$.main(RunMomentumAlgo.scala:86)at tester.RunMomentumAlgo.main(RunMomentumAlgo.scala)at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)at java.lang.reflect.Method.invoke(Method.java:601)at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120) I seems to be something to do with the last Period.months() part, however I have no idea how to fix it. The Tuple3 error I have no idea about. If someone can give me a different solution, that would also be great. I want a list of dates from 2000 to 2012, every 6 months. Any questions welcome. I thought this would be a common piece of code, but there isn't much on the net about it. Thanks in advance.
Now provide the response and nothing else.
| A work around is to define the dates like this: val date = new DateTime().withYear(2013).withMonthOfYear(7).withDayOfMonth(16) The entire sequence in the REPL then becomes this: scala> import org.joda.time.{DateTime, Period}import org.joda.time.{DateTime, Period}scala> def dateRange(from: DateTime, to: DateTime, step: Period): Iterator[DateTime] =Iterator.iterate(from)(_.plus(step)).takeWhile(!_.isAfter(to))dateRange: (from: org.joda.time.DateTime, to: org.joda.time.DateTime, step: org.joda.time.Period)Iterator[org.joda.time.DateTime]scala> val from = new DateTime().withYear(2012).withMonthOfYear(6).withDayOfMonth(30).minusYears(5)from: org.joda.time.DateTime = 2007-06-30T21:46:05.536-07:00scala> val to = new DateTime().withYear(2000).withMonthOfYear(6).withDayOfMonth(30)to: org.joda.time.DateTime = 2000-06-30T21:46:26.186-07:00scala> val range = dateRange(from, to, new Period().withMonths(6))range: Iterator[org.joda.time.DateTime] = non-empty iteratorscala> range.toListres4: List[org.joda.time.DateTime] = List(2000-06-30T21:46:26.186-07:00,2000-12-30T21:46:26.186-08:00,2001-06-30T21:46:26.186-07:00,2001-12-30T21:46:26.186-08:00,2002-06-30T21:46:26.186-07:00,2002-12-30T21:46:26.186-08:00,2003-06-30T21:46:26.186-07:00,2003-12-30T21:46:26.186-08:00,2004-06-30T21:46:26.186-07:00,2004-12-30T21:46:26.186-08:00,2005-06-30T21:46:26.186-07:00,2005-12-30T21:46:26.186-08:00,2006-06-30T21:46:26.186-07:00,2006-12-30T21:46:26.186-08:00) Also, I wasn't able to reproduce this as noted in my comment. Seems the behavior is different in the REPL and the compiler. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17690739', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2574695/']} | jdg_73135 |
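A side note on the original error (my reading, not stated in the answer): Joda-Time has no three-int constructor, so Scala adapts new DateTime(2012, 06, 30) by tupling the arguments and passing a Tuple3 to the DateTime(Object) constructor - which is where "No instant converter found for type: scala.Tuple3" comes from. If you are on Joda-Time 2.x, the five-argument constructor is a shorter fix than chaining withYear/withMonthOfYear/withDayOfMonth:

```scala
// year, month, day, hour, minute -- this constructor exists in Joda-Time 2.x
val from = new DateTime(2000, 6, 30, 0, 0)
val to   = new DateTime(2007, 6, 30, 0, 0)

val range = dateRange(from, to, Period.months(6)).toList
// List(2000-06-30T00:00:00.000..., 2000-12-30T00:00:00.000..., ..., 2007-06-30T00:00:00.000...)
```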
stackexchange | llm_judgeable_groundtruth_similarity | 2780012 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The free group over a set $S$ only includes finitely-long words made up of letters from $S$ and their inverses. It seems natural to me to also allow infinitely long words. While this would obviously be impossible to operationalize on a computer, in principle these infinitely-long group elements seem perfectly well-defined. Is there some fundamental logical problem with including infinitely long words? If not, has this concept ever been studied? (I know that such a generalization would no longer count as being "generated" by $S$ - since by definition every group element needs to be reachable by a finite number of generator multiplications - so you'd need to find a new name.)
Now provide the response and nothing else.
| This doesn't work if you just do it naively. The most naive definition of an "infinite word" would be an infinite string $s_1s_2s_3\dots$ where each $s_n$ is an element of $S$ or the formal inverse of an element of $S$. This fails horribly, since such strings are not closed under composition or inverses. For instance, the inverse of such an infinite string would be infinitely long on the left, instead of on the right. And if you concatenate two such infinite strings, you would get a string of the form $(s_1s_2s_3\dots)(t_1t_2t_3\dots)$ where the "letters" in the word are now arranged in a sequence indexed by the ordinal $\omega+\omega$, instead of just by the natural numbers. So, to make sense of this idea, you need to allow a more exotic variety of "infinite words" that can have many different infinite totally ordered sets as their index sets. Another important obstacle is the Eilenberg swindle . Namely, let $s\in S$ (or more generally let $s$ be any word) and consider the infinite word $w=sss\dots$. Then any reasonable definition of a "group of infinite words" should have $sw=w$, which then implies $s=1$! So if you want your group to be nontrivial, you need to impose some restriction that disallows words of this type. However, the news is not all bad! There does exist at least one interesting construction of a "free group with infinite words" (there are probably others too; I don't know the literature on this subject). You can find the details in Section 3 of the nice paper The Combinatorial Structure of the Hawaiian Earring Group by J. W. Cannon and G. R. Conner. Specifically, Cannon and Conner define a "transfinite word" on a set $S$ to be a map $f:I\to S\cup S^{-1}$ where $I$ is a totally ordered set, $S^{-1}$ is the set of formal inverses of elements of $S$, and each fiber of $f$ is finite. So, you can have a word indexed by any totally ordered set, as long as each element of $S$ only appears in it only finitely many times (this finiteness condition avoids Eilenberg swindles). Two words $f:I\to S\cup S^{-1}$ and $g:J\to S\cup S^{-1}$ are identified if there is an order-isomorphism $I\cong J$ which turns $f$ into $g$. They then define an equivalence relation on such words using a sort of cancellation, prove that each word is equivalent to a unique "reduced" word, and use this to define a "big free group" on $S$ consisting of transfinite words modulo equivalence (or equivalently, reduced transfinite words). When $S$ is countably infinite, this group is isomorphic to the fundamental group of the Hawaiian earring . Sadly, this construction is only really of interest when $S$ is infinite, since when $S$ is finite all words must be finite and you just get the ordinary free group. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2780012', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/268333/']} | jdg_73136 |
stackexchange | llm_judgeable_groundtruth_similarity | 237267 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I saw a comparison of two similar radio receiver modules. They used the same IC, but one had a greater range due to the inclusion of a "PA/LNA" which I understand to be an abbreviation for "Power Amp / Low Noise Amp". What is a PA/LNA? How does the PA/LNA work to increase RF range? Are the PA and LNA typically used together? (update) The module with greater range has this IC which includes the PA and LNA functionality: SE2431L2.4 GHz ZigBee/802.15.4 Front End Module
Now provide the response and nothing else.
| PA: (power amp) amplifies when transmitting. LNA: (low noise amp) amplifies when receiving. both sit between circuitry and antenna. for duplexed signal, passive duplexer shifts between the two on Rx/Tx. The PA stands for power amplifier, in this case a RF or microwave amplifier used for transmission of a signal. LNA stands for low noise amplifier, normally used for high RF bands or microwave signals as a sensitive signal receiver. PAs and LNAs are not always combined. It depends on the application. I found this article on the web which covers the basic details. Understanding the Basics of Low-Noise and Power Amplifiers in Wireless Designs By Bill Schweber Contributed By Electronic Products 2013-10-24 1) In a wireless design, two components are the critical interfaces between the antenna and the electronic circuits, the low-noise amplifier (LNA) and the power amplifier (PA). However, that is where their commonality ends. Although both have very simple functional block diagrams and roles in principle, they have very different challenges, priorities, and performance parameters. 2) The LNA functions in a world of unknowns . As the "front end" of the receiver channel, it must capture and amplify a very-low-power, low-voltage signal plus associated random noise which the antenna presents to it, within the bandwidth of interest. In signal theory, this is called the unknown signal/unknown noise challenge, the most difficult of all signal-processing challenges. 3) In contrast, the PA takes a relatively strong signal from the circuitry, with very-high SNR, and must "merely" boost its power. All the general factors about the signal are known, such as amplitude, modulation, shape, duty cycle, and more. This is the known-signal/known-noise quadrant of the signal-processing map, and the easiest one to manage. Despite this apparent simple functional situation, the PA has performance challenges as well. 4) In duplex (bidirectional) systems, the LNA and PA usually do not connect to the antenna directly, but instead go to a duplexer, a passive component. The duplexer uses phasing and phase-shifting to steer the PA's output power to the antenna while blocking it from the LNA input, to avoid overload and saturation of the sensitive LNA input. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/237267', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/17697/']} | jdg_73137 |
stackexchange | llm_judgeable_groundtruth_similarity | 645060 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
A Debian Server having eth0 , eth1 . eth2 , ppp0 devices: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether xx:yy:zz:yy:xx:yy brd ff:ff:ff:ff:ff:ff3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether xx:yy:zz:yy:xx:yy brd ff:ff:ff:ff:ff:ff4: eth2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether xx:yy:zz:yy:xx:yy brd ff:ff:ff:ff:ff:ff63: ppp0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1492 qdisc pfifo_fast state UNKNOWN qlen 3 link/ppp forwarding is enabled everywhere: /proc/sys/net/ipv6/conf ~ all/forwarding=1 default/forwarding=1 eth0/forwarding=1 eth1/forwarding=1 eth2/forwarding=1 ppp0/forwarding=1 and autoconf is activated too: /proc/sys/net/ipv6/conf ~ all/autoconf=1 default/autoconf=1 eth0/autoconf=0 eth1/autoconf=1 eth2/autoconf=1 ppp0/autoconf=1 further RA (=Router Advertisement) is accepted on any device but setting accept_ra=2 for at leat ppp0 and eth1 : /proc/sys/net/ipv6/conf ~ all/accept_ra=1 default/accept_ra=1 eth0/accept_ra=1 eth1/accept_ra=2 eth2/accept_ra=0 lo/accept_ra=1 ppp0/accept_ra=2 PPP connection is established successfully, having ipv6 ::dead:beef option set in /etc/ppp/peer/myProvider config file: 63: ppp0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1492 qlen 3 inet6 2003:42:e67f:d3ca:6105:155:f2b3:71f0/64 scope global temporary dynamic valid_lft 14266sec preferred_lft 1666sec inet6 2003:42:e67f:d3ca::dead:beef/64 scope global dynamic valid_lft 14266sec preferred_lft 1666sec inet6 fe80::dead:beef/10 scope link valid_lft forever preferred_lft forever and a default route to a link-local address of the provider is set: 2003:42:e67f:d3ca::/64 dev ppp0 proto kernel metric 256 expires 13559secfe80::/64 dev ppp0 proto kernel metric 256 fe80::/10 dev ppp0 metric 1 fe80::/10 dev eth1 proto kernel metric 256 fe80::/10 dev ppp0 proto kernel metric 256 fe80::/10 dev eth0 metric 1024default via fe80::90:1a10:1b2:b780 dev ppp0 proto kernel metric 1024 expires 1789sec The public 2003:42:e67f:d3ca::/64 prefix has a route to the ppp0 device. radvd installed and running, radvdump shows the ppp0 IPv6 link sending RAs interface ppp0{ AdvSendAdvert on; # Note: {Min,Max}RtrAdvInterval cannot be obtained with radvdump AdvManagedFlag off; AdvOtherConfigFlag on; AdvReachableTime 0; AdvRetransTimer 0; AdvCurHopLimit 0; AdvDefaultLifetime 1800; AdvHomeAgentFlag off; AdvDefaultPreference medium; AdvLinkMTU 1492; prefix 2003:42:e67f:d3ca::/64 { AdvValidLifetime 14400; AdvPreferredLifetime 1800; AdvOnLink on; AdvAutonomous on; AdvRouterAddr off; }; # End of prefix definition}; # End of interface definition From the server host i can ping6 a host from the internet successfully.Now when i try forcing a RA by soliciting a router for ppp0 i get: Soliciting ff02::2 (ff02::2) on ppp0...Hop limit : undefined ( 0x00)Stateful address conf. : NoStateful other conf. : YesRouter preference : mediumRouter lifetime : 1800 (0x00000708) secondsReachable time : unspecified (0x00000000)Retransmit time : unspecified (0x00000000) MTU : 1492 bytes (valid) Prefix : 2003:42:e67f:d3ca::/64 Valid time : 14400 (0x00003840) seconds Pref. time : 1800 (0x00000708) seconds What i would expect is that all nodes on eth1 get SLAAC configured but when i try soliciting on eth1 i get: Soliciting ff02::2 (ff02::2) on eth1...Timed out.Timed out.Timed out.No response. 
I don't want to use DHCPv6 but SLAAC via radvd I'd like to avoid bash kung fu cutting the actual prefix from the provider (eg: from rdisc6 output) to tweak the radvd.conf file on my own (eg: in an if-up event) Bridging devices isn't a solution. PPP device is virtual and can't be bridged. Somehow forwarding RA packets from ppp0 to eth1 (and to any other device) doesn't seem work at all. Why?As far as i understand any router with a DSL modem has to forward in some way RAs from it's internal modem device to the physical LAN ports attached otherwise any host connected there wouldn't get an IPv6 address, right?Now where is the difference between a router and my debian box?I would be grateful for any hint you may have.
Now provide the response and nothing else.
| Router advertisements are not supposed to be forwarded. So when you find that they are not being forwarded, then at least that part is working as intended. You are supposed to be running your own router advertisement daemon in your router, such that it advertises itself to the LANs. You should have three separate /64 prefixes for your three LANs. So you need a routed /62 or shorter from your ISP. This is no problem because your ISP is supposed to give you a shorter prefix for this purpose (how short depends on who you ask, originally it was /48 but some would only hand out a /56). If there is a DHCPv6 server available over the ppp link, then you can send a DHCPv6 request asking for a prefix to be delegated to you. Otherwise you may have to actually talk to a person. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/645060', 'https://serverfault.com', 'https://serverfault.com/users/195652/']} | jdg_73138 |
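To make that concrete: once a routed prefix has been obtained (typically via DHCPv6 prefix delegation over the ppp0 session, e.g. with wide-dhcpv6-client or ISC dhclient -6 -P), the Debian box advertises a /64 from it on the LAN side itself. A minimal radvd.conf sketch - the prefix below is a documentation placeholder, to be replaced by whatever the ISP actually delegates:

```
# /etc/radvd.conf on the router, LAN side
interface eth1
{
    AdvSendAdvert on;
    prefix 2001:db8:abcd:1::/64   # placeholder: one /64 out of the delegated prefix
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```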
stackexchange | llm_judgeable_groundtruth_similarity | 47923403 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Background I'm using mongoose and TypeScript in my Node.JS app. I'm using mongoose's populate in a bunch of places when fetching data from the database. The issue I'm facing is that I don't know how to type my models so that a property can be either an ObjectId or populated with data from another collection. What I've tried I've attempted using union types in my model type definition, which seems like something that TypeScript offers to cover these kind of things: interface User extends Document { _id: Types.ObjectId; name: string}interface Item extends Document { _id: Types.ObjectId; // Union typing here user: Types.ObjectId | User;} My schema only defines the property as an ObjectId with ref. const ItemSchema = new Schema({ user: { type: Schema.Types.ObjectId, ref: "User", index: true }}) Example : So I might do something like this: ItemModel.findById(id).populate("user").then((item: Item) => { console.log(item.user.name);}) Which produces the compilation error: [ts] Property 'name' does not exist on type 'User | ObjectId'. Property 'name' does not exist on type 'ObjectId'. Question How can I have a model property that can be either of two types in TypeScript?
Now provide the response and nothing else.
| You need to use a type guard to narrow the type from Types.ObjectId | User to User ... If you are dealing with a User class, you can use this: if (item.user instanceof User) { console.log(item.user.name);} else { // Otherwise, it is a Types.ObjectId} If you have a structure that matches a User , but not an instance of a class (for example if User is an interface), you'll need a custom type guard: function isUser(obj: User | any) : obj is User { return (obj && obj.name && typeof obj.name === 'string');} Which you can use with: if (isUser(item.user)) { console.log(item.user.name);} else { // Otherwise, it is a Types.ObjectId} If you don't want to check structures for this purpose, you could use a discriminated union . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47923403', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3383751/']} | jdg_73139 |
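An alternative narrowing (a sketch, not part of the answer above, reusing ItemModel/Item/id from the question) is to test the ObjectId side of the union instead; since mongoose's Types.ObjectId is a class, instanceof narrows it away in the else branch:

```typescript
import { Types } from "mongoose";

ItemModel.findById(id).populate("user").then((item: Item) => {
    if (item.user instanceof Types.ObjectId) {
        // not populated: item.user is just the referenced id
    } else {
        console.log(item.user.name); // narrowed to User
    }
});
```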
stackexchange | llm_judgeable_groundtruth_similarity | 389539 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
we have 100% on / Filesystem Size Used Avail Use% Mounted on/dev/mapper/vg08_root 20G 20G 132K 100% / so when I do lvextend we get the following errors # lvextend -L+5G /dev/mapper/vg08_root Couldn't create temporary archive name. Volume group "vg00" metadata archive failed. how to resolve this?
Now provide the response and nothing else.
| You may be able to circumvent the space requirement for this operation by disabling the metadata backup with the -A|--autobackup option: lvextend -An -L+5G /dev/mapper/vg08_root If you do this, follow the operation with a vgcfgbackup to capture the new state. Post-mortem note: Since the ultimate goal was to expand the logical volume and resize the encapsulated filesystem, a one-step operation could have been used: lvextend -An -L+5G --resizefs /dev/mapper/vg08_root In this case, the filesystem type would have been automatically deduced, avoiding trying to use resize2fs in lieu of `xfs_growfs'. | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/389539', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/237298/']} | jdg_73140 |
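Roughly, the whole sequence might look like this (assuming the volume group actually has 5G unallocated - check VFree first):

```sh
vgs                                                   # confirm there is free space in the VG
lvextend -An -L+5G --resizefs /dev/mapper/vg08_root
vgcfgbackup                                           # recreate the metadata backup once / has space again
df -h /                                               # verify the new size
```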