source (stringclasses: 1 value) | task_type (stringclasses: 1 value) | in_source_id (stringlengths: 1-8) | prompt (stringlengths: 209-40.4k) | gold_standard_solution (stringlengths: 0-56.7k) | verification_info (stringclasses: 1 value) | metadata (stringlengths: 138-225) | problem_id (stringlengths: 9-10) |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 45569948 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am very new to Kafka. Using Kafka 0.11 Number of alive brokers '1' does not meet the required replication factor '3' for the offsets topic (configured via 'offsets.topic.replication.factor') I get the above error on sending a message for a topic kafka-topics --zookeeper localhost:2181 --topic test --describeTopic:test1 PartitionCount:1 ReplicationFactor:1 Configs:Topic: test1 Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Now provide the response and nothing else.
| How are you starting the broker ? What is the server.properties file. The one provided with the downloaded package should have the following line : offsets.topic.replication.factor=1 Just to be clear the error you see is not related to the topic you are trying to publish. Today, Kafka doesn't save topic offsets for consumers in Zookeeper anymore but in "internal topics" with name __consumer_offsets. Of course, if you have 1 broker you can't have a replication factor of 3. So I'd like to take a look at your server.properties. If the above property is missing, the default is 3. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45569948', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3270104/']} | jdg_84141 |
stackexchange | llm_judgeable_groundtruth_similarity | 31495 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The following might be a very trivial question. If so, I don't mind it being closed, but would appreciate a reference where I could read about it. Let $R$ and $S$ be commutative rings and let $R^\times$ and $S^\times$ denote their respective multiplicative groups of units. Let $f:R \to S$ be a ring homomorphism and let $f^\times : R^\times \to S^\times$ denote the induced group homomorphism. Finally, suppose that $f$ is surjective. Under what conditions (if any) will $f^\times$ be surjective? Thanks in advance!
Now provide the response and nothing else.
| I don't know how satisfactory this will be, but at least its a first stab at an answer, and might highlight some of the issues. There is one "obvious" condition which ensures $f^\times$ is surjective: if the kernel of $f$ is contained in the Jacobson radical of $R$, then $f^\times$ is surjective. We can think of $S$ as being $R/I$ for some ideal $I$, so that maximal ideals of $R/I$ correspond to maximal ideals of $R$ containing $I$. Since units are precisely elements that miss all maximal ideals, if every maximal ideal of $R$ contains $I$ then every unit in $R/I$ can be lifted to a unit in $R$ (in fact, every lift to an element of $R$ is a unit in this case). For $I$ not contained in the Jacobson radical, $R$ will have maximal ideals not containing $I$, and the question of whether every unit in $R/I$ lifts to an element of $R$ missing every maximal ideal in $R$ seems subtle. There are probably other, better, weaker conditions which will imply surjectivity, however. It is also useful to keep in mind the following example: the map $k[x] \to k[x]/(x^2)$ is surjective and does not induce a surjection on units. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/31495', 'https://mathoverflow.net', 'https://mathoverflow.net/users/394/']} | jdg_84142 |
stackexchange | llm_judgeable_groundtruth_similarity | 49261 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Most modern democracies select the government somewhat indirectly. The UK votes in 650 (currently, the number changes over time) separate constituencies each selecting a single MP by a first past the post method. Those MPs then select a government. The USA votes for the next president not directly but via an electoral college which usually allocates all the electoral college votes to the candidate with a majority of votes in the state. Both the UK and USA can, therefore, have a winner who did not win the aggregate popular vote across the whole country (for example, Trump won the electoral college with ~3m fewer votes overall than Clinton). This is an inherent quirk in indirect systems of selecting governments (or presidents). Is this quirk common? How often in long standing representative democracies does the "winner" not also win the popular vote? Note: There are many voting systems where the question may be meaningless. For example, countries like Ireland who use STV give people multiple votes (so it may be impossible to say who "won" the vote. Others have proportional parliamentary systems where the quirk in systems like the UK is designed out. The scope of the question, therefore, excludes those countries where it doesn't make sense. So answers should explain the nature of the electoral system they refer to. I'm not restricting it to specific countries as I don't know enough about every system to be sure I've picked the ones where it does apply. So consider it in two parts: which systems have the quirk and how common is it in practice.
Now provide the response and nothing else.
| In the UK this has happened 7 times since the introduction of the secret ballot: 1852, 1874, January 1910, December 1910, 1929, 1951 and February 1974. However the UK did not offer universal suffrage until 1928, so only the last three apply to a democratic system as we would understand the term today. For example in 1974, Labour won 11,645,616 votes, compared to the Conservative 11,872,180. However Labour a plurality of the seats (301 compared to the Conservative 297) In 1951, although the Labour party won slightly more votes than the Conservatives, the Conservatives actually gained a slim majority in the House of Commons. In both cases the reason was the same: votes "wasted" in large majorities in safe constituencies. In the US the President has lost the popular vote 4 or 5 times,1876, 1888, (1960 depending on how you count the unpledged elector vote), 2000, and 2016. However, the system was not democratic (by current standards) in 1876, 1960 or 1888. In systems with proportional representation, the winners of a plurality may not be able to form a majority coalition. For example in the Netherlands in 1982, the Labour party won a plurality (with 30.4% of the popular vote), but the government was formed by a coalition of the second and third place parties (The Christian Democrats and the Liberal Democrats) So the party of the Prime Minister was not the party that won the most votes. | {} | {'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/49261', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/8658/']} | jdg_84143 |
stackexchange | llm_judgeable_groundtruth_similarity | 31338462 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My understanding is that virtual functions can cause performance problems because of two issues: the extra derefencing caused by the vtable and the inability of compilers to inline functions in polymorphic code. What if I downcast a variable pointer to its exact type? Are there still any extra costs then? class Base { virtual void foo() = 0; };class Derived : public Base { void foo() { /* code */} };int main() { Base * pbase = new Derived(); pbase->foo(); // Can't inline this and have to go through vtable Derived * pderived = dynamic_cast<Derived *>(pbase); pderived->foo(); // Are there any costs due to the virtual method here?} My intuition tells me that since I cast the object to its actual type, the compiler should be able to avoid the disadvantages of using a virtual function (e.g., it should be able to inline the method call if it wants to). Is this correct? Can the compiler actually know that pderived is of type Derived after I downcast it? In the example above its trivial to see that pbase is of type Derived but in actual code it might be unknown at compile time. Now that I've written this down, I suppose that since the Derived class could itself be inherited by another class, downcasting pbase to a Derived pointer does not actually ensure anything to the compiler and thus it is not able to avoid the costs of having a virtual function?
Now provide the response and nothing else.
| There's always a gap between what the mythical Sufficiently Smart Compiler can do, and what actual compilers end up doing. In your example, since there is nothing inheriting from Derived , the latest compilers will likely devirtualize the call to foo . However, since successful devirtualization and subsequent inlining is a difficult problem in general, help the compiler out whenever possible by using the final keyword. class Derived : public Base { void foo() final { /* code */} } Now, the compiler knows that there's only one possible foo that a Derived* can call. (For an in-depth discussion of why devirtualization is hard and how gcc4.9+ tackles it, read Jan Hubicka's Devirtualization in C++ series posts.) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31338462', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1800405/']} | jdg_84144 |
stackexchange | llm_judgeable_groundtruth_similarity | 11965600 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a bean B which I have to create myself (using new B() ) and which has @Autowire and @PostConstruct annotations. How do I make Spring process these annotations from my bean A ? Related question: In Spring, can I autowire new beans from inside an autowired bean?
Now provide the response and nothing else.
| Aaron, I believe that your code is correct but I used the following: B bean = new B();AutowireCapableBeanFactory factory = applicationContext.getAutowireCapableBeanFactory();factory.autowireBean( bean );factory.initializeBean( bean, "bean" ); The first method will process @Autowire fields and methods (but not classic properties). The second method will invoke post processing ( @PostConstruct and any defined BeanPostProcessor s). Application context can be obtained in a bean if it implements ApplicationContextAware interface. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11965600', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/34088/']} | jdg_84145 |
stackexchange | llm_judgeable_groundtruth_similarity | 14084024 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been folowing the advice from this article for setting up a robots.txt file in asp.net mvc3 for using a controller to handle the server response, and IIS 8.0 express is returning a file not found error, rather than an asp.net error. How do I get IIS to not look for a file in these cases? Is there something I need in the web.config?
Now provide the response and nothing else.
| IIS tries to be intelligent here. He intercepts the dot in the url and thinks that this is a static file and attempts to serve it with the default StaticFile handler. it dopesn't event get to the managed ASP.NET application. The first possibility is to add the following in your web.config <system.webserver> <modules runAllManagedModulesForAllRequests="true" /> but actually that's not something I would recommend you doing because this might have a negative effect on the performance of your application because now all requests to static files (such as .js, .css, images, ...) will go through the managed pipeline. The recommended approach is to add the following handler to your web.config ( <handlers> tag of <system.webServer> ): <system.webServer> <handlers> <add name="Robots-ISAPI-Integrated-4.0" path="/robots.txt" verb="GET" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" /> ... </handlers></system.webServer> Notice how we have specified that this handler will only apply to a particular URL and HTTP verb. Now when you GET /robots.txt , IIS will no longer handle it with the StaticFile handler but will instead pass it to the managed pipeline ASP.NET. And then it will be intercepted by the routing engine and routed to the corresponding controller action. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14084024', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/612512/']} | jdg_84146 |
stackexchange | llm_judgeable_groundtruth_similarity | 4123341 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
i'm reading through jQuery's "Plugins/Authoring" though i already wrote a few jQuery-Plugins. Now I see that jQuery has a special way of scoping the methods and calling: (function( $ ){ var methods = { init : function( options ) { // THIS }, show : function( ) { // IS }, hide : function( ) { // GOOD }, update : function( content ) { // !!! } }; $.fn.tooltip = function( method ) { // Method calling logic if ( methods[method] ) { return methods[ method ].apply( this, Array.prototype.slice.call( arguments, 1 )); } else if ( typeof method === 'object' || ! method ) { return methods.init.apply( this, arguments ); } else { $.error( 'Method ' + method + ' does not exist on jQuery.tooltip' ); } };})( jQuery ); I understand the concept of what will happen in the end… but how exactly? This part is what confuses me: // Method calling logic if ( methods[method] ) { return methods[ method ].apply( this, Array.prototype.slice.call( arguments, 1 )); } else if ( typeof method === 'object' || ! method ) { return methods.init.apply( this, arguments ); } Why Array.prototype.slide.call(argumetns, 1) ? And where does the variable "arguments" come from all of the sudden? Any brief or deeper explanation is much appreciated. It is said, that this is how plugins should be written… so i'd like to know why. Thanks!
Now provide the response and nothing else.
| arguments arguments is a part of the JavaScript language. I was confused in exactly the way you were when I first ran into it; it's not just you. :-) It's an automatic local variable in every function, and is an array-like structure giving you all of the arguments (see Section 10.6 of the spec ), e.g.: function foo() { var index; for (index = 0; index < arguments.length; ++index) { alert(arguments[index]); }}foo("one", "two"); // alerts "one", then alerts "two" When I say arguments is array-like, I mean it — it's not an Array. Its references to the arguments are live (and bidirectional). For instance: function foo(namedArg, anotherNamedArg) { alert(namedArg === arguments[0]); // alerts true, of course alert(anotherNamedArg === arguments[1]); // also alerts true namedArg = "foo"; alert(arguments[0]); // alerts "foo" arguments[0] = "bar"; alert(namedArg); // alerts "bar"} Note that when assigning a value to namedArg , the result is reflected in arguments[0] , and vice-versa. arguments is really cool, but only use it if you need to — some implementations speed up calling functions by not hooking it up until/unless the function actually first tries to access it, which can slow the function down (very slightly). arguments also has property on it called callee , which is a reference to the function itself: function foo() { alert(foo === arguments.callee); // alerts true} However, it's best to avoid using arguments.callee for several reasons. One reason is that in many implementations, it's really slow (I don't know why, but to give you an idea, the function call overhead can increase by an order of magnitude if you use arguments.callee ). Another reason is that you can't use it in the new "strict" mode of ECMAScript5. (Some implementations also had arguments.caller — shudder — but fortunately it was never widespread and is not standardized anywhere [nor likely to be].) The slice call and apply Regarding return methods[ method ].apply( this, Array.prototype.slice.call( arguments, 1 )); What that's doing is using the Array#slice method to copy the arguments into an array (minus the first argument, which was the method to call), and then passing the resulting array into the Function#apply function on the function instance it's calling. Function#apply calls the function instance with the given this object and the arguments supplied as an array. The code's not just using arguments.slice because (again) arguments isn't really an Array and so you can't rely on it having all of the Array functions, but the specification specifically says (in Section 15.4.4.10) that you can apply the Array.prototype.slice function to anything that's array-like, and so that's what they're doing. Function#apply and Function#call are also built-in parts of JavaScript (see Sections 15.3.4.3 and 15.3.4.4). Here are simpler examples of each: // A function to test withfunction foo(msg, suffix) { alert(this.prefix + ": " + msg + suffix);}// Calling the function without any `this` value will default `this`// to the global object (`window` on web browsers)foo("Hi there", "!"); // Probably alerts "undefined: Hi there!" because the // global object probably doesn't have a `prefix` property// An object to use as `this`var obj = { prefix: "Test"};// Calling `foo` with `this` = `obj`, using `call` which accepts the arguments// to give `foo` as discrete arguments to `call`foo.call(obj, "Hi there", "!"); // alerts "Test: Hi there!" 
// ^----^-----------^---- Three discrete args, the first is for `this`, // the rest are the args to give `foo`// Calling `foo` with `this` = `obj`, using `apply` which accepts the arguments// to give `foo` as an arrayfoo.apply(obj, ["Hi there", "!"]); // alerts "Test: Hi there!" // ^---------------^---- Note that these are in an array, `apply` // takes exactly two args (`this` and the // args array to use) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4123341', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/223852/']} | jdg_84147 |
stackexchange | llm_judgeable_groundtruth_similarity | 15783317 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a folder 'Files' in my project. How do I upload files directly to this folder? I used the statements: string filename = Path.GetFileName(FileUpload1.PostedFile.FileName);FileUpload1.SaveAs(Server.MapPath(".\\File\\" + filename)); Is it correct?
Now provide the response and nothing else.
| The strange bit you see with my use of the date '31211231' is just a very large date to handle your "no-end-date" scenario. I have assumed you won't really have many date ranges per employee , so I've used a simple Recursive Common Table Expression to combine the ranges. To make it run faster, the starting anchor query keeps only those dates that will not link up to a prior range (per employee). The rest is just tree-walking the date ranges and growing the range. The final GROUP BY keeps only the largest date range built up per starting ANCHOR (employmentid, startdate) combination. SQL Fiddle MS SQL Server 2008 Schema Setup : create table Tbl ( employmentid int, startdate datetime, enddate datetime);insert Tbl values(5, '2007-12-03', '2011-08-26'),(5, '2013-05-02', null),(30, '2006-10-02', '2011-01-16'),(30, '2011-01-17', '2012-08-12'),(30, '2012-08-13', null),(66, '2007-09-24', null);/*-- expected outcomeEmploymentId StartDate EndDate5 2007-12-03 2011-08-265 2013-05-02 NULL30 2006-10-02 NULL66 2007-09-24 NULL*/ Query 1 : ;with cte as ( select a.employmentid, a.startdate, a.enddate from Tbl aleft join Tbl b on a.employmentid=b.employmentid and a.startdate-1=b.enddate where b.employmentid is null union all select a.employmentid, a.startdate, b.enddate from cte a join Tbl b on a.employmentid=b.employmentid and b.startdate-1=a.enddate) select employmentid, startdate, nullif(max(isnull(enddate,'32121231')),'32121231') enddate from cte group by employmentid, startdate order by employmentid Results : | EMPLOYMENTID | STARTDATE | ENDDATE |-----------------------------------------------------------------------------------| 5 | December, 03 2007 00:00:00+0000 | August, 26 2011 00:00:00+0000 || 5 | May, 02 2013 00:00:00+0000 | (null) || 30 | October, 02 2006 00:00:00+0000 | (null) || 66 | September, 24 2007 00:00:00+0000 | (null) | | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15783317', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2233054/']} | jdg_84148 |
stackexchange | llm_judgeable_groundtruth_similarity | 14887378 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using ctypes to implement a C++ function in Python. The C++ function should return a pointer to an array. Unfortunately I haven't figured out, how to access the array in Python. I tried numpy.frombuffer, but that was not successful. It just returned an array of arbitrary numbers. Obviously I didn't used it correctly. Here is a simple example with an array of size 10: Content of function.cpp: extern "C" int* function(){int* information = new int[10];for(int k=0;k<10;k++){ information[k] = k;}return information;} Content of wrapper.py: import ctypesimport numpy as npoutput = ctypes.CDLL('./library.so').function()ArrayType = ctypes.c_double*10array_pointer = ctypes.cast(output, ctypes.POINTER(ArrayType))print np.frombuffer(array_pointer.contents) To compile the C++ file i am using: g++ -c -fPIC function.cpp -o function.og++ -shared -Wl,-soname,library.so -o library.so function.o Do you have any suggestions what I have to do to access the array values in Python?
Now provide the response and nothing else.
| function.cpp returns an int array, while wrapper.py tries to interpret them as doubles. Change ArrayType to ctypes.c_int * 10 and it should work. It's probably easier to just use np.ctypeslib instead of frombuffer yourself. This should look something like import ctypesfrom numpy.ctypeslib import ndpointerlib = ctypes.CDLL('./library.so')lib.function.restype = ndpointer(dtype=ctypes.c_int, shape=(10,))res = lib.function() | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14887378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2063307/']} | jdg_84149 |
stackexchange | llm_judgeable_groundtruth_similarity | 1379341 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose the ellipse has equation $\frac{x^2}{b^2}+\frac{y^2}{a^2}=1$. I understand the way to obtain the surface area of the ellipsoid is to rotate the curve around y-axis and use surface of revolution. But I'm wondering how to obtain this formula that I found on the internet $Area=2\pi b^{2}\left ( 1+\frac{a}{b} \frac{arcsin\varepsilon }{\varepsilon }\right )$ with with eccentricity $\varepsilon$ given by $\varepsilon =\sqrt{1-\left ( \frac{b}{a} \right )^{2}}$ Can anybody help prove it for me just from the ellipse equation above and surface of revolution? Thanks a lot:).
Now provide the response and nothing else.
| I'm going to assume that the ellipse has the equation $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$ since that's the more standard assumption. Yours has $a$ and $b$ reversed and I'm not sure if you meant it that way or if that was a typo on your part (I think it was a typo since your expression for the eccentricity matches the standard one). I'll also assume $a \ge b$. The case $b \ge a$ is handled similarly. Now imagine that you rotate the ellipse around its long axis, that is, around the $x$ axis, and focus your attention on the strip that results from rotating a small arc of the ellipse, located at $(x,y)$ before the rotation. That elliptical arc has a length $ds$ that depends on its location, so it's a function of $(x,y)$. In fact, it's not hard to show that that arc-length is given by $$ds = dx\,\sqrt{1 + (\frac{dy}{dx})^2}$$ Anyway, the strip resulting from the rotation of that little elliptical arc has a circular shape and, therefore, an area approximately equal to $$dA = 2\pi\,\mbox{radius} \times ds$$ and you can see from the figure that the radius is just $y$, so $$dA = 2\pi\,y\,ds$$ Now, given the equation at the top, we find $$\frac{2x\,dx}{a^2} + \frac{2y\,dy}{b^2} = 0$$ so $$\frac{dy}{dx} = -\frac{b^2}{a^2}\,\frac{x}{y}$$ and $$ds = \frac{1}{a^2}\,\frac{dx}{y}\,\sqrt{b^4x^2 + a^4y^2}$$ As promised, $ds$ depends on $(x,y)$. Putting all of the above together, we find $$dA = dx\,\frac{2\pi}{a^2}\,\sqrt{b^4x^2 + a^4y^2}$$ The area of the entire surface of revolution is then twice the integral of the above expression, from $x=0$ to $x=a$. Twice because we're integrating over only half the ellipse: $$A = \frac{4\pi}{a^2}\int_{x\,=\,0}^{x\,=\,a} \sqrt{b^4x^2 + a^4y^2}\,dx$$ We still need to eliminate $y$, but that's easy. From the equation at the top, we find $$y^2 = b^2 - \frac{b^2}{a^2}\,x^2$$ and then: $$A = 4\pi\,\frac{b}{a}\int_{x\,=\,0}^{x\,=\,a} \sqrt{a^2 - (\frac{a^2 - b^2}{a^2})\,x^2}\,dx$$ The quantity $$\frac{a^2 - b^2}{a^2}$$ is none other than the ellipse's eccentricity $\varepsilon$. So, finally, we have $$A = 4\pi\,\frac{b}{a}\int_{x\,=\,0}^{x\,=\,a} \sqrt{a^2 - \varepsilon^2x^2}\,dx$$ Now use the parametrisation $x = a\,\sin\theta$ (Why $\sin$ instead of $\cos$? Because it makes the math easier down below. Shouldn't it be $\cos$, though? Not necessarily. Note that $x$ is now a dummy integration variable and we can choose any substitution we want) to get $$A = 4\pi\,ab\,\int_{\theta\,=\,0}^{\theta\,=\,\pi/2} \sqrt{1 - \varepsilon^2\sin^2\theta}\,\cos\theta\,d\theta$$ Next set $\sin\phi = \varepsilon\sin\theta$ so $\cos\phi\,d\phi = \varepsilon\cos\theta\,d\theta$ and $$A = 4\pi\,\frac{ab}{\varepsilon}\,\int\cos^2\phi\,d\phi$$ (I omitted the integration limits but will get back to them below) To integrate $\cos^2\phi$, we can use the fact that $\cos(2\phi) = \cos^2\phi - \sin^2\phi = 2\cos^2\phi - 1$. Thus, $$\cos^2\phi = \frac{1 + \cos(2\phi)}{2}$$ and $$\int\cos^2\phi\,d\phi = \int\frac{1 + \cos(2\phi)}{2}\,d\phi =\frac{\phi}{2} + \frac{\sin(2\phi)}{4}$$ Now back to the integration limits. Note that $\theta = 0$ implies $\sin\phi = 0$, thus $\phi = 0$, and $\theta = \pi/2$ implies $\sin\phi = \varepsilon$, that is, $\phi = \arcsin(\varepsilon)$. Note also that $0 \le \varepsilon \le 1$ since $a \ge b$. 
So then we get $$A = 2\pi\,\frac{ab}{\varepsilon}\,(\phi + \frac{\sin(2\phi)}{2})\,\big|_{0}^{\arcsin(\varepsilon)} =2\pi\,\frac{ab}{\varepsilon}\,\big[\,\arcsin(\varepsilon) + \frac{\sin(2\arcsin(\varepsilon))}{2}\,\big]$$Then, using $\sin(2\phi) = 2\sin\phi\cos\phi$, we find $$A = 2\pi\,\frac{ab}{\varepsilon}\,\big(\,\arcsin(\varepsilon) + \varepsilon\sqrt{1-\varepsilon^2}\,\big)$$ Finally, using the definition of the eccentricity, we get $$A = 2\pi\,\frac{ab}{\varepsilon}\,\big(\,\arcsin(\varepsilon) + \varepsilon\,\frac{b}{a}\,\big) = 2\pi\,b^2\,\big(1 + \frac{a}{b}\,\frac{\arcsin(\varepsilon)}{\varepsilon} \,\big)$$ which is the expression you wanted to prove. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1379341', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/257936/']} | jdg_84150 |
stackexchange | llm_judgeable_groundtruth_similarity | 28315948 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've read the docs and it appears that there's no discernible way to perform an ALTER TABLE ... ALTER COLUMN ... USING statement to directly convert a json type column to an hstore type. There's no function available (that I'm aware of) to perform the cast. The next best alternative I have is to create a new column of type hstore , copy my JSON data to that new column using some external tool, drop the old json column and rename the new hstore column to the old column's name. Is there a better way? What I have so far is: $ CREATE TABLE blah (unstructured_data JSON);$ ALTER TABLE blah ALTER COLUMN unstructured_data TYPE hstore USING CAST(unstructured_data AS hstore); ERROR: cannot cast type json to hstore
Now provide the response and nothing else.
| Unfortunately, PostgreSQL doesn't allow all kind of expressions within the USING clause of ALTER TABLE ... SET DATA TYPE ... (f.ex. sub-queries are disallowed). But, you can write a function to overcome this, you just need to decide what to do with advanced types (in object's values), like arrays & objects. Here is an example, which simply converts them to string: CREATE OR REPLACE FUNCTION my_json_to_hstore(json) RETURNS hstore IMMUTABLE STRICT LANGUAGE sqlAS $func$ SELECT hstore(array_agg(key), array_agg(value)) FROM json_each_text($1)$func$; After that, you can use this in your ALTER TABLE , like: ALTER TABLE blah ALTER COLUMN unstructured_data SET DATA TYPE hstore USING my_json_to_hstore(unstructured_data); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28315948', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/468653/']} | jdg_84151 |
stackexchange | llm_judgeable_groundtruth_similarity | 20223711 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the following code: int foo(const int& f) //version 1{ int g = f; return int(foo(g)); // calls itself, turning into SO}int& foo(int& f) //version 2{ f *= -1; return f;}int main(){ int f = 11; cout << foo(f) << endl; cout << foo(22) << endl;} The first cout prints -11 as expected; f is a lvalue , so it binds to the second version of foo (although it could bind to 1st version as well, the 2nd version it's a better match). The second call of foo is using a rvalue as parameter, so the only viable version of foo is the first one. So far, so good. Inside the first version of foo , I made a copy of the parameter so I could call the second version (with a lvalue ) and return a copy of it after the call of the second version of foo . The thing is this will turn into a stack overflow; still the first version of foo will be called. Could someone please explain to me why this happens? I would expect that g inside the first version of foo to bind to the second version of foo when passed as parameter.
Now provide the response and nothing else.
| It's simple really - foo at that point only means foo(const int& f) . There's no second choice. Not yet. Switch up the definitions. Or separate them: int foo(const int& f);int& foo(int& f);int main(){ int f = 11; cout << foo(f) << endl; cout << foo(22) << endl;}int foo(const int& f) //version 1{ int g = f; return int(foo(g)); // calls itself, turning into SO}int& foo(int& f) //version 2{ f *= -1; return f;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20223711', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1019710/']} | jdg_84152 |
stackexchange | llm_judgeable_groundtruth_similarity | 11719011 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have written a little script using sed to transform this: kaefert@Ultrablech ~ $ cat /sys/class/power_supply/BAT0/ueventPOWER_SUPPLY_NAME=BAT0POWER_SUPPLY_STATUS=FullPOWER_SUPPLY_PRESENT=1POWER_SUPPLY_TECHNOLOGY=Li-ionPOWER_SUPPLY_CYCLE_COUNT=0POWER_SUPPLY_VOLTAGE_MIN_DESIGN=7400000POWER_SUPPLY_VOLTAGE_NOW=8370000POWER_SUPPLY_POWER_NOW=0POWER_SUPPLY_ENERGY_FULL_DESIGN=45640000POWER_SUPPLY_ENERGY_FULL=44541000POWER_SUPPLY_ENERGY_NOW=44541000POWER_SUPPLY_MODEL_NAME=UX32-65POWER_SUPPLY_MANUFACTURER=ASUSTeKPOWER_SUPPLY_SERIAL_NUMBER= into a csv file format like this: kaefert@Ultrablech ~ $ Documents/Asus\ Zenbook\ UX32VD/power_to_csv.sh "date";"status";"voltage µV";"power µW";"energy full µWh";"energy now µWh"2012-07-30 11:29:01;"Full";8369000;0;44541000;44541000 2012-07-30 11:29:02;"Full";8369000;0;44541000;44541000 2012-07-30 11:29:04;"Full";8369000;0;44541000;44541000... (in a loop) What I would like now is to divide each of those numbers by 1.000.000 so that they don't represent µV but V and W instead of µW, so that they are easily interpretable on a quick glance. Of course I could do this manually afterwards once I've opened this csv inside libre office calc, but I would like to automatize it. So what I found is, that I can call external programs in between sed, like this: ...s/\nPOWER_SUPPLY_PRESENT=1\nPOWER_SUPPLY_TECHNOLOGY=Li-ion\nPOWER_SUPPLY_CYCLE_COUNT=0\nPOWER_SUPPLY_VOLTAGE_MIN_DESIGN=7400000\nPOWER_SUPPLY_VOLTAGE_NOW=\([0-9]\{1,\}\)/";'`echo 0`'\1/ and that I could get values like I want by something like this: echo "scale=6;3094030/1000000" | bc | sed 's/0\{1,\}$//' But the problem now is, how do I pass my match "\1" into the external command? If you are interested in looking at the full script, you'll find it there: http://koega.no-ip.org/mediawiki/index.php/Battery_info_to_csv
Now provide the response and nothing else.
| if your sed is GNU sed. you can use 'e' to pass matched group to external command/tools within sed command. an example might be helpful to make it clear: say, you have a problem: you have a string "120+20foobar" now you want to get the calculation result of 120+20 part, and replace "oo" to "xx" in "foobar" part. Note that this example is not for solving the problem above, just for showing the sed 'e' usage so you could make 120+20 in the first match group, and rest in 2nd group, then pass two groups to different command/tools and then get the result. like: kent$ echo "100+20foobar"|sed -r 's#([0-9+]*)(.*)#echo \1 \|bc\;echo \2 \| sed "s/oo/xx/g"#ge' 120 fxxbar in this way, you could nest many seds one in another one, till you get lost. :D | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11719011', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1561397/']} | jdg_84153 |
stackexchange | llm_judgeable_groundtruth_similarity | 7000605 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using this modified example from the Rails guides , how does one model a relational "has_many :through" association using mongoid? The challenge is that mongoid does not support has_many :through as ActiveRecord does. # doctor checking out patientclass Physician < ActiveRecord::Base has_many :appointments has_many :patients, :through => :appointments has_many :meeting_notes, :through => :appointmentsend# notes taken during the appointmentclass MeetingNote < ActiveRecord::Base has_many :appointments has_many :patients, :through => :appointments has_many :physicians, :through => :appointmentsend# the patientclass Patient < ActiveRecord::Base has_many :appointments has_many :physicians, :through => :appointments has_many :meeting_notes, :through => :appointmentsend# the appointmentclass Appointment < ActiveRecord::Base belongs_to :physician belongs_to :patient belongs_to :meeting_note # has timestamp attributeend
Now provide the response and nothing else.
| Mongoid doesn't have has_many :through or an equivalent feature. It would not be so useful with MongoDB because it does not support join queries so even if you could reference a related collection via another it would still require multiple queries. https://github.com/mongoid/mongoid/issues/544 Normally if you have a many-many relationship in a RDBMS you would model that differently in MongoDB using a field containing an array of 'foreign' keys on either side. For example: class Physician include Mongoid::Document has_and_belongs_to_many :patientsendclass Patient include Mongoid::Document has_and_belongs_to_many :physiciansend In other words you would eliminate the join table and it would have a similar effect to has_many :through in terms of access to the 'other side'. But in your case thats probably not appropriate because your join table is an Appointment class which carries some extra information, not just the association. How you model this depends to some extent on the queries that you need to run but it seems as though you will need to add the Appointment model and define associations to Patient and Physician something like this: class Physician include Mongoid::Document has_many :appointmentsendclass Appointment include Mongoid::Document belongs_to :physician belongs_to :patientendclass Patient include Mongoid::Document has_many :appointmentsend With relationships in MongoDB you always have to make a choice between embedded or associated documents. In your model I would guess that MeetingNotes are a good candidate for an embedded relationship. class Appointment include Mongoid::Document embeds_many :meeting_notesendclass MeetingNote include Mongoid::Document embedded_in :appointmentend This means that you can retrieve the notes together with an appointment all together, whereas you would need multiple queries if this was an association. You just have to bear in mind the 16MB size limit for a single document which might come into play if you have a very large number of meeting notes. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7000605', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/365858/']} | jdg_84154 |
stackexchange | llm_judgeable_groundtruth_similarity | 14210 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It is a known fact that median is resistant to outliers. If that is the case, when and why would we use the mean in the first place? One thing I can think of perhaps is to understand the presence of outliers i.e. if the median is far from the mean, then the distribution is skewed and perhaps the data needs to be examined to decide what is to be done with the outliers. Are there any other uses?
Now provide the response and nothing else.
| In a sense, the mean is used because it is sensitive to the data. If the distribution happens to be symmetric and the tails are about like the normal distribution, the mean is a very efficient summary of central tendency. The median, while being robust and well-defined for any continuous distribution, is only $\frac{2}{\pi}$ as efficient as the mean if the data happened to come from a normal distribution. It is this relative inefficiency of the median that keeps us from using it even more than we do. The relative inefficiency translates into a minor absolute inefficiency as the sample size gets large, so for large $n$ we can be more guilt-free about using the median. It is interesting to note that for a measure of variation (spread, dispersion), there is a very robust estimator that is 0.98 as efficient as the standard deviation, namely Gini's mean difference. This is the mean absolute difference between any two observations. [You have to multiply the sample standard deviation by a constant to estimate the same quantity estimated by Gini's mean difference.] An efficient measure of central tendency is the Hodges-Lehmann estimator, i.e., the median of all pairwise means. We would use it more if its interpretation were simpler. | {} | {'log_upvote_score': 8, 'links': ['https://stats.stackexchange.com/questions/14210', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/2164/']} | jdg_84155 |
stackexchange | llm_judgeable_groundtruth_similarity | 43853812 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
After doing lots of research it seems this is a grey area... I am trying to send occasional network requests via websockets once a user has entered the background state (using the Swift library Starscream, but I believe the problem is with iOS and sockets in general). Shortly after the user leaves the app, the socket connection automatically disconnects. I gather this has something to do with Apple's policy on 3rd party network activity in the background. In my AppDelegate, I have tried reconnecting when the user leaves, etc, but this does not work. I have also heard about workarounds involving playing audio, but apparently this will stop my app from being published to the App Store (if that's not the case, then why and how does this work?). Others say I need some sort of grant from Apple, how do I request this? It would be great for someone to clear this up, and provide a legitimate solution. I feel this is something that apps should be capable of, so I'm waiting to find a solution.
Now provide the response and nothing else.
| I believe there is no legitimate way to truly work around this. Apple doesn't want apps doing stuff in the background because background activity is a big battery drain and it could make iPhone users feel their battery doesn't last enough (in addition to other issues such as "unexplained" network usage and so on), so they only provide very limited options regarding background activity in iOS apps for the sake of user experience. However, we can keep the app alive in some manners: From the App Programming Guide for iOS : When you find it necessary to keep your app running in the background, iOS helps you do so efficiently and without draining system resources or the user’s battery. The techniques offered by iOS fall into three categories: Apps that start a short task in the foreground can ask for time to finish that task when the app moves to the background. Apps that initiate downloads in the foreground can hand off management of those downloads to the system, thereby allowing the app to be suspended or terminated while the download continues. Apps that need to run in the background to support specific types of tasks can declare their support for one or more background execution modes. So it seems that, other than asking iOS to allow the app to finish short tasks or downloads, the only way to request the system to allow the app to run in the background is to specify a background execution mode in our Info.plist . This can be done in XCode's Capabilities dialog for your project, or by editing the property list file directly. Let's check which background execution modes we have available: In iOS, only specific app types are allowed to run in the background: Apps that play audible content to the user while in the background, such as a music player app Apps that record audio content while in the background Apps that keep users informed of their location at all times, such as a navigation app Apps that support Voice over Internet Protocol (VoIP) Apps that need to download and process new content regularly Apps that receive regular updates from external accessories Keeping a socket alive could fall into the "Apps that need to download and process new content regularly" use case, so let's check that: Fetching Small Amounts of Content Opportunistically Apps that need to check for new content periodically can ask the system to wake them up so that they can initiate a fetch operation for that content. To support this mode, enable the Background fetch option from the Background modes section of the Capabilities tab in your Xcode project. (You can also enable this support by including the UIBackgroundModes key with the fetch value in your app’s Info.plist file.) Enabling this mode is not a guarantee that the system will give your app any time to perform background fetches. The system must balance your app’s need to fetch content with the needs of other apps and the system itself. After assessing that information, the system gives time to apps when there are good opportunities to do so. So it seems that this option is only usable for obtaining small amounts of content through HTTP requests (or other network requests), not for the kind of two-way constant communication a websocket would allow you to use. In fact, looking at other related answers , it seems there is indeed no legitimate manner to keep a socket open when the app enters background mode. This means that, to do what you want, you cannot use websockets as your only communication channel. 
I'd recommend you to either use the fetch background mode (as described above) in order to fetch content in larger chunks than you would using the websocket while the app is in the background, or if you want the user to be able to see that new content is available, you could implement push notifications . You can't use Push Notifications to send large amounts of content directly, but they can be used to prompt the user that there is new content available when they open your app. Regardless of whether you use background fetch or Push Notifications, you should implement methods on your App Delegate which will synchronize the app's state with your backend's state whenever your app is brought back from the background state. Finally, regarding using audio as a workaround: The audio background state key will allow your app to stay alive indefinitely in the background - but if your app does not truly use it in order to play audio, it will get rejected by the app store. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43853812', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2061989/']} | jdg_84156 |
stackexchange | llm_judgeable_groundtruth_similarity | 63207784 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm creating a calculator where each of the calculator's functions have to be in a separate class and called from the main method using scanner input. 1.Add 2.Subtract 3.Multiplication 4.Division 5.Square 6.Power 7.Mod operation 8.Factorial 0.Quit I must create each method called by an object. The calculator requires one main class (has main () method) and one user defined class which has above calculating methods. The program must let the user choose an operation (one of above calculations) and operand(s) (numbers). Some calculations require two operands. (e.g. A + B, AB) Some calculations require one operand. (e.g. A2, N!) Additional functions: •Let the calculation function continues until user wants to exit this program. •When one calculation is done, let user choose another operation. •This program terminates when user selects END option. •Implement all operations. Do not use Java library math methods. I have the code for a program that is using switch statement and case, but our class hasn't even learned any of that. I have looked everywhere online for the past two days and just can't figure it out, so apologies if the solution is simple, and thank you to all help in advance. Here is the code to my program so far. public static void main(String[] args) { Scanner s = new Scanner(System.in); int choice; do { System.out.println("[1] Add "); System.out.println("[2] Subtract "); System.out.println("[3] Multiply "); System.out.println("[4] Division "); System.out.println("[5] Square "); System.out.println("[6] Power "); System.out.println("[7] Mod Operation "); System.out.println("[8] Factorial "); System.out.println("[0] Quit "); System.out.println("Please enter your choice: "); choice = s.nextInt(); int num1, num2; switch(choice) { case 1 : System.out.println("Enter two numbers to add: "); num1 = s.nextInt(); num2 = s.nextInt(); System.out.println("The sum of " + num1 + " and " + num2 + " is: " + add(num1, num2)); break; case 2 : System.out.println("Enter two numbers to subtract: "); num1 = s.nextInt(); num2 = s.nextInt(); System.out.println("The difference of " + num1 + " and " + num2 + " is: " + diff(num1, num2)); break; case 3 : System.out.println("Enter two numbers to multiply: "); num1 = s.nextInt(); num2 = s.nextInt(); System.out.println("The product of " + num1 + " and " + num2 + " is: " + prod(num1, num2)); break; case 4 : System.out.println("Enter two numbers to divide: "); num1 = s.nextInt(); num2 = s.nextInt(); System.out.println("The quotient of " + num1 + " and " + num2 + " is: " + quo(num1, num2)); break; case 5 : System.out.println("A number to square: "); num1 = s.nextInt(); System.out.println("The square of " + num1 + " is: " + square(num1)); break; case 6 : System.out.println("Enter the base and the exponent: "); num1 = s.nextInt(); num2 = s.nextInt(); System.out.println("The power of " + num1 + " to the " + num2 + "th power is: " + power(num1, num2)); break; case 7 : System.out.println("Enter two numbers to get the interger remainder of (modulo): "); num1 = s.nextInt(); num2 = s.nextInt(); System.out.println("The modulo of " + num1 + " and " + num2 + " is: " + mod(num1, num2)); break; case 8 : System.out.println("Enter a number to get the factorial of: "); num1 = s.nextInt(); System.out.println("The factorial of " + num1 + " is: " + factorial(num1)); break; case 0: System.out.println("Thank you for using my program...good bye!"); System.exit(0); } } while(choice != 0); s.close();}public static int add(int num1, int num2){ return num1 + num2;}public static int diff(int 
num1, int num2){ return num1 - num2;}public static int prod(int num1, int num2){ return num1 * num2;}public static double quo(int num1, int num2){ return (double)num1 / num2;}public static int mod(int num1, int num2){ return num1 % num2;}public static long power(int base, int exp){ long result = 1; while (exp != 0) { result *= base; --exp; } return result;}public static int square(int num){ return num * num;}public static int factorial(int base){ if (base == 0) return 1; else return(base * factorial(base - 1)); } }
Now provide the response and nothing else.
| The solution is quite simple. So a few days ago I came back to look at my question only to see that there was still no new solution, so I tried Bartek solution and wasn't able to make it work, and even if it did work, the clean Hilt code was becoming too messy, so I did a little investigation and played a little and discovered that the solution is actually stupidly easy. It goes like this: App: @HiltAndroidAppclass MyApp: Application() Activity: (implements callback) @AndroidEntryPointclass MainActivity : AppCompatActivity(), SomeClass.Callback { @Inject lateinit var someClass: SomeClass override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) } override fun onWhatEver() { // implement }} SomeClass: (with inner callback) class SomeClass @Inject constructor( private val callback: Callback) { fun activateCallback(){ callback.onWhatEver() } interface Callback{ fun onWhatEver() }} SomeModule: (providing/binding the activity to the callback) @Module@InstallIn(ActivityComponent::class)object SomeModule{ @Provides fun provideCallback(activity: Activity) = activity as SomeClass.Callback } And that's all we need.We cannot bind the activity to the callback with @Bind because it needs to be explicitly provided and cast to the callback so that the app can build. The module is installed in ActivityComponent and is aware of a generic 'activity', if we cast it to the callback, Hilt is content and the activity is bound to the callback, and Hilt will know how to provide the callback as long as its in the specific activity scope. Multiple Activities/Fragments App: @HiltAndroidAppclass MyApp: Application() BooksActivity: @AndroidEntryPointclass BooksActivity : AppCompatActivity(), BooksAdapter.Callback{ @Inject lateinit var adapter: BooksAdapter ... override fun onItemClicked(book: Book) {...} }} AuthorsActivity: @AndroidEntryPointclass AuthorsActivity : AppCompatActivity(), AuthorsAdapter.Callback{ @Inject lateinit var adapter: AuthorsAdapter ... override fun onItemClicked(author: Author) {...}} BooksAdapter class BooksAdapter @Inject constructor ( val callback: Callback) ... { ... interface Callback{ fun onItemClicked(book: Book) }} AuthorsAdapter class AuthorsAdapter @Inject constructor ( val callback: Callback) ... { ... interface Callback{ fun onItemClicked(auhtor: Auhtor) }} AuhtorsModule @Module@InstallIn(ActivityComponent::class)object AuthorsModule { @Provides fun provideCallback(activity: Activity) = activity as AuthorsAdapter.Callback} BooksModule @Module@InstallIn(ActivityComponent::class)object BooksModule { @Provides fun provideCallback(activity: Activity) = activity as BooksAdapter.Callback} The Modules can be joined to one module with no problem, just change the names of the functions. This is offcourse applicable for more activities and/or multiple fragments.. for all logical cases. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/63207784', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14007503/']} | jdg_84157 |
stackexchange | llm_judgeable_groundtruth_similarity | 32080768 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know this has been asked here before but none of the answers seem to work for my case I bought this theme Angle which is working with Angular 1.4.2 and Angular translate 2.6.0 (even updated to last 2.7.2) The template by default has the Translate module on it This is the config file $translateProvider.useStaticFilesLoader({ prefix : 'app/i18n/', suffix : '.json' }); $translateProvider.preferredLanguage('es'); $translateProvider.useLocalStorage(); $translateProvider.usePostCompiling(true); // Enable escaping of HTML $translateProvider.useSanitizeValueStrategy('sanitize'); // I added this line based on Docs wasn't before And the translation files in JSON format { "page": { "PAGES_WELCOME" : "Welcome to <br> MY APPLICATION, HEY THERE IS A BR TAG BEFORE ME" }, "login": { . . . . }, But i cannot add HTML tags inside the text, on the JSON file, instead of getting Welcome toMY APP I'm getting Welcome to < br > MY APP How can i fix this? EDIT I do NOT want to remove the tags, my JSON file is modified by the backend, and it can and will contain HTML Tags, i want those tags to work on the output. JADE Example Where the content is binding div(class="col-lg-4 col-md-4 col-sm-4 col-xs-12 footer-left") p(class="text-center") {{ 'page.PAGES_WELCOME' | translate }}
Now provide the response and nothing else.
| Angular sanitizes any html strings during its interpolation.In order to get around this you will need to mark the HTML as safe in $sce before injecting. Then also use ngBindHtml to output the html. I've not used angular-translate before, but this may work: //app is the main moduleapp.filter("htmlSafe", ['$sce', function($sce) { return function(htmlCode){ return $sce.trustAsHtml(htmlCode); };}]);//then to output it<span data-ng-bind-html="'page.PAGES_WELCOME' | translate | htmlSafe"></span> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32080768', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1766201/']} | jdg_84158 |
stackexchange | llm_judgeable_groundtruth_similarity | 533103 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have already installed and configured new version of PHP with Homebrew, including MySQL support and XDebug. That works fine. However, I need to install the support for PostgreSQL as well. How can I do it? I have figured out it is easy when installing PHP for the first time, just by adding the option: homebrew install php53 --with-pgsql How can I achieve something similar when my PHP is already installed, without reinstalling it completely? Or, if I reinstall, are all configs and additional settings, such as XDebug, kept?
Now provide the response and nothing else.
| brew help does not show every option available. To get more details about all of the available commands, see man brew . In your case, you can run the reinstall command: brew reinstall php53 --with-postgresql According to the brew man page, this is synonymous with running uninstall and then install. It does appear that brew will back up known configuration files first. If brew complains about not being able to find pg_config , then that means you need to install postgresql first -- it has to be postgresql ; otherwise it'll never find pg_config . I spent an inordinate amount of time trying to figure out why --with-pgsql wouldn't work when I had the postgresql92 formulae installed. I can personally confirm that it will not remove any custom configuration files. | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/533103', 'https://serverfault.com', 'https://serverfault.com/users/186784/']} | jdg_84159 |
stackexchange | llm_judgeable_groundtruth_similarity | 8379995 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How is it that you can access a null nullable's property HasValue? I looked at the compiled code, and it's not syntactic sugar. Why doesn't this throw a NullReferenceException: int? x = null;if (x.HasValue){...}
Now provide the response and nothing else.
| That's because int? is short for Nullable<int> which is a value type, not a reference type - so you will never get a NullReferenceException . The Nullable<T> struct looks something like this: public struct Nullable<T> where T : struct{ private readonly T value; private readonly bool hasValue; //..} When you assign null there is some magic happening with support by the compiler (which knows about Nullable<T> and treats them special in this way) which just sets the hasValue field to false for this instance - which is then returned by the HasValue property. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8379995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/601179/']} | jdg_84160 |
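A small self-contained sketch illustrating the point above: calling HasValue on a "null" nullable is just a read of a struct field, so it can never throw a NullReferenceException.

    using System;

    class NullableDemo
    {
        static void Main()
        {
            int? x = null;                 // a Nullable<int> struct with hasValue == false

            Console.WriteLine(x.HasValue); // False - no exception, just a field read
            Console.WriteLine(x ?? -1);    // -1, fallback when there is no value

            x = 5;
            Console.WriteLine(x.HasValue); // True
            Console.WriteLine(x.Value);    // 5

            // Note: x.Value on a null nullable throws InvalidOperationException,
            // not NullReferenceException.
        }
    }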
stackexchange | llm_judgeable_groundtruth_similarity | 21551854 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am switching from Maven to sbt for a Scala project I am working on. I used to work with the Maven assembly plugin, where you can map any directory in the workspace to a target directory in the assembly. I didn't find any equivalent in sbt-native-packager; it would be worth providing this feature for the Universal kind. I understood that everything that is present in the universal subdirectory is copied to the package as such, and it works like a charm, but I lack something like the following snippet. mappings in Universal += { directory("my/local/dir") -> "static/dirInPackage"} I would like to know if there is already a way to do that; in that case, I would be happy to know how to do it, and I offer my help to contribute documentation for that part if you want. If there is no way to do this kind of customization, I will be happy to propose a patch for it after having discussed specifications. By the way, great job, your packager is working very well, thanks!
Now provide the response and nothing else.
| After having discussed with the sbt-native-packager team and a first "rejected" pull request, here is the way to do this directory mapping in the build.sbt file (see pull request https://github.com/sbt/sbt-native-packager/pull/160 which provides more detailed documentation) : mappings in Universal <++= (packageBin in Compile, target ) map { (_, target) => val dir = target / "scala-2.10" / "api" (dir.***) pair relativeTo(dir.getParentFile)} To reduce the verbosity of the above snippet, there is an issue ( https://github.com/sbt/sbt-native-packager/issues/161 ) proposing a more human-readable way to express this directory mapping: mappings in Universal ++= allFilesRelativeTo(file(target / "scala-2.10" / "api")) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21551854', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3248793/']} | jdg_84161
stackexchange | llm_judgeable_groundtruth_similarity | 25639169 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I managed to produce the graph correctly, but with some more testing noted inconsistent result for the following two different line of codes: colors = [h.edge[i][j]['color'] for (i,j) in h.edges_iter()]widths = [h.edge[i][j]['width'] for (i,j) in h.edges_iter()]nx.draw_circular(h, edge_color=colors, width=widths) This approach results in consistent output, while the following produces wrong color/size per the orders of edges: colors = list(nx.get_edge_attributes(h,'color').values())widths = list(nx.get_edge_attributes(h,'width').values())nx.draw_circular(h, edge_color=colors, width=widths) However, it looks to me the above two lines both rely on the function call to return the attributes per the order of edges. Why the different results? It looks a bit clumsy to me to access attributes with h[][][] ; is it possible to access it by dot convention, e.g. edge.color for edge in h.edges() . Or did I miss anything?
Now provide the response and nothing else.
| The order of the edges passed to the drawing functions is important. If you don't specify it (using the edges keyword) you'll get the default order of G.edges(). It is safest to explicitly give the parameter like this: import networkx as nx G = nx.Graph() G.add_edge(1,2,color='r',weight=2) G.add_edge(2,3,color='b',weight=4) G.add_edge(3,4,color='g',weight=6) pos = nx.circular_layout(G) edges = G.edges() colors = [G[u][v]['color'] for u,v in edges] weights = [G[u][v]['weight'] for u,v in edges] nx.draw(G, pos, edges=edges, edge_color=colors, width=weights) This results in an output like this: | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25639169', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3363026/']} | jdg_84162
stackexchange | llm_judgeable_groundtruth_similarity | 38467749 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Pandas DataFrame: import pandas as pddf = pd.DataFrame([[0.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0], [1.0, 0.0, 1.0, 3.0, 1.0, 1.0, 7.0, 0.0], [0.0, 0.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0] ] , columns=['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']) A B C D E F G H0 0.0 2.0 3.0 4.0 5.0 6.0 7.0 8.01 1.0 0.0 1.0 3.0 1.0 1.0 7.0 0.02 0.0 0.0 13.0 14.0 15.0 16.0 17.0 18.0 And I'd like to return a series (not a list) of the first non-zero value in each row. This currently works but lookup returns a list instead of a series (I know I can convert the list to a series) but I'm assuming there's a better way: first_nonzero_colnames = (df > 0).idxmax(axis=1, skipna=True)df.lookup(first_nonzero_colnames.index, first_nonzero_colnames.values)[ 2. 1. 13.] I can use .apply but I want to avoid it.
Now provide the response and nothing else.
| Try this: res = df[df != 0.0].bfill(axis=1)['A'] All I'm doing is replacing all zeros with NaNs and then filling them in from the right, which forces all resulting values in the first column to be the first non-zero value in the row. Or, a quicker way, as suggested by @piRSquared: df.replace(0, np.nan).bfill(1).iloc[:, 0] | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38467749', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2955541/']} | jdg_84163
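A worked sketch putting the answer together with the DataFrame from the question; the column labels and values are the ones given there.

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(
        [[0.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0],
         [1.0, 0.0, 1.0, 3.0, 1.0, 1.0, 7.0, 0.0],
         [0.0, 0.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0]],
        columns=list('ABCDEFGH'))

    # Zeros become NaN, each row is back-filled from the right, so column 'A'
    # ends up holding the first non-zero value of its row.
    first_nonzero = df[df != 0.0].bfill(axis=1)['A']
    print(first_nonzero)        # 0: 2.0, 1: 1.0, 2: 13.0 (a Series, not a list)

    # The shorter variant from the answer, written with an explicit axis
    print(df.replace(0, np.nan).bfill(axis=1).iloc[:, 0])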
stackexchange | llm_judgeable_groundtruth_similarity | 2987223 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I wrote a proof, and I just wanted to verify if it is correct. Proof : Suppose not. Then, $[0,1] = U\cup V$ for $U$ and $V$ open in $[0,1]$ such that $U\cap V = \emptyset$ . Let $0\in U$ . Consider $s= sup\{t \mid [0,t] \subset U\}$ . Clearly, $s\in[0,1]$ . If $s\in U$ , then since $U$ is open, $\exists \epsilon >0$ such that $B(s;\epsilon) \subset U$ . But then $s+\epsilon \in U$ , and $s+\epsilon > s$ , contradicting the definition of $s$ . So, $s \notin U$ . Then, $s\in V$ . But again, since $V$ is open, $\exists \epsilon >0$ such that $B(s;\epsilon)\subset V$ . But, since $s$ is the least upper bound of $U$ , for this $\epsilon$ , $\exists s_{1} \in U$ such that $s_{1} \in B(s;\epsilon)$ . Then, $s_{1} \in U\cap V$ , which contradicts the fact that $U$ and $V$ are disjoint. So, our assumption is wrong and $[0,1]$ is connected.
Now provide the response and nothing else.
| You're probably not looking for the answer still after a year, but for those who come across this in the future, it may be helpful. You are pretty much right about your proof, but here is a more formal way of writing it. That is that "If $a<b$ , then the subspace $[a,b]$ of $E^1$ is connected". Let $a<b$ and let $[a,b]$ be a subspace of $R$ with the $E^{1}$ topology. For the sake of contradiction, assume that $[a,b]$ is not connected. Then $[a,b]= U \cup V$ , where $U$ and $V$ are nonempty disjoint open sets in $[a,b]$ . Without loss of generality we may assume that $b \in V$ . Because $U$ is nonempty and bounded above (by $b$ ), the axiom of completeness states that $U$ has a least upper bound $s$ . We will prove that $s$ is not an element of either $U$ or $V$ , and this will yield a contradiction. Suppose $s \in U$ . Since $U$ is open, there exists an $\epsilon> 0$ such that $B_{\epsilon}(s) \subseteq U$ . Thus, since $s+ \frac{\epsilon}{2} \in U$ and $s<s+ \frac{\epsilon}{2}$ , there exists an element in $U$ that is greater than $s$ . Therefore, $s$ is not an upper bound for $U$ . This is a contradiction. Thus, $s \not \in U$ . Now suppose $s \in V$ . Then, because $V$ is open, there exists an $\epsilon>0$ such that $B_{\epsilon}(s) \subseteq V$ . Then $s- \frac{\epsilon}{2}$ is an upper bound for $U$ . But $s- \frac{\epsilon}{2}<s$ , which is a contradiction because $s$ is the least upper bound for $U$ . So $s \not \in V$ . Thus, $s \not \in U \cup V$ , which is a contradiction. Therefore, $[a,b]$ is connected. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2987223', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/570328/']} | jdg_84164 |
stackexchange | llm_judgeable_groundtruth_similarity | 18865674 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to split the string using the Split function in Java: String empName="employee name | employee Email";String[] empDetails=empName.split("|"); It gives me the result empDetails[0]="e";empDetails[1]="m";empDetails[2]="p";empDetails[3]="l";empDetails[4]="o";empDetails[5]="y";empDetails[6]="e";empDetails[7]="e";... but when I try the following code String empName="employee name - employee Email";String[] empDetails=empName.split("-"); it gives me empDetails[0]="employee name "; empDetails[1]=" employee Email"; Why can the Java split function not split the string separated by "|"?
Now provide the response and nothing else.
| The String#split() method accepts a regex, not a plain String. Since | is a metacharacter, it has a special meaning in regex. It works when you escape it: String[] empDetails=empName.split("\\|"); Update: Handling special characters in Java: OFFICIAL DOCS . As a side note: in Java, method names start with a lowercase letter, so it should be split(), not Split(). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18865674', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1363086/']} | jdg_84165
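A runnable sketch of the fix, with Pattern.quote shown as an alternative way to escape the metacharacter; the variable names mirror the question.

    import java.util.Arrays;
    import java.util.regex.Pattern;

    public class SplitDemo {
        public static void main(String[] args) {
            String empName = "employee name | employee Email";

            // Escape the | metacharacter by hand in the regex...
            String[] empDetails = empName.split("\\|");

            // ...or let Pattern.quote produce the escaped regex for you
            String[] viaQuote = empName.split(Pattern.quote("|"));

            System.out.println(Arrays.toString(empDetails));
            System.out.println(Arrays.toString(viaQuote));
            // Both print: [employee name ,  employee Email]
        }
    }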
stackexchange | llm_judgeable_groundtruth_similarity | 166008 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why M-theory has only M2 and M5 branes? In string theory, depending on the type one considers, you get all kind of D-branes. What is so special in M-theory that only allows 2 and 5 branes?
Now provide the response and nothing else.
| The type of branes a theory has is constrained by the p-form fields it contains. There is only one supersymmetric gravity theory in 11d, and it has a single $p$-form with $p=4$, call it $G_4$. This means that the theory can possess brane solutions that either couple electrically to $G_4$ (M2 branes) or magnetically to $G_4$ (M5 branes). To understand the dimensionality of an electric coupling: the $p$-form field corresponds to a $(p-1)$-form potential. $p-2$ of these directions correspond to the extended brane directions (if there are any), and the last one corresponds to time. Compare this with the familiar case of electromagnetism where $A_t$ couples electrically to a point particle's worldline, and the field strength is $F_2 = dA$. In the case of M-theory, this means that $G_4$ couples electrically to a two-dimensional object, i.e. M2 branes. To see where the magnetic M5 branes come about, simply take the Hodge dual to obtain a 7-form: $\star G_4 = G_7$, and then $G_7$ couples (by the same arithmetic as above) to a 5-brane, i.e. M5 branes. Oh, and just a comment, in the 10D string theories there are indeed many different branes, but not all kinds. They are constrained by the same counting. So for example, in terms of D-branes (not counting the NS branes), IIB only has odd-dimension branes and IIA has even ones. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/166008', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/61966/']} | jdg_84166
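The dimension counting used in the answer can be written compactly; this is only a restatement of the arithmetic above, not additional physics (requires amsmath for \text).

    % A p-form field strength G_p has a (p-1)-form potential C_{p-1}, which pulls
    % back to the worldvolume of an object with (p-2) spatial directions plus time.
    \[
      G_4 \;\to\; C_3 \;\to\; (4-2)\text{-brane} = \text{M2},
      \qquad
      \star G_4 = G_7 \;\to\; C_6 \;\to\; (7-2)\text{-brane} = \text{M5}.
    \]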
stackexchange | llm_judgeable_groundtruth_similarity | 9144593 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been looking for solutions and found the following piece of code. But I do not know how to use it, unfortunately. - (BOOL)textView:(UITextView *)textView shouldChangeTextInRange:(NSRange)range replacementText:(NSString *)string { NSUInteger newLength = [textField.text length] + [string length] - range.length; return (newLength > 25) ? NO : YES; } Just for testing purposes I set up an IBACTION -(IBAction)checkIfCorrectLength:(id)sender{ [self textView:myTextView shouldChangeTextInRange: ?? replacementText: ?? ];} What do I pass for shouldChangeTextInRange and replacementText ? Or am I getting it completely wrong ?
Now provide the response and nothing else.
| Calling textView:shouldChangeTextInRange:replacementText: from checkIfCorrectLength: doesn't make sense. If you want to test the length from multiple methods, factor the test out into its own method: - (BOOL)isAcceptableTextLength:(NSUInteger)length { return length <= 25;}- (BOOL)textView:(UITextView *)textView shouldChangeTextInRange:(NSRange)range replacementText:(NSString *)string { return [self isAcceptableTextLength:textField.text.length + string.length - range.length];}-(IBAction)checkIfCorrectLength:(id)sender{ if (![self isAcceptableTextLength:self.textField.text.length]) { // do something to make text shorter }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9144593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1066899/']} | jdg_84167 |
stackexchange | llm_judgeable_groundtruth_similarity | 51175 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Whilst reading Tanenbaum - Computer Networks they talk about IP scarcity at the level of ISPs: IP addresses are scarce. An ISP might have a /16 address, giving it 65,534 usable host numbers. If it has more than that, it has a problem. In the next few pages there is a talk about DHCP and NAT to solve this problem. So I googled around and found out that certain ISPs do indeed use CGNAT to link a certain area (street, group of houses, ..) behind one NAT box. I know the use of CGNAT is rather controversial, organisations such as Europol are doing their best to get rid of it altogether. So is/was there another way to do it? Or did most ISPs turn to IPv6 the moment they started to run out of IPs without resorting to CGNAT.
Now provide the response and nothing else.
| Tanenbaum exaggerated a bit. While IPv4 addresses are indeed scarce, ISPs could get as many addresses as they needed. They had to show good justification though, by showing the number of customers, DHCP leases etc. But large ISPs (or large hosting providers like Amazon) can have millions of IPv4 addresses. These days it works differently. Normal distribution of IPv4 addresses has run out, so you can only get a tiny amount. After that you'll have to convince someone to sell you some of their addresses. There are specialised brokers for this as well. This means that technologies that use address-sharing mechanisms are unavoidable. The most useful ones work on an IPv6-only backbone and provide IPv4 as a service over the top. That way the least amount of IPv4 space is wasted. Some of them like MAP and Lw4o6 don't even use CGN but share based on port number, which makes them much cheaper to implement. | {} | {'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/51175', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/48088/']} | jdg_84168
stackexchange | llm_judgeable_groundtruth_similarity | 40548213 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there some difference between using Array.from(document.querySelectorAll('div')) and [...document.querySelectorAll('div')] ? Here is an example: let spreadDivArray = [...document.querySelectorAll('div')];console.log(spreadDivArray);let divArrayFrom = Array.from(document.querySelectorAll('div'));console.log(divArrayFrom); The console.log() will log the same result. Is there any performance difference?
Now provide the response and nothing else.
| Spread element ( it's not an operator ) works only with objects that are iterable (i.e. implement the @@iterator method). Array.from() works also on array-like objects (i.e. objects that have the length property and indexed elements) which are not iterable. See this example: const arrayLikeObject = { 0: 'a', 1: 'b', length: 2 };// This logs ['a', 'b']console.log(Array.from(arrayLikeObject));// This throws TypeError: arrayLikeObject[Symbol.iterator] is not a functionconsole.log([...arrayLikeObject]); Also, if you just want to convert something to array, I think it's better to use Array.from() because it's more readable. Spread elements are useful for example when you want to concatenate multiple arrays ( ['a', 'b', ...someArray, ...someOtherArray] ). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40548213', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2097607/']} | jdg_84169 |
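A short sketch contrasting the two on an iterable with no length property (a Set) and on the array-like object from the answer:

    const s = new Set(['a', 'b', 'c']);

    // Both work for iterables such as Set, Map, strings, NodeList...
    console.log([...s]);             // ['a', 'b', 'c']
    console.log(Array.from(s));      // ['a', 'b', 'c']

    // Only Array.from works for a plain array-like object
    const arrayLike = { 0: 'a', 1: 'b', length: 2 };
    console.log(Array.from(arrayLike));              // ['a', 'b']
    // [...arrayLike] would throw: arrayLike is not iterable

    // Array.from also accepts a mapping function as its second argument
    console.log(Array.from(arrayLike, x => x.toUpperCase()));  // ['A', 'B']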
stackexchange | llm_judgeable_groundtruth_similarity | 126223 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When do we use the set -k option in bash? The Bash reference manual says: All arguments in the form of assignment statements are placed in the environment for a command, not just those that precede the command name. I understand what the option does, but I cannot imagine when we would need it.
Now provide the response and nothing else.
| You can basically use it anytime you want to "inject" the environment variables passed into a shell script (as arguments) as if they were set within the environment via export , but without having to have them permanently reside in export 's list prior to running commands. NOTE: There's also the long form of the -k switch, set -o keyword . Example $ cat cmd1.bash #!/bin/bashecho $VARCMD Now if I set -k : $ set -k; ./cmd1.bash VARCMD="hi"; set +khi But if I just were to run the above script: $ ./cmd1.bash $ What's export doing? $ help export...Marks each NAME for automatic export to the environment of subsequentlyexecuted commands. If VALUE is supplied, assign VALUE before exporting.... So if we were to add export | grep VAR to our script like so: $ cat cmd2.bash #!/bin/bashecho $VARCMDexport | grep VAR And we ran our above tests again: $ set -k; ./cmd2.bash VARCMD="hi"; set +khideclare -x VARCMD="hi" But without set -k : $ ./cmd2.bash $ So set -k is allowing us to temporarily export variables in mass. Another example $ cat cmd3.bash #!/bin/bashecho $VARCMD1echo $VARCMD2export | grep VAR When we set multiple variables they're all exported: $ set -k; ./cmd3.bash VARCMD1="hi" VARCMD2="bye"; set +khibyedeclare -x VARCMD1="hi"declare -x VARCMD2="bye" So then it's just injecting all the environment variables? No -k is doing a very explicit thing here. It's only exporting variables that were included on the command line when a command was executed. Example Say I set this variable: $ VARCMD1="hi" Now when we run the same command omitting VARCMD1="hi" : $ set -k; ./cmd3.bash VARCMD2="bye"; set +kbyedeclare -x VARCMD2="bye" But why does this exist? I found this source which explains a bit about this feature, titled: "Keyword Parameter Assignment Strings". NOTE: The source URL uses an IP address so I cannot link to it directly here on SE. http://140.120.7.21/OpenSystem2/SoftwareTools/node16.html When programming in any language, the variable and its value passing is critical for writing reliable code. Beside the integer and array variable types, all other shell variables accept strings as their values. When talking about shell programming language, to be consistent, we prefer the phrase "keyword parameter". Here are a few points to watch out when assigning values to keyword parameters: To avoid any unexpected effect, always place parameter assignment substring in front of a command string. In the B shell, the assigned values of keyword parameters will get stored in (local) shell variables. In bash and ksh, the keyword parameter assignment strings preceding command will not be stored in the shell variables. They only affect the immediate subprocess forked to execute the current command. A line of keyword parameters assignment strings alone does get stored in the (local) shell variables. Keyword parameter assignment strings may also appear as arguments to the alias, declare, typeset, export, readonly, and local builtin commands. [Section 3.4 of Bash Reference Manual] The keyword parameter assignment strings will be treated as arguments for the command to be executed, if they are placed after the command name. The keyword parameters may be manipulated by the set command. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/126223', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/64173/']} | jdg_84170 |
stackexchange | llm_judgeable_groundtruth_similarity | 30331282 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to use WebdriverWait for when clicking elements in Python Webdriver. I get the following TimeoutException error when using WebdriverWait: Traceback (most recent call last): File "C:\Users\riaz.ladhani\PycharmProjects\Selenium Webdriver\ClearCore\TestCases\AdministrationPage_TestCase.py", line 30, in test_add_Project administration_page = login_page.clickAdministration() File "C:\Users\riaz.ladhani\PycharmProjects\Selenium Webdriver\ClearCore\Pages\login.py", line 46, in clickAdministration WebDriverWait (self.driver, 10).until(lambda d: self.driver.find_element(*MainPageLocators.AdministrationButton_xpath).click()) File "C:\Python27\lib\site-packages\selenium\webdriver\support\wait.py", line 75, in until raise TimeoutException(message, screen, stacktrace)TimeoutException: Message: If I use time.sleep(10) it works ok and clicks the elements. I have reverted all my links to time.sleep for now until I can get WebdriverWait to work properly. My code snippet for WebdriverWait is: class LoginPage(BasePage): #Click Administration from top menu def clickAdministration(self): WebDriverWait (self.driver, 10).until(lambda d: self.driver.find_element(*MainPageLocators.AdministrationButton_xpath).click()) #time.sleep(10) return AdministrationPage(self.driver) The imports are: from selenium.webdriver.support.ui import WebDriverWaitfrom selenium.common.exceptions import TimeoutExceptionclass LoginPage_TestCase(unittest.TestCase): def test_add_Project(self): login_page = login.LoginPage(self.driver) login_page.userLogin_valid() administration_page = login_page.clickAdministration() Is my WebdriverWait syntax correct? Why the TimeoutException? If I use time.sleep(secs) , it works fine but not the best efficient way to do it.
Now provide the response and nothing else.
| You are not using the Explicit Wait correctly - you need to make use of Expected Conditions - callables that are called repeatedly until they return True. You are returning the result of the click() method, which returns None, which is falsy - the expected condition never returns True and, hence, you are getting TimeoutException. In this case, the built-in element_to_be_clickable fits nicely, for example: from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By wait = WebDriverWait(self.driver, 10) element = wait.until(EC.element_to_be_clickable((By.XPATH, '//div[@class="test"]'))) element.click() | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30331282', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/358378/']} | jdg_84171
stackexchange | llm_judgeable_groundtruth_similarity | 13998399 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have just come across the android:onClick="myCrazyMethod" XML attribute; it seems pretty clean and simple. I was just wondering, though, if there is any difference in performance between this approach - declaring a simple method - and creating an OnClickListener class?
Now provide the response and nothing else.
| I got this working by implementing algorithm: Watcher thread will put file names in BlockingQueue and other thread will poll this queue, takes file names, try few times to open file. If file gets opened, Windows Copier has released file lock and we can proceed. So when other threads finds file has been unlocked, other thread will put this file name in processed queue, from where my application will retrieve file name. Also the other thread while checking for file unlock by opening file, if it runs long time for unlocking file, we can place back this file name in BlockingQueue and process other file names, former can be processed later. Solution: Hope this may help to other: package dirwatch;import java.nio.file.*;import static java.nio.file.StandardWatchEventKinds.ENTRY_CREATE;import static java.nio.file.StandardWatchEventKinds.OVERFLOW;import static java.nio.file.LinkOption.*;import java.nio.file.attribute.*;import java.io.*;import java.util.*;import java.util.concurrent.ArrayBlockingQueue;import java.util.concurrent.BlockingQueue;import java.util.concurrent.TimeUnit;public class WatchDir { private final WatchService watcher; private final Map<WatchKey,Path> keys; private final boolean recursive; private boolean trace = false; private BlockingQueue<String> fileProcessingQueue; //******* processedFileQueue **** will be used by other threads to retrive unlocked files.. so I have // kept as public final public final BlockingQueue<String> processedFileQueue; private volatile boolean closeProcessingThread; private volatile boolean closeWatcherThread; private void processFiles(){ System.out.println("DirWatchProcessingThread Started"); String fileName; outerLoop: while(!closeProcessingThread || !fileProcessingQueue.isEmpty()){ try{ fileName = fileProcessingQueue.poll(1000, TimeUnit.MILLISECONDS); }catch(InterruptedException ie){ fileName = null; } if(fileName == null || fileName.equals("")){ continue outerLoop; } long startTime = System.currentTimeMillis(); innerLoop: while(true){ FileInputStream fis = null; File file = new File(fileName); try{ fis = new FileInputStream(fileName); break innerLoop; }catch(FileNotFoundException fnfe){ if(!file.exists() || file.isDirectory()){ System.out.println("File: '"+fileName+"has been deleted in file system or it is not file. 
Not processing this file."); continue outerLoop; } try{ Thread.sleep(WatchDirParameters.millisToPuaseForFileLock); }catch(InterruptedException ie){ } if((System.currentTimeMillis() - startTime) > WatchDirParameters.millisToSwapFileForUnlocking){ if(fileProcessingQueue.offer(fileName)){ continue outerLoop; }else{ startTime = System.currentTimeMillis(); continue innerLoop; } } }finally{ if(fis != null){ try{ fis.close(); }catch(IOException ioe){ ioe.printStackTrace(); } } } } System.out.println("Queuing File: "+fileName); processedLoop:while(true){ try{ if(processedFileQueue.offer(fileName, 1000, TimeUnit.MILLISECONDS)){ break processedLoop; } }catch(InterruptedException ie){ //ie.printStackTrace(); } } } closeWatcherThread = true; closeProcessingThread = true; System.out.println("DirWatchProcessingThread Exited"); } /** * Process all events for keys queued to the watcher */ private void processEvents(){ System.out.println("DirWatcherThread started."); while(!closeWatcherThread) { // wait for key to be signalled WatchKey key; try { key = watcher.take(); } catch (InterruptedException x) { // if we are returning from these method, it means we no longer wants to watch directory // we must close thread which may be waiting for file names in queue continue; }catch(ClosedWatchServiceException cwse){ break; } Path dir = keys.get(key); if (dir == null) { System.err.println("WatchKey not recognized!!"); continue; } try{ for (WatchEvent<?> event: key.pollEvents()) { WatchEvent.Kind kind = event.kind(); if (kind == OVERFLOW) { continue; } // Context for directory entry event is the file name of entry WatchEvent<Path> ev = cast(event); Path name = ev.context(); Path child = dir.resolve(name); if(kind.equals(ENTRY_CREATE)){ // if directory is created, and watching recursively, then // register it and its sub-directories if (recursive) { try { if (Files.isDirectory(child, NOFOLLOW_LINKS)) { registerAll(child); continue; } } catch (IOException x) { // ignore to keep sample readbale } } while(true){ if(fileProcessingQueue.remainingCapacity() < 2){ // if only one last can be inserted then don't queue this we need 1 empty space in queue // for swaping file names.. // sleep for some time so processing thread may have made some rooms to queue in fileQueue // this logic will not create any problems as only one this thread is inserting in queue try{ Thread.sleep(200); }catch(InterruptedException ie){ } continue; } if(!fileProcessingQueue.offer(child.toString())){ // couldn't queue this element by whatever reason.. 
we will try to enqueue again by continuing loop continue; }else{ // file name has been queued in queue break; } } } } // reset key and remove from set if directory no longer accessible boolean valid = key.reset(); if (!valid) { keys.remove(key); // all directories are inaccessible if (keys.isEmpty()) { break; } } }catch(ClosedWatchServiceException cwse){ break; } } closeProcessingThread = true; closeWatcherThread = true; System.out.println("DirWatcherThread exited."); } public void stopWatching(){ try{ watcher.close(); }catch(IOException ioe){ } closeProcessingThread = true; closeWatcherThread = true; } public static WatchDir watchDirectory(String dirName, boolean recursive) throws InvalidPathException, IOException, Exception{ try{ Path dir = Paths.get(dirName); final WatchDir watchDir = new WatchDir(dir, recursive); watchDir.closeProcessingThread = false; watchDir.closeWatcherThread = false; new Thread(new Runnable() { public void run() { watchDir.processFiles(); } }, "DirWatchProcessingThread").start(); new Thread(new Runnable() { public void run() { watchDir.processEvents(); } }, "DirWatcherThread").start(); return watchDir; }catch(InvalidPathException ipe){ throw ipe; }catch(IOException ioe){ throw ioe; }catch(Exception e){ throw e; } } @SuppressWarnings("unchecked") private static <T> WatchEvent<T> cast(WatchEvent<?> event) { return (WatchEvent<T>)event; } /** * Register the given directory with the WatchService */ private void register(Path dir) throws IOException { //WatchKey key = dir.register(watcher, ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY); WatchKey key = dir.register(watcher, ENTRY_CREATE); if (trace) { Path prev = keys.get(key); if (prev == null) { System.out.format("register: %s\n", dir); } else { if (!dir.equals(prev)) { System.out.format("update: %s -> %s\n", prev, dir); } } } keys.put(key, dir); } /** * Register the given directory, and all its sub-directories, with the * WatchService. */ private void registerAll(final Path start) throws IOException { // register directory and sub-directories Files.walkFileTree(start, new SimpleFileVisitor<Path>() { @Override public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException { register(dir); return FileVisitResult.CONTINUE; } }); } /** * Creates a WatchService and registers the given directory */ private WatchDir(Path dir, boolean recursive) throws IOException { fileProcessingQueue = new ArrayBlockingQueue<String>(WatchDirParameters.fileQueueSize, false); processedFileQueue = new ArrayBlockingQueue<String>(WatchDirParameters.fileQueueSize, false); this.watcher = FileSystems.getDefault().newWatchService(); this.keys = new HashMap<WatchKey,Path>(); this.recursive = recursive; //CreateTxtFile.createFile(dir, 1); if (recursive) { System.out.format("Scanning %s ...\n", dir); registerAll(dir); System.out.println("Done."); } else { register(dir); } // enable trace after initial registration this.trace = true; }} Parameter Class: package dirwatch;public class WatchDirParameters { public static final int millisToPuaseForFileLock = 200; public static final int fileQueueSize = 500; public static final int millisToSwapFileForUnlocking = 2000;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13998399', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1717784/']} | jdg_84172 |
stackexchange | llm_judgeable_groundtruth_similarity | 1330740 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I came across an old question asking to show that $$\displaystyle\sum_{n\text{ odd}}\frac{1}{n\sinh n\pi}=\frac{\ln 2}{8}.\tag{1}$$ Although I have managed to prove this formula, my proof uses various theta functional relations and looks like an overkill. On the other hand, it suggests a few more identities, for example $$\displaystyle\sum_{n\text{ odd}}\frac{1}{n\sinh n\pi\sqrt3}=\frac{\ln \left(8-4\sqrt{3}\right)}{4}.\tag{2}$$ Question : can one prove (1) and (2) in a more elementary way? Ideally, the proof should make clear further generalizations.
Now provide the response and nothing else.
| The identities:$$\sum_{n\geq 1}\frac{(-1)^n}{n^2+m^2} = -\frac{1}{2m^2}+\frac{\pi}{2m\sinh(m\pi)},\tag{A}$$$$ \frac{1}{m^2+n^2}=\int_{0}^{+\infty}\frac{\sin(nx)}{n}e^{-mx}\,dx,\tag{B}$$$$ \sum_{n\geq 1}(-1)^n\frac{\sin(nx)}{x}=-\frac{x}{2}+\pi\left\lfloor\frac{x+\pi}{2\pi}\right\rfloor\tag{C} $$give a wide range of possibilities to evaluate our series. For instance, $(A)$ gives:$$\begin{eqnarray*}\sum_{k\geq 0}\frac{1}{(2k+1)\sinh(\pi(2k+1))}&=&\frac{1}{\pi}\sum_{k\geq 0}\frac{1}{(2k+1)^2}+\frac{2}{\pi}\sum_{k\geq 0}\sum_{n\geq 1}\frac{(-1)^n}{n^2+(2k+1)^2}\\&=&-\frac{\pi}{8}+\frac{2}{\pi}\sum_{h\geq 1}\frac{r_2(h)\cdot\eta(h)}{h}\tag{1}\end{eqnarray*}$$where $\eta(h)$ equals $-1$ if $h\equiv 2\pmod{4}$, $1$ if $h\equiv 1\pmod{4}$, zero otherwise, and:$$ r_2(h) = \#\{(n,k)\in\mathbb{N}^2: h=n^2+(2k+1)^2\}.\tag{2} $$Now it is well-known that $a^2+b^2$ is the only reduced binary quadratic form of discriminant $-4$, hence:$$\begin{eqnarray*} \#\{(a,b)\in\mathbb{Z}:a^2+b^2=n\} &=& 4\left(\chi_4 * 1\right)(n)\\&=& 4\left(d_{1(4)}(n)-d_{3(4)}(n)\right) \tag{3}\end{eqnarray*}$$so that we can evaluate the RHS of $(1)$ through Dirichlet convolution. Since we have class number one also in the case $a^2+3b^2$, the situation is almost the same for the other series. The Mellin transform gives another chance. See, for instance, this related problem. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1330740', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/73025/']} | jdg_84173 |
stackexchange | llm_judgeable_groundtruth_similarity | 11170853 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have these elements, and I need to select the li inside the ul where opacity=1.How can I do this? <ul class="class" id="ul"> <li style="width: 100%; list-style: none outside none; position: absolute; top: 0px; left: 0px; z-index: 98; opacity: 0;"><a title="title1" href=""><img alt="alt" class="class_name" src="/images/7dfc294d5c3bcebecb2ec0e44fd27d1c.jpg"></a></li> <li style="width: 100%; list-style: none outside none; position: absolute; top: 0px; left: 0px; z-index: 98; opacity: 0;"><a title="title2" href=""><img alt="alt" class="class_name " src="/images/a9c9eb42934df4576b352d88f607f292.jpg"></a></li> <li style="width: 100%; list-style: none outside none; position: absolute; top: 0px; left: 0px; z-index: 98; opacity: 0;"><a title="title3" href=""><img alt="alt" class="class_name " src="/images/b64264692c0d648068c9d1380e9099c1.jpg"></a></li> <li style="width: 100%; list-style: none outside none; position: absolute; top: 0px; left: 0px; z-index: 99; opacity: 1;"><a title="title4" href=""><img alt="alt" class="class_name " src="/images/43e3e5e2edc4234ecddbc89636e4e224.jpg"></a></li> <li style="width: 100%; list-style: none outside none; position: absolute; top: 0px; left: 0px; z-index: 98; opacity: 0;"><a title="title5" href=""><img alt="e-alt" class="class_name " src="/images/31a156ce7f7ab5485366d24f6cbfbede.jpg"></a></li> </ul>
Now provide the response and nothing else.
| $('#ul li').filter(function() { return $(this).css('opacity') == '1';}); DEMO You can also try with .each() var lis = [];$('#ul li').each(function() { if ($(this).css('opacity') == '1') { lis.push(this); }}); DEMO or using .map() var lis = $('#ul li').map(function() { if($(this).css('opacity') == '1') return this;}).get(); DEMO | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11170853', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/885457/']} | jdg_84174 |
stackexchange | llm_judgeable_groundtruth_similarity | 1339437 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So, I'm feeding file data to an API that takes a Reader , and I'd like a way to report progress. It seems like it should be straightforward to write a FilterInputStream implementation that wraps the FileInputStream , keeps track of the number of bytes read vs. the total file size, and fires some event (or, calls some update() method) to report fractional progress. (Alternatively, it could report absolute bytes read, and somebody else could do the math -- maybe more generally useful in the case of other streaming situations.) I know I've seen this before and I may even have done it before, but I can't find the code and I'm lazy. Has anyone got it laying around? Or can someone suggest a better approach? One year (and a bit) later... I implemented a solution based on Adamski's answer below, and it worked, but after some months of usage I wouldn't recommend it. When you have a lot of updates, firing/handling unnecessary progress events becomes a huge cost. The basic counting mechanism is fine, but much better to have whoever cares about the progress poll for it, rather than pushing it to them. (If you know the total size, you can try only firing an event every > 1% change or whatever, but it's not really worth the trouble. And often, you don't.)
Now provide the response and nothing else.
| Here's a fairly basic implementation that fires PropertyChangeEvent s when additional bytes are read. Some caveats: The class does not support mark or reset operations, although these would be easy to add. The class does not check whether the total number bytes read ever exceeds the maximum number of bytes anticipated, although this could always be dealt with by client code when displaying progress. I haven't test the code. Code: public class ProgressInputStream extends FilterInputStream { private final PropertyChangeSupport propertyChangeSupport; private final long maxNumBytes; private volatile long totalNumBytesRead; public ProgressInputStream(InputStream in, long maxNumBytes) { super(in); this.propertyChangeSupport = new PropertyChangeSupport(this); this.maxNumBytes = maxNumBytes; } public long getMaxNumBytes() { return maxNumBytes; } public long getTotalNumBytesRead() { return totalNumBytesRead; } public void addPropertyChangeListener(PropertyChangeListener l) { propertyChangeSupport.addPropertyChangeListener(l); } public void removePropertyChangeListener(PropertyChangeListener l) { propertyChangeSupport.removePropertyChangeListener(l); } @Override public int read() throws IOException { int b = super.read(); updateProgress(1); return b; } @Override public int read(byte[] b) throws IOException { return (int)updateProgress(super.read(b)); } @Override public int read(byte[] b, int off, int len) throws IOException { return (int)updateProgress(super.read(b, off, len)); } @Override public long skip(long n) throws IOException { return updateProgress(super.skip(n)); } @Override public void mark(int readlimit) { throw new UnsupportedOperationException(); } @Override public void reset() throws IOException { throw new UnsupportedOperationException(); } @Override public boolean markSupported() { return false; } private long updateProgress(long numBytesRead) { if (numBytesRead > 0) { long oldTotalNumBytesRead = this.totalNumBytesRead; this.totalNumBytesRead += numBytesRead; propertyChangeSupport.firePropertyChange("totalNumBytesRead", oldTotalNumBytesRead, this.totalNumBytesRead); } return numBytesRead; }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1339437', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/27358/']} | jdg_84175 |
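A minimal usage sketch for the class above, assuming the file's total size is known up front; the listener just prints a percentage, and the file name is a placeholder.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class ProgressDemo {
        public static void main(String[] args) throws IOException {
            File file = new File("some-large-file.bin");   // hypothetical input
            try (ProgressInputStream in = new ProgressInputStream(
                    new FileInputStream(file), file.length())) {

                in.addPropertyChangeListener(evt -> {
                    long read = (Long) evt.getNewValue();
                    System.out.printf("read %.1f%%%n", 100.0 * read / file.length());
                });

                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) {
                    // hand the bytes (or the stream itself) to the consuming API here
                }
            }
        }
    }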
stackexchange | llm_judgeable_groundtruth_similarity | 20614 |
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Most English translations have something in Romans that is an interesting turn of phrase—at least to my ears it instinctively makes sense but at the same time it in more at home in a carefully worded thesis than a casual conversation. Following the references works for me in English, but I'm having trouble explaining the thought progression in Turkish. Judging from the assortment of translations it seems I'm not the first to struggle. Unfortunately words for concepts involving relative numbers their implications are not easy to pin down. Paul has several assorted references to quantities here: Romans 5:15 (ESV) But the free gift is not like the trespass. For if many died through one man's trespass, much more have the grace of God and the free gift by the grace of that one man Jesus Christ abounded for many . A couple verses later the ESV translators seem to link back to specific amounts. Or at least that's what the definite article sounds like here: Romans 5:19 (ESV) For as by the one man's disobedience the many were made sinners, so by the one man's obedience the many will be made righteous. This raises a couple questions. What sort of expressions are "many" and "the many" here? To a Greek ear would this read like a calculated mathematic reference carefully picking out a specific group? Or perhaps it's completely idiomatic and a casual way of referring to groups. Drawing on Paul's theology from earlier in the same book it seems obvious that in v15 the first "many" actually means "all". Every last blooming one. Except of course the ones for which that process has been reversed, which apparently he's no longer including in the total. The following reference to "even more", then, seems to have some literary value other than a mathematically larger sum of people affected. Are the two references to "the many" grammatically tied to specific previous groups or is the reference implied the way it comes across in English? What specifically about this sentence, would keep this statement from being interpreted in a universalist fashion, i.e. all who were sinners will be made righteous? If we could see Paul talking with his hands describing the scene, what sort of gestures would he be using? What are the linguistic clues that tell us which group is referenced when and how specific are the relative sizes?
Now provide the response and nothing else.
| The Uses All four instances of the adjective πολύς in v.15 and 19 that are used substantively to refer to "the many " people are articular masculine plural forms, three being nominative case (οἱ πολλοὶ), with the second articular version in v.15 an accusative case (τοὺς πολλοὺς) as the object of the prepositoin εἰς ("to"). Two instances of the anarthrous dative singular neuter are found in v.15 and v.17 as part of the comparison constructions there (πολλῷ μᾶλλον; " much more"), working as adjectives adding the descriptive emphasis of "much" to μᾶλλον's "more." A final instance in v.17 is an anarthrous πολλῶν μᾶλλον (genitive plural; neuter here, since it is paired with the neuter noun παραπτωμάτων, "trespasses"). It is functioning to describe the extent of the trespasses. Two Significant Points First, the articular versions are all being used to contrast a group of people with the "one" (εἷς is found in the genitive singular ἑνὸς, "of one," a total of 12 times, 11 of those in v.15-19 and once back in v.12). The "one" man, whether Adam or Christ, or that man's "one" particular work, trespass or righteous act (v.18; referring back to Christ's death noted in v.6, 8, 10) are being referenced by each use. Second, the articular versions are specifically being used where they are in juxtaposition to the references to "all" (πάντας, accusative plural of πᾶς; v.12, 18 [x2]). That is, none of the "many" are equal to "all." Discussion of Your Specific Points "The many" is an indistinct way of referring to a group. The Greek's would easily pick up, as we do in English, the contrast of one to many. But the term allows for a flexibility of addressing a number between "one" and "all" without being too specific, nor necessarily having one reference of "the many" be of equal numerical value to another reference of "the many." The first "the many" of v.15 does not, in fact, mean "all," for two reasons. First, there is One in particular, Jesus Christ, who did not die "through" or "by" one man's trespass, but rather because He chose to die (cf. Jn 10:15-18). Second, the verb is aorist indicative (past tense), so the reference, strictly speaking, is to those who have indeed physically died already (such as v.14 emphasized, but not limited to just that group). The "much more" that follows is not a mathematical reference of a greater number of "the many," but a greater effect that occurs by the action of Christ versus Adam. The references to "the many" are not tied to specific previous groups in v.15. Again, it is indistinct. However, the second reference in v.19 does tie back to v.17, where one must receive grace and the gift of righteousness (the latter only coming by faith, so Romans 4) in order to be included as one of "the many" who "will be made righteous" in v.19. So here is how it works out that Christ's effect is "much more" than Adam's: Adam sinned, so the penalty of death is upon all men (v.12), and many have already died because of it (v.15), and all but Christ were made sinners (v.19; Christ was made to be sin for us, but was not made a sinner [2 Cor 5:21; cf. Rom 8:3; 1 Jn 3:5]). Christ incarnated in order to die (Heb 2:14), to partake of the penalty of mankind (1 Pet 3:18), and was obedient to the death of the cross (Phil 2:8). In v.15 it outlines this payment was an expression of the grace of God, which v.18 clarifies had an effect upon all men (just as Adam's affected all men) by giving justification for life, that is justification for all men to be resurrected from the penalty of death. 
So at this point, Christ's work equals Adam's work in effect, as all people will be resurrected (cf. Jn 5:29; Acts 24:15). But His work also opened the door for a further gift by grace (v.15), the gift of righteousness (v.17), given to many men (v.19), that is, those that would believe. So Christ's work becomes "much more" in its effects than Adam's, because it also brings righteousness to believers, and ultimately for them, reigning in the resurrected life (v.17) eternally (v.21). For a more detailed discussion of this passage arguing these points and more, see pages 288-321 of "Pananastasism—A Penal Substitutionary Model of a Definite Universal Atonement" (Ph.D. Diss.; Piedmont International University, 2015). Conclusion "The many" allows an indistinct number between the "one" and the "all" found elsewhere in verses 12-19. The "much more" does not expand the number of people, but rather the greater effect (both reversing and improving) by the work of the "one" man Christ over the "one" man Adam. | {} | {'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/20614', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/36/']} | jdg_84176 |
stackexchange | llm_judgeable_groundtruth_similarity | 55539701 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
At the time of my original implementation of this, I am using Laravel 5.8, but as far as I know, this is still relevant and Larvel 7.x is out now. I'm trying to import a new javascript library into my Laravel 5.8 application using Mix. Specifically, the quill.js library.
Now provide the response and nothing else.
| Here are the steps I took to install quill and make it globally accessible in the application. 1 Install quill via npm npm install quill --save-dev 2 Create a new file /resources/js/quill.js 3 In the quill.js file, I included the code that the quill documentation suggests: https://quilljs.com/guides/adding-quill-to-your-build-pipeline/ import Quill from 'quill/core';import Toolbar from 'quill/modules/toolbar';import Snow from 'quill/themes/snow';import Bold from 'quill/formats/bold';import Italic from 'quill/formats/italic';import Header from 'quill/formats/header';Quill.register({ 'modules/toolbar': Toolbar, 'themes/snow': Snow, 'formats/bold': Bold, 'formats/italic': Italic, 'formats/header': Header});export default Quill; 4 In my app.js file, I included the quill.js file and assigned it to the global scope require('./quill.js');window.Quill = require('Quill'); 5 Import the quill css in /resources/sass/app.scss @import '~quill/dist/quill.core.css'; And for your theme @import '~quill/dist/quill.snow.css'; 6 Run npm run dev | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55539701', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3726775/']} | jdg_84177 |
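A small sketch of actually using the globally exposed Quill after step 6; the element selector, toolbar options, and hidden-input id are illustrative assumptions, not part of the original steps.

    // e.g. in resources/js/app.js after window.Quill is set, or in a page script
    document.addEventListener('DOMContentLoaded', () => {
        const el = document.querySelector('#editor');        // hypothetical element
        if (!el) return;

        const quill = new Quill(el, {
            theme: 'snow',
            modules: {
                toolbar: [['bold', 'italic'], [{ header: [1, 2, false] }]],
            },
        });

        // Example: copy the editor HTML into a hidden input before the form submits
        const form = el.closest('form');
        if (form) {
            form.addEventListener('submit', () => {
                document.querySelector('#body-input').value = quill.root.innerHTML;  // hypothetical input
            });
        }
    });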
stackexchange | llm_judgeable_groundtruth_similarity | 3238350 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm interested in subclassing the built-in int type in Python (I'm using v. 2.5), but having some trouble getting the initialization working. Here's some example code, which should be fairly obvious. class TestClass(int): def __init__(self): int.__init__(self, 5) However, when I try to use this I get: >>> a = TestClass()>>> a0 where I'd expect the result to be 5 . What am I doing wrong? Google, so far, hasn't been very helpful, but I'm not really sure what I should be searching for
Now provide the response and nothing else.
| int is immutable, so you can't modify it after it is created; use __new__ instead: class TestClass(int): def __new__(cls, *args, **kwargs): return super(TestClass, cls).__new__(cls, 5) print TestClass() | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3238350', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/220155/']} | jdg_84178
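A slightly more general sketch, assuming you want the subclass to take a value (and an extra attribute) instead of hard-coding 5:

    class Distance(int):
        """An int subclass that also carries a unit label."""

        def __new__(cls, value, unit="m"):
            # int is immutable, so the numeric value has to be fixed here in __new__
            obj = super(Distance, cls).__new__(cls, value)
            obj.unit = unit            # extra attributes live on the subclass instance
            return obj

        def __repr__(self):
            return "%d %s" % (int(self), self.unit)

    d = Distance(5, unit="km")
    print(d)        # 5 km
    print(d + 2)    # 7 - arithmetic returns a plain int, as usual for int subclasses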
stackexchange | llm_judgeable_groundtruth_similarity | 493411 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
When I access http://example.com/ it is blank, but it works fine with http://example.com/index.php. I need to delete the nginx cache file for http://example.com/. How do I do it? Here is my nginx cache conf: fastcgi_cache_path /backup/cache levels=1:2 keys_zone=my-cache:1000m inactive=1000m; fastcgi_temp_path /backup/cache/tmp 1 2; fastcgi_cache_key "$scheme://$host$request_uri"; How can I get the cache file name in the /backup/cache folder?
Now provide the response and nothing else.
| One method, for example, that is quite flexible and versatile: add this line to nginx.conf ... proxy_cache_bypass $http_x_update; ... and you can update any URI in the cache at any time with a simple HTTP request carrying a "magic" header. For example curl -s -o /dev/null -H "X-Update: 1" mydomain.com or curl -s -o /dev/null -H "X-Update: 1" mydomain.com/some/long/url/ For the security and satisfaction of paranoia :D you can change the header name to anything you like, for example proxy_cache_bypass $http_x_gangnamstyle; #proxy_cache_bypass $http_x_mycatsnickname; #proxy_cache_bypass $http_x_b2ca678b4c936f905fb82f2733f5297f; and curl -s -o /dev/null -H "X-GangnamStyle: 1" mydomain.com curl -s -o /dev/null -H "X-mycatsnickname: 1" mydomain.com curl -s -o /dev/null -H "X-b2ca678b4c936f905fb82f2733f5297f: 1" mydomain.com | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/493411', 'https://serverfault.com', 'https://serverfault.com/users/155288/']} | jdg_84179
stackexchange | llm_judgeable_groundtruth_similarity | 388225 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently installed Jenkins using Docker with the official Blue Ocean image. It is based on Alpine Linux. Now I can't run 32-bit programs on it: $ /opt/android-sdk-linux/build-tools/25.0.3/aapt bash: /opt/android-sdk-linux/build-tools/25.0.3/aapt: No such file or directory And I can't find out which packages need to be installed to run 32-bit programs. Could it be that the official Blue Ocean (Jenkins) image does not support running 32-bit programs, even though it is impossible to build many things without it? Also, I found this issue which says "it doesn't seem that it is possible to build android currently on alpine", but I can't wrap my head around it. Can someone confirm this?
Now provide the response and nothing else.
| musl (and therefore Alpine) doesn't really support "multilib" like glibc. You need to have a 32-bit environment in a chroot to run 32-bit applications. Follow the chroot install guide on the wiki, and make sure to pass --arch x86 to each apk command; this will give you a 32-bit chroot on which you can run 32-bit applications. As for running Android development tools on Alpine: I started a library called gcompat that attempts to allow glibc binaries to run natively on musl without using glibc. There are a few outstanding issues left before it can run the Android tools, but more testing is always welcome. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/388225', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/248120/']} | jdg_84180 |
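A rough shell sketch of setting up a 32-bit (x86) chroot with apk, following the wiki guide mentioned above. The paths and mirror URL are placeholders and the wiki's bootstrap steps are more complete; the key point is the --arch x86 flag.

    chroot_dir=/srv/alpine32
    mirror=http://dl-cdn.alpinelinux.org/alpine/latest-stable/main

    mkdir -p "$chroot_dir"

    # Bootstrap a 32-bit base system into the chroot
    apk --arch x86 -X "$mirror" -U --allow-untrusted --root "$chroot_dir" --initdb \
        add alpine-base

    # Minimal plumbing so programs inside the chroot behave
    mount --bind /proc "$chroot_dir/proc"
    cp /etc/resolv.conf "$chroot_dir/etc/"

    # Install further 32-bit packages the same way, then enter the environment
    apk --arch x86 -X "$mirror" --allow-untrusted --root "$chroot_dir" add libstdc++ zlib
    chroot "$chroot_dir" /bin/sh -l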
stackexchange | llm_judgeable_groundtruth_similarity | 10101919 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My CakePHP version is 2.1.1. I am trying to save the associated model manually using the saveAll() function. Model Employee HasOne Address, Employee Belongs to Department, so the tables are employees(first_name,last_name,age,sex,department_id) addresses(first_line,second_line,city,state,employee_id) Now the employee creation add.ctp has a form which receives input for employee and address. I know $this->Employee->saveAll($this->request->data); will save the models, but I want to save the association manually. I was going through the official CakePHP document here and I have tried something like this $this->Employee->saveAll($data, array( 'fieldList' => array( 'Employee' => array('first_name','last_name','age','sex','department_id'), 'Department' => array('first_line', 'second_line','city','state','employee_id') ) )); It is not working, and throws the following errors Notice (8): Undefined variable: data [APP\Controller\EmployeesController.php, line 118] Warning (2): array_keys() expects parameter 1 to be array, null given [CORE\Cake\Model\Model.php, line 1996] I am a CakePHP beginner. Please help me. $this->request->data: Array ( [Employee] => Array ( [first_name] => Jack [last_name] => Robert [age] => 32 [sex] => 0 [Department] => Development )[Address] => Array ( [first_line] => HSR Layout [second_line] => 1st Cross [city] => Najem [state] => Barel )[Department] => Array ( [id] => 3 ) )
Now provide the response and nothing else.
| You're calling the function right away. When you leave the parentheses on the function reference, what you're basically saying is: Evaluate the closeThis function and assign the result to onclick when what you really want to do is assign the function reference to the click handler: document.getElementById("closeButton").onclick = myclassObj.closeThis; Leave out the parentheses instead, and you'll bind the closeThis function to the onclick. What this instead says is: Assign the function closeThis to the click handler. You are essentially assigning the function to the variable as a first-class object, or a reference to a function. As an aside, my personal preference is to always use an anonymous function wrapper. Sometimes you need to be able to pass parameters into your function, and this makes sure that you can more easily do so: document.getElementById("closeButton").onclick = function() { myclassObj.closeThis(); }; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10101919', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/630238/']} | jdg_84181 |
stackexchange | llm_judgeable_groundtruth_similarity | 65454 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried to calculate the last two digits of $9^{9^9}$ using Euler's totient theorem; what I got is that they are the same as the last two digits of $9^9$. How do I proceed further?
Now provide the response and nothing else.
| Euler's Theorem is not needed. It can be completely solved using only the Binomial Theorem: $$\rm 9^{\color{#c00}{\large 10}} =\ (-1+10)^{\color{#c00}{\large 10}} =\: (-1)^{\color{#c00}{\large 10}} - \color{#c00}{10}\cdot 10 + 10^{\large 2}\:(\cdots)\ \color{}{\equiv\ 1}\ \ (mod\ 100)$$ So $\rm \bmod 100\!:\, \ 9^{\large 9^{\LARGE 9}}\!\!\equiv\ 9^{\large 9^{\LARGE 9}\, mod\ \color{#c00}{10}} \equiv\ 9^{\large (-1)^{\LARGE 9}}\!\! \equiv 9^{\large -1}\!\equiv \dfrac{1}9 \equiv \dfrac{-99}9 \equiv {-}11 \equiv 89 $ Remark $ $ Above we used the useful fact that if the powers of $\,a=9\,$ repeat with period length $\color{#c00}{10}\,$ then all exponents on $\,a\,$ can be taken modulo $\,\color{#c00}{10}.\,$ Said more precisely we used the following $$\ \ \color{#c00}{a^{\color{#c00}{\large 10}}\equiv 1}\!\!\pmod{\!m},\,\ J\equiv K\!\!\!\pmod{\!\color{#c00}{10}}\ \,\Rightarrow\,\ a^{\large J}\equiv a^{\large K}\!\!\!\!\pmod{\!m}$$ for the specific values $\ a=9,\,$ and $\,J = 9^{\large 9},\,$ and $\,K = (9^{\large 9}\,{\rm mod}\ 10).\,$ A proof is easy: $$ J = K\! +\! 10N\,\Rightarrow\, a^{\large J}\! = a^{\large K+10N}\! = a^{\large K} (\color{#c00}{\large a^{10}})^{\large N}\!\equiv a^{\large K} \color{#c00}1^{\large N}\!\equiv a^{\large K}\!\!\!\!\pmod{\!m}\qquad $$ where we have employed the $ $ Congruence Product and Power Rules. For further discussion see modular order reduction. Beware $ $ Modular fraction arithmetic is well-defined only for fractions with denominator coprime to the modulus. See here for further discussion. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/65454', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/15618/']} | jdg_84182 |
stackexchange | llm_judgeable_groundtruth_similarity | 225179 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a problem plotting an integral function. I can compute/plot the graph of this integral below in Mathematica 5.0, but it is not possible to plot it in higher Mathematica versions. My code is: w = 100; b = 438095.2 (2.5*10^6 + 547826*y^2); d = Sqrt[-8.8*10^21 *y^2 + 1.92*10^11 *(2.52*10^6 + 547826*y^2)^2]; p = (-b - d)/2; p1 = (-b + d)/2; T = 10^18 *Sqrt[π/2]*NIntegrate[BesselJ[0, y]/Sqrt[(y^2 + w^2)^3]*1/d*(1 - Erf[(p*10^-15 - 1)/Sqrt[2]])*Exp[(p*10^-15)^2/2 - p*10^-15*(1 - τ)]*y*w, {y, 0, ∞}, Method -> Oscillatory, MaxRecursion -> 12]; Plot[T, {τ, 0, 6*10^3}] The plot is generated by Mathematica 5.0. The error in later versions is NIntegrate::nconv. Can anybody please help? Any suggestions appreciated.
Now provide the response and nothing else.
| The integral diverges, please see Michael's answer for more detail. This is just an answer reproducing result of v5 . Unlike higher versions, v5 is using very few points for the generation of graphic: plot = Plot[T, {τ, 0, 6*10^3}]; // AbsoluteTiming(* {32.1406250 Second, Null} *)plot[[1, 1, 1, 1]]% // Length To obtain the result in a reasonable time in higher versions, just limit the points for plotting: Quiet@Plot[T, {τ, 0, 6*10^3}, MaxRecursion -> 0] // AbsoluteTiming Still a bit slower than v5 , but acceptable in my view. Update: A More General Solution The solution above works at least in v12.0.1 , v12.1.1 and v11.3 , but in v8.0.4 and v9.0.1 samples like T /. τ -> 1 // AbsoluteTiming returns unevaluated (to be precise, NIntegrate[…] therein returns unevaluated) after nconv warning generates. If one still needs the result in v5 , a possible approach is to implement the "ExtrapolatingOscillatory" method ourselves as I've done here : Clear[int, separateint]zero[i_] = Piecewise[{{BesselJZero[0, i], i > 0}}]; separateint[f_, t_, i_?NumericQ, prec_] := NIntegrate[BesselJ[0, y] f[y, t], {y, zero@i, zero[i + 1]}, WorkingPrecision -> prec, MaxRecursion -> 40]; int[f_, t_?NumericQ, prec_ : MachinePrecision] := NSum[separateint[f, t, i, prec], {i, 0, Infinity}, Method -> "AlternatingSigns", WorkingPrecision -> prec]; The usage of int is as follows: Clear@func;func[y_, τ_] = 1/Sqrt[(y^2 + w^2)^3]*1/d*(1 - Erf[(p*10^-15 - 1)/Sqrt[2]])* Exp[(p*10^-15)^2/2 - p*10^-15*(1 - τ)]*y*w;T = 10^18*Sqrt[π/2]*int[func, τ];T /. τ -> 1(* 84.0182 *)Plot[T, {τ, 0, 6*10^3}, MaxRecursion -> 0] // AbsoluteTiming The resulting graphic is the same as shown above, but the solution also works in v8.0.4 and v9.0.1 . This solution also works for the Tc and Td in your new question , on which NIntegrate of v12 again returns unevaluated. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/225179', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/73622/']} | jdg_84183 |
stackexchange | llm_judgeable_groundtruth_similarity | 5108099 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
class Api::StoresController < ApplicationController respond_to :json def index @stores = Store.all(:include => :products) respond_with @stores end end Returns only stores without their products, as does Store.find(:all).to_json(:include => :products) The association is tested; I can see the nested products in console output from, say, Store.first.products What's the correct way to get the products included with MongoMapper? Here are my models: class Store include MongoMapper::Document many :products, :foreign_key => :store_ids end class Product include MongoMapper::Document key :store_ids, Array, :typecast => 'ObjectId' many :stores, :in => :store_ids end UPDATE In trying Scott's suggestion, I've added the following to the Store model: def self.all_including_nested stores = [] Store.all.each do |store| stores << store.to_hash end end def to_hash keys = self.key_names hash = {} keys.each{|k| hash[k] = self[k]} hash[:products] = self.products hash[:services] = self.services hash end And in the controller: def index @stores = Store.all_including_nested respond_with @stores end Which looks like it should work? Assuming the array of hashes would have #to_json called on it, and then the same would happen to each hash and each Product + Service. I'm reading through ActiveSupport::JSON's source, and so far that's what I've grokked from it. But, not working yet... :(
Now provide the response and nothing else.
| Have a look at the as_json() method. You put this in your models, define your json, and then simply call the render :json method and get what you want. class Something def as_json(options={}) {:account_name => self.account_name, :expires_on => self.expires_on.to_s, :collections => self.collections, :type => "Institution"} endend You'll notice self.collections which is a many relationship. That model also has as_json() defined: class Collection def as_json(options={}) {:name => self.name, :title => self.title, :isbn => self.isbn, :publisher => self.publisher, :monthly_views => self.monthly_views} endend This one contains self.monthly_views which represents another many relationship. Then in your controller: @somethings = Something.allrender :json => @somethings | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5108099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/358518/']} | jdg_84184 |
stackexchange | llm_judgeable_groundtruth_similarity | 9645803 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The jQuery document tags $.browser as deprecated. So what's the replacement for it?
Now provide the response and nothing else.
| If you really need good old $.browser According to the docs , this feature was deprecated in 1.3 ,and totally removed in 1.9 , although it is still available in the official jQuery Migrate plugin . If you want to do it right Depending on browser detection is not a good idea . Feature detection is the way to go ( Modernizr is a great tool for that). jQuery had a $.support() method to provide some feature detection, but it is now deprecated as well. They also suggest using Modernizer. If you really need browser detection Fixing browser quirks is not a valid use case for browser detection, but there are other use cases. Use any Javascript browser detection tool (like bowser ), as this functionality does not depend on jQuery at all. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9645803', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/475345/']} | jdg_84185 |
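To illustrate the feature-detection approach the answer recommends, here is a small hedged example; the geolocation check is just an arbitrary capability picked for demonstration, not something tied to the question.

```javascript
// Test for the capability itself rather than sniffing the browser.
if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(function (pos) {
    console.log(pos.coords.latitude, pos.coords.longitude);
  });
} else {
  // Degrade gracefully instead of branching on $.browser.
  console.log('Geolocation is not supported here.');
}
```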
stackexchange | llm_judgeable_groundtruth_similarity | 8550841 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to trigger same location route in backbone.js, for example when location is /My/App/#/About and user clicks again on anchor with the same route in order to refresh the page content.
Now provide the response and nothing else.
| Backbone.history.loadUrl(Backbone.history.fragment); The solution for your specific problem isn't really documented and it seems the suggested best practice is to just call the function that is declared in the route you want to fire :/ Ref: https://github.com/documentcloud/backbone/issues/1214 So far the other answers are correct, but they forgot to mention that if the hash navigated to is the same as the current hash, nothing will fire. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8550841', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217288/']} | jdg_84186 |
stackexchange | llm_judgeable_groundtruth_similarity | 5222819 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a PrimeFaces TabView with two Tabs like: <p:tabView dynamic="true" cache="false" onTabShow="scrollBottom(#{stanzaBean.activeIndex})" tabChangeListener="#{messaggioBean.onTabChange}" activeIndex="#{stanzaBean.activeIndex}" > It works fine, except that when I change the Tab the activeIndex isn't updated on the server and it always returns the default value. I'm using PrimeFaces 2.2.1. Thank you.
Now provide the response and nothing else.
| Going by the PrimeFaces ShowCase example , if you give each tab an id: <p:tabView tabChangeListener="#{indexBean.onTabChange}" > <p:tab title="tab 0" id="tab0"></p:tab> <p:tab title="tab 1" id="tab1" ></p:tab> <p:tab title="tab 2" id="tab2"></p:tab> </p:tabView> you can get that tab id in the tabChangeListener. public void onTabChange(TabChangeEvent event) { System.out.println("tab id = " + event.getTab().getId());} Then you'll know which tab was selected. Edit: There is an open PrimeFaces issue 1640 TabView: Wrong activeIndex in TabChangeListener, always 0 on the problem you are having. Edit 2: With PrimeFaces 5.0 and up the tabChangeListener is no longer available on the tabView element but should be used via an explicit ajax tag with a tabChange event. <p:tabView id="analysisSections" value="#{analysisBean.analysis.sections}" var="section" activeIndex="#{analysisBean.activeIndex}"> <p:ajax event="tabChange" listener="#{analysisBean.onTabChange}"/> Also you can directly get index of tab: public void onTabChange(TabChangeEvent event) { activeIndex = ((TabView) event.getSource()).getIndex();} with all these changes, activeIndex works properly. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5222819', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/451960/']} | jdg_84187 |
stackexchange | llm_judgeable_groundtruth_similarity | 125110 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If I examine /proc/1/environ I can see a null-byte-delimited string of process 1 's environment variables. I'd like to bring these variables into my current environment. Is there an easy way to do this? The proc man page gives me a snippet which helps be print out each environment variable on a line-by-line basis (cat /proc/1/environ; echo) | tr '\000' '\n' . This helps me verify the contents are correct, but what I really need to do is source these variables into my current bash session. How do I do that?
Now provide the response and nothing else.
| The following will convert each environment variable into an export statement, properly quoted for reading into a shell (because LS_COLORS , for example, is likely to have semicolons in it), then sources it. [The printf in /usr/bin , unfortunately, generally doesn't support %q , so we need to call the one built into bash .] . <(xargs -0 bash -c 'printf "export %q\n" "$@"' -- < /proc/nnn/environ) | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/125110', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/61692/']} | jdg_84188 |
stackexchange | llm_judgeable_groundtruth_similarity | 179900 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to understand the process for training a linear support vector machine . I realize that properties of SMVs allow them to be optimized much quicker than by using a quadratic programming solver, but for learning purposes I'd like to see how this works. Training Data set.seed(2015)df <- data.frame(X1=c(rnorm(5), rnorm(5)+5), X2=c(rnorm(5), rnorm(5)+3), Y=c(rep(1,5), rep(-1, 5)))df X1 X2 Y1 -1.5454484 0.50127 12 -0.5283932 -0.80316 13 -1.0867588 0.63644 14 -0.0001115 1.14290 15 0.3889538 0.06119 16 5.5326313 3.68034 -17 3.1624283 2.71982 -18 5.6505985 3.18633 -19 4.3757546 1.78240 -110 5.8915550 1.66511 -1library(ggplot2)ggplot(df, aes(x=X1, y=X2, color=as.factor(Y)))+geom_point() Finding the Maximum Margin Hyperplane According to this Wikipedia article on SVMs , to find the maximum margin hyperplane I need to solve $$\arg\min_{(\mathbf{w},b)}\frac{1}{2}\|\mathbf{w}\|^2$$subject to (for any i = 1, ..., n)$$y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1.$$ How do I 'plug' my sample data into a QP solver in R (for instance quadprog ) to determine $\mathbf{w}$?
Now provide the response and nothing else.
| HINT : Quadprog solves the following: $$\begin{align*}\min_x d^T x + 1/2 x^T D x\\\text{such that }A^T x \geq x_0\end{align*}$$ Consider$$x = \begin{pmatrix}w\\b\end{pmatrix}\text{and } D=\begin{pmatrix}I & 0\\0 & 0\end{pmatrix}$$ where $I$ is the identity matrix. If $w$ is $p \times 1$ and $y$ is $n \times 1$: $$\begin{align*}x &: (2p+1) \times 1 \\D &: (2p+1) \times (2p+1)\end{align*}$$ On similar lines:$$x_0 = \begin{pmatrix}1\\1\end{pmatrix}_{n \times 1}$$ Formulate $A$ using the hints above to represent your inequality constraint. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/179900', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/31542/']} | jdg_84189 |
stackexchange | llm_judgeable_groundtruth_similarity | 1797159 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a question that says... THEOREM: The function $f: \mathbb{R}_{\geq 0}\rightarrow \mathbb{R}$ given by $f(x) = \ln(x)$ is onto. If you were going to prove this statement, what is the first sentence? After this sentence, what is the new goal? So I know this is a proof involving functions, but I'm unsure how to start a proof of this. Is the first sentence just supposed to be $f: \mathbb{R}_{\geq 0}\rightarrow \mathbb{R}$? In general, how do you start proofs involving functions?
Now provide the response and nothing else.
| Note that since this holds for all positive definite $M$, it holds for $M = I$. So, we must have$$A^TA = I$$which is to say that $A$ is an orthogonal (unitary) matrix. We may therefore rewrite the equation as $$A^{-1}MA = M \implies MA = AM$$That is, $A$ must commute with all positive definite matrices. However, this means that $A$ must also commute with all complex linear combinations of positive definite matrices, which is to say that $A$ must commute with every matrix. It is well known that the only matrices that commute with any matrix are the multiples of the identity. Thus, we conclude that $A$ is an orthogonal (unitary) matrix and a multiple of the identity. If $A$ is real, conclude that $A = \pm I$. If $A$ is complex (and we take the $A^T$ to refer to the complex-conjugate), conclude that $A = \lambda I$ for some $\lambda$ with $|\lambda| = 1$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1797159', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/339897/']} | jdg_84190 |
stackexchange | llm_judgeable_groundtruth_similarity | 423797 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an Alienware Aurora R7, running Arch Linux. On shutdown, the kernel panics, with something like this in the panic message (omitting timestamps): BUG: Unable to handle kernel NULL pointer dereference at (null)IP: i2c_dw_isr+0x3ef/0x6d0PGD 0 P4D 0Oops: 0000 [#1] PREEMPT SMP PTI From various sources ( 1 , 2 ), this seems to be related to the i2c-designware-core module, and the workaround is blacklisting it. However, with recent kernels (seems to be 4.10 and above), this doesn't seem to be built as a module: # uname -srv Linux 4.15.2-2-ARCH #1 SMP PREEMPT Thu Feb 8 18:54:52 UTC 2018# zgrep DESIGNWARE /proc/config.gzCONFIG_I2C_DESIGNWARE_CORE=yCONFIG_I2C_DESIGNWARE_PLATFORM=yCONFIG_I2C_DESIGNWARE_SLAVE=yCONFIG_I2C_DESIGNWARE_PCI=mCONFIG_I2C_DESIGNWARE_BAYTRAIL=yCONFIG_SPI_DESIGNWARE=mCONFIG_SND_DESIGNWARE_I2S=mCONFIG_SND_DESIGNWARE_PCM=y So I have resorted to making the kernel reboot on panic: # cat /proc/cmdlineroot=UUID=e5018f7e-5838-4a47-b146-fc1614673356 rw initrd=/intel-ucode.img initrd=/initramfs-linux.img panic=10 sysrq_always_enabled=1 printk.devkmsg=on (The odd paths in the /proc/cmdline are because I boot directly from UEFI, with entries created using efibootmgr . The paths are rooted at /boot , where my ESP is mounted.) This seems to be something for touchpads, but I don't have a touchpad and won't get one. What can I do to disable this thing? Do I have to build a custom kernel ? Since linux-lts is also newer than 4.10, (4.14, currently), there doesn't seem to be an easy way to install an older kernel either, where blacklisting might presumably work. Using nolapic as a kernel parameter solves the shutdown panic problem, but it causes the system to freeze a few minutes after boot, so I can't use it.
Now provide the response and nothing else.
| After reading kernel sources, I found a function we need to blacklist! Thanks to Stephen Kitt for the hint about initcall_blacklist . Add initcall_blacklist=dw_i2c_init_driver to the kernel command line. This works for me on kernel 4.15.0. For anyone else who'll find this answer. You can do it by editing /etc/default/grub : Run in the terminal: sudo -H gedit /etc/default/grub . Append blacklist string to the GRUB_CMDLINE_LINUX_DEFAULT : GRUB_CMDLINE_LINUX_DEFAULT="… initcall_blacklist=dw_i2c_init_driver" . Save the file, close the editor. Run in the terminal: sudo update-grub . Reboot and test! | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/423797', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/70524/']} | jdg_84191 |
stackexchange | llm_judgeable_groundtruth_similarity | 21154 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Four rods A, B, C, D of the same length and material but of different radii r, 2r, 3r and 4r respectively are held between two rigid walls. The temperature of all rods is increased by the same amount. If the rods do not bend, then which of these are correct: The stresses in the rods are in the ratio 1 : 2 : 3 : 4. The forces on the rods exerted by the wall are in the ratio 1 : 2 : 3 : 4. The energies stored in the rods due to elasticity are in the ratio 1 : 2 : 3 : 4. The strains produced in the rods are in the ratio 1 : 2 : 3 : 4. Four rods A, B, C, D of the same length and material => same Young's modulus, same coefficient of linear expansion, same length. Also, the temperature of all rods is increased by the same amount. Before answering the above question I have a few other questions: Suppose the rods were not held between two rigid walls. Then there would have been a change in length. In that case, would there be stress? Intuitively it feels like the stress would be zero, as there seem to be no restoring forces developed. But stress = Young's modulus * strain. Strain is definitely not zero. So, stress should not be zero. Confused! Now say the rods were held between two rigid walls. Assuming the rods are not bending, their length will not change. So, strain would be zero. Stress = Young's modulus * strain. So, stress must be zero. But intuitively it seems there will be stress, because there will be restoring forces in the rod pushing the walls away. Again confused! Now coming back to the original problem. The above two confusions are causing trouble. But just going by intuition, there will be stress developed even though there is no strain. But stress = restoring force / area. Here the areas for all rods are different, pi*r^2, because r is different. But the restoring force is the same. So, the ratio must be 1/1 : 1/4 : 1/9 : 1/16. Right? Surprisingly the answers are 3, 4. There is a lot of confusion. Kindly clarify.
Now provide the response and nothing else.
| Your confusion lies within your perception of natural length in the Young's modulus formula. When we say strain=$\Delta L/L$, the $L$ refers to the natural length of the rod at a given temperature . So, if the rod is not clamped, and we increase the temperature, there is no deviation from natural length at that temperature (as we can define natural length of a rod at a temperature by calling it "the length of the rod at that temperature in the absence of any other influences"), so strain is zero. Stress is obviously zero. If the rod is clamped, its length stays $L_0$, but it's natural length becomes $L_0(1+\alpha\Delta T)$, so the $\Delta L$ comes from the fact that its natural length has changed but its length is constant. Summing up, in your Young's modulus formula, use strain=$\Delta L_T/L_T$, where $\Delta L_T$ is $|L_0-L_T|$, $L_T=L_0(1+\alpha\Delta T)$, and $L_0$ is length at a reference temperature. Use this to solve the problem now. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/21154', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/447/']} | jdg_84192 |
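Following the answer's prescription, a short worked form of the clamped-rod case (assuming perfectly rigid walls, so the rod's actual length stays $L_0$ while its natural length grows) is:

$$L_T = L_0(1+\alpha\Delta T),\qquad \Delta L_T = L_T - L_0 = L_0\,\alpha\Delta T,$$
$$\text{strain} = \frac{\Delta L_T}{L_T} = \frac{\alpha\Delta T}{1+\alpha\Delta T}\approx \alpha\Delta T,\qquad \text{stress} = Y\,\frac{\alpha\Delta T}{1+\alpha\Delta T}\approx Y\alpha\Delta T.$$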
stackexchange | llm_judgeable_groundtruth_similarity | 13192594 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to add all hunks in a file matching a regex? I know I can search for a given hunk with /, but that only finds the first. I want to add all matching.
Now provide the response and nothing else.
| Sadly, the patch in July 2011 went nowhere for now. It would have introduced a git add --hunks=magic option. For now, you will have to do with: extracting a patch based on your regexp: see " Filtering a diff with a regular expression " git stash your changes apply your patch and git add reset your index and apply your stash ( git stash pop ) Quite a cumbersome process. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13192594', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/94394/']} | jdg_84193 |
stackexchange | llm_judgeable_groundtruth_similarity | 193184 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When I try to mesh the following region it fails. Why? Is there a workaround? Needs["NDSolve`FEM`"];Len = 1; (*length *)ht = 0.125; (* height *)wd = 0.5; (* width *)hwd = 0.25; (* hole width *)reg = RegionDifference[Cuboid[{0, -wd/2, 0}, {Len, wd/2, ht}], Cuboid[{Len/2 - hwd/2, -hwd/2, 0}, {Len/2 + hwd/2, hwd/2, ht}]];Region[reg] mesh = ToElementMesh[reg]; DiscretizeGraphics::rnimpl: The function DiscretizeGraphics is not implemented for Tetrahedron[<<1>>]. Version 11.3 for Windows.
Now provide the response and nothing else.
| I don't know why on earth a Tetrahedron -wrapped list of 3D-simplices is returned by RegionDifference , but this helps to circumvent it: R1 = BoundaryDiscretizeRegion[ Cuboid[{0, -wd/2, 0}, {Len, wd/2, ht}], MaxCellMeasure -> ∞ ];R2 = BoundaryDiscretizeRegion[ Cuboid[{Len/2 - hwd/2, -hwd/2, 0}, {Len/2 + hwd/2, hwd/2, ht}], MaxCellMeasure -> ∞ ];reg = RegionDifference[R1, R2];mesh = ToElementMesh[reg] ElementMesh[{{0., 1.}, {-0.25, 0.25}, {0., 0.125}}, {TetrahedronElement["<" 13557 ">"]}] | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/193184', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/12558/']} | jdg_84194 |
stackexchange | llm_judgeable_groundtruth_similarity | 13052735 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm just trying to understand the code below. Here a new type alias Set is declared, which is a function that takes an Int parameter and returns a Boolean: type Set = Int => Boolean Here a new method 'contains' is declared, which takes two parameters of type Set and Int and returns a Boolean. The Boolean is set to the function declared earlier ('type Set = Int => Boolean'). But what logic is performed to determine if the Int 'elem' is a member of Set 's'? def contains(set: Set, elem: Int): Boolean = set(elem) Here a method is defined which returns a set which returns a function? def singletonSet(elem: Int): Set = set => set == elem Complete code with comments: /** * We represent a set by its characteristic function, i.e. * its `contains` predicate. */ type Set = Int => Boolean /** * Indicates whether a set contains a given element. */ def contains(set: Set, elem: Int): Boolean = set(elem) /** * Returns the set of the one given element. */ def singletonSet(elem: Int): Set = set => set == elem
Now provide the response and nothing else.
| Let's read sort of backwards, in logical order. Say you have a finite set of integers: 0, 1, 2, 3, 5, 8 for instance One way to describe this set of integers is through a function (its characteristic or indicator function ) that, for each integer, returns true if the integer is in the set, false if it is not.The signature for this function, as we described it, must always be Int => Boolean ("give me an integer, I will tell you if it's in the set"), while its implementation will vary depending on the specific set. For the set in my example above you could write this function simply as: val mySet: Int => Boolean = x => Array(0,1,2,3,5,8) contains x or recognize that the ints in the set are the first ones of the Fibonacci sequence and define f in a slightly more sophisticated way (which I won't do here...).Note that the "contains" I've used is defined for all scala collections.In any case, now you have a function that tells you what is in the set and what is not.Let's try it in the REPL. scala> val mySet: Int => Boolean = x => Array(0,1,2,3,5,8) contains xmySet: Int => Boolean = <function1>scala> mySet(3)res0: Boolean = truescala> mySet(9)res1: Boolean = false Now, mySet has type Int => Boolean , which we can make more readable if we define it as a type alias. scala> type Set = Int => Booleandefined type alias Set Besides readability, defining Set as an alias of Int => Boolean is making it explicit that in a way a Set is its characteristic function. We can redefine mySet in a more concise (but otherwise equivalent) way with the Set type alias: scala> val mySet: Set = x => Array(0,1,2,3,5,8) contains xmySet: Int => Boolean = <function1> Now for the last piece of this long answer. Let's define a characteristic function to describe this Singleton set: 3 .Easy: val Singleton3 : Set = set => set == 3 for a Singleton set containing only 4, it would be: val Singleton4 : Set = set => set == 4 So, let's generalize the creation of these functions and write a method that returns a Singleton function that, for any integer, describes the set containing only that integer: def singletonSet(elem: Int): Set = set => set == elem APPENDIX: I skipped this part, because it wasn't really needed: def contains(set: Set, elem: Int): Boolean = set(elem) I think it's sort of pointless and (without more context) it looks just like a contrived example to demonstrate how you can pass a function around as an argument, just like any other type in scala. It takes the Int => Bool function and the Int and just applies the function to the Int so you can do scala> contains(mySet, 3)res2: Boolean = true which is like calling mySet(3) directly. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/13052735', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/701254/']} | jdg_84195 |
stackexchange | llm_judgeable_groundtruth_similarity | 397838 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Defining a torus $T$ as $S^1 \times S^1$, it should follow that $T \subseteq \mathbb{R}^4$. But you can also think of a torus as a bagel, which means it's a subset of $\mathbb{R}^3$. Can anyone clarify this point?
Now provide the response and nothing else.
| These are two different embeddings of the same topological space, i.e. the surface of the bagel in $\mathbb R^3$ and the product of two circles in $\mathbb R^4$ are homeomorphic. Note that they are not isometric though; the torus embeded in $\mathbb R^4$ as a product of circles is flat , while the surface of the bagel has regions of positive curvature and of negative curvature. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/397838', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/78720/']} | jdg_84196 |
stackexchange | llm_judgeable_groundtruth_similarity | 44434803 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Since Java 8 I can use Instant and LocalDateTime. To get the Unix timestamp: long unixTimestamp = Instant.now().getEpochSecond(); But if I want to get a time from the past, let's say 2 weeks ago, I use this: int unixtime2weeksAgo = (int) ((System.currentTimeMillis() - 24 * 60 * 60 * 1000 * 14) / 1000L) With the documentation I couldn't build a solution based on the new Java 8 features. Is there someone who could provide a good solution using Instant or LocalDateTime?
Now provide the response and nothing else.
| "Two weeks ago" is dependent on your time zone (there may have been some DST changes etc.). So using Instant or LocalDateTime may create issues because they don't include any time zone information. Assuming you want to do it in UTC, you can use: ZonedDateTime twoWeeksAgo = ZonedDateTime.now(ZoneOffset.UTC).minusWeeks(2);long unixTs = twoWeeksAgo.toEpochSecond(); You can specify a different time zone in place of ZoneOffset.UTC seen above. For example, ZoneId.of( "Asia/Kolkata" ) . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44434803', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8006882/']} | jdg_84197 |
stackexchange | llm_judgeable_groundtruth_similarity | 20892255 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to split a string after every nth character in PostgreSQL? I thought regexp_split_to_array could be used to do this: select unnest(regexp_split_to_array('abcdefgh', E'...regexp here...')); Example input: abcdefgh Required output (split after every 2nd character): ab cd ef gh Required output (split after every 3rd character): abc def gh What regexp will do this? Is there any other solution?
Now provide the response and nothing else.
| Use substring and generate_series : regress=> select substring('abcdefgh' from n for 2) from generate_series(1, length( 'abcdefgh' ), 2) n; substring ----------- ab cd ef gh(4 rows)regress=> select substring('abcdefgh' from n for 3) from generate_series(1, length( 'abcdefgh' ), 3) n; substring ----------- abc def gh(3 rows) This is trivially wrapped into an inlineable SQL function: CREATE OR REPLACE FUNCTION string_nchars(text, integer) RETURNS setof text AS $$SELECT substring($1 from n for $2) FROM generate_series(1, length($1), $2) n;$$ LANGUAGE sql IMMUTABLE; Usage: regress=> SELECT string_nchars('abcdefgh',3); string_nchars --------------- abc def gh(3 rows) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20892255', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1442374/']} | jdg_84198 |
stackexchange | llm_judgeable_groundtruth_similarity | 3311788 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been reading a lot about interfaces and class inheritance in Java, and I know how to do both and I think I have a good feel for both. But it seems that nobody ever really compares the two side by side and explains when and why you would want to use one or the other. I have not found a lot of times when implementing an interface would be a better system than extending a superclass. So when do you implement an interface and when do you extend a superclass?
Now provide the response and nothing else.
| Use an interface if you want to define a contract. I.e. X must take Y and return Z; it doesn't care how the code does that. A class can implement multiple interfaces. Use an abstract class if you want to define default behaviour in non-abstract methods so that end users can reuse it without rewriting it again and again. A class can extend only one other class. An abstract class with only abstract methods might as well be defined as an interface. An abstract class without any abstract method is recognizable as the Template Method pattern (see this answer for some real-world examples). An abstract class in turn can perfectly well implement an interface whenever you want to give the end user freedom in defining the default behaviour. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3311788', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/245896/']} | jdg_84199
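A minimal sketch contrasting the two (all class names here are made up for illustration): the interface states only the contract, while the abstract class fixes a reusable skeleton and leaves one hook for subclasses — the Template Method idea mentioned above.

```java
// Contract only: what must be provided, not how.
interface Shape {
    double area();
}

// Default behaviour plus one abstract hook (Template Method).
abstract class ReportTemplate {
    public final String render() {          // reusable skeleton
        return header() + body();
    }
    protected String header() {             // default subclasses may reuse
        return "=== Report ===\n";
    }
    protected abstract String body();       // each subclass fills this in
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class SalesReport extends ReportTemplate {
    protected String body() { return "sales figures..."; }
}
```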
stackexchange | llm_judgeable_groundtruth_similarity | 525640 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How to prove a commutative, with unit, Noetherian ring $A$ only has finitely many minimal prime ideals via the following step? I have proved: Step. All radical ideals of Noetherian ring $A$ can be expressed as an intersection of finitely many prime ideals. How to continue?
Now provide the response and nothing else.
| If you know that radical ideals are finite intersections of primes, in particular $\sqrt{(0)}=\mathfrak p_1\cap\cdots\cap\mathfrak p_n$. Let $\mathfrak p$ be a minimal prime. Since $(0)\subseteq\mathfrak p$ we get $\sqrt{(0)}\subseteq\sqrt{\mathfrak p}=\mathfrak p$, that is, $\mathfrak p_1\cap\cdots\cap\mathfrak p_n\subseteq\mathfrak p$. It follows that there exists $\mathfrak p_i\subseteq\mathfrak p$ and since $\mathfrak p$ is minimal we must have $\mathfrak p_i=\mathfrak p$. (In other words, the minimal primes of $A$ are among the primes $\mathfrak p_1,,\dots,\mathfrak p_n$.) | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/525640', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/92728/']} | jdg_84200 |
stackexchange | llm_judgeable_groundtruth_similarity | 395971 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I simplify: $$\sqrt{\underbrace{11\dots1}_{2n\ 1's}-\underbrace{22\dots2}_{n\ 2's}}$$ Should I use modulos or should I factor them? Or any I suppose to use combinatorics? Any one have a clue?
Now provide the response and nothing else.
| Nice question there! Let $x=\underbrace{11\cdots1}_{n\ 1's}$, then \begin{align}\therefore\underbrace{11\cdots1}_{2n\ 1's}-\underbrace{22\cdots2}_{n\ 2's}&=\underbrace{11\cdots1}_{n\ 1's}\times10^n+\underbrace{11\cdots1}_{n\ 1's}-2\times\underbrace{11\cdots1}_{n\ 1's}\\ &=x\times10^n+x-2x\\ \\&=x\times10^n-x\\ \\&=x(10^n-1)\\ \\&=x\times\underbrace{99\cdots9}_{n\ 9's}\\&=x\times(9\times\underbrace{11\cdots1}_{n\ 1's})\\&=9x^2\end{align} \begin{align}\therefore\text{The original equation is equal to: }\sqrt{9x^2}=3x=\boxed{\underbrace{33\cdots3}_{n\ 3's}}\end{align} | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/395971', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/78433/']} | jdg_84201 |
stackexchange | llm_judgeable_groundtruth_similarity | 80310 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Today, I came across a question which asked name three reagents can be used for identification of phenol. The answer given was: Neutral $\ce{FeCl3}$ $\ce{ NaNO2 +HCl}$ $\ce {(NH4)2[Ce(NO3)6]}$. I understand (1) and (2), but I am doubtful about CAN test (3). As far as I have studied in various books, it is used for testing presence of alcohol, and phenol is not alcohol . Can phenol be detected with a ceric ammonium nitrate test?
Now provide the response and nothing else.
| With unsubstituted phenol ($\ce{PhOH}$) dark-brown precipitation is formed: $$\ce{(NH4)2[Ce(NO3)6] + 2 PhOH -> [Ce(NO3)4(PhOH)2] + 2 NH4NO3}$$ Other phenols should produce wine-red solution as expected [1, p. 478]. Also, this and other phenol tests are discussed in [2]. Pavia, D. L. A small-scale approach to organic laboratory techniques, 3rd Ed.; Brooks/Cole, Cengage Learning: Belmont, CA, 2011 . ISBN 978-1-4390-4932-7. Atul, S. The Pearson Guide to Objective Chemistry for the AIEEE; Pearson Education India, 2010. ISBN 978-81-317-3381-3. | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/80310', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/45545/']} | jdg_84202 |
stackexchange | llm_judgeable_groundtruth_similarity | 19403292 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to test the return value of a parallel assignment, and I write puts (x, y = 1, 2) , but it doesn't work, and print the error message: SyntaxError: (irb):74: syntax error, unexpected ',', expecting ')'puts (x,y =1,2) ^(irb):74: syntax error, unexpected ')', expecting end-of-input Is there anything wrong?
Now provide the response and nothing else.
| You have two problems. The space between puts and the ( prevents the parenthesized list from being interpreted as an argument list. Once you put a space after a method name, any argument list has to be outside of parentheses, and any parenthesized parts must be a legal expressions. In Ruby, (x,y,z) is not a legal single expression, so you get the above error. If you remove the space, you get this: > puts(x, y = 1, 2) NameError: undefined local variable or method `x' for main:Object That's because of your second problem: The statement puts(x,y = 1,2) is parsed as puts(x, y=1, 2) ; that is, Ruby interprets it as passing three arguments to puts : x is the first, y=1 is the second, and 2 is the third. Since the x is not on the left side of an assignment and hasn't been defined yet, you get the above error. Use an extra set of parentheses to group the entire assignment together as one argument: > puts((x,y=1,2)) 1 2 But note that this is passing a single list that contains two items . It doesn't make a difference with puts , but it will for methods that distinguish between lists and individual parameters: > def foo(a,b) puts "a=#{a},b=#{b}" end> foo((x,y=1,2))ArgumentError: wrong number of arguments (1 for 2) So in that case you need one more piece of punctuation - the splat operator: > foo(*(x,y=1,2)) a=1, b=2 Interesting, but of little practical relevance, is the fact that once you've doubled the parentheses, you can put the space back if you want: > puts ((x, y = 1, 2)) 1 2 But again, that turns them from argument-wrappers into just an extra expression-wrapper; you could put any number of parentheses around that without changing anything. That means that in the foo case, the splat operator has to go outside both sets of parentheses: > foo (*(x,y=1,2))SyntaxError: (irb):24: syntax error, unexpected tSTAR> foo *((x,y=1,2))a=1, b=2 It's generally considered bad style in Ruby to use the parenthesisless form when the first argument itself includes parentheses, however. Depending on your Ruby version you may get a warning about such statements, but even if you don't, it's better to use the fully-parenthesized version (with no space after the method name). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19403292', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2298986/']} | jdg_84203 |
stackexchange | llm_judgeable_groundtruth_similarity | 10093965 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
if I have the following dataframe: value factorA factorB1 a e2 a f3 a g1 b k2 b l3 b m1 c e2 c g how can I get for each factorA the highest value and the entry from factorB associated with it i.e. value factorA factorB3 a g3 b m2 c g Is this possible without first using blocks<-split(factorA, list(), drop=TRUE) and then sorting each block$a as this will be performed many times and number of blocks will always change.
Now provide the response and nothing else.
| Here is one option, using base R functions: maxRows <- by(df, df$factorA, function(X) X[which.max(X$value),])do.call("rbind", maxRows)# value factorA factorB# a 3 a g# b 3 b m# c 2 c g | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10093965', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1322296/']} | jdg_84204 |
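For comparison, the same result can presumably be obtained with dplyr (assuming the data frame is called df, as in the question); this is an alternative to the base-R approach above, not what the answer itself used.

```r
library(dplyr)

df %>%
  group_by(factorA) %>%
  slice_max(value, n = 1, with_ties = FALSE) %>%  # keep the row with the largest value per group
  ungroup()
```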
stackexchange | llm_judgeable_groundtruth_similarity | 23258282 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to develop a Web API self-hosting app using OWIN. In my own XyzStartup class, I need an external argument: contentFolderPath. However, I didn't find a way to pass this argument. Here is my code: var contentFolderPath = this.TextBox.Text; // user input var startOptions = new StartOptions(); using(WebApp.Start<XyzStartup>(startOptions)){} My startup: public class XyzStartup{ XyzStartup(string contentFolderPath) { ... }} I noticed there is a StartOptions class, but I don't know how to use it. Can I use it in my XyzStartup class? Thanks in advance! I finally found a way to do this: var startOptions = new StartOptions(); startOptions.Urls.Add('..some url ..'); WebApp.Start(startOptions, (appBuilder)=>{ new XyzStartup(contentFolderPath).Configuration(appBuilder);});
Now provide the response and nothing else.
| If you want to pass parameter to StartUp class, you can use Action<IAppBuilder> in WebApp.Start like Cillié Malan mentioned in the comment instead of launching with Type parameter( WebApp.Start<T> ) Here is a concrete example for self-hosting: object someThingYouWantToAccess;var server = WebApp.Start("http://localhost:8080/", (appBuilder) =>{ // You can access someThingYouWantToAccess here // Configure Web API for self-host. HttpConfiguration config = new HttpConfiguration(); config.MapHttpAttributeRoutes(); config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); appBuilder.UseWebApi(config);}); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23258282', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/146121/']} | jdg_84205 |
stackexchange | llm_judgeable_groundtruth_similarity | 68683 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a binary file on Unix, which when executed asks the user for inputs. The inputs to this binary file are given manually. I want to automate the procedure, since I know the inputs already. I can't modify the binary file. How can I do this ? Is this possible via shell script or even with ant targets ?
Now provide the response and nothing else.
| If the program reads from standard input (as opposed to direct from the terminal), you could do something like echo -e "answer1\nanswer2\nanswer3\n" | your_program A here document may be more readable: your_program <<'EOF'answer1answer2answer3EOFdo_more_stuff (You can pick any string instead of EOF , just make sure to use the same in <<'somestring' and to mark the end of the input. The string must not appear as an input line. The end-of-input mark must not be indented.) If you need more complex interaction then an expect script is what you want. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/68683', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/26374/']} | jdg_84206 |
stackexchange | llm_judgeable_groundtruth_similarity | 54095892 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My app is in production and I want to manage user data when a user updates the app, without loss of their data. How can I achieve this with sqflite? Explicitly, I want to add a column and delete another.
Now provide the response and nothing else.
| You can probably add a column using raw sql, but sqlite (and thus sqflite) doesn't support dropping a column. For that you would need to do the following: increase the database version number in onUpgrade copy the old database columns to a temporary table delete the original table create a new table using the original table name but with the right schema copy the data from the temp table delete the temp table Sorry, this isn't a full answer, but it is the direction I would go if I were in your situation. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54095892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7053359/']} | jdg_84207 |
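A hedged sketch of how those steps might look with sqflite's onUpgrade callback; the table and column names are invented for illustration, and the drop-column part uses a rename-based variant of the copy/recreate dance described above.

```dart
final db = await openDatabase(
  path,
  version: 2, // bumped so onUpgrade runs for existing installs
  onCreate: (db, version) async {
    // Fresh installs get the new schema directly.
    await db.execute(
        'CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, new_col TEXT)');
  },
  onUpgrade: (db, oldVersion, newVersion) async {
    if (oldVersion < 2) {
      // Old schema was items(id, name, old_col); the new one adds new_col
      // and drops old_col by simply not carrying it over.
      await db.execute('ALTER TABLE items RENAME TO items_tmp');
      await db.execute(
          'CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, new_col TEXT)');
      await db.execute(
          'INSERT INTO items (id, name) SELECT id, name FROM items_tmp');
      await db.execute('DROP TABLE items_tmp');
    }
  },
);
```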
stackexchange | llm_judgeable_groundtruth_similarity | 40585822 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Description: Answer for Objective-C and Swift2.0 : How to center align the cells of a UICollectionView? I usually would try to convert the Swift2.0 answer to the Swift3.0 solution, however the method: func collectionView(collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, insetForSectionAtIndex section: Int) -> UIEdgeInsets { let edgeInsets = (screenWight - (CGFloat(elements.count) * 50) - (CGFloat(elements.count) * 10)) / 2 return UIEdgeInsetsMake(0, edgeInsets, 0, 0); } doesn't seem to exist in Swift3.0 ,and the only other method I found that seems useful is: func collectionView(_ collectionView: UICollectionView, transitionLayoutForOldLayout fromLayout: UICollectionViewLayout, newLayout toLayout: UICollectionViewLayout) -> UICollectionViewTransitionLayout { <#code#>} but I am unsure how to implement it correctly. Question: How to center align the cells of a UICollectionView in Swift3.0 ? (A simple and general solution for iOS and tvOS would be perfect)
Now provide the response and nothing else.
| This ended up being the solution I used. Read the code comments for a better understanding. Swift 5 func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, insetForSectionAt section: Int) -> UIEdgeInsets { //Where elements_count is the count of all your items in that //Collection view... let cellCount = CGFloat(elements_count) //If the cell count is zero, there is no point in calculating anything. if cellCount > 0 { let flowLayout = collectionViewLayout as! UICollectionViewFlowLayout let cellWidth = flowLayout.itemSize.width + flowLayout.minimumInteritemSpacing //20.00 was just extra spacing I wanted to add to my cell. let totalCellWidth = cellWidth*cellCount + 20.00 * (cellCount-1) let contentWidth = collectionView.frame.size.width - collectionView.contentInset.left - collectionView.contentInset.right if (totalCellWidth < contentWidth) { //If the number of cells that exists take up less room than the //collection view width... then there is an actual point to centering them. //Calculate the right amount of padding to center the cells. let padding = (contentWidth - totalCellWidth) / 2.0 return UIEdgeInsets(top: 0, left: padding, bottom: 0, right: padding) } else { //Pretty much if the number of cells that exist take up //more room than the actual collectionView width, there is no // point in trying to center them. So we leave the default behavior. return UIEdgeInsets(top: 0, left: 40, bottom: 0, right: 40) } } return UIEdgeInsets.zero} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40585822', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3589092/']} | jdg_84208 |
stackexchange | llm_judgeable_groundtruth_similarity | 18878802 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Firstly, I would like to say that I come from a non-Computer-Science background and I have been learning the C++ language. I am unable to understand what exactly a cache is. It has different meanings in different contexts. I would like to know what would be called a cache in a C++ program. For example, if I have some int data in a file and I read it and store it in an int array, would this mean that I have 'cached' the data? To me this seems like common sense, since reading from a file is always worse than reading from RAM. But I am a little confused due to this article: In a CPU there can be several caches, to speed up instructions in loops or to store often accessed data. These caches are small but very fast. Reading data from cache memory is much faster than reading it from RAM. It says that reading data from cache is much faster than from RAM. I thought RAM and cache were the same. Can somebody please clear up my confusion? EDIT: I am updating the question because previously it was too broad. My confusion started with this answer. It says RowData and m_data are specific to my implementation, but they are simply used to cache information about a row in the file What does cache mean in this context?
Now provide the response and nothing else.
| Any modern CPU has several layers of cache that are typically named things like L1, L2, L3 or even L4. This is called a multi-level cache. The lower the number, the faster the cache will be. It's important to remember that the CPU runs at speeds that are significantly faster than the memory subsystem. It takes the CPU a tiny eternity to wait for something to be fetched from system memory, many, many clock-cycles elapse from the time the request is made to when the data is fetched, sent over the system bus, and received by the CPU. There's no programming construct for dealing with caches, but if your code and data can fit neatly in the L1 cache, then it will be fastest. Next is if it can fit in the L2, and so on. If your code or data cannot fit at all, then you'll be at the mercy of the system memory, which can be orders of magnitude slower. This is why counter-intuitive things like unrolling loops, which should be faster, might end up being slower because your code becomes too large to fit in cache. It's also why shaving a few bytes off a data structure could pay huge dividends even though the memory footprint barely changes. If it fits neatly in the cache, it will be faster. The only way to know if you have a performance problem related to caching is to benchmark very carefully. Remember each processor type has varying amounts of cache, so what might work well on your i7 CPU might be relatively terrible on an i5. It's only in extremely performance sensitive applications that the cache really becomes something you worry about. For example, if you need to maintain a steady 60FPS frame rate in a game, you'll be looking at cache problems constantly. Every millisecond counts here. Likewise, anything that runs the CPU at 100% for extended periods of time, such as rendering video, will want to pay very close attention to how much they could gain from adjusting the code that's emitted. You do have control over how your code is generated with compiler flags. Some will produce smaller code, some theoretically faster by unrolling loops and other tricks. To find the optimal setting can be a very time-consuming process. Likewise, you'll need to pay very careful attention to your data structures and how they're used. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18878802', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/982049/']} | jdg_84209 |
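To make the "fits in cache" point tangible, here is a small, self-contained C++ toy benchmark (not from the answer): both loops do identical work on the same data, but the row-major walk touches memory sequentially while the column-major walk strides by a whole row, so it typically causes far more cache misses and runs noticeably slower.

```cpp
#include <chrono>
#include <iostream>
#include <vector>

int main() {
    const std::size_t n = 4096;                 // 4096 x 4096 ints, ~64 MiB
    std::vector<int> m(n * n, 1);

    auto time_sum = [&](bool row_major) {
        auto start = std::chrono::steady_clock::now();
        long long sum = 0;
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                sum += row_major ? m[i * n + j]   // sequential, cache-friendly
                                 : m[j * n + i];  // strided, cache-hostile
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::cout << (row_major ? "row-major:    " : "column-major: ")
                  << ms << " ms (sum = " << sum << ")\n";
    };

    time_sum(true);
    time_sum(false);
    return 0;
}
```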
stackexchange | llm_judgeable_groundtruth_similarity | 1696225 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hi, can someone tell me why the same problem occurs in both Linux and Windows: #include <iostream> using namespace std;class A{private: int _dmember;public: void func() { cout<<"Inside A!! "<<endl; cout<<_dmember; // crash when reached here. }};int main (){ A* a= NULL; a->func(); // prints "Inside A!!!" return 1;} Can someone tell me why this weird behavior occurs? I mean, the a->func() was not supposed to get inside func(), was it? This is unwanted behavior; why does the above behavior occur? EDIT: Of course, a = NULL was intentional!! So for all who answered "this is undefined behavior" or "you should never try to call a function on a NULL pointer!!", come on... that was the point. And this behavior was explained correctly by some of you.
Now provide the response and nothing else.
| This is undefined behaviour. You must never call functions on a null pointer. With that out of the way, let's answer the question I think you're asking: why do we get partway into the function anyway? When you are invoking UB, the compiler is free to do anything, so it's allowed to emit code that works anyway. That's what happens on some (many?) systems in this particular case. The reason that you're able to call the function on a null pointer successfully is that your compilers don't store the function "in" the object. Rather, the above code is interpreted somewhat like this: class A { int _dmember;};void A::func(A *this) { cout << "Inside A!!" << endl; cout << this->_dmember << endl;}int main() { A *a = ...; A::func(a);} So, you see there is nothing that actually prevents you from calling a function on a null pointer; it'll just invoke the body of the function, with the this pointer set to null. But as soon as the function tries to dereference the this pointer by accessing a field inside the class, the operating system steps in and kills your program for illegal memory access (called segmentation fault on Linux, access violation on Windows). Nitpicker's corner : Virtual functions are a different story. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1696225', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/179744/']} | jdg_84210 |
stackexchange | llm_judgeable_groundtruth_similarity | 47634907 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to get the documents inside an subcollection which is part of an document found with the .where function Example: RootColl/ Doc A/ SubColl 1 Doc 1 Doc 2 Doc 3 SubColl 2 Docs Doc A/ SubColl 1 Doc 1 Doc 2 Doc 3 SubColl 2 Docs I want to get all the documents under SubColl 1 from the doc with the field level == 1 I am trying to do it like: db.collection("RootColl").where("field", "==", "1").collection("SubColl 1").get() But by doing that I get the error Uncaught TypeError: db.collection(...).where(...).collection is not a function EDIT 1: By following Frank van Puffelen suggestion, i get the same error, "collection" is not a function Current Code Error
Now provide the response and nothing else.
| A sub-collection lives under a specific document. A query as you've shared now points to a number of documents. You'll need to execute the query to determine what documents it points to, then loop over the results, and get the sub-collection for each document. In code: var query = db.collection("RootColl").where("field", "==", "1");query.get().then((querySnapshot) => { querySnapshot.forEach((document) => { document.ref.collection("SubColl 1").get().then((querySnapshot) => { ... }); });}); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47634907', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8155416/']} | jdg_84211 |
stackexchange | llm_judgeable_groundtruth_similarity | 1496980 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a function in Haskell which finds the maximum value of an exponentiation from a list: prob99 = maximum $ map (\xs -> (head xs)^(head (tail xs))) numbers What I need to find is the location of this maximum value in the resultant list. How would I go about this? Edit: I found a solution that goes like this: n = [[519432,525806],[632382,518061]....prob99b [a,b] = b* (log a)answer = snd $ maximum (zip (map prob99b n) [1..])
Now provide the response and nothing else.
| How to find the index of the maximum element? How about trying all indexes and checking whether they are the maximum? ghci> let maxIndex xs = head $ filter ((== maximum xs) . (xs !!)) [0..] But this sounds like something for which a function already exists. My code would be more readable, maintainable, and probably even more efficient if I used the existing function. FYI, you can also ask Hoogle, which can search by Haskell type signatures (as Will suggested): $ hoogle "Ord a => [a] -> Int" | head<Nothing relevant>$ # hmm, so no function to give me the index of maximum outright,$ # but how about finding a specific element, and I give it the maximum?$ hoogle "a -> [a] -> Int" | headData.List elemIndex :: Eq a => a -> [a] -> Maybe IntData.List elemIndices :: Eq a => a -> [a] -> [Int] | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1496980', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/150851/']} | jdg_84212
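A minimal usage sketch, assuming only Data.List from base, of how the elemIndex found via Hoogle combines with maximum:

```haskell
import Data.List (elemIndex)

-- Index of the first occurrence of the maximum; Nothing for an empty list.
maxIndex :: Ord a => [a] -> Maybe Int
maxIndex [] = Nothing
maxIndex xs = elemIndex (maximum xs) xs

main :: IO ()
main = print (maxIndex [3, 1, 4, 1, 5, 9, 2, 6])  -- Just 5
```

This still traverses the list twice; the zip trick from the question's own edit avoids that if the double pass ever matters.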
stackexchange | llm_judgeable_groundtruth_similarity | 12534334 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I updated my iphone4s and xcode to the latest yesterday. And I'm trying to debug my app on my phone since my application uses hardware communication. I'm using a third party library from Redpark(the company provides Dock to RS232 wire.)While compiling, it gives an error ld: file is universal (2 slices) but does not contain a(n) armv7s slice:/Users/...../libRscMgrUniv.a for architecture armv7sclang: error: linker command failed with exit code 1 (use -v to see invocation) Does anyone know how to solve this except waiting for Redpark to update there library?
Now provide the response and nothing else.
| You can simply disable the compilation of armv7s in your Build Settings, by replacing the $VALID_ARCHITECTURES (which evaluates to armv7 armv7s now) value by the explicit value armv7 . Thus your application will only build for armv7 and not armv7s , so will not be fully optimized for the new iPhone5, but it will still run on iPhone 4 and 5 anyway (the armv7s processor of the iPhone 5 can execute armv7 applications, it is just not as optimized as with armv7s processor dedicated instructions) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12534334', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1491987/']} | jdg_84213 |
stackexchange | llm_judgeable_groundtruth_similarity | 39602837 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Trying to understand what the 'onlySelf' parameter does when passing to setValue. this.form.get('name').setValue('', { onlySelf: true }) The documentation says: "If onlySelf is true, this change will only affect the validation of this FormControl and not its parent component. This defaults to false." However I'm struggling to understand this. Still fairly new to the using Angulars' model driven forms.
Now provide the response and nothing else.
| Angular2 by default will check for the form control/form group validity cascadingly up to the top level whenever there's an update to any form element value, unless you say no. onlySelf is the tool to help you do that. Say you have a loginForm that has a username field and a password field, both of them are required, like this: this.userNameControl = this.formBuilder.control('Harry', Validators.required);this.passwordControl = this.formBuilder.control('S3cReT', Validators.required);this.loginForm = this.formBuilder.group({ userName: this.userNameControl, password: this.passwordControl}); After this code, this.loginForm.valid is true . If you set the value of a control using the default setting ( onlySelf = false ), Angular2 will update the control's validity as well as form group's validity. For example, this: this.passwordControl.setValue(''); will result in this.passwordControl.valid === falsethis.loginForm.valid === false However, this: this.passwordControl.setValue('', { onlySelf: true }); will only change passwordControl 's validity only: this.passwordControl.valid === falsethis.loginForm.valid === true | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/39602837', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1108312/']} | jdg_84214 |
stackexchange | llm_judgeable_groundtruth_similarity | 9819185 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to switch from Dropbox to the open source Sparkleshare. It uses git for the syncing and versioning. Say I deleted a 1GB file in my folder: it stays within the history of the local .git folder, but I would like to keep this kind of heavy data on the server and not on the client. How can I commit my repository and delete the local one with git? Many thanks!
Now provide the response and nothing else.
| Solution removing the history: (1) git fetch --depth=1 to prune the old commits; this makes the old commits and their objects unreachable. (2) git reflog expire --expire-unreachable=now --all to expire all old commits and their objects. (3) git gc --aggressive --prune=all to remove the old objects. Specific solution just to remove the large file from the local history: GitHub offers Git Large File Storage, which always stores large files on the server and fetches them only on checkouts; it is very easy to set up and use in case your repository is on GitHub, Bitbucket, or GitLab. git-annex allows storing big files anywhere. Discussion: How do Git LFS and git-annex differ? | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9819185', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/376244/']} | jdg_84215
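For the Git LFS route mentioned above, the basic setup is only a few commands. A minimal sketch, where the *.iso pattern and file name are just placeholders:

```bash
git lfs install                 # one-time setup per repository/machine
git lfs track "*.iso"           # tell LFS which paths to manage
git add .gitattributes          # the tracking rule lives here
git add big-image.iso
git commit -m "Store large binaries via Git LFS"
```

From then on the repository only carries small pointer files; the binaries themselves live on the LFS server.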
stackexchange | llm_judgeable_groundtruth_similarity | 3765234 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got a Git repository with plenty of commits that are under no particular branch, I can git show them, but when I try to list branches that contain them, it reports back nothing. I thought this is the dangling commits/tree issue (as a result of -D branch), so I pruned the repo, but I still see the same behavior after that: $ git fetch origin$ git fsck --unreachable$ git fsck No output, nothing dangling (right?). But the commit exists $ git show 793db7f272ba4bbdd1e32f14410a52a412667042commit 793db7f272ba4bbdd1e32f14410a52a412667042Author: ... and it is not reachable through any branch as $ git branch --contains 793db7f272ba4bbdd1e32f14410a52a412667042 gives no output. What exactly is the state of that commit? How can I list all commits in a similar state? How can I delete commits like those?
Now provide the response and nothing else.
| To remove all dangling commits (including those still reachable from stashes and other reflogs) do this: git reflog expire --expire-unreachable=now --allgit gc --prune=now But be certain that this is what you want. I recommend you read the man pages but here is the gist: git gc removes unreachable objects (commits, trees, blobs (files)). An object is unreachable if it isn't part of the history of some branch. Actually it is a bit more complicated: Stashes are implemented using the reflog (i.e not not branches or tags). That means that they are subject to garbage collection. git gc does some other things but they are not relevant here and not dangerous. Unreachable objects that are younger than two weeks are not removed so we use --prune=now which means "remove unreachable objects that were created before now". Objects can also be reached through the reflog. While branches record the history of some project, reflogs record the history of these branches. If you amend, reset etc. commits are removed from the branch history but git keeps them around in case you realize that you made a mistake. Reflogs are a convenient way to find out what destructive (and other) operations were performed on a branch (or HEAD), making it easier to undo a destructive operation. So we also have to remove the reflogs to actually remove everything not reachable from a branch. We do so by expiring --all reflogs. Again git keeps a bit of the reflogs to protect users so we again have to tell it not to do so: --expire-unreachable=now . Since I mainly use the reflog to recover from destructive operations I usually use --expire=now instead, which zaps the reflogs completely. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3765234', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/171428/']} | jdg_84216 |
stackexchange | llm_judgeable_groundtruth_similarity | 2925014 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is the function $$f(x) = \frac 1 x - \frac 1 {x+1} + \frac{1}{x+2} - \frac 1{x+3} + \cdots$$ continuous on $(0, \infty)$? I think I should extend the continuity of each term to $f$, but I cannot show that the series converges uniformly.
Now provide the response and nothing else.
| $$\left( \frac 1 x - \frac 1 {x+1} \right) + \left( \frac 1 {x+2} - \frac 1 {x+3} \right) + \cdots + \left( \frac 1 {x+2n} - \frac 1 {x+2n+1} \right)$$ $$=\frac 1 {x(x+1)}+\frac 1 {(x+2)(x+3)}+\cdots +\frac 1 {(x+2n) (x+2n+1)}$$ $$\leq \frac 1 {x(x+1)}+\frac 1 {2^{2}}+ \frac 1 {4^{2}}+\cdots+\frac 1 {(2n)^{2}} $$ This gives uniform convergence of the partial sums. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2925014', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/424975/']} | jdg_84217
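To spell out the last step (one standard way to finish the argument, sketched here): for every $x>0$ and $k\ge 1$,
$$0 \le \frac 1 {(x+2k)(x+2k+1)} \le \frac 1 {2k(2k+1)} \le \frac 1 {4k^{2}}, \qquad \sum_{k\ge 1}\frac 1 {4k^{2}}<\infty,$$
so by the Weierstrass $M$-test the grouped series $\sum_{k\ge 1}\left(\frac 1 {x+2k}-\frac 1 {x+2k+1}\right)$ converges uniformly on $(0,\infty)$. Adding the single continuous term $\frac 1 x-\frac 1 {x+1}$, and noting that the ungrouped partial sums differ from the grouped ones by at most a single leftover term $\frac 1 {x+m}\le\frac 1 m$, gives continuity of $f$ on $(0,\infty)$.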
stackexchange | llm_judgeable_groundtruth_similarity | 44802939 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've used Scikit-learn's GridSearchCV before to optimize the hyperparameters of my models, but just wondering if a similar tool exists to optimize hyperparameters for Tensorflow (for instance number of epochs, learning rate, sliding window size etc. ) And if not, how can I implement a snippet that effectively runs all different combinations?
Now provide the response and nothing else.
| Another viable (and documented) option for grid search with Tensorflow is Ray Tune . It's a scalable framework for hyperparameter tuning, specifically for deep learning/reinforcement learning. You can try out a fast tutorial here . It also takes care of Tensorboard logging and efficient search algorithms (ie, HyperOpt integration and HyperBand ) in about 10 lines of Python. from ray import tunedef train_tf_model(config): for i in range(num_epochs): accuracy = train_one_epoch(model) tune.report(acc=accuracy)tune.run(train_tf_model, config={ "alpha": tune.grid_search([0.2, 0.4, 0.6]), "beta": tune.grid_search([1, 2]), }) (Disclaimer: I contribute actively to this project!) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44802939', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7550636/']} | jdg_84218 |
stackexchange | llm_judgeable_groundtruth_similarity | 13245258 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
__global__ void add( int a, int b, int *c ) { *c = a + b;}int main( void ) {int c;int *dev_c;HANDLE_ERROR( cudaMalloc( (void**)&dev_c, sizeof(int) ) );add<<<1,1>>>( 2, 7, dev_c );HANDLE_ERROR( cudaMemcpy( &c, dev_c, sizeof(int), cudaMemcpyDeviceToHost ) ); printf( "2 + 7 = %d\n", c );cudaFree( dev_c );} This is the code.HANDLE_ERROR not found error is being generated. i dont know how to solve it. Tried to grab some header files but can't figure it out... Any Help Please!!!
Now provide the response and nothing else.
| If I had to guess, I'd say you're using the book CUDA By Example, which defines the HANDLE_ERROR macro as follows: static void HandleError( cudaError_t err, const char *file, int line ) { if (err != cudaSuccess) { printf( "%s in %s at line %d\n", cudaGetErrorString( err ), file, line ); exit( EXIT_FAILURE ); }}#define HANDLE_ERROR( err ) (HandleError( err, __FILE__, __LINE__ )) Make sure that this code appears somewhere in your source, or somewhere in a header you #include . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13245258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1109827/']} | jdg_84219 |
stackexchange | llm_judgeable_groundtruth_similarity | 10813 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Forgetting Hooke's law for a minute why, from a microscopic perspective (preferably quantum) on up to a macroscopic one, does a spring under tension exert a force? I was thinking that there might be an analogy between the high and low pressure states of an air wave and the density of the mass distribution of the compressed and stretched states of an iron spring, but I don't know nearly enough about solid state physics to even guess.
Now provide the response and nothing else.
| You could make an analogy between the pressure distribution of a sound wave and the mass density distribution of a realistic spring undergoing vibrations, but it wouldn't give you the explanation you're looking for. As a matter of fact, that would be more like explaining a sound wave in terms of springs, rather than what you're trying to do, i.e. explaining a spring in terms of waves. Although I'm not intimately familiar with the details, basically what goes on at the microscopic level of a spring is that, when the spring is at equilibrium, the atoms are set in some sort of rigid structure. Any given pair of atoms has a potential energy which is a function of the distance between those two atoms, so the entire spring has a potential energy determined by all the distances between every possible pair of atoms: $$U = \sum_{i,j} U_{ij}(r_{ij})$$ In equilibrium, the spring will take a shape which minimizes this total potential energy. If you think about it, a metal spring might typically be formed by heating some metal to make it malleable (or even melting it), and then forming it into the desired shape before it cools. The heat allows the atoms to move around relatively freely so that they can reach the equilibrium configuration that minimizes their potential energy, then once the spring cools, they are frozen in place. Of course, the atoms are not completely frozen in place. As I see that Georg has already written in his answer, the potential energy between two atoms ($U_{ij}(r_{ij})$) has a minimum at their equilibrium distance and goes up on either side. If you add some energy into the system, say by exerting a force on it, you can get the atoms to move closer together or further apart. When you stretch or compress a string, you are really just doing this to all the (pairs of) atoms in the spring simultaneously. The atoms will, of course, "try" to return to their equilibrium position, i.e. they will "try" to minimize their potential energy, and this is what you feel as the restoring force of a spring under tension. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/10813', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/2820/']} | jdg_84220 |
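To connect this picture back to Hooke's law, one can expand a single pair potential about its equilibrium separation, written here as $r_0$ (the first-order term vanishes because $r_0$ is a minimum):
$$U_{ij}(r) \approx U_{ij}(r_0) + \tfrac{1}{2}\,U_{ij}''(r_0)\,(r-r_0)^2, \qquad F = -\frac{dU_{ij}}{dr} \approx -\,U_{ij}''(r_0)\,(r-r_0).$$
For small displacements each pair therefore acts like a tiny linear spring of stiffness $k_{ij}=U_{ij}''(r_0)$, and the macroscopic spring constant emerges from summing these contributions over all the pairs together with the geometry of the coil.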
stackexchange | llm_judgeable_groundtruth_similarity | 11536934 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: ń ǹ ň ñ ṅ ņ ṇ ṋ ṉ ̈ ɲ ƞ ᶇ ɳ ȵ --> n or Remove diacritical marks from unicode chars How to replace special characters in a string? I would like to format some String such as "I>Télé" to something like "itele" .The idea is that I want my String to be lower case (done), without whitespaces (done), no accents or special characters (like > , < , / , % , ~ , é , @ , ï etc). It is okay to delete occurences of special characters, but I want to keep letters while removing accents (as I did in my example). Here is what I did, but I don't think that the good solution is to replace every é,è,ê,ë by "e", than do it again for "i","a" etc, and then remove every special character... String name ="I>télé" //exampleString result = name.toLowerCase().replace(" ", "").replace("é","e").........; The purpose of that is to provide a valid filename for resources for an Android app, so if you have any other idea, I'll take it !
Now provide the response and nothing else.
| You can use the java.text.Normalizer class to convert your text into normal Latin characters followed by diacritic marks (accents), where possible. So for example, the single-character string "é" would become the two character string ['e', {COMBINING ACUTE ACCENT}] . After you've done this, your String would be a combination of unaccented characters, accent modifiers, and the other special characters you've mentioned. At this point you could filter the characters in your string using only a whitelist to keep what you want (which could be as simple as [A-Za-z0-9] for a regex, depending on what you're after). An approach might look like: String name ="I>télé"; //exampleString normalized = Normalizer.normalize(name, Form.NFD);String result = normalized.replaceAll("[^A-Za-z0-9]", ""); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11536934', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1520739/']} | jdg_84221 |
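Since the asker also wants the result lower-cased (to get "itele" rather than "Itele"), the pieces can be put together like this. A minimal sketch using only the standard Normalizer and String APIs; the class and method names are arbitrary:

```java
import java.text.Normalizer;
import java.util.Locale;

public final class Slug {
    /** "I>Télé" -> "itele": decompose accents, keep [A-Za-z0-9], lower-case. */
    static String slug(String input) {
        String normalized = Normalizer.normalize(input, Normalizer.Form.NFD);
        return normalized.replaceAll("[^A-Za-z0-9]", "")  // drops combining marks, '>', spaces, ...
                         .toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        System.out.println(slug("I>Télé")); // itele
    }
}
```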
stackexchange | llm_judgeable_groundtruth_similarity | 2871048 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Just recently I was given the task to find a non separable field extension. At first it seems like an easy task, but common fields you encounter are usually perfect. Eventually I found the example $\mathbb{F}_2(X)/\mathbb{F}_2(X^2)$, neither of characteristic $0$ nor of chracteristic $p>0$ where the Frobenius homomorphism is an automorphism, which would imply perfectness. I then started to wonder what an infinite perfect field of characteristic $p>0$ would look like and only found algebraically closed fields, i.e. the algebraic closure of $\mathbb{F}_p(X)$. What would a non algebraically closed field look like where the characteristic isn't $0$, but it is perfect nonetheless, i.e. the Frobenius homomorphism is surjective. Can anyone give me an example?
Now provide the response and nothing else.
| Take your favorite infinite non-separably closed field $F$ of characteristic $p>0$ (e. g. $F = \Bbb F_p(X)$; the field $\Bbb F_{p^2}(X)$ is a separable non-trivial extension of $F$). Let $\overline F$ be an algebraic closure of $F$ and consider the perfect closure $F^{\rm perf}$ inside $\overline F$ given by$$F^{\rm perf} := \{x\in \overline F \mid x^{p^n}\in F\text{ for some $n\in \Bbb N$}\}.$$The field $F^{\rm perf}$ is perfect and purely inseparable over $F$. (The last claim follows from the fact that $x\in \overline F$ is separable over $F$ if and only if $F(x) = F(x^p)$.) It is clear that $F^{\rm perf}$ is not algebraically closed, since for any separable (over $F$) $x\in \overline F$ with $x\notin F$ we necessarily have $x\notin F^{\rm perf}$. In the case of $F = \Bbb F_p(X)$ we have $F^{\rm perf} = \bigcup_{n\in \Bbb N} \Bbb F_p(X^{p^{-n}})$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2871048', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/574887/']} | jdg_84222 |
stackexchange | llm_judgeable_groundtruth_similarity | 44208164 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a WebView. I want to call public void evaluateJavascript(String script, ValueCallback<String> resultCallback) this method. Here is the ValueCallback interface: public interface ValueCallback<T> { /** * Invoked when the value is available. * @param value The value. */ public void onReceiveValue(T value);}; Here is my kotlin code: webView.evaluateJavascript("a", ValueCallback<String> { // cant override function }) Anyone have idea to override the onReceiveValue method in kotlin?I tried the "Convert Java to Kotlin" but result is the next: v.evaluateJavascript("e") { } Thanks!
Now provide the response and nothing else.
| The following line is called a SAM conversion : v.evaluateJavascript("e", { value -> // Execute onReceiveValue's code}) Whenever a Java interface has a single method, Kotlin allows you to pass in a lambda instead of an object that implements that interface. Since the lambda is the last parameter of the evaluateJavascript function, you can move it outside of the brackets, which is what the Java to Kotlin conversion did: v.evaluateJavascript("e") { value -> // Execute onReceiveValue's code} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44208164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5269276/']} | jdg_84223 |
stackexchange | llm_judgeable_groundtruth_similarity | 2868337 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I load a class that is already on the class path, instantiate it, and also instantiate any inner classes defined within it? EG: public class TestClass { public class InnerClass { }}
Now provide the response and nothing else.
| Inner classes cannot exist outside the parent class. You need to construct the parent class first. Without reflection this would look like: InnerClass innerClass = new TestClass().new InnerClass(); In reflection, you need to pass the parent class in during construction of the inner class. Object testClass = Class.forName("com.example.TestClass").newInstance();for (Class<?> cls : testClass.getClass().getDeclaredClasses()) { // You would like to exclude static nested classes // since they require another approach. if (!Modifier.isStatic(cls.getModifiers())) { Object innerClass = cls .getDeclaredConstructor(new Class[] { testClass.getClass() }) .newInstance(new Object[] { testClass }); }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2868337', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17675/']} | jdg_84224 |
stackexchange | llm_judgeable_groundtruth_similarity | 16621706 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using an Angular $resource to post a model to a webapi endpoint, but Angular sends the data in the request payload, rather than a JSON body or form parameters. As a result, the model always ends up as null. My API is as follows: public class UserController : APIController { [HttpPost] public void Disconnect(Models.Users.User model) { }} The request headers are: POST /siteapi/User/Disconnect HTTP/1.1Host: rouge2Connection: keep-aliveContent-Length: 125Accept: application/json, text/plain, */*Origin: http://rouge2X-Requested-With: XMLHttpRequestUser-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31Content-Type: application/json;charset=UTF-8Referer: http://rouge2/UsersAccept-Encoding: gzip,deflate,sdchAccept-Language: en-US,en;q=0.8Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3Cookie: .ASPXAUTH=115C6015BDD5C1A9D111B0A9FBF05294B73287328566F65CB1BCB457DF206EF0916D4615B5914EB443AA567E860742AC14EAA2951B9C327260C4F00649694260A1B3960771FB6675FEE8F3E68B0BB46436020ACAB33ECE0D3E70B50D6569E52B27F69161762C10C19A806A12D3254DF60EF4758DEDCA529A91CB36C74B7FA7F4; __RequestVerificationToken=Rp4Vu8R67ziDNc36DoOLZH7KmEfumig1zFDmYiFWHTsWyf2I037xJQydcmLtOfaJ3ccc8GEZXmHoa8LBRusxKFRYVoy27GuFEfNhKKYS_hUysjwCjmsxw5OCK3RKsiXIAh1Gbi0PxcdqBfzctSJn9g2 And the request payload: {"Id":3,"FirstName":"Test","LastName":"User","Username":"testuser","IsApproved":true,"IsOnlineNow":true,"IsChecked":true} If I test in Fiddler, posting the same JSON string in the body, the model is correctly populated, as expected. Is there a built-in model binder which will work in this scenario, or something pre-built somewhere I can use? Or can someone please point me in the direction of a working example?
Now provide the response and nothing else.
| You could use the $http module . Here's an example: <!DOCTYPE html><html ng-app><head> <meta name="viewport" content="width=device-width" /> <title>Index</title></head><body> <div ng-controller="TestController"> <ul ng-model="person"> <li>FirstName: {{person.FirstName}}</li> <li>LastName: {{person.LastName}}</li> <li>UserName: {{person.Username}}</li> <li>IsApproved: {{person.IsApproved}}</li> <li>IsOnlineNow: {{person.IsOnlineNow}}</li> <li>IsChecked: {{person.IsChecked}}</li> </ul> </div> <script type="text/javascript" src="~/scripts/angular.min.js"></script> <script type="text/javascript"> function TestController($scope, $http) { var data = { "Id": 3, "FirstName": "Test", "LastName": "User", "Username": "testuser", "IsApproved": true, "IsOnlineNow": true, "IsChecked": true }; $http.post( '/api/values', JSON.stringify(data), { headers: { 'Content-Type': 'application/json' } } ).success(function (data) { $scope.person = data; }); } </script></body></html> Assuming the following controller: public class ValuesController : ApiController{ [HttpPost] public User Post(User model) { return model; }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16621706', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/65890/']} | jdg_84225 |
stackexchange | llm_judgeable_groundtruth_similarity | 8581 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to have the root account in safety even if my unprivileged user is compromised. On Ubuntu you can only use sudo for "security reasons" by default. However I am not sure it is any safer than just using login on a text-mode console. There are too many things that can go wrong if an attacker can run code as my normal user. For example adding aliases, adding stuff to my PATH, setting LD_PRELOAD and X11 keyloggers just to mention a few. The only advantage I can see is the timeout so I never forget to log out. I have the same doubts about su but it doesn't even have time limit. Some operations (especially IO redirection) are more convinient with su but security-wise this seems to be worse. Login on a text-mode console seems to be the safest. Since it is started by init if an attacker can control PATH or LD_PRELOAD he is already root. The keypress events can't be intercepted by programs running on X. I don't know if programs running on X can intercept [ctrl]+[alt]+[f1] (and open a fullscreen window that looks like a console) or it is safe like [ctrl]+[alt]+[del] on Windows. Besides that the only problem I see is the lack of timeout. So am I missing something? Why did the Ubuntu guys decide to only allow sudo? What can I do to improve the security of any of the methods? What about SSH? Traditionally root can't log in through SSH. But using the above logic wouldn't this be the safest thing to do: allow root through SSH switch to text-mode log in as root ssh to the other machine log in as root?
Now provide the response and nothing else.
| Security is always about making trade-offs. Just like the proverbial server which is in a safe, unplugged, at the bottom of the ocean, root would be most secure if there were no way to access it at all. LD_PRELOAD and PATH attacks like those you describe assume that there is an attacker with access to your account already, or at least to your dotfiles. Sudo doesn't protect against that very well at all — if they have your password, after all, no need to try tricking you for later... they can just use sudo now . It's important to consider what Sudo was designed for originally: delegation of specific commands (like those to manage printers) to "sub-administrators" (perhaps grad students in a lab) without giving away root completely. Using sudo to do everything is the most common use I see now, but it's not necessarily the problem the program was meant to solve (hence the ridiculously complicated config file syntax). But, sudo-for-unrestricted-root does address another security problem: manageability of root passwords. At many organizations, these tend to be passed around like candy, written on whiteboards, and left the same forever. That leaves a big vulnerability, since revoking or changing access becomes a big production number. Even keeping track of what machine has what password is a challenge — let alone tracking who knows which one. Remember that most "cyber-crime" comes from within. With the root password situation described, it's hard to track down who did what — something sudo with remote logging deals with pretty well. On your home system, I think it's really more a matter of the convenience of not having to remember two passwords. It's probable that many people were simply setting them to be the same — or worse, setting them to be the same initially and then letting them get out of sync, leaving the root password to rot. Using passwords at all for SSH is dangerous, since password-sniffing trojaned ssh daemons are put into place in something like 90% of the real-world system compromises I've seen. It's much better to use SSH keys, and this can be a workable system for remote root access as well. But the problem there is now you've moved from password management to key management, and ssh keys aren't really very manageable. There's no way of restricting copies, and if someone does make a copy, they have all the attempts they want to brute-force the passphrase. You can make policy saying that keys must be stored on removable devices and only mounted when needed, but there's no way of enforcing that — and now you've introduced the possibility of a removable device getting lost or stolen. The highest security is going to come through one-time keys or time/counter-based cryptographic tokens. These can be done in software, but tamper-resistant hardware is even better. In the open source world, there's WiKiD , YubiKey , or LinOTP , and of course there's also the proprietary heavyweight RSA SecurID . If you're in a medium-to-large organization, or even a security-conscious small one, I highly recommend looking into one of these approaches for administrative access. It's probably overkill for home, though, where you don't really have the management hassles — as long as you follow sensible security practices. | {} | {'log_upvote_score': 8, 'links': ['https://unix.stackexchange.com/questions/8581', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/4477/']} | jdg_84226 |
stackexchange | llm_judgeable_groundtruth_similarity | 13375360 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have to generate XMl output. It should display arrays as shown later. I am unable to render the arrays in the proper way. Input XML: <accounts> <displayOrdinal>0</displayOrdinal> <name>String</name> <account> <accountNumber>String</accountNumber> <name>String</name> <balance> <balanceAmount>0.0</balanceAmount> </balance> <balance> <balanceAmount>0.0</balanceAmount> </balance> <properties> <displayOrdinal>0</displayOrdinal> </properties> <properties> <displayOrdinal>0</displayOrdinal> </properties> <usage> <type>String</type> </usage> <usage> <type>String</type> </usage> </account> <account> <accountNumber>String</accountNumber> <name>String</name> <balance> <balanceAmount>0.0</balanceAmount> </balance> <balance> <balanceAmount>0.0</balanceAmount> </balance> <properties> <displayOrdinal>0</displayOrdinal> </properties> <properties> <displayOrdinal>0</displayOrdinal> </properties> <usage> <type>String</type> </usage> <usage> <type>String</type> </usage> </account></accounts> My expected output should be as follows: <json:object xmlns:json="http://www.ibm.com/xmlns/prod/2009/jsonx"> <json:object name="accounts"> <json:string name="displayOrdinal">0</json:string> <json:string name="name">String</json:string> <json:array name="account"> <json:object> <json:string name="accountNumber">String</json:string> <json:string name="name">String</json:string> <json:array name="balance"> <json:object> <json:string name="balanceAmount">0.0</json:string> </json:object> <json:object> <json:string name="balanceAmount">0.0</json:string> </json:object> </json:array> <json:array name="properties"> <json:object> <json:string name="displayOrdinal">0</json:string> </json:object> <json:object> <json:string name="displayOrdinal">0</json:string> </json:object> </json:array> <json:array name="usage"> <json:object> <json:string name="type">String</json:string> </json:object> <json:object name="usage"> <json:string name="type">String</json:string> </json:object> </json:array> </json:object> <json:object> <json:string name="accountNumber">String</json:string> <json:string name="name">String</json:string> <json:object name="balance"> <json:string name="balanceAmount">0.0</json:string> </json:object> <json:array name="balance"> <json:object> <json:string name="balanceAmount">0.0</json:string> </json:object> <json:object> <json:string name="displayOrdinal">0</json:string> </json:object> <json:object> <json:string name="displayOrdinal">0</json:string> </json:object> </json:array> <json:array name="usage"> <json:object> <json:string name="type">String</json:string> </json:object> <json:object> <json:string name="type">String</json:string> </json:object> </json:array> </json:object> </json:array> </json:object></json:object> The XSL that I am using is as below: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:json="http://www.ibm.com/xmlns/prod/2009/jsonx"> <xsl:output method="xml" indent="yes" encoding="UTF-8" omit-xml-declaration="yes"/> <xsl:strip-space elements="*"/> <!-- Array --> <xsl:template match="*[*[2]][name(*[1])=name(*[2])]"> <json:object name="{name()}"> <json:array name="{name(*[1])}"> <xsl:apply-templates/> </json:array> </json:object> </xsl:template> <!-- Array member --> <xsl:template match="*[parent::*[ name(*[1])=name(*[2]) ]] | /"> <json:object> <xsl:apply-templates/> </json:object> </xsl:template> <!-- Object --> <xsl:template match="*"> <xsl:choose> <xsl:when test="text()"> <json:string name="{name()}"> <xsl:value-of select="."/> </json:string> </xsl:when> 
<xsl:otherwise> <json:object name="{name()}"> <xsl:apply-templates/> </json:object> </xsl:otherwise> </xsl:choose> </xsl:template> <!-- String --> <xsl:template match="*[not(*)]"> <xsl:choose> <xsl:when test="not(boolean(text()))"> <xsl:element name="json:null"> <xsl:attribute name="name"><xsl:value-of select="name()"/></xsl:attribute> </xsl:element> </xsl:when> <xsl:otherwise> <xsl:choose> <xsl:when test="text()= 'false' or text()='true'"> <xsl:element name="json:boolean"> <xsl:attribute name="name"><xsl:value-of select="name()"/></xsl:attribute> <xsl:value-of select="text()"/> </xsl:element> </xsl:when> <xsl:otherwise> <json:string name="{name()}"> <xsl:if test="@*"> <xsl:attribute name="{name(@*)}"><xsl:value-of select="@*"/></xsl:attribute> </xsl:if> <xsl:value-of select="."/> </json:string> </xsl:otherwise> </xsl:choose> </xsl:otherwise> </xsl:choose> </xsl:template></xsl:stylesheet>
Now provide the response and nothing else.
| Android R Update: From Android R, this method always returns false. Google says that this is done "to protect goat privacy": /** * Used to determine whether the user making this call is subject to * teleportations. * * <p>As of {@link android.os.Build.VERSION_CODES#LOLLIPOP}, this method can * now automatically identify goats using advanced goat recognition technology.</p> * * <p>As of {@link android.os.Build.VERSION_CODES#R}, this method always returns * {@code false} in order to protect goat privacy.</p> * * @return Returns whether the user making this call is a goat. */public boolean isUserAGoat() { if (mContext.getApplicationInfo().targetSdkVersion >= Build.VERSION_CODES.R) { return false; } return mContext.getPackageManager() .isPackageAvailable("com.coffeestainstudios.goatsimulator");} Previous answer: From their source , the method used to return false until it was changed in API 21. /** * Used to determine whether the user making this call is subject to * teleportations. * @return whether the user making this call is a goat */public boolean isUserAGoat() { return false;} It looks like the method has no real use for us as developers. Someone has previously stated that it might be an Easter egg . In API 21 the implementation was changed to check if there is an installed app with the package com.coffeestainstudios.goatsimulator /** * Used to determine whether the user making this call is subject to * teleportations. * * <p>As of {@link android.os.Build.VERSION_CODES#LOLLIPOP}, this method can * now automatically identify goats using advanced goat recognition technology.</p> * * @return Returns true if the user making this call is a goat. */public boolean isUserAGoat() { return mContext.getPackageManager() .isPackageAvailable("com.coffeestainstudios.goatsimulator");} Here is the source and the change . | {} | {'log_upvote_score': 12, 'links': ['https://Stackoverflow.com/questions/13375360', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1731504/']} | jdg_84227 |
stackexchange | llm_judgeable_groundtruth_similarity | 8678 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm planning to build a digitally-controlled light dimmer circuit. I'm not trained in electrical engineering, and I don't want to do something that an electrician would immediately recognize as dangerous and stupid. Please suggest a reasonable home set up for prototyping circuits involving 120V AC power.
Now provide the response and nothing else.
| As far as lab equipment goes, a 1:1 safety transformer (AC mains : AC mains) is worth a lot. They aren't cheap, but I would not want to work without one. Mine is home-built and uses two 250 W transformers back-to-back. The trick of the 1:1 transformer is this: Current from its secondary winding can only go back to the other end of this exact winding. As long as you touch any circuit connected to the 1:1 transformer with one hand only, you are safe because the current from your finger can not go anywhere. You are a bit like a bird on a wire. A regular wall outlet is referenced to earth, just like you are when standing on the floor: In a fault, current from the live pin of the outlet runs through you, the floor and to earth, which is equal to the other end (neutral) of the wall outlet. Keep in mind that current always needs a loop to flow in: Any energy that comes from the transformer can go back to this transformer only, and can not go anywhere else, especially not to the ground via your body. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/8678', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/2172/']} | jdg_84228 |
stackexchange | llm_judgeable_groundtruth_similarity | 14314426 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on a project that fetches Date/Time from backend in IST (Indian standard Time) as shown "2013-01-09T19:32:49.103+05:30" . However when i parse it using following DateFormat DateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ"); followed by parsing.. Date date = sdf.parse("2013-01-09T19:32:49.103+05:30");System.out.println("XYZ ==============>"+date); its Displaying date in GMT format as output i.e Wed Jan 09 14:02:49 GMT+00:00 2013. I have tried it using TimeZone class as.. TimeZone timeZone=TimeZone.getTimeZone("IST");sdf.setTimeZone(timeZone); but no effect.. How could i get a Date class Object having Date in IST format instead of GMT... Please provide an appropriate solution.. EDIT: This is how Code Looks Like: SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");TimeZone timeZone=TimeZone.getTimeZone("IST");sdf.setTimeZone(timeZone);Date date = sdf.parse("2013-01-09T19:32:49.103+05:30");String formattedDate=sdf.format(date);System.out.println("XYZ ==============>"+formattedDate);
Now provide the response and nothing else.
| Date does not have any time zone. It is just a holder of the number of milliseconds since January 1, 1970, 00:00:00 GMT. Take the same DateFormat that you used for parsing, set the IST time zone and format your date as in the following example: DateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSXXX"); Date date = sdf.parse("2013-01-09T19:32:49.103+05:30"); sdf.setTimeZone(TimeZone.getTimeZone("IST")); System.out.println(sdf.format(date)); output 2013-01-09T19:32:49.103+05:30 Note that the XXX pattern is used for the ISO 8601 time zone (-08:00) since Java 1.7. If you are on 1.6, try Z. See the SimpleDateFormat API for details of the format patterns. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14314426', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1768001/']} | jdg_84229
stackexchange | llm_judgeable_groundtruth_similarity | 10429 |
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am about to implement a variation of the LIBOR Market Model (complete with least-squares Monte Carlo, calibration, pricing etc.) and decided to implement it in C#. The implementation will involve working with statistical data, numerical optimisation and Monte Carlo. Being fluent in R, C++, C# and Mathematica, I usually distribute the workload among the platforms according to their strengths. Thus: Mathematica for optimization, R for statistical analysis and C#/C++ for brute-force Monte Carlo etc. Last week I was given the task to create a distributable pricing/risk-management environment for the LIBOR Market Model that can be accessed via MS Excel. I opted for C# because it is very easy to interface with Excel via Excel-DNA (I did the interface with C++ once but it is nowhere near as convenient as C#). Thus I am looking for well-documented solutions for: statistics (for the estimation of a covariance matrix) - perhaps linking to R? numerics (rank reduction, matrix decomposition) - ALGLIB - is there an alternative? optimization (for calibration - above all I need non-gradient methods) visualisation (people from risk management usually like pretty graphs :) - C# has a charting environment. I have never used it before and do not know whether it is any good. Also note: I am allowed to make a somewhat altered version of the code available to the public, so it would be nice for the packages/add-ins to be available for free for non-commercial usage.
Now provide the response and nothing else.
| A popular open-source option for the numerics in .NET is Math.NET ( https://github.com/mathnet/mathnet-numerics ). It has both managed implementations and allows you to use the optimized MKL native libraries. This use of .NET as a front-end to an optimized native library is quite common. Meta.Numerics ( http://www.meta-numerics.net ) is an alternative open-source library which is quite strong on various statistical distributions. ILNumerics ( http://ilnumerics.net/ ) is free or paid for, and gives a highly optimized managed numerics library - with performance that is comparable with C or Fortran, though not as fast as MKL. It also has rich visualization tools. Extreme Optimization ( http://www.extremeoptimization.com/ ) is a rich commercial library. For linking to R, you'd use R.NET ( https://rdotnet.codeplex.com/ ). For charting (WinForms or WPF), OxyPlot ( https://oxyplot.codeplex.com/ ) is a great open-source project. If you already use Excel as your substrate, you might rather use Excel's charting - you'll probably have a more coherent solution. For optimization (and stuff like machine learning) you might consider Accord.NET ( http://accord-framework.net/ ) (LGPL license). However, the built-in GRG solver in Excel is excellent if your problem fits. Constrained, non-linear, derivative-free optimization codes (some merged into Accord.NET) are discussed here: http://cureos.blogspot.com/2012/05/derivative-free-nonlinear-optimization.html . Another optimization library is from Microsoft - the Microsoft Solver Foundation 3.1 now has a Nelder-Mead solver for nonlinear programming problems. While there is a matching Excel add-in for driving the Solver Foundation, you might also integrate it as worksheet functions using Excel-DNA . The licensing is a bit confusing though, and I'm not sure it's being actively developed any more. As a final link, I'll add Dodoni.net ( https://dodoni.codeplex.com/ ): "Dodoni.net is a free/open-source library with the aim to provide a framework for quantitative finance (pricing and risk management) as well as for general numerical methods (i.e. numerical integration, FFT, optimization etc.). The main idea is to construct a flexible and easy to use toolbox which is easy extendable by 3th party libraries." It includes native back-end bindings, and integration into Excel via Excel-DNA . | {} | {'log_upvote_score': 5, 'links': ['https://quant.stackexchange.com/questions/10429', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/7279/']} | jdg_84230 |
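As a feel for what the Math.NET route looks like on the "rank reduction, matrix decomposition" point, here is a minimal sketch. It assumes the MathNet.Numerics NuGet package and its LinearAlgebra API, so check the current library docs for the exact signatures before relying on them:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

class CovarianceSketch
{
    static void Main()
    {
        // Stand-in for an estimated covariance/correlation matrix.
        var cov = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 1.00, 0.80, 0.50 },
            { 0.80, 1.00, 0.30 },
            { 0.50, 0.30, 1.00 }
        });

        // Singular value decomposition; dropping the small singular values is
        // the usual starting point for the rank reduction needed in LMM calibration.
        var svd = cov.Svd();
        Console.WriteLine(svd.S); // singular values, largest first
    }
}
```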
stackexchange | llm_judgeable_groundtruth_similarity | 68293 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have read this question. I am now stuck with the difference between " if and only if " and " only if ". Please help me out. Thanks
Now provide the response and nothing else.
| Let's assume A and B are two statements. Then to say "A only if B" means that A can only ever be true when B is true. That is, B is necessary for A to be true. To say "A if and only if B" means that A is true if B is true, and B is true if A is true. That is, A is necessary and sufficient for B. Succinctly, $A \text{ only if } B$ is the logic statement $A \Rightarrow B$. $A \text{ iff } B$ is the statement $(A \Rightarrow B) \land (B \Rightarrow A)$ | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/68293', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_84231 |
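A small truth table makes the difference concrete (T/F for true/false): $$\begin{array}{cc|cc} A & B & A\ \text{only if}\ B\ (A\Rightarrow B) & A\ \text{iff}\ B\\ \hline \text{T} & \text{T} & \text{T} & \text{T}\\ \text{T} & \text{F} & \text{F} & \text{F}\\ \text{F} & \text{T} & \text{T} & \text{F}\\ \text{F} & \text{F} & \text{T} & \text{T} \end{array}$$ The two differ only in the third row: "A only if B" says nothing about A when B happens to be true, while "A if and only if B" forces A and B to stand or fall together.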
stackexchange | llm_judgeable_groundtruth_similarity | 2117486 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given this code: int *p, *q;p = (int *) 1000;q = (int *) 2000; What is q - p and how?
Now provide the response and nothing else.
| It's actually undefined, according to the standard. Pointer arithmetic is not guaranteed to work unless the pointers are both pointing to either an element in, or just beyond, the same array. The relevant section of the standard is 6.5.6:9 (n1362 draft of c1x but this hasn't changed since c99) which states: When two pointers are subtracted, both shall point to elements of the same array object, or one past the last element of the array object; the result is the difference of the subscripts of the two array elements. You'll most likely get 250 if your int datatype is 4 bytes but there's no guarantee. Undefined behaviour (unlike implementation-defined behaviour) means just that, undefined. Anything can happen, up to and including the total destruction of a large proportion of space-time. A refresher course: Defined behaviour is what is mandated by the standard. Implementations must do this to be conformant. Implementation-defined behaviour is left up to the implementation but it must document that behaviour clearly. Use this if you don't care too much about portability. Undefined behaviour means anything can happen. Don't ever do that! | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2117486', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/250524/']} | jdg_84232
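For contrast, a minimal sketch of the case the standard does define, with two pointers into the same array object (the array size and indices here are arbitrary):

```c
#include <stdio.h>
#include <stddef.h>

int main(void) {
    int arr[10];
    int *p = &arr[2];
    int *q = &arr[7];

    ptrdiff_t d = q - p;   /* well-defined: difference of the subscripts */
    printf("%td\n", d);    /* prints 5, whatever sizeof(int) happens to be */

    /* The question's version, casting arbitrary integers to pointers and
       subtracting them, carries no such guarantee. */
    return 0;
}
```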
stackexchange | llm_judgeable_groundtruth_similarity | 1639241 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using gradient descent with mean squared error as error function to do linear regression. Take a look at the equations first. As you can see in eq.1, the prediction is done with a bias term b and a weight vector W. Eq.2 shows the error function (MSE) while eq.3 shows the partial derivatives used to update the weights (eq.4). My question is, should all the weights in the weight vector be updated each iteration by the same number? It seems like eq.3 should return a single number; not a vector.
Now provide the response and nothing else.
| The $Q$ is a parameter, and $q$ is a variable ranging from $0$ to $Q$: basically, you have $Q+1$ parameters $\textrm{ceps}_0,\dots, \textrm{ceps}_Q$; or, in programming terms, you have an array $\textrm{ceps}[0\dots Q]$. Similarly, the LPC coefficients are a list of $p$ values $a_1,\dots, a_p$ (i.e., $a_q$ for $q=1\dots p$), where $p$ is another parameter. The recursion procedure explains how to compute value $Q+1$ values in $\textrm{ceps}[0\dots Q]$, recursively, starting with $\textrm{ceps}[0]$ and then applying the formula:$$\textrm{ceps}[1] = a_1 + \sum_{k=1}^0 \frac{k-1}{1}a_k \textrm{ceps}[1-k] = a_1$$then $$\textrm{ceps}[2] = a_2 + \sum_{k=1}^1 \frac{k-2}{2}a_k \textrm{ceps}[2-k] = a_2 - \frac{1}{2}a_1 \textrm{ceps}[1] = a_2-\frac{a_1^2}{2}$$etc. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1639241', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/236301/']} | jdg_84233 |
stackexchange | llm_judgeable_groundtruth_similarity | 11247 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Define: fat = ArrayPlot[RandomReal[1, {20, 10}], ColorFunction->"Rainbow"];skinny = ArrayPlot[RandomReal[1, {20, 1}], ColorFunction->"Rainbow"]; What must one do to display fat and skinny side-by-side, so that the bottom and top edges of fat line up vertically with the bottom and top edges (respectively) of skinny (IOW, both fat and skinny appear as having the same height) fat appears 10x wider than skinny ( GraphicsRow[{fat, skinny}] produces a ridiculous monster featuring a tiny fat next to a gigantic skinny . Pretty much everything else I've tried produces the same nonsense.)
Now provide the response and nothing else.
| GraphicsGrid makes too many assumptions about similarity of sizes. For this kind of case, use Grid . I'd also recommend using the PlotRangePadding option in fat so that the two are exactly the same height. I'm not sure why the one-column case skinny doesn't use plot range padding, but it seems to be the way it works. (If you turn off the frame using Frame->False , you do need to turn off PlotRangePadding for both graphics.) fat = ArrayPlot[RandomReal[1, {20, 10}], ColorFunction -> "Rainbow", PlotRangePadding -> 0];skinny = ArrayPlot[RandomReal[1, {20, 1}], ColorFunction -> "Rainbow"];Grid[{{fat, skinny}}, Spacings -> -0.2] This is what it looks like without the Spacings option. As for producing the desired plot type, MatrixPlot does the job with just a few custom option values. Consider the following example. First, I create some fake data: testdata = RandomVariate[TriangularDistribution[{0, 1}, 0.2], {30, 30}]; Some custom ticks will be handy, too. Notice that I use the list-valued third position to specify that the ticks will be zero length. xtix = Array[{#, Rotate["E" <> ToString[#], 3 Pi/2], {0, 0}} &, {30}];ytix = Array[{#, "F" <> ToString[#], {0, 0}} &, {30}]; To get the look shown in the example plot in your question, you need an appropriate usage of Blend , together with the ColorFunctionScaling->False option (if the data are already scaled between zero and 1, or True if they aren't). Notice the form of Blend that sets particular colors to particular values. This is in the documentation but little used. MatrixPlot[testdata, ColorFunctionScaling -> False, ColorFunction -> (Blend[{{0, Red}, {0.2, Black}, {1, Green}}, #] &), PlotRangePadding -> 0, FrameTicks -> {{ytix, None}, {xtix, None}}] It would be easy to automate this further, either by building this into a custom function, or using SetOptions to set some of those options to MatrixPlot by default. However, I can imagine that you might want to tweak the ColorFunction depending on how your data are distributed. As for building up arrays of these graphic elements, you need to investigate the ImageSize and AspectRatio options, as well as the original question which your previous question was a duplicate of . Once that is done, it is easy to put together something that looks a lot like the Wikipedia example. First, define a custom color Blend : myblend = (Blend[{{-1, Red}, {0, Black}, {0.5, Black}, {1, Green}}, #] &); Then load in the HierarchicalClustering package to access the DendrogramPlot function. We can then do something like this (I used a different sized random data set to the previous case): Grid[{{DendrogramPlot[Transpose@testdata, AspectRatio -> 1/5, ImageSize -> 240, ImagePadding -> {{15, 0}, {0, 0}}], Null}, {MatrixPlot[testdata, ColorFunctionScaling -> False, AspectRatio -> 1, ColorFunction -> myblend, FrameStyle -> AbsoluteThickness[0], PlotRangePadding -> 0, FrameTicks -> {{ytix, None}, {xtix, None}}, BaseStyle -> {FontFamily -> "Helvetica Neue", FontSize -> 8}, ImageSize -> 250], DendrogramPlot[testdata, AspectRatio -> 5, Orientation -> Right, ImageSize -> 45, ImagePadding -> {{0, 0}, {20, 0}}]}}, Spacings -> {0, -0.2}] To summarize, yes, I do think that Mathematica can do this, and in a relatively straightforward way, too. If it is not specifically included as a custom function in the application, it is probably because there are so many different fields in the world that use Mathematica, and they can't build it all into the main application. 
I agree that it would be nice if there were more user-distributed custom packages around. This has already been discussed on the site but as yet not much has come of it. | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/11247', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/2464/']} | jdg_84234 |
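As a footnote to the automation point in the answer above, a minimal sketch of what that could look like (heatMatrixPlot is an invented name, and myblend refers to the Blend defined in the answer):
    heatMatrixPlot[data_, opts___] := MatrixPlot[data, opts, ColorFunctionScaling -> False, ColorFunction -> myblend, PlotRangePadding -> 0]
    (* or make these the defaults for every MatrixPlot in the session: *)
    SetOptions[MatrixPlot, ColorFunctionScaling -> False, ColorFunction -> myblend, PlotRangePadding -> 0];
Either way, the heat-map styling no longer has to be repeated for each plot.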
stackexchange | llm_judgeable_groundtruth_similarity | 4763041 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some files in a bunch of directories that have a leading dot and thus are hidden. I would like to revert that and strip the leading dot. I was unsuccessful with the following: for file in `find files/ -type f`; do base=`basename $file`; if [ `$base | cut -c1-2` = "." ]; then newname=`$base | cut -c2-`; dirs=`dirname $file`; echo $dirs/$newname; fi; done Which fails on the condition statement: [: =: unary operator expected Furthermore, some files have a space in them and file returns them split. Any help would be appreciated.
Now provide the response and nothing else.
| The easiest way to delete something from the start of a variable is to use ${var#pattern}.
$ FILENAME=.bashrc; echo "${FILENAME#.}"
bashrc
$ FILENAME=/etc/fstab; echo "${FILENAME#.}"
/etc/fstab
See the bash man page: ${parameter#word} ${parameter##word} The word is expanded to produce a pattern just as in pathname expansion. If the pattern matches the beginning of the value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the ‘‘#’’ case) or the longest matching pattern (the ‘‘##’’ case) deleted. By the way, with a more selective find command you don't need to do all the hard work. You can have find only match files with a leading dot: find files/ -type f -name '.*' Throwing that all together, then:
find files/ -type f -name '.*' -printf '%P\0' | while read -d $'\0' path; do
    dir=$(dirname "$path")
    file=$(basename "$path")
    mv "$dir/$file" "$dir/${file#.}"
done
Additional notes: To handle file names with spaces properly you need to quote variable names when you reference them. Write "$file" instead of just $file. For extra robustness the -printf '\0' and read -d $'\0' use NUL characters as delimiters so even file names with embedded newlines '\n' will work. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4763041', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/105707/']} | jdg_84235
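To make the shortest-versus-longest distinction from the man page excerpt concrete, a small illustration (the path is invented for the example):
    path=./backups/.config.bak
    echo "${path#*.}"     # shortest leading match of '*.' removed: /backups/.config.bak
    echo "${path##*.}"    # longest leading match of '*.' removed:  bak
For the rename task above only the single-# form is needed, since the goal is to strip exactly one leading dot.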
stackexchange | llm_judgeable_groundtruth_similarity | 55357154 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
None of the referenced answers works. Something new has happened with Xcode 10.2. This happens whether I use Swift 4.2 or 5. It happens immediately with a brand-new, unedited command-line tool. All you need to do is create a macOS command-line tool and hit "Run." You immediately get this error. Nothing I have done so far has fixed it.
Now provide the response and nothing else.
| From the Swift 5 Release Notes for Xcode 10.2: Swift 5 Runtime Support for Command Line Tools Package: Starting with Xcode 10.2, Swift command line tools require the Swift libraries in macOS. They’re included by default starting with macOS Mojave 10.14.4. In macOS Mojave 10.14.3 and earlier, there’s an optional package to provide these runtime support libraries for Swift command line tools that you can download from More Downloads for Apple Developers. If you installed the beta version of this package, replace it with the release version. This package is only needed for Swift command line tools, not for apps with graphical user interfaces. Note that: This is only necessary if you are still on macOS 10.14.3. As soon as macOS is upgraded to 10.14.4, the Swift runtime libraries are provided by the operating system, and command line tools created with Xcode 10.2 run without the need to install the “Swift 5 Runtime package.” On macOS 10.14.3 with Xcode 10.2 you must download and install the latest “Swift 5 Runtime package”; an earlier package which you might have installed with an Xcode 10.2 beta release does not work. Early versions of the Xcode 10.2 beta release notes recommended adding a user-defined setting SWIFT_FORCE_STATIC_LINK_STDLIB=YES as a workaround; that is no longer necessary. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55357154', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/879365/']} | jdg_84236
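A quick way to check which situation you are in, sketched under the assumption that the OS-provided runtime lives in /usr/lib/swift starting with macOS 10.14.4 (verify on your own machine):
    sw_vers -productVersion
    ls /usr/lib/swift/libswiftCore.dylib 2>/dev/null || echo "No OS-provided Swift runtime; install the Swift 5 Runtime Support package"
If the library is missing and the version is 10.14.3 or earlier, installing the runtime package from the downloads page is the fix; on 10.14.4 or later no extra install is needed.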
stackexchange | llm_judgeable_groundtruth_similarity | 33362748 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently working on an unordered list containing list items with taglines. I'm having a problem concerning one list item, which is long enough to take up two lines (See image) I want it so that the second line is aligned with the first line. This is the HTML code i'm using. I used fontAwesome for the check images. ul { width: 300px;} <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.4.0/css/font-awesome.min.css" rel="stylesheet"/><ul class="fa-ul custom-list"> <li><i class="fa fa-check fa-fw"></i>List item on 1 line</li> <li><i class="fa fa-check fa-fw"></i>List item on 1 line</li> <li><i class="fa fa-check fa-fw"></i>This is a list item that actually takes up 2 lines. Looks ugly</li></ul> I already tried to enter multiple in between '2' and 'lines' but that seems like a really bad practice to me. I hope someone can help me with this problem.
Now provide the response and nothing else.
| This is because the tick is inline content so when the text wraps it will continue to flow as usual. You can stop this behaviour by taking advantage of text-indent : The text-indent property specifies how much horizontal space should be left before the beginning of the first line of the text content of an element. text-indent ( https://developer.mozilla.org/en-US/docs/Web/CSS/text-indent ) By supplying a negative text-indent you can tell the first line to shift a desired amount to the left. If you then specify a positive padding-left you can cancel this offset out. In the following example a value of 1.28571429em is used because it is the width set on the .fa-fw by font-awesome. ul { width: 300px;}li { padding-left: 1.28571429em; text-indent: -1.28571429em;} <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.4.0/css/font-awesome.min.css" rel="stylesheet"/><ul class="fa-ul custom-list"> <li><i class="fa fa-check fa-fw"></i>List item on 1 line</li> <li><i class="fa fa-check fa-fw"></i>List item on 1 line</li> <li><i class="fa fa-check fa-fw"></i>This is a list item that actually takes up 2 lines. Looks ugly</li></ul> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33362748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2749867/']} | jdg_84237 |
stackexchange | llm_judgeable_groundtruth_similarity | 16565078 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a .NET4.0 application with Entity Framework 5.0 e Sql Server CE 4.0. I have two entities with a one to many (parent/child) relationship. I've configured it to cascade delete on parent removal, but for some reason it doesn't seem to work. Here is a simplified version of my entities: public class Account { public int AccountKey { get; set; } public string Name { get; set; } public ICollection<User> Users { get; set; } } internal class AccountMap : EntityTypeConfiguration<Account> { public AccountMap() { this.HasKey(e => e.AccountKey); this.Property(e => e.AccountKey).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity); this.Property(e => e.Name).IsRequired(); } } public class User { public int UserKey { get; set; } public string Name { get; set; } public Account Account { get; set; } public int AccountKey { get; set; } } internal class UserMap : EntityTypeConfiguration<User> { public UserMap() { this.HasKey(e => e.UserKey); this.Property(e => e.UserKey).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity); this.Property(e => e.Name).IsRequired(); this.HasRequired(e => e.Account) .WithMany(e => e.Users) .HasForeignKey(e => e.AccountKey); } } public class TestContext : DbContext { public TestContext() { this.Configuration.LazyLoadingEnabled = false; } public DbSet<User> Users { get; set; } public DbSet<Account> Accounts { get; set; } protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Conventions.Remove<PluralizingTableNameConvention>(); modelBuilder.Conventions.Remove<StoreGeneratedIdentityKeyConvention>(); modelBuilder.LoadConfigurations(); } } The connection string: <connectionStrings> <add name="TestContext" connectionString="Data Source=|DataDirectory|\TestDb.sdf;" providerName="System.Data.SqlServerCe.4.0" /> </connectionStrings> And a simplified version of my app's workflow: static void Main(string[] args){ try { Database.SetInitializer(new DropCreateDatabaseAlways<TestContext>()); using (var context = new TestContext()) context.Database.Initialize(false); Account account = null; using (var context = new TestContext()) { var account1 = new Account() { Name = "Account1^" }; var user1 = new User() { Name = "User1", Account = account1 }; context.Accounts.Add(account1); context.Users.Add(user1); context.SaveChanges(); account = account1; } using (var context = new TestContext()) { context.Entry(account).State = EntityState.Deleted; context.SaveChanges(); } } catch (Exception e) { Console.WriteLine(e.ToString()); } Console.WriteLine("\nPress any key to exit..."); Console.ReadLine();} When I try to delete the parent entity, it throws: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted. I believe my relationship configuration is ok ( followed the documentation ). I also searched for guidelines on deleting detached entities . I really cannot understand why that delete won't work. I want to avoid loading all the children, deleting them one by one and them deleting the parent, because there must be a better solution than that.
Now provide the response and nothing else.
| Setting the state of an entity to Deleted and calling DbSet<T>.Remove for this entity are not the same. The difference is that setting the state only changes the state of the root entity (the one you pass into context.Entry ) to Deleted but not the state of related entities while Remove does this if the relationship is configured with cascading delete. If you get an exception actually depends on the children (all or only a part) being attached to the context or not. This leads to a behaviour which is somewhat difficult to follow: If you call Remove you don't get an exception, no matter if children are loaded or not. There is still a difference: If the children are attached to the context, EF will generate a DELETE statement for every attached child, then for the parent (because Remove did mark them all as Deleted ) If the children are not attached to the context EF will only send a DELETE statement for the parent to the database and because cascading delete is enabled the database will delete the children as well. If you set the state of the root entity to Deleted you can possibly get an exception: If children are attached to the context their state won't be set to Deleted and EF will complain that you are trying to delete a principal (the root entity) in a required relationship without deleting the dependents (the children) or at least without setting their foreign keys to another root entity that is not in Deleted state. That's the exception you had: account is the root and user1 is a dependent of account and calling context.Entry(account).State = EntityState.Deleted; will also attach user1 in state Unchanged to the context (or change detection in SaveChanges will do it, I'm not sure abut that). user1 is part of the account.Users collection because relationship fixup added it to the collection in your first context although you didn't add it explicitly in your code. If no children are attached to the context setting the state of the root to Deleted will send a DELETE statement to the database and again cascading delete in the database will delete the children as well. This works without exception. Your code would then work for example if you set account.Users = null before setting the state to Deleted in the second context or before entering the second context. In my opinion using Remove ... using (var context = new TestContext()){ context.Accounts.Attach(account); context.Accounts.Remove(account); context.SaveChanges();} ...is clearly the prefered way because the behaviour of Remove is much more like you would expect for a required relationship with cascading delete (which is the case in your model). The dependency of the behaviour of a manual state change on states of other entities makes it more difficult to use. I would consider it as advanced usage only for special cases. The difference is not widely known or documented. I've seen very few posts about it. The only one that I could find right now again, is this one by Zeeshan Hirani . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16565078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/702828/']} | jdg_84238 |
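For completeness, a minimal sketch of the state-change alternative described in the answer, reusing the account and TestContext from the question; it only works because the children are kept out of the context, so the database cascade does the work:
    using (var context = new TestContext())
    {
        account.Users = null;                                // keep the dependents detached
        context.Entry(account).State = EntityState.Deleted;  // only the root is tracked as Deleted
        context.SaveChanges();                               // cascade delete in the database removes the users
    }
In practice the Attach/Remove version shown in the answer remains the preferred approach.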
stackexchange | llm_judgeable_groundtruth_similarity | 36565430 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm writing a neural-network classifier in TensorFlow/Python for the notMNIST dataset. I've implemented l2 regularization and dropout on the hidden layers. It works fine as long as there is only one hidden layer, but when I added more layers (to improve accuracy), the loss function increases rapidly at each step, becoming NaN by step 5. I tried temporarily disabling Dropout and L2 regularization, but I get the same behavior as long as there are 2+ layers. I even rewrote my code from scratch (doing some refactoring to make it more flexible), but with the same results. The number and size of layers is controlled by hidden_layer_spec . What am I missing? #works for np.array([1024]) with about 96.1% accuracyhidden_layer_spec = np.array([1024, 300])num_hidden_layers = hidden_layer_spec.shape[0]batch_size = 256beta = 0.0005epochs = 100stepsPerEpoch = float(train_dataset.shape[0]) / batch_sizenum_steps = int(math.ceil(float(epochs) * stepsPerEpoch))l2Graph = tf.Graph()with l2Graph.as_default(): #with tf.device('/cpu:0'): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) weights = [] biases = [] for hi in range(0, num_hidden_layers + 1): width = image_size * image_size if hi == 0 else hidden_layer_spec[hi - 1] height = num_labels if hi == num_hidden_layers else hidden_layer_spec[hi] weights.append(tf.Variable(tf.truncated_normal([width, height]), name = "w" + `hi + 1`)) biases.append(tf.Variable(tf.zeros([height]), name = "b" + `hi + 1`)) print(`width` + 'x' + `height`) def logits(input, addDropoutLayer = False): previous_layer = input for hi in range(0, hidden_layer_spec.shape[0]): previous_layer = tf.nn.relu(tf.matmul(previous_layer, weights[hi]) + biases[hi]) if addDropoutLayer: previous_layer = tf.nn.dropout(previous_layer, 0.5) return tf.matmul(previous_layer, weights[num_hidden_layers]) + biases[num_hidden_layers] # Training computation. train_logits = logits(tf_train_dataset, True) l2 = tf.nn.l2_loss(weights[0]) for hi in range(1, len(weights)): l2 = l2 + tf.nn.l2_loss(weights[0]) loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(train_logits, tf_train_labels)) + beta * l2 # Optimizer. global_step = tf.Variable(0) # count the number of steps taken. learning_rate = tf.train.exponential_decay(0.5, global_step, int(stepsPerEpoch) * 2, 0.96, staircase = True) optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step) # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(train_logits) valid_prediction = tf.nn.softmax(logits(tf_valid_dataset)) test_prediction = tf.nn.softmax(logits(tf_test_dataset)) saver = tf.train.Saver()with tf.Session(graph=l2Graph) as session: tf.initialize_all_variables().run() print("Initialized") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. 
# The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print("Minibatch loss at step %d: %f" % (step, l)) print("Learning rate: " % learning_rate) print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)) print("Validation accuracy: %.1f%%" % accuracy( valid_prediction.eval(), valid_labels)) print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)) save_path = saver.save(session, "l2_degrade.ckpt") print("Model save to " + `save_path`)
Now provide the response and nothing else.
| It turns out this was not so much a coding issue as a deep learning issue. The extra layer made the gradients too unstable, and that led to the loss function quickly devolving to NaN. The best way to fix this is to use Xavier initialization. Otherwise, the variance of the initial values will tend to be too high, causing instability. Also, decreasing the learning rate may help. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36565430', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/158652/']} | jdg_84239
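A minimal sketch of what Xavier (Glorot) initialization could look like in the weight-creation loop from the question; the scaling formula is standard, but treat the snippet as illustrative since newer TensorFlow versions ship built-in initializers (for example tf.contrib.layers.xavier_initializer in the 1.x contrib module):
    # scale initial weights by fan-in/fan-out so activations and gradients keep a sane variance
    stddev = math.sqrt(2.0 / (width + height))
    weights.append(tf.Variable(
        tf.truncated_normal([width, height], stddev=stddev),
        name="w" + str(hi + 1)))
Combined with a smaller initial learning rate, this typically keeps the loss finite even with several hidden layers.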
stackexchange | llm_judgeable_groundtruth_similarity | 593226 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Background: I would like to build a battery pack out of 18650 Li-ion cells (3S2P). Safety is my absolute priority. Questions: Here is the list of all safety measures that I could think of: use batteries with built-in protection; use a BMS on the whole battery pack; install a fuse at the output of the battery pack; connect the output of the circuit through a thermostat that will measure the temperature of the cells; install a fuse on each wire in the balancer; keep the battery pack inside a LiPo-safe bag even during usage. Which of those measures actually make sense to implement, and what is the best combination of them? I.e.: 1. Does it make sense to have both cells with protection and a BMS? 2. Are there any other reasonable safety measures that I've missed? 3. What is the difference between charging via the balancer and charging through the load cables? One difference that comes to my mind is that balancer cables usually have a smaller diameter and therefore cannot handle high current. 4. If I have a battery pack with a BMS, can I still charge it using the balancer port? Update 2021-11-03: Thanks to everyone for the great advice and for building my awareness of the many risks. To actually maximize safety I decided to purchase an already manufactured LiFePO4 battery pack with BMS.
Now provide the response and nothing else.
| If safety really is the absolute priority, you should not build your own battery pack. If priority #1 is to build it yourself (for education, customization, entertainment, or whatever reason), and safety is priority #2, then you should probably use a safer chemistry than Li-Ion (as comments have pointed out). So I'll assume you want safety third. Your measures seem mostly fine, and would generally all result in increased safety. I'd slightly question placing a thermostat on the output. What you want is a thermal cut-off, which would be a part of any reasonable BMS. The main thing it seems you might not be thinking of is the inherent danger in adding complexity. If you have a safe, self-protected cell, protected by an appropriate fuse, that system could very well be safer than a complex but shoddily assembled battery pack with wires running everywhere. Consider that every component you add also adds things that might go wrong, and you have to factor this into your overall risk assessment. As for your questions: 1/2: See above. 3: Charging with a balancing cable allows the charger to charge the cells individually, so it can fully charge them all. If you charge only through the load cables, you'll have to stop charging once a single cell reaches maximum voltage, meaning you might have unutilized capacity in your pack. 4: This depends on how the BMS is designed and operated, but there's no reason why you couldn't design the BMS for protection only, and leave charging and balancing to an external charger (and do it through a balancing cable). | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/593226', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/49160/']} | jdg_84240