source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses, 1 value) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10) |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 92712 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
suppose numbers from 1 to 1000 are saved in a binary search tree and we want to find 363. Which of the following sequences cannot be the order of elements while reaching the searched value? 925, 202, 911, 240, 912, 245, 363 924, 220, 911, 244, 898, 258, 362, 363 2, 252, 401, 398, 330, 344, 397, 363 2, 399, 387, 219, 266, 382, 381, 278, 363 I converted them to rooted binary search trees but I couldn't figure out what should be wrong with them. More precisely, I don't know how to check BST properties in such sequences.
Now provide the response and nothing else.
| You need to check the following conditions: (i) The left subtree of a node contains only nodes with keys less than the node's key. (ii) The right subtree of a node contains only nodes with keys greater than the node's key. So let's do this for case one by looking at the values for the nodes we traverse until we either find 363 or until one of the conditions is violated: 925 / \ 202 / \ 911 / \ 240 / \ 912 Woops! We see that $912 > 911$ even though it is in the left subtree of $911$ hence condition (i) is violated. Let me know if you can do the other cases or if you need more help. Hope this helps. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/92712', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/181853/']} | jdg_85941 |
stackexchange | llm_judgeable_groundtruth_similarity | 3997748 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What are the steps to create a keystore for android? I need to use google maps in my app and I don't know what steps I missed.Please provide me with the specific detailed steps (I didn't understand it from the guides).
Now provide the response and nothing else.
| To answer the question in the title, you create a keystore with the Java Keytool utility that comes with any standard JDK distribution and can be located at %JAVA_HOME%\bin . On Windows this would usually be C:\Program Files\Java\jre7\bin . On Windows, open a command window and switch to that directory. On Linux type OS do the same with a terminal. Then run: keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000 Keytool prompts you to provide passwords for the keystore, provide the Distinguished Name fields and then the password for your key. It then generates the keystore as a file called my-release-key.keystore in the directory you're in. The keystore and key are protected by the passwords you entered. The keystore contains a single key, valid for 10000 days. The alias_name is a name that you — will use later, to refer to this keystore when signing your application. For more information about Keytool, see the documentation at: http://docs.oracle.com/javase/6/docs/technotes/tools/windows/keytool.html and for more information on signing Android apps go here: http://developer.android.com/tools/publishing/app-signing.html | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/3997748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/482762/']} | jdg_85942 |
stackexchange | llm_judgeable_groundtruth_similarity | 72090 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
So, if I have something like: F[IdentityMatrix[3] // MatrixForm, 1] I'd like F[] to display: $$\begin{pmatrix}1 & & \\ & 1 & \\ & & 1 \\\end{pmatrix}$$ I'm looking for a function that works for any matrix, and basically filters out all but one value. This will make it easy for me to scan through a matrix, since the function will do most of the work for me.
Now provide the response and nothing else.
| f[x_List, n_] := (x /. y_ /; y != n -> " ") // MatrixFormf[{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}, 7] If you need to display only elements in list n , then use: g[x_List, n_List] := x /. y_ /; (MemberQ[Complement[Flatten[x], n], y]) -> " " // MatrixFormg[{{1,2,3},{4,5,6},{7,8,9}}, {3,7}] I'm sure there's a more elegant way to do this last transformation, but the obvious ones involving !MemberQ[n,y] or FreeQ[n,y] somehow didn't work. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/72090', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1735/']} | jdg_85943 |
stackexchange | llm_judgeable_groundtruth_similarity | 275803 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to export certain tables from one database and import them to another. The tables in question are used for my MediaWiki installation and are prefixed with "wiki_". I do not have access to phpMyAdmin, but I am able to connect to the server via SSH. I'm guessing I need to use the mysqldump command, but how do I specify that I only want to dump tables prefixed with "wiki_"?
Now provide the response and nothing else.
| You have to generate the list of tables you want to dump then act upon it. mysql -u USER -p -D test -Bse "show tables like 'wiki_%'" >tables.outmysqldump -u USER -p test <tables.out >wiki_tables.dump or as a one liner mysqldump -u USER -p test $(mysql -u USER -p -D test -Bse "show tables like 'wiki_%'") but you still get to enter the password twice. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/275803', 'https://serverfault.com', 'https://serverfault.com/users/15030/']} | jdg_85944 |
stackexchange | llm_judgeable_groundtruth_similarity | 39234528 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a package that does screen capture in xamarin.forms ? I need also to capture google maps screen shots
Now provide the response and nothing else.
| Check out this blog post by Daniel Hindrikes. I'm going to assume that you use a PCL for your shared code. You will need to create an interface in your PCL. He calls it IScreenshotManager . The declaration looks like this: public interface IScreenshotManager{ Task<byte[]> CaptureAsync();} Now all platforms will have their own implementation for it.For iOS; public class ScreenshotManager : IScreenshotManager{ public async System.Threading.Tasks.Task<byte[]> CaptureAsync() { var view = UIApplication.SharedApplication.KeyWindow.RootViewController.View; UIGraphics.BeginImageContext(view.Frame.Size); view.DrawViewHierarchy(view.Frame, true); var image = UIGraphics.GetImageFromCurrentImageContext(); UIGraphics.EndImageContext(); using(var imageData = image.AsPNG()) { var bytes = new byte[imageData.Length]; System.Runtime.InteropServices.Marshal.Copy(imageData.Bytes, bytes, 0, Convert.ToInt32(imageData.Length)); return bytes; } }} For Android: public class ScreenshotManager : IScreenshotManager{ public static Activity Activity { get; set; } public async System.Threading.Tasks.Task<byte[]> CaptureAsync() { if(Activity == null) { throw new Exception("You have to set ScreenshotManager.Activity in your Android project"); } var view = Activity.Window.DecorView; view.DrawingCacheEnabled = true; Bitmap bitmap = view.GetDrawingCache(true); byte[] bitmapData; using (var stream = new MemoryStream()) { bitmap.Compress(Bitmap.CompressFormat.Png, 0, stream); bitmapData = stream.ToArray(); } return bitmapData; }} And for Windows Phone: public class ScreenshotManager : IScreenshotManager{ public async Task<byte[]> CaptureAsync() { var rootFrame = Application.Current.RootVisual as PhoneApplicationFrame; var screenImage = new WriteableBitmap((int)rootFrame.ActualWidth, (int)rootFrame.ActualHeight); screenImage.Render(rootFrame, new MatrixTransform()); screenImage.Invalidate(); using (var stream = new MemoryStream()) { screenImage.SaveJpeg(stream, screenImage.PixelWidth, screenImage.PixelHeight, 0, 100); var bytes = stream.ToArray(); return bytes; } }} Don't forget to register your platform specific implementations with the attribute which registers it with the Dependency Service , like this: [assembly: Xamarin.Forms.Dependency (typeof (ScreenshotManager))] It goes above the namespace declaration. Now from your shared code you would be able to get the byte[] of a screenshot with a call like this: var screenshotBytes = DependencyService.Get<IScreenshotManager>().CaptureAsync(); You probably want to check if DependencyService.Get<IScreenshotManager>() isn't null before using it. After that you can turn your byte[] into an image and do whatever you like with it! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39234528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4058164/']} | jdg_85945 |
stackexchange | llm_judgeable_groundtruth_similarity | 117815 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for a plug-in or product that will act as a remote probe and perform tests then send back the results to the central Nagios server. Reason for this is that I'd like to monitor internal systems and servers at customers, but don't want to allow all the traffic passing the firewalls. Ideally I'd like a soft-probe that would be installed and then perform the tests and send back the results (via SSH) to the central Nagios installation. Does anyone know of a product or plug-in that would offer such service? If not Nagios, is there any other monitoring system that does such a thing (ideally open-source)?
Now provide the response and nothing else.
| There are multiple ways to solve this. You can have a secondary server with just nrpe running. In this way it's acting as a proxy. So the main nagios sends a check through the server running nrpe. Example: From the main nagios server: check_nrpe -H NRPEPROXYHOST -c check_ping -H 10.0.0.3 .... The NRPEPROXYHOST runs the command as if it were the nagios server and submits the results back to the main server. In this setup the secondary server does not run nagios or any bloated daemons. Just the nrpe daemon, the nagios plugins to be ran. This can even be configured on some sort of gateway device and would not necessarily require a dedicated server be deployed. ====== Method 2 would be configuring a second instance of Nagios at the site and having it perform the active checks and submit the results to the main Nagios server. The main nagios server would have all the checks configured with active checks disabled and passive checks enabled. This configuration is a true distributed Nagios as documented on their site. It's quite a bit more robust so if you see yourself having to perform several hundred or thousands checks to these server (every 5 minutes) then this is your best choice. In most instances the secondary server is called a "satelite" nagios instance and the results are usually submitted to the main Nagios server via the NSCA protocol (which is encrypted). The Main nagios server listens for these via the nsca daemon and submits them to the external command file for processing by nagios. The downside is you have to have the config files on two servers and make changes to both sets of configs. You have to have these hosts as passive on the main server and active checks on the satelite server. This is scalable to no end and the preferred solution for installations with tens of thousands of service checks to be performed. Also, look at building the configs on a central server and keeping them in revision control and have a script on the nagios server periodically checkout the new configs and reload nagios. ===== Method 3 DNX, http://dnx.sourceforge.net/ an awesome project that patches Nagios so that it can send checks to be performed to "node" nagios servers. To the best of my knowledge though this configuration does not allow you to pick and choose which checks are executed by which node (node affinity), or if they are NOT to be executed by a node. So this solution adds distribution more than it does a proxy into a secondary network. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/117815', 'https://serverfault.com', 'https://serverfault.com/users/17900/']} | jdg_85946 |
stackexchange | llm_judgeable_groundtruth_similarity | 44389464 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to align the form to the center and keep it responsive. I have tried several ways but no success. I am trying to center all the text and the form. I am using Bootstrap v4. I am not sure if that helps. HTML: <section id="cover"> <div id="cover-caption"> <div id="container"> <div class="col-sm-10 col-sm offset-1"> <h1 class="display-3">Welcome to Bootstrap 4</h1> <div class="info-form"> <form action="" class="form-inline"> <div class="form-group"> <label class="sr-only">Name</label> <input type="text" class="form-control" placeholder="Jane Doe"> </div> <div class="form-group"> <label class="sr-only">Email</label> <input type="text" class="form-control" placeholder="[email protected]"> </div> <button type="submit" class="btn btn-success ">okay, go!</button> </form> </div> <br> <a href="#nav-main" class="btn btn-secondary-outline btn-sm" role="button">↓</a> </div> </div> </div> </section> CSS: html,body{ height: 100%;}#cover { background: #222 url('../img/stars.jpg') center center no-repeat; background-size: cover; color: white; height: 100%; text-align: center; display: flex; align-items: center; } #cover-caption { width: 100%; }
Now provide the response and nothing else.
| You need to use the various Bootstrap 4 centering methods... Use text-center for inline elements. Use justify-content-center for flexbox elements (ie; form-inline ) https://codeply.com/go/Am5LvvjTxC Also, to offset the column, the col-sm-* must be contained within a .row , and the .row must be in a container... <section id="cover"> <div id="cover-caption"> <div id="container" class="container"> <div class="row"> <div class="col-sm-10 offset-sm-1 text-center"> <h1 class="display-3">Welcome to Bootstrap 4</h1> <div class="info-form"> <form action="" class="form-inline justify-content-center"> <div class="form-group"> <label class="sr-only">Name</label> <input type="text" class="form-control" placeholder="Jane Doe"> </div> <div class="form-group"> <label class="sr-only">Email</label> <input type="text" class="form-control" placeholder="[email protected]"> </div> <button type="submit" class="btn btn-success ">okay, go!</button> </form> </div> <br> <a href="#nav-main" class="btn btn-secondary-outline btn-sm" role="button">↓</a> </div> </div> </div> </div></section> | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/44389464', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6782388/']} | jdg_85947 |
stackexchange | llm_judgeable_groundtruth_similarity | 417454 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Recently I learned a nice constructive proof of the irrationality of $\sqrt{2}$ , which uses the 2-adic valuation of an integer: the count of how many times a number is divisible by 2. The valuation requires some induction to construct, and this nice answer by François Dorais talks about how Robinson's Arithmetic $Q$ isn't strong enough to prove $\sqrt{2}$ irrational. By the question "how much induction...?" I mean what is the complexity of the statement that is used in the application of induction to prove the existence of the valuation (I think the particular case $p=2$ is not special here). Further, I think the only property really needed in this irrationality proof is that the parity of the valuation is well-defined, so in principle it is this specific property that I need to know the strength of: there is a well-defined multiplicative function $p_2\colon \mathbb{N}\to \{\pm 1\}$ encoding the parity of the 2-adic valuation. I can easily think of a recursion (say in some dependent type theory, or a proof assistent) that defines this function, but I don't know how to classify the precise strength of the induction principle needed, in the usual arithmetic hierarchy. [As an aside, I really like this proof, not just because it gives a constructive lower bound on how far a rational is from $\sqrt{2}$ , but also because it doesn't rely on more extensive factorisation properties of integers, like one of the most common proofs relying on fractions in 'lowest terms', or on the beautiful, but more subtle, use of infinite descent]
Now provide the response and nothing else.
| If you want to stick to theories in the basic language of arithmetic $\langle0,1,+,\cdot,<\rangle$ , the irrationality of $\sqrt2$ can be easily proved in the theory $IE_1$ (i.e., using induction for bounded existential formulas), since it proves the $\gcd$ property; or even more directly, you can just prove $$\forall a,b<x\,(a^2=2b^2\to b=0)$$ by induction on $x$ (this is a bounded universal formula rather than existential, but $IE_1=IU_1$ ). $IE_1$ also proves that any number $x$ can be written uniquely as $yz$ where $y$ is odd and $z$ a power of two (meaning $z$ has no odd divisor apart from $1$ ), and you can define the parity of $v_2(x)$ by saying that $v_2(x)$ is even iff $z$ is a square. This will be multiplicative as required in the question. Alternatively (but leading to an equivalent definition), $IE_1$ proves that any $x$ can be uniquely written as $y^2z$ with $z$ square-free, and then we can define that $v_2(x)$ is even iff $z$ is odd. Some larger fragment of $I\Delta_0$ has a $\Delta_0$ -definition of the graph of exponentiation, which will enable to define $v_2(x)$ itself. However, even $IE_1$ is essentially overkill. People usually study bounded arithmetic in languages that are in various ways more suitable than the basic language of arithmetic, and then the 2-adic valuation is definable in very weak fragments; usually not because of some clever trick, but since it more or less belongs to the language. E.g., whenever you have a well-defined bit predicate $$\DeclareMathOperator\bit{bit}\bit(i,X)=\lfloor X2^{-i}\rfloor\bmod2,$$ you can define $$v_2(X)=i\iff\bit(i,X)=1\land\forall j<i\:\bit(j,X)=0.$$ In particular, you can do just that (with a $\Sigma^B_0$ formula) for binary integers $X$ in the basic theory $V^0$ (see [1]), which cannot even define general multiplication , hence it has no meaningful way of stating (let alone proving) the irrationality of $\sqrt2$ . The latter can be done in the extension $\mathrm{VTC}^0$ of $\mathrm V^0$ (see [1] again for the definition). In the realm of one-sorted theories of arithmetic, which is perhaps more in line with the spirit of the question, you can do all this using $\Sigma^b_0$ length-induction (i.e., in the theory $\Sigma^b_0$ -LIND as defined e.g. in [2]). [1] Stephen A. Cook, Phuong Nguyen: Logical foundations of proofcomplexity , Cambridge University Press, 2010. https://doi.org/10.1017/CBO9780511676277 [2] Chris Pollett: A propositional proof system for $R^i_2$ . In: Proof complexity and feasible arithmetics (P. Beame, S. Buss, eds.), DIMACS Series in Discrete Mathematics and Theoretical Computer Science 39 39, AMS, 1997, pp. 253–278. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/417454', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4177/']} | jdg_85948 |
stackexchange | llm_judgeable_groundtruth_similarity | 26831 |
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We take observations from earth but to define radial velocity, transverse velocity and the proper motion of stars, why we consider them with respect to the sun? and then we do some corrections due to both rotational and orbital motion of the earth. Why can't we directly consider these motions with respect to earth instead of Sun? And what is the whole procedure of correction?
Now provide the response and nothing else.
| The Earth is a moving (actually, accelerating) platform from which we make our observations. If you want to describe the motion of a distant celestial body, then it does not make much sense to provide a geocentric velocity, because this will depend on exactly when the observations were taken (because the Earth orbits with a speed of about 30 km/s, but the velocity changes throughout the year), the position on the Earth that the measurement was taken (because the Earth's equator revolves at around 0.5 km/s, but this is different at different latitudes) and the position of the star in the sky (because the previous two effects have to be resolved in the direction of the star). Given that, then it is far easier to quote a velocity in a frame of reference that does not depend on these things. One can use the heliocentric frame in most circumstances - that is the frame of reference in which the Sun is stationary. To do this, one has to remove the components of the Earth's orbital and rotational velocity. However, if very precise measurements are being done then it is better to use the barycentric reference frame, which is with respect to the centre of mass of the solar system (since the Sun orbits this with a speed of around 0.1 m/s). | {} | {'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/26831', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/23550/']} | jdg_85949 |
stackexchange | llm_judgeable_groundtruth_similarity | 7644036 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am creating a new web crawler using C# to crawl some specific websites. every thing goes fine. but the problem is that some websites are blocking my crawler IP address after some requests. I tried using timestamps between my crawl requests. but did not worked. is there any way to prevent websites from blocking my crawler ?some solutions like this would help (but I need to know how to apply them): simulating Google bot or yahoo slurp using multiple IP addresses (event fake IP addresses) as crawler client IP any solution would help.
Now provide the response and nothing else.
| If speed/throughput is not a huge concern, then probably the best solution is to install Tor and Privoxy and route your crawler through that. Then your crawler will have a randomly changing IP address. This is a very effective technique if you need to crawl sites that do not want you crawling them. It also provides a layer of protection/anonymity by making the activities of your crawler very difficult to trace back to you. Of course, if sites are blocking your crawler because it is just going too fast, then perhaps you should just rate-limit it a bit. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7644036', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/680275/']} | jdg_85950 |
stackexchange | llm_judgeable_groundtruth_similarity | 1053006 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
i have a QMainWindow. It has this parameters: this->setWindowFlags(Qt::Tool);this->setFocusPolicy(Qt::StrongFocus);this->setAttribute(Qt::WA_QuitOnClose,true); After showEvent calles my window is shown but unactivated.I tried to overload show function: ... QMainWindow::showEvent(event);this->activateWindow();... But it doesn't help me. EDIT: When i commented line this->setWindowFlags(Qt::Tool); everything worked fine, but i need in tool-flag.Any ideas? EDIT: OS: Linux Programming language: c++ Qt version: 4.5.1
Now provide the response and nothing else.
| The Windows Manager Decides Before I start: As pointed out by elcuco and Javier , focus policy and other aspects of the windows layout (e.g. the title bar) belongs to a substantial extend to the respective windows manager, and Qt might have limited control. To see this, just look at a user interface that has a " focus follows mouse " policy. In these cases, the windows manager might ignore Qt's focus request. For this reasons, the Qt documentation calls many of the respective flags "hints". Consequently, some of the suggested solutions might or might not work for you. QApplication::setActiveWindow() This not withstanding, e.tadeu's solution to use QApplication::setActiveWindow() works for me for both Windows and Ubuntu with Gnome. I tested it with the following code. Apologies that it is Python using PyQt (I use questions like these to learn a bit about PyQt). It should be fairly easy for you to read it and translate it into C++. import sysfrom PyQt4 import QtGuifrom PyQt4 import QtCoreclass MainWindow(QtGui.QMainWindow): def __init__(self, parent=None): QtGui.QMainWindow.__init__(self) # main window self.setGeometry(300, 300, 250, 150) self.setWindowTitle('Test') # text editor self.textEdit = QtGui.QTextEdit() self.setCentralWidget(self.textEdit) def closeEvent(self, event): QtGui.QApplication.instance().quit()#mainapp = QtGui.QApplication(sys.argv)testWindow = MainWindow()testWindow.setWindowFlags(QtCore.Qt.Tool)testWindow.show()app.setActiveWindow(testWindow)app.exec_() Note that you have to add some handling of the close event of the testWindow , because the app does not exit automatically if you close a Qt::Tool window. The grabKeyboard() Hack If this does not work for you, the following hack might. I assume that you have a window in your application that is active. You can then use grabKeyboard() to redirect the input. The Qt::Tool window doesn't get the focus, but receives the input. The following main code demonstrates it (the other code remains unchanged). #mainapp = QtGui.QApplication(sys.argv)testWindow = MainWindow()testWindow.setWindowFlags(QtCore.Qt.Tool)testWindow2 = MainWindow() # second window which is activetestWindow2.show()testWindow.show()testWindow.textEdit.grabKeyboard()app.exec_() Basically, while the window testWindow2 is the active one, all text entered shows up in testWindow.textEdit . It is not nice, I know... Creating Your Own Window You gain the most flexibility (and create the most work for yourself) by rolling out your own window layout. The idea is described in the following FAQ . Other "Solutions" You could directly call the respective window manager's API function to get the desired result (clearly against the very reason for using Qt in the first place). You could also hack the Qt source code. For example, on Windows, Qt uses the ShowWindow() function with a SW_SHOWNOACTIVATE flag, to show a window with style WS_EX_TOOLWINDOW if you set the Qt::Tool flag. You could easily replace the SW_SHOWNOACTIVATE with whatever you want. Linux should be the same. Clearly also not recommended. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1053006', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/87152/']} | jdg_85951 |
stackexchange | llm_judgeable_groundtruth_similarity | 1361504 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How many numbers are relatively prime to $42$ and $70$? There's no set limit (i.e. numbers relatively prime must be less than $42$ or $70$), so I'm unsure how to figure this out. I think I'm overthinking this problem! Any hints? (This came from a primitive roots of unity problem.)
Now provide the response and nothing else.
| Note: This is just a kind of streamlining of existing answers. The addendum of @MarkoRiedels answer already provides the calculation and it's using as essential step @StephenMontgomery-Smith's hint regarding telescoping. In fact we don't need any generating functions, since we can show the validity of OPs identity by a few simple transformations. \begin{align*} \color{blue}{\sum_{k=1}^{n}}&\color{blue}{\frac{n!}{(n-k)!}kn^{n-k-1}}\\ &=n!\sum_{k=1}^{n}\frac{n-(n-k)}{(n-k)!}n^{n-k-1}\tag{1}\\ &=n!\left(\sum_{k=1}^{n}\frac{n^{n-k}}{(n-k)!}-\sum_{k=1}^{n-1}\frac{n^{n-k-1}}{(n-k-1)!}\right)\tag{2}\\ &=n!\left(\sum_{k=1}^{n}\frac{n^{n-k}}{(n-k)!}-\sum_{k=2}^{n}\frac{n^{n-k}}{(n-k)!}\right)\tag{3}\\ &=n!\frac{n^{n-1}}{(n-1)!}\\ &\color{blue}{=n^n} \end{align*} Comment: In (1) we use $k=n-(n-k)$ In (2) observe, that the upper limit of the second sum is $n-1$ since $(n-k)=0$ in case $k=n$ In (3) we shift the index of the second sum by $1$ to prepare the telescoping. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1361504', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/185384/']} | jdg_85952 |
stackexchange | llm_judgeable_groundtruth_similarity | 1169977 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am asked to prove the identities of $(12)$ and $(13)$ , which are given on page 438 of the textbook PDE Evans, 2nd edition as follows: THEOREM 3 (Properties of resolvent operators) . (i) If $\lambda,\mu \in \rho(A)$ , we have $$R_\lambda - R_\mu=(\mu-\lambda)R_\lambda R_\mu \quad \text{(resolvent identity)} \tag{12}$$ and $$R_\lambda R_\mu = R_\mu R_\lambda \tag{13}.$$ If it helps, here are the relevant definitions on the previous page. Also, $A$ is a closed linear operator on the real Banach space $X$ , with domain $D(A)$ . DEFINITIONS. (i) We say a real number $\lambda$ belongs to $\rho(A)$ , the resolvent set of $A$ , provided the operator $$\lambda I - A : D(A) \to X$$ is one-to-one and onto. (ii) If $\lambda \in \rho(A)$ , the resolvent operator $R_\lambda : X \to X$ is defined by $$R_\lambda u := (\lambda I - A)^{-1} u.$$ What can I do to prove the first identity $$R_\lambda - R_\mu=(\mu-\lambda)R_\lambda R_\mu$$ at least? I got stuck after writing the following: $$R_\lambda - R_\mu = (\lambda I - A)^{-1} - (\mu I - A)^{-1}$$ and $$(\mu - \lambda) R_\lambda R_\mu = (\mu - \lambda) (\lambda I - A)^{-1} (\mu I - A)^{-1}$$ Perhaps the second identity requires similar justification, so I can try to do that on my own after getting help with the first one.
Now provide the response and nothing else.
| By definition, $$R_{\lambda} (\lambda-A)= (\lambda-A) R_{\lambda} = \text{id}.$$ Consequently, $$\begin{align*} R_{\lambda}-R_{\mu} &= R_{\lambda} (\mu-A) R_{\mu} - R_{\lambda}(\lambda-A) R_{\mu} \\ &= R_{\lambda} ((\mu-A)-(\lambda-A)) R_{\mu}. \end{align*}$$ This proves (12). In order to show (13) use (12). (Hint: What happens if we switch $\lambda \leftrightarrow \mu$ in (12)?) | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1169977', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/111793/']} | jdg_85953 |
stackexchange | llm_judgeable_groundtruth_similarity | 1079024 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
An answer to this question makes clever use of an integral of this form: $$\int_0^1 \frac{x^n(1-x)^n}{1+x^2} dx$$ Is there a closed form for this for arbitrary positive integer $n$? (I expect this question has been asked before, but I couldn't find it. So perfectly happy if you can find it and close this question out.)
Now provide the response and nothing else.
| Warning: incoming eyesore. Introduction Using the binomial theorem, \begin{align*}\int_{0}^{1} \frac{x^n (1 - x)^n}{1 + x^2}\,\text{d}x &= \int_{0}^{1} \frac{x^n}{1+x^2}\sum_{m = 0}^{n} {n \choose m}(-1)^{m}x^m\,\text{d}x \\&= \sum_{m = 0}^{n} {n \choose m}(-1)^{m} \int_{0}^{1} \frac{x^{n+m}}{1+x^2} \,\text{d}x\end{align*} Suppose first that $n + m = 2q$ for some $q$ then $$\frac{x^{2q}}{1+x^2} = \frac{(-1)^{q}}{1 + x^2} - \sum_{\ell = 0}^{q - 1}(-1)^{\ell + q}x^{2\ell}$$ Therefore, $$\int_{0}^{1} \frac{x^{2q}}{1+x^2} = (-1)^{q}\frac{\pi}{4} - \sum_{\ell = 0}^{q - 1} \frac{(-1)^{\ell + q}}{2\ell + 1}$$ Now suppose that $n + m = 2p + 1$ for some $p$ then $$\frac{x^{2p + 1}}{1+x^2} = \frac{(-1)^{p}x}{1 + x^2} - \sum_{\ell = 0}^{p - 1}(-1)^{\ell + p}x^{2\ell + 1}$$ Therefore, $$\int_{0}^{1} \frac{x^{2p + 1}}{1+x^2} \,\text{d}x =\frac{(-1)^{p}}{2}\ln(2) - \sum_{\ell = 0}^{p - 1} \frac{(-1)^{\ell + p}}{2\ell + 2} $$ Case: $n$ even If $n$ is even then write $n = 2n'$ and then \begin{align*}\int_{0}^{1} \frac{x^{2n'} (1 - x)^{2n'}}{1 + x^2}\,\text{d}x &= \sum_{m = 0}^{2n'} {2n' \choose m}(-1)^{m} \int_{0}^{1} \frac{x^{2n'+m}}{1+x^2} \,\text{d}x \\&= \sum_{m = 0}^{n'} {2n' \choose 2m} \int_{0}^{1} \frac{x^{2(n'+m)}}{1+x^2} \,\text{d}x \\&- \sum_{m = 0}^{n' - 1} {2n' \choose 2m + 1} \int_{0}^{1} \frac{x^{2(n'+m) + 1}}{1+x^2} \,\text{d}x \end{align*} Use the previous results \begin{align*}\int_{0}^{1} \frac{x^{2n'} (1 - x)^{2n'}}{1 + x^2}\,\text{d}x &= \sum_{m = 0}^{n'} {2n' \choose 2m}\left[(-1)^{n' + m}\frac{\pi}{4} - \sum_{\ell = 0}^{n' + m - 1} \frac{(-1)^{\ell + n' + m}}{2\ell + 1} \right] \\&- \sum_{m = 0}^{n' - 1} {2n' \choose 2m + 1} \left[ \frac{(-1)^{n' + m}}{2}\ln(2) - \sum_{\ell = 0}^{n' + m - 1} \frac{(-1)^{\ell + n' + m}}{2\ell + 2} \right]\end{align*} Case: $n$ odd If $n$ is odd then write $n = 2n' + 1$ and then \begin{align*}\int_{0}^{1} \frac{x^{2n' + 1} (1 - x)^{2n' + 1}}{1 + x^2}\,\text{d}x &= \sum_{m = 0}^{2n' + 1} {2n' + 1 \choose m}(-1)^{m} \int_{0}^{1} \frac{x^{2n'+m + 1}}{1+x^2} \,\text{d}x \\&= \sum_{m = 0}^{n'} {2n' + 1\choose 2m} \int_{0}^{1} \frac{x^{2(n'+m) + 1}}{1+x^2} \,\text{d}x \\&- \sum_{m = 0}^{n'} {2n' + 1 \choose 2m + 1} \int_{0}^{1} \frac{x^{2(n'+m + 1)}}{1+x^2} \,\text{d}x \end{align*} Again, using the previous results, \begin{align*}\int_{0}^{1} \frac{x^{2n' + 1} (1 - x)^{2n' + 1}}{1 + x^2}\,\text{d}x &= \sum_{m = 0}^{n'} {2n' + 1 \choose 2m}\left[\frac{(-1)^{n' + m}}{2}\ln(2) - \sum_{\ell = 0}^{n' + m - 1} \frac{(-1)^{\ell + n' + m}}{2\ell + 2} \right] \\&- \sum_{m = 0}^{n' - 1} {2n' + 1 \choose 2m + 1} \left[ (-1)^{n' + m + 1}\frac{\pi}{4} - \sum_{\ell = 0}^{n' + m} \frac{(-1)^{\ell + n' + m + 1}}{2\ell + 1} \right]\end{align*} Those sums are all finite, so in principle that's the solution. I can't immediately see how to simplify it though. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1079024', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/21495/']} | jdg_85954 |
stackexchange | llm_judgeable_groundtruth_similarity | 326490 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Running multiple scp threads simultaneously: Background: I'm often finding myself mirroring a set of server files a lot, and included in these server files are thousands of little 1kb-3kb files. All the servers are connected to 1Gbps ports, generally spread out in a variety of data-centers. Problem: SCP transfers these little files, ONE by ONE, and it takes ages, and I feel like I'm wasting the beautiful network resources I have. Solution?: I had an idea; Creating a script, which divides the files up into equal amounts, and starts up 5-6 scp threads, which theoretically would then get done 5-6 times faster, no? But I don't have any linux scripting experience! Question(s): Is there a better solution to the mentioned problem? Is there something like this that exists already? If not, is there someone who would give me a start, or help me out? If not to 2, or 3, where would be a good place to start looking to learn linux scripting? Like bash, or other.
Now provide the response and nothing else.
| I would do it like this: tar -cf - /manyfiles | ssh dest.server 'tar -xf - -C /manyfiles' Depending on the files you are transferring it can make sense to enable compression in the tar commands: tar -czf - /manyfiles | ssh dest.server 'tar -xzf - -C /manyfiles' It may also make sense that you choose a CPU friendlier cipher for the ssh command (like arcfour): tar -cf - /manyfiles | ssh -c arcfour dest.server 'tar -xf - -C /manyfiles' Or combine both of them, but it really depends on what your bottleneck is. Obviously rsync will be a lot faster if you are doing incremental syncs. | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/326490', 'https://serverfault.com', 'https://serverfault.com/users/35550/']} | jdg_85955 |
stackexchange | llm_judgeable_groundtruth_similarity | 2494355 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Although there is a certain sense in which universal algebra is a subfield of model theory, people in the two fields are interested in very different things. Nevertheless, I understand that universal algebraists sometimes talk about ultraproducts. This suggests to me that full first-order theories of structures are sometimes useful in study of their equational theories; if not, the could just use products since they preserve equations (but not necessarily first-order properties). But I fail to see how they are useful. Skimming the last chapter of Burris and Sankappanavar did not quite help me. How can full theories of structures be useful in study of their equational theories?
Now provide the response and nothing else.
| Every equational theory is a first-order theory, so, at the very least, universal algebraists should be able to use the ultraproduct. You ask why would they would need the ultraproduct when the product preserves all equations. In general, one could make the claim that universal algebraists tend to care about finite structures whereas model theorists prefer infinite structures (again, this is a generalization). The ultraproduct construction breaks down with finite structures: If $\mathcal{K}$ is a finite set of finite algebras, then any ultraproduct of algebras from $\mathcal{K}$ is again in $\mathcal{K}$. However, a product of algebras from $\mathcal{K}$ can be made arbitrarily large. Two nice results that use ultraproducts in this way are a result of Jonsson, which states that a finitely-generated congruence-distributive variety is residually small, and a result of Quackenbush, which states that if a locally finite variety has only finitely many finite subdirectly irreducible algebras, then it cannot have any infinite subdirectly irreducibles. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2494355', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/20552/']} | jdg_85956 |
stackexchange | llm_judgeable_groundtruth_similarity | 15762650 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I write : trait T { val t = 3 val u = 1::t::Nil}class U extends T { override val t = 2}(new U).u it shows this. List(1, 0) How should I change the above code to make it display the following: List(1, 2) i.e. override val t sets the value of t for u in the trait T ?
Now provide the response and nothing else.
| One way to do this is to delay evaluation of u by using def or lazy val as follows: trait T { def t = 3 def u = 1::t::Nil}class U extends T { override def t = 2}(new U).u or trait T { val t = 3 lazy val u = 1::t::Nil}class U extends T { override val t = 2}(new U).u The differences are as follows: val makes an expression evaluate during initialization def makes an expression evaluate each time u is used lazy val makes it evaluated on first u usage and caches the result | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15762650', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1287856/']} | jdg_85957 |
stackexchange | llm_judgeable_groundtruth_similarity | 2485028 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I implement signal Handling for Ctrl-C and Ctrl-D in C....So If Ctrl-C is pressed then the program will ignore and try to get the input from the user again...If Ctrl-D is pressed then the program will terminate... My program follows: int main(){ char msg[400]; while(1){ printf("Enter: "); fgets(msg,400,stdin); printf("%s\n",msg); }} Thanks, Dave
Now provide the response and nothing else.
| When dealing with POSIX signals, you have two means at your disposal. First, the easy (but discouraged) way, signal() . Second, the more elegant, current but complex way, sigaction() . Please use sigaction() unless you find that it isn't available on some platform that you need to work on. This chapter of the glibc manual explains differences between the two and gives good example code on how to use both. It also lists the signals that can be handled, recommends how they should be handled and goes more in depth on how to tell how any given signal is (or is not) currently being handled. That's way more code than I'd want to paste into an answer here, hence the links. It really is worth the hour or two it would take you to read the links and work through the examples. Signal handling (especially in programs that daemonize) is extremely important. A good program should handle all fatal signals that can be handled (i.e. SIGHUP) and explicitly ignore signals that it might not be using (i.e. SIGUSR1 / SIGUSR2). It also won't hurt to study the difference between normal and real time signals, at least up to the understanding of how the kernel merges the prior and not the latter. Once you work through it, you'll probably feel inclined to write up an easy to modify set of functions to handle your signals and re-use that code over and over again. Sorry for not giving a quick and dirty code snippet to show you how to solve your immediate need, but this isn't a quick and dirty topic :) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2485028', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/298190/']} | jdg_85958 |
stackexchange | llm_judgeable_groundtruth_similarity | 25907596 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two separate applications running on heroku and pointing to same database, first one responsible for user interface and second one for admin interface , I am using sidekiq with redis for background job processing, I have added one worker and I am able to share 'redis-server' by setting environment variable pointing to same Redis providing Addon, Now i wish to share worker too, because adding the extra worker will cost double. Please suggest, whether this is even possible or not?
Now provide the response and nothing else.
| If both apps are using the same Redis URL and same namespace, you can spin up one worker with that same Redis config and it will be shared by both. Note that your Sidekiq process will boot one app or the other. The code for your Workers must be in that app. The other app won't be able to reference the code but can push jobs using: Sidekiq::Client.push('class' => 'SomeWorker', 'args' => [1,2,3]) Note that 'class' is a String so SomeWorker can actually be defined in the other app. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25907596', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1262519/']} | jdg_85959 |
stackexchange | llm_judgeable_groundtruth_similarity | 1860118 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it true that $\sum_{k=1}^n \cos k$ is a bounded sequence? If so how to prove? I want to prove the series of $\cos n/(\sqrt n)$ is convergent by abel test but I dont know if the partial sums of cosine are bounded thanks
Now provide the response and nothing else.
| Hint . Assume $x \in (0,2\pi)$ with $e^{ix}\neq 1$. One may write$$\left|\sum_{k=1}^n\cos kx\right|=\left|\Re \sum_{k=1}^{n} (e^{ix})^k \right|=\left|\Re\: \frac{e^{ix}(1-e^{inx})}{1-e^{ix}}\right|\le \frac2{|1-e^{ix}|}.$$ Putting $x:=1$ gives a bound for the given sum. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1860118', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_85960 |
stackexchange | llm_judgeable_groundtruth_similarity | 8632104 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What's the simplest way to generate a sine wave sound at any frequency in Java?A sample size more than 2 bytes would help, but it doesn't really matter.
Now provide the response and nothing else.
| See Beeper for a self-contained example. Perhaps something simpler? That 51 lines of snippet (repeated below - spaced out for single line & in-line comments) as shown at the top of the linked answer, is about as simple as generating a tone gets (OK, you can take out 5+ lines for the harmonic). People seem to assume it should be a method built into the toolkit to produce a pure tone. It is not, and takes a little calculating to make one. /** Generates a tone, and assigns it to the Clip. */public void generateTone() throws LineUnavailableException { if ( clip!=null ) { clip.stop(); clip.close(); } else { clip = AudioSystem.getClip(); } boolean addHarmonic = harmonic.isSelected(); int intSR = ((Integer)sampleRate.getSelectedItem()).intValue(); int intFPW = framesPerWavelength.getValue(); float sampleRate = (float)intSR; // oddly, the sound does not loop well for less than // around 5 or so, wavelengths int wavelengths = 20; byte[] buf = new byte[2*intFPW*wavelengths]; AudioFormat af = new AudioFormat( sampleRate, 8, // sample size in bits 2, // channels true, // signed false // bigendian ); int maxVol = 127; for(int i=0; i<intFPW*wavelengths; i++){ double angle = ((float)(i*2)/((float)intFPW))*(Math.PI); buf[i*2]=getByteValue(angle); if(addHarmonic) { buf[(i*2)+1]=getByteValue(2*angle); } else { buf[(i*2)+1] = buf[i*2]; } } try { byte[] b = buf; AudioInputStream ais = new AudioInputStream( new ByteArrayInputStream(b), af, buf.length/2 ); clip.open( ais ); } catch(Exception e) { e.printStackTrace(); }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8632104', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/899828/']} | jdg_85961 |
stackexchange | llm_judgeable_groundtruth_similarity | 5178 |
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to have a computer at home that I rsync my files to from anywhere, preferably at any time I choose. The daily volume of data will rarely exceed 100M. It needs to be cheap and easy to replace, not fast (it's for backup/archiving purposes only). One idea is to use a Raspberry Pi with a USB hard disk. (Or two.)Would you? Can you list any drawbacks that make alternatives obviously superior?
Now provide the response and nothing else.
| I also think this is a good setup. I currently do run a Raspberry Pi, connected directly with a 3.5'' USB 2.0 hard disk (Western Digital, 1.5 TB) with external power, and it works like a charm: I get hourly backups when I'm home, and the Raspberry is also an excellent file/media server. I get 23MB/s instead of 30MB/s on the hard disk, but that's still quite good.I can stream movies off the hard disk (through Samba or MiniDLNA) without problems. In fact, the hourly backups are done from my laptop using Apple Time Machine, but using rsync shouldn't change much. As @goldilocks mentioned, hard disks without external power (like most 2.5'' hard disk) will require a powered hub: my Raspberry shuts down as soon as I try attaching one directly ( edit : that was with a 5V-1.2A power supply, but still happens when attaching directly with a 5V-2A power supply; a powered hub solves the problem). In addition, I've installed BarracudaDrive ( http://barracudadrive.com/ ), available for free from the PiStore, which I'll use as my personal cloud in the future. Hard disk reliability The USB HD powers down when not used, but this is implemented by the HD enclosure (most recent ones I bought have this). Note that spinning down and up is a stress in itself for the hard disk if done too often, but since you can't set the spindown timeout, this is only a problem if the producer chose the wrong setup. (Google for "ubuntu destroys hard disk" to find discussions such as: https://ata.wiki.kernel.org/index.php/Known_issues#Drives_which_perform_frequent_head_unloads_under_Linux ). Anyway, you should simply use a reliable USB hard disk designed for your kind of scenarios. Or two ones in mirroring ( edit : but I would check performance first, since mirroring would double the required USB bandwidth for the same amount of writes). Instead of mirroring, I carry a rugged hard disk always with me and do backups also there. General software setup tips Of course, you should pay attention to the filesystem you use on the HD: nowadays ext4 is probably the safest choice since it's easiest to recover it. You might want to use rdiff-backup instead of rsync , to have access to historic backups. Finally, since you'll probably do rsync over SSH, you might want to select the RC4 cipher for extra speed ( edit : but see the warnings about security in this other answer if you do that over the Internet). Here's an excerpt from my .ssh/config on my host: Host pibbw # Replace pibbw with the hostname of your Raspberry Pi User pi Ciphers arcfour128 | {} | {'log_upvote_score': 4, 'links': ['https://raspberrypi.stackexchange.com/questions/5178', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/2926/']} | jdg_85962 |
stackexchange | llm_judgeable_groundtruth_similarity | 8139066 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let f x y = x * y . We can apply this function in two ways: f 5 6 , or, using infix notation, 5 `f` 6 . Do the operator rules apply to this last expression? What precedence will this application have? Is it just another form of function application, and so will it also have the highest precedence? I suppose that the compiler sees this special form (due to `` and/or the name starting with a letter(?)), and actually treats this as ordinary function application, instead of considering it an operator.
Now provide the response and nothing else.
| The Haskell 98 Report has a section on Operator Applications that clears it up: An operator is either an operator symbol, such as + or $$, or is an ordinary identifier enclosed in grave accents (backquotes), such as `op`. For example, instead of writing the prefix application op x y, one can write the infix application x `op` y. If no fixity declaration is given for `op` then it defaults to highest precedence and left associativity (see Section 4.4.2). As indicated by the other answers, the Report also has a section on Fixity Declarations that allows you to define your own fixity, for example: infixl 7 `op` | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8139066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/211885/']} | jdg_85963 |
stackexchange | llm_judgeable_groundtruth_similarity | 14435164 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a few console command in Symfony2 and I need to execute one command from another command with some parameters. After successfull execution of the second command I need to get the result (as an array for example) , not the display output. How can I do that?
Now provide the response and nothing else.
| Here you can have a basic command inside a command. The output from the second command can be a json, then you just have to decode the output json to retrieve your array. $command = $this->getApplication()->find('doctrine:fixtures:load');$arguments = array( //'--force' => true '');$input = new ArrayInput($arguments);$returnCode = $command->run($input, $output);if($returnCode != 0) { $text .= 'fixtures successfully loaded ...'; $output = json_decode(rtrim($output));} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14435164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1368140/']} | jdg_85964 |
stackexchange | llm_judgeable_groundtruth_similarity | 25028609 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Assuming I have an array LogoDataBy{byte[0x00000008]} [0x00000000]: 0x41 [0x00000001]: 0x42 [0x00000002]: 0x43 [0x00000003]: 0x44 [0x00000004]: 0x31 [0x00000005]: 0x32 [0x00000006]: 0x33 [0x00000007]: 0x34 I would like to create an array of an arbitrary length and left pad it with 0x00 newArray{byte[0x00000010]} [0x00000000]: 0x00 [0x00000001]: 0x00 [0x00000002]: 0x00 [0x00000003]: 0x00 [0x00000004]: 0x00 [0x00000005]: 0x00 [0x00000006]: 0x00 [0x00000007]: 0x00 [0x00000008]: 0x41 [0x00000009]: 0x42 [0x0000000a]: 0x43 [0x0000000b]: 0x44 [0x0000000c]: 0x31 [0x0000000d]: 0x32 [0x0000000e]: 0x33 [0x0000000f]: 0x34 I have my current snippet here string test = "ABCD1234"; byte[] LogoDataBy = ASCIIEncoding.ASCII.GetBytes(test); var newArray = new byte[16]; var difference = newArray.Length - LogoDataBy.Length; for (int i = 0; i < LogoDataBy.Length; i++) { newArray[difference + i] = LogoDataBy[i]; } Is there a more efficient way to do this?
Now provide the response and nothing else.
| I would recommend starting with Array.Copy like this: string test = "ABCD1234";byte[] LogoDataBy = ASCIIEncoding.ASCII.GetBytes(test);var newArray = new byte[16];var startAt = newArray.Length - LogoDataBy.Length;Array.Copy(LogoDataBy, 0, newArray, startAt, LogoDataBy.Length); If you really need the speed you can do Buffer.BlockCopy too: string test = "ABCD1234";byte[] LogoDataBy = ASCIIEncoding.ASCII.GetBytes(test);var newArray = new byte[16];var startAt = newArray.Length - LogoDataBy.Length;Buffer.BlockCopy(LogoDataBy, 0, newArray, startAt, LogoDataBy.Length); Note that I did not check the length of the array you provided - you should take care that it's big enough. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25028609', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1177278/']} | jdg_85965 |
stackexchange | llm_judgeable_groundtruth_similarity | 195183 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to numerically integrate precisely, $$ \frac{1}{2 \pi r^{n}}\int_0^{2\pi} e^{-n i t}C(t)dt$$ with $C(t)$ a single-cycled branch of the function $w(z)$ given by the implicit expression $$-z^2+z^3+w (-4 z+3 z^2)+w^3 (-2+8 z+4 z^2-4 z^3)+w^2 (-z^3-9 z^4)+w^4 (6-8 z^2+7 z^3+8 z^4)=0$$ and $n=100$ . $C(t)$ is an NDSolve result in the code below. A plot of Re[e^{100 it] C[t]] is shown below. I know the precise value of the integral by another method. The following code uses LevinRule and can only get the results accurate to $10^3$ difference when I set the working precision to 70 and MaxStepSize to 1/60000. If I attempt to increase working precision or decrease step size, the results are less accurate. I was wondering if there is a way to get the results accurate to less than 1? Here is an example output for getIntegral[20,20,1/1000] routine listed below for n=20 with working precision 20 and max step size=1/1000: $$\begin{array}{ccccc}20 & \{20,1/1000\} & \begin{array}{ll} (1.4347) \\ (1.43468)\end{array} & 1.20279* 10^{-8} & 1.52 \end{array}$$ Reading from left to right, n=20, then the working precision and step size, then next column is the integral results below which is the actual value, then next column is the difference and last column is time in seconds. For `getIntegral[100,70,1/60000]`, the results are: $$\begin{array}{ccccc}100 & \{70,1/60000\} & \begin{array}{ll} (834190592984+17i) \\(834190588733) \end{array} & 4.2 * 10^{3} & 250 \end{array}$$ Note: the result is very large because r=1/2 and recall AccountingForm uses parenthesis to note negative numbers. theFunction = -z^2 + z^3 + w (-4 z + 3 z^2) + w^3 (-2 + 8 z + 4 z^2 - 4 z^3) + w^2 (-z^3 - 9 z^4) +w^4 (6 - 8 z^2 + 7 z^3 + 8 z^4)theBaseValues = w /. NSolve[theFunction == 0 /. z -> zstart, w,WorkingPrecision -> 200];theBaseValues = Sort[theBaseValues, If[Re[#1] != Re[#2], Re[#1] < Re[#2] , Im[#1] < Im[#2] ] &];wstart = theBaseValues[[3]];rnorm = 1/2;tStart = 0;tEnd = 2 \[Pi];zstart = rnorm Exp[I tStart];actualValue = -8.341905887336485206837863135597`20.*^11; wDeriv = w'[t] == ((-(D[theFunction, z]/ D[theFunction, w]) (I rnorm Exp[I t])) /. {w -> w[t],z -> rnorm Exp[I t]}); getIntegral[j_, wp_, sSize_] := Module[{numIndex, difResults, myazsol, theCentralTrace, n1,finalValue, intResults}, myazsol = First[NDSolve[{wDeriv, w[tStart] == wstart}, w, {t, tStart, tEnd}, MaxSteps -> 20000000, MaxStepSize -> sSize,WorkingPrecision -> wp]]; theCentralTrace[t_] = Evaluate[Flatten[w[t] /. myazsol]]; n1 = AbsoluteTiming[ NIntegrate[( Exp[I t])^-j theCentralTrace[t], {t, tStart, tEnd}, WorkingPrecision -> wp, Method -> {"GlobalAdaptive", "MaxErrorIncreases" -> 10000, Method -> "LevinRule"}, MaxRecursion -> 500]]; finalValue = 1/(2 \[Pi] rnorm^j) (n1[[2]]); intResults = actualValue; difResults = Abs[intResults - finalValue]; {j, {wp, sSize}, Column[{N[AccountingForm[finalValue], 20], N[AccountingForm[intResults], 20]}, Alignment -> Left], N[ScientificForm[difResults], 10], n1[[1]]} ];
Now provide the response and nothing else.
| The behaviour we see is due to an optimization that is being applied by the evaluator. Once the evaluator sees that an expression is inert, it assumes that for the remainder of the evaluation the same expression need not be evaluated again. In the case at hand, Inverse[{{1, 2}, {1, 2}}] is inert since the definitions of Inverse have determined that the matrix is singular and have declined to transform the expression. We can use Update to force the evaluator to avoid the optimization: Do[Print@Check[Inverse[({{1,2},{1,2}})]; "wrong", "right"], {i,2}];(* Inverse::sing: Matrix {{1,2},{1,2}} is singular. right wrong*)Do[Print@Check[Update[]; Inverse[({{1,2},{1,2}})]; "wrong", "right"], {i,2}];(* Inverse::sing: Matrix {{1,2},{1,2}} is singular. right Inverse::sing: Matrix {{1,2},{1,2}} is singular. right*) Analysis (current as of version 11.3.0) An expression is called "inert" when there are no applicable definitions which transform it. The key word here is applicable . Definitions can exist , but as long as they are not applicable the expression is treated as inert. In this case of a singular matrix, there does exist at least one definition of Inverse that is evaluated -- but that definition has a condition that detects the singularity, issues an error message, and then indicates that the definition is not applicable. The optimization is insensitive to whether or not the definition condition has any side-effects (in this case, the issuance of a message). We can emulate this behaviour of Inverse ourselves with a simpler example: f[x_] := 10 x /; If[x >= 0, True, Message[f::ltzero]; False]f::ltzero = "less than zero!"; We have defined a function which takes a value and multiplies it by ten, but only under the condition that the value is not less than zero. Otherwise, the condition issues an error message and returns False indicating that the definition does not apply. So: f[1](* 10 *)f[-1](* >>f::ltzero: less than zero! f[-1]*) With this in place, we can see the action of the optimization: Table[f[-1], {2}](* >>f::ltzero: less than zero! {f[-1], f[-1]}*) Note how the message is issued only once even though we tried to evaluate f[-1] twice. We can use Update tell the evaluator to make no optimizing assumptions about inert expressions: Table[Update[]; f[-1], {2}]Table[f[-1], {2}](* >>f::ltzero: less than zero! >>f::ltzero: less than zero! {f[-1], f[-1]}*) Now, the expression is evaluated twice and the condition message appears twice. What About Assert ? The question observes that wrapping Assert around the call to Inverse causes it to behave "correctly". Actually, the call to Assert is failing because the argument is expected to be True or False -- not Inverse[...] . Assert issues a message in that case and it is that message that causes Check to return its second argument. All of this can be seen if we remove Quiet from the expression: Reap @ Do[Sow@Check[Assert[Inverse[({{1,2},{1,2}})]];2+2,3+3],{i,2}](*>>Inverse::sing: Matrix {{1,2},{1,2}} is singular.>>Assert::asrttf: Assertion test Inverse[{{1,2},{1,2}}] evaluated to Inverse[{{1,2},{1,2}}] that is neither True nor False.>>Assert::asrttf: Assertion test Inverse[{{1,2},{1,2}}] evaluated to Inverse[{{1,2},{1,2}}] that is neither True nor False.{Null,{{6,6}}}*) Notice how the Inverse::sing message was still only issued once whereas the Assert -related messages were shown multiple times. 
| {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/195183', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/47466/']} | jdg_85966 |
stackexchange | llm_judgeable_groundtruth_similarity | 4201175 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to figure out how QXmlStreamReader works for a C++ application I'm writing. The XML file I want to parse is a large dictionary with a convoluted structure and plenty of Unicode characters so I decided to try a small test case with a simpler document. Unfortunately, I hit a wall. Here's the example xml file: <?xml version="1.0" encoding="UTF-8" ?><persons> <person> <firstname>John</firstname> <surname>Doe</surname> <email>[email protected]</email> <website>http://en.wikipedia.org/wiki/John_Doe</website> </person> <person> <firstname>Jane</firstname> <surname>Doe</surname> <email>[email protected]</email> <website>http://en.wikipedia.org/wiki/John_Doe</website> </person> <person> <firstname>Matti</firstname> <surname>Meikäläinen</surname> <email>[email protected]</email> <website>http://fi.wikipedia.org/wiki/Matti_Meikäläinen</website> </person></persons> ...and I'm trying to parse it using this code: int main(int argc, char *argv[]){ if (argc != 2) return 1; QString filename(argv[1]); QTextStream cout(stdout); cout << "Starting... filename: " << filename << endl; QFile file(filename); bool open = file.open(QIODevice::ReadOnly | QIODevice::Text); if (!open) { cout << "Couldn't open file" << endl; return 1; } else { cout << "File opened OK" << endl; } QXmlStreamReader xml(&file); cout << "Encoding: " << xml.documentEncoding().toString() << endl; while (!xml.atEnd() && !xml.hasError()) { xml.readNext(); if (xml.isStartElement()) { cout << "element name: '" << xml.name().toString() << "'" << ", text: '" << xml.text().toString() << "'" << endl; } else if (xml.hasError()) { cout << "XML error: " << xml.errorString() << endl; } else if (xml.atEnd()) { cout << "Reached end, done" << endl; } } return 0;} ...then I get this output: C:\xmltest\Debug>xmltest.exe example.xml Starting... filename: example.xml File opened OK Encoding: XML error: Encountered incorrectly encoded content. What happened? This file couldn't be simpler and it looks consistent to me. With my original file I also get a blank entry for the encoding, the entries' names() are displayed, but alas, the text() is also empty. Any suggestions greatly appreciated, personally I'm thorougly mystified.
Now provide the response and nothing else.
| I'm answering this myself as this problem was related to three issues, two of which were brought up by the responses. The file actually wasn't UTF-8 encoded. I changed the encoding to iso-8859-1 and the encoding warning disappeared. The text() function doesn't work as I expected. I have to use readElementText() to read the entries' contents. When I try to readElementText() on an element that doesn't contain text, like the top-level <persons> in my case, the parser returns an "Expected character data" error and the parsing is interrupted. I find this behaviour strange (in my opinion returning an empty string and continuing would be better) but I guess as long as the specification is known, I can work around it and avoid calling this function on every entry. The relevant code section that works as expected now looks like this: while (!xml.atEnd() && !xml.hasError()) { xml.readNext(); if (xml.isStartElement()) { QString name = xml.name().toString(); if (name == "firstname" || name == "surname" || name == "email" || name == "website") { cout << "element name: '" << name << "'" << ", text: '" << xml.readElementText() << "'" << endl; } }}if (xml.hasError()){ cout << "XML error: " << xml.errorString() << endl;}else if (xml.atEnd()){ cout << "Reached end, done" << endl;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4201175', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/190600/']} | jdg_85967 |
stackexchange | llm_judgeable_groundtruth_similarity | 16090072 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I replace all characters of a string with asterisks except the first and last characters in PHP? For example, test should become t**t and profanity should become p*******y , and so on
Now provide the response and nothing else.
| function get_starred($str) {
    $len = strlen($str);
    return substr($str, 0, 1) . str_repeat('*', $len - 2) . substr($str, $len - 1, 1);
}

$myStr = 'YourName';
echo get_starred($myStr); // should show Y******e
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16090072', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1848708/']} | jdg_85968
stackexchange | llm_judgeable_groundtruth_similarity | 2984146 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am developing a pet project with Clojure, but wonder if I can speed up my workflow a bit. My current workflow (with Compojure) is: Start Swank with lein swank . Go to Emacs, connect with M-x slime-connect . Load all existing source files one by one. This also starts a Jetty server and an application. Write some code in REPL. When satisfied with experiments, write a full version of a construct I had in mind. Eval ( C-c C-c ) it. Switch REPL to namespace where this construct resides and test it. Switch to browser and reload browser tab with the affected page. Tweak the code, eval it, check in the browser. Repeat any of the above. There are a number of annoyances with it: I have to switch between Emacs and the browser (or browsers if I am testing things like templating with multiple browsers) all the time. Is there a common idiom to automate this? I used to have a JavaScript bit that reloads the page continuously, but it's of limited utility, obviously, when I have to interact with the page for more than a few seconds. My JVM instance becomes "dirty" when I experiment and write test functions. Basically namespaces become polluted, especially if I'm refactoring and moving the functions between namespaces. This can lead to symbol collisions and I need to restart Swank. Can I undef a symbol? I load all source files one by one ( C-c C-k ) upon restarting Swank. I suspect I'm doing it all wrong. Switching between the REPL and the file editor can be a bit irritating, especially when I have a lot of Emacs tabs open, alongside the browser(s). I'm looking for ways to improve the above points and the entire workflow in general, so I'd appreciate if you'd share yours. P. S. I have also used Vimclojure before, so Vimclojure-based workflows are welcome too.
Now provide the response and nothing else.
| Not a complete workflow description, just a few ideas: It is possible to remove a Var from a namespace with the ns-unmap function. For added convenience, an undef macro can be built on top of it e.g. like so: (defmacro undef [& syms] `(do ~@(map (fn [s] `(ns-unmap *ns* '~s)) syms))) ns-unalias may also be of interest. There's no reason to go through the files holding the namespaces just to do C-c C-k in each of them; you can just require the namespaces you need at the REPL. Moreover, if you type a few characters at the SLIME REPL and then use M-p / M-n to browse history, only the entries matching the initial bit of text you entered by hand will be displayed. This is compatible with Paredit (the trailing closing bracket(s) will not be a problem). So, if you build up a huge require at the start -- (require '[foo :as f] '[bar :as b] '[clojure.contrib.sql :as sql] ...) -- then after restarting Swank, all you need to do is to type something like (require '[f and press M-p to bring that form to the REPL prompt again. Admittedly, this could be automated in a number of ways (e.g. having the Swank REPL search for a configuration file, or perhaps a simple macro expanding into an appropriate require form which could be used after bringing in just one utility namespace from the project -- especially the latter idea would be easy to implement), but I find it sufficiently low on the annoyance factor that I haven't so far bothered with any improvements. You can use C-c C-z to make a window with the SLIME REPL pop up while you are in a SLIME-enabled buffer. Also, you should try using ido if you haven't already. I tend to work with a code buffer open in a window on the left hand side of the screen and a REPL buffer on the right; with windmove-left and windmove-right bound to some convenient keys, I tend to be pretty happy. If I need to look at additional buffers often, I use extra Emacs frames. Incidentally, I don't use lein swank normally, I prefer my custom clojure-project function (a tweaked version of Phil Hagelberg's original). On occasion, I feel a desire to improve it... perhaps I'll deal with per-project import / require automation next time that happens. ;-) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2984146', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/23643/']} | jdg_85969 |
stackexchange | llm_judgeable_groundtruth_similarity | 69885 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $C\subset\mathbb{P}^r$ be a smooth nondegenerate curve (not contained in any hyperplane) of degree $d$ genus $g>0$. Consider the tangential variety $X$ of $C$: $X=\cup_{p\in C}T_pC\subset \mathbb{P^r}$. This is a surface in $\mathbb{P}^r$ which is singular along $C$. My feeling is that $X$ can not be contained in any quadric hypersurface $Q$. Is this something reasonable to expect? A baby case is when $C$ is the rational normal curve in $\mathbb{P}^3, then X$ is a quartic hypersurface thus not contained in any $Q$. Any imput is welcome. Thanks a lot.
Now provide the response and nothing else.
| Edited. Here is a construction of curves $C$ on a four-dimensional quadric $Q^4$ such that $TC\subset Q^4$. I am sure that this is a classical construction (it might be that I saw it previously and forgot). Construction. Recall that $Q^4$ is isomorphic to $G(2,4)$ -- the Grassmannian of $2$-planes in a four-dimensional space, or equivalently to the space of lines in $\mathbb P^3$. The isomorphism is given by the Plücker embedding of $G(2,4)$ into $\mathbb P^5$. Now, take any curve $C'$ in $\mathbb P^3$ and associate to it a curve $C$ in $G(2,4)$ consisting of the collection of lines tangent to $C'$. I claim $TC\subset Q^4$ once we identify $G(2,4)$ with $Q^4$. The proof is left as an exercise. PS. I think it will be more interesting to answer the following question: For each $n$, what is the maximal $k(n)$ such that $Q^n$ contains a "non-degenerate" $k(n)$-dimensional subvariety $C^{k(n)}$ of arbitrarily high degree, such that $TC^{k(n)}\subset Q^n$? I am pretty sure that the above construction can be generalised to show that $k(n)$ tends to infinity when $n$ tends to infinity. In fact, at first glance it is not clear (to me) why the behaviour of such varieties $C$ should not resemble algebraic Legendrian varieties, about which you can read, for example, here: http://arxiv.org/abs/0805.3848 . | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/69885', 'https://mathoverflow.net', 'https://mathoverflow.net/users/10646/']} | jdg_85970
stackexchange | llm_judgeable_groundtruth_similarity | 2713601 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to know how people decide whether to define a method as static. I'm aware that a method can only be defined as static if it doesn't require access to instance fields. So let's say we have a method that does not access instance fields: do you always define such a method as static, or only if you need to call it statically (without a reference to an instance)? Perhaps another way of asking the same question is whether you use static or non-static as the default?
Now provide the response and nothing else.
| I use static methods whenever I can. Advantages: When calling a static method from inside an instance method, you can be sure that there are no side-effects on the state of the current object. From inside a static method, you can be sure you don't accidentally modify any state of the object instance. You can use a static method from outside the class without constructing an instance. If it was possible to make the method static, it clearly doesn't need an instance, so there's no need to require one. Static methods may be slightly more efficient because no "this" pointer needs to be passed, and no dynamic dispatch is necessary. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2713601', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2648/']} | jdg_85971 |
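To make the first two advantages concrete, here is a minimal Python sketch (the class and method names are made up for illustration) contrasting a static helper, which cannot touch instance state, with an instance method, which can:

class Invoice:
    def __init__(self, subtotal):
        self.subtotal = subtotal
        self.total = subtotal

    @staticmethod
    def add_tax(amount, rate):
        # No self parameter: this code cannot read or modify any instance,
        # so calling it can never have side effects on object state.
        return amount * (1 + rate)

    def apply_tax(self, rate):
        # Instance method: free to mutate state -- exactly the side effect
        # the static version rules out by construction.
        self.total = Invoice.add_tax(self.subtotal, rate)

print(Invoice.add_tax(100.0, 0.2))  # 120.0, no instance needed
inv = Invoice(100.0)
inv.apply_tax(0.2)
print(inv.total)                    # 120.0, reached by mutating the instance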
stackexchange | llm_judgeable_groundtruth_similarity | 13845508 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to perform a Popover segue when user touches a cell in a dynamic TableView. But when I try to do this with this code: - (void)tableView:(UITableView *)tableview didSelectRowAtIndexPath:(NSIndexPath *)indexPath{ [self performSegueWithIdentifier:@"toThePopover" sender:[tableView cellForRowAtIndexPath]]; //...} than I get an error: Illegal Configuration Popover segue with no anchor Is there any way to do this (to perform a popover segue from dynamic TableView manually)?
Now provide the response and nothing else.
| I was faced with this same issue tonight, there a couple workarounds (including presenting the popover the old fashioned way). For this example, I have an object that is stored in my custom cell class. When the cell is selected I call a function like this to open details in a popOverViewController about the object, and point (anchor) to it's corresponding cell in the table. - (void)openCustomPopOverForIndexPath:(NSIndexPath *)indexPath{ CustomViewController* customView = [[self storyboard] instantiateViewControllerWithIdentifier:@"CustomViewController"]; self.myPopOver = [[UIPopoverController alloc] initWithContentViewController:customView]; self.myPopOver.delegate = self; //Get the cell from your table that presents the popover MyCell *myCell = (MyCell*)[self.tableView cellForRowAtIndexPath:indexPath]; CGRect displayFrom = CGRectMake(myCell.frame.origin.x + myCell.frame.size.width, myCell.center.y + self.tableView.frame.origin.y - self.tableView.contentOffset.y, 1, 1); [self.myPopOver presentPopoverFromRect:displayFrom inView:self.view permittedArrowDirections:UIPopoverArrowDirectionLeft animated:YES]; } The problem with this method is that we often need the popover view to have a custom initializer. This is problematic if you want your view to be designed in storyboard instead of a xib and have a custom init method that takes your cells associated object as a parameter to use for it's display. You also can't just use a popover segue (at first glance) because you need a dynamic anchor point (and you can't anchor to a cell prototype). So here is what I did: First, create a hidden 1px X 1px UIButton in your view controllers view. (important to give the button constraints that will allow it to be moved anywhere in the view) Then make an outlet for the button (I called mine popOverAnchorButton) in your view controller and control drag a segue from the hidden button to the view controller you wish to segue to. Make it a popOver segue. Now you have a popover segue with a 'legal' anchor. The button is hidden, so no one can touch it accidentally. You are only using this for an anchor point. Now just call your segue manually in your function like this. - (void)openCustomPopOverForIndexPath:(NSIndexPath *)indexPath{ //Get the cell from your table that presents the popover MyCell *myCell = (MyCell*)[self.tableView cellForRowAtIndexPath:indexPath]; //Make the rect you want the popover to point at. CGRect displayFrom = CGRectMake(myCell.frame.origin.x + myCell.frame.size.width, myCell.center.y + self.tableView.frame.origin.y - self.tableView.contentOffset.y, 1, 1); //Now move your anchor button to this location (again, make sure you made your constraints allow this) self.popOverAnchorButton.frame = displayFrom; [self performSegueWithIdentifier:@"CustomPopoverSegue" sender:myCell]; } And...... Voila. Now you are using the magic of segues with all their greatness and you have a dynamic anchor point that appears to point to your cell.now in -(void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender you can simply cast the sender to your cell's class (given that you do the proper checks on sender type and which segue is being called) and give the segue's destinationViewController the cell's object. Let me know if this helps, or anyone has any feedback or improvements. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/13845508', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1333315/']} | jdg_85972 |
stackexchange | llm_judgeable_groundtruth_similarity | 280427 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have searched past questions and answers on this platform but none answers this question. A prof said it is possible to sample below the Nyquist rate under certain conditions. I would like to know, first, if it is possible to do this and, if so, when?
Now provide the response and nothing else.
| First of all, let's get rid of the Nyquist rate misconception. People are usually taught that the minimum sampling frequency needs to be twice the frequency of the highest frequency in the signal. This is completely false! What is true is that if you have a "full" spectrum, and by full, I mean that it completely uses up all frequencies between the lower edge of its bandwidth and the upper edge of its bandwidth, then you need to have the sampling frequency that is at least twice the bandwidth of the signal. So in the picture here, the sampling frequency needs to be at least 2*(Fh-Fl) in order to get the spectrum. You also need to keep in mind that, after you do the sampling, all information about the actual frequency is lost in the sampled signal. This is where the entire story about the Nyquist frequency comes into play. If the sampling frequency is twice the highest frequency of the signal, then we can safely assume (as we're often trained to do subconsciously) that all the frequencies in the sampled signal are between zero and one half of the sampling frequency. In reality, the spectrum of the sampled signal is periodic around Fs/2 and we can use that periodicity in order to achieve lower sampling rates. Take a look at the following picture: The area between the 0 and Fs/2 is the so-called first Nyquist zone. This is the area where we're doing the "traditional" sampling. Next take a look at the area between the Fs/2 and Fs. This is the second Nyquist zone. If we have any signals in this area, their spectrum will be sampled and its spectrum will be flipped, that is to say, the high and the low frequencies will be inverted. Next, we have the third Nyquist zone, between the Fs and 3Fs/2. Signals here, when sampled, will look as if they came from the first zone and their spectrum will be normal. The same goes for all the other zones, with the rule being that the spectrum of odd-numbered zones is normal and the spectrum of even-numbered zones is inverted. Now this goes against the "traditional" rules about aliasing, since aliasing is usually taught as some evil monster coming to eat your signals away and that you have to use the low-pass anti-aliasing filters to get rid of it. In real life, this isn't how things really work. The anti-aliasing filters can't actually prevent aliasing, they just bring it down to the level where it doesn't matter any more. What we really want to do instead is to eliminate any strong signal from Nyquist zones that are not of interest and let through the signals from the Nyquist zone that is of interest to us. If we're in the first zone, then a low-pass filter is fine, but for all the other zones, we need a band-pas filter that will allow us to get the useful signals from that zone and remove junk that we don't need that's coming from the other zones. So let's take a look at this example: Here we have a signal in the third Nyquist zone that is being let through by a band-pass filter. Our ADC will need to only have the sampling frequency of twice the bandwidth of the signal to reconstruct it, but we always need to keep in mind that this is actually a signal from the third zone, when we need to calculate the frequencies inside of our signal. This procedure is often called bandpass sampling or undersampling. Now, after all this exposition, to answer your question when: Well, let's take a look at radio, perhaps something in the microwave spectrum, maybe WiFi. 
A typical old-style WiFi channel might have 20 MHz of bandwidth, but the carrier frequency would be around 2.4 GHz. So, if we take our naive approach to sampling the signal directly, we'd need a 5 GHz ADC to see our signal, even though we're only interested in particular 20 MHz of spectrum. A 5 GHz analog to digital converter is something that is very complicated and expensive and it requires very complicated and expensive design too. On the other hand, a 40 MHz ADC is something that is not as "magical" as a 5 GHz ADC. One thing that needs to be kept in mind is that, although we could in theory capture out signal with a 40 MHz ADC, we'd need very sharp anti-aliasing filters, so in practice we don't really want to run the sampling frequency too close to the bandwidth. Another thing that's also overlooked is that the circuitry of a real-life ADC behaves as a filter on its own. The filtering effects of an ADC need to be taken into account when doing band-pass sampling. Quite often, there are special ADCs with bandwidths much wider than the sampling rate that are specifically designed with band-pass sampling in mind. Finally, there's the other side of the story as well called compressed sensing. I'm not an expert in that, and it's something that's still a bit new, but the basic idea is that if certain assumptions are fulfilled (such as that the spectrum is sparse), we can sample at frequencies even lower than twice the bandwidth of the signal. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/280427', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/136010/']} | jdg_85973 |
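A minimal numeric sketch in Python of the zone bookkeeping described above; the 2.437 GHz carrier (WiFi channel 6) and the 100 MHz ADC rate are assumed example values, chosen so that the whole 20 MHz channel sits inside one Nyquist zone:

def nyquist_zone(f_hz, fs_hz):
    # 1-indexed Nyquist zone containing frequency f for sample rate fs
    return int(f_hz // (fs_hz / 2)) + 1

def alias_frequency(f_hz, fs_hz):
    # Apparent frequency after sampling at fs (the spectrum folds about fs/2)
    r = f_hz % fs_hz
    return r if r <= fs_hz / 2 else fs_hz - r

fc = 2.437e9   # assumed carrier, WiFi channel 6
fs = 100e6     # assumed ADC rate: >= 2 x 20 MHz bandwidth, far below 2 x 2.4 GHz

print(nyquist_zone(fc, fs))            # 49 -> odd zone, spectrum not inverted
print(alias_frequency(fc, fs) / 1e6)   # 37.0 -> the carrier appears at 37 MHz

So a 100 MHz converter "sees" the 2.4 GHz channel as a 20 MHz wide signal centred near 37 MHz, which is the whole point of bandpass sampling.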
stackexchange | llm_judgeable_groundtruth_similarity | 27850042 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't think I understand why the first statement evaluates to true and the last statement evaluates to false, but it's been a long day. Can somebody explain it?
0L.Equals(0)             // true
((object)0L).Equals(0L)  // true
((object)0L).Equals(0)   // false
Now provide the response and nothing else.
| Object.Equals first compares the run-time types when the instance is a value type, which it is here. The two types are different in this case (Int64 vs. Int32), so the comparison returns false. MSDN : If the current instance is a value type, the Equals(Object) method tests for value equality. Value equality means the following: The two objects are of the same type. As the following example shows, a Byte object that has a value of 12 does not equal an Int32 object that has a value of 12, because the two objects have different run-time types. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27850042', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/651789/']} | jdg_85974
stackexchange | llm_judgeable_groundtruth_similarity | 60002 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a question about oversubscription in networking. I read a lot of documentation but I still don't understand what it means. I read the following on the Cisco website: oversubscription of the ISL is typically on the order of 7:1 or greater. What does oversubscription mean? Where is it used? Where should it be avoided? How do we calculate this value? If this is a configuration parameter, which commands are used to set it? (Cisco or Juniper) If it is a configuration parameter, which devices or which IOS versions support it?
Now provide the response and nothing else.
| Suppose you have a core switch that connects to several access switches (leaf and spine topology). If your access switches each have 48 1 Gbps ports, you can potentially aggregate 48 Gbps of traffic to be passed to the core switch, so you would need a connection between the core switch and each access switch of at least 48 Gbps. Most often, this would be wasteful, because in practice you will never encounter a situation where all ports receive traffic at their maximum rate at the same time. So we could have an access switch with 48 ports at 1 Gbps and an uplink to the core switch at 10 Gbps. We then have an over-subscription of 4.8:1. If we use a LAG with 2 x 10 Gbps ports, we can reduce it to: 48 x 1 Gbps / 2 x 10 Gbps = 2.4:1
When do we use it and when not?
As you can see, it is almost always used when you have several switch layers. You don't use it:
when you have only one switch layer (very small networks)
when you have very specific requirements and want the full bandwidth available on all ports at any time (and enough money to do so)
How do we calculate this value?
As in the example above, the over-subscription ratio is the ratio between the downstream capacity and the upstream bandwidth. As for how to decide which final ratio to attain when designing / upgrading a network, it can be tricky. This is why, from its vast experience and analysis of real networks, Cisco makes some recommendations, such as the one you quoted, or the one quoted by @RonMaupin in a comment: the access to distribution oversubscription ratio is recommended to be no more than 20:1 (for every 20 access 1 Gbps ports on your access switch, you need 1 Gbps in the uplink to the distribution switch), and the distribution to core ratio is recommended to be no more than 4:1. But the correct values for a given network highly depend on the traffic pattern. For an existing network, close monitoring of the bandwidth used on each port should give enough insight. You can also use NetFlow / sFlow to analyze further what uses the bandwidth. When designing a new network you need to assess the expected traffic.
If this is a configurable parameter, what are the commands used to configure it? (Cisco or Juniper)
You can see now that it is not something we configure; it is a design choice. Note: The port speed is not always the limiting factor. Most often the switch hardware is not capable of handling the full bandwidth on all its ports simultaneously; this is indeed a kind of internal over-subscription (once again mostly driven by real usage patterns and costs). | {} | {'log_upvote_score': 6, 'links': ['https://networkengineering.stackexchange.com/questions/60002', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/52517/']} | jdg_85975
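A small Python sketch of the arithmetic above — the port counts and speeds are simply the example numbers from the answer:

def oversubscription(access_ports, access_gbps, uplinks, uplink_gbps):
    # Downstream capacity divided by upstream capacity, e.g. 4.8 means 4.8:1
    return (access_ports * access_gbps) / (uplinks * uplink_gbps)

print(oversubscription(48, 1, 1, 10))  # 4.8 -> 4.8:1 with a single 10 Gbps uplink
print(oversubscription(48, 1, 2, 10))  # 2.4 -> 2.4:1 with a 2 x 10 Gbps LAG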
stackexchange | llm_judgeable_groundtruth_similarity | 115170 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
File compression utilities like WinRAR or ZIP or 7zip encrypt the password and store it inside the archive. How safe is that? I mean you are giving away the archive with the password inside; it's not like authenticating against a web site where the password is stored somewhere remotely. Can, for example, binary examination of the archive, given that the rar or zip file format structure is public, reveal the password's encrypted form? For example, look for the header that contains the actual password and extract the encrypted string? Say that this is plausible: is there something you can do to extract the clear password from the encrypted one?
Now provide the response and nothing else.
| The password is required to decrypt the archive, so storing a copy of the password inside the archive would be pointless - anyone who could decrypt the archive already has the password. The only things that get stored unencrypted in the archive are the encryption type, and perhaps the file structure and file attributes. Perhaps you're confused about how the archive is getting locked and unlocked. It sounds like you're under the impression that the password gets stored inside the archive, and then the archive utility compares the password entered by the user to the password stored in the archive. However, this is not how encryption works. If the password were stored in the archive such that the utility could read it, then anyone with a simple file editor could read it. Not only that, if the protection scheme were setup to simply force the utility to check against the stored password in this manner, but not obfuscate the data, anyone could simply read the contents of the file without even looking at the password. If the payload were encrypted and the password were stored somewhere in the unencrypted section of the archive, anybody could read the password and quickly decrypt the payload, thus defeating the encryption. Instead, what encryption does is use a key and an algorithm to manipulate some data to obfuscate what it means. Once it has gone through this encryption process, the only way to make sense of the data is to process it through the corresponding decryption algorithm. The only way to successfully do this is by using the correct key and algorithm. Since the algorithm is always known, only the key is unique. The way you know that you used the correct key is the fact that the data you get out of the decryption process has a valid format and isn't just random garbage data. The only information an encrypted archive will give you is the encrypted payload and the algorithm used to encrypt it. It is up to you to supply the correct key. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/115170', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/90321/']} | jdg_85976 |
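To see concretely why nothing about the password needs to be stored, here is a deliberately toy Python sketch (an XOR keystream derived with SHA-256 — for illustration only, not a real or secure cipher): only the correct key turns the ciphertext back into meaningful data, while a wrong key just yields garbage.

import hashlib

def keystream(password, n):
    # Derive n pseudo-random bytes from the password (toy construction)
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(password.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data, password):
    # XOR with the password-derived keystream; applying it twice with the
    # same password gives back the original bytes
    ks = keystream(password, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secret = b"payload of the archive"
ciphertext = xor_crypt(secret, "correct horse")   # note: no password stored anywhere

print(xor_crypt(ciphertext, "correct horse"))     # b'payload of the archive'
print(xor_crypt(ciphertext, "wrong password"))    # unreadable bytes: wrong key -> garbage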
stackexchange | llm_judgeable_groundtruth_similarity | 1142314 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Background I was creating a secondary navigation menu using embedded unordered lists with anchors and headers. Using a CSS reset sheet all headers and anchors are set to "display: block". When list-style-position: inside is set Firefox and Camino render the headers and anchors below the bullet while Safari, Camino, and IE render it inline. Example Screen Shots (source: michaelgrace.org ) (source: michaelgrace.org ) Example Code <html><head> <style type="text/css"> /* css reset */ h1, h2, h3, h4, h5, h6, a { display: block; } /* list styling */ ul { list-style-position: inside; } </style></head><body><ul> <li> <h3>Primary</h3> <ul> <li> <h4>Secondary</h4> <ul> <li> <h5>Tertiary</h5> <ul> <li><a href="#">Tertiary Link</a></li> </ul> </li> </ul> </li> </ul> </li><ul></body></html> To get Firefox and Camino to render the same as the others I set the unordered lists, headers, and links to "display: inline" but I still want to know... Question Why does Firefox & Camino render the list item below the list bullet when Safari, Opera, & IE render it "normal"?
Now provide the response and nothing else.
| Update This is actually broken and has been since 2000. Still not fixed. I thought I had figured it out but it was a mistake on my part. STILL BROKEN! :( Answer Setting the CSS property of "list-style" to "disc" will cause the Firefox and Camino rendering engine, Gecko, to render the headers inside an unordered list "normal". Answer Background After following @o.k.w's advice of digging into the rendering engine I found that my problem had been reported on bugzilla.mozilla.org on April 22, 2000! (*Cough* Um, Mozilla... the bug is still there.) The reported bug at https://bugzilla.mozilla.org/show_bug.cgi?id=36854 discusses the fact that Mozilla's rendering engine, Gecko , has a problem displaying headers in an unordered list while displaying the list item marker inside. It also says about this problem: "This actually seems to be a major CSS1 compliance issue..." - David Baron At the bottom of the bug report thread there is a link a w3c.org document that led me to find my fix: "There is a description in a CSS 2.0 recommendation: http://www.w3.org/TR/CSS2/generate.html#q11 which tell us that Gecko behavior is faulty." - Listy Blaut At the bottom of that document there is a suggestion to set the CSS list-style to disc: ul { list-style: disc } Setting the unordered list list-style to "disc" has "fixed" the rendering problem in Gecko rendering engine browsers, Firefox & Camino, while leaving the lists unchanged in other browsers. *Note: Although "disc" is a list-style-type property , if "list-style-type: disc" is used instead of "list-style: disc" it does not fix the problem. Solution Example Code <html><head> <style type="text/css"> /* css reset */ h1, h2, h3, h4, h5, h6, a { display: block; } /* list styling */ ul { list-style-position: inside; list-style: disc;} </style></head><body> <ul> <li> <h3>Primary</h3> <ul> <li> <h4>Secondary</h4> <ul> <li> <h5>Tertiary</h5> <ul> <li><a href="#">Tertiary Link</a></li> </ul> </li> </ul> </li> </ul> </li> <ul></body></html> How I feel finding the answer to my own question (source: michaelgrace.org ) I can finally sleep ; ) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1142314', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/117068/']} | jdg_85977 |
stackexchange | llm_judgeable_groundtruth_similarity | 49725012 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just started learning ReactJS. Now I want to know how to handle response status when I make an API request using fetch. Here's my code : componentDidMount(){ this.setState({ isLoading: true }) var id = this.props.match.params.id const api = `bla/bla/${id}`; console.log("start call api") fetch(api) .then((response) => { if(response.status === 200){ console.log("SUCCESSS") return response.json(); }else if(response.status === 408){ console.log("SOMETHING WENT WRONG") this.setState({ requestFailed: true }) } }) .then((data) => { this.setState({ isLoading: false, downlines: data.response }) console.log("DATA STORED") }) .catch((error) => { this.setState({ requestFailed: true }) }) console.log("end call api")} I turned off my connection to make a test for 408 , but my loading is still appears. render(){ const { isLoading, requestFailed } = this.state; if(requestFailed){ return( <div className="errorContainer"> <a className="errorMessage">Opss.. Something went wrong :(</a> </div> ) }} here's the log in my browser: Any ideas to fix this ?
Now provide the response and nothing else.
| Throw an error when the response is not OK so that it proceeds directly to the catch : fetch(api) .then((response) => { if(!response.ok) throw new Error(response.status); else return response.json(); }) .then((data) => { this.setState({ isLoading: false, downlines: data.response }); console.log("DATA STORED"); }) .catch((error) => { console.log('error: ' + error); this.setState({ requestFailed: true }); }); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49725012', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7776561/']} | jdg_85978 |
stackexchange | llm_judgeable_groundtruth_similarity | 34698842 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I took discrete math (in which I learned about master theorem, Big Theta/Omega/O) a while ago and I seem to have forgotten the difference between O(logn) and O(2^n) (not in the theoretical sense of Big Oh). I generally understand that algorithms like merge and quick sort are O(nlogn) because they repeatedly divide the initial input array into sub arrays until each sub array is of size 1 before recursing back up the tree, giving a recursion tree that is of height logn + 1. But if you calculate the height of a recursive tree using n/b^x = 1 (when the size of the subproblem has become 1 as was stated in an answer here ) it seems that you always get that the height of the tree is log(n). If you solve the Fibonacci sequence using recursion, I would think that you would also get a tree of size logn, but for some reason, the Big O of the algorithm is O(2^n). I was thinking that maybe the difference is because you have to remember all of the fib numbers for each subproblem to get the actual fib number meaning that the value at each node has to be recalled, but it seems that in merge sort, the value of each node has to be used (or at least sorted) as well. This is unlike binary search, however, where you only visit certain nodes based on comparisons made at each level of the tree so I think this is where the confusion is coming from. So specifically, what causes the Fibonacci sequence to have a different time complexity than algorithms like merge/quick sort?
Now provide the response and nothing else.
| The other answers are correct, but don't make it clear - where does the large difference between the Fibonacci algorithm and divide-and-conquer algorithms come from? Indeed, the shape of the recursion tree for both classes of functions is the same - it's a binary tree. The trick to understanding it is actually very simple: consider the size of the recursion tree as a function of the input size n. In the Fibonacci recursion, the input size n is the height of the tree; for sorting, the input size n is the width of the tree. In the former case, the size of the tree (i.e. the complexity) is exponential in the input size; in the latter, it is the input size multiplied by the height of the tree, which is usually just a logarithm of the input size. More formally, start with these facts about binary trees:
The number of leaves n in a binary tree is equal to the number of non-leaf nodes plus one. The size of a binary tree is therefore 2n-1.
In a perfect binary tree, all non-leaf nodes have two children.
The height h of a perfect binary tree with n leaves is equal to log(n); for a random binary tree, h = O(log(n)); and for a degenerate binary tree, h = n-1.
Intuitively:
For sorting an array of n elements with a recursive algorithm, the recursion tree has n leaves. It follows that the width of the tree is n, and the height of the tree is O(log(n)) on average and O(n) in the worst case.
For calculating a Fibonacci sequence element k with the recursive algorithm, the recursion tree has k levels (to see why, consider that fib(k) calls fib(k-1), which calls fib(k-2), and so on). It follows that the height of the tree is k. To estimate a lower bound on the width and number of nodes in the recursion tree, consider that since fib(k) also calls fib(k-2), there is a perfect binary tree of height k/2 as part of the recursion tree. If extracted, that perfect subtree would have 2^{k/2} leaf nodes. So the width of the recursion tree is at least O(2^{k/2}) or, equivalently, 2^O(k).
The crucial difference is that:
for divide-and-conquer algorithms, the input size is the width of the binary tree;
for the Fibonacci algorithm, the input size is the height of the tree.
Therefore the number of nodes in the tree is O(n) in the first case, but 2^O(n) in the second. The Fibonacci tree is much larger compared to the input size. You mention the Master theorem; however, the theorem cannot be applied to analyze the complexity of Fibonacci because it only applies to algorithms where the input is actually divided at each level of recursion. Fibonacci does not divide the input; in fact, the functions at level i produce almost twice as much input for the next level i+1. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34698842', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3957097/']} | jdg_85979
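A quick empirical check in Python of the tree sizes described above — counting the nodes of the recursion tree for naive fib(n) versus merge sort on n elements (this instrumentation style is just one convenient way to do it):

def fib_calls(n):
    # Number of nodes in the recursion tree of naive fib(n)
    if n < 2:
        return 1
    return 1 + fib_calls(n - 1) + fib_calls(n - 2)

def mergesort_calls(n):
    # Number of nodes in the recursion tree of merge sort on n elements
    if n <= 1:
        return 1
    return 1 + mergesort_calls(n // 2) + mergesort_calls(n - n // 2)

for n in (10, 20, 30):
    print(n, fib_calls(n), mergesort_calls(n))
# 10      177  19   -> the fib tree grows exponentially in n; the sort tree is just 2n-1 nodes
# 20    21891  39
# 30  2692537  59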
stackexchange | llm_judgeable_groundtruth_similarity | 3922251 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This semester I am studying differential geometry, and in this chapter we want to define what a surface is. In order to do that, we first define what a manifold is on a Euclidean space, not what a manifold is in general; by Euclidean space I mean $(\mathbb{R}^{n},\left \| \cdot \right \|)$ . That being said, if someone who doesn't study math asks you "what is a manifold?", how would you answer, in simple terms (as Feynman says)? Can we just say it's a homeomorphic function from open sets to open sets? Older posts have been made on this topic (but I don't think they answer my question well, in my opinion) and have already been answered; if there isn't anything new to add I will delete this.
Now provide the response and nothing else.
| So the OP asked this: if someone who doesn't study math ask you "what this a manifold?" how would you answer, in simple terms (as Feynman says). My emphasis. So clearly the interest is about informing people who have no practice with math and requires analogies and explanations that do not include jargon. Which is great, because I am not a mathematician so I will provide the answer that was provided to me when I had this same questions hovering in my head. You can start by saying that, the simplest example of a manifold is surface that is, in general, globally curved. As an example you can give a sphere. The surface of a sphere. Then explain that despite the fact that the sphere is curved, you can divide it in several little squares or triangles and if your division is fine enough and each little triangle is small enough, you can tile the entire surface of the sphere very tightly even though its surface is curved and the small polygonal tiles are flat. So it is locally flat, despite being globally curved. This, incidentally, is why the surface of the Earth seems so flat when seen from our vantage point. Here you make your point that if you can tile a surface with such minuscule flat polygons, then it is a manifold because even though it is generally curved, it is locally flat. As a general example you can give a waving flag, relief terrain maps etc. 2D manifolds for which global curvature isn't constant, the most general examples you can think of. The next point in the instruction is harder because then you need to show your interlocutor that the notion is general. That is, there is no reason why manifolds should be 2D, tiled by flat polygons. They may very well be 3D, tiled by flat polygons or 4D, tiled by flat polytopes. Once you do that you can say that an entire manifold can, in fact, be flat. In which case it will be both globally and locally flat. When you study a manifold embedded inside a flat Euclidean space it is just a fancy way of saying that you, as the observer, is watching your curved study manifold from an external flat manifold. For instance, when you study the curvature of the surface of a sphere, a 2D manifold, inserted in a 3D flat space like the room where you are studying it, which is a 3D flat manifold. You can contrast this type of study with a study of the surface of the sphere where the observer isn't in an external room, but constrained to walk on the surface of the sphere itself. Again, the notion of observing a sphere inside a room can be extended to the general case of $n-1$ dimensional manifolds embedded in $n$ dimensional flat manifolds. I will leave it as homework how to explain what you mean by "Euclidean space". I hope this helps, good luck. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3922251', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/636226/']} | jdg_85980 |
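For a reader who does want one formula behind the "locally flat" picture, here is the standard example of a chart on the unit sphere (written only as an illustration — the tiling intuition above does not depend on it): the open upper hemisphere is the graph of a function over a flat disc, so two flat coordinates $(x,y)$ pin down every point of that patch.
$$\varphi \colon \{(x,y)\in\mathbb{R}^2 : x^2+y^2<1\} \to S^2, \qquad \varphi(x,y) = \bigl(x,\; y,\; \sqrt{1-x^2-y^2}\,\bigr).$$
Covering the sphere with six such graph patches (two per coordinate axis) is the precise version of "tiling a curved surface with small flat pieces".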
stackexchange | llm_judgeable_groundtruth_similarity | 3773582 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Take a Torus such that 'r', the radius of the circle of the torus's cross-section (see fig. 1), is $300$ m. Now, given that there are two 'floors' in said Torus and the sum of their areas is $864,000 \; m^2$ , find R. (see fig. 2) [Floors highlighted.] This is what I tried: To find the length of the 2nd floor, draw a line from the centre to F (Fig. 2). This is 300 m . We then get a right angle with hypotenuse 300 m, height 200 m. The base is easily calculated to be 223.60 (2d.p.) by The Pythagorean Theorem. Double that to get the length of the 2nd floor as the line segment from the centre is perpendicular (known) to the chord and thus bisects it. Length of Floor-1 = $600 \; m$ Length of Floor-2 = $447.21 \; m $ (2d.p.) First consider the Triangle T-1, $\text{Height} = 200 \ m \quad \; and \quad \text{Base} = 300 - 223.60 = 76.40 \; (2 d.p.) $ So, $$ \text{Hyp.} = 214.1 \quad (1 \, d.p.) $$ Also, note that the angle between the base and hypotenuse is $69.1$ DEG. Now, consider T-2, The largest angle is $180 - 69.1 = 110.9$ DEG, and the unlabelled short length is 214 m. Applying Law of Cosines we get: $$c = \sqrt{R^2 + 159.64R + 45796} $$ Finally consider T-3, Since the height of the triangle remains 200 m, Applying Pyhtaogra's Theorem: $$ d = \sqrt{R^2 + 159.64R + 5796} \qquad (1)$$ We can now finally calculate the areas of the floors (as rings) in terms of 'R'. For Floor-1: $$A_1 = \pi [ \ (R+600)^2 - R^2 \ ] $$ Which simplifies to: $$ A_1 = \pi [ \ 1200R + 3.6 \times 10^5 \ ] \qquad (2) $$ Likewise for Floor-2, $$A_2 = \pi[ \ (d+447.21)^2 - d^2 \ ] \implies A_2 = \pi[\ 894.42d + 1.999 \times 10^5 \ ] \qquad (3)$$ Total Area then is $A_1 + A_2 = \pi[ \ 894.42d + 1200R + 5.60 \times 10^5 \ ]$ Which becomes: $$A_{Net} = \pi[\ \sqrt{R^2 + 159.64R + 5796} + 1200R + 5.6 \times 10^5 \ ] \qquad (4) $$ Since $A_{Net}$ is given to be $864,000 \ m^2$ : $$ \pi[\ \sqrt{R^2 + 159.64R + 5796} + 1200R + 5.6 \times 10^5 \ ] = 864,000 $$ I then proceed to solve it as follows: $$\sqrt{R^2 + 159.64R + 5796} = 1200 R - \big[ {864000\over \pi} {-5.6 \times 10^5} \big] $$ Squaring both sides, $$ R^2 + 159.64R + 5796 = 1.44 \times 10^6 R^2 + 6.84 \times 10^8 R + 8.12 \times 10^{10} $$ This reduces to the following quadratic: $$ 1.44 \times 10^6 R^2 + 6.84 \times 10^8 R + 8.12 \times 10^{10} = 0 \qquad (5)$$ (You might've noticed that I've taken some liberties with rounding. I'm just working towards getting an answer that's good enough, hopefully that shouldn't mess up the calculations too much) Solving this quadratic -- and here's the problem! -- leads to negative results, which is obviously bad. $R_1 = -233 {1\over 3}$ & $R_2 = -247 {2\over 3}$ So what exactly am I doing wrong? Is the problem with the working? Or is there something wrong with the method? I tried solving this with another method, which was basically thinking of the torus as it's cartesian (or polar) equation and then solving for the sum of the areas of intersection (area 'within the torus') of two parallel planes 200m apart where one cuts through the centre of the torus, horizontally. It didn't go well though... Hyelp! [Feel free to edit the question, working, pictures & tags of the question to make it more readable]
Now provide the response and nothing else.
| Yes, transitive models inside $L$ can be very non- $L$ -ish. Specifically, recall Shoenfield absoluteness . Since " $T$ has a countable transitive model" is $\Sigma^1_2$ , $^*$ by applying Downwards Lowenheim-Skolem in $V$ we have that whenever $T$ is a theory with a transitive set model in $V$ then $T$ has a countable transitive model in $L$ . So, for example, if $\mathsf{ZFC+\neg CH}$ + "There is a proper class of supercompacts" has a transitive model, then it has a constructible transitive model, despite the fact that of course that theory is highly incompatible with the axiom of constructibility for both large cardinal and combinatorial reasons. It may help, visualization-wise, to consider e.g. $L_{\omega^2}(\mathbb{R}^L)$ . This is a transitive set contained in $L$ of height $\omega^2$ but containing all constructible reals - basically, it's "short and wide" in a way which makes it very different from any level of $L$ . Now that's sort of a bad example since it's "informationally" equivalent to the better-behaved $L_{\omega_1^L}$ - each $L_\alpha$ for $\alpha<\omega_1^L$ is represented by a real in $L_{\omega^2}(\mathbb{R}^L)$ and can be "decoded" in a definable way - but it's a good first taste of how the particular shape of the levels of $L$ , rather than their mere constructibility, constrains their behavior. EDIT: There's another theorem which is relevant here. It gets a weaker conclusion than Shoenfield, but is quite different and interesting: (Barwise) Every countable model of $\mathsf{ZF}$ has an end extension which is a model of $\mathsf{ZFC+V=L}$ . See here . Of course, that end extension will probably be ill-founded - that's why Barwise's theorem doesn't prove $\mathsf{V=L}$ outright. Incidentally, the picture at that blogpost is quite nice on its own - note the added width , in addition to height, per the comment about the shape of levels of $L$ in the previous section. Barwise's theorem does not give us a top extension. Barwise's theorem lets us transfer consistency results: if $\mathsf{ZFC}$ + "There is a transitive model of $T$ " is consistent then so is $\mathsf{ZFC+V=L}$ + "There is a transitive model of $T$ ." More generally, note that end extensions preserve internal transitivity satisfaction: if $M\models\mathsf{ZF}$ , $A$ and $T$ are in $A$ , $M$ thinks $T$ is a theory and $A$ is a transitive set satisfying $T$ , and $N$ is an end extension of $M$ (perhaps one satisfying $\mathsf{ZFC+V=L}$ !), then $N$ also thinks that $A$ is a transitive model of the theory $T$ . $^*$ OK, that's not strictly true: rather, it's $\Sigma^1_2$ relative to $T$ . So really all we can conclude is that every $\{\in\}$ -theory which is in $L$ which has a transitive model in $V$ also has one in $L$ . A good example of how this can play out is to consider the following. Let $T_0=\mathsf{ZFC}$ + " $0^\sharp$ exists," and let $T_1$ be $T_0$ + axioms correctly stating each bit of $0^\sharp$ . Now per the above (under reasonable hypotheses) $T_0$ has a transitive model in $L$ . On the other hand, $T_1$ definitely won't : a transitive model of $T_1$ has to compute $0^\sharp$ correctly, which $L$ can't. But this is fine, since $T_1$ itself computes $0^\sharp$ : $T_1\not\in L$ so we can't apply Shoenfield. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3773582', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/738447/']} | jdg_85981 |
stackexchange | llm_judgeable_groundtruth_similarity | 109833 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Water is normally assumed to be an incompressible fluid - for example in the context of calculations involving water pressure. I wondered whether that is strictly true, or an approximation? Later I noticed some side note implying water is not fully incompressible. Of course that makes sense, as there are not many things "perfect" in nature in this sense, like maybe supraconductivity and suprafluidity. Now, why, or "in what kind of way", is water only almost incompressible - is it caused by impurities like gases and other fluids in the water? Or is it not completely incompressible in some fundamental way ?
Now provide the response and nothing else.
| Formally, the incompressibility of a fluid is defined by the compressibility ,$$\beta=\frac1\rho\,\frac{\partial\rho}{\partial p}$$where $\rho$ is the mass density and $p$ the gas pressure. This means that, the compressibility is the measure of how much the density (volume) changes when a pressure is applied. For water at standard pressure, this works out to be on the order $10^{-10}\,m^2/N$ which is pretty darn small but definitely non-zero. This value isn't due to impurities in the water, it is due to the properties of H$_2$0 itself . If you look at the $\rho\,{\rm vs}\,p$ plot, you can see how the density changes with both (solid green line): The left-most end of the chart is standard pressure of 1 atm (roughly 0.1 MPa). As you can see, it is not until very high pressures that the density begins to really change. But at most every-day temperatures the deviations from horizontal are negligible. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/109833', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/42975/']} | jdg_85982 |
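A worked number for the formula above, using the quoted $\beta \approx 10^{-10}\,\mathrm{m^2/N}$ and an assumed pressure increase of about $100$ atm ($\approx 10^{7}$ Pa):
$$\frac{\Delta\rho}{\rho} \approx \beta\,\Delta p \approx \left(10^{-10}\ \mathrm{m^2/N}\right)\left(10^{7}\ \mathrm{Pa}\right) = 10^{-3},$$
i.e. squeezing water with a hundred atmospheres changes its density by only about $0.1\%$, which is why treating it as incompressible is such a good approximation at everyday pressures.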
stackexchange | llm_judgeable_groundtruth_similarity | 2367785 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I get stack traces in Perl?
Now provide the response and nothing else.
| Carp::confess (from use Carp; ) will give you a full stack trace as part of the error. If you just need it as part of something failing, confess is all you really need. Per comments, here's output of various Carp functions: use strict;use warnings;use Carp qw/longmess cluck confess/;sub foo { &bar;}sub bar { &baz;}sub baz { shift->();}my %tests = ( 'longmess' => sub { print longmess 'longmess' }, 'cluck' => sub { cluck 'using cluck' }, 'confess' => sub { confess 'using confess' },);while (my ($name, $sub) = each %tests) { print "$name - before eval:\n"; eval { foo($sub); }; print "$name - before if:\n"; if ($@) { print "caught: $@"; } print "$name - done\n\n";} Running this script, you get: longmess - before eval:longmess at - line 14 main::baz called at - line 10 main::bar called at - line 6 main::foo('CODE(0x183a4d0)') called at - line 26 eval {...} called at - line 25longmess - before if:longmess - doneconfess - before eval:confess - before if:caught: using confess at - line 20 main::__ANON__() called at - line 14 main::baz called at - line 10 main::bar called at - line 6 main::foo('CODE(0x183a3e0)') called at - line 26 eval {...} called at - line 25confess - donecluck - before eval:using cluck at - line 19 main::__ANON__() called at - line 14 main::baz called at - line 10 main::bar called at - line 6 main::foo('CODE(0x183a434)') called at - line 26 eval {...} called at - line 25cluck - before if:cluck - done Running this script but redirecting STDOUT (thus showing what gets printed on STDERR), you get: using cluck at - line 19 main::__ANON__() called at - line 14 main::baz called at - line 10 main::bar called at - line 6 main::foo('CODE(0x183a434)') called at - line 26 eval {...} called at - line 25 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2367785', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/101823/']} | jdg_85983 |
stackexchange | llm_judgeable_groundtruth_similarity | 7599576 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on a script that will create or update products in my catalog. The script works fine when the product needs to be created, but it fails when the product already exists in the database giving me (many times) the following messages : 2011-09-30T08:00:53+00:00 ERR (3): Recoverable Error: Argument 3 passed to Mage_Catalog_Model_Resource_Eav_Mysql4_Abstract::_canUpdateAttribute() must be an array, null given, called in ... 2011-09-30T08:00:53+00:00 ERR (3): Recoverable Error: Argument 3 passed to Mage_Eav_Model_Entity_Abstract::_canUpdateAttribute() must be an array, null given, called in ... 2011-09-30T08:00:53+00:00 ERR (3): Warning: array_key_exists() [function.array-key-exists]: The second argument should be either an array or an object in ... I've been looking at the method quoted in the message, but I can't find any reason why the script fails. The script first try to load a product using : $product = Mage::getModel('catalog/product')->loadByAttribute('sku', $sku); and then test if the product was retrieved using a simple if(!$product) { //creation } . All the code that follow the if statement is shared for creation or update and consists of setter calls on product object. Here is the code I use : $product = Mage::getModel('catalog/product')->loadByAttribute('sku', $sku);if(!$product) { // the product doesn't exist yet $product = new Mage_Catalog_Model_Product(); $product->setSku($sku); $product->setTypeId(Mage_Catalog_Model_Product_Type::TYPE_SIMPLE); $product->setCreatedAt(strtotime('now'));}// setters calls$product->setTeinte(trim((string)$record->web_teinte));// ...// finally save the product$product->save(); Maybe someone has already faced the same problem. Any help is welcome ! Thank you.
Now provide the response and nothing else.
| Chances are, in your "setter calls" you are trying to set something that cannot be directly set on $product. It could even be the "setTeinte" as I am not sure what that is trying to set. But as we cannot see all your code, it is a little difficult to say, so as I guide, take a look at the code below, which sets some information directly on the product and then stock levels. It does therefore, illustrate how certain data has to be set. I hope it helps. $SKU = (string)$XMLproduct->Sku;$product = Mage::getModel('catalog/product')->loadByAttribute('sku',$SKU);if ($product) { //Product found, so we need to update it in Magento. $product->setName((string)$XMLproduct->Name); $product->setPrice((real)$XMLproduct->SalePrice); //$product->setDescription((string)$XMLproduct->LongDescription); //$product->setShortDescription((string)$XMLproduct->Description); $product->save(); $productId = $product->getId(); $stockItem =Mage::getModel('cataloginventory/stock_item')->loadByProduct($productId); $stockItemId = $stockItem->getId(); $stockItem->setData('manage_stock', 1); $stockItem->setData('qty', (integer)$XMLproduct->QtyInStock); $stockItem->save(); echo $SKU," Updated: Name: '",(string)$XMLproduct->Name,"', Price: ",(real)$XMLproduct->SalePrice,", Stock level: ",$XMLproduct->QtyInStock,PHP_EOL; $updated++;} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7599576', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_85984 |
stackexchange | llm_judgeable_groundtruth_similarity | 26670 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working through the Programming Paradigms via Mathematica (A First Course) and am attempting to answer the following: Use recursion to count how many times the argument can have its square root taken before the result is less than 2: "manySqrt[x_Real]" should report how many times the "Sqrt[]" function has to be applied to "x" before the result is less than 2. For instance, Sqrt[81.0] is 9.0, Sqrt[9.0] is 3.0 and Sqrt[3.0] is less than 2 so, "manySqrt[81.0]" is 3. Of course, don't use a logarithm. Additionally: First, use of a repetition function ("Map[]", "MapThread[]", "Nest[]", "NestList[]", "Fold[]", "FoldList[]", "Table[]", "Apply[]", and so on, being our "adverbs") will generally disqualify a method as purely recursive, for the repetition is accomplished externally from the nested function calls. Also, repetition accomplished with a loop structure, such as a "While[]", a "Do[]", or some other repetition command, is explicitly forbidden. My strategy is to make a list of 'Real' values and then count the 'Real' values in the list. I can get the following to work: Clear[manySqrt]manySqrt[x_Real] := If[x >= 2, Flatten[Append[{x}, manySqrt[Sqrt[x]]]]] manySqrt[81.](* {81., 9., 3., Null} *) Count[%, _Real](* 3 *) However I get an erroneous answer when I try to wrap Count as follows: Clear[manySqrt]manySqrt[x_Real] := If[x >= 2, Count[Flatten[Append[{x}, manySqrt[Sqrt[x]]]], _Real]] manySqrt[81.] (* 1 *) I would appreciate a point in the right direction.
Now provide the response and nothing else.
| Since you're learning, it'll be instructive to know why your solution didn't work. For that, let's try replacing Count with count (note the lower case c ), which doesn't have any definitions attached with it (clear it first if you have) and evaluate the function: ClearAll[manySqrt2, count]manySqrt2[x_Real] := If[x >= 2, count[Flatten[Append[{x}, manySqrt2[Sqrt[x]]]], _Real]]manySqrt2[1000.] (* test *)(* count[{1000., count[{31.6228, count[{5.62341, count[{2.37137, Null}, _Real]}, _Real]}, _Real]}, _Real] *) So you can see that the eventual solution is a final "count" of the individual counts, not the square root values at each recursion (which is what you tried to do). Since the output of Count is an Integer , the final evaluation will always be of the form: Count[{Real, Integer, Integer, Integer, ..., Null}, Real] which is always 1 . In general, it is always good to approach recursive definitions by first defining a "stopping condition". For instance, your solution does not return anything (or returns Null ) for input < 2, when it should return 0. I would've written the definition as ClearAll[manySqrt]manySqrt[x_Real] /; x < 2 = 0;manySqrt[x_Real] := 1 + manySqrt[√x] I find this much more readable and maintainable than the If based approach, because it is immediately clear what the function does and it is easily extendable to more than one criterion. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/26670', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/7133/']} | jdg_85985 |
stackexchange | llm_judgeable_groundtruth_similarity | 58060530 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having some trouble dynamically changing a List height that depends on the element count. I tried this solution but it didn't work. List { ForEach(searchService.searchResult, id: \.self) { item in Text(item) .font(.custom("Avenir Next Regular", size: 12)) }}.frame(height: CGFloat(searchService.searchResult.count * 20))
Now provide the response and nothing else.
| TL;DR This is not how the designers of SwiftUI want you to use lists. Either you will have to come up with a hacky solution that will probably break in the future (see below), or use something other than a list. Background SwiftUI tends to have two types of Views Those designed to be easily modifiable and composable, providing unlimited customizability for a unique look and feel. Those designed to provide a standard, consistent feel to some type of interaction, regardless of what app they are used in. An example of type 1 would be Text. You can change font size, weight, typeface, color, background, padding, etc. It is designed for you to modify it. An example of type 2 would be List. You are not in direct control of row height, you can't change the padding around views, you can't tell it to show only so many rows, etc. They don't want it to be very customizable, because then each app's lists would behave differently, defeating the purpose of a standard control. List is designed to fill the entire parent View with as many rows as possible, even if they are empty or only partially on screen (and scroll if there are too many to show at once). Your issue The problem you are having comes about because the size of the List does not affect the size of its rows in any way. SwiftUI doesn't care if there are too many or too few rows to fit in your preferred size; it will happily size its rows according to content, even if it means they don't all show or there are empty rows shown. If you really need rows to resize according to the size of their parent, you should use a VStack. If it needs to scroll, you will need to wrap the VStack in a ScrollView. Hacky solution If you still insist on using a list, you will have to do something like the following: struct ContentView: View { @State private var textHeight: Double = 20 let listRowPadding: Double = 5 // This is a guess let listRowMinHeight: Double = 45 // This is a guess var listRowHeight: Double { max(listRowMinHeight, textHeight + 2 * listRowPadding) } var strings: [String] = ["One", "Two", "Three"] var body: some View { VStack { HStack { Text(String(format: "%2.0f", textHeight as Double)) Slider(value: $textHeight, in: 20...60) } VStack(spacing: 0) { Color.red List { ForEach(strings, id: \.self) { item in Text(item) .font(.custom("Avenir Next Regular", size: 12)) .frame(height: CGFloat(self.textHeight)) .background(Color(white: 0.5)) } } // Comment out the following line to see how List is expected to work .frame(height: CGFloat(strings.count) * CGFloat(self.listRowHeight)) Color.red }.layoutPriority(1) } }} The slider is there to show how the list row heights change with the height of their child view. You would have to manually pick listRowPadding and listRowMinHeight to get the appearance that best matches your expectation. If Apple ever changes how a List looks (changes padding, minimum row heights, etc.) you will have to remember to come back and adjust these values manually. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/58060530', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10667562/']} | jdg_85986 |
stackexchange | llm_judgeable_groundtruth_similarity | 56288949 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let us consider the following data frame: import pandas as pd; d = {'col1': [1, 2, 3], 'col2': [3, 4, 5]}; df = pd.DataFrame(data=d) If I want to access the first element in the pandas series df['col1'] , I can simply go df['col1'][0] . But how can I access the last element in this series? I have tried df['col1'][-1] which returns the following error: KeyError: -1L I know that I could go for something like df['col1'][len(df)-1] but why is reverse indexing impossible here?
Now provide the response and nothing else.
| For select last value need Series.iloc or Series.iat , because df['col1'] return Series : print (df['col1'].iloc[-1])3print (df['col1'].iat[-1])3 Or convert Series to numpy array and select last: print (df['col1'].values[-1])3 Or use DataFrame.iloc or DataFrame.iat - but is necessary position of column by Index.get_loc : print (df.iloc[-1, df.columns.get_loc('col1')])3print (df.iat[-1, df.columns.get_loc('col1')])3 Or is possible use last value of index (necessary not duplicated) and select by DataFrame.loc : print (df.loc[df.index[-1], 'col1'])3 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/56288949', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6440589/']} | jdg_85987 |
stackexchange | llm_judgeable_groundtruth_similarity | 16010630 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Visual Studio 2012 Update 2 and am having trouble trying to understand why std::vector is trying to use the copy constructor of unique_ptr. I have looked at similar issues and most are related to not having an explicit move constructor and/or operator. If I change the member variable to a string, I can verify that the move constructor is called; however, trying to use the unique_ptr results in the compilation error: error C2248: 'std::unique_ptr<_Ty>::unique_ptr' : cannot access private member declared in class 'std::unique_ptr<_Ty>' . I'm hoping someone can point me to what I am missing, thanks! #include <vector>#include <string>#include <memory>class MyObject{public: MyObject() : ptr(std::unique_ptr<int>(new int)) { } MyObject(MyObject&& other) : ptr(std::move(other.ptr)) { } MyObject& operator=(MyObject&& other) { ptr = std::move(other.ptr); return *this; }private: std::unique_ptr<int> ptr;};int main(int argc, char* argv[]){ std::vector<MyObject> s; for (int i = 0; i < 5; ++i) { MyObject o; s.push_back(o); } return 0;}
Now provide the response and nothing else.
| The push_back() function takes its argument by value. Therefore, an attempt is made to either copy-construct the argument of push_back() (if you are passing an lvalue), or to move-construct it (if you are passing an rvalue). In this case, o is an lvalue - because named objects are lvalues - and rvalue references cannot bind to lvalues. Therefore, the compiler cannot invoke your move constructor. In order to have your object moved , you have to write: s.push_back(std::move(o));// ^^^^^^^^^ What surprises me in this case is that it seems VC11 generated a copy-constructor for MyObject implicitly without defining it as deleted (judging from the error you posted). This should not be the case, since your class declares a move constructor. Per Paragraph 12.8/7 of the C++11 Standard, in fact: If the class definition does not explicitly declare a copy constructor, one is declared implicitly. If the class definition declares a move constructor or move assignment operator, the implicitly declared copy constructor is defined as deleted ; otherwise, it is defined as defaulted (8.4) I must conclude that while the error you are getting is correct - because you are not passing an rvalue to push_back() - VC11 is not fully compliant here. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16010630', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2281698/']} | jdg_85988 |
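To make the fix concrete, here is a minimal self-contained sketch: it is the question's own class with only the push_back call changed as described above (an alternative with the same effect would be s.emplace_back(), which constructs the element in place).
#include <vector>
#include <memory>
#include <utility>
class MyObject {
public:
    MyObject() : ptr(std::unique_ptr<int>(new int)) { }
    MyObject(MyObject&& other) : ptr(std::move(other.ptr)) { }
    MyObject& operator=(MyObject&& other) { ptr = std::move(other.ptr); return *this; }
private:
    std::unique_ptr<int> ptr;   // movable, not copyable
};
int main() {
    std::vector<MyObject> s;
    for (int i = 0; i < 5; ++i) {
        MyObject o;
        s.push_back(std::move(o));   // o is an lvalue, so cast it to an rvalue explicitly
    }
    return 0;
}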
stackexchange | llm_judgeable_groundtruth_similarity | 14247 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've only done a bit of Flex development thus far, but I've preferred the approach of creating controls programmatically over mxml files, because (and please , correct me if I'm wrong!) I've gathered that you can't have it both ways -- that is to say, have the class functionality in a separate ActionScript class file but have the contained elements declared in mxml. There doesn't seem to be much of a difference productivity-wise, but doing data binding programmatically seems somewhat less than trivial. I took a look at how the mxml compiler transforms the data binding expressions. The result is a bunch of generated callbacks and a lot more lines than in the mxml representation. So here's the question: is there a way to do data binding programmatically that doesn't involve a world of hurt?
Now provide the response and nothing else.
| Don't be afraid of MXML. It's great for laying out views. If you write your own reusable components then writing them in ActionScript may sometimes give you a little more control, but for non-reusable views MXML is much better. It's more terse, bindings are extemely easy to set up, etc. However, bindings in pure ActionScript need not be that much of a pain. It will never be as simple as in MXML where a lot of things are done for you, but it can be done with not too much effort. What you have is BindingUtils and it's methods bindSetter and bindProperty . I almost always use the former, since I usually want to do some work, or call invalidateProperties when values change, I almost never just want to set a property. What you need to know is that these two return an object of the type ChangeWatcher , if you want to remove the binding for some reason, you have to hold on to this object. This is what makes manual bindings in ActionScript a little less convenient than those in MXML. Let's start with a simple example: BindingUtils.bindSetter(nameChanged, selectedEmployee, "name"); This sets up a binding that will call the method nameChanged when the name property on the object in the variable selectedEmployee changes. The nameChanged method will recieve the new value of the name property as an argument, so it should look like this: private function nameChanged( newName : String ) : void The problem with this simple example is that once you have set up this binding it will fire each time the property of the specified object changes. The value of the variable selectedEmployee may change, but the binding is still set up for the object that the variable pointed to before. There are two ways to solve this: either to keep the ChangeWatcher returned by BindingUtils.bindSetter around and call unwatch on it when you want to remove the binding (and then setting up a new binding instead), or bind to yourself. I'll show you the first option first, and then explain what I mean by binding to yourself. The currentEmployee could be made into a getter/setter pair and implemented like this (only showing the setter): public function set currentEmployee( employee : Employee ) : void { if ( _currentEmployee != employee ) { if ( _currentEmployee != null ) { currentEmployeeNameCW.unwatch(); } _currentEmployee = employee; if ( _currentEmployee != null ) { currentEmployeeNameCW = BindingUtils.bindSetter(currentEmployeeNameChanged, _currentEmployee, "name"); } }} What happens is that when the currentEmployee property is set it looks to see if there was a previous value, and if so removes the binding for that object ( currentEmployeeNameCW.unwatch() ), then it sets the private variable, and unless the new value was null sets up a new binding for the name property. Most importantly it saves the ChangeWatcher returned by the binding call. This is a basic binding pattern and I think it works fine. There is, however, a trick that can be used to make it a bit simpler. You can bind to yourself instead. Instead of setting up and removing bindings each time the currentEmployee property changes you can have the binding system do it for you. In your creationComplete handler (or constructor or at least some time early) you can set up a binding like so: BindingUtils.bindSetter(currentEmployeeNameChanged, this, ["currentEmployee", "name"]); This sets up a binding not only to the currentEmployee property on this , but also to the name property on this object. 
So anytime either changes the method currentEmployeeNameChanged will be called. There's no need to save the ChangeWatcher because the binding will never have to be removed. The second solution works in many cases, but I've found that the first one is sometimes necessary, especially when working with bindings in non-view classes (since this has to be an event dispatcher and the currentEmployee has to be bindable for it to work). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14247', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/266/']} | jdg_85989 |
stackexchange | llm_judgeable_groundtruth_similarity | 4433620 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The play framework has a function in the view to create an SEO friendly URL via the slugify() function. There does not seem to be an "unslugify" function, so how do you lookup models from the slugified parameter? If for instance I have a Blog model which has a title property set to "hello world", slugify would produce "hello-world". If I then perform a query Blog.find("byTitle", title) where title is the slugified title, it will return no results. How can you lookup a model using the provided slug?
Now provide the response and nothing else.
| There doesn't appear to be a method, but I am not surprised. Slugify removes characters from the string and unslugify would not know where to put it back in. For example, if you look at the URL for this question, it is stackoverflow.com/questions/4433620/play-framework-how-do-i-lookup-an-item-from-a-slugify-url It has removed the exclamation (!), parentheses and the quotes from the title of this question. How would an unslugify method know how and where to put those characters back in? The approach you want to take is to also include the ID, just as the stackoverflow URL has. If you wanted to take the same format as the stackoverflow URL, your route would be GET /questions/{id}/{title} Question.show() Then in your action, you would ignore the title, and simply do Blog.findById(id); You then have a SEO friendly URL, plus use a good REST approach to accessing the Blog post. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4433620', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/58733/']} | jdg_85990 |
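Putting the answer's pieces together, a rough Play 1.x style sketch might look like the following (the Questions controller name is a placeholder, and the exact helpers available depend on your Play version). In conf/routes: GET /questions/{id}/{title} Questions.show and in the controller:
public class Questions extends Controller {
    public static void show(Long id, String title) {
        // the title segment exists only for SEO; the lookup ignores it and uses the id
        Blog entry = Blog.findById(id);
        render(entry);
    }
}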
stackexchange | llm_judgeable_groundtruth_similarity | 49619754 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to style the child component of a styled-component, but it sends the CSS to the parent instead of the child. This is my code: export const Card = styled.div` position: relative; ${props => props.horizontal && ` ${CardImage}{ max-height: 60%; overflow: hidden; }`}`export const CardImage = styled.div` position: relative;` EDIT: It is when I add a condition before rendering that it doesn't work.
Now provide the response and nothing else.
| You're almost there, you're just missing a $ in your code and you'll need to move the CardImage above the Card component: export const CardImage = styled.div` position: relative;`export const Card = styled.div` position: relative; ${CardImage}{ max-height: 60%; overflow: hidden; }` Edit (04/04/2018): If you want to add a condition around a whole block like you have, you need to import the css function from styled components and use that: import styled, {css} from "styled-components";export const CardImage = styled.div` position: relative;`export const Card = styled.div` position: relative; ${props => props.horizontal && css` // - Notice the css` here. ${CardImage}{ max-height: 60%; overflow: hidden; } `}` | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49619754', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9062690/']} | jdg_85991 |
stackexchange | llm_judgeable_groundtruth_similarity | 5634835 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have two fields in my form, viz. first_name and last_name, and I have set a validation rule only for first_name. set_value should work for both fields regardless of rules, right? For me it works only for the fields which have rules attached to them!
Now provide the response and nothing else.
| set_value only works against the items you have chosen to validate using the form_validation class. Easiest way is to pretend you are validating by adding this to your form validation config array( 'field' => 'address1', 'label' => '', 'rules' => '' ), There is a hack someone has done to the CI library but I haven't been able to find the documentation on that today...Or use this modification instead: http://codeigniter.com/forums/viewthread/159535/#775628 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5634835', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/197836/']} | jdg_85992 |
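For the two fields from the question, the workaround sketched above would look roughly like this ('required' is just a stand-in for whatever rule you already apply to first_name, and whether an entry with an empty rules string is honoured depends on the CodeIgniter version - which is what the linked modification addresses):
$config = array(
    array(
        'field' => 'first_name',
        'label' => 'First name',
        'rules' => 'required'      // the field you actually validate
    ),
    array(
        'field' => 'last_name',
        'label' => 'Last name',
        'rules' => ''              // dummy entry so set_value('last_name') repopulates
    )
);
$this->form_validation->set_rules($config);
$this->form_validation->run();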
stackexchange | llm_judgeable_groundtruth_similarity | 9717852 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a JavaFX 2.0 application, which consists of two FXML files, and two controllers for them + one "main" .java file. At the start time, FXML1 is initialized, like this: public void start(Stage stage) throws Exception { stage.setTitle("Demo Jabber JavaFX Chat"); Parent root = FXMLLoader.load(getClass().getResource("fxml_example.fxml"), ResourceBundle.getBundle("fxmlexample.fxml_example")); Scene scene = new Scene(root, 226, 264); stage.setScene(scene); scene.getStylesheets().add("fxmlexample/fxmlstylesheet.css"); stage.show();} Then, when a button from scene1 is clicked, in its event handler in Controller1 class, I change scene1 root, to show new gui-view for a user. And in this controller I initialize some object. For example, like this: public class FXMLExampleController { //some fields... private MySuperObject c; @FXML protected void handleSubmitButtonAction(ActionEvent event) { //some fields... c = new MySuperObject(); //here i initialize my object, i'm interested in try { c.login(username, password); // some actions with this object, which i need to make. Scene cc = buttonStatusText.getScene(); Parent root = null; try { //changing a scene content... root = FXMLLoader.load(getClass().getResource("fxml_example2.fxml"), ResourceBundle.getBundle("fxmlexample.fxml_example")); } catch (IOException ex) { Logger.getLogger(FXMLExampleController.class.getName()).log(Level.SEVERE, null, ex); } cc.setRoot(root); } And, after that, I have to do some work with that object on the next scene, and it must be NOT a new instance of the same class, but the object which I have initialized on the first one scene. I understand how to make these all using "standard java", but I'm kind of confused on this task using JavaFX + FXML.
Now provide the response and nothing else.
| In FX 2.2 new API for controller-node was introduced: // create class which is both controller and nodepublic class InnerFxmlControl extends HBox implements Initializable { @FXML public ComboBox cb; public InnerFxmlControl () { FXMLLoader fxmlLoader = new FXMLLoader(getClass().getResource("fxml_example2.fxml")); fxmlLoader.setRoot(this); fxmlLoader.setController(this); try { fxmlLoader.load(); } catch (IOException exception) { throw new RuntimeException(exception); } } with next fxml (note tag fx:root ): <fx:root type="javafx.scene.layout.HBox" xmlns:fx="http://javafx.com/fxml"> <children> <ComboBox fx:id="cb" /> </children></fx:root> By this you've created a new control, which you can use as regular JavaFX controls. E.g. in your case: @FXML protected void handleSubmitButtonAction(ActionEvent event) { // you just create new control, all fxml tricks are encapsulated InnerFxmlControl root = new InnerFxmlControl(); // and you can access all its' methods and fields including matched by @FXML tag: root.cb.getItems().add("new item"); Scene cc = buttonStatusText.getScene(); cc.setRoot(root); } and in fxml: <InnerFxmlControl /> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9717852', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1267404/']} | jdg_85993 |
stackexchange | llm_judgeable_groundtruth_similarity | 18450835 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is similar to some other questions ( Explicitly select items from a Python list or tuple , Grabbing specific indices of a list in Python ), but I'm looking to do the opposite: What is a clean way to specify a list/tuple of indices to exclude, instead of to select? I'm thinking of something similar to R or MATLAB where you can specify indices to exclude, like: vector1 <- c('a', 'b', 'c', 'd')vector2 <- vector1[-1] # ['b', 'c', 'd']vector3 <- vector1[c(-1, -2)] # ['c', 'd'] Is there a good way to accomplish the same thing in Python? Apologies if this is a dupe, I wasn't sure exactly what to search for.
Now provide the response and nothing else.
| >>> to_exclude = {1, 2}>>> vector = ['a', 'b', 'c', 'd']>>> vector2 = [element for i, element in enumerate(vector) if i not in to_exclude] The tricks here are: Use a list comprehension to transform one list into another. (You can also use the filter function, especially if the predicate you're filtering on is already lying around as a function with a nice name.) Use enumerate to get each element and its index together. Use the in operator against any Set or Sequence * type to decide which ones to filter. (A set is most efficient if there are a lot of values, and probably conceptually the right answer… But it really doesn't matter much for just a handful; if you've already got a list or tuple with 4 indices in it, that's a " Set or Sequence " too, so you can just use it.) * Technically, any Container will do. But most Container s that aren't a Set or Sequence would be silly here. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18450835', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/108311/']} | jdg_85994 |
stackexchange | llm_judgeable_groundtruth_similarity | 112361 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
img = Import /@ {"http://i.stack.imgur.com/klqh9.png", "http://i.stack.imgur.com/zCjXN.png"};mask = Dilation[Binarize[First@img, {0, .15}], 2] Then we get result=Inpaint[#,mask,Method->"TextureSynthesis"]&/@img As the arrows point out, the effect is very bad: we get something that looks like a blob, especially in the grayscale image, where it is most obvious. So how can I get a better result with the grayscale image? Update: @bill s's comment and @Berg's answer suggest converting the RGB image to a grayscale image, so I am updating this question to point out that this may not be a good method - you can see that the last image is the worst. ImageAssemble[{Last@img, Last@result, ColorConvert[First@result, "Grayscale"]}]
Now provide the response and nothing else.
| There's not much information about what "TextureSynthesis" does in the documentation, but it probably does something roughly like this: (1) Look at the pixels at the border of each inpainting region. (2) Look for nearby similar pixels. (3) Generate a texture based on the pixels found in (2). Let's look at one region in detail: The border of the inpainting mask is marked red in the image on the right. If you look at the pixels near that border, they're mostly orange, some slightly darker orange. Now if I look for similar pixels in the neighborhood, I will find (almost exclusively) bright orange pixels. There are darker pixels in the neighborhood, but they have a different color, so they're not very similar. And that's most likely the reason you get different results for the gray scale image: In the gray scale image, if there are darker pixels at the mask border, Inpaint will find "similar" pixels (in gray value) in the neighborhood - so it will generate an inpainting texture based on these pixels too. And that means you get a darker inpainted region. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/112361', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/21532/']} | jdg_85995 |
stackexchange | llm_judgeable_groundtruth_similarity | 1827677 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some code-under-test that calls on a Java logger to report its status.In the JUnit test code, I would like to verify that the correct log entry was made in this logger. Something along the following lines: methodUnderTest(bool x){ if(x) logger.info("x happened")}@Test tester(){ // perhaps setup a logger first. methodUnderTest(true); assertXXXXXX(loggedLevel(),Level.INFO);} I suppose that this could be done with a specially adapted logger (or handler, or formatter), but I would prefer to re-use a solution that already exists. (And, to be honest, it is not clear to me how to get at the logRecord from a logger, but suppose that that's possible.)
Now provide the response and nothing else.
| I've needed this several times as well. I've put together a small sample below, which you'd want to adjust to your needs. Basically, you create your own Appender and add it to the logger you want. If you'd want to collect everything, the root logger is a good place to start, but you can use a more specific if you'd like. Don't forget to remove the Appender when you're done, otherwise you might create a memory leak. Below I've done it within the test, but setUp or @Before and tearDown or @After might be better places, depending on your needs. Also, the implementation below collects everything in a List in memory. If you're logging a lot you might consider adding a filter to drop boring entries, or to write the log to a temporary file on disk (Hint: LoggingEvent is Serializable , so you should be able to just serialize the event objects, if your log message is.) import org.apache.log4j.AppenderSkeleton;import org.apache.log4j.Level;import org.apache.log4j.Logger;import org.apache.log4j.spi.LoggingEvent;import org.junit.Test;import java.util.ArrayList;import java.util.List;import static org.hamcrest.CoreMatchers.is;import static org.junit.Assert.assertThat;public class MyTest { @Test public void test() { final TestAppender appender = new TestAppender(); final Logger logger = Logger.getRootLogger(); logger.addAppender(appender); try { Logger.getLogger(MyTest.class).info("Test"); } finally { logger.removeAppender(appender); } final List<LoggingEvent> log = appender.getLog(); final LoggingEvent firstLogEntry = log.get(0); assertThat(firstLogEntry.getLevel(), is(Level.INFO)); assertThat((String) firstLogEntry.getMessage(), is("Test")); assertThat(firstLogEntry.getLoggerName(), is("MyTest")); }}class TestAppender extends AppenderSkeleton { private final List<LoggingEvent> log = new ArrayList<LoggingEvent>(); @Override public boolean requiresLayout() { return false; } @Override protected void append(final LoggingEvent loggingEvent) { log.add(loggingEvent); } @Override public void close() { } public List<LoggingEvent> getLog() { return new ArrayList<LoggingEvent>(log); }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1827677', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/221538/']} | jdg_85996 |
stackexchange | llm_judgeable_groundtruth_similarity | 3307330 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm parsing third party log files containing date/time using Joda. The date/time is in one of two different formats, depending on the age of the log files I'm parsing. Currently I have code like this: try { return DateTimeFormat.forPattern("yyyy/MM/dd HH:mm:ss").parseDateTime(datePart);} catch (IllegalArgumentException e) { return DateTimeFormat.forPattern("E, MMM dd, yyyy HH:mm").parseDateTime(datePart);} This works but contravenes Joshua Bloch's advice from Effective Java 2nd Edition (Item 57: Use exceptions only for exceptional conditions). It also makes it hard to determine if an IllegalArgumentException occurs due to a screwed up date/time in a log file. Can you suggest a nicer approach that doesn't misuse exceptions?
Now provide the response and nothing else.
| You can create multiple parsers and add them to the builder by using DateTimeFormatterBuilder.append method: DateTimeParser[] parsers = { DateTimeFormat.forPattern( "yyyy-MM-dd HH" ).getParser(), DateTimeFormat.forPattern( "yyyy-MM-dd" ).getParser() };DateTimeFormatter formatter = new DateTimeFormatterBuilder().append( null, parsers ).toFormatter();DateTime date1 = formatter.parseDateTime( "2010-01-01" );DateTime date2 = formatter.parseDateTime( "2010-01-01 01" ); | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3307330', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2959/']} | jdg_85997 |
stackexchange | llm_judgeable_groundtruth_similarity | 10310917 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How to get a UIColor from a hex value in MonoTouch?
Now provide the response and nothing else.
| I found some solutions for Objective-C and none specifically for MonoTouch, so I ended up developing an extension method based on the most popular solution for iOS: public static class UIColorExtensions { public static UIColor FromHex(this UIColor color,int hexValue) { return UIColor.FromRGB( (((float)((hexValue & 0xFF0000) >> 16))/255.0f), (((float)((hexValue & 0xFF00) >> 8))/255.0f), (((float)(hexValue & 0xFF))/255.0f) ); } } and use it like this: new UIColor().FromHex(0x4F6176); Update: it seems that as of MonoTouch 5.4 UIColor does not have a parameterless constructor, so use it like this: UIColor.Clear.FromHex(0xD12229); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10310917', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/210462/']} | jdg_85998 |
stackexchange | llm_judgeable_groundtruth_similarity | 27176134 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm dealing with an API that sends back integers (1=true, other=false) to represent booleans. I've seen this question and answer , but I need to be able to specify which field this should apply to, since some times an integer is actually an integer. EDIT: The incoming JSON could possibly look like this (could also be String instead of int, etc...): { "regular_int": 1234, "int_that_should_be_a_boolean": 1} I need a way to specify that int_that_should_be_a_boolean should be parsed as a boolean and regular_int should be parsed as an integer.
Now provide the response and nothing else.
| We will provide Gson with a little hook, a custom deserializer for booleans, i.e. a class that implements the JsonDeserializer<Boolean> interface: CustomBooleanTypeAdapter import java.lang.reflect.Type;import com.google.gson.*;class BooleanTypeAdapter implements JsonDeserializer<Boolean> {public Boolean deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context) throws JsonParseException { if (((JsonPrimitive) json).isBoolean()) { return json.getAsBoolean(); } if (((JsonPrimitive) json).isString()) { String jsonValue = json.getAsString(); if (jsonValue.equalsIgnoreCase("true")) { return true; } else if (jsonValue.equalsIgnoreCase("false")) { return false; } else { return null; } } int code = json.getAsInt(); return code == 0 ? false : code == 1 ? true : null; }} To use it we’ll need to change a little the way we get the Gson mapper instance, using a factory object, the GsonBuilder, a common pattern way to use GSON is here. GsonBuilder builder = new GsonBuilder();builder.registerTypeAdapter(Boolean.class, new BooleanTypeAdapter());Gson gson = builder.create(); For primitive Type use below one GsonBuilder builder = new GsonBuilder(); builder.registerTypeAdapter(boolean.class, new BooleanTypeAdapter()); Gson gson = builder.create(); Enjoy the JSON parsing! | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27176134', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/691639/']} | jdg_85999 |
stackexchange | llm_judgeable_groundtruth_similarity | 46693496 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do you create a for-each loop in Google Apps Script? I'm writing an email script with GAS, and I'd like to iterate through an array using a for-each loop, rather than a regular for-loop. I've already seen this answer, but the object is undefined, presumably because the for loop doesn't work. // threads is a GmailThread[]for (var thread in threads) { var msgs = thread.getMessages(); //msgs is a GmailMessage[] for (var msg in msgs) { msg.somemethod(); //somemethod is undefined, because msg is undefined. }} (I'm still new to javascript, but I know of a for-each loop from java.)
Now provide the response and nothing else.
| Update: See @BBau answer below https://stackoverflow.com/a/60785941/5648223 for update on Migrating scripts to the V8 runtime. In Google Apps Script:When using "for (var item in itemArray)",'item' will be the indices of itemArray throughout the loop (0, 1, 2, 3, ...).When using "for each (var item in itemArray)",'item' will be the values of itemArray throughout the loop ('item0', 'item1', 'item2', 'item3', ...). Example: function myFunction() { var arrayInfo = []; arrayInfo.push('apple'); arrayInfo.push('orange'); arrayInfo.push('grapefruit'); Logger.log('Printing array info using for loop.'); for (var index in arrayInfo) { Logger.log(index); } Logger.log('Printing array info using for each loop.'); for each (var info in arrayInfo) { Logger.log(info); }} Result: [17-10-16 23:34:47:724 EDT] Printing array info using for loop. [17-10-16 23:34:47:725 EDT] 0 [17-10-16 23:34:47:725 EDT] 1 [17-10-16 23:34:47:726 EDT] 2 [17-10-16 23:34:47:726 EDT] Printing array info using for each loop. [17-10-16 23:34:47:727 EDT] apple [17-10-16 23:34:47:728 EDT] orange [17-10-16 23:34:47:728 EDT] grapefruit | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/46693496', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6844235/']} | jdg_86000 |
stackexchange | llm_judgeable_groundtruth_similarity | 40020388 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a weird bug I've stumbled upon, and I am not sure why is it happening, whether it's a bug in SQLAlchemy, in Flask-SQLAlchemy, or any feature of Python I'm not yet aware of. We are using Flask 0.11.1, with Flask-SQLAlchemy 2.1 using a PostgreSQL as DBMS. Examples use the following code to update data from the database: entry = Entry.query.get(1)entry.name = 'New name'db.session.commit() This works totally fine when executing from the Flask shell, so the database is correctly configured. Now, our controller for updating entries, slightly simplified (without validation and other boilerplate), looks like this: def details(id): entry = Entry.query.get(id) if entry: if request.method == 'POST': form = request.form entry.name = form['name'] db.session.commit() flash('Updated successfully.') return render_template('/entry/details.html', entry=entry) else: flash('Entry not found.') return redirect(url_for('entry_list'))# In the application the URLs are built dynamically, hence why this instead of @app.routeapp.add_url_rule('/entry/details/<int:id>', 'entry_details', details, methods=['GET', 'POST']) When I submit the form in details.html, I can see perfectly fine the changes, meaning the form has been submitted properly, is valid and that the model object has been updated. However, when I reload the page, the changes are gone, as if it had been rolled back by the DBMS. I have enabled app.config['SQLALCHEMY_ECHO'] = True and I can see a "ROLLBACK" before my own manual commit. If I change the line: entry = Entry.query.get(id) To: entry = db.session.query(Entry).get(id) As explained in https://stackoverflow.com/a/21806294/4454028 , it does work as expected, so my guess what there was some kind of error in Flask-SQLAlchemy's Model.query implementation. However, as I prefer the first construction, I did a quick modification to Flask-SQLAlchemy, and redefined the query @property from the original: class _QueryProperty(object): def __init__(self, sa): self.sa = sa def __get__(self, obj, type): try: mapper = orm.class_mapper(type) if mapper: return type.query_class(mapper, session=self.sa.session()) except UnmappedClassError: return None To: class _QueryProperty(object): def __init__(self, sa): self.sa = sa def __get__(self, obj, type): return self.sa.session.query(type) Where sa is the Flask-SQLAlchemy object (ie db in the controller). Now, this is where things got weird: it still doesn't save the changes. Code is exactly the same, yet the DBMS is still rolling back my changes. I read that Flask-SQLAlchemy can execute a commit on teardown, and tried adding this: app.config['SQLALCHEMY_COMMIT_ON_TEARDOWN'] = True Suddenly, everything works. Question is: why? Isn't teardown supposed to happen only when the view has finished rendering? Why is the modified Entry.query behaving different to db.session.query(Entry) , even if the code is the same?
Now provide the response and nothing else.
| Below is the correct way to make changes to a model instance and commit them to the database: # get an instance of the 'Entry' modelentry = Entry.query.get(1)# change the attribute of the instance; here the 'name' attribute is changedentry.name = 'New name'# now, commit your changes to the database; this will flush all changes # in the current session to the databasedb.session.commit() Note: Don't use SQLALCHEMY_COMMIT_ON_TEARDOWN , as it's considered harmful and also removed from docs. See the changelog for version 2.0 . Edit: If you have two objects of normal session (created using sessionmaker() ) instead of scoped session , then on calling db.session.add(entry) above code will raise error sqlalchemy.exc.InvalidRequestError: Object '' is already attached to session '2' (this is '3') . For more understanding about sqlalchemy session, read below section Major Difference between Scoped Session vs. Normal Session The session object we mostly constructed from the sessionmaker() call and used to communicate with our database is a normal session . If you call sessionmaker() a second time, you will get a new session object whose states are independent of the previous session. For example, suppose we have two session objects constructed in the following way: from sqlalchemy import Column, String, Integer, ForeignKeyfrom sqlalchemy.ext.declarative import declarative_baseBase = declarative_base()class User(Base): __tablename__ = 'user' id = Column(Integer, primary_key=True) name = Column(String)from sqlalchemy import create_engineengine = create_engine('sqlite:///')from sqlalchemy.orm import sessionmakersession = sessionmaker()session.configure(bind=engine)Base.metadata.create_all(engine)# Construct the first session objects1 = session()# Construct the second session objects2 = session() Then, we won't be able to add the same User object to both s1 and s2 at the same time. In other words, an object can only be attached at most one unique session object. >>> jessica = User(name='Jessica')>>> s1.add(jessica)>>> s2.add(jessica)Traceback (most recent call last):......sqlalchemy.exc.InvalidRequestError: Object '' is already attached to session '2' (this is '3') If the session objects are retrieved from a scoped_session object, however, then we don't have such an issue since the scoped_session object maintains a registry for the same session object. >>> session_factory = sessionmaker(bind=engine)>>> session = scoped_session(session_factory)>>> s1 = session()>>> s2 = session()>>> jessica = User(name='Jessica')>>> s1.add(jessica)>>> s2.add(jessica)>>> s1 is s2True>>> s1.commit()>>> s2.query(User).filter(User.name == 'Jessica').one() Notice that s1 and s2 are the same session object since they are both retrieved from a scoped_session object who maintains a reference to the same session object. Tips So, try to avoid creating more than one normal session object. Create one object of the session and use it everywhere from declaring models to querying. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40020388', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4454028/']} | jdg_86001 |
stackexchange | llm_judgeable_groundtruth_similarity | 49518376 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Docker swarm cluster with 2 nodes on AWS. I stopped both instances and then started the swarm manager first and the worker afterwards. Before stopping the instances I had a service running with 4 replicas distributed among the manager and the worker. When I started the swarm manager node first, all replica containers started on the manager itself and are not moving to the worker at all. Please tell me how to load balance them. Isn't the swarm manager responsible for doing this when the worker starts?
Now provide the response and nothing else.
| Swarm currently (18.03) does not move or replace containers when new nodes are started, if services are in the default "replicated mode". This is by design. If I were to add a new node, I don't necessarily want a bunch of other containers stopped, and new ones created on my new node. Swarm only stops containers to "move" replicas when it has to (in replicated mode). docker service update --force <servicename> will rebalance a service across all nodes that match its requirements and constraints. Further advice: Like other container orchestrators, you need to provide spare capacity on your nodes in order to handle the workloads of any service replicas that move during outages. Your spare capacity should match the level of redundancy you plan to support. If you want to handle capacity for 2 nodes failing at once, for instance, you'd need a minimum percentage of resources on all nodes for those workloads to shift to other nodes. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/49518376', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7785493/']} | jdg_86002 |
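As a concrete sketch of the rebalancing step mentioned above (web is a placeholder service name):
# see where the replicas are currently scheduled
docker service ps web
# force Swarm to reschedule the service's tasks across all eligible nodes
docker service update --force web
# confirm the new placement
docker service ps web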
stackexchange | llm_judgeable_groundtruth_similarity | 41063331 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a few blocking functions foo and bar and I can't change those (some internal library I don't control; it talks to one or more network services). How do I use them as async? E.g. I want to do the following. results = [] for inp in inps: val = foo(inp) result = bar(val) results.append(result) This will be inefficient, as I could already call foo for the second input while I am waiting for the first, and the same for bar . How do I wrap them such that they are usable with asyncio (i.e. the new async , await syntax)? Let's assume the functions are re-entrant, i.e. it is fine to call foo again while a previous foo call is still processing. Update: Extending the answer with a reusable decorator. Click here for example. def run_in_executor(f): @functools.wraps(f) def inner(*args, **kwargs): loop = asyncio.get_running_loop() return loop.run_in_executor(None, functools.partial(f, *args, **kwargs)) return inner
Now provide the response and nothing else.
| There are (sort of) two questions here: how can I run blocking code asynchronously within a coroutine how can I run multiple async tasks at the "same" time (as an aside: asyncio is single-threaded, so it is concurrent, but not truly parallel ). Concurrent tasks can be created using the high-level asyncio.create_task or the low-level asyncio.ensure_future . Starting with 3.11, they can also be created through asyncio task groups , as pioneered by the Trio library (the creator of Trio has an excellent blog post on the subject here ). To run synchronous code, you will need to run the blocking code in an executor . Example: import concurrent.futuresimport asyncioimport timedef blocking(delay): time.sleep(delay) print('Completed.')async def non_blocking(executor): loop = asyncio.get_running_loop() # Run three of the blocking tasks concurrently. asyncio.wait will # automatically wrap these in Tasks. If you want explicit access # to the tasks themselves, use asyncio.ensure_future, or add a # "done, pending = asyncio.wait..." assignment await asyncio.wait( fs={ # Returns after delay=12 seconds loop.run_in_executor(executor, blocking, 12), # Returns after delay=14 seconds loop.run_in_executor(executor, blocking, 14), # Returns after delay=16 seconds loop.run_in_executor(executor, blocking, 16) }, return_when=asyncio.ALL_COMPLETED )executor = concurrent.futures.ThreadPoolExecutor(max_workers=5)asyncio.run(non_blocking(executor)) If you want to schedule these tasks using a for loop (as in your example), you have several different strategies, but the underlying approach is to schedule the tasks using the for loop (or list comprehension, etc), await them with asyncio.wait, and then retrieve the results. Example: done, pending = await asyncio.wait( fs=[loop.run_in_executor(executor, blocking_foo, *args) for args in inps], return_when=asyncio.ALL_COMPLETED)# Note that any errors raise during the above will be raised here; to# handle errors you will need to call task.exception() and check if it# is not None before calling task.result()results = [task.result() for task in done] | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/41063331', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/463758/']} | jdg_86003 |
stackexchange | llm_judgeable_groundtruth_similarity | 11268289 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a char that is given from fgets , and I would like to know how I can convert it into a char* . I am sure this has been posted before, but I couldn't find one that was doing quite what I wanted to do. Any answer is appreciated. EDIT:Here is the code. char *filename = "file.txt";FILE *file = fopen(filename, "r");if(file != NULL) { char line[260]; char *fl; while(fgets(line, sizeof line, file) != NULL) { // here I combine some strings with the 'line' variable. str_replace(line, "\"", "\"\""); // A custom function, but it only takes char*'s. } printf(fl); printf("\n");} else { printf(" -- *ERROR* Couldn't open file.\n");}
Now provide the response and nothing else.
| Well, first of all, line is an array of char s and so can be manipulated in much the same way as a char * (See comp.lang.c FAQs for important differences), so you don't need to worry about it. However, in case you want an answer to the general question... The & operator is what you need: char c;char *pChar = &c; However, bear in mind that pChar is a pointer to the char and will only be valid while c is in scope. That means that you can't return pChar from a function and expect it to work; it will be pointing to a stack location that you can't expect to stay valid. If you want to pass it as a return value, you will need to malloc some memory and then use the pointer to write the value of c: char c;char *pChar = malloc(sizeof(char));/* check pChar is not null */*pChar = c; | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11268289', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1043404/']} | jdg_86004 |
stackexchange | llm_judgeable_groundtruth_similarity | 25510155 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Table A name is source

ID | date    | valueS   | commonID
1  | 26.8.14 | Svalue01 | 11
2  | 21.8.14 | Svalue02 | 11
3  | 25.8.14 | Svalue03 | 11

Table B name is destination

ID | date    | valueD   | commonID
1  | 26.8.14 | Dvalue01 | 11
2  | 21.8.14 | Dvalue03 | 11
3  | 24.8.14 | Dvalue03 | 11

So currently I'm using

SELECT a.*, b.*
FROM (SELECT * FROM Source WHERE commonID = '11') a
JOIN destination b ON a.commonID = b.commonID

But this doesn't get me the desired result. I want something sorted by date, and if there is no record in both tables for a date, one side is left empty (zero). Example of how it should look:

ID | date    | valueD   | commonID |  ID | date    | valueS   | commonID
1  | 26.8.14 | Dvalue01 | 11       |  1  | 26.8.14 | Svalue01 | 11
   |         |          |          |  3  | 25.8.14 | Svalue03 | 11
3  | 24.8.14 | Dvalue03 | 11       |     |         |          |
2  | 21.8.14 | Dvalue03 | 11       |  2  | 21.8.14 | Svalue02 | 11

Is this possible, and how? Additional info:
- Using MySQL 5.5.37 (MariaDB)
- ID is primary on both
- date fields are "timestamp"
- value fields are INT
- ID fields are INT
- Engine is InnoDB

I hope I provided enough information and made the question clear. Thank you for your help.
Now provide the response and nothing else.
 | This has nothing to do with BigInteger , this is happening because your calculate() method is calling itself. Once you've gotten your call stack deep enough, you run out of memory and the JVM throws the StackOverflowError . You don't have any sort of termination condition in your calculate() method. Each time you enter the while loop, you kick off another call. In fact, it seems like you are trying to solve this both iteratively and recursively at the same time. You shouldn't need both a while loop and a recursive call in your code. It looks to me like if you just take out the call to calculate() your program will work just fine:

public static void calculate(BigInteger s, BigInteger t) {
    while (i < 4000000) {
        System.out.printf("%d\t%d\n", p, i);
        temp = p;
        p = p.add(q);
        q = temp;
        i++;
        // calculate(p, q); // you don't need this
    }
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25510155', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3979711/']} | jdg_86005
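As a side note, the same iterative structure can be sketched in Python, whose built-in integers are arbitrary precision and so play the role of BigInteger here; the bound of 10 is purely illustrative (the answer's loop runs to 4,000,000):

def fib_sequence(limit):
    p, q = 0, 1
    for i in range(limit):
        print(i, p)
        p, q = p + q, p   # same update as temp = p; p = p.add(q); q = temp
    return p

fib_sequence(10)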
stackexchange | llm_judgeable_groundtruth_similarity | 3449348 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
$string = 'Some string';
$pos = 5;
...??...
$begging // == 'Some s';
$end     // == 'tring';

What is the best way to separate a string in two at a given position?
Now provide the response and nothing else.
 | You can use substr to get the two sub-strings:

$str1 = substr($str, 0, $pos);
$str2 = substr($str, $pos);

If you omit the third parameter length , substr takes the rest of the string. But to get your result you actually need to add one to $pos :

$string = 'Some string';
$pos = 5;
$begin = substr($string, 0, $pos+1);
$end = substr($string, $pos+1); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3449348', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/159982/']} | jdg_86006
stackexchange | llm_judgeable_groundtruth_similarity | 24399 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $k$ be a field, $K/k$ a separable quadratic extension,and $D/K$ a central division algebra of dimension $r^2$ over $K$with an involution $\sigma$ of second kind(i.e. $\sigma$ acts non-trivially on $K$ and trivially on $k$).Does there exist a field extension $F/k$ such that $L:=K\otimes_k F$is a field, and $D\otimes_K L$ splits (i.e. is isomorphic to the matrix algebra $M_r(L)$ over $L$)? Motivation: Let $h\in D$ be a Hermitian element ($h^\sigma =h$), and let $G$ be the $k$-group with$G(k)=${$g\in D^\times\ | \ ghg^\sigma=h$}.I want to find a field extension $F/k$ such that $G\times_k F$is a unitary group over a field $L$ (and not over a division algebra over $L$).
Now provide the response and nothing else.
| I answer my own question. The answer is yes . Since there are no non-trivial division algebras over finite fields,we may assume that $k$ and $K$ are infinite.Let $H=${$h\in D\ |\ h^\sigma=h$} denote the $k$-space of Hermitian elements of $D$.Consider the embedding $D\hookrightarrow M_r(\bar K)$ inducedby an isomorphism $D\otimes_K \bar K\simeq M_r(\bar K)$.An element x of $D$ is called semisimple regular,if its image in $D\otimes_K \bar K\simeq M_r(\bar K)$is a semisimple matrix that has $r$ different eigenvalues.A standard argument using an isomorphism$D\otimes_k \bar K\simeq M_r(\bar K)\times M_r(\bar K)$shows that there is a dense Zariski open subset$H_{reg}$ consisting of semisimple regular elements in $H$.Clearly $H_{reg}$ contains $k$-points. Let $h\in H_{reg}$ be a semisimple regular Hermitian element.Let $L$ be the centralizer of $h$ in $D$.Since $h$ is Hermitian ($\sigma$-invariant), the $k$-algebra $L$ is $\sigma$-invariant.Since $h$ is semisimple and regular, the algebra $L$ is a commutative étale $K$-subalgebra of $D$of dimension $r$ over $K$ (we calculate in $D\otimes_K K_s$).Clearly $L$ is a field, $[L:K]=r$.Since $L\subset D$ and $[L:K]=r$, the field $L$ is a splitting field for $D$,see e.g. Scharlau, Quadratic and Hermitian Forms, Ch. 8, Thm. 5.4. Since $L\supset K$, we see that $\sigma$ acts non-trivially on $L$.Let $F$ denote the subfield of fixed points of $\sigma$ in $L$,then $[L:F]=2$ and $[F:k]=r$.Clearly $F\cap K=k$ and $FK=L$, hence $L=K\otimes_k F$.The extension $F/k$ is separable. Another version of the proof vas proposed by Uzi Vishne. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/24399', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4149/']} | jdg_86007 |
stackexchange | llm_judgeable_groundtruth_similarity | 389638 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a web application. I don't believe the technology is important. The structure is an N-tier application, shown in the image on the left. There are 3 layers: UI (MVC pattern), Business Logic Layer (BLL) and Data Access Layer (DAL). The problem I have is that my BLL is massive, as it holds both the logic and the paths the application's events call through. A typical flow through the application could be: event fired in UI, traverse to a method in the BLL, perform logic (possibly in multiple parts of the BLL), eventually to the DAL, back to the BLL (where likely more logic) and then return some value to the UI. The BLL in this example is very busy and I'm thinking how to split this out. I also have the logic and the objects combined, which I don't like. The version on the right is my effort. The Logic is still how the application flows between UI and DAL, but there are likely no properties, only methods (the majority of classes in this layer could possibly be static as they don't store any state). The Poco layer is where classes exist which do have properties (such as a Person class where there would be name, age, height etc). These would have nothing to do with the flow of the application; they only store state. The flow could be: event triggered from UI, which passes some data to the UI layer controller (MVC). This translates the raw data and converts it into the poco model. The poco model is then passed into the Logic layer (which was the BLL) and eventually to the command query layer, potentially manipulated on the way. The Command Query layer converts the POCO to a database object (which are nearly the same thing, but one is designed for persistence, the other for the front end). The item is stored and a database object is returned to the Command Query layer. It is then converted into a POCO, where it returns to the Logic layer, potentially processed further, and then finally back to the UI. The Shared Logic and Interfaces layer is where we may have persistent data, such as MaxNumberOf_X and TotalAllowed_X, and all the interfaces. Both the shared logic/interfaces and DAL are the "base" of the architecture. These know nothing about the outside world. Everything knows about poco other than the shared logic/interfaces and DAL. The flow is still very similar to the first example, but it makes each layer more responsible for one thing (be it state, flow or anything else)... but am I breaking OOP with this approach? An example to demo the Logic and Poco could be:

public class LogicClass
{
    private ICommandQueryObject cmdQuery;

    public PocoA Method1(PocoB pocoB)
    {
        return cmdQuery.Save(pocoB);
    }

    /* This has no state objects, only ways to communicate with other layers
       such as the cmdQuery. Everything else is just function calls to allow
       flow via the program */
    public PocoA Method2(PocoB pocoB)
    {
        pocoB.UpdateState("world");
        return Method1(pocoB);
    }
}

public struct PocoX
{
    public string DataA {get;set;}
    public int DataB {get;set;}
    public int DataC {get;set;}

    /* This simply returns something that is part of this class. Everything is
       self-contained to this class. It doesn't call trying to directly
       communicate with databases etc */
    public int GetValue()
    {
        return DataB * DataC;
    }

    /* This simply sets something that is part of this class. Everything is
       self-contained to this class. It doesn't call trying to directly
       communicate with databases etc */
    public void UpdateState(string input)
    {
        DataA += input;
    }
}
Now provide the response and nothing else.
| Yes, you are very likely breaking core OOP concepts. However don't feel bad, people do this all the time, it doesn't mean that your architecture is "wrong". I would say it is probably less maintainable than a proper OO design, but this is rather subjective and not your question anyway. ( Here is an article of mine criticizing the n-tier architecture in general). Reasoning : The most basic concept of OOP is that data and logic form a single unit (an object). Although this is a very simplistic and mechanical statement, even so, it is not really followed in your design (if I understand you correctly). You are quite clearly separating most of the data from most of the logic. Having stateless (static-like) methods for example is called "procedures", and are generally antithetic to OOP. There are of course always exceptions, but this design violates these things as a rule. Again, I would like to stress "violates OOP" != "wrong", so this is not necessarily a value judgement. It all depends on your architecture constraints, maintainability use-cases, requirements, etc. | {} | {'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/389638', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/119594/']} | jdg_86008 |
stackexchange | llm_judgeable_groundtruth_similarity | 2027182 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have read somewhere there are some theorems that are shown to be "unprovable". It was a while ago and I don't remember the details, and I suspect that this question might be the result of a total misunderstanding. By the way, I assume that unprovable theorem does exist. Please correct me if I am wrong and skip reading the rest. As far as I know, the mathematical statements are categorized into: undefined concepts, definitions, axioms, conjectures, lemmas and theorems. There might be some other types that I am not aware of as an amateur math learner. In this categorization, an axiom is something that cannot be built upon other things and it is too obvious to be proved (is it?). So axioms are unprovable. A theorem or lemma is actually a conjecture that has been proved. So "a theorem that cannot be proved" sounds like a paradox. I know that there are some statements that cannot be proved simply because they are wrong. I am not addressing them because they are not theorems . So what does it mean that a theorem is unprovable? Does it mean that it cannot be proved by current mathematical tools and it may be proved in the future by more advanced tools that are not discovered yet? So why don't we call it a conjecture? If it cannot be proved at all, then it is better to call it an axiom. Another question is, how can we be sure that a theorem cannot be proved ? I am assuming the description might be some high level logic that is way above my understanding. So I would appreciate if you put it into simple words. Edit- Thanks to a comment by @user21820 I just read two other interesting posts, this and this that are relevant to this question. I recommend everyone to take a look at them as well.
Now provide the response and nothing else.
| When we say that a statement is 'unprovable', we mean that it is unprovable from the axioms of a particular theory. Here's a nice concrete example. Euclid's Elements , the prototypical example of axiomatic mathematics, begins by stating the following five axioms: Any two points can be joined by a straight line Any finite straight line segment can be extended to form an infinite straight line. For any point $P$ and choice of radius $r$ we can form a circle centred at $P$ of radius $r$ All right angles are equal to one another. [The parallel postulate:] If $L$ is a straight line and $P$ is a point not on the line $L$ then there is at most one line $L'$ that passes through $P$ and is parallel to $L$ . Euclid proceeds to derive much of classical plane geometry from these five axioms. This is an important point. After these axioms have been stated, Euclid makes no further appeal to our natural intuition for the concepts of 'line', 'point' and 'angle', but only gives proofs that can be deduced from the five axioms alone. It is conceivable that you could come up with your own theory with 'points' and 'lines' that do not resemble points and lines at all. But if you could show that your 'points' and 'lines' obey the five axioms of Euclid, then you could interpret all of his theorems in your new theory. In the two thousand years following the publication of the Elements , one major question that arose was: do we need the fifth axiom? The fifth axiom - known as the parallel postulate - seems less intuitively obvious than the other four: if we could find a way of deducing the fifth axiom from the first four then it would become superfluous and we could leave it out. Mathematicians tried for millennia to find a way of deducing the parallel postulate from the first four axioms (and I'm sure there are cranks who are still trying to do so now), but were unable to. Gradually, they started to get the feeling that it might be impossible to prove the parallel postulate from the first four axioms. But how do you prove that something is unprovable? The right approach was found independently by Lobachevsky and Bolyai (and possibly Gauss) in the nineteenth century. They took the first four axioms and replaced the fifth with the following: [Hyperbolic parallel postulate:] If $L$ is a straight line and $P$ is a point not on the line $L$ then there are at least two lines that pass through $P$ and are parallel to $L$ . This axiom is clearly incompatible with the original parallel postulate. The remarkable thing is that there is a geometrical theory in which the first four axioms and the modified parallel postulate are true. The theory is called hyperbolic geometry and it deals with points and lines inscribed on the surface of a hyperboloid : In the bottom right of the image above, you can see a pair of hyperbolic parallel lines. Notice that they diverge from one another. The first four axioms hold (and you can check this), but now if $L$ is a line and $P$ is a point not on $L$ then there are infinitely many lines parallel to $L$ passing through $P$ . So the original parallel postulate does not hold. This now allows us to prove very quickly that it is impossible to prove the parallel postulate from the other four axioms: indeed, suppose there were such a proof. Since the first four axioms are true in hyperbolic geometry, our proof would induce a proof of the parallel postulate in the setting of hyperbolic geometry. But the parallel postulate is not true in hyperbolic geometry, so this is absurd. 
This is a major method for showing that statements are unprovable in various theories. Indeed, a theorem of Gödel (Gödel's completeness theorem) tells us that if a statement $s$ in the language of some axiomatic theory $\mathbb T$ is unprovable then there is always some structure that satisfies the axioms of $\mathbb T$ in which $s$ is false. So showing that $s$ is unprovable often amounts to finding such a structure. It is also possible to show that things are unprovable using a direct combinatorial argument on the axioms and deduction rules you are allowed in your logic. I won't go into that here. You're probably interested in things like Gödel's incompleteness theorem, that say that there are statements that are unprovable in a particular theory called ZFC set theory, which is often used as the foundation of all mathematics (note: there is in fact plenty of mathematics that cannot be expressed in ZFC, so all isn't really correct here). This situation is not at all different from the geometrical example I gave above: If a particular statement is neither provable nor disprovable from the axioms of all mathematics it means that there are two structures out there, both of which interpret the axioms of all mathematics , in one of which the statement is true and in the other of which the statement is false. Sometimes we have explicit examples: one important problem at the turn of the century was the Continuum Hypothesis . The problem was solved in two steps: Gödel gave a structure satisfying the axioms of ZFC set theory in which the Continuum Hypothesis was true. Later, Cohen gave a structure satisfying the axioms of ZFC set theory in which the Continuum Hypothesis was false. Between them, these results show that the Continuum Hypothesis is in fact neither provable nor disprovable in ZFC set theory. | {} | {'log_upvote_score': 9, 'links': ['https://math.stackexchange.com/questions/2027182', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/301977/']} | jdg_86009 |
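Schematically, the method described in this answer rests on the soundness of the proof system: if $\mathcal{M}_1 \models T \cup \{s\}$ and $\mathcal{M}_2 \models T \cup \{\neg s\}$ for two structures $\mathcal{M}_1, \mathcal{M}_2$, then $$T \nvdash \neg s \quad\text{and}\quad T \nvdash s,$$ since a proof of $s$ from $T$ would force $s$ to hold in every model of $T$, contradicting the existence of $\mathcal{M}_2$ (and symmetrically for $\neg s$). The hyperbolic model plays the role of $\mathcal{M}_2$ for the parallel postulate, and Gödel's and Cohen's models play these two roles for the Continuum Hypothesis.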
stackexchange | llm_judgeable_groundtruth_similarity | 27117449 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I set the collection cell view to dynamically stretch to the iPhone screen width (e.g. iPhone 5s, iPhone 6 Plus)? I tried:

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    (ResultCell *)[collectionView dequeueReusableCellWithReuseIdentifier:cellClass forIndexPath:indexPath];
    cell.bounds = CGRectMake(0, 0, self.view.bounds.size.width, 150);
    return cell;
}

That does not work; I don't see the content stretch to the right side of the screen. I have tried adding this delegate method:

- (CGSize)collectionView:(UICollectionView *)collectionView layout:(UICollectionViewLayout *)collectionViewLayout sizeForItemAtIndexPath:(NSIndexPath *)indexPath
{
    CGSize cellSize;
    cellSize.width = self.view.bounds.size.width;
    // body view height
    cellSize.height = 150;
    return cellSize;
}

I have set breakpoints in the method, but it never gets called. Why?
Now provide the response and nothing else.
 | I had the same issue with the same use case. I finally ended up doing

- (CGSize)collectionView:(UICollectionView *)collectionView layout:(UICollectionViewLayout*)collectionViewLayout sizeForItemAtIndexPath:(NSIndexPath *)indexPath
{
    return CGSizeMake(collectionView.bounds.size.width, 150);
}

This changes the size of each item (cell) of the collection view, so you can control each cell's size here. In my case I needed the cell width to be the same as my UICollectionView's, so I passed the UICollectionView's width and the specific height I desired. Hope this helps. Also, there is no need to set the cell bounds in

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath

as the size gets set via

- (CGSize)collectionView:(UICollectionView *)collectionView layout:(UICollectionViewLayout*)collectionViewLayout sizeForItemAtIndexPath:(NSIndexPath *)indexPath

Also make sure that you have attached the desired delegate and data source to the UICollectionView. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27117449', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/286802/']} | jdg_86010
stackexchange | llm_judgeable_groundtruth_similarity | 2173016 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I create a rectangle with 100px width and 100px height and then rotate it, the size of the element's "box" will have increased. With a 45-degree rotation, the size becomes about 143x143 (from 100x100). Doing something like cos(angleRad) * currentWidth seems to work for a 45-degree rotation, but for other, bigger angles it doesn't. At the moment I am doing this:

var currentRotation = object.rotation;
object.rotation = 0;
var normalizedWidth = object.width;
var normalizedHeight = object.height;
object.rotation = currentRotation;

Surely, there must be a better and more efficient way. How should I get the "normalized" width and height of a DisplayObject, i.e. its size when it has not been rotated?
Now provide the response and nothing else.
| The best approach would probably be to use the code posted in the question - i.e. to unrotate the object, check its width, and then re-rotate it. Here's why. First, simplicity . It's obvious what's being done, and why it works. Anyone coming along later should have no trouble understanding it. Second, accuracy . Out of curiosity I coded up all three suggestions currently in this thread, and I was not really surprised to find that for an arbitrarily scaled object, they give three slightly different answers. The reason for this, in a nutshell, is that Flash's rendering internals are heavily optimized, and among other things, width and height are not stored internally as floats. They're stored as "twips" (twentieths of a pixel) on the ground that further accuracy is visually irrelevant. Anyway, if the three methods give different answers, which is the most accurate? For my money, the most correct answer is what Flash thinks the width of the object is when it's unrotated, which is what the simple method gives us. Also, this method is the only one that always give answers rounded to the nearest 1/20, which I surmise (though I'm guessing) to mean it's probably equal to the value being stored internally, as opposed to being a calculated value. Finally, speed . I assume this will surprise you, but when I coded the three methods up, the simple approach was the fastest by a small margin. (Don't read too much into that - they were all very close, and if you tweak my code, a different method might edge into the lead. The point is they're very comparable.) You probably expected the simple method to be slower on the grounds that changing an object's rotation would cause lots of other things to be recalculated, incurring overhead. But all that really happens immediately when you change the rotation is that the object's transform matrix gets some new values. Flash doesn't really do much with that matrix until it's next time to draw the object on the screen. As for what math occurs when you then read the object's width/height, it's difficult to say. But it's worth noting that whatever math takes place in the simple method is done by the Player's heavily optimized internals, rather than being done in AS3 like the algebraic method. Anyway I invite you to try out the sample code, and I think you'll find that the simple straightforward method is, at the least, no slower than any other. That plus simplicity makes it the one I'd go with. 
Here's the code I used: // initvar clip:MovieClip = new MovieClip();clip.graphics.lineStyle( 10 );clip.graphics.moveTo( 12.345, 37.123 ); // arbitraryclip.graphics.lineTo( 45.678, 29.456 ); // arbitraryclip.scaleX = .87; // arbitraryclip.scaleY = 1.12; // arbitraryclip.rotation = 47.123; // arbitrary// run the testvar iterations:int = 1000000;test( method1, iterations );test( method2, iterations );test( method3, iterations );function test( fcn:Function, iter:int ) { var t0:uint = getTimer(); for (var i:int=0; i<iter; i++) { fcn( clip, i==0 ); } trace(["Elapsed time", getTimer()-t0]);}// the "simple" methodfunction method1( m:MovieClip, traceSize:Boolean ) { var rot:Number = m.rotation; m.rotation = 0; var w:Number = m.width; var h:Number = m.height; m.rotation = rot; if (traceSize) { trace([ "method 1", w, h ]); }}// the "algebraic" methodfunction method2( m:MovieClip, traceSize:Boolean ) { var r:Number = m.rotation * Math.PI/180; var c:Number = Math.abs( Math.cos( r ) ); var s:Number = Math.abs( Math.sin( r ) ); var denominator:Number = (c*c - s*s); // an optimization var w:Number = (m.width * c - m.height * s) / denominator; var h:Number = (m.height * c - m.width * s) / denominator; if (traceSize) { trace([ "method 2", w, h ]); }}// the "getBounds" methodfunction method3( m:MovieClip, traceSize:Boolean ) { var r:Rectangle = m.getBounds(m); var w:Number = r.width*m.scaleX; var h:Number = r.height*m.scaleY; if (traceSize) { trace([ "method 3", w, h ]); }} And my output: method 1,37.7,19.75Elapsed time,1416method 2,37.74191378925391,19.608455916982187Elapsed time,1703method 3,37.7145,19.768000000000004Elapsed time,1589 Surprising, eh? But there's an important lesson here about Flash development. I hereby christen Fen's Law of Flash Laziness: Whenever possible, avoid tricky math by getting the renderer to do it for you. It not only gets you done quicker, in my experience it usually results in a performance win anyway. Happy optimizing! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2173016', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/45974/']} | jdg_86011 |
stackexchange | llm_judgeable_groundtruth_similarity | 65688 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Special Relativity, the Lorentz Group is the set of matrices that preserve the metric, i.e. $\Lambda \eta \Lambda^T=\eta$. Is there any equivalent in General Relativity, like: $\Lambda g \Lambda^T=g$? (We could at least take locally $g\approx\eta$, so we recover the Lorentz group, but I don't know whether we could extend this property globally.) Why does Group Theory have much less importance in General Relativity than in QFT and particle physics?
Now provide the response and nothing else.
| As you point out, the Minkowski metric $\eta = \mathrm{diag}(-1,+1, \dots, +1)$ in $d+1$ dimensions possesses a global Lorentz symmetry. A highbrow way of saying this is that the (global) isometry group of the metric is the Lorentz group. Well, translations are also isometries of Minkowski, so the full isometry group is the Poincare group. The general notion of isometry that applies to arbitrary spacetimes is defined as follows. Let $(M,g)$ be a semi-Riemannian manifold, then any diffeomorphism $f:M\to M$ (coordinate transformation essentially) that leaves the metric invariant is called an isometry of this manifold. A closely related notion that is often useful in relation to isometries is that of Killing vectors . Intuitively a killing vector of a metric generates an "infinitesimal" isometry of a given metric. Intuitively this means that they change very little under the action of the transformations generated by the Killing vectors. Isometries and Killing vectors are a big reason for which group theory is relevant in GR. Killing vectors often satisfy vector field commutator relations that form a Lie algebra of some Lie Group. Addendum (May 28, 2013). Remarks on symmetric spaces and physics. One can show that in $D$ dimensions, a metric can posses at most $D(D+1)/2$ independent killing vectors. Any metric that has this maximum number of killing vectors is said to be maximally symmetric . Example. Consider $4$-dimensional Minkowski space $\mathbb R^{3,1}$. The isometry group of this space, the Poincare group, has dimension $10$ since there are $4$ translations, $3$ rotations, and $3$ boosts. On the other hand, in this case we have $D=4$ so that the maximum number of independent killing vectors is $4(4+1)/2 = 10$. It turns out, in fact, that each rotation, translation, and boost gives rise to an independent Killing vector field, so that Minkowski is maximally symmetric. One can in fact show that there are (up to isometry) precisely three distinct maximally symmetric spacetimes: $\mathrm{AdS}_{d+1}, \mathbb R^{d,1}, \mathrm{dS}_{d+1}$ called anti de-Sitter space, Minkowski space, and de-Sitter space respectively, and that these spacetimes all have constant negative, zero and positive curvature respectively. The isometry groups of these spacetimes are well-studied, and these spacetimes form the backbone of a lot of physics. In particular, the whole edifice of $\mathrm{AdS}/\mathrm{CFT}$ relies on the fact that $\mathrm{AdS}$ has a special isometry group that is related to the conformal group of Minkowski space. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/65688', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/21817/']} | jdg_86012 |
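To make the counting argument in the answer above explicit: a vector field $\xi$ is a Killing vector of a metric $g$ precisely when the Lie derivative of the metric along $\xi$ vanishes, i.e. $$\mathcal{L}_\xi g_{\mu\nu} = \nabla_\mu \xi_\nu + \nabla_\nu \xi_\mu = 0,$$ and the bound of $D(D+1)/2$ quoted there is the maximum number of independent solutions this equation can admit in $D$ dimensions.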
stackexchange | llm_judgeable_groundtruth_similarity | 26024908 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have added this line to my build.gradle compile group: 'org.apache.httpcomponents' , name: 'httpclient-android' , version: '4.3.5' and I want to use MultipartEntityBuilder in my code. However Android studio doesn't add the library to my code.Can anyone help me with this?
Now provide the response and nothing else.
 | If you are targeting SDK 23, add the code below to your build.gradle:

android {
    useLibrary 'org.apache.http.legacy'
}

Additional note: don't try using the Gradle versions of those artifacts. They are broken (as of 28.08.15); I spent over 5 hours trying to get them to work and they just don't.

Not working:

compile 'org.apache.httpcomponents:httpcore:4.4.1'
compile 'org.apache.httpcomponents:httpclient:4.5'

Another thing: don't use 'org.apache.httpcomponents:httpclient-android:4.3.5.1' either; it refers to API level 21. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/26024908', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3942974/']} | jdg_86013
stackexchange | llm_judgeable_groundtruth_similarity | 152993 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Could someone explain how open-drain I/O acts as bidirectional? Is it actually bidirectional, or can it act only as an input or an output at any one time?
Now provide the response and nothing else.
 | The line is read through a high-impedance input and is driven by the internal n-channel MOSFET together with the pull-up resistor. Turning the MOSFET off releases the line, so the pin acts as a passive input. Turning the MOSFET on forces the line low, but the pin still reads the line to check whether it is actually at a logic low. (The original answer included a CircuitLab schematic illustrating this arrangement.) | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/152993', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/22348/']} | jdg_86014
stackexchange | llm_judgeable_groundtruth_similarity | 29448432 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a query which filters results: public IEnumerable<FilteredViewModel> GetFilteredQuotes(){ return _context.Context.Quotes.Select(q => new FilteredViewModel { Quote = q, QuoteProductImages = q.QuoteProducts.SelectMany(qp => qp.QuoteProductImages.Where(qpi => q.User.Id == qpi.ItemOrder)) });} In the where clause I'm using the parameter q to match a property against a property from the parameter qpi .Because the filter will be used in several places I'm trying to rewrite the where clause to an expression tree which would look like something like this: public IEnumerable<FilteredViewModel> GetFilteredQuotes(){ return _context.Context.Quotes.Select(q => new FilteredViewModel { Quote = q, QuoteProductImages = q.QuoteProducts.SelectMany(qp => qp.QuoteProductImages.AsQueryable().Where(ExpressionHelper.FilterQuoteProductImagesByQuote(q))) });} In this query the parameter q is used as a parameter to the function: public static Expression<Func<QuoteProductImage, bool>> FilterQuoteProductImagesByQuote(Quote quote){ // Match the QuoteProductImage's ItemOrder to the Quote's Id} How would I implement this function? Or should I use a different approach alltogether?
Now provide the response and nothing else.
| If I understand correctly, you want to reuse an expression tree inside another one, and still allow the compiler to do all the magic of building the expression tree for you. This is actually possible, and I have done it in many occasions. The trick is to wrap your reusable part in a method call, and then before applying the query, unwrap it. First I would change the method that gets the reusable part to be a static method returning your expression (as mr100 suggested): public static Expression<Func<Quote,QuoteProductImage, bool>> FilterQuoteProductImagesByQuote() { return (q,qpi) => q.User.Id == qpi.ItemOrder; } Wrapping would be done with: public static TFunc AsQuote<TFunc>(this Expression<TFunc> exp) { throw new InvalidOperationException("This method is not intended to be invoked, just as a marker in Expression trees!"); } Then unwrapping would happen in: public static Expression<TFunc> ResolveQuotes<TFunc>(this Expression<TFunc> exp) { var visitor = new ResolveQuoteVisitor(); return (Expression<TFunc>)visitor.Visit(exp); } Obviously the most interesting part happens in the visitor.What you need to do, is find nodes that are method calls to your AsQuote method, and then replace the whole node with the body of your lambdaexpression. The lambda will be the first parameter of the method. Your resolveQuote visitor would look like: private class ResolveQuoteVisitor : ExpressionVisitor { public ResolveQuoteVisitor() { m_asQuoteMethod = typeof(Extensions).GetMethod("AsQuote").GetGenericMethodDefinition(); } MethodInfo m_asQuoteMethod; protected override Expression VisitMethodCall(MethodCallExpression node) { if (IsAsquoteMethodCall(node)) { // we cant handle here parameters, so just ignore them for now return Visit(ExtractQuotedExpression(node).Body); } return base.VisitMethodCall(node); } private bool IsAsquoteMethodCall(MethodCallExpression node) { return node.Method.IsGenericMethod && node.Method.GetGenericMethodDefinition() == m_asQuoteMethod; } private LambdaExpression ExtractQuotedExpression(MethodCallExpression node) { var quoteExpr = node.Arguments[0]; // you know this is a method call to a static method without parameters // you can do the easiest: compile it, and then call: // alternatively you could call the method with reflection // or even cache the value to the method in a static dictionary, and take the expression from there (the fastest) // the choice is up to you. as an example, i show you here the most generic solution (the first) return (LambdaExpression)Expression.Lambda(quoteExpr).Compile().DynamicInvoke(); } } Now we are already half way through. The above is enough, if you dont have any parameters on your lambda. In your case you do, so you want to actually replace the parameters of your lambda to the ones from the original expression. For this, I use the invoke expression, where I get the parameters I want to have in the lambda. First lets create a visitor, that will replace all parameters with the expressions that you specify. 
private class MultiParamReplaceVisitor : ExpressionVisitor { private readonly Dictionary<ParameterExpression, Expression> m_replacements; private readonly LambdaExpression m_expressionToVisit; public MultiParamReplaceVisitor(Expression[] parameterValues, LambdaExpression expressionToVisit) { // do null check if (parameterValues.Length != expressionToVisit.Parameters.Count) throw new ArgumentException(string.Format("The paraneter values count ({0}) does not match the expression parameter count ({1})", parameterValues.Length, expressionToVisit.Parameters.Count)); m_replacements = expressionToVisit.Parameters .Select((p, idx) => new { Idx = idx, Parameter = p }) .ToDictionary(x => x.Parameter, x => parameterValues[x.Idx]); m_expressionToVisit = expressionToVisit; } protected override Expression VisitParameter(ParameterExpression node) { Expression replacement; if (m_replacements.TryGetValue(node, out replacement)) return Visit(replacement); return base.VisitParameter(node); } public Expression Replace() { return Visit(m_expressionToVisit.Body); } } Now we can advance back to our ResolveQuoteVisitor, and hanlde invocations correctly: protected override Expression VisitInvocation(InvocationExpression node) { if (node.Expression.NodeType == ExpressionType.Call && IsAsquoteMethodCall((MethodCallExpression)node.Expression)) { var targetLambda = ExtractQuotedExpression((MethodCallExpression)node.Expression); var replaceParamsVisitor = new MultiParamReplaceVisitor(node.Arguments.ToArray(), targetLambda); return Visit(replaceParamsVisitor.Replace()); } return base.VisitInvocation(node); } This should do all the trick.You would use it as: public IEnumerable<FilteredViewModel> GetFilteredQuotes() { Expression<Func<Quote, FilteredViewModel>> selector = q => new FilteredViewModel { Quote = q, QuoteProductImages = q.QuoteProducts.SelectMany(qp => qp.QuoteProductImages.Where(qpi => ExpressionHelper.FilterQuoteProductImagesByQuote().AsQuote()(q, qpi))) }; selector = selector.ResolveQuotes(); return _context.Context.Quotes.Select(selector); } Of course I think you can make here much more reusability, with defining expressions even on a higher levels. You could even go one step further, and define a ResolveQuotes on the IQueryable, and just visit the IQueryable.Expression and creating a new IQUeryable using the original provider and the result expression, e.g: public static IQueryable<T> ResolveQuotes<T>(this IQueryable<T> query) { var visitor = new ResolveQuoteVisitor(); return query.Provider.CreateQuery<T>(visitor.Visit(query.Expression)); } This way you can inline the expression tree creation. You could even go as far, as override the default query provider for ef, and resolve quotes for every executed query, but that might go too far :P You can also see how this would translate to actually any similar reusable expression trees. I hope this helps :) Disclaimer: Remember never copy paste code from anywhere to production without understanding what it does. I didn't include much error handling here, to keep the code to minimum. I also didn't check the parts that use your classes if they would compile. I also don't take any responsability for the correctness of this code, but i think the explanation should be enough, to understand what is happening, and fix it if there are any issues with it.Also remember, that this only works for cases, when you have a method call that produces the expression. 
I will soon write a blog post based on this answer, that allows you to use more flexibility there too :P | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29448432', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2731817/']} | jdg_86015 |
stackexchange | llm_judgeable_groundtruth_similarity | 307112 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using VIM through PuTTY to work on my development server, but it gets rather difficult when I have to work on files containing characters like those mentioned in the title. Not only does it make VIM look like this Denne applikasjonen krever at Javascript er aktivert. Bruk en nettleser som støtter JavaScript eller aktiver JavaScript i din nåværende nettleser. But also it completely f*cks VIM up, making the visual caret sitting somewhere completely random relative to where text appears when I'm typing. The files I'm editing uses ISO-8859-1 encoding root@foo:~/www/pltest# file --mime-encoding index.phpindex.php: iso-8859-1 And PuTTY is set to expect ISO-8859-1 in Settings->Window->Translation. So the only issue can be Debian. If I try typing any of the above characters into the console, random appears: root@foo:~/www/pltest# ��������� Any idea how I can get Debian to allow ISO-8859-1 symbols? Solved it ( Partly )! Apparently displaying the symbols was as simple as printing out /etc/locale.alias, finding my locale in the list and setting the corresponding language code (in my case nb_NO.ISO-8859-1) to the LANG environment variable. However, I'm still not able to type the characters into the PuTTY window. Any time I try I hear a windows error 'ding'. Any ideas?
Now provide the response and nothing else.
| In order for non-ASCII characters to work, all the programs involved must use the same character encoding. The encoding needs to be set: 1) in Putty (you have done that) 2) on the remote system (usually by setting the LANG env variable to a suitable locale) Then using non-ASCII on the console should work. The settings for vim are (mostly) independent of this. vim has its own internal setting for the encoding of the files it edits, and will automatically re-encode text to be suitable for the system settings (i.e. the locale set using LANG). To address your problem: You have apparently taken care of 1) & 2), by setting the same encoding in Putty and using LANG. Now editing a file with non-ASCII characters should work in vim, provided vim knows which encoding the file has (either by autodetection, or by doing a :set fileencoding=UTF-8 etc. in vim). As to "getting a ding when typing non-ASCII characters": You might need to put set input-meta on into your ~/.inputrc (this tells the readline library and thus bash that you want to type non-ASCII stuff) - but most modern Linux distros should no longer need this. Try to see whether other programs like vim accept non-ASCII stuff - that should tell us where the problem is. A final remark: While setting the locale to use ISO 8859-1 certainly works, it is usually better in the long term to use UTF-8. UTF-8 works for just about any language, not just western languages, and is quickly becoming the standard on modern Linux distros. Note that you can still work with ISO-encoded files in vim, as vim can transcode automatically (it will often even auto-detect that the file is not in UTF-8). | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/307112', 'https://serverfault.com', 'https://serverfault.com/users/57839/']} | jdg_86016 |
stackexchange | llm_judgeable_groundtruth_similarity | 23314564 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Pandas dataframe where I have designated some of the columns as indices:

planets_dataframe.set_index(['host','name'], inplace=True)

and would like to be able to refer to these indices in a variety of contexts. Using the name of an index works fine in queries:

planets_dataframe.query('host == "PSR 1257 12"')

but it results in an error if I try to use it to get a list of the values of an index, as I could when it was a column:

planets_dataframe.name
# AttributeError: 'DataFrame' object has no attribute 'name'

or to use it to list results as I could when it was a "regular" column:

planets_dataframe.query('30 > mass > 20 and discoveryyear > 2009')['name']
# KeyError: u'no item named name'

How do I refer to the "columns" of the dataframe that I'm using as indexes?

Before set_index:

planets_dataframe.columns
# Index([u'name', u'lastupdate', u'temperature', u'semimajoraxis', u'discoveryyear', u'calculated', u'period', u'age', u'mass', u'host', u'verification', u'transittime', u'eccentricity', u'radius', u'discoverymethod', u'inclination'], dtype='object')

After set_index:

planets_dataframe.columns
# Index([u'lastupdate', u'temperature', u'semimajoraxis', u'discoveryyear', u'calculated', u'period', u'age', u'mass', u'verification', u'transittime', u'eccentricity', u'radius', u'discoverymethod', u'inclination'], dtype='object')
Now provide the response and nothing else.
| I think you have a slight misunderstanding of what indexes are. You don't just "designate" columns as indexes; that is, you don't just "tag" certain columns with info that says "this is an index". The index is a separate data structure that can hold data that aren't even present in the columns. If you do set_index , you move those columns into the index, so they no longer exist as regular columns. This is why you can no longer use them in the ways you mention: they aren't there anymore. One thing you can do is, when using set_index , pass drop=False to tell it to keep the columns as columns in addition to putting them in the index (effectively copying them to the index rather than moving them), e.g., df.set_index('SomeColumn', drop=False) . However, you should be aware that the index and column are still distinct, so for instance if you modify the column values this will not affect what's stored in the index. The upshot is that indexes aren't really columns of the DataFrame, so if you want to be able to use some data as both an index and a column, you need to duplicate it in both places. There is some discussion of this issue here . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23314564', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/656912/']} | jdg_86017 |
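A minimal sketch in Python/pandas of the drop=False workaround described in the answer above, plus reading values straight back out of the index; the column names and data here are illustrative, not taken from the question:

import pandas as pd

df = pd.DataFrame({'host': ['A', 'B'], 'name': ['x', 'y'], 'mass': [1.0, 2.0]})

# drop=False copies 'host' and 'name' into the index but also keeps them as
# regular columns, so df2['name'] continues to work after set_index.
df2 = df.set_index(['host', 'name'], drop=False)
print(df2['name'])

# Alternatively, read the values directly from the (Multi)Index:
df3 = df.set_index(['host', 'name'])               # columns moved into the index
print(df3.index.get_level_values('name'))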
stackexchange | llm_judgeable_groundtruth_similarity | 3028453 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: Disabling Back button on the browser How do I disable the back button of a browser using JavaScript?
Now provide the response and nothing else.
| You can't, and you shouldn't try to . The users browser is expected to behave a certain way, you should not break that expectation. Your site should behave like the other 99.999999% of the internet, it shouldn't suddenly change the meaning of the users toolbar buttons, their very fundamental browser operation. What are you trying to prevent? There's probably a much better solution to the problem, e.g. page cache timeout, etc. I would say form posting and not wanting to re-post is the most common problem you're trying to avoid, if that's the case, give another model a look . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3028453', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/342916/']} | jdg_86018 |
stackexchange | llm_judgeable_groundtruth_similarity | 3997078 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to paste yanked text into Vim's command line. Is it possible?
Now provide the response and nothing else.
| Yes. Hit Ctrl - R then " . If you have literal control characters in what you have yanked, use Ctrl - R , Ctrl - O , " . Here is an explanation of what you can do with registers. What you can do with registers is extraordinary, and once you know how to use them you cannot live without them. Registers are basically storage locations for strings. Vim has many registers that work in different ways: 0 (yank register: when you use y in normal mode, without specifying a register, yanked text goes there and also to the default register), 1 to 9 (shifting delete registers, when you use commands such as c or d , what has been deleted goes to register 1, what was in register 1 goes to register 2, etc. ), " (default register, also known as unnamed register. This is where the " comes in Ctrl - R , " ), a to z for your own use (capitalized A to Z are for appending to corresponding registers). _ (acts like /dev/null (Unix) or NUL (Windows), you can write to it but it's discarded and when you read from it, it is always empty), - (small delete register), / (search pattern register, updated when you look for text with / , ? , * or # for instance; you can also write to it to dynamically change the search pattern), : (stores last VimL typed command via Q or : , readonly), + and * (system clipboard registers, you can write to them to set the clipboard and read the clipboard contents from them) See :help registers for the full reference. You can, at any moment, use :registers to display the contents of all registers. Synonyms and shorthands for this command are :display , :reg and :di . In Insert or Command-line mode, Ctrl - R plus a register name, inserts the contents of this register. If you want to insert them literally (no auto-indenting, no conversion of control characters like 0x08 to backspace, etc), you can use Ctrl - R , Ctrl - O , register name.See :help i_CTRL-R and following paragraphs for more reference. But you can also do the following (and I probably forgot many uses for registers). In normal mode, hit " : p . The last command you used in vim is pasted into your buffer. Let's decompose: " is a Normal mode command that lets you select what register is to be used during the next yank, delete or paste operation. So " : selects the colon register (storing last command). Then p is a command you already know, it pastes the contents of the register. cf. :help " , :help quote_: You're editing a VimL file (for instance your .vimrc ) and would like to execute a couple of consecutive lines right now: y j : @ " Enter . Here, y j yanks current and next line (this is because j is a linewise motion but this is out of scope of this answer) into the default register (also known as the unnamed register). Then the :@ Ex command plays Ex commands stored in the register given as argument, and " is how you refer to the unnamed register. Also see the top of this answer, which is related. Do not confuse " used here (which is a register name) with the " from the previous example, which was a Normal-mode command. cf. :help :@ and :help quote_quote Insert the last search pattern into your file in Insert mode, or into the command line, with Ctrl - R , / . cf. :help quote_/ , help i_CTRL-R Corollary: Keep your search pattern but add an alternative: / Ctrl - R , / \|alternative . You've selected two words in the middle of a line in visual mode, yanked them with y , they are in the unnamed register. Now you want to open a new line just below where you are, with those two words: :pu . This is shorthand for :put " . 
The :put command, like many Ex commands, works only linewise. cf. :help :put You could also have done: :call setreg('"', @", 'V') then p . The setreg function sets the register of which the name is given as first argument (as a string), initializes it with the contents of the second argument (and you can use registers as variables with the name @x where x is the register name in VimL), and turns it into the mode specified in the third argument, V for linewise, nothing for characterwise and literal ^V for blockwise. cf. :help setreg() . The reverse functions are getreg() and getregtype() . If you have recorded a macro with qa ... q , then :echo @a will tell you what you have typed, and @a will replay the macro (probably you knew that one, very useful in order to avoid repetitive tasks) cf. :help q , help @ Corollary from the previous example: If you have 8go in the clipboard, then @+ will play the clipboard contents as a macro, and thus go to the 8th byte of your file. Actually this will work with almost every register. If your last inserted string was dd in Insert mode, then @. will (because the . register contains the last inserted string) delete a line. (Vim documentation is wrong in this regard, since it states that the registers # , % , : and . will only work with p , P , :put and Ctrl - R ). cf. :help @ Don't confuse :@ (command that plays Vim commands from a register) and @ (normal-mode command that plays normal-mode commands from a register). Notable exception is @: . The command register does not contain the initial colon neither does it contain the final carriage return. However in Normal mode, @: will do what you expect, interpreting the register as an Ex command, not trying to play it in Normal mode. So if your last command was :e , the register contains e but @: will reload the file, not go to end of word. cf. :help @: Show what you will be doing in Normal mode before running it: @='dd' Enter . As soon as you hit the = key, Vim switches to expression evaluation: as you enter an expression and hit Enter , Vim computes it, and the result acts as a register content. Of course the register = is read-only, and one-shot. Each time you start using it, you will have to enter a new expression. cf. :help quote_= Corollary: If you are editing a command, and you realize that you should need to insert into your command line some line from your current buffer: don't press Esc ! Use Ctrl - R =getline(58) Enter . After that you will be back to command line editing, but it has inserted the contents of the 58th line. Define a search pattern manually: :let @/ = 'foo' cf. :help :let Note that doing that, you needn't to escape / in the pattern. However you need to double all single quotes of course. Copy all lines beginning with foo , and afterwards all lines containing bar to clipboard, chain these commands: qaq (resets the a register storing an empty macro inside it), :g/^foo/y A , :g/bar/y A , :let @+ = @a . Using a capital register name makes the register work in append mode Better, if Q has not been remapped by mswin.vim , start Ex mode with Q , chain those “colon commands” which are actually better called “Ex commands”, and go back to Normal mode by typing visual . cf. :help :g , :help :y , :help Q Double-space your file: :g/^/put _ . This puts the contents of the black hole register (empty when reading, but writable, behaving like /dev/null ) linewise, after each line (because every line has a beginning!). Add a line containing foo before each line: :g/^/-put ='foo' . 
This is a clever use of the expression register. Here, - is a synonym for .-1 (cf. :help :range ). Since :put puts the text after the line, you have to explicitly tell it to act on the previous one. Copy the entire buffer to the system clipboard: :%y+ . cf. :help :range (for the % part) and :help :y . If you have misrecorded a macro, you can type :let @a=' Ctrl - R =replace(@a,"'","''",'g') Enter ' and edit it. This will modify the contents of the macro stored in register a , and it's shown here how you can use the expression register to do that. Another, simpler, way of modifying a macro is to paste it in a buffer ( "ap ), edit it, and put it again into the register, by selecting it and "ay . If you did dddd , you might do uu in order to undo. With p you could get the last deleted line. But actually you can also recover up to 9 deletes with the registers @1 through @9 . Even better, if you do "1P , then . in Normal mode will play "2P , and so on. cf. :help . and :help quote_number If you want to insert the current date in Insert mode: Ctrl - R =strftime('%y%m%d') Enter . cf. :help strftime() Once again, what can be confusing: :@ is a command-line command that interprets the contents of a register as vimscript and sources it @ in normal mode command that interprets the contents of a register as normal-mode keystrokes (except when you use : register, that contains last played command without the initial colon: in this case it replays the command as if you also re-typed the colon and the final return key). " in normal mode command that helps you select a register for yank, paste, delete, correct, etc. " is also a valid register name (the default, or unnamed, register) and therefore can be passed as an arguments for commands that expect register names | {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/3997078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/232417/']} | jdg_86019 |
stackexchange | llm_judgeable_groundtruth_similarity | 4434347 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $n \in \mathbb{N}$ and $\zeta_n = e^{2i\pi/n} $ we define the following subring of $\mathbb{C}$ by, $$\mathbb{Z}[\zeta_n] = \{ P(\zeta_n) : P \in \mathbb{Z}[x]\}$$ One can easily show that, $$\mathbb{Z}[\zeta_n] = \{a_0+a_1 \zeta_n+\dots+a_{n-1} \zeta_n^{n-1} : a_i \in \mathbb{Z} \}$$ My question is about the number $N_n$ of ring endomorphisms of $\mathbb{Z}[\zeta_n]$ . My work so far : If $f : \mathbb{Z}[\zeta_n] \rightarrow \mathbb{Z}[\zeta_n]$ is a ring morphism then as $\mathbb{Z}[\zeta_n]$ is a subring of $\mathbb{C}$ we have, $f(m) = m$ for all $m \in \mathbb{Z}$ . From that we easily deduce that for any element of $\mathbb{Z}[\zeta_n]$ we must have, $$f\left(\sum_{i=0}^{n-1} a_i(\zeta_n)^i \right)=\sum_{i=0}^{n-1} a_if(\zeta_n)^i$$ And more generally, $$\forall P \in \mathbb{Z}[x],f(P(\zeta_n))= P(f(\zeta_n)) $$ Moreover, as we know that $f(1) = 1$ and $\zeta_n^n = 1$ then $f(\zeta_n)^n=f(\zeta_n^n) = f(1) = 1$ . Hence, $f(\zeta_n)$ is a $n$ th root of unity, so there exists $0\leq k \leq n-1$ such that $f(\zeta_n) = \zeta_n^k$ . Therefore $N_n \leq n$ . If we only considered the number of group morphisms then the answer would be $n$ , but here having a ring morphism is far more restrictive, and we can see on simple examples that it should not be the case. Indeed, $\mathbb{Z}[\zeta_1] = \mathbb{Z}[\zeta_2] = \mathbb{Z}$ , thus $N_1 =N_2 = 1$ . Likewise, $\mathbb{Z}[\zeta_4] = \mathbb{Z}[i]$ where there are only two ring morphisms, so $N_4 = 2$ . From this observation, I deduced the following conjecture $$\forall n \in \mathbb{N}, N_n = \varphi(n) $$ where $\varphi$ denotes the Euler's totient function. My hope would be to show that $f(\zeta_n) = \zeta_n^k$ has to be a generator of the cyclic group of the $n$ th roots of unity, that is to say $k$ and $n$ are coprimes. Equivalently, I want to prove that the map, $$f : P(\zeta_n) \in \mathbb{Z}[\zeta_n] \mapsto P(\zeta_n^k) \in \mathbb{Z}[\zeta_n]$$ is only well-defined when $k \wedge n = 1$ (i.e. $f(P(\zeta_n))$ does not depend on the choice of $P$ ). From this point on, it will be easy to prove that $f$ is indeed a ring morphism. If $P = a_0+a_1x+\dots+a_p x^p \in \mathbb{Z}[x]$ and $Q = b_0+b_1x+\dots+b_q x^q \in \mathbb{Z}[x]$ are such that $P(\zeta) = Q(\zeta)$ where $\zeta$ is any generator of the $n$ th roots of unity, then we have, $$\sum_{j=0}^{n-1} \left( \sum_{i \equiv j [n]} a_i \right)\zeta^j = \sum_{j=0}^{n-1} \left(\sum_{i \equiv j [n]} b_i \right) \zeta^j$$ But somehow I have difficulties going further, does this equality for one generator implies that all coefficients involved must be equal? Is $(1,\zeta,\dots,\zeta^{n-1})$ some sort of basis? What happens when $k$ and $n$ are not coprimes, are there any simple counter examples? I possibly could have missed simpler arguments. What are you thoughts on this problem and on my reasoning? Any hint or suggestion will be welcomed. Edit - current status : Thanks to Rob Arthan's answer, we deduce that $k$ being coprime with $n$ is a necessary condition for $f$ to be a ring homomorphism which leads me to, $$N_n \leq \varphi(n) $$ I want to show that this number is reached by defining $\varphi(n)$ different ring homomorphisms. Some observations: As $\zeta_n^k$ is a primitive $n$ th root of unity then $\Bbb{Z}[\zeta_n] = \Bbb{Z}[\zeta_n^k]$ . I realised that if, $$\sum_{j=0}^{n-1} a_j \zeta_n^j = 0 $$ $\qquad$ then all $a_j$ are not necessarily zero, but this was to be expected. 
As Z Wu pointed out, the cyclotomic polynomials come in handy for this problem involving primitive roots of unity. I really like this approach, but I am not supposed to know anything about cyclotomic polynomials in my algebra course, which led me to wonder whether a more elementary approach exists.
Now provide the response and nothing else.
| From a perfect score of $80$ , for every unmarked or wrong question, $4$ or $5$ points are lost. $20$ correct $\to $ score $\in \{80\}$ . $19$ correct $\to $ score $\in \{75,76\}$ $18$ correct $\to $ score $\in \{70,71,72\}$ $17$ correct $\to $ score $\in \{65,66,67,68\}$ $16$ correct $\to $ score $\in\{60,61,62,63,64\}$ And from now on all lower numbers will be achieved. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4434347', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/745510/']} | jdg_86020 |
stackexchange | llm_judgeable_groundtruth_similarity | 1965918 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In most database designs it's habitual to set the primary key as an integer type to serve as a unique identifier in the table. Why not use a string or float for primary keys? Does this affect the accessibility of values, or in plain words, retrieval speed? Are there any specific reasons?
Now provide the response and nothing else.
| An integer will use less disk space than a string, thus giving you a smaller index file to search through. This is important for large tables where you want to have as much of the index as possible cached in RAM. Also, they can be autoincremented so you don't need to write your own routines to generate keys. You often want to have a technical key (also called a surrogate key ), a key that is only used to identify the row and not used for anything else. Most data may change sooner or later for reasons you can't control and you don't want to update it everywhere. Even such seemingly static data as a nation-assigned personal id number can change (if you get a new identity) or there may be laws prohibiting their use. A key generated by you, however, is in your own control. For such surrogate keys it's useful to have a small key that is easily generated . As for "floats as primary keys": Don't do this. A primary key should uniquely identify a row. Floats have no equality relation, which means you cannot safely compare two float values for equality. This is an inherent shortcoming of floating-point values. If you need decimals, use a fixed-point number type instead. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1965918', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/131024/']} | jdg_86021 |
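To make the last two points of the answer above concrete, here is a small, hypothetical sketch using Python's built-in sqlite3 module (the module is not mentioned in the original answer, and the table and column names users, id and email are invented for illustration). It shows an auto-generated integer surrogate key next to the changeable natural data, and the float-equality problem that makes floats unsuitable as keys.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Integer surrogate key: small, auto-generated, and never needs to change
# even if the "natural" data (here, the email) changes later.
cur.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key
        email TEXT UNIQUE NOT NULL                -- natural, changeable data
    )
""")
cur.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))
print(cur.lastrowid)  # -> 1, generated by the database

# Why floats make poor keys: equality comparisons are not reliable.
print(0.1 + 0.2 == 0.3)  # -> False, so a float "key" may never be matched again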
stackexchange | llm_judgeable_groundtruth_similarity | 434802 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $(X, \mu)$ be your favourite measure space (finite or $\sigma$ -finite if you like), let $g \in L^2$ (say, the scalar field of $L^2$ is $\mathbb{R}$ , though this probably doesn't matter). Let $\tilde g: X \to \mathbb{R}$ be a measurable function and assume that there exists a norm dense vector subspace $D$ of $L^2$ with the following property: For every $f \in D$ the function $\tilde g f$ is integrable, and we have $\int \tilde g f \, d\mu = \int g f \, d\mu$ . Question. Does it follow that $\tilde g = g$ almost everywhere? Remarks. Under the given assumptions, the propertiy $\tilde g = g$ almost everyhwere is equivalent to $\tilde g \in L^2$ . For if $\tilde g \in L^2$ , then the integral equality in the assumption extends to all $f \in L^2$ by density, and for $f := \tilde g - g$ we thus obtain that $\lVert \tilde g - g\rVert^2 = \int (\tilde g - g) f \, d\mu = 0$ . In other words, the question asks whether it is impossible to represent, on any dense subspace of $L^2$ , a continuous linear functional on $L^2$ by a non- $L^2$ -function. If $D$ is a lattice ideal in $L^2$ , meaning that $f_1 \in D$ whenever $\lvert f_1 \rvert \le \lvert f_2 \rvert$ for some $f_2 \in D$ , then the answer to the question is yes . Proof of the claim in the second bullet point. Assume that $D$ is a lattice ideal, and let $f \in D$ . There exists a measurable function $s: X \to \mathbb{R}$ of modulus $\lvert s \rvert = 1$ such that $\tilde g f s \ge 0$ . Note that this implies $\lvert \tilde g f \rvert = \tilde g f s$ . As $\lvert s f \rvert = \lvert f \rvert$ we have $sf \in D$ , so it follows that $$ \label{1}\tag{$*$} \int \lvert \tilde g f \rvert \, d\mu = \int \tilde g f s \, d\mu = \int g f s \, d\mu \le \lVert f \rVert \lVert g \rVert.$$ Now, consider a function $0 \le h \in L^2$ . By density, there exists a sequence $(f_n)$ in $D$ which converges to $h$ . By replacing each $f_n$ with $(h \land f_n) \lor 0$ (where $\land$ denotes the pointwise minimum and $\lor$ denotes the pointwise maximum of functions) we may assume that $0 \le f_n \le h$ for each $n$ . Moreover, by then replacing each $f_n$ with the pointwise maximum of the functions $f_1, \dots, f_n$ , we may also assume that the sequence $(f_n)$ is increasing. Hence, it follows from the monotone convergence theorem and from \eqref{1} that $$ \int \lvert \tilde g h \rvert \, d\mu = \lim_{n \to \infty} \int \lvert \tilde g \rvert f_n \, d \mu \le \lim_{n \to \infty} \lVert f_n \rVert \lVert g \rVert = \lVert h \rVert \lVert g \rVert.$$ For a general (i.e., not necessarily positive) function $h \in L^2$ we can apply the estimate that we just proved to $\lvert h \rvert$ and thus obtain the same estimate $$ \int \lvert \tilde g h \rvert \, d\mu \le \lVert h \rVert \lVert g \rVert$$ for even all $h \in L^2$ . But this shows that $\tilde g\in L^2$ , so $\tilde g = g$ almost everyhwere due to the remark in the first bullet point. $\square$
Now provide the response and nothing else.
| In a stable $\infty$ -category, there are no nontrivial projectives. Of course, $0$ is always projective. Now let $X$ be an arbitrary projective in some stable $C$ , $X\simeq\Sigma \Omega X$ is a simplicial colimit of things of the form $\Omega X^n$ for some $n$ 's, so that the identity $X\to X \simeq \Sigma \Omega X$ factors through some $\Omega X^n$ by projectivity. However, each of the individual maps $\Omega X^n\to \Sigma \Omega X \simeq X$ is nullhomotopic, so this implies that the identity of $X$ is nullhomotopic, so $X=0$ . What is interesting is, when $C$ has a $t$ -structure, the projective objects of $C_{\geq 0}$ as those can be nontrivial. For instance, if $R$ is a connective ring spectrum and $Mod_R$ is given the usual $t$ -structure, the projectives of $(Mod_R)_{\geq 0}$ are exactly what you'd expect: retracts of modules of the form $\bigoplus_I R$ - note that no shifts are allowed. This appears in Lurie's Higher algebra , around 7.2.2.5., 7.2.2.6., 7.2.2.7.. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/434802', 'https://mathoverflow.net', 'https://mathoverflow.net/users/102946/']} | jdg_86022 |
stackexchange | llm_judgeable_groundtruth_similarity | 402024 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried to solve this inequality: $$\frac{a}{1-a}+\frac{b}{1-b}+\frac{c}{1-c}+\frac{d}{1-d}+\frac{e}{1-e}\ge\frac{5}{4}$$ with $$a+b+c+d+e=1.$$ I am stuck at this. I don't want the full solution, a hint would be enough.
Now provide the response and nothing else.
| Hint: You have $1-a=b+c+d+e$ and $a=1-(b+c+d+e)$, so $$\dfrac{a}{1-a}=\dfrac{1-(b+c+d+e)}{b+c+d+e}=\dfrac{1}{b+c+d+e}-1=\dfrac{1}{1-a}-1.$$ Now use AM $\ge$ HM: $$\dfrac{1}{1-a}+\dfrac{1}{1-b}+\dfrac{1}{1-c}+\dfrac{1}{1-d}+\dfrac{1}{1-e} \ge \dfrac{5^2}{1+1+1+1+1-(a+b+c+d+e)}. $$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/402024', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/74812/']} | jdg_86023 |
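For completeness (the asker only wanted a hint, so this is an editorial addendum rather than part of the answer above), the hint finishes as follows:
$$\frac{a}{1-a}+\cdots+\frac{e}{1-e}=\left(\frac{1}{1-a}-1\right)+\cdots+\left(\frac{1}{1-e}-1\right)\ge\frac{25}{(1-a)+\cdots+(1-e)}-5=\frac{25}{5-1}-5=\frac{25}{4}-5=\frac{5}{4}.$$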
stackexchange | llm_judgeable_groundtruth_similarity | 14862612 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am quite new to Magento. As far as I understand, I need to create configurable products to manage the different sizes of a product. I want to show the stock qty of each size separately on the product view page. Is this possible? My attribute name is 'size'. I have used the following code to get the stock qty. However, it is unable to get the stock qty of an individual simple product of a configurable product.
<?php
$__manStock = $_product->getStockItem()->getManageStock();
$__invAmt = (int)Mage::getModel('cataloginventory/stock_item')->loadByProduct($_product)->getQty('small');
if ($__manStock > 0) { echo $this->__("$__invAmt"); }
?>
Now provide the response and nothing else.
| $_product is your configurable product. To get all its simple products, use: $_product->getTypeInstance(true)->getUsedProducts(null, $_product); So you might have something like:
foreach ($_product->getTypeInstance(true)->getUsedProducts(null, $_product) as $simple) {
    $stock = Mage::getModel('cataloginventory/stock_item')->loadByProduct($simple)->getQty();
    echo $simple->getName()." with size ".$simple->getSize()." has a stock of $stock";
    echo '<br/>';
}
I'll let you adapt this to your precise needs; ask questions if needed. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14862612', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2069757/']} | jdg_86024 |
stackexchange | llm_judgeable_groundtruth_similarity | 136471 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This seems like such an obvious and elementary-level question for someone who has taken high school chemistry, but yet I'm having a difficult time solving it. I need to find the right amount of $\ce{Mg(OH)2}$ to have a $\pu{250 mL}$ solution with a $\mathrm{pH}$ of $8$ and $9.$ I first started by trying to determine the concentration of $\ce{Mg(OH)2}$ needed to obtain a solution with a certain pH. Since $\ce{Mg(OH)2}$ is a weak base, we must use a RICE table to determine its concentration at equilibrium. $$\begin{array}{lcccc} & \ce{Mg(OH)2 &<=> & Mg^2+ &+ & 2OH-} \\\text{I} & x & & 0 && 0 & \\\text{C} & x & & +x && +2x \\\text{E} & x & & x && 2x\end{array}$$ However, the equation involving $K_\mathrm{sp}$ only involves the $x$ values in the RICE table above, not the actual concentration. How would I calculate the $\mathrm{pH}$ level for $\ce{Mg(OH)2}$ with varying molarity? I thought it was a weak base, so I'd have to use a RICE table.
Now provide the response and nothing else.
| There is an easy way to do what OP wants, assuming OP wants to prepare solutions at $\pu{25 ^\circ C}$ . So, OP can prepare a saturated solution of $\ce{Mg(OH)2}$ : $$\ce{Mg(OH)2_{(s)} <=> Mg^2+_{(aq)} + 2OH-_{(aq)}}$$ Since $K_\mathrm{sp}$ of $\ce{Mg(OH)2}$ is $\pu{5.61 \times 10^{-12} M3}$ , you can find the solubility of $\ce{Mg(OH)2}$ at $\pu{25 ^\circ C}$ ( $s$ ): $$K_\mathrm{sp} = s \times (2s)^2 = 4s^3 \ \Rightarrow \ s = \left(\frac{K_\mathrm{sp}}{4}\right)^{\frac13} = \left(\frac{\pu{5.61 \times 10^{-12} M3}}{4}\right)^{\frac13} = \pu{1.12 \times 10^{-4} M}$$ Thus, $[\ce{Mg^2+}] = \pu{1.12 \times 10^{-4} M}$ and $[\ce{OH-}] = 2 \times \pu{1.12 \times 10^{-4} M} = \pu{2.24 \times 10^{-4} M}$ . $$\therefore \ \mathrm{pOH} = -\log {[\ce{OH-}]} = -\log (\pu{2.24 \times 10^{-4} M}) = 3.65$$ Thus, $\mathrm{pH} = 14.00 - 3.65 = 10.35$ . This means the $\mathrm{pH}$ of a saturated $\ce{Mg(OH)2}$ solution is a little higher than what OP anticipated. Dilution of the saturated solution with deionized water does the trick, as demonstrated in the following example: Suppose you want to make $\pu{250 mL}$ of $\ce{Mg(OH)2}$ solution with $\mathrm{pH} = 8.00$ . Thus, $\mathrm{pOH} = 14.00 - 8.00 = 6.00$ , and $[\ce{OH-}] = \pu{1.00 \times 10^{-6} M}$ . For the dilution calculation, you can use the $c_1V_1 = c_2V_2$ equation. In OP's case, $c_1 = \pu{2.24 \times 10^{-4} M}$ , $c_2 = \pu{1.00 \times 10^{-6} M}$ , and $V_2 = \pu{250 mL}$ , the volume of the anticipated solution with $\mathrm{pH} = 8.00$ . The unknown $V_1$ is the volume of saturated $\ce{Mg(OH)2}$ solution ( $\mathrm{pH} = 10.35$ ) that needs to be diluted: $$c_1V_1 = c_2V_2 \ \Rightarrow \ V_1 = \frac{c_2V_2}{c_1} = \frac{\pu{1.00 \times 10^{-6} M} \times \pu{250 mL}}{\pu{2.24 \times 10^{-4} M}} = \pu{1.12 mL}$$ Thus, you can measure $\pu{1.12 mL}$ of saturated $\ce{Mg(OH)2}$ solution into a $\pu{250 mL}$ volumetric flask and dilute it with DI water to the $\pu{250 mL}$ line mark. After shaking well to get a homogeneous solution, its $\mathrm{pH}$ should be the anticipated $8$ (or close to $8$ , depending on the accuracy of the measurements). Note: It would be better if you measure the $\mathrm{pH}$ of the saturated solution before doing the calculations. That's because factors such as temperature influence the real-time $\mathrm{pH}$ . | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/136471', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/95828/']} | jdg_86025 |
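As a quick numeric sanity check of the arithmetic in the answer above, here is a small sketch (not part of the original answer; the Ksp value and target pH are taken from it, the variable names are just for illustration):

import math

K_sp = 5.61e-12          # Ksp of Mg(OH)2 at 25 C, value used in the answer
s = (K_sp / 4) ** (1/3)  # solubility from Ksp = 4*s^3
oh = 2 * s               # [OH-] in the saturated solution
pOH = -math.log10(oh)
pH_saturated = 14.0 - pOH
print(round(s, 6), round(pH_saturated, 2))   # ~1.12e-4 M and ~10.35

# dilution c1*V1 = c2*V2 to reach pH 8.00 in 250 mL
c1 = oh                    # ~2.24e-4 M
c2 = 10 ** -(14.0 - 8.0)   # 1.0e-6 M target [OH-]
V2 = 250.0                 # mL
V1 = c2 * V2 / c1
print(round(V1, 2))        # ~1.12 mL of saturated solution to dilute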
stackexchange | llm_judgeable_groundtruth_similarity | 18022705 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have always been unclear on why we need to use an inflater in Android. Why is it used in a ListView for custom layouts (like below)? What is an Inflater? What is the advantage of using an Inflater?
public class MobileArrayAdapter extends ArrayAdapter<String> {
    private final Context context;
    private final String[] values;

    public MobileArrayAdapter(Context context, String[] values) {
        super(context, R.layout.list_mobile, values);
        this.context = context;
        this.values = values;
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        LayoutInflater inflater = (LayoutInflater) context
                .getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        View rowView = inflater.inflate(R.layout.list_mobile, parent, false);
        TextView textView = (TextView) rowView.findViewById(R.id.label);
        ImageView imageView = (ImageView) rowView.findViewById(R.id.logo);
        textView.setText(values[position]);
Thanks,
Now provide the response and nothing else.
| What is an Inflater ? To summarize what the LayoutInflater Documentation says... A LayoutInflater is one of the Android System Services that is responsible for taking your XML files that define a layout, and converting them into View objects. The OS then uses these view objects to draw the screen. I always had ambiguity on why we need to use inflater in android, Why are they used in android ListView for a custom layout ? Typically, you don't ever need to directly use a LayoutInflater. Android does most of the layout inflation for you when you call setContentView() in the onCreate() method of your activity. So you, as the programmer, are responsible for making sure the views are inflated . Now you want to inflate views in the context of a ListView. The Adapter class can do the inflation for you if you do not want to customize each item. But if you want to customize the views shown in a list, you will have to manually inflate each view with the LayoutInflater, since there is no other existing method you can use. What is the advantage of using Inflater ? There is no advantage to using it. You are required to use a LayoutInflater in some shape or form to inflate your static XML layouts. Alternatively, you could create views dynamically with java code. However, you would need to call methods to set each property for the view by hand. In my opinion, it is easier to use the XML/inflation process. In addition, Android pre-processes your XML files at build time, so this results in a faster execution time. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18022705', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_86026 |
stackexchange | llm_judgeable_groundtruth_similarity | 556364 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to run a shell script while ignoring some of the commands, because they need privileges. Commands like insmod . So I filter the script with this, and it works (those commands are replaced by true ): sed -e 's/command1/true/g' -e 's/command2/true/g' -e 's/command3/true/g' ... -e 's/commandN/true/g' Is there a more concise way of expressing that? With simple characters, to change 'a', 'b', and 'm' into an 'x' I could do something like: sed -e 's/[abm]/x/g' or tr abm x (or its tr abm '[x*]' POSIX equivalent). But with strings?
Now provide the response and nothing else.
| If your sed is able to use the non-standard -E option to understand extended regular expressions, you may use sed -E 's/string1|string2|string3/true/g' The alternation with | is an extended regular expression feature not supported by basic regular expression of the type that sed usually supports. The sed command above would replace any of the three given string even if they occurred as substrings of other strings (such as string1 in string10 ). To prevent that, also match word boundaries on either side: sed -E 's/\<(string1|string2|string3)\>/true/g' On macOS, you may want to use [[:<:]] and [[:>:]] in place of \< and \> to specify word boundaries. As an aside, your s/[a|b|m]/x/g expression (now corrected in the question) would not only replace a , b and m with x but would also replace any | in the text with x . This is the same as s/[abm|]/x/g or y/abm|/xxxx/ , or the tr commands tr 'abm|' 'xxxx' and tr 'abm|' '[x*4]' . | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/556364', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/354526/']} | jdg_86027 |
stackexchange | llm_judgeable_groundtruth_similarity | 21589 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is this correct? I thought it would be but when I entered it into wolfram alpha, I got a different answer. $$\int (\cos^3x)(\sin^2x)dx = \int(\cos x)(\cos^2x)(\sin^2x)dx= \int (\cos x)(1-\sin^2x)(\sin^2x)dx.$$ let $u = \sin x$, $du = \cos xdx$ $$\int(1-u^2)u^2du = \int(u^2-u^4)du= \frac{u^3}{3} - \frac{u^5}{5} +C$$ Plugging in back $u$, we get $\displaystyle\frac{\sin^3 x}{3} - \frac{\sin^5 x}{5}$ + C
Now provide the response and nothing else.
| Your integration is correct; the other answer you saw is just an equivalent form of the same antiderivative. Indeed, \begin{align*}\frac{\sin^3 (x)}{3} - \frac{\sin^5 (x)}{5} &= \sin^3 (x) (\frac{1}{3} - \frac{\sin^2 (x)}{5});\\\cos(2x) &= 1 - 2 \sin^2(x)\\\sin^2(x) &= \frac{1- \cos(2x)}{2}.\end{align*} Hence, we get \begin{align*}\sin^3 (x) \left(\frac{1}{3} - \frac{\sin^2 (x)}{5}\right) &= \sin^3 (x) \left(\frac{1}{3} - \frac{1 - \cos(2x)}{10}\right)\\& = \sin^3(x) \left(\frac{10 - 3 + 3 \cos(2x)}{30}\right)\\& = \frac{\sin^3(x)}{30} (3 \cos(2x)+7)\end{align*} so the two expressions differ only by algebraic rearrangement (and the constant of integration). | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/21589', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/6331/']} | jdg_86028 |
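As an extra check (not part of the original answer), differentiating the asker's antiderivative recovers the integrand, confirming the computation:
$$\frac{d}{dx}\left(\frac{\sin^3 x}{3}-\frac{\sin^5 x}{5}\right)=\sin^2 x\cos x-\sin^4 x\cos x=\cos x\,\sin^2 x\,(1-\sin^2 x)=\cos^3 x\,\sin^2 x.$$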
stackexchange | llm_judgeable_groundtruth_similarity | 162940 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a normally distributed random variable $X$. I sample two points $x_1$ and $x_2$, and I am interested in the absolute difference between these two sampled points: $d=|x_2-x_1|$. I repeat this $N$ times collecting $N$ pairs of points, giving me absolute differences of $d_1, d_2, ... d_N$. Is it possible to say anything about the distribution of $d$? Specifically, I would like to construct a confidence interval on $d$, i.e. "with $1-\alpha$ confidence, $d$ falls within this interval."
Now provide the response and nothing else.
| The difference of two independent normals is itself normal: here the variance doubles and the mean is zero. The absolute difference therefore behaves like the absolute value of a zero-mean normal, i.e. the density function is $f(d)=\frac{1}{\sqrt{\pi}\sigma}e^{-\frac{d^2}{4\sigma^2}}$ Obviously, the domain is $d\in[0,\infty)$. As @A.Donda pointed out, it's a half-normal distribution, with the variance plugged in properly. You observe $X$, which means that you can estimate its variance $\hat\sigma^2$ with the usual estimator $\hat\sigma^2=\frac{1}{N-1}\sum_{i=1}^N (x_i-\bar x)^2$, where the mean estimator is the usual $\bar x =\frac{1}{N}\sum_{i=1}^N x_i$. The variance of $d$ can then be computed easily from half-normal distribution properties: $\widehat{\operatorname{Var}}(d)=2\hat\sigma^2\left(1-\frac{2}{\pi}\right)$ | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/162940', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/83134/']} | jdg_86029 |
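To address the interval part of the question directly, here is a small sketch (not from the original answer) using scipy's half-normal distribution; the function name is invented, and it assumes an estimate of the standard deviation of X is available:

import numpy as np
from scipy import stats

def halfnormal_interval(sigma_hat, alpha=0.05):
    """Central (1 - alpha) interval for d = |X2 - X1| when X ~ N(mu, sigma^2).

    X2 - X1 is N(0, 2 sigma^2), so d is half-normal with scale sigma*sqrt(2).
    """
    scale = sigma_hat * np.sqrt(2.0)
    dist = stats.halfnorm(scale=scale)
    return dist.ppf(alpha / 2), dist.ppf(1 - alpha / 2)

# example: estimate sigma from the observed x values, then get a 95% interval
x = np.random.normal(loc=10.0, scale=3.0, size=1000)
lo, hi = halfnormal_interval(x.std(ddof=1))
print(lo, hi)  # d falls in [lo, hi] with ~95% probability (ignoring estimation error)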
stackexchange | llm_judgeable_groundtruth_similarity | 516443 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I found this wonderful NSManagedObjectID. This would be very good for referencing an Entity/NSManagedObject/NSEntityDescription, right? Let's get an ID from an entity:
NSEntityDescription *entity = [self newEntity];
NSManagedObjectID *objID = [entity objectID];
So... any idea how to get this objID into a string? Or better: NSData. Actually something to be able to save it to the NSUserDefaults. ;-) Btw: NSFetchRequest doesn't want to work in my case. I use a modified version of this example: answer of an old question .
Now provide the response and nothing else.
| To get an archived URI corresponding to a NSManagedObject 's objectID : NSManagedObject* myMO;...NSURL *uri = [[myMO objectID] URIRepresentation];NSData *uriData = [NSKeyedArchiver archivedDataWithRootObject:uri]; In order to get back to an instance of the original managed object, you need a CoreData stack with the persistent store holding that instance already added to the NSPersistentStoreCoordinator . Then: NSData *uriData;NSPersistentStoreCoordinator *psc;NSManagedObjectContext *moc; //with moc.persistentStoreCoordinator = psc....NSURL *uri = [NSKeyedUnarchiver unarchiveObjectWithData:uriData];NSManagedObjectID *moID = [psc managedObjectIDForURIRepresentation:uri];NSManagedObject *myMO = [moc objectWithID:moID]; | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/516443', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/59518/']} | jdg_86030 |
stackexchange | llm_judgeable_groundtruth_similarity | 146065 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It is always stated that it is hard to store application objects in relational databases - the object-relational impedance mismatch - and that this is why document databases are better. However, is there really an impedance mismatch? An object has a key (albeit one that may be hidden away by the runtime as a pointer to memory), a set of values, and foreign keys to other objects. Objects are as much made up of tables as they are of documents; neither really fits. I can see a use for databases that model the data into specific shapes for scenarios in the application - e.g. to speed up database lookups and avoid joins, etc. - but wouldn't it be better to keep the data as normalized as possible at the core, and transform it as required?
Now provide the response and nothing else.
| There is definitely object-relational impedance. I had to deal with this for years on a product that kept its data on a database but was OO when it ran. As a vastly simplified example, consider three DB tables. Two are just lists, one containing people and one containing jobs . The third is a link between people and jobs , with a column for each. (And all three have lots of columns for lots of subsidiary data.) In the UI, a user wants to look at the data in terms of a job and the people working on it. A look at the DB, if it's large, is going to be worthless--too much disorganized info. But in the UI, a job object can serve as the basis for good display. Even if the user has a list of a million jobs, he can scroll to the one he wants and see all the relevant info: who's working on it and when and how hard and with what skills, etc. So on the DB you have 3 tables. In the OO program, you have job objects, people objects and the links between them. The jobs and people lists relate directly to the DB tables, but there is no list of links . The links are part of, or at any rate tied to, the job objects. To create a link table from the OO data, you have to work through all the jobs. To create a proper jobs object, you have to read through the link table and do some awkward rearranging of data. This is the essence of the impedance. This example is simple in order to be easily understood, but it drastically understates the real-world difficulty in translating between the two concepts. OO is much faster. It's easier to use once it has been set up . However, it forces a certain organization on the data. If this organization is not done right, or is not adequate, there can be trouble. For instance, in my example the key object is the job . If you want to know what work an individual is doing, it's no good. In this simple example, it is obvious that there needs to be a people object that contains all the work a person is doing, but in a real system the number of possible structures will approach infinity; you have to choose to do that limited number that will let the program do its job. OO bogs when the random-access aspect of memory becomes slow. OO depends on a lot of direct links between bits of data. How is a large OO database supposed to work? A Document Database makes more sense, but it will either have to really be an OO DB or duplicate vast quantities of data: note that a job object includes people objects, and those people objects include the job object. Relational data is much slower, but it is without prejudices. A bit of SQL and you can look at the data any way you want. The other, overwhelming, advantage it has is that slow, not-so-random memory--which will bring OO to a halt--doesn't bother it. So if you have terrabytes of data, you want your data stored relationally, not object-orientedly. So I think we're stuck with relational databases and object-oriented programs and doing a lot of gyrations to get data from the one to the other and back again. Also: I have recently done some work on a system that uses in-memory relational data. It is very slow, but not so slow as to be seriously annoying (it's not a computer game!) and the original developers, while they had to design a DB, got to skip worrying about an OO design. I considered OO'ing a bit of it, but that meant a lot of programming versus writing just a bit of fancy SQL (LINQ, actually). It clued me in that relational could sometimes compete with OO on its own turf. It also points out that OO forces organization on the data. 
If the original developers had known what I needed to do and used OO to do it, the program would have run faster and I could have modified it with a lot less effort. (SQL is tricky.) But if they hadn't OO'ed it the way I needed it done, I'd have been right where I was but without the in-memory relational DB. Summary: OO and relational are two different things, and converting between them involves impedance--it's a pain. Each has its uses, and I think we're often going to be stuck using both. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/146065', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/-1/']} | jdg_86031 |
stackexchange | llm_judgeable_groundtruth_similarity | 8315974 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my MVVM ViewModel I have a field like this:
public int Delta { get; private set; }
However, when I update it like this: Delta = newValue; the UI is not refreshed. I was thinking that databinding would do that for me. For example, I can declare a collection as an ObservableCollection and then databinding will work. However, there is no ObservableInt, so how do I tell the View that it needs to be refreshed? Probably I should raise some "notify property changed" event or something?
Now provide the response and nothing else.
| You have two choices: Implement the INotifyPropertyChanged interface on your class. Inherit from DependencyObject and implement Delta as a DependencyProperty. The simplest option is #1. You can implement the INotifyPropertyChanged interface on your class quite easily:
using System.ComponentModel;

public class YourClass : INotifyPropertyChanged
{
    private int _delta;

    public int Delta
    {
        get { return _delta; }
        set
        {
            _delta = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Delta)));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}
You can read more about using and implementing dependency properties on MSDN. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8315974', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/93647/']} | jdg_86032 |
stackexchange | llm_judgeable_groundtruth_similarity | 24125028 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reading The Swift Programming Language, and in the Simple Values section it says: “Use let to make a constant and var to make a variable. The value of a constant doesn’t need to be known at compile time, but you must assign it a value exactly once.” So I think I can do this:
let aConstant: Int
aConstant = 5
But I get "let declarations require an initializer expression"!! Why is that? What do they mean by "The value of a constant doesn’t need to be known at compile time"?
Now provide the response and nothing else.
| From the Swift Language Reference : When a constant is declared at global scope, it must be initialized with a value. You can only defer initialization of a constant in classes/structs, where you can choose to initialize it in the initializer of the class/struct. The meaning of "The value of a constant doesn’t need to be known at compile time" refers to the value of the constant. In C/Objective-C a global constant needs to be assigned a value that can be computed by the compiler (usually a literal like 10 or @"Hello" ). The following would not be allowed in Objective-C: static const int foo = 10; // OKstatic const int bar = calculate_bar(); // Error: Initializer element is not a compile-time constant In Swift you don't have this restriction: let foo = 10 // OKlet bar = calculateBar(); // OK Edit: The following statement in the original answer is not correct: You can only defer initialization of a constant in classes/structs, where you can choose to initialize it in the initializer of the class/struct. The only place where you cannot defer is in global scope (i.e. top level let expressions). While it's true that you can defer initialization in a class/struct, that's not the only place. The following is also legal for example: func foo() { let bar: Int bar = 1 } | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24125028', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1418457/']} | jdg_86033 |
stackexchange | llm_judgeable_groundtruth_similarity | 57167826 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to draw an animated map using plotly_express .here is a sample code import plotly.express as pxgapminder = px.data.gapminder()fig = px.choropleth(gapminder, locations="iso_alpha", color="lifeExp", hover_name="country", animation_frame="year", range_color=[20,80], color_continuous_scale='RdYlGn')fig.show() This shows the scale from Red to Green. But I want to reverse itI want it to start from green to a maximum of red. This is done simply using Matplotlib by adding '_r' at the end of the color scale name, i.e., to be color_continuous_scale='RdYlGn_r' , but this does not work with plotly_express.In the documentation , it is written that we can express the normal color scale in the methods form color_continuous_scale=px.colors.diverging.RdYlGn , this works too. However, when I add the .reverse method, i.e. color_continuous_scale=px.colors.diverging.RdYlGn.reverse it gives the following error: TypeError Traceback (most recent call last)<ipython-input-63-9103eb8a4cd9> in <module> 5 range_color=[4, 23], 6 title='Fasting durations (h) for the world througout the year',----> 7 color_continuous_scale=px.colors.diverging.RdYlGn.reverse)#'RdYlGn') 8 fig2.show()~\Anaconda3\lib\site-packages\plotly\express\_chart_types.py in choropleth(data_frame, lat, lon, locations, locationmode, color, hover_name, hover_data, size, animation_frame, animation_group, category_orders, labels, color_continuous_scale, range_color, color_continuous_midpoint, size_max, projection, scope, center, title, template, width, height) 822 args=locals(), 823 constructor=go.Choropleth,--> 824 trace_patch=dict(locationmode=locationmode), 825 ) 826 ~\Anaconda3\lib\site-packages\plotly\express\_core.py in make_figure(args, constructor, trace_patch, layout_patch) 1007 colorvar = "z" if constructor == go.Histogram2d else "color" 1008 range_color = args["range_color"] or [None, None]-> 1009 d = len(args["color_continuous_scale"]) - 1 1010 1011 colorscale_validator = ColorscaleValidator("colorscale", "make_figure")TypeError: object of type 'builtin_function_or_method' has no len() What is the problem, and how can I override this error and applies the reverse colormap?
Now provide the response and nothing else.
| The colorscales in px.colors are all simple lists, meaning you can use color_continuous_scale=px.colors.diverging.RdYlGn[::-1] (I should add that the reason you're seeing an error like this is that since px.colors.diverging.RdYlGn is a list, px.colors.diverging.RdYlGn.reverse is defined, so you're currently trying to pass in a function to an argument that expects a string or a list) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57167826', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5820024/']} | jdg_86034 |
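Putting the suggestion from the answer above back into the asker's own example (a sketch; it simply combines the question's code with the reversed list-based colorscale):

import plotly.express as px

gapminder = px.data.gapminder()
fig = px.choropleth(
    gapminder,
    locations="iso_alpha",
    color="lifeExp",
    hover_name="country",
    animation_frame="year",
    range_color=[20, 80],
    # reverse the list-based colorscale so low values are red-free (green) and high values are red
    color_continuous_scale=px.colors.diverging.RdYlGn[::-1],
)
fig.show()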
stackexchange | llm_judgeable_groundtruth_similarity | 11860730 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently trying to match with this regex: ^(?<Date>(0[1-9]|[12][0-9]|3[01])[/.](0[1-9]|1[012])[/.](19|20)\d\d) (?<Time>(?:(?:(\d+):)?(\d+):)?(\d+)): (?<Name>.*) (?<IP>\([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}:[0-9]?[0-9]?[0-9]?[0-9]?[0-9]?[0-9]?\)) (?<GUID>.*) - (?<Line>#([0-9]?[0-9]?[0-9]?[0-9]?)) (?<Code>.*)\n(?<Code2>(\n|.)*) With this as the input: 08.08.2012 01:47:40: ferngully (98.169.247.118:2304) 6ff0e06c6d5434a953b2780f852bf762 - #17 "moveIn", "land", "addWPCur", "animate", "setDate", "playmusic", "playsound", "switchmove", 08.08.2012 01:47:40: Thugnificent (72.86.3.240:50218) 696089009a4e885fe5ec0323d4537253 - #0 f)')}; BIS_ALICE_fnc_houseEffects = compile preprocessFileLineNumbers '\ca\modules\alice\d08.08.2012 01:36:28: [SOER]MrWolf (58.110.48.174:2304) 464f91fcefe7f1014979c5140a0f1649 - #12 airing..."] call d_fnc_VehicleChat};_object setDamage 0;sleep d_reload_time_factor;if (!alive _ob This issue is that it returns only 1 match, it needs to be able to match that as 3 matches. The goal at the end is to be able to pull the code in between each of those "headers" which include the date, time, name, ip, guid, line and then the multiline code. EDIT: I resolved the issue, I need to use a look ahead on the second date. Current regex: ^(?<Date>(0[1-9]|[12][0-9]|3[01])[/.](0[1-9]|1[012])[/.](19|20)\d\d?) (?<Time>(?:(?:(\d+):)?(\d+):)?(\d+)?): (?<Name>.*?) (?<IP>\([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}:.*?\)?) (?<GUID>[0-9a-z]{32}?) - (?<Line>#[0-9]*?) (?<Code>(\n|.)*?)(?=(0[1-9]|[12][0-9]|3[01])[/.](0[1-9]|1[012])[/.](19|20)\d\d)
Now provide the response and nothing else.
| Performance has nothing to do with the number of fields an object has. Memory consumption is of course potentially affected, but if the variables are needed, you can't do much about it.Don't worry too much about performance. Make your code simple, readable, maintainable, tested. Then, if you notice performance problems, measure and profile to see where they come from, and optimize where needed. Maintainability and readability is affected by the number of fields an object has though. 40 to 50 fields is quite a lot of fields, and is probably an indication that your classes do too much on their own, and have too many responsibilities. Refactoring them to many, smaller subclasses, and using composition would probably be a good idea. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11860730', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1584083/']} | jdg_86035 |
stackexchange | llm_judgeable_groundtruth_similarity | 34446364 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have add supporting jars and jdbc drivers to my project but still i am getting below exception getting could not find resultset exceptionError performing load command :getting sqlgrammerexception my configuration file <?xml version="1.0" encoding="UTF-8"?><!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd"><hibernate-configuration> <session-factory> <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property> <property name="hibernate.connection.password">1234</property> <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/test</property> <property name="hibernate.connection.username">root</property> <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property> <mapping class="com.util.Product"/> </session-factory></hibernate-configurationentity class@Entity@Table(name="productdetails")public class Product implements Serializable { @Id @Column(name="product-id") private int id; @Column(name="product-name") private String name; @Column(name="product-description") private String description; @Column(name="product-price") private float price; public Product() { super(); } public Product(int id, String name, String description, float price) { super(); this.id = id; this.name = name; this.description = description; this.price = price; } public int getId() { return id; } public void setId(int id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } public float getPrice() { return price; } public void setPrice(float price) { this.price = price; }}public class Executingclass { public static void main(String args[]) { SessionFactory sessionFactory=Hibernateutil.getSessionFactory(); Session session=sessionFactory.openSession(); Product product=(Product)session.get(Product.class, 1); System.out.println(product.getId()); System.out.println(product.getName()); System.out.println(product.getDescription()); System.out.println(product.getPrice()); }}public class Hibernateutil { static SessionFactory sessionFactory=null; static{ Configuration configuration=new Configuration(); configuration.configure(); ServiceRegistry serviceRegistry=new StandardServiceRegistryBuilder().applySettings(configuration.getProperties()).build(); sessionFactory=configuration.buildSessionFactory(serviceRegistry); } public static SessionFactory getSessionFactory() { return sessionFactory; }} Error: INFO: HHH000327: Error performing load command : org.hibernate.exception.SQLGrammarException: could not extract ResultSetException in thread "main" org.hibernate.exception.SQLGrammarException: could not extract ResultSet at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:80) at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:49) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:126) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:112) at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:91) at org.hibernate.loader.plan.exec.internal.AbstractLoadPlanBasedLoader.getResultSet(AbstractLoadPlanBasedLoader.java:449) at 
org.hibernate.loader.plan.exec.internal.AbstractLoadPlanBasedLoader.executeQueryStatement(AbstractLoadPlanBasedLoader.java:202) at org.hibernate.loader.plan.exec.internal.AbstractLoadPlanBasedLoader.executeLoad(AbstractLoadPlanBasedLoader.java:137) at org.hibernate.loader.plan.exec.internal.AbstractLoadPlanBasedLoader.executeLoad(AbstractLoadPlanBasedLoader.java:102) at org.hibernate.loader.entity.plan.AbstractLoadPlanBasedEntityLoader.load(AbstractLoadPlanBasedEntityLoader.java:186) at org.hibernate.persister.entity.AbstractEntityPersister.load(AbstractEntityPersister.java:4126) at org.hibernate.event.internal.DefaultLoadEventListener.loadFromDatasource(DefaultLoadEventListener.java:503) at org.hibernate.event.internal.DefaultLoadEventListener.doLoad(DefaultLoadEventListener.java:468) at org.hibernate.event.internal.DefaultLoadEventListener.load(DefaultLoadEventListener.java:213) at org.hibernate.event.internal.DefaultLoadEventListener.proxyOrLoad(DefaultLoadEventListener.java:275) at org.hibernate.event.internal.DefaultLoadEventListener.onLoad(DefaultLoadEventListener.java:151) at org.hibernate.internal.SessionImpl.fireLoad(SessionImpl.java:1106) at org.hibernate.internal.SessionImpl.access$2000(SessionImpl.java:176) at org.hibernate.internal.SessionImpl$IdentifierLoadAccessImpl.load(SessionImpl.java:2587) at org.hibernate.internal.SessionImpl.get(SessionImpl.java:991) at com.util.Executingclass.main(Executingclass.java:10)Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'product0_.product' in 'field list' at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source) at java.lang.reflect.Constructor.newInstance(Unknown Source) at com.mysql.jdbc.Util.handleNewInstance(Util.java:404) at com.mysql.jdbc.Util.getInstance(Util.java:387) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:941) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3870) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3806) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2470) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2617) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2550) at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1861) at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1962) at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:82) ... 16 more please help me in addressing this issue I am new to hibernate
Now provide the response and nothing else.
| In the stacktrace, there is a line that points towards the problem: Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'product0_.product' in 'field list' When you see "unknown column", the first thing that comes to mind is that there is some error in the mapping of the fields in the class:
@Id
@Column(name="product-id")
private int id;
@Column(name="product-name")
private String name;
@Column(name="product-description")
private String description;
@Column(name="product-price")
private float price;
And when you look at this together with the output in the stacktrace I mentioned above, you notice there is no product0_.product column name; the closest thing is just product followed by a hyphen, so maybe it's the hyphen that is causing trouble. After googling a bit, I found this answer from another question, which points out that using hyphens is possible but has a special requirement, and perhaps Hibernate is not dealing with this correctly. So to summarize, try without hyphens in the schema, in both places: the database and the @Entity . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34446364', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5713057/']} | jdg_86036 |
stackexchange | llm_judgeable_groundtruth_similarity | 192195 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Hi, we have the following problem: compute $\iiint x\,dx\,dy\,dz$ over the region bounded by the paraboloid $x=4 y^2+4z^2$ and the plane $x=4$. We are having difficulty finding the limits of each integral and converting to polar coordinates. Could you offer any tips? I can provide more details in the comments on what we tried. Thank you.
Now provide the response and nothing else.
| It sounds as if the professor is talking about contour integration . If complex analysis is a prerequisite for this course, then that is most likely what it is. An example of an integral with a real singularity and a removable singularity is the principal value integral$$\mathrm{PV}\int_{-\infty}^\infty\frac{e^{ix}}{x}\mathrm{d}x=\lim_{R\to\infty}\left(\int_{-R}^{-1/R}\frac{e^{ix}}{x}\mathrm{d}x+\int_{1/R}^{R}\frac{e^{ix}}{x}\mathrm{d}x\right)\tag{1}$$Since $e^{ix}=\cos(x)+i\sin(x)$, $(1)$ covers a principal value integral with a real singularity and an integral with a removable singularity :$$\mathrm{PV}\int_{-\infty}^\infty\frac{\cos(x)}{x}\mathrm{d}x\qquad\text{and}\qquad\int_{-\infty}^\infty\frac{\sin(x)}{x}\mathrm{d}x\tag{2}$$The integral on the left hand side of $(1)$ can be handled with the following contour: $\hspace{3.2cm}$ The integral in $(1)$ equals the integral of $\dfrac{e^{iz}}{z}$ over the two red pieces of the contour above as the large curve (radius $R$) gets larger and the small curve (radius $1/R$) gets smaller. To use contour integration, we close the contour with the two circular curves. Since there is no singularity of inside the contour, the integral over the entire contour is $0$. The integral over the large curve where $y<\sqrt{R}$ tends to $0$ since the absolute value of the integrand is $\le1/R$ over two pieces of the contour whose length is essentially $\sqrt{R}$. The integral over the large curve where $y\ge\sqrt{R}$ also tends to $0$ since the absolute value of the integral is less than $e^{-\sqrt{R}}/R$ over a contour whose length is less than $\pi R$. Thus, the integral over the large curve tends to $0$. Near the origin, we have$$\frac{e^{iz}}{z}=\frac1z+i-\frac{z}{2}-i\frac{z^2}{6}+\dots\tag{3}$$Aside from $\frac1z$, the series in $(3)$ is bounded and so its integral over the small curve tends to $0$ since the length of the curve is $\pi/R$. The integral of $\frac1z$ over a counter-clockwise circular arc centered at the origin is $i$ times the angle of the arc. Thus, the integral over the small curve is $-\pi i$ (clockwise circular arc with angle $\pi$). Since the integral over the entire contour is $0$ and the integral over the two curves tends to a total of $-\pi i$, the integral over the two red pieces must tend to $\pi i$. Taking real and imaginary parts yields$$\mathrm{PV}\int_{-\infty}^\infty\frac{\cos(x)}{x}\mathrm{d}x=0\qquad\text{and}\qquad\int_{-\infty}^\infty\frac{\sin(x)}{x}\mathrm{d}x=\pi\tag{4}$$Alternatively, we can also use the following contour: $\hspace{3.2cm}$ Everything is as above except that the contour contains the singularity at $0$ and the small circular curve is counter-clockwise. The integral over the entire contour is $2\pi i$ since the residue of the singularity is $1$. The integral over the large curve tends to $0$ as above. The integral over the small curve is $\pi i$ (counter-clockwise circular arc with angle $\pi$). Therefore, the integral over the two red pieces must again tend to $\pi i$. Thus, the integral can be handled circling the singularity either way. Perhaps something like this is what is going on in class. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/192195', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/39675/']} | jdg_86037 |
stackexchange | llm_judgeable_groundtruth_similarity | 26593316 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm uploading 15 photos to AWS S3 (v2) and I want to show the progress for each photo. First, I created an AWSS3TransferManagerUploadRequest for each photo. AWSS3TransferManagerUploadRequest *uploadRequest = [AWSS3TransferManagerUploadRequest new];// load uploadRequest atts...uploadRequest.uploadProgress = ^(int64_t bytesSent, int64_t totalBytesSent, int64_t totalBytesExpectedToSend) { int progress = (int)(totalBytesSent * 100.0 / totalBytesExpectedToSend); DDLogInfo(@"%d", progress);} Then I created a NSArray of BFTask BFTask *task = [self.s3transferManager upload:uploadRequest];[tasks addObject:task]; Finally: [[BFTask taskForCompletionOfAllTasks:tasks] continueWithExecutor:[BFExecutor mainThreadExecutor] withBlock:^id(BFTask *task) { if (task.error != nil) { DDLogError(@"Error: [%@]", task.error); } else { DDLogInfo(@"Complete!"); } return nil; }]; The problem I have is that the block associated with "uploadProgress" executes for the first 4 photos and then the rest of them are just uploaded but not tracking the progress. Any idea? Thanks!
Now provide the response and nothing else.
| Django REST Framework cannot automatically optimize queries for you, in the same way that Django itself won't. There are places you can look at for tips, including the Django documentation . It has been mentioned that Django REST Framework should automatically, though there are some challenges associated with that. This question is very specific to your case, where you are using a custom SerializerMethodField that makes a request for each object that is returned. Because you are making a new request (using the Friends.objects manager), it is very difficult to optimize the query. You can make the problem better though, by not creating a new queryset and instead getting the friend count from other places. This will require a backwards relation to be created on the Friendship model, most likely through the related_name parameter on the field, so you can prefetch all of the Friendship objects. But this is only useful if you need the full objects, and not just a count of the objects. This would result in a view and serializer similar to the following: class Friendship(models.Model): from_friend = models.ForeignKey(User, related_name="friends") to_friend = models.ForeignKey(User)class GetAllUsers(generics.ListAPIView): ... def get_queryset(self): return User.objects.all().prefetch_related("friends")class GetAllUsersSerializer(serializers.ModelSerializer): ... def get_is_friend_already(self, obj): request = self.context.get('request', None) friends = set(friend.from_friend_id for friend in obj.friends) if request.user != obj and request.user.id in friends: return True else: return False If you just need a count of the objects (similar to using queryset.count() or queryset.exists() ), you can include annotate the rows in the queryset with the counts of reverse relationships. This would be done in your get_queryset method, by adding .annotate(friends_count=Count("friends")) to the end (if the related_name was friends ), which will set the friends_count attribute on each object to the number of friends. This would result in a view and serializer similar to the following: class Friendship(models.Model): from_friend = models.ForeignKey(User, related_name="friends") to_friend = models.ForeignKey(User)class GetAllUsers(generics.ListAPIView): ... def get_queryset(self): from django.db.models import Count return User.objects.all().annotate(friends_count=Count("friends"))class GetAllUsersSerializer(serializers.ModelSerializer): ... def get_is_friend_already(self, obj): request = self.context.get('request', None) if request.user != obj and obj.friends_count > 0: return True else: return False Both of these solutions will avoid N+1 queries, but the one you pick depends on what you are trying to achieve. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26593316', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3074872/']} | jdg_86038 |
stackexchange | llm_judgeable_groundtruth_similarity | 290482 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a library which deals a lot with sub-sequences of ordered containers. So for example I have a container (1,2,3,4,5,6) and a user wants to access (3,4,5). I am providing the subsequence by a pair of iterators, pointing to its first and last element respectively, i.e. 3 and 5. Since the library is written in C++ and AFAIK the std convention is to have the last iterator point beyond the last element, I am wondering if what I am doing is good practice or whether I should return a pair of iterators pointing to the first and beyond-last element respectively, i.e. 3 and 6? Also, from a programming perspective, it complicates things when using std functionality; for example, to count the number of elements I have to do: int elementCnt = std::distance(startIt, endIt) + 1;
Now provide the response and nothing else.
| Follow the standard - the end is the iterator past the one you want. This allows you to use all the standard algorithms and containers without problems. It also means your users will be able to write the code they always have (e.g. for (x = startIt; x != endIt; x++)) and it will work as expected. If you change this behaviour and set the last iterator to the last element, all that goes out of the window, and you might as well use a different nomenclature than iterators, as you're effectively changing the way everyone will expect them to work. | {} | {'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/290482', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/186488/']} | jdg_86039
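The half-open [first, last) convention the answer recommends is the same one other languages use for ranges and slices. A small Python sketch (used here only as an illustration, since the question itself is about C++ iterators) shows why sizes and empty ranges fall out without any "+ 1" corrections:

    # Half-open range: 'last' points one past the final element you want.
    data = [1, 2, 3, 4, 5, 6]
    first, last = 2, 5                     # designates the subsequence (3, 4, 5)

    sub = data[first:last]                 # [3, 4, 5]
    assert len(sub) == last - first        # size is simply last - first
    assert data[first:first] == []         # first == last naturally means "empty range"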
stackexchange | llm_judgeable_groundtruth_similarity | 380943 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Please could someone clarify the following: I apply the Affinity Propagation (AP) algorithm to a data set. The minimum number of elements in any cluster I got is three. I know in advance that my data set contains some unique elements that should each be allocated to their own separate cluster. So I would like to allow the minimum number of elements in a cluster to be one and then apply AP. Is it possible to do this with AP (by changing some parameters), or is it a property of AP that the clustering always yields clusters with at least three elements? Thank you in advance.
Now provide the response and nothing else.
| Like @Chris said, you can use some kind of divergence or information-theoretic idea. KL divergence, as he said, is good, or more generally an f-divergence, etc., but more simply you can just use entropy. You said the measure should be minimized when the distribution is uniform; entropy is maximized there, so use negative entropy. You said it should map to real numbers; if you want the [0,1] interval for easier comparison, use (negative) normalized entropy (divide by log N): https://math.stackexchange.com/questions/395121/how-entropy-scales-with-sample-size Or, since you said the end goal is to compare some empirical CDFs, then depending on your problem there may be a more direct way to do this, such as:

KS test: https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
Anderson-Darling: https://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test
Cramer-von Mises: https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93von_Mises_criterion

etc. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/380943', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/116569/']} | jdg_86040
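As a rough sketch of the two suggestions (assuming NumPy and SciPy are available; the category counts and samples below are made up purely for illustration), negative normalized entropy and a two-sample KS comparison could look like this:

    import numpy as np
    from scipy import stats

    # Negative normalized entropy of a discrete distribution (score lies in [-1, 0]).
    counts = np.array([40, 30, 20, 10])        # hypothetical category counts (all positive)
    p = counts / counts.sum()
    entropy = -np.sum(p * np.log(p))
    score = -(entropy / np.log(len(p)))        # -1 for a uniform distribution, closer to 0 as mass concentrates

    # Comparing two empirical samples directly with a two-sample KS test.
    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = rng.normal(loc=0.5, size=200)
    ks_stat, p_value = stats.ks_2samp(x, y)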