source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 278101 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been studying the PICAXE microcontroller as a driver for a set of three 12VDC solenoid valves (though I may choose the 24V models). For reference, the controller will use a temperature sensor to determine when to actuate the solenoids, in addition to elapsed time (via the firmware). Conceptually, the use of a relay to control each solenoid makes sense to me since they operate at a higher voltage than the microcontroller. However, it has been about 20 years since I last studied electronics in high school, and I get a bit confused when I see slightly more complicated solutions to this problem. Here is a schematic of how I thought I should build the circuit, simplified to one output pin and one solenoid for brevity. I then found the "Standard Circuit" in the PICAXE Interfacing Circuits manual, which is represented as follows. Why is a transistor required to actuate the relay? Perhaps the output current of the microcontroller pin (~20mA) is insufficient to power the coil in the relay? Finally, I became concerned with the space that three relays would occupy on my circuit board and thought I might be able to use an AND gate chip as follows. The benefit being that such chips usually contain four gates, thus replacing three relays with one smaller chip. In this example, one gate input is fixed at 12V. Is this gate solution feasible? That is, if the two "high" inputs are different, but recognized as a high by the chip, will the output voltage generally be the max of the two inputs? I found a data sheet for the BU4081B chip, but I am having difficulty finding the answer to this question in it.
Now provide the response and nothing else.
| Why use a transistor to drive a relay? Because the transistor can handle a lot more current than a digital output of a microcontroller. A relay might need 60 mA at 5 V, whereas a microcontroller output might only be able to deliver 20 mA. The transistor provides current gain. It can be controlled with only a few mA, and can then switch considerably more than that. Driving a solenoid Your ultimate goal is to drive a solenoid from a digital logic signal. First driving a relay, then having the relay drive the solenoid is unnecessary complication. With the right circuit, the solenoid can be driven using transistors as amplifying devices instead of relays. This is desirable because transistors are smaller, cheaper, more reliable, and the result will be more efficient. Consider that the coil of a relay is a solenoid. You already know how to drive one. The only difference might be that the solenoid you really want to drive requires more current than the coil of a relay to switch the solenoid. However, that's just a matter of choosing the right active part to drive the solenoid. For example, here is a circuit that can drive a 24 V solenoid from a 0-5 V digital logic signal: This transistor can support several amps, and has a maximum on resistance of only 29 mΩ with 4.5 V gate drive. At 2 A, for example, it will dissipate no more than 116 mW. Unless your solenoids need more than 2 A or so, this will do fine. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/278101', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/3814/']} | jdg_81041 |
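A quick check of the dissipation figure quoted in the answer above: this is just Ohmic power loss, using the 2 A current and 29 mΩ on-resistance values given there.

$$P = I^2 \cdot R_{DS(on)} = (2\,\mathrm{A})^2 \times 0.029\,\Omega \approx 0.116\,\mathrm{W} \approx 116\,\mathrm{mW}$$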
stackexchange | llm_judgeable_groundtruth_similarity | 11234921 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
For C++ STL containers such as vector and list , the complexity of finding elements and inserting or removing them is self-explanatory. However, for the map container, even though I know from my reading that the access and insertion complexity/performance is O(log(n)), I can't work out why . I clearly don't understand maps as much as I need to, so some enlightenment on this topic would be very much appreciated.
Now provide the response and nothing else.
| The elements of a map or set are contained in a tree structure; every time you examine a node of the tree, you determine if the element you're trying to find/insert is less than or greater than the node. The number of times you need to do this (for a properly balanced tree) is log2(N) because each comparison throws out half of the possibilities. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11234921', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1472459/']} | jdg_81042 |
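To make the halving argument concrete, here is a small illustrative sketch (in Python rather than C++, purely for illustration) that counts how many times a set of N candidates can be cut in half, which is the number of comparisons a balanced tree lookup needs:

```python
# Each comparison in a balanced binary search tree discards half of the
# remaining candidates, so the number of comparisons grows like log2(N).
def comparisons_needed(n):
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

for n in (1_000, 1_000_000, 1_000_000_000):
    print(n, comparisons_needed(n))   # roughly 10, 20, 30
```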
stackexchange | llm_judgeable_groundtruth_similarity | 327706 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am aware that light partially polarizes upon reflection off a non-metallic surface, however, why is it that this only occurs for non-metallic surfaces?
Now provide the response and nothing else.
| The phenomenon is nicely explained by a diagram on http://www.giangrandi.ch/optics/polarizer/polarizer.shtml You see light that is incident on a dielectric material - some is refracted, and some is reflected. The reflection is in fact due to the motion of electrons in the material. Now if the angle between the refracted and reflected beam is exactly 90 degrees (the condition known as the Brewster angle), there is no component of the electron motion inside the material that can generate a vertical component of polarization in the reflected light - and so the reflected light has purely horizontal polarization. For other angles of incidence, the effect is less pronounced - but still there. In metals, the mechanism for reflection is different. There is no refracted beam, because the conductivity is so high (there are many electrons in the conduction band - where in a dielectric they are all more tightly bound). As a result, it's the electrons at the very surface that move in order to reflect the light - and because of this there is no polarization in the reflected light. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/327706', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/136407/']} | jdg_81043 |
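The condition described in the answer above (reflected and refracted rays at 90 degrees) corresponds to the Brewster angle, given by the standard formula tan θ_B = n₂/n₁. A small sketch computing it for an assumed air-to-glass interface (n₁ = 1.0, n₂ = 1.5):

```python
import math

n1, n2 = 1.0, 1.5                                  # assumed air -> glass indices
theta_b = math.degrees(math.atan(n2 / n1))         # Brewster angle
print(f"Brewster angle: {theta_b:.1f} degrees")    # about 56.3 degrees
```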
stackexchange | llm_judgeable_groundtruth_similarity | 73968 |
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm interested in specifying a digital low pass filter with low (and near constant) pass band group delay. I tried Python's scipy.signal.minimum_phase to transform a linear phase FIR low pass filter to minimum phase, but there was a steep price in stop-band rejection level. Is there a "standard" approach to this problem?
Now provide the response and nothing else.
| Minimum phase filters will not give you a near constant group delay. You can design a non-linear phase FIR filter with a linear desired passband phase with a specified group delay that is smaller than the group delay of the corresponding linear phase filter. If you use a least-squares criterion, this is equivalent to solving a system of linear equations. As an example, take an FIR lowpass with $61$ taps. The group delay of a linear-phase filter would be $30$ samples. Now we specify a group delay of $20$ samples instead:
w = pi * [linspace(0,.2,100), linspace(.25,1,375)]; % frequency vector
D = [exp(-1i*w(1:100)*20), zeros(1,375)]; % desired complex frequency response
W = ones(1,475); % unity weighting
h = lslevin(61,w,D,W); % least squares design
The function lslevin.m can be found here . The result looks like this: Note that the passband group delay is of course not exactly constant, but the desired group delay of $20$ samples is approximated quite accurately over a large part of the passband. Trying to achieve a low group delay in the passband comes with a penalty in the magnitude response. The figure below shows the magnitude and group delay responses of $4$ length $61$ FIR lowpass filters with different desired group delays in the passband. One is a linear phase filter with a constant group delay of $30$ samples, and the other $3$ filters are non-linear phase low delay filters with specified passband group delays of $25$ , $20$ , and $15$ samples respectively. Clearly, the magnitude response deteriorates with decreasing desired passband group delay. | {} | {'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/73968', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/22596/']} | jdg_81044 |
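For readers working in Python, here is a rough numpy sketch of the same least-squares idea (solving a linear system for the taps). It is not the lslevin algorithm itself; the grid sizes, band edges and the 20-sample delay mirror the example above, everything else is an assumption:

```python
import numpy as np

# Rough least-squares FIR design with a prescribed passband group delay.
N  = 61                                                 # number of taps
w  = np.pi * np.r_[np.linspace(0, .2, 100), np.linspace(.25, 1, 375)]
D  = np.r_[np.exp(-1j * w[:100] * 20), np.zeros(375)]   # desired response
Wt = np.ones_like(w)                                    # unity weighting

A  = np.exp(-1j * np.outer(w, np.arange(N)))            # rows = frequencies, cols = taps
Aw, Dw = A * Wt[:, None], D * Wt
# Stack real and imaginary parts so the solved taps stay real-valued.
h, *_ = np.linalg.lstsq(np.vstack([Aw.real, Aw.imag]),
                        np.r_[Dw.real, Dw.imag], rcond=None)
print(h.shape)                                          # (61,) real FIR taps
```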
stackexchange | llm_judgeable_groundtruth_similarity | 7699 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why don't US politicians in the Senate, House of Representatives, and Supreme Court decide to give themselves, say, 100 million dollars each? I understand that the politicians doing so wouldn't get reelected and that their political career would end, but why would it matter if they already made themselves rich?
Now provide the response and nothing else.
| Formally, the Twenty Seventh Amendment stipulates that salary changes made by congress cannot take effect until after the next house elections. No law, varying the compensation for the services of the Senators and Representatives, shall take effect, until an election of Representatives shall have intervened. In other words, Representatives would be voted out of office before they would receive the benefit, and Representatives have no incentive to approve the change for Senators who will remain in office. (A quid pro quo plan would also prove ineffective as those who would receive the future salary would have no incentive to pay back the poor ousted congressperson; the rich and powerful need a reason to give back to the poor and weak, heart isn't enough). There are also several informal reasons why congress doesn't vote itself to become rich. Here I name a few. First,the average congressional person likely receives more benefits from office than 100 million dollars. Second, in times of divided government (when one party does not control both chambers or both congress and the executive) coordinating 268 people to become rich is difficult. Third, because both parties would receive the pay raise, they lack an incentive to empower their opposition. Finally, morals and norms can constrain behaviors. For instance,some politicians may have scruples that would lead them to avoid devastating the country's finances. There are also congressional norms that dictate behavior and constrain congresspeople from wild actions, such as blatantly putting money in their own pockets. | {} | {'log_upvote_score': 5, 'links': ['https://politics.stackexchange.com/questions/7699', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/5122/']} | jdg_81045 |
stackexchange | llm_judgeable_groundtruth_similarity | 29678510 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently working on a simple 3D panorama viewer for a website. For mobile performance reasons I'm using the Three.js CSS 3 renderer . This requires a cube map, split up into six single images. I'm recording the images on the iPhone with Google Photo Sphere , or similar apps that create 2:1 equirectangular panoramas. I then resize and convert these to a cubemap with this website: http://gonchar.me/panorama/ (Flash) Preferably, I'd like to do the conversion myself , either on the fly in Three.js, if that's possible, or in Photoshop. I found Andrew Hazelden's Photoshop actions, and they seem kind of close, but no direct conversion is available. Is there a mathematical way to convert these, or some sort of script that does it? I'd like to avoid going through a 3D application like Blender , if possible. Maybe this is a long shot, but I thought I'd ask. I have okay experience with JavaScript, but I'm pretty new to Three.js. I'm also hesitant to rely on the WebGL functionality, since it seems either slow or buggy on mobile devices. Support is also still spotty.
Now provide the response and nothing else.
| If you want to do it server side there are many options. ImageMagick has a bunch of command-line tools which could slice your image into pieces. You could put the command to do this into a script and just run that each time you have a new image. It's hard to tell quite what algorithm is used in the program. We can try and reverse engineer quite what is happening by feeding a square grid into the program. I've used a grid from Wikipedia : Which gives: This gives us a clue as to how the box is constructed. Imaging a sphere with lines of latitude and longitude on it, and a cube surrounding it. Now projecting from the point at center of the sphere produces a distorted grid on the cube. Mathematically, take polar coordinates r, θ, ø, for the sphere r=1, 0 < θ < π, -π/4 < ø < 7π/4 x= r sin θ cos ø y= r sin θ sin ø z= r cos θ centrally project these to the cube. First we divide into four regions by the latitude -π/4 < ø < π/4, π/4 < ø < 3π/4, 3π/4 < ø < 5π/4, 5π/4 < ø < 7π/4. These will either project to one of the four sides the top or the bottom. Assume we are in the first side -π/4 < ø < π/4. The central projection of(sin θ cos ø, sin θ sin ø, cos θ) will be (a sin θ cos ø, a sin θ sin ø, a cos θ) which hits the x=1 plane when a sin θ cos ø = 1 so a = 1 / (sin θ cos ø) and the projected point is (1, tan ø, cot θ / cos ø) If | cot θ / cos ø | < 1, this will be on the front face. Otherwise, it will be projected on the top or bottom and you will need a different projection for that. A better test for the top uses the fact that the minimum value of cos ø will be cos π/4 = 1/√2, so the projected point is always on the top if cot θ / (1/√2) > 1 or tan θ < 1/√2. This works out as θ < 35º or 0.615 radians. Put this together in Python: import sysfrom PIL import Imagefrom math import pi,sin,cos,tandef cot(angle): return 1/tan(angle)# Project polar coordinates onto a surrounding cube# assume ranges theta is [0,pi] with 0 the north poll, pi south poll# phi is in range [0,2pi]def projection(theta,phi): if theta<0.615: return projectTop(theta,phi) elif theta>2.527: return projectBottom(theta,phi) elif phi <= pi/4 or phi > 7*pi/4: return projectLeft(theta,phi) elif phi > pi/4 and phi <= 3*pi/4: return projectFront(theta,phi) elif phi > 3*pi/4 and phi <= 5*pi/4: return projectRight(theta,phi) elif phi > 5*pi/4 and phi <= 7*pi/4: return projectBack(theta,phi)def projectLeft(theta,phi): x = 1 y = tan(phi) z = cot(theta) / cos(phi) if z < -1: return projectBottom(theta,phi) if z > 1: return projectTop(theta,phi) return ("Left",x,y,z)def projectFront(theta,phi): x = tan(phi-pi/2) y = 1 z = cot(theta) / cos(phi-pi/2) if z < -1: return projectBottom(theta,phi) if z > 1: return projectTop(theta,phi) return ("Front",x,y,z)def projectRight(theta,phi): x = -1 y = tan(phi) z = -cot(theta) / cos(phi) if z < -1: return projectBottom(theta,phi) if z > 1: return projectTop(theta,phi) return ("Right",x,-y,z)def projectBack(theta,phi): x = tan(phi-3*pi/2) y = -1 z = cot(theta) / cos(phi-3*pi/2) if z < -1: return projectBottom(theta,phi) if z > 1: return projectTop(theta,phi) return ("Back",-x,y,z)def projectTop(theta,phi): # (a sin θ cos ø, a sin θ sin ø, a cos θ) = (x,y,1) a = 1 / cos(theta) x = tan(theta) * cos(phi) y = tan(theta) * sin(phi) z = 1 return ("Top",x,y,z)def projectBottom(theta,phi): # (a sin θ cos ø, a sin θ sin ø, a cos θ) = (x,y,-1) a = -1 / cos(theta) x = -tan(theta) * cos(phi) y = -tan(theta) * sin(phi) z = -1 return ("Bottom",x,y,z)# Convert coords in cube to image coords# coords is a tuple with the 
side and x,y,z coords# edge is the length of an edge of the cube in pixelsdef cubeToImg(coords,edge): if coords[0]=="Left": (x,y) = (int(edge*(coords[2]+1)/2), int(edge*(3-coords[3])/2) ) elif coords[0]=="Front": (x,y) = (int(edge*(coords[1]+3)/2), int(edge*(3-coords[3])/2) ) elif coords[0]=="Right": (x,y) = (int(edge*(5-coords[2])/2), int(edge*(3-coords[3])/2) ) elif coords[0]=="Back": (x,y) = (int(edge*(7-coords[1])/2), int(edge*(3-coords[3])/2) ) elif coords[0]=="Top": (x,y) = (int(edge*(3-coords[1])/2), int(edge*(1+coords[2])/2) ) elif coords[0]=="Bottom": (x,y) = (int(edge*(3-coords[1])/2), int(edge*(5-coords[2])/2) ) return (x,y)# convert the in image to out imagedef convert(imgIn,imgOut): inSize = imgIn.size outSize = imgOut.size inPix = imgIn.load() outPix = imgOut.load() edge = inSize[0]/4 # the length of each edge in pixels for i in xrange(inSize[0]): for j in xrange(inSize[1]): pixel = inPix[i,j] phi = i * 2 * pi / inSize[0] theta = j * pi / inSize[1] res = projection(theta,phi) (x,y) = cubeToImg(res,edge) #if i % 100 == 0 and j % 100 == 0: # print i,j,phi,theta,res,x,y if x >= outSize[0]: #print "x out of range ",x,res x=outSize[0]-1 if y >= outSize[1]: #print "y out of range ",y,res y=outSize[1]-1 outPix[x,y] = pixelimgIn = Image.open(sys.argv[1])inSize = imgIn.sizeimgOut = Image.new("RGB",(inSize[0],inSize[0]*3/4),"black")convert(imgIn,imgOut)imgOut.show() The projection function takes the theta and phi values and returns coordinates in a cube from -1 to 1 in each direction. The cubeToImg takes the (x,y,z) coordinates and translates them to the output image coordinates. The above algorithm seems to get the geometry right using an image of buckingham palace . We get: This seems to get most of the lines in the paving right. We are getting a few image artefacts. This is due to not having a one-to-one map of pixels. We need to do is use an inverse transformation. Rather than a loop through each pixel in the source and finding the corresponding pixel in the target, we loop through the target images and find the closest corresponding source pixel. 
import sysfrom PIL import Imagefrom math import pi,sin,cos,tan,atan2,hypot,floorfrom numpy import clip# get x,y,z coords from out image pixels coords# i,j are pixel coords# face is face number# edge is edge lengthdef outImgToXYZ(i,j,face,edge): a = 2.0*float(i)/edge b = 2.0*float(j)/edge if face==0: # back (x,y,z) = (-1.0, 1.0-a, 3.0 - b) elif face==1: # left (x,y,z) = (a-3.0, -1.0, 3.0 - b) elif face==2: # front (x,y,z) = (1.0, a - 5.0, 3.0 - b) elif face==3: # right (x,y,z) = (7.0-a, 1.0, 3.0 - b) elif face==4: # top (x,y,z) = (b-1.0, a -5.0, 1.0) elif face==5: # bottom (x,y,z) = (5.0-b, a-5.0, -1.0) return (x,y,z)# convert using an inverse transformationdef convertBack(imgIn,imgOut): inSize = imgIn.size outSize = imgOut.size inPix = imgIn.load() outPix = imgOut.load() edge = inSize[0]/4 # the length of each edge in pixels for i in xrange(outSize[0]): face = int(i/edge) # 0 - back, 1 - left 2 - front, 3 - right if face==2: rng = xrange(0,edge*3) else: rng = xrange(edge,edge*2) for j in rng: if j<edge: face2 = 4 # top elif j>=2*edge: face2 = 5 # bottom else: face2 = face (x,y,z) = outImgToXYZ(i,j,face2,edge) theta = atan2(y,x) # range -pi to pi r = hypot(x,y) phi = atan2(z,r) # range -pi/2 to pi/2 # source img coords uf = ( 2.0*edge*(theta + pi)/pi ) vf = ( 2.0*edge * (pi/2 - phi)/pi) # Use bilinear interpolation between the four surrounding pixels ui = floor(uf) # coord of pixel to bottom left vi = floor(vf) u2 = ui+1 # coords of pixel to top right v2 = vi+1 mu = uf-ui # fraction of way across pixel nu = vf-vi # Pixel values of four corners A = inPix[ui % inSize[0],clip(vi,0,inSize[1]-1)] B = inPix[u2 % inSize[0],clip(vi,0,inSize[1]-1)] C = inPix[ui % inSize[0],clip(v2,0,inSize[1]-1)] D = inPix[u2 % inSize[0],clip(v2,0,inSize[1]-1)] # interpolate (r,g,b) = ( A[0]*(1-mu)*(1-nu) + B[0]*(mu)*(1-nu) + C[0]*(1-mu)*nu+D[0]*mu*nu, A[1]*(1-mu)*(1-nu) + B[1]*(mu)*(1-nu) + C[1]*(1-mu)*nu+D[1]*mu*nu, A[2]*(1-mu)*(1-nu) + B[2]*(mu)*(1-nu) + C[2]*(1-mu)*nu+D[2]*mu*nu ) outPix[i,j] = (int(round(r)),int(round(g)),int(round(b)))imgIn = Image.open(sys.argv[1])inSize = imgIn.sizeimgOut = Image.new("RGB",(inSize[0],inSize[0]*3/4),"black")convertBack(imgIn,imgOut)imgOut.save(sys.argv[1].split('.')[0]+"Out2.png")imgOut.show() The results of this are: If anyone want to go in the reverse, see this JS Fiddle page . | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/29678510', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3625228/']} | jdg_81046 |
stackexchange | llm_judgeable_groundtruth_similarity | 17877220 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
To keep things simple and to avoid nameing collisions, I've been bundling links in my record resources like this... { id: 211, first_name: 'John', last_name: 'Lock', _links: [ { rel: 'self', href: 'htttp://example.com/people/211' } ]} However, I can't work out how to implement links in collections. I have spent a long time trawling around the web for examples and other than using the not so lean HAL I'm unable to reconcile my problem. [ {id:1,first_name:.....}, {id:2,first_name:.....}, {id:3,first_name:.....}, "_links": "Cant put a key value pair here because its an-array" ] Which means I have to wrap up the array in a container object. { people: [ {id:1,first_name:.....} ], links: [ { rel:parent, href:.... ]} But is is different to the singular resource so I'm going to make the record behave like the collection and wrap it up in a container.... { person: { id: 211, first_name: 'John', last_name: 'Lock' }, links:[ { rel: 'self', href: 'htttp://example.com/people/211' } ] } On the surface, this seems like quite a neat solution. The resulting JSON is one level deeper, but HATEOAS has been implemented, so that's all good right? Not at all. The real sting comes when I go back to the collection. Now that the single resource has been wrapped up in a container in order to be consistent with the collection, the collection must now be changed in order to reflect the changes. And this is where it gets ugly. Very ugly. Now the collection looks like this... { "people": [ { "person": { .... }, "links" : [ { "rel": "self", "href": "http://example.com/people/1" } ] }, { "person": { .... }, "links" : [ { "rel": "self", "href": "http://example.com/people/2" } ] } ], "links" : [ { "rel": "self", "href": "http://example.com/people" } ]} Is there a simpler solution to implementing HATEOAS for collections? Or should I kiss HATEOAS goodbye for forcing me to over complicate the data structure?
Now provide the response and nothing else.
| Please don't dismiss HAL so quickly just because it looks a little bloated (in its JSON form, it's quite minimal). HAL is to JSON what HTML is to plain text. It adds hyperlinks. You need hyperlinks and a commonly understood representation format (such as HAL or Collection+JSON) for REST. You also need HATEOAS for REST, without HATEOAS it isn't REST! HATEOAS requires hyperlinks of course. In your case, you are trying to build a collection resource. The IANA-registered relation for that is "item" (with reverse relation "collection"). Here is the representation in HAL for a People collection: { "_links": { "self": { "href": "http://example.com/people" }, "item": [ { "href": "http://example.com/people/1", "title": "John Smith" }, { "href": "http://example.com/people/2", "title": "Jane Smith" } ] }, "_embedded": { "http://example.com/rels#person": [ { "first_name": "John", "last_name": "Smith", "_links": { "self": { "href": "http://example.com/people/1" }, "http://example.com/rels#spouse": { "href": "http://example.com/people/2" } } }, { "first_name": "Jane", "last_name": "Smith", "_links": { "self": { "href": "http://example.com/people/2" }, "http://example.com/rels#spouse": { "href": "http://example.com/people/1" } } } ] }} Note: The primary data for this collection comes from _links.item[] . These are the items in the collection. The full (or at least some additional) data for each item is available in the _embedded array. If the client needs these additional data, it must find them by searching through _embedded[n]._links.self.href for each n . This is a design constraint of HAL. Other hypermedia representation formats have similar constraints (though perhaps going in the other direction). I have added a title value for each member of the item array. This can appear between the opening and closing anchor tags if rendering to HTML, or as the text of a menu item in the client, without need for further processing of the representation by the client. There are no ID parameters. All references to other resources are exposed as hyperlinks. A client should not have to "build" a URL by gluing an ID into a URL at some pre-defined place. This constitutes out-of-band information which inhibits independent changes to the client and server. All of your hyperlinks should be absolute, since relative URLs may cause problems. All of your relations should be either listed on that IANA page or use a URI to define them. Ideally, that URI should be a dereferencable HTTP URL with documentation about the relation at the other end. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17877220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/560383/']} | jdg_81047 |
stackexchange | llm_judgeable_groundtruth_similarity | 6574274 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been seeing a lot of talks and articles on coroutines in python. They are considered to be "microthreads" and I've heard they improve performance. How do coroutines improve performance? From what I've seen so far, they are single threaded and execute in sequence. I agree that from a design perspective they are quite elegant especially in producer/consumer type applications. I guess I am missing the point. Can someone help explain?
Now provide the response and nothing else.
| Coroutines don't really improve performance except in a very limited sense: multi-threaded programs have a certain overhead and coroutines provide some of the benefit of threading without incurring those overheads. However, most multi-threaded applications (even in C-Python with its GIL) benefit from the overlap when one thread blocks on a system call and other threads can run: that doesn't generally happen with coroutines. If you have a few threads then usually the overlapping wins out and coroutines give no performance benefit. If you need thousands of threads then the thread switch overheads will be much greater and in that situation coroutines might give a benefit but reducing the number of threads is likely to give much greater benefit. The real gain of coroutines is that for producer/consumer applications they make the coding much simpler and therefore faster to code. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6574274', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/223912/']} | jdg_81048 |
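A minimal sketch of the producer/consumer pattern the answer above mentions, using plain generator-based coroutines (illustrative only, no claim about performance):

```python
def consumer():
    """Coroutine that receives items via send() and keeps a running total."""
    total = 0
    while True:
        item = yield total        # suspend until the producer sends a value
        total += item

def producer(sink, items):
    next(sink)                    # prime the coroutine
    running_total = 0
    for item in items:
        running_total = sink.send(item)
    return running_total

print(producer(consumer(), [1, 2, 3, 4]))   # 10
```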
stackexchange | llm_judgeable_groundtruth_similarity | 121272 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Since slower is better and key stretching is an awesome way to increase security over potentially low entropy passwords, doesn't it make sense to use all three? To be clear, this is about implementing a key stretching architecture for your software. I see a lot of disputes about which is best, but it seems to me - just use all three.
Now provide the response and nothing else.
| PBKDF2, scrypt and bcrypt are all configurable ; they can be made as slow as you want. The limiting point is not the computer, but the user's patience. For example, suppose that the user will go irate if the password processing (e.g. to unlock an archive file) takes more than 6 seconds. If you use bcrypt only, then you can tune it up so that it takes 6 seconds on your machine. But if you use PBKDF2 and scrypt and bcrypt in succession, then they will have to fit together in the 6-second budget, meaning that, in practice, you'll configure them with lower iteration counts, e.g. 2 seconds each. From that point of view, using three algorithms in succession is no better than one; what matter is the total time. The point of this configurable slowness is that it makes things harder for the attacker; you want to choose an algorithm that makes each password guess as expensive as possible for the attacker. The attacker does not have necessarily the same hardware as you (e.g. he will likely try to use a GPU), so each algorithm may be more or less "good". The attacker can always buy a PC such as yours, and thus do in 6 seconds what you do in 6 seconds; however, the attacker may sometimes buy a more specialized hardware which will be better than yours for that specific task. This is especially the case with PBKDF2 and GPU. As a defender, you want to avoid such a situation. Thus, some algorithms are better than others, and, in the PBKDF2 / bcrypt / scrypt triad, one may assume that one algorithm is "stronger" in the sense explained above. Therefore, using just that algorithm, alone and with its configuration adjusted so that it takes as much time as you can tolerate, will be optimal. By definition, using all three algorithms in succession cannot be as strong as that unique algorithm alone. In practice, this means that you should use bcrypt: the budget you spend on PBKDF2 is squandered security, since, for the same CPU usage on your machine, bcrypt would have provided more resistance (see this for some details about bcrypt vs scrypt, and that for even more details on password hashing in general). | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/121272', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/31606/']} | jdg_81049 |
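The "make it as slow as you can tolerate" tuning described above can be sketched with the standard library's PBKDF2; the 0.5-second budget and starting iteration count are arbitrary assumptions, and bcrypt or scrypt would be calibrated the same way:

```python
import hashlib, os, time

def calibrate_pbkdf2(budget_seconds=0.5, start=10_000):
    """Double the iteration count until one derivation takes at least the budget."""
    salt, iterations = os.urandom(16), start
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"test password", salt, iterations)
        if time.perf_counter() - t0 >= budget_seconds:
            return iterations
        iterations *= 2

print("use roughly %d iterations" % calibrate_pbkdf2())
```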
stackexchange | llm_judgeable_groundtruth_similarity | 5615296 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've recently encountered an error trying to host my asp.net site with IIS. I have found a solution that many swear by. Solution: Add IIS_IUSRS with Read permission on files in the folder Change IIS authentication method to BasicAuthentication refresh the website. It will work ( http://vivekthangaswamy.blogspot.com/2009/07/aspnet-website-cannot-read.html ) What do I add to my web.config file though? I've never had to edit it before. Here is its current contents: <?xml version="1.0"?><!-- For more information on how to configure your ASP.NET application, please visit http://go.microsoft.com/fwlink/?LinkId=169433 --><configuration> <connectionStrings> <add name="DefaultConnection" connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Database.mdf;Integrated Security=True;User Instance=True" providerName="System.Data.SqlClient" /> </connectionStrings> <system.web> <compilation debug="true" strict="false" explicit="true" targetFramework="4.0"/> </system.web></configuration> My error is: Config Error: Cannot read configuration file due to insufficient permissions Config File: \?\C:\Users*****\Documents\Visual Studio2010\WebSites\PointsForTime\web.config
Now provide the response and nothing else.
| There is no problem with your web.config . Your web site runs under a process. In iis you can define the identity of that process. The identity that your web site's application pool runs as ( Network Services, Local System , etc.), should have permission to access and read web.config file. Update: This updated answer is same as above, but a little longer and simpler and improved. First of all : you don't have to change anything in your config file. It's OK . The problem is with windows file permissions . This problems occurs because your application can not access and read web.config file. Make the file accessible to IIS_IUSRS group. Just right click web.config and click properties , under security tab, add IIS_IUSRS . So what is this IIS_IUSRS thing? Your web site is like an exe file. Just like any exe file, it should be started by a user and it runs according to permissions assigned to that user. When your site is started in IIS , Application Pool of your web site is associated with a user ( Network Services, Local System , Etc. ...) (and can be changed in IIS) So when you say IIS_IUSRS , it means any user ( Network Services, Local System , Etc. ...) that your site is running as. And as @Seph mentioned in comment below : If your computer is on a domain , remember that IIS_IUSRS group is a local group . Also make sure that when you're trying to find this user check the location it should be set to local computer and not a corporate domain. | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/5615296', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/595437/']} | jdg_81050 |
stackexchange | llm_judgeable_groundtruth_similarity | 476536 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the following json file, { "email": "xxx", "pass": "yyy", "contact": [ { "id": 111, "name": "AAA" } ], "lname": "YYY", "name": "AAA", "group": [ { "name": "AAA", "lname": "YYY", } ], I need to look for the key "name" and replace its value to "XXX" at all places. Which jq command does that ?
Now provide the response and nothing else.
| jq's assignment operations can perform an update on as many locations at once as you can name and are made for this sort of situation. You can use jq '(.. | .name?) |= "XXXX"' to find every field called "name" anywhere and replace the value in each all at once with "XXXX", and output the resulting object. This is just the ..|.a? example from the recursive-descent documentation combined with update assignment . It uses the recursive descent operator .. to find every single value in the tree, then pulls out the "name" field from each of them with .name , suppresses any errors from non-matching values with ? , and then updates the object in all those places at once with "XXXX" using the update-assignment operator |= , and outputs the new object. This will work no matter what the file structure is and update every name field everywhere. Alternatively, if the file always has this structure, and it's those particular "name" fields you want to change , not just any old name, you can also just list them out and assign to them as a group as well: jq '(.name, .contact[].name, .group[].name) |= "XXXX"' This does the same assignment to the "name" field of the top-level object; the "name" field of every object in the "contact" array; and the "name" field of every object in the "group" array. all in one go. This is particularly useful if the file might have other name fields in there somewhere unrelated that you don't want to change. It finds just the three sets of locations named there and updates them all simultaneously. If the value is just a literal like it is here then plain assignment with = works too and saves you a character: (..|.name?)="XXXX" - you'd also want this if your value is computed based on the whole top-level object. If instead you want to compute the new name based on the old one, you need to use |= . If I'm not sure what to use, |= generally has slightly nicer behaviour in the corner cases. If you have multiple replacements to do , you can pipe them together: jq '(..|.name?) = "XXXX" | (..|.lname?) = "1234"' will update both the "name" and "lname" fields everywhere, and output the whole updated object once. A few other approaches that may work: You could also be really explicit about what you're selecting with (..|objects|select(has("name"))).name |= "XXXX"` which finds everything, then just the objects, then just the objects that have a "name", then the name field on those objects, and performs the same update as before. If you're running the development version of jq (unlikely) then the walk function can also do the job: walk(.name?="XXXX") . All the other versions will work on the latest released version, 1.5. An alternative multi-update could be jq '(..|select(has("name"))?) += {name: "XXXX", lname: "1234"}' which finds everything with a name and then sets both "name" and "lname" on each object using arithmetic update-assignment *= and the merging behaviour that + has for objects . | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/476536', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/316642/']} | jdg_81051 |
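For comparison, the same "replace every name field anywhere in the tree" operation can be sketched in Python (this mirrors the behaviour of the jq filter above, not its implementation; the input file name is an assumption and the JSON is assumed to be valid):

```python
import json

def replace_key(node, key, value):
    """Recursively set every occurrence of `key` in nested dicts/lists."""
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                node[k] = value
            else:
                replace_key(v, key, value)
    elif isinstance(node, list):
        for item in node:
            replace_key(item, key, value)

with open("input.json") as fh:        # assumed file name
    data = json.load(fh)
replace_key(data, "name", "XXX")
print(json.dumps(data, indent=2))
```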
stackexchange | llm_judgeable_groundtruth_similarity | 8917812 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am getting an error when using my application that used tableviews, the error looks like this. 2012-01-19 10:19:51.442 bcode[1176:207] *** Terminating app due to uncaught exception 'NSRangeException', reason: '-[UITableView scrollToRowAtIndexPath:atScrollPosition:animated:]: row (3) beyond bounds (3) for section (0).'*** First throw call stack:(0x16c3052 0x1920d0a 0x166ba78 0x166b9e9 0x68a136 0x3bfdf 0x6c2fbf 0x6c32d4 0x6c35d7 0x6d2666 0x87890e 0x878c17 0x878c86 0x62c499 0x62c584 0x2718e00 0x13614f0 0x15fa833 0x15f9db4 0x15f9ccb 0x1d05879 0x1d0593e 0x5fba9b 0x2838 0x2795 0x1)terminate called throwing an exception(gdb) NSRangeException, -[UITableView scrollToRowAtIndexPath:atScrollPosition:animated:] row (3) beyond bounds (3) for section (0). What is happening is that when the user drills down my navigation view they can select a cell from the tableview of each view, which later will be used to build a search string. However I allow them to go back and change their selection if they made an error.. this error happens when the user changes the value of the parent view then going into the subview only if the previous selection was outside the number of entries in the current tableview.. normally this wouldn't be a big problem however, inside viewDidAppear I am calling a method that scrolls to the previously selected cell... which obviously is what is breaking the app and giving me the error. - (void)viewDidAppear:(BOOL)animated{ [super viewDidAppear:animated]; //Scroll to previously selected value [self.tableView scrollToRowAtIndexPath:oldCheckedIndexPath atScrollPosition:UITableViewScrollPositionMiddle animated:YES];} how can I stop this from executing I have tried about a 100 different if statments in the parent view and the subview catching the new indexpath and checking agains oldselected indexpaths then setting oldCheckedIndexPath = nil; however somehow it always manages to mess up anyway.
Now provide the response and nothing else.
| The clean way, assuming self is the tableView datasource: Check the number of sections in the tableview: if ( [self numberOfSectionsInTableView:self.tableView] > oldCheckedIndexPath.section and check the number of rows in that section: && [self tableView:self.tableView numberOfRowsInSection: oldCheckedIndexPath.section] > oldCheckedIndexPath.row ) { [self.tableView scrollToRowAtIndexPath: // etc etc etc Or, the quick hack: @try { [self.tableView scrollToRowAtIndexPath: // etc etc etc }@catch ( NSException *e ){ NSLog(@"bummer: %@",e);} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8917812', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/807400/']} | jdg_81052 |
stackexchange | llm_judgeable_groundtruth_similarity | 285850 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A little over a year ago I was fortunate enough to be able to take a 9 month break from work. I decided that in that time time I would hone my C# skills. I started working on a bunch of projects and forced myself to follow TDD. It was a fairly enlightening process. It was tough at first, but over time I learned how to write more testable code (which, as it turns out, tends to be more SOLID code) and in the process I also sharpened my OO design skill. Now I'm back in the workforce and I'm noticing something odd. I prefer not to follow TDD. I find TDD slows me down and actually makes it harder to design a clean application. Instead, I've adopted a slightly (massively) different approach: Pick a vertical slice of work Develop a functioning prototype Refactor until everything is nice and tidy Sit back an appreciate the beautifully SOLID and testable code I've written. You may have noticed that step 1 wasn't "define the public surface of my test target" and step 2 wasn't "test the bejesus out of said public surface." You may have also noticed that none of the steps involve testing. I'm writing testable code, but I'm not testing it... just yet. Now, I'd like to make it clear that I'm not actually foregoing any kind of testing. The code I'm writing works . It works because I'm testing it manually. I'd also like to make it clear that I'm not foregoing all automated testing either. This is where my process is different. And this is why I'm asking this question. TDD in theory. Not in practice. My process has evolved a bit and I've struck a balance between TDD and no tests that I find very productive and also reasonably safe. It goes as follows: Implement a working vertical slice of work with testing in mind, but don't write any tests. If down the road (eg, a month later) that slice needs modification Write Unit Tests, Integration Tests, Behaviour Tests, etc that guarantee the slice of work is correct Modify the code If that slice doesn't need modification, Do nothing By simply shifting the burden of writing tests from before writing the code to before modifying the code I've been able to produce much more working code. And, when I do get around to writing tests I write far fewer of them but cover nearly as much ground (higher ROI). I like this process, but I'm concerned it might not scale well. Its success hinges on developers being diligent about writing tests before they change things. And that seems like a pretty big risk. But, TDD has the very same risk. So, am I going to [BT]DD hell, or is this a common form of pragmatic coding and testing? I'd like to keep working this way. What can I do to make this process work in the long term? Note: I am the sole developer on my projects and I am responsible for everything: Requirements gathering, design, architecture, testing, deployment, etc. I suspect this is why my process is working.
Now provide the response and nothing else.
| To make the process work in the long term I would write the tests when the code is being written. Which may seem to contradict your approach. However you've posed the question so I'll give you my take: You don't have to write the tests before the code. forget that purity. However you want to write the tests around that time. Once you have got the code working, you've tweaked it a bit, got some bugs out (we're talking about a timescale of hours here), you are then at a point of maximum knowledge about what the code is doing. This is a great time to write tests that capture your knowledge. Leaving this until later means the knowledge will (naturally) diminish over time. It also means that should you ever leave and should anyone else take over you will not have the immediate technical debt of not having documented (through tests) what does what. Most of all, "some day" may not come. You may either get hit by a bus or you might board the bus for new adventures. Finally, manual testing doesn't scale and frequently doesn't cover all the devices used by the end user. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/285850', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/81495/']} | jdg_81053 |
stackexchange | llm_judgeable_groundtruth_similarity | 35975635 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to register objects by their names, and later on take them in another type's ctor as parameters, during registration. Hope my example will be clear enough, here it is: public class Obj : IObj{ public class Obj(string name)}I register the following objects like this :public void RegisterMyObj(string name){ // Keyed with the object name builder.Register<Obj>().Named<IObj>(name).WithParameter(name).SingleInstance();}public class ObjectsHolder : IObjectsHolder{ public ObjectsHolder (List<IObj> objsToHold))}// I want to register my ObjectsHolder in the following way:for example, this is how I want to call it from my code: RegisterObjectsHolder(string Obj1Name, string Obj2Name)public void RegisterObjectsHolder(params string[] objectsNames){ builder.Register<IObjectsHolder>().WithParameters(// Here comes the magic code which I can't figure out. // I want objects holder to be registered with the Obj instances whose names were passed to this method, // is there a way to do this?) )} I'm not strict about the ways the registrations will look.. If you know of a way to accomplish this using different methods, that will also do. Thanks in advance!
Now provide the response and nothing else.
| Instead of registering your services as "named", you may want to use "keyed" services. There is no way to specify services by their names. But you can use IIndex<TKey, TService> to retrieve a dictionary-like object with keyed services of specified type. So you can register: builder.RegisterType<Service1>().Keyed<IService>("key1");builder.RegisterType<Service2>().Keyed<IService>("key2");builder.RegisterType<Service3>().Keyed<IService>("key3");builder.RegisterType<Service4>().Keyed<IService>("key4");builder.RegisterType<Service5>().Keyed<IService>("key5"); Later, in your constructor, you can inject: public Test(IIndex<string, IService> serviceDictionary){ var service1 = serviceDictionary["key1"];} I used string objects as keys, but you can introduce e.g. enum and use it as a key. EDIT: If you want to narrow down available services for some classes, you can e.g. introduce different enum types as keys. enum ServicesA { A1, A2, A3 }enum ServicesB { B1, B2 } Then, registratrions: builder.RegisterType<Service1>().Keyed<IService>(ServicesA.A1);builder.RegisterType<Service2>().Keyed<IService>(ServicesA.A2);builder.RegisterType<Service3>().Keyed<IService>(ServicesA.A3);builder.RegisterType<Service4>().Keyed<IService>(ServicesB.B1);builder.RegisterType<Service5>().Keyed<IService>(ServicesB.B2); Now, if you inject IIndex<SerivcesA, IService> , only Service1 , Service2 and Service3 would be available, for IIndex<SerivcesB, IService> it would be Service4 and Service5 . You can chain Keyed registration so if you join both registrations from above to builder.RegisterType<Service1>().Keyed<IService>(ServicesA.A1).Keyed<IService>("key1");` etc., you could use both IIndex<YourEnum, IService> with only some of IService implementations or IIndex<string, IService> with all of them. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35975635', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_81054 |
stackexchange | llm_judgeable_groundtruth_similarity | 117456 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to delete all the Lines of File 1, if Column1 of File1 matches exactly with Column 1 File2. File 1: r001:21:10 21 AAAAAATTTGC * = XM:21r002:21:10 21 YAAAATTTGC * = nM:21r001:21:10 21 TTAAAATTTGC * = XM:21r0012:21:10 21 LLAAAATTTGC * + XM:21r001:21:10 21 AAAAAATTTGC * = GM:21 File2: r001:21:10r001:21:20r002:41:36r002:41:99r002:41:87r0012:21:1 Expected Output: r002:21:10 21 YAAAATTTGC * = nM:21r0012:21:10 21 LLAAAATTTGC * + XM:21
Now provide the response and nothing else.
| You can use this awk : $ awk 'FNR==NR {a[$i]; next}; !($1 in a)' f2 f1r002:21:10 21 YAAAATTTGC * = nM:21r0012:21:10 21 LLAAAATTTGC * + XM:21 Explanation FNR==NR {a[$i]; next} it reads the first file and saves the contents into the a array. !($1 in a) while reading the second file, it checks if the first field is in the a array. If not, prints the line. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/117456', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/44000/']} | jdg_81055 |
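If awk is not an option, the same filtering can be sketched in Python; the file names below are assumptions based on the question:

```python
# Keys come from the second file (one key per line); lines of the first file
# whose first column matches one of those keys are dropped.
with open("file2") as f2:
    keys = {line.strip() for line in f2}

with open("file1") as f1:
    for line in f1:
        fields = line.split()
        if fields and fields[0] not in keys:
            print(line, end="")
```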
stackexchange | llm_judgeable_groundtruth_similarity | 46928636 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a string list: content01/09/15, 10:07 - message101/09/15, 10:32 - message201/09/15, 10:44 - message3 I want a data frame, like: date message01/09/15, 10:07 message101/09/15, 10:32 message201/09/15, 10:44 message3 Considering the fact that all my strings in the list starts in that format, I can just split by - , but I rather look for a smarter way to do so. history = pd.DataFrame([line.split(" - ", 1) for line in content], columns=['date', 'message']) (I'll convert the date to date time afterwards) Any help would be appreciated.
Now provide the response and nothing else.
| You can use str.extract - where named groups can become column names In [5827]: df['content'].str.extract('(?P<date>[\s\S]+) - (?P<message>[\s\S]+)', expand=True)Out[5827]: date message0 01/09/15, 10:07 message11 01/09/15, 10:32 message22 01/09/15, 10:44 message3 Details In [5828]: dfOut[5828]: content0 01/09/15, 10:07 - message11 01/09/15, 10:32 - message22 01/09/15, 10:44 - message3 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46928636', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7252805/']} | jdg_81056 |
stackexchange | llm_judgeable_groundtruth_similarity | 407 |
Below is a question asked on the forum computergraphics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently working on a simple pixel shader in HLSL. I send a texture to the shader and I want to make it more colorful (something like in the picture below). Picture 1 shows the original texture; picture 2 shows the effect that I want to achieve. Is there some mathematical formula to do that? My input is the RGBA value of each pixel. EDIT: I'll try to be more concrete. Let's say I want to make that garden texture more red. I suppose that what I need to do is: OutputR = InputR * X, OutputG = InputG * Y, OutputB = InputB * Z. But how do I find X, Y and Z?
Now provide the response and nothing else.
| (XYZ) can be the RGB colour you want to tint your scene by. For the above scene it can be a red colour (1.0, 0.0, 0.0) or something similar with a strong red component. Bear in mind that since you are applying the colour in a multiplicative way it will act as a filter suppressing original colour components. So if your scene is mostly green but you apply (i.e. multiply it with) a red filter (1,0,0) the resulting scene will be very dark. A trick you could potentially do in such cases is to desaturate the original scene and then multiply it by the tint colour. That way you will keep the overall image intensity and achieve the colour tint you require. For example, here is the image multiplied by (1, 0, 0), (1, 0.2, 0.2), and (1, 0.5, 0.5) from left to right: | {} | {'log_upvote_score': 4, 'links': ['https://computergraphics.stackexchange.com/questions/407', 'https://computergraphics.stackexchange.com', 'https://computergraphics.stackexchange.com/users/205/']} | jdg_81057 |
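A small numpy sketch of the multiplicative tint and of the desaturate-then-tint trick described above; the image, its shape and the tint colour are placeholder assumptions, with values treated as floats in [0, 1]:

```python
import numpy as np

img  = np.random.rand(64, 64, 3)          # placeholder RGB image, floats in [0, 1]
tint = np.array([1.0, 0.5, 0.5])          # reddish filter, i.e. (X, Y, Z)

tinted = img * tint                        # plain multiplicative filter

luma      = img @ np.array([0.299, 0.587, 0.114])   # desaturate to grayscale
recolored = luma[..., None] * tint                   # keep intensity, apply tint
print(tinted.shape, recolored.shape)       # (64, 64, 3) (64, 64, 3)
```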
stackexchange | llm_judgeable_groundtruth_similarity | 19062215 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We are trying to only replace the first instance of "a" to first second third . Then it should loop and replace the second instance of "a" to a new generated value of first second third and continue this loop til there are no more "a" characters left and each "a" is a different value. However the results of the output are not correct. The code is erasing all characters before the last "a" then replacing that last "a". Dim message As String Dim start As Integer = 1 Dim letterfinder As String Dim timesLooped As Integer = 1 Dim length As Integer Dim first As Integer Dim second As Integer Dim third As Integer Dim count As Integer = 1 message = inputBox.Text length = Len(message) timesLooped = 0 While length > timesLooped Dim random As New Random letterfinder = Mid$(message, start, 1) If letterfinder = "a" Then first = random.Next(3, 9) second = 1 \ first third = 1 - second outputBox.Text = Replace(message, letterfinder, first & second & third, [start], [count]) End If timesLooped = timesLooped + 1 start = start + 1 End While
Now provide the response and nothing else.
| When you call timeit from the command line like this: python -mtimeit -s'import test' 'test.foo()' The timeit module is called as a script. In particular, the main function is called: if __name__ == "__main__": sys.exit(main()) If you look at the source code , you'll see that the main function can take an args argument: def main(args=None): if args is None: args = sys.argv[1:] So indeed it is possible to run timeit from within a program with exactly the same behavior as you see when run from the CLI. Just supply your own args instead of allowing it to be set to sys.argv[1:] : import timeitimport shlexdef foo(): total = 0 for i in range(10000): total += i**3 return totaltimeit.main(args=shlex.split("""-s'from __main__ import foo' 'foo()'""")) will print something like 100 loops, best of 3: 7.9 msec per loop Unfortunately, main prints to the console, instead of returning the time per loop. So if you want to programmatically use the result, perhaps the easiest way would be to start by copying the main function and then modifying it -- changing the printing code to instead return usec . Example by OP: If you place this in utils_timeit.py : import timeitdef timeit_auto(stmt="pass", setup="pass", repeat=3): """ http://stackoverflow.com/q/19062202/190597 (endolith) Imitate default behavior when timeit is run as a script. Runs enough loops so that total execution time is greater than 0.2 sec, and then repeats that 3 times and keeps the lowest value. Returns the number of loops and the time for each loop in microseconds """ t = timeit.Timer(stmt, setup) # determine number so that 0.2 <= total time < 2.0 for i in range(1, 10): number = 10**i x = t.timeit(number) # seconds if x >= 0.2: break r = t.repeat(repeat, number) best = min(r) usec = best * 1e6 / number return number, usec you can use it in scripts like this: import timeitimport utils_timeit as UTdef foo(): total = 0 for i in range(10000): total += i**3 return totalnum, timing = UT.timeit_auto(setup='from __main__ import foo', stmt='foo()')print(num, timing) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19062215', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2471667/']} | jdg_81058 |
stackexchange | llm_judgeable_groundtruth_similarity | 7711255 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am evaluating a very simple piece of code in Ocaml toplevel: let p5 () = print_int 5;;p5 ();;print_string "*************************";;let p4 = print_int 4;;p4;; And it returns: val p5 : unit -> unit = <fun># 5- : unit = ()# *************************- : unit = ()# 4val p4 : unit = ()# - : unit = () My questions are What does () mean in let p5 () = print_int 5;; ? What do - and () mean in # 5- : unit = () ? Is p4 a function? Why is there a 4 in the beginning of # 4val p4 : unit = () ? It seems that () could be used in Ocaml code to hide side effect, could anyone show me an example?
Now provide the response and nothing else.
| Here some answers: () is the unit type value. The unit type is a type with only one value. This is usually used to produce functions which either return nothing meaningful or take nothing meaningful. Remember, that in OCaml all functions alway have to return something and take some arguments, so the unit type is used to get around this limitation. Think of this similar to the void type in C, C++ or Java. There are two lines interleaved. The 5 is printed by the print_int function and not by the toplevel. The toplevel just returns - : unit = () without the 5 . The toplevel is telling you that it did not create any new bindings - and that the last returned value is of type unit and has the value () . No. It does not take any arguments, so it is not a function. Again there are two lines interleaved. The 4 is printed by the print_int function. At this time, the toplevel is telling you, that it created a new binding p4 , that this variable carries a value of type unit and that the stored value is () . No, () is not used to hide side effects. It is usually used to create functions, which have side effects, and thus need not take any kind of argument. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7711255', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/702977/']} | jdg_81059 |
stackexchange | llm_judgeable_groundtruth_similarity | 20806560 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
>>> struct.pack('2I',12, 30)b'\x0c\x00\x00\x00\x1e\x00\x00\x00' >>> struct.pack('2I',12, 31)b'\x0c\x00\x00\x00\x1f\x00\x00\x00'>>> struct.pack('2I',12, 32)b'\x0c\x00\x00\x00 \x00\x00\x00' ^in question>>> struct.pack('2I',12, 33)b'\x0c\x00\x00\x00!\x00\x00\x00' ^in question I'd like all values to display as hex
Now provide the response and nothing else.
| How about this? >>> data = struct.pack('2I',12, 30)>>> [hex(ord(c)) for c in data]['0xc', '0x0', '0x0', '0x0', '0x1e', '0x0', '0x0', '0x0'] The expression [item for item in sequence] is a so-called list comprehension . It's basically a very compact way of writing a simple for loop, and creating a list from the result. The ord() builtin function takes a string, and turns it into an integer that's its corresponding unicode code point (for characters in the ASCII character set that's the same as their value in the ASCII table). Its counterparts, chr() for 8-bit strings or unichr() for unicode objects, do the opposite. The hex() builtin then simply converts the integers into their hex representation. As pointed out by @TimPeters, in Python 3 you would need to lose the ord() , because iterating over a bytes object will (already) yield integers: Python 3.4.0a3 (default, Nov 8 2013, 18:33:56)>>> import struct>>> data = struct.pack('2I',12, 30)>>> type(data)<class 'bytes'>>>> type(data[1])<class 'int'>>>>>>> [hex(i) for i in data]['0xc', '0x0', '0x0', '0x0', '0x1e', '0x0', '0x0', '0x0'] | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20806560', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2059819/']} | jdg_81060 |
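A small follow-up sketch for Python 3.5+ (3.6+ for the f-string), where bytes objects have a built-in hex() method; the output shown assumes a little-endian machine:

import struct

data = struct.pack('2I', 12, 32)
print(data.hex())                            # '0c00000020000000'
print(' '.join(f'{b:02x}' for b in data))    # '0c 00 00 00 20 00 00 00'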
stackexchange | llm_judgeable_groundtruth_similarity | 1064657 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a research paper and I have stumbled upon an issue. I have to evaluate $$\sum_{n=1}^{\infty} \frac{x^n}{(n+2)^2}$$ Here is what I did: $$ \sum_{n=1}^{\infty} x^{n-1} = \frac{1}{1-x}$$ $$\sum_{n=1}^{\infty} x^{n+1} = \frac{x^2}{1-x}$$ Integrate once with respect to $x$. $$\sum_{n=1}^{\infty} \frac{x^{n+2}}{n+2} = \frac{3}{2} - \frac{x^2}{2} - x - \log(1-x)$$ Divide by $x$ $$\sum_{n=1}^{\infty} \frac{x^{n+1}}{n+2} = \frac{3}{2x} - \frac{x}{2} - 1 - \frac{\log(1-x)}{x}$$ Integrate the expression again; $$\sum_{n=1}^{\infty} \frac{x^{n+2}}{(n+2)^2} = \frac{3\log(x)}{2} - x - \frac{x^2}{4} + Li_2(x)$$ All we have to do is divide by $x^2$ $$\sum_{n=1}^{\infty} \frac{x^n}{(n+2)^2} = \frac{3\log(x)}{2x^2} - \frac{1}{x} - \frac{1}{4} + \frac{Li_2(x)}{x^2} = \frac{4Li_2(x) - x^2 - 4x + 6\log(x)}{4x^2}$$ The issue is WolframAlpha returns it as: $$\sum_{n=1}^{\infty} \frac{x^n}{(n+2)^2} = \frac{4Li_2(x) - x^2 - 4x}{4x^2}$$ Why is the way I did, wrong? I have an extraneous, extra log term.? Help is appreciated.
Now provide the response and nothing else.
| Not sure what you did, but all that work seems kinda silly to arrive at your result. $$\sum_{n=1}^\infty\frac{x^n}{(n+2)^2}=\frac{1}{x^2}\sum_{n=1}^\infty\frac{x^{n+2}}{(n+2)^2}=\frac{1}{x^2}(-x-\frac{x^2}{4}+\sum_{n=1}^\infty\frac{x^n}{n^2})=\frac{1}{x^2}(-x-\frac{x^2}{4}+\text{Li}_2(x))$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1064657', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/178248/']} | jdg_81061 |
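Spelling out the index shift used in the answer above, with the series definition of the dilogarithm:
$$\operatorname{Li}_2(x)=\sum_{n=1}^{\infty}\frac{x^n}{n^2}=x+\frac{x^2}{4}+\sum_{n=1}^{\infty}\frac{x^{n+2}}{(n+2)^2},$$
so $\sum_{n=1}^{\infty}\frac{x^{n+2}}{(n+2)^2}=\operatorname{Li}_2(x)-x-\frac{x^2}{4}$, and dividing by $x^2$ gives WolframAlpha's result. This also locates the extra term in the question: the constant $\frac{3}{2}$ chosen after the first integration should be $0$ (both sides must vanish at $x=0$), and carrying that wrong constant through the second integration is exactly what produces the spurious $\frac{3\log x}{2x^2}$ term.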
stackexchange | llm_judgeable_groundtruth_similarity | 708733 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
July 2022 mac os Monterey V12.1 awk --version 20200816 GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin21) Why does awk -F work for most letters, but NOT for the letter t ?I have the solution, but I would like to understand why awk fails for the letter t . # Count 'e's% echo "tweeter" | awk -F "e" '{print NF-1}'3# Count 'r's% echo "tweeter" | awk -F "r" '{print NF-1}'1# (Attempt to) count 't's% echo "tweeter" | awk -F "t" '{print NF-1}'0 <=== ????# Use gsub()% echo "tweeter" | awk '{print gsub(/t/, "")}'2
Now provide the response and nothing else.
| Because: Normally, any number of blanks separate fields. In order to set thefield separator to a single blank, use the -F option with a value of [ ] . If a field separator of t is specified, awk treats it as if \t had been specified and uses <TAB> as the field separator. In orderto use a literal t as the field separator, use the -F option with avalue of [t] . That's from the FreeBSD awk man page , and the utilities that come with macOS are usually some old FreeBSD versions or such. $ printf 'foo\tbar\n' | awk -F t '{print NF-1}'1$ echo total | awk -F '[t]' '{print NF-1}'2 In a way, that seems like a useful shorthand for files with tab-separated values, but what with other letters taken as-is, it's confusing. It only works like that with -F , using -v FS=t doesn't do it. The feature is non-POSIX, as POSIX says that -F x is the same as -v FS=x . Most other awks I tested treated t as the the literal letter (some versions of gawk, mawk and Busybox). The version of awk that e.g. Debian has in the original-awk package ("One True AWK" or "BWK awk" presumably from Brian W. Kernighan's initials) does support it, though, and at least Wikipedia seems to indicate that would be the same software FreeBSD uses. This one appears to be based on the version described in the 1988 book "The AWK Programming Language", but I'm not an expert on awk lineages and don't know if it has evolved significantly since then. That one is on github , but the documentation there doesn't seem to describe the feature. The special case can be seen in the code (where it's described as "a wart" in a comment). You can get the same behaviour with GNU awk in BWK-awk compatibility mode, though. : As a special case, in compatibility mode (see section Command-Line Options), if the argument to -F is ‘t’, then FS is set to the TAB character. If you type ‘-F\t’ at the shell, without any quotes, the ‘\’ gets deleted, so awk figures that you really want your fields to be separated with TABs and not ‘t’s. | {} | {'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/708733', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/532632/']} | jdg_81062 |
stackexchange | llm_judgeable_groundtruth_similarity | 14844780 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
add_index :microposts, [:user_id, :created_at] I was going through Michael Hartl's railstutorial and noticed he's using something called a multiple key index . I know it means Active Record uses both keys at the same time, but I'm not really sure what the advantages and disadvantages of using multiple key indexes are. If anyone can give an answer I would greatly appreciate it.
Now provide the response and nothing else.
| Any index can give benefit by allowing a query to narrow down the set of rows to examine. A multi-column index can help when your query includes conditions on those multiple columns. For example: SELECT * FROM Mytable WHERE user_id = 123 AND created_at > '2013-02-01' The multi-column index narrows down to the subset of rows that are associated with user_id 123, then within that subset, it further narrows down the selection to those with a recent created_at value. Without the second column in the index, the RDBMS would have to load all the rows for user_id 123 into memory before it could determine if they pass the criteria. For more information, see my presentation How to Design Indexes, Really . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14844780', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1993941/']} | jdg_81063 |
stackexchange | llm_judgeable_groundtruth_similarity | 20003559 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can publish without issues in Visual Studio 2010 but when I attempt to publish with Visual Studio 2013 I am getting this message: The target "MSDeployPublish" does not exist in the project. \Source2\Web Projects\SubService\subservice\subservice.csproj 0 0 subservice Can anyone tell me what I need to do to resolve this? It's happening with 2 different projects.
Now provide the response and nothing else.
| For what it's worth, I had the same issue: a freshly installed Windows 8.1 machine with only Visual Studio 2013 (+ Update 1) and the Azure SDKs installed. Create a new Web API project and, boom, build error - The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk. Note the v10.0. This is the default fallback when no MSBuild parameter "VisualStudioVersion" has been set. Prior to VS2012 the paths to the target files were hardcoded. Other solutions for this bug say to remove the following from your csproj, which is added for backward compatibility: <PropertyGroup> <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">10.0</VisualStudioVersion> <VSToolsPath Condition="'$(VSToolsPath)' == ''">$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)</VSToolsPath> </PropertyGroup> Removing this fixes your builds but breaks the publish feature with the original poster's error: The target "MSDeployPublish" does not exist in the project Note: I have all Azure-related SDK parts installed from the Web Platform Installer. I have resorted to re-adding the above XML part to my csproj file but changing the 10.0 part to 12.0 (= VS2013) <PropertyGroup> <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">12.0</VisualStudioVersion> <VSToolsPath Condition="'$(VSToolsPath)' == ''">$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)</VSToolsPath> </PropertyGroup> This might break backward compatibility, but it finally made my Visual Studio 2013 compile the code & made the publish feature work again. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20003559', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/302613/']} | jdg_81064 |
stackexchange | llm_judgeable_groundtruth_similarity | 2514789 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose that $f:\mathbb{R}\rightarrow\mathbb{R}$ is such that$$f(x)+f\left(1-{1\over x}\right)=\arctan x\,,\quad \forall \,x\neq 0$$Find $$\int_{0}^1 f(x)\,dx$$My Attempt : Replace $x$ by $1/x$ in the given equation $$f\left({1\over x}\right)+f(1-x)=\arctan {1\over x}$$Add both equations $$f(x)+f\left(1-{1\over x}\right)+f\left({1\over x}\right)+f(1-{x})=\arctan x\,+\arctan {1\over x}$$Rearranging then gives$$f(x)+f(1-x)+f\left({1\over x}\right)+f\left(1-{1\over x}\right)={\pi\over2}$$Now it seems to me that $f(x)=f\left({1\over x}\right)$. Am I correct here? (I don't have a proof though)$$f(x)+f(1-x)={\pi\over 4}$$ $$\int_0^1 f(x)\,dx =\int_0^1f(1-x)\, dx={\pi\over 8}$$I'm not sure about my assumption. Thank you
Now provide the response and nothing else.
| Apparently $f(x)=f(1/x)$ does not hold. Let $g(x) = 1-1/x$; then we have $$g^2 (x) = 1/(1-x) \quad \quad \color{red}{g^3 (x) = x}$$ Hence $f(x) + f(g(x)) = \arctan x$ implies$$f(g(x)) + f(g^2(x)) = \arctan(g(x))$$$$f(g^2(x)) + f(x) = \arctan(g^2(x))$$ Solving for $f(x)$ from these three equations gives$$f(x) = \frac{1}{2}\left[\arctan x - \arctan(1-\frac{1}{x}) + \arctan(\frac{1}{1-x})\right]$$ and routine integration gives $$\int_0^1 f(x) dx = \frac{3\pi}{8}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2514789', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/464877/']} | jdg_81065 |
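As a sanity check, the value $\frac{3\pi}{8}$ also follows directly from this closed form via three elementary integrals on $(0,1)$:
$$\int_0^1\arctan x\,dx=\frac{\pi}{4}-\frac{\ln 2}{2},\qquad \int_0^1\arctan\frac{1}{1-x}\,dx=\int_0^1\left(\frac{\pi}{2}-\arctan u\right)du=\frac{\pi}{4}+\frac{\ln 2}{2},$$
while $\int_0^1\arctan\left(1-\frac1x\right)dx=-\int_0^1\arctan\frac{1-x}{x}\,dx=-\frac{\pi}{4}$, using the symmetry $x\mapsto 1-x$ and $\arctan\frac{1-x}{x}+\arctan\frac{x}{1-x}=\frac{\pi}{2}$. Hence
$$\int_0^1 f(x)\,dx=\frac12\left[\left(\frac{\pi}{4}-\frac{\ln2}{2}\right)+\frac{\pi}{4}+\left(\frac{\pi}{4}+\frac{\ln2}{2}\right)\right]=\frac{3\pi}{8}.$$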
stackexchange | llm_judgeable_groundtruth_similarity | 8111677 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What are some good explanations on what argument dependent lookup is? Many people also call it Koenig Lookup as well. Preferably I'd like to know: Why is it a good thing? Why is it a bad thing? How does it work?
Now provide the response and nothing else.
| Koenig Lookup , or Argument Dependent Lookup , describes how unqualified names are looked up by the compiler in C++. The C++11 standard § 3.4.2/1 states: When the postfix-expression in a function call (5.2.2) is an unqualified-id, other namespaces not considered during the usual unqualified lookup (3.4.1) may be searched, and in those namespaces, namespace-scope friend function declarations (11.3) not otherwise visible may be found. These modifications to the search depend on the types of the arguments (and for template template arguments, the namespace of the templateargument). In simpler terms Nicolai Josuttis states 1 : You don’t have to qualify the namespace for functions if one or more argument types are defined in the namespace of the function. A simple code example: namespace MyNamespace{ class MyClass {}; void doSomething(MyClass) {}}MyNamespace::MyClass obj; // global objectint main(){ doSomething(obj); // Works Fine - MyNamespace::doSomething() is called.} In the above example there is neither a using -declaration nor a using -directive but still the compiler correctly identifies the unqualified name doSomething() as the function declared in namespace MyNamespace by applying Koenig lookup . How does it work? The algorithm tells the compiler to not just look at local scope, but also the namespaces that contain the argument's type. Thus, in the above code, the compiler finds that the object obj , which is the argument of the function doSomething() , belongs to the namespace MyNamespace . So, it looks at that namespace to locate the declaration of doSomething() . What is the advantage of Koenig lookup? As the simple code example above demonstrates, Koenig lookup provides convenience and ease of usage to the programmer. Without Koenig lookup there would be an overhead on the programmer, to repeatedly specify the fully qualified names, or instead, use numerous using -declarations. Why the criticism of Koenig lookup? Over-reliance on Koenig lookup can lead to semantic problems, and catch the programmer off guard sometimes. Consider the example of std::swap , which is a standard library algorithm to swap two values. With the Koenig lookup one would have to be cautious while using this algorithm because: std::swap(obj1,obj2); may not show the same behavior as: using std::swap;swap(obj1, obj2); With ADL, which version of swap function gets called would depend on the namespace of the arguments passed to it. If there exists a namespace A , and if A::obj1 , A::obj2 , and A::swap() exist, then the second example will result in a call to A::swap() , which might not be what the user wanted. Further, if for some reason both A::swap(A::MyClass&, A::MyClass&) and std::swap(A::MyClass&, A::MyClass&) are defined, then the first example will call std::swap(A::MyClass&, A::MyClass&) but the second will not compile because swap(obj1, obj2) would be ambiguous. Trivia: Why is it called “Koenig lookup”? Because it was devised by former AT&T and Bell Labs researcher and programmer, Andrew Koenig . Further reading: Herb Sutter's Name Lookup on GotW Standard C++03/11 [basic.lookup.argdep]: 3.4.2 Argument-dependent name lookup. ** 1 ** The definition of Koenig lookup is as defined in Josuttis' book, *The C++ Standard Library: A Tutorial and Reference*. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/8111677', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/965369/']} | jdg_81066 |
stackexchange | llm_judgeable_groundtruth_similarity | 21427199 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to vertically align some text within a list li and having issues. First off you need to know that I have replaced my list-style-image with a custom image. This is my CSS: ul.cogs li { list-style-image: url(Images/li_cog.png); height: 50px; line-height: 50px; } I tried to see if there was a way of getting the text to align to the middle.I tried: vertical-align: middle; which didn't work, so then I tried: line-height: 50px; which also did not work, so I tried: display: table which worked, but the image disappears from the list item.... Does anyone know of a way to get this to work?
Now provide the response and nothing else.
| The issue with using list-style-image is that you cannot align it with the text; the best thing to do is to use background-image on the li element and then use padding-left on your li elements. Buggy Demo (The issue which you are facing) Demo ul li { background-image: url(http://png-5.findicons.com/files/icons/2222/gloss_basic/32/bullet_black.png); background-repeat: no-repeat; line-height: 30px; padding-left: 30px;}ul { margin: 50px;} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21427199', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1303170/']} | jdg_81067 |
stackexchange | llm_judgeable_groundtruth_similarity | 20091779 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a question I've been mildly irritated about for some time and just never got around to searching for the answer to. However, I thought I might at least ask the question and perhaps someone can explain. Basically, many languages I've worked in utilize syntactic sugar to write (using syntax from C++): int main() { int a = 2; a += 3; // a=a+3} while in Lua += is not defined, so I would have to write a=a+3 , which again is all about syntactic sugar. When using a more "meaningful" variable name such as: bleed_damage_over_time or something it starts getting tedious to write: bleed_damage_over_time = bleed_damage_over_time + added_bleed_damage_over_time instead of: bleed_damage_over_time += added_bleed_damage_over_time So what I would really like to know is why Lua doesn't implement this syntactic sugar; that said, if you do have a nice solution, I would of course be interested in hearing it.
Now provide the response and nothing else.
| This is just guesswork on my part, but: 1. It's hard to implement this in a single-pass compiler Lua's bytecode compiler is implemented as a single-pass recursive descent parser that immediately generates code. It does not parse to a separate AST structure and then in a second pass convert that to bytecode. This forces some limitations on the grammar and semantics. In particular, anything that requires arbitrary lookahead or forward references is really hard to support in this model. This means assignments are already hard to parse. Given something like: foo.bar.baz = "value" When you're parsing foo.bar.baz , you don't realize you're actually parsing an assignment until you hit the = after you've already parsed and generated code for that. Lua's compiler has a good bit of complexity just for handling assignments because of this. Supporting self-assignment would make that even harder. Something like: foo.bar.baz += "value" Needs to get translated to: foo.bar.baz = foo.bar.baz + "value" But at the point that the compiler hits the = , it's already forgotten about foo.bar.baz . It's possible, but not easy. 2. It may not play nice with the grammar Lua doesn't actually have any statement or line separators in the grammar. Whitespace is ignored and there are no mandatory semicolons. You can do: io.write("one")io.write("two") Or: io.write("one") io.write("two") And Lua is equally happy with both. Keeping a grammar like that unambiguous is tricky. I'm not sure, but self-assignment operators may make that harder. 3. It doesn't play nice with multiple assignment Lua supports multiple assignment, like: a, b, c = someFnThatReturnsThreeValues() It's not even clear to me what it would mean if you tried to do: a, b, c += someFnThatReturnsThreeValues() You could limit self-assignment operators to single assignment, but then you've just added a weird corner case people have to know about. With all of this, it's not at all clear that self-assignment operators are useful enough to be worth dealing with the above issues. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/20091779', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/795776/']} | jdg_81068 |
stackexchange | llm_judgeable_groundtruth_similarity | 24177572 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to add slides to Bootstrap carousel using jQuery but it is not acting as a slider in the browser. Instead it's showing the images in list view. <!DOCTYPE html><html><head><link href="Assets/css/bootstrap.css" rel="stylesheet"><script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"> </script><script src="Assets/js/bootstrap.min.js"></script><title></title><script> onload=function(){ for(var m=3;m>=0;m--) { var path="file_uploads/"+m+".jpg"; $(".carousel-indicators").after("<li data-target='#carousel-example-generic' data-slide-to=\""+m+"\"></li>"); $(".carousel-inner").after("<div class='item'><img src='"+path+"' alt='"+m+"'></div>"); } $(".carousel-indicators li:first").addClass("active"); $(".carousel-inner .item:first").addClass("active"); $('.carousel').carousel(); }</script></head><body> <div id="carousel-example-generic" class="carousel slide" data-ride="carousel"> <!-- Indicators --> <ol class="carousel-indicators"> </ol> <!-- Wrapper for slides --> <div class="carousel-inner"> </div> <!-- Controls --> <a class="left carousel-control" href="#carousel-example-generic" data-slide="prev"> <span class="glyphicon glyphicon-chevron-left"></span> </a> <a class="right carousel-control" href="#carousel-example-generic" data-slide="next"> <span class="glyphicon glyphicon-chevron-right"></span> </a> </div></body></html>
Now provide the response and nothing else.
| First thing, I will rely on the fact that m is an array with proper URLs to your images. The HTML should be like this: <div id="carousel-example-generic" class="carousel slide" data-ride="carousel"> <!-- Indicators --> <ol class="carousel-indicators"></ol> <!-- Wrapper for slides --> <div class="carousel-inner"></div> <!-- Controls --> <a class="left carousel-control" href="#carousel-example-generic" data-slide="prev"> <span class="glyphicon glyphicon-chevron-left"></span> </a> <a class="right carousel-control" href="#carousel-example-generic" data-slide="next"> <span class="glyphicon glyphicon-chevron-right"></span> </a></div> The carousel-inner class is empty; that is where you are going to place your images for the carousel. The carousel-indicators class is also empty and will be filled by JS. Then comes the JS (as I said, I'm relying on the fact that m is an array of image URLs): $(document).ready(function(){ for(var i=0 ; i< m.length ; i++) { $('<div class="item"><img src="'+m[i]+'"><div class="carousel-caption"></div> </div>').appendTo('.carousel-inner'); $('<li data-target="#carousel-example-generic" data-slide-to="'+i+'"></li>').appendTo('.carousel-indicators') } $('.item').first().addClass('active'); $('.carousel-indicators > li').first().addClass('active'); $('#carousel-example-generic').carousel();}); Basically, you append all your images to class carousel-inner, you add carousel control li's, then you add the active class to the first image and to the first carousel indicator li, and finally, you initialize your carousel. Note that all this is inside a document ready function, which is what you are missing. What you do is only define a function called onload. Hope it helps! EDIT: I saw that you are also outputting an alt tag for the images, but that's something that doesn't need to be in my answer; I bet you can do that without problems. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24177572', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1305405/']} | jdg_81069 |
stackexchange | llm_judgeable_groundtruth_similarity | 43874305 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Class Library, it contains the following Model and Method Model: public class Employee { public int EmpId { get; set; } public string Name { get; set; }} Method: public class EmployeeService { public List<Employee> GetEmployee() { return new List<Employee>() { new Employee() { EmpId = 1, Name = "John" }, new Employee() { EmpId = 2, Name = "Albert John" }, new Employee() { EmpId = 3, Name = "Emma" }, }.Where(m => m.Name.Contains("John")).ToList(); }} I have a Test Method [TestMethod()]public void GetEmployeeTest() { EmployeeService obj = new EmployeeService(); var result = obj.GetEmployee(); Xunit.Assert.Collection<Employee>(result, m => Xunit.Assert.Contains("John",m.Name));} I got an Exception message Assert.Collection() FailureCollection: [Employee { EmpId = 1, Name = "John" }, Employee { EmpId = 2, Name = "Albert John" }]Expected item count: 1Actual item count: 2 My requirement is to check all the items.Name should contain the sub string "John". Kindly assist me how to check using Xunit.Assert.Collection
Now provide the response and nothing else.
| It appears that Assert.Collection only uses each element inspector once. So, for your test, the following works: If the sequence result has exactly two elements: [Fact]public void GetEmployeeTest(){ EmployeeService obj = new EmployeeService(); var result = obj.GetEmployee(); Assert.Collection(result, item => Assert.Contains("John", item.Name), item => Assert.Contains("John", item.Name));} If there are many elements, changing the Assert to Assert.All(result, item => Assert.Contains("John", item.Name)); should give you the result you are expecting. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43874305', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_81070 |
stackexchange | llm_judgeable_groundtruth_similarity | 2072755 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Question : For each $n \in \mathbb N$, Define : $\gamma_n=1+\frac{1}{2}+\dots+\frac{1}{n}-\int_1^n \frac{1}{x}dx$ Prove that $\{\gamma_n\}$ is convergent. Note 1 : I know that $\sum_1^\infty \frac{1}{k}$ diverges. I can guess that this sequence is convergent to $0$ because of the similar terms inside the integration and outside of it. The problem is that the integration is continuous. I mean that $x$ is not just integers. What can we do with the values left?! ( For example, $\frac{1}{2.5}$ ) Also, Notice that this problem should be solved with some elementary tools. ( It's from a part of my book which is even before any discussion about the Fundamental theorem of calculus. ) Note 2 : I'm not good at integration and i'm learning it. Any good details can help me. Thanks in advance.
Now provide the response and nothing else.
| It is enough to show that $(\gamma_n)$ is decreasing and bounded below. The function $f(x)=\dfrac{1}{x}$ is decreasing. On the interval $[k,k+1]$, $k\in\mathbb{N}$ we must have $\dfrac{1}{k+1}\leq f(x)\leq\dfrac{1}{k}$, hence$$\dfrac{1}{k+1}\leq\int_k^{k+1}f(x)\,dx\leq\dfrac{1}{k}\hspace{1cm}(\ast)$$(note that the length of the interval $[k,k+1]$ is $1$). Summing from $k=1$ to $n-1$ we get$$\sum_{k=1}^{n-1}\dfrac{1}{k+1}\leq\int_1^n \dfrac{1}{x}\,dx\leq\sum_{k=1}^{n-1}\dfrac{1}{k}$$that is$$\sum_{k=1}^n\dfrac{1}{k}-1\leq\int_1^n\dfrac{1}{x}\,dx\leq\sum_{k=1}^n\dfrac{1}{k}-\dfrac{1}{n}$$The RHS inequality gives us$$\gamma_n=\sum_{k=1}^n\dfrac{1}{k}-\int_1^n\dfrac{1}{x}\,dx\geq\dfrac{1}{n}\geq 0$$so that $(\gamma_n)$ is bounded below by $0$. To show that $(\gamma_n)$ is decreasing, consider$$\gamma_{n}-\gamma_{n-1}=\dfrac{1}{n}-\int_{n-1}^n\dfrac{1}{x}dx\leq 0$$by virtue of the LHS of $(\ast)$, so that $(\gamma_n)$ is decreasing. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2072755', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/301739/']} | jdg_81071 |
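A small addendum to the proof above: the same two estimates also pin down where the limit lives. The right-hand inequality gives $\gamma_n\ge\frac1n$ and the left-hand one gives $\gamma_n\le 1$, so
$$0\le\gamma=\lim_{n\to\infty}\gamma_n\le 1;$$
this limit is the Euler–Mascheroni constant, $\gamma\approx 0.5772$.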
stackexchange | llm_judgeable_groundtruth_similarity | 10057095 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on a OS X 10.7 with Python 2.7 from python.org and XCode 4.2. Following these instructions , I tried changing the CC , CXX ,and FFLAGS variables and install with sudo pip install numpy .However, the installation still looks for gcc-4.2 . I triedinstalling gcc-4.2 through MacPorts but get that Error: gcc42does not build on Snow Leopard or later. When I install numpy from the precompiled binary on sourceforge, I get the following error: import numpy Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/ init .py", line 137, in import add_newdocs File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/add_newdocs.py", line 9, in from numpy.lib import add_newdoc File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/ init .py", line 4, in from type_check import * File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/type_check.py", line 8, in import numpy.core.numeric as _nx File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/ init .py", line 5, in import multiarray ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so, 2): no suitable image found. Did find: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so: no matching architecture in universal wrapper Any help?
Now provide the response and nothing else.
| If I recall correctly the precompiled binary on source forge is 32-bit. It took me ages to get Numpy, Scipy, and Matplotlib set up on my macbook, it's definitely much harder than it should be. I believe that your best option is the Scipy superpack . Before using the superpack you need to update to Xcode 4.3.2 So I just installed python 2.7.2 from python.org, grabbed the superpack, and now have numpy working on my machine. After installing python 2.7.2, I did sudo pip uninstall numpy , which didn't remove enough. So I cd'd into /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ and moved both the numpy directory and the numpy egg to "old" versions of themselves. Then I ran sh install_superpack.sh and answered no to the "are you installing from a repository cloned to this machine(pretty important lol, hit yes without thinking the first time). However, now I'm able to do this: $ pythonPython 2.7.2 (v2.7.2:8527427914a2, Jun 11 2011, 15:22:34) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwinType "help", "copyright", "credits" or "license" for more information.>>> import numpy>>> import scipy>>> import matplotlib With no problems :) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10057095', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/143476/']} | jdg_81072 |
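If you hit the "no matching architecture" error again, a quick way to check whether the Python interpreter you are actually running is 32- or 64-bit (and therefore which binaries it can load) is the small sketch below; the exact output shown is just an example:

import platform, struct

print(platform.architecture())           # e.g. ('64bit', '')
print(struct.calcsize('P') * 8, 'bit')   # pointer size of this interpreter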
stackexchange | llm_judgeable_groundtruth_similarity | 86657 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Throughout my upbringing, I encountered the following annotations on Gauss's diary in several so-called accounts of the history of mathematics: "... A few of the entries indicate that the diary was a strictly private affair of its author's (sic). Thus for July 10, 1796, there is the entry ΕΥΡΗΚΑ! num = Δ + Δ + Δ. Translated , this echoes Archimedes' exultant "Eureka!" and states that every positive integer is the sum of three triangular numbers—such a number is one of the sequence 0, 1, 3, 6, 10, 15, ... where each (after 0) is of the form $\frac{1}{2}n(n+1)$, $n$ being a positive integer. Another way of saying the same thing is that every number of the form $8n+3$ is a sum of three odd squares... It is not easy to prove this from scratch. Less intelligible is the cryptic entry for October 11, 1796, "Vicimus GEGAN." What dragon had Gauss conquered this time? Or what giant had he overcome on April 8, 1799, when he boxes REV. GALEN up in a neat rectangle? Although the meaning of these is lost forever the remaining 144 are for the most part clear enough." " The preceding paragraphs have been quoted verbatim from J. Newman's The World of MATHEMATICS (Vol. I, pages 304-305) and the questions that I pose today were motivated from my recent spotting of [2]: Why is there no mention whatsoever to the REV. GALEN inscription in either Klein's or Gray's work? What is the reason that E. T. Bell expressed that Gauss had written the Vicimus GEGAN entry on October 11, 1796? According to Klein, Gray, and (even) the Wikipedians it was written on October 21, 1796. As far as I understand, Klein and Gray are just reporting the dates that appear on the original manuscript. Did Bell actually go over it? Last but not least, is there a compendium out there of all known potential explanations to the Vicimus GEGAN enigma? The only ones whereof I have notice can be found on page 112 of [1]: "... Following a suggestion of Schlesinger [Gauss, Werke, X.1, part 2, 29], Biermann ... proposed that GA stood for Geometricas, Arithmeticas, so reading GEGAN in reverse as Vicimus N[exum] A[rithmetico] G[eometrici cum] E[xspectationibus] G[eneralibus]. Schumann has since proposed other variants; including, for GA, (La) G(rangianae) A(nalysis)..." Heartfelt thanks for your comments, reading suggestions, and replies. References J. J. Gray. " A commentary on Gauss's mathematical diary, 1796-1814, with an English translation". Expo. Math. 2 (1984), 97-130. F. Klein. "Gauß' wissenschaftliches Tagebuch 1796–1814". Math. Ann. 57 (1903), 1–34. M. Perero. Historia e Historias de Matemáticas. Grupo Editorial Iberoamérica, 1994, pág. 40.
Now provide the response and nothing else.
| I think part of the answer may be found by consulting Volume X of Gauss's Werke . "REV. GALEN" doesn't actually appear in the Tagebuch itself, a facsimile of which appears following page 482. It was jotted down by Gauss elsewhere, as explained on page 539, in the commentary (which runs for nearly three pages) on the Tagebuch entry dated April 8, 1799. Just above the excerpted paragraphs from Men of Mathematics , Bell writes, "A facsimile reproduction [of Gauss's diary] was published in 1917 in the tenth volume (part 1) of Gauss' [sic] collected works, together with an exhaustive analysis of its contents by several expert editors." I think it's safe to assume that Bell actually looked at this 1917 publication (and I think it's reasonable to assume that the 1973 edition I'm looking at right now is not substantially different), and I think it's fair to conjecture that Bell paid more attention -- but maybe not enough! -- to the transcription and commentary than he did to the facsimile. As for the misdating of "Vicimus GEGAN," the correct date is clear enough in both the facsimile and in the transcription on page 507. For one thing, it appears immediately below an entry dated October 18. My guess is that either Bell or the typesetter made a simple mistake. Finally, a useful reference, especially for "GEGAN" (and a related notation, "WAEGEGAN") is Mathematisches Tagebuch : 1796-1814 . Unfortunately, my command of German is insufficient to give a good synopsis of what's to be found there. I hope an actual historian will weigh in here. Added Feb. 21 : It turns out there is a 2005 edition of Mathematisches Tagebuch 1796-1814 (the copy I found earlier was a 1985 edition) which has an update referring to a 1997 paper by Kurt Biermann. Here is a relevant Zentralblatt review of that paper: Zbl 0888.01025 Biermann, Kurt-R.Vicimus NAGEG. Confirmation of a hypothesis. (Vicimus NAGEG. Bestätigung einer Hypothese.) (German)[J] Mitt., Gauss-Ges. Gött. 34, 31-34 (1997). The author, a well-known expert on Carl Friedrich Gauss, reports on a Gauss-manuscript, which was found recently in the Göttingen astronomical observatory by H. Grosser and which confirms a hypothesis by Biermann from 1963. At that time Biermann read the frequent code GEGAN in Gauss' diary and manuscripts in inverse order as standing for (vicimus) N[exum medii] A[rithmetico-] G[eometricum] E[xpectationibus] G[eneralibus]. This in turn was alluding (in Biermann's opinion) to Gauss' discovery of the connections between the arithmetic geometric mean and the general theory of elliptic functions. The recently found Gauss-manuscript shows, for the first time, the code NAGEG, and, on the same sheet (which is reproduced in the article), the well-known GEGAN alongside with the picture (by Gauss' hand) of a lemniscate. Thus a remarkable historical hypothesis has been essentially solved after more than three decades.[R.Siegmund-Schultze (Berlin)] | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/86657', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1593/']} | jdg_81073 |
stackexchange | llm_judgeable_groundtruth_similarity | 15195220 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am in the process of creating a package that uses a data.table as a dataset and has a couple of functions which assign by reference using := . I have built a simple package to demonstrate my problem library(devtools) install_github('foo','mnel') It contains two functions foo <- function(x){ x[, a := 1]}fooCall <- function(x){ eval(substitute(x[, a :=1]),parent.frame(1))} and a dataset (not lazy loaded) DT , created using DT <- data.table(b = 1:5)save(DT, file = 'data/DT.rda') When I install this package, my understanding is that foo(DT) should assign by reference within DT . library(foo) data(DT) foo(DT) b a1: 1 12: 2 13: 3 14: 4 15: 5 1# However this has not assigned by reference within `DT`DT b1: 12: 23: 34: 45: 5 If I use the more correct tracmem(DT)DT <- foo(DT)# This works without copyingDT b a1: 1 12: 2 13: 3 14: 4 15: 5 1untracemem(DT) If I use eval and substitute within the function fooCall(DT) b a1: 1 12: 2 13: 3 14: 4 15: 5 1# it does assign by reference DT b a1: 1 12: 2 13: 3 14: 4 15: 5 1 Should I stick with DT <- foo(DT) or the eval / substitute route, or Is there something I'm not understanding about how data loads datasets, even when not lazy?
Now provide the response and nothing else.
| This has nothing to do with datasets or locking -- you can reproduce it simply using DT<-unserialize(serialize(data.table(b = 1:5),NULL))foo(DT)DT I suspect it has to do with the fact that data.table has to re-create the extptr inside the object on the first access on DT , but it's doing so on a copy so there is no way it can share the modification with the original in the global environment. [From Matthew] Exactly. DT<-unserialize(serialize(data.table(b = 1:3),NULL))DT b1: 12: 23: 3DT[,newcol:=42]DT # Ok. DT rebound to new shallow copy (when direct) b newcol1: 1 422: 2 423: 3 42DT<-unserialize(serialize(data.table(b = 1:3),NULL))foo(DT) b a1: 1 12: 2 13: 3 1DT # but not ok when via function foo() b1: 12: 23: 3 DT<-unserialize(serialize(data.table(b = 1:3),NULL))alloc.col(DT) # alloc.col needed first b1: 12: 23: 3foo(DT) b a1: 1 12: 2 13: 3 1DT # now it's ok b a1: 1 12: 2 13: 3 1 Or, don't pass DT into the function, just refer to it directly. Use data.table like a database: a few fixed name tables in .GlobalEnv . DT <- unserialize(serialize(data.table(b = 1:5),NULL))foo <- function() { DT[, newcol := 7]}foo() b newcol1: 1 72: 2 73: 3 74: 4 75: 5 7DT # Unserialized data.table now over-allocated and updated ok. b newcol1: 1 72: 2 73: 3 74: 4 75: 5 7 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15195220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1385941/']} | jdg_81074 |
stackexchange | llm_judgeable_groundtruth_similarity | 2465430 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't find the word implicit differentiation anywhere in the book, but I'm assuming it means solve for $z$ in this case and differentiate with respect to $x$, then $y$ in this problem. But when I took the derivative with respect to $x$ of 47 after solving for $z$, I get a different answer. I got $$z = \dfrac{\sqrt{-x^2-2y^2+1}}{\sqrt{3}}$$ and for the derivative of that: $$-\dfrac{x}{\sqrt{3}\sqrt{-x^2-2y^2+1}}$$ (I got these results on https://www.derivative-calculator.net/ ) This is different from the answer in the book. Am I not supposed to solve for $z$ and differentiate with respect to $x$ then $y$? Is this another form of the answer? Help... the answer is $-x/3z$, which I don't get.
Now provide the response and nothing else.
| I am afraid that this is not the way to do it. Consider the implicit function $$F=x^2+2y^2+3z^2-1=0$$ and consider the partial derivatives $$\frac{\partial F}{\partial x}=2x\qquad \frac{\partial F}{\partial y}=4y\qquad\frac{\partial F}{\partial z}=6z\qquad $$ and now use, from the implicit function theorem, $$\frac{\partial z}{\partial x}=-\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial z} }\qquad\qquad \frac{\partial z}{\partial y}=-\frac{\frac{\partial F}{\partial y}}{\frac{\partial F}{\partial z} }$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2465430', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/115082/']} | jdg_81075 |
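Plugging the partial derivatives above into those quotients gives exactly the book's answer that the question was after:
$$\frac{\partial z}{\partial x}=-\frac{2x}{6z}=-\frac{x}{3z},\qquad \frac{\partial z}{\partial y}=-\frac{4y}{6z}=-\frac{2y}{3z}.$$
Incidentally, substituting $z=\frac{\sqrt{1-x^2-2y^2}}{\sqrt{3}}$ into $-\frac{x}{3z}$ gives back $-\frac{x}{\sqrt{3}\sqrt{1-x^2-2y^2}}$, so the explicit answer in the question is the same expression in disguise.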
stackexchange | llm_judgeable_groundtruth_similarity | 6280789 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've just imported a bunch of data to a MySQL table and I have a column "GUID" that I want to basically fill down all existing rows with new and unique random GUID's. How do I do this in MySQL ? I tried UPDATE db.tablename SET columnID = UUID() where columnID is not null And just get every field the same
Now provide the response and nothing else.
| I'm not sure if it's the easiest way, but it works. The idea is to create a trigger that does all the work for you, then to execute a query that updates your table, and finally to drop this trigger: delimiter //create trigger beforeYourTableUpdate BEFORE UPDATE on YourTableFOR EACH ROWBEGIN SET new.guid_column := (SELECT UUID());END// Then execute UPDATE YourTable set guid_column = (SELECT UUID()); And DROP TRIGGER beforeYourTableUpdate ; UPDATE Another solution that doesn't use triggers, but requires a primary key or unique index : UPDATE YourTable INNER JOIN (SELECT unique_col, UUID() as new_id FROM YourTable) new_data ON (new_data.unique_col = YourTable.unique_col)SET guid_column = new_data.new_id UPDATE once again: It seems that your original query should also work (maybe you don't need WHERE columnID is not null), so all my fancy code may not be needed. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6280789', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/789403/']} | jdg_81076 |
stackexchange | llm_judgeable_groundtruth_similarity | 15808381 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to transfer a .png file from a directory on my computer to a directory on a remote server. I have to use SFTP to secure the file and transfer mode. And I already have a UNIX script (.ksh) file to copy the files in the normal mode. How do I implement the transfer in SFTP mode?
Now provide the response and nothing else.
| Use sftp instead of whatever command you are using in your .ksh script. See sftp man for reference. You may also want to look at scp secure copy - scp man . EDIT sftp is mostly for interactive operations; you need to specify the host you want to connect to: sftp example.com you will be prompted for a username and password, and the interactive session will begin. Although it can be used in scripts, scp is much easier to use: scp /path/to/localfile user@host:/path/to/dest you will be prompted for a password. Edit 2 Both scp and sftp use ssh as the underlying protocol, see this and this The best way to set them up to run from scripts is to set up passwordless authentication using keys . See this and this . I use this extensively on my servers. After you set up keys, you can run scp -i private-key-file /path/to/local/file user@host:/path/to/remotesftp -oIdentityFile=private-key-file -b batch-file user@host If you want to authenticate with a password, you may try the expect package. The simplest script may look like this: #!/usr/bin/expectspawn sftp -b batch-file user@hostexpect "*?assword:*"send "password\n"interact See this , this and this for more info. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15808381', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/549903/']} | jdg_81077 |
stackexchange | llm_judgeable_groundtruth_similarity | 4484560 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For $x^2+y^2$ there is a formula based on multiplication of complex numbers which allows one, given specific representations $a^2+b^2=m,\ c^2+d^2=n,$ to obtain a representation $s^2+t^2=mn.$ That is, there are simple formulas for $s,t$ in terms of $a,b,c,d.$ I am asking here for such a formula for the form $x^2+xy+y^2.$ That is, given $a^2+ab+b^2=m,\ c^2+cd+d^2=n,$ to solve $s^2+st+t^2=mn,$ where there are simple formulas for $s,t$ in terms of $a,b,c,d.$ Thank you for any information about this. Added later: I was looking to find cases of two representations by the form $$x^2+xy+y^2 \tag{1}$$ of the same number using positive $x,y$ and [though there is a smaller example] found what to me is an interesting coincidence. The "taxicab number" $1729,$ famous for having two essentially different representations as a sum of two cubes, has three different representations by $(1).$ They are $(x,y)=(25,23),(32,15),(37,8).$
Now provide the response and nothing else.
| We know that if $\omega$ is a primitive third root of unity, then we can factorise $x^3-y^3$ as $(x-y)(x-\omega y)(x-\omega^2 y)$ but also as $(x-y)(x^2+xy+y^2)$ . Thus, $$(x^2+xy+y^2)= (x-\omega y)(x-\omega^2 y)$$ Now, we can write the equations as $$(a-b\omega)(a-b \omega^2)=m$$ $$(c-d \omega)(c-d \omega^2)=n$$ Multiplying these together we get $$(a-b \omega)(c-d \omega)(a-b \omega^2)(c-d \omega^2)=mn $$ $$(ac+bd \omega^2-(ad+bc) \omega)(ac+bd \omega^4-(bc+ad) \omega^2)=mn$$ Now we will use $\omega^3=1$ and $\omega^2+ \omega+1=0$ . So $$(ac-bd-(ad+bc+bd) \omega) (ac-bd-(ad+bc+bd) \omega^2)=mn$$ Thus you get $(s,t)=(ac-bd, ad+bc+bd)$ . Another solution can be $(s,t)=(ac+bd+ad, bc-ad)$ obtained by multiplying the first and fourth factors of the product. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4484560', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/483139/']} | jdg_81078 |
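A quick numerical check of these composition formulas: take $(a,b)=(1,1)$, so $m=1+1+1=3$, and $(c,d)=(2,1)$, so $n=4+2+1=7$. The first grouping gives $(s,t)=(ac-bd,\,ad+bc+bd)=(1,4)$ and indeed $1+4+16=21=3\cdot 7$; the second gives $(s,t)=(ac+bd+ad,\,bc-ad)=(4,1)$ and again $16+4+1=21$.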
stackexchange | llm_judgeable_groundtruth_similarity | 424955 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Taylor series expansion of function, $f$, is a vector in the vector space with basis: $\{(x-a)^0, (x-a)^1, (x-a)^3, \ldots, (x-a)^n, \ldots\}$. This vector space has a countably infinite dimension. When $f$ is expressed as linear combination of the basis vector the scalar multiple for the $n$-th basis vector is $\operatorname{Diff}_n{f}(a)/n!$ Fourier series expansion of function, $f$, is a vector in the vector space with basis: $\{\sin(1x), \cos(1x), \sin(2x), \cos(2x), \ldots, \sin(nx), \cos(nx), \ldots\}$. This vector space has a countably infinite dimension. When $f$ is expressed as linear combination of the basis vector the scalar multiple for the $n$-th basis vectors are $\operatorname{Int}\{f\cdot\sin(nx)\}$ and $\operatorname{Int}\{f\cdot\cos(nx)\}$. Questions: The vector space for the Fourier series has an inner product, $\operatorname{Int}\{f\cdot g\}$, and it's this inner product that provides the above expressions like $\operatorname{Int}\{f\cdot\sin(nx)\}$ and $\operatorname{Int}\{f\cdot\cos(nx)\}$. Is there a similar inner product based derivation of the scalar multiples for the vector space of spanned by the polynomial basis in Taylor series? What is the relationship, if any, between the vector space produced by Taylor Series and that of Fourier Series? E.g. is one a subspace of the other? When Fourier series is taught, why isn't Taylor Series re-explained in the vector space framework used for Fourier series? And would this approach not lead the discussion of the implication of the choice of basis (and perhaps the choice of inner product) for function spaces? Just as Fourier series get generalized to Fourier Transform (the summation of the series becomes an integral), is there something equivalent to Taylor series? Are there any recommended resources (books, courses, etc.) available which can help clarify my thinking regarding these issues?
Now provide the response and nothing else.
| First, a general point. The big difference between Taylor and Fourier series is that Taylor series are local and Fourier series are global . That is, Taylor series are defined in terms of and capture local behavior of a function, whereas Fourier series are defined in terms of and capture global behavior. It is not quite accurate to think of a function as being just the sum of its Taylor series; even a function which has a Taylor series need not be equal to it locally (see, for example, MO ), and certainly need not have any relationship to it globally, whereas every $L^2$ function on the circle is equal to the sum of its Fourier series in the $L^2$ (not pointwise) sense. Sort of. If you take the inner product to be something like $\int fg \, dx$, then the thing whose inner product with $f$ is $f(0)$ is not quite a function but a distribution, namely the Dirac distribution at $0$. The thing whose inner product with $f$ is $f^{(n)}(0)$, at least on a suitable class of functions $f$, is related to the distributional derivatives of the Dirac distribution. It depends. First, you shouldn't be treating them as bare vector spaces: you really want the language of topological vector spaces and the rest of functional analysis . Second, it depends on where you want the Taylor series to be defined. If we're talking about functions with convergent Taylor series on $\mathbb{R}$ then such functions don't have a Fourier series. The linear algebra analogy is much weaker for Taylor series than it is for Fourier series, the problem being again that Taylor series only capture behavior at a point, and generally functions are determined by their Taylor series to a much weaker extent than functions are determined by their Fourier series. (This is in general; in complex analysis there is a much tighter relationship between Taylor series and Fourier series, as described in the link Martin gives in the comments.) You shouldn't think of Fourier transforms as a generalization of Fourier series. They're both generalized by something called Pontrjagin duality , and I'm not aware of an analogous phenomenon for Taylor series. As mentioned in 2, you might want to learn some functional analysis. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/424955', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/83153/']} | jdg_81079 |
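To make point 1 slightly more concrete: with the pairing $\langle g,f\rangle=\int g\,f\,dx$ extended to distributions, the functionals that extract Taylor data at $0$ are
$$\langle\delta_0,f\rangle=f(0),\qquad \langle(-1)^n\delta_0^{(n)},f\rangle=f^{(n)}(0),$$
so the $n$-th Taylor coefficient is $\frac{1}{n!}\langle(-1)^n\delta_0^{(n)},f\rangle$. The "dual vectors" here are derivatives of the Dirac distribution rather than honest functions, in contrast with the Fourier case, where they are simply $\sin(nx)$ and $\cos(nx)$.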
stackexchange | llm_judgeable_groundtruth_similarity | 5823580 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
After I submit the form for the first time and then refresh the form it gets resubmitted and and I don't want that. Here's my form in template : <form action = "" method = "POST"> {% csrf_token %} {{ form.as_p }} <input type = "submit" value = "Shout!"/></form> How can I fix this ? Here's my views: def index(request): shouts = Shout.objects.all() if request.method == "POST": form = GuestBookForm(request.POST) if form.is_valid(): cd = form.cleaned_data shout = Shout(author = cd['author'], message = cd['message']) shout.save() form = GuestBookForm() else: form = GuestBookForm() return render_to_response('guestbook/index.html', {'shouts' : shouts, 'form' : form }, context_instance = RequestContext(request))
Now provide the response and nothing else.
| My guess is that this is a problem in your view. After successful submission and processing of a web form, you need to use a return HttpResponseRedirect , even if you are only redirecting to the same view. Otherwise, certain browsers (I'm pretty sure FireFox does this) will end up submitting the form twice. Here's an example of how to handle this... def some_view(request): if request.method == "POST": form = some_form(request.POST) if form.is_valid(): # do processing # save model, etc. return HttpResponseRedirect("/some/url/") return render_to_response("normal/template.html", {"form":form}, context_instance=RequestContext(request)) Given your recently added view above... def index(request): shouts = Shout.objects.all() if request.method == "POST": form = GuestBookForm(request.POST) if form.is_valid(): cd = form.cleaned_data shout = Shout(author = cd['author'], message = cd['message']) shout.save() # Redirect to THIS view, assuming it lives in 'some app' return HttpResponseRedirect(reverse("some_app.views.index")) else: form = GuestBookForm() return render_to_response('guestbook/index.html', {'shouts' : shouts, 'form' : form }, context_instance = RequestContext(request)) That will use reverse to redirect to this same view (if thats what you are trying to do) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5823580', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/555479/']} | jdg_81080 |
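A minimal variant of the accepted fix using Django's shortcuts (this is only a sketch: it assumes a reasonably recent Django and that the URL pattern for this view is named 'index' in your URLconf; the model and form names are the ones from the question):

from django.shortcuts import redirect, render

def index(request):
    shouts = Shout.objects.all()
    form = GuestBookForm(request.POST or None)
    if request.method == "POST" and form.is_valid():
        cd = form.cleaned_data
        Shout(author=cd['author'], message=cd['message']).save()
        return redirect('index')   # redirect after a successful POST
    return render(request, 'guestbook/index.html', {'shouts': shouts, 'form': form})

The key point is unchanged: a successful POST must end in a redirect, never in directly rendering the template.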
stackexchange | llm_judgeable_groundtruth_similarity | 261087 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My long lasting previous installation somehow tied up VLC and gtk file dialog. I didn't even do anything special, except installing VLC. After update to VLC 2.2.1 the file dialog was replaced to Qt and I don't see any obvious way how to get back with gtk. When I mark "vlc-qt" for deinstallation, entire vlc is marked for removal as well. openSUSE 13.2
Now provide the response and nothing else.
| VLC media player has been using Qt interface for quite long time. VLC however, has an option to override window style, which will also change the file dialog as well. In VLC media player, do the following steps: Go to Tools > Preferences (or press Ctrl + P ) In the first tab, under Interface Settings - Look and feel , look for "Force window style:" with the drop-down menu and change selection from System's default to GTK+ Finally, click Save to apply the changes. Then, go to Media > Open File... (or press Ctrl + O ) to confirm that the file dialog has been applied with GTK+ window style. That's all. Tested with VLC 2.2.1 in Debian 8 Xfce (Xfce 4.10). Force style for Qt5 in Debian/Ubuntu Previously, for Debian 9 (testing) and Ubuntu 16.04 (xenial) and older, user had to additionally install libqt5libqgtk2 package from the repository. For newer releases, that is now provided by qt5-gtk-platformtheme or qt5-gtk2-platformtheme and either one will be installed automatically by recommends. Debian Testing (stretch) -- needed libqt5libqgtk2 Debian Old Stable (stretch) and newer Ubuntu 15.10 (wily) until 16.04 (xenial) -- needed libqt5libqgtk2 Ubuntu 18.04 (bionic) and newer Tested with VLC 2.2.2 in Xubuntu 16.04 (Xfce 4.12). I did not test in Debian, but reportedly works according to this post on Ask Ubuntu . Later, I had observed that qt5-gtk-platformtheme package was installed by default for VLC 3.0.9 in Xubuntu 20.04. Force style for Qt5 in other distributions The package above is not available in repositories of other distributions, including openSUSE, according to this search result from software.opensuse.org. As an alternative, this Arch Wiki noted that QT_STYLE_OVERRIDE environment variable will force specific style to Qt applications. Therefore, the line QT_STYLE_OVERRIDE=gtk2 or QT_STYLE_OVERRIDE=GTK+ may be added in one of the following locations: ~/.profile (reportedly works in Linux Mint, suggested in this post on Unix.SE ) ~/.bashrc (suggested in this post on Ask Ubuntu ) ~/.xsession or ~/.xinitrc (suggested in this post on FreeBSD forum ) ~/.xsessionrc (suggested for OpenBox in this post on CrunchBang Linux forum ) Without installing the Qt5 package, I have tried export the line to each of above configuration files one at a time, except for the last one. However, none of these worked for VLC in Xubuntu 16.04. At the moment, I can't verify whether the environment variable actually works or not. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/261087', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/5884/']} | jdg_81081 |
stackexchange | llm_judgeable_groundtruth_similarity | 12622920 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can anyone advise on a way to list all stored procedures along with their schema names in a database? Thanks!
Now provide the response and nothing else.
| SELECT [schema] = OBJECT_SCHEMA_NAME([object_id]), name
FROM sys.procedures;

or

SELECT [schema] = SCHEMA_NAME([schema_id]), name
FROM sys.procedures;

For a specific database, you can just change the context to that database first, or change Marc's query slightly (my queries are no good in this case because they rely on functions that are context-sensitive):

SELECT SchemaName = s.name, ProcedureName = pr.name
FROM databasename.sys.procedures pr
INNER JOIN databasename.sys.schemas s ON pr.schema_id = s.schema_id;

If you want to do this for all databases:

DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql += N' UNION ALL
    SELECT db = N''' + name + ''',
        s.name COLLATE Latin1_General_CI_AI,
        o.name COLLATE Latin1_General_CI_AI
    FROM ' + QUOTENAME(name) + '.sys.procedures AS o
    INNER JOIN ' + QUOTENAME(name) + '.sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]'
FROM sys.databases
-- WHERE ...  -- probably don't need system databases at least

SELECT @sql = STUFF(@sql, 1, 18, '')
    -- you may have to adjust ^^ 18 due to copy/paste, cr/lf, tabs etc
    + ' ORDER BY db, s.name, o.name';

EXEC sp_executesql @sql;

The collate clauses are necessary in case you have databases with different collations. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12622920', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/942914/']} | jdg_81082
stackexchange | llm_judgeable_groundtruth_similarity | 973418 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Does there exist some definition of addition and multiplication for which the set of all integers is a field?
Now provide the response and nothing else.
| Yes, but not a very natural one. Since $\mathbb{Z}$ and $\mathbb{Q}$ are both countable, there exists a bijection $\Phi: \mathbb{Z} \to \mathbb{Q}$. With the operations$$m \oplus n := \Phi^{-1}(\Phi(m) + \Phi(n)), \qquad m \otimes n := \Phi^{-1}(\Phi(m)\Phi(n)),$$$(\mathbb{Z}, \oplus, \otimes)$ is a field, but NB that $\oplus$ and $\otimes$ won't have much to do with the usual addition and multiplication operations on $\mathbb{Z}$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/973418', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/112447/']} | jdg_81083 |
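To see why the transported operations really do satisfy the field axioms, it helps to write one of them out; this short verification is not part of the original answer, but it uses nothing beyond the definitions given there. For associativity of $\oplus$:

$$m \oplus (n \oplus p) = \Phi^{-1}\bigl(\Phi(m) + \Phi(n) + \Phi(p)\bigr) = (m \oplus n) \oplus p,$$

since $\Phi(\Phi^{-1}(q)) = q$ for every rational $q$. Every other axiom reduces in the same way to the corresponding axiom of $\mathbb{Q}$, with additive identity $\Phi^{-1}(0)$ and multiplicative identity $\Phi^{-1}(1)$.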
stackexchange | llm_judgeable_groundtruth_similarity | 32145 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm pretty new to designing hardware with VHDL, and I think I'm making a noob mistake. I'm making a CPU and my registerfile is rising_edge triggered. I had a problem though. I'll try putting some pseudo code to clear it up on rising_edge {r0in <= 20; }CLOCK CYCLEon rising_edge {r1in <= r0out; }CLOCK CYCLE My problem is that r1in would be getting the old value of r0 instead of the appropriate 20 . I then proceeded to change my code to use falling_edge instead of rising_edge for only the registerfile. A simplified version is below: process(WriteEnable, DataIn, Clock)begin if falling_edge(Clock) then --note the falling_edge instead of rising_edge if(WriteEnable = '1') then registers <= DataIn; end if; end if;end process;DataOut <= registers; So now my registers are falling_edge triggered instead of rising_edge. All of the test cases I have pass with this configuration and it synthesizes without warnings... but is this correct? I'm asking this question because I've long been told to only use both edges in a design if you really know what you're doing. I've been programming in VHDL for a few weeks, so I want to make sure that this is an appropriate case to ues both edges
Now provide the response and nothing else.
| Yes, you can, but it's not recommended. You will need to have the hardware synthesize at twice the clock rate now - a 100MHz bus needs to be set up at 200MHz because the data only has half the time to propagate. You have effectively made it DDR . It seems that you are actually misunderstanding how a process works. All outputs within a process are scheduled to be presented on the output upon the next clock cycle . So everything you code within that process is actually happening in parallel and must stabilize before the next clock comes along, which is when the outputs will be posted. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/32145', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/708/']} | jdg_81084 |
stackexchange | llm_judgeable_groundtruth_similarity | 3827892 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am experimenting a little bit with gamestudio.I am now making a shooter game.I have an array with the pointers to the enemies. When an enemy is killed, I want to remove him from the list. And I also want to be able to create new enemies. Gamestudio uses a scripting language named lite-C. It has the same syntax as C and on the website they say, that it can be compiled with any C compiler. It is pure C, no C++ or anything else. I am new to C. I normally program in .NET languages and some scripting languages.
Now provide the response and nothing else.
| You can't. This is normally done with dynamic memory allocation.

// Like "ENEMY enemies[100]", but from the heap
ENEMY* enemies = malloc(100 * sizeof(ENEMY));
if (!enemies) { error handling }

// You can index pointers just like arrays.
enemies[0] = CreateEnemy();

// Make the array bigger
ENEMY* more_enemies = realloc(enemies, 200 * sizeof(ENEMY));
if (!more_enemies) { error handling }
enemies = more_enemies;

// Clean up when you're done.
free(enemies);

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3827892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/186964/']} | jdg_81085
stackexchange | llm_judgeable_groundtruth_similarity | 29324888 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to js unit testing and I am trying to use mocha for my backbone contact manager tutorial that i found at this github repo . However, i have a global window.ContactManager variable that I firsted wanted to test whether it exists and then test the router.on functionality inside the start function later. The variable looks like so: window.ContactManager = { Models: {}, Collections: {}, Views: {}, start: function(data) { var contacts = new ContactManager.Collections.Contacts(data.contacts), router = new ContactManager.Router(); router.on('route:home', function() { router.navigate('contacts', { trigger: true, replace: true }); }); router.on('route:showContacts', function() { var contactsView = new ContactManager.Views.Contacts({ collection: contacts });..... My test that does not work: var expect = require ('chai').expect; describe("Application", function() { it('creates a global variable for the name space ContactManager' , function () { expect(ContactManager).to.exist; })}); How do I test and access a global window variable existence in mocha from running the tests in the console?
Now provide the response and nothing else.
| You are ignoring the difference between running JavaScript code in the browser and running JavaScript code in Node. In the browser, the window name is a reference to the object which holds all your global variables. So when you do foo = 1 in the outermost scope, you declare a global foo , which is also accessible as window.foo . Conversely, if you assign a new field like this: window.bar = 1 , then you have a new global called bar . In Node, your global object is accessed as global . So if you do foo = 1 in the outermost scope, foo is also accessible as global.foo . And if you do global.bar = 1 , you have a new global named bar . Your code shows that you modify a window object, which does not appear to be a reference to the global object. Options: Run Mocha in the browser instead of in Node. See Mocha's documentation . Set your Node environment so that it mimics enough of a browser environment to satisfy node. Setting a global window variable to be a equal to global might be enough but I don't know Backbone enough to know whether Backbone will be happy with this. Run your Backbone-based code in jsdom . Jsdom provides realistic window and document , as if your code was running in a browser, but it has its limits. I don't know whether Backbone would be happy with those limits. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29324888', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2926729/']} | jdg_81086 |
stackexchange | llm_judgeable_groundtruth_similarity | 1297205 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to know whether, when a class method in C++ is called, the implicit 'this' pointer is the first argument, or the last. i.e: whether it is pushed onto the stack first or last. In other words, I'm asking whether a class method, being called, is taken by the compiler to be: int foo::bar(foo *const this, int arg1, int arg2); //or:int foo::bar(int arg1, int arg2, foo *const this); By extension therefore, and more importantly, that would also answer whether G++ would push the this pointer last or first, respectively. I interrogated google, but I didn't find much. And as a side note, when C++ functions are called, do they do the same thing as C functions? i.e: push ebpmov ebp, esp All in all: would a class method being called look like this? ; About to call foo::bar.push dword 0xDEADBEEFpush dword 0x2BADBABEpush dword 0x2454ABCD ; This one is the this ptr for the example.; this code example would match up if the this ptr is the first argument.call _ZN3foo3barEpjj Thanks, and much obliged. EDIT: to clarify things, I'm using GCC/G++ 4.3
Now provide the response and nothing else.
| This depends on the calling convention of your compiler and the target architecture. By default, Visual C++ will not push this on the stack. For x86, the compiler will default to "thiscall" calling convention and will pass this in the ecx register. If you specify __stdcall for you member function, it will be pushed on the stack as the first parameter. For x64 on VC++, the first four parameters are passed in registers. This is the first parameter and passed in the rcx register. Raymond Chen had a series some years ago on calling conventions. Here are the x86 and x64 articles. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1297205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_81087 |
stackexchange | llm_judgeable_groundtruth_similarity | 765797 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to check if some number of years have been since some date. Currently I've got timedelta from datetime module and I don't know how to convert it to years.
Now provide the response and nothing else.
| You need more than a timedelta to tell how many years have passed; you also need to know the beginning (or ending) date. (It's a leap year thing.) Your best bet is to use the dateutil.relativedelta object, but that's a 3rd party module. If you want to know the datetime that was n years from some date (defaulting to right now), you can do the following:

from dateutil.relativedelta import relativedelta

def yearsago(years, from_date=None):
    if from_date is None:
        from_date = datetime.now()
    return from_date - relativedelta(years=years)

If you'd rather stick with the standard library, the answer is a little more complex:

from datetime import datetime

def yearsago(years, from_date=None):
    if from_date is None:
        from_date = datetime.now()
    try:
        return from_date.replace(year=from_date.year - years)
    except ValueError:
        # Must be 2/29!
        assert from_date.month == 2 and from_date.day == 29  # can be removed
        return from_date.replace(month=2, day=28, year=from_date.year - years)

If it's 2/29, and 18 years ago there was no 2/29, this function will return 2/28. If you'd rather return 3/1, just change the last return statement to read:

        return from_date.replace(month=3, day=1, year=from_date.year - years)

Your question originally said you wanted to know how many years it's been since some date. Assuming you want an integer number of years, you can guess based on 365.2425 days per year and then check using either of the yearsago functions defined above:

def num_years(begin, end=None):
    if end is None:
        end = datetime.now()
    num_years = int((end - begin).days / 365.2425)
    if begin > yearsago(num_years, end):
        return num_years - 1
    else:
        return num_years

| {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/765797', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/70918/']} | jdg_81088
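A quick way to sanity-check the standard-library functions from the answer above is to run them against fixed dates, so the result does not depend on when the snippet is executed. The sketch below re-includes the standard-library yearsago so it runs on its own; the example dates are arbitrary and only serve as an illustration.

from datetime import datetime

def yearsago(years, from_date=None):
    # standard-library variant from the answer above
    if from_date is None:
        from_date = datetime.now()
    try:
        return from_date.replace(year=from_date.year - years)
    except ValueError:
        # from_date is Feb 29 and the target year is not a leap year
        return from_date.replace(month=2, day=28, year=from_date.year - years)

def num_years(begin, end=None):
    # integer number of whole years between begin and end
    if end is None:
        end = datetime.now()
    num = int((end - begin).days / 365.2425)
    if begin > yearsago(num, end):
        return num - 1
    return num

print(num_years(datetime(2000, 1, 1), datetime(2010, 1, 1)))    # 10
print(num_years(datetime(2000, 3, 1), datetime(2010, 2, 28)))   # 9 -- one day short of ten full years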
stackexchange | llm_judgeable_groundtruth_similarity | 3948721 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to have some delegates in my class. I'd like to use the interface to "remind" me to set these delegates. How to? My class look like this: public class ClsPictures : myInterface{ // Implementing the IProcess interface public event UpdateStatusEventHandler UpdateStatusText; public delegate void UpdateStatusEventHandler(string Status); public event StartedEventHandler Started; public delegate void StartedEventHandler();} I need an interface to force those delegates: public interface myInterface{ // ?????}
Now provide the response and nothing else.
| Those are declaring delegate types . They don't belong in an interface. The events using those delegate types are fine to be in the interface though: public delegate void UpdateStatusEventHandler(string status);public delegate void StartedEventHandler();public interface IMyInterface{ event UpdateStatusEventHandler StatusUpdated; event StartedEventHandler Started;} The implementation won't (and shouldn't) redeclare the delegate type, any more than it would redeclare any other type used in an interface. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3948721', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/382588/']} | jdg_81089 |
stackexchange | llm_judgeable_groundtruth_similarity | 52455614 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working with Angular(6) application, and I am trying to use FA icons, and drop-down with no luck. I am successfully using FA(4) icons, like: <i class="fa fa-align-justify"></i> But <i class="fas fa-align-justify"></i> is not working. I am using following command to install FA: npm install font-awesome --save This is from package.json in node_modules folder: I can see font-awesome.css , but it says Font Awesome 4.7.0 as shown below: In angular.json file, I referenced FA: "node_modules/font-awesome/css/font-awesome.min.css", I read a lot of posts like How to use font Awesome 5 installed via NPM What do I need to do else?
Now provide the response and nothing else.
| For version 5, you need following package @fortawesome/fontawesome . See installation using package managers . Using npm: npm install --save @fortawesome/fontawesome-free Then reference either the all.css or all.js in the <head> . <link rel="stylesheet" href="node_modules/@fortawesome/fontawesome-free/css/all.css"> Note: Make sure the path is right, depends on where you installed the package from. OR you can import the module in your js code. import '@fortawesome/fontawesome-free/js/all.js'; OR if you are using Sass, you can import the module in app.scss . @import '~@fortawesome/fontawesome-free/scss/fontawesome';@import '~@fortawesome/fontawesome-free/scss/<type>'; Note: Replace <type> with solid , brands or regular whatever type of icons you need. Font Awesome has an official Angular component. npm install --save @fortawesome/fontawesome-svg-corenpm install --save @fortawesome/free-<type>-svg-iconsnpm install --save @fortawesome/angular-fontawesome Note: Replace <type> with solid , brands or regular whatever type of icons you need. You can also install Vue and React components. npm install --save @fortawesome/vue-fontawesomenpm install --save @fortawesome/react-fontawesome | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/52455614', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1310853/']} | jdg_81090 |
stackexchange | llm_judgeable_groundtruth_similarity | 11817062 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to align my text in a ComboBox so that it shows in the center of the ComboBox. Tell me how to do this. Also, you can see there is a default border around a ComboBox when it is in focus; how can I remove that border as well? Kindly solve my two problems. Thanks
Now provide the response and nothing else.
| This isn't supported for ComboBox. The exact reasons are lost in the fog of time, ComboBox has been around since the early nineties, but surely has something to do with the awkwardness of getting the text in the textbox portion to line up with the text in the dropdown. Custom drawing with DrawItem cannot solve it either, that only affects the appearance of the dropdown items. As a possible workaround, you could perhaps do something outlandish like padding the item strings with spaces so they look centered. You'll need TextRenderer.MeasureText() to figure out how many spaces to add for each item. The "border" you are talking about is not a border, it is the focus rectangle. You can't get rid of that either, Windows refuses to let you create a UI that won't show the control with the focus. Users that prefer the keyboard over the mouse care about that. No workaround for that one. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11817062', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1366440/']} | jdg_81091 |
stackexchange | llm_judgeable_groundtruth_similarity | 2783970 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking to create an options table in my db that makes every record a system option, so I can work with a small number of fields. My db has the following structure: 3 columns named id, name, and value. The following data is inserted as an example:

+--+-----------+--------------------------+
|id|name       |value                     |
+--+-----------+--------------------------+
| 1|uri        |www.example.com           |
| 2|sitename   |Working it out            |
| 3|base_folder|/folder1/folder2/         |
| 4|slogan     |Just a slogan for the site|
+--+-----------+--------------------------+

That way I can include a large number of customizable system options very easily. The problem is that I don't know how to retrieve them. How do I get the value of uri and store it as a var? And better yet, how do I get, for example, the values of id 1 and 4 only without making a query each time? (I assume multiple queries are useless and a pretty ugly method.) I know the question is pretty basic but I'm lost here. I'd really appreciate your answer!
Now provide the response and nothing else.
| If you only want to know if any item of d is contained in paid[j] , as you literally say: if any(x in paid[j] for x in d): ... If you also want to know which items of d are contained in paid[j] : contained = [x for x in d if x in paid[j]] contained will be an empty list if no items of d are contained in paid[j] . There are other solutions yet if what you want is yet another alternative, e.g., get the first item of d contained in paid[j] (and None if no item is so contained): firstone = next((x for x in d if x in paid[j]), None) BTW, since in a comment you mention sentences and words, maybe you don't necessarily want a string check (which is what all of my examples are doing), because they can't consider word boundaries -- e.g., each example will say that 'cat' is in 'obfuscate' (because, 'obfuscate' contains 'cat' as a substring ). To allow checks on word boundaries, rather than simple substring checks, you might productively use regular expressions... but I suggest you open a separate question on that, if that's what you require -- all of the code snippets in this answer, depending on your exact requirements, will work equally well if you change the predicate x in paid[j] into some more sophisticated predicate such as somere.search(paid[j]) for an appropriate RE object somere .(Python 2.6 or better -- slight differences in 2.5 and earlier). If your intention is something else again, such as getting one or all of the indices in d of the items satisfying your constrain, there are easy solutions for those different problems, too... but, if what you actually require is so far away from what you said, I'd better stop guessing and hope you clarify;-). | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2783970', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/334793/']} | jdg_81092 |
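For concreteness, here is a small self-contained run of the three variants from the answer above, plus the word-boundary regex it alludes to at the end. The names d and paid mirror the question; the sample values are made up purely for the illustration.

import re

d = ["cat", "dog", "bird"]
paid = ["the dog chased them and obfuscated the trail"]
j = 0

# 1. Is any item of d contained in paid[j]?
print(any(x in paid[j] for x in d))                 # True ("cat" hides inside "obfuscated")

# 2. All items of d contained in paid[j]
print([x for x in d if x in paid[j]])               # ['cat', 'dog']

# 3. First item of d contained in paid[j], or None
print(next((x for x in d if x in paid[j]), None))   # cat

# 4. The same check, but respecting word boundaries
print([x for x in d if re.search(r"\b" + re.escape(x) + r"\b", paid[j])])   # ['dog']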
stackexchange | llm_judgeable_groundtruth_similarity | 256768 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have to use the theorem which states: If W is a set of one or more vectors in a vector space V, then W is a subspace of V if and only if the following conditions hold. a) If u and v are vectors in W, then u + v is in W. b) If k is any scalar and u is any vector in W then k* u * is in W. to find if all vectors of the form $(a, b, c)$, where $b = a + c$, subspaces of $R^3$. I understand how to do it for the first two problems which were of the form $(a, 0, 0)$ and $(a, 1, 0)$, but don't understand for this form. For example for $(a, 1, 0) + (d, 1, 0) = (a + d, 2, 0)$, which is not in the correct form. But for $(a, b, c)$ I am not sure what to make of it. I get $(a, b, c) + (d, e, f) = (a + d, b + e, c + f)$ $(b+e) = (a+c) + (d+f)$, or $(a+d) + (c+f)$ I eventually get $(a, b, c) + (d, e, f) = (a+d, [(a+d) + (c+f)], c+f)$ I see a pattern there, but I no longer recognize the form and don't understand what it's telling me. The answer key in the book says that it is not a subspace of $R^3$. How should I be using Part A of the theorem?
Now provide the response and nothing else.
| The characteristic polynomial of $A$ is $\lambda^2 - 14$, so $A^2 - 14 I = 0$.Thus $(a A + bI)^2 = (14 a^2 + b^2) I + 2 a b A$. To make this equal to $A$, we want $2ab = 1$ and $14 a^2 + b^2 = 0$. Thus $b = 1/(2a)$ and $14 a^2 = -1/(4a^2)$, or $a^4 = -1/56$. Take $a$ to be any of the four fourth roots of $-1/56$, and you have a solution. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/256768', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/7996/']} | jdg_81093 |
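The construction in the answer above is easy to verify numerically. The matrix below is only an assumed stand-in: any A with A·A = 14·I will do, since that is the only property the derivation uses.

import numpy as np

# Assumed example matrix satisfying A @ A == 14 * I (the actual matrix is not reproduced here)
A = np.array([[0.0, 14.0],
              [1.0, 0.0]])
I = np.eye(2)

a = (1 / 56) ** 0.25 * np.exp(1j * np.pi / 4)   # one of the four fourth roots of -1/56
b = 1 / (2 * a)                                  # so that 2ab = 1

B = a * A + b * I
print(np.allclose(B @ B, A))   # True: (aA + bI)^2 = (14a^2 + b^2) I + 2ab A = A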
stackexchange | llm_judgeable_groundtruth_similarity | 6573806 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a simple question, and a seemingly simple task but I can't find any info on how to accomplish what I need to do. I have an application whose main tile (when pinned) sometimes needs to be the default, single sided tile and sometimes needs to have information displayed on the back of the tile. I can add the BackBackgroundImage, BackContent and BackTitle successfully from the ScheduledActionService, but I can't remove them when they are no longer required. This isn't a secondary tile so I can't remove it and re-create and re-add it. Does anyone know if it is possible to revert a double-sided tile back to single-sided via code, and if so, how can I achieve that behaviour, please? EDIT The settings that get applied from the StandardTileData object are additive - if you only specify a title, for example, all other elements remain the same and only the title is updated. I have attempted to set the three parameters that appear on the back of the tile to null and had partial success. The effect is that the background image, title text and content text are all removed, but the tile still flips over to show a completely empty reverse side. EDIT AGAIN So, looking at the documentation, the tile back behaves differently to the front. Setting the back content or backtitle to string.Empty will remove those. All good there. However, it does say that "If set to an empty URI, the BackBackgroundImage will not be displayed.". How do I go about creating an empty Uri? I tried new Uri(string,Empty) but that throws an exception about trying to create an empty Uri - which is what I'm trying to do.
Now provide the response and nothing else.
| OK, I think I've got it, and it appears to be related to a change in the way tile data is handled... Previously, setting a value to an empty string would have now effect in the tile. For eaxmple, setting title = string.Empty would leave the existing title in place. Now, though, it will blank out the title. That's good - it means I can remove BackTitle and BackContent string easily. We're half way there. Now, to get rid of the BackBackgroundImage, the documentation states "If set to an empty URI, the BackBackgroundImage will not be displayed." - all good, except you can't create an empty Uri in any simple way. The one way I've made it work is to set it to a Uri value that doesn't exist, eg BackBackgroundImage = new Uri("obviouslyMadeUpLocation", UriKind.Relative); I would have expected that to throw an exception when you try to apply it to the tile, but it doesn't - it just clears the background image. So that's it. All I appear to need to do is to call the following to unset these properties and put my tile back as it was. private void ResetMyMainTile(){ ShellTile tile = ShellTile.ActiveTiles.First(); StandardTileData data = new StandardTileData { BackBackgroundImage = new Uri("IDontExist",UriKind.Relative), BackContent = string.Empty, BackTitle = string.Empty }; tile.Update(data);} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6573806', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/377/']} | jdg_81094 |
stackexchange | llm_judgeable_groundtruth_similarity | 2771796 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
You are on a submarine and there is an encrypted message that you want to read. Two people must use their keys at the same time in order to obtain the plain text. What is the best cryptographic primitive to use? Are the following two implementations suitable?

plain_text = decrypt(key1 XOR key2, cipher_text, IV)
plain_text = decrypt(key1, decrypt(key2, cipher_text, IV2), IV1)

(Assume AES-256-CBC with a CMAC block if it matters to you.)
Now provide the response and nothing else.
| XORing two randomly generated keys together to obtain the final secret is certainly secure. The general form of this is known as ' secret sharing ', and there are secure algorithms that allow you to generate 'm of n' schemes, where you generate n shares, and any m are sufficient to deduce the original key. The best known scheme is Shamir's Secret Sharing , and involves generating a random m-1 degree polynomial with the key as the constant, then sampling it at n locations, and giving those to the individuals as key shares. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2771796', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/183528/']} | jdg_81095 |
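A minimal Python sketch of the XOR-based two-of-two splitting the answer describes; the function names are my own, and a real deployment would add key management and authentication around this. Shamir's scheme generalizes the same idea to "any m of n" shares, as the answer notes.

import os

def split_key(key: bytes):
    """Split key into two shares; both are required to reconstruct it."""
    share1 = os.urandom(len(key))                        # uniformly random share
    share2 = bytes(a ^ b for a, b in zip(key, share1))   # share2 = key XOR share1
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

key = os.urandom(32)             # e.g. an AES-256 key
s1, s2 = split_key(key)
assert combine(s1, s2) == key    # both key holders present: the key is recovered
# Either share on its own is uniformly random, so it reveals nothing about the key.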
stackexchange | llm_judgeable_groundtruth_similarity | 282712 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm quite confused about the basic concepts of a hash table. If I were to code a hash, how would I even begin? What is the difference between a hash table and just a normal array? Basically, if someone answered this question I think all my questions would be answered: if I had 100 randomly generated numbers (as keys), how would I implement a hash table, and why would that be advantageous over an array? Pseudo-code or Java would be appreciated as a learning tool...
Now provide the response and nothing else.
| The answers so far have helped to define hash tables and explain some theory, but I think an example may help you get a better feeling for them. What is the difference between a hash table and just a normal array? A hash table and an array are both structures that allow you to store and retrieve data. Both allow you to specify an index and retrieve a value associated with it. The difference, as Daniel Spiewak noted, is that the indices of an array are sequential , while those of a hash table are based on the value of the data associated with them. Why would I use a hash table? A hash table can provide a very efficient way to search for items in large amounts of data, particularly data that is not otherwise easily searchable. ("Large" here means ginormous , in the sense that it would take a long time to perform a sequential search). If I were to code a hash how would I even begin? No problem. The simplest way is to invent an arbitrary mathematical operation that you can perform on the data, that returns a number N (usually an integer). Then use that number as the index into an array of "buckets" and store your data in bucket # N . The trick is in selecting an operation that tends to place values in different buckets in a way that makes it easy for your to find them later. Example: A large mall keeps a database of its patrons' cars and parking locations, to help shoppers remember where they parked. The database stores make , color , license plate , and parking location . On leaving the store a shopper finds his car by entering the its make and color. The database returns a (relatively short) list of license plates and parking spaces. A quick scan locates the shopper's car. You could implement this with an SQL query: SELECT license, location FROM cars WHERE make="$(make)" AND color="$(color)" If the data were stored in an array, which is essentially just a list, you can imagine implementing the query by scanning an array for all matching entries. On the other hand, imagine a hash rule: Add the ASCII character codes of all the letters in the make and color, divide by 100, and use the remainder as the hash value. This rule will convert each item to a number between 0 and 99, essentially sorting the data into 100 buckets. Each time a customer needs to locate a car, you can hash the make and color to find the one bucket out of 100 that contains the information. You've immediately reduced the search by a factor of 100! Now scale the example to huge amounts of data, say a database with millions of entries that is searched based on tens of criteria. A "good" hash function will distribute the data into buckets in a way that minimizes any additional searching, saving a significant amount of time. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/282712', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/36545/']} | jdg_81096 |
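Since the question asks for pseudo-code, here is a small Python sketch of the mall example from the answer above. The class and method names are invented for the illustration, and the hash rule is the deliberately simple "sum the character codes, keep the remainder" rule described there.

class CarLookup:
    def __init__(self, num_buckets=100):
        self.buckets = [[] for _ in range(num_buckets)]

    def _hash(self, make, color):
        # sum of the character codes, reduced to a bucket number
        return sum(ord(ch) for ch in make + color) % len(self.buckets)

    def add(self, make, color, plate, spot):
        self.buckets[self._hash(make, color)].append((make, color, plate, spot))

    def find(self, make, color):
        # only one of the buckets has to be scanned, not the whole data set
        bucket = self.buckets[self._hash(make, color)]
        return [(plate, spot) for m, c, plate, spot in bucket if m == make and c == color]

lot = CarLookup()
lot.add("honda", "red", "ABC-123", "B14")
lot.add("ford", "blue", "XYZ-789", "C02")
print(lot.find("honda", "red"))   # [('ABC-123', 'B14')]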
stackexchange | llm_judgeable_groundtruth_similarity | 416820 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I used to think deleting my bash history was enough to clear my bash history, but yesterday my cat was messing around the right side of my keyboard and when I got back into my computer I saw something I typed a month ago, then I started to press all the keys like crazy looking for what could've triggered it. Turns out UPARROW key shows my bash history even after deleting .bash_history. How can I delete my bash history for real?
Now provide the response and nothing else.
| In some cases (some bash versions), doing a: $ history -c; history -w Or simply $ history -cw Will clear history in memory (up and down arrow will have no commands to list) and then write that to the $HISTFILE file (if the $HISTFILE gets truncated by the running bash instance). Sometimes bash choose to not truncate the $HISTFILE file even with histappend option unset and $HISFILEZIZE set to 0. In such cases, the nuke option always works: history -c; >$HISTFILE That clear the history list of commands recorded in memory and all commands previously recorded to file. That will ensure that the running shell has no recorded history either in memory or disk, however, other running instances of bash (where history is active) may have a full copy of commands read from $HISTFILE when bash was started (or when a history -r is executed). If it is also required that nothing else (no new commands) of the present session would be written to the history file, then, unset HISTFILE will prevent any such logging. | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/416820', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/270184/']} | jdg_81097 |
stackexchange | llm_judgeable_groundtruth_similarity | 279024 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have seen wrapper script examples which, in a nutshell, look like the following:

#!/bin/bash
myprog=sleep
echo "This is the wrapper script, it will exec "$myprog""
exec "$myprog" "$@"

As seen above, they use exec to replace the newly created shell almost immediately with $myprog. One could achieve the same without exec:

#!/bin/bash
myprog=sleep
echo "This is the wrapper script, it will exec "$myprog""
"$myprog" "$@"

In this last example, a new bash instance is started and then $myprog is started as a child process of the bash instance. What are the benefits of the first approach?
Now provide the response and nothing else.
| Using exec makes the wrapper more transparent, i.e. it makes it less likely that the user or application that calls the script needs to be aware that it's a relay that in turns launches the “real” program. In particular, if the caller wants to kill the program, they'll just kill the process they just launched. If the wrapper script runs a child process, the caller would need to know that they should find out the child of the wrapper and kill that instead. The wrapper script could set a trap to relay some signals, but that wouldn't work with SIGSTOP or SIGKILL which can't be caught. Calling exec also saves a bit of memory (and other resources such as PIDs etc.) since it there's no need to keep an extra shell around with nothing left to do. If there are multiple wrappers, the problems add up (difficulty in finding the right process to kill, memory overhead, etc.). Some shells (e.g. the Korn shell) automatically detect when a command is the last one and there's no active trap and put an implicit exec , but not all do (e.g. not bash). | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/279024', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/33060/']} | jdg_81098 |
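The same transparency argument can be demonstrated outside the shell. The sketch below is a hypothetical Python wrapper (not from the answer) that uses os.execvp the way the first script uses exec: after the call, the wrapped program owns the wrapper's PID, so a caller that signals or waits on the wrapper is really dealing with the program itself.

import os
import sys

prog = "sleep"                  # stand-in for the real program being wrapped
args = sys.argv[1:] or ["5"]

print("wrapper pid %d is about to become %s" % (os.getpid(), prog))
os.execvp(prog, [prog] + args)  # replaces this process; never returns on success
# Without exec (e.g. with subprocess.run), a child process would be created
# and the wrapper would stay alive just to wait for it.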
stackexchange | llm_judgeable_groundtruth_similarity | 44319313 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I use PowerShell commands/scripts with Gitlab CI in a .gitlab-ci.yml file which is used to deploy to gitlab pages? I am trying to execute the build.ps1 file from .gitlab-ci.yml , but when it reaches the build.ps1 line, it gives an error saying /bin/bash: line 5: .build.ps1: command not found I am trying to use the PowerShell script to convert a file in my repo and have the converted file deployed to gitlab pages using .gitlab-ci.yml Here is my code: .gitlab.yml pages: stage: deploy script: - mkdir .public - .\build.ps1 - cp -r * .public - mv .public public artifacts: paths: - public only: - master
Now provide the response and nothing else.
| I have been able to figure out a solution to my own question. Solution To Run PowerShell Command/Script from a .gitlab-ci.yml file on a gitlab.com using the Gitlab CI, you need to make sure that the contents of your .gitlab-ci.yml file is as shown below. Note: The .gitlab-ci.yml below works without having to install a Gitlab Runner on your own machine and has been tested on the http://gitlab.com website. image: philippheuer/docker-gitlab-powershellpages: stage: deploy script: - mkdir .public # run PowerShell Script - powershell -File build.ps1 # run PowerShell Command - powershell -Command "Get-Date" - cp -r * .public - mv .public public artifacts: paths: - public only: - master | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44319313', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1992224/']} | jdg_81099 |
stackexchange | llm_judgeable_groundtruth_similarity | 2966803 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a website using a huge background image (2000x1500) in a div container (100% x 100%). When I open that site on Safari on an iPad it gets scaled down (~40%) in a different proportion than the content (~80%). I moved the background to an img-tag in a div with 100% width and 100% height and an overflow setting "hidden". Exactly the same happens. Is there a CSS Setting that can help Safari to scale down background images in the same proportion as the content?
Now provide the response and nothing else.
| You should definitely create a separate stylesheet for the iPad.You can use the following to do so: <link rel="stylesheet" media="only screen and (max-device-width: 1024px)" href="../ipad.css" type="text/css" /> On this link, you will find information about the orientation of your website on the iPad and how to deal with it. My advice would be, to create a separate stylesheet (css file) for the iPad version of your site, no matter what you do, you should just create it and add a link tag like the one shown above. If you have a background of a picture that is 2000x1500px for the iPad version, you can reduce it to fit the iPad, if that's the only thing you've got a problem with. I'd say you should reduce the size of the image to 1024px and declare it in the separate stylesheet for the iPad, and you will see the end result. Let me know how it goes. BTW you should read this article as well: http://www.inspiredm.com/2010/02/09/ipad-design/ | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2966803', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/344570/']} | jdg_81100 |
stackexchange | llm_judgeable_groundtruth_similarity | 4596447 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I check in Java if a file exists on a remote server (served by HTTP), having its URL? I don't want to download the file, just check its existence.
Now provide the response and nothing else.
| import java.net.*;
import java.io.*;

public static boolean exists(String URLName) {
    try {
        HttpURLConnection.setFollowRedirects(false);
        // note: you may also need
        // HttpURLConnection.setInstanceFollowRedirects(false)
        HttpURLConnection con = (HttpURLConnection) new URL(URLName).openConnection();
        con.setRequestMethod("HEAD");
        return (con.getResponseCode() == HttpURLConnection.HTTP_OK);
    } catch (Exception e) {
        e.printStackTrace();
        return false;
    }
}

If the connection to a URL (made with HttpURLConnection) returns HTTP status code 200, then the file exists. EDIT: Note that since we only care whether it exists or not, there is no need to request the entire document. We can just request the header using the HTTP HEAD request method to check if it exists. Source: http://www.rgagnon.com/javadetails/java-0059.html | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4596447', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/554972/']} | jdg_81101
stackexchange | llm_judgeable_groundtruth_similarity | 28328 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can two particles remain entangled even if one is past the event horizon of a black hole? If both particles are in the black hole? What changes occur when the particle(s) crosses(cross) the event horizon? I have basic Physics knowledge, so I request that answers not assume an in-depth understanding of the field. Thank you all in advance!
Now provide the response and nothing else.
| This question is the black hole information paradox. If you take two entangled particles, make a black hole by colliding two highly energetic photons, throw in one of the two entangled particles, and wait for the black hole to decay, is the remaining untouched particle entangled with anything anymore? In Hawking's original view, the infalling particle would no longer be in communication with our universe, and the entanglement would be converted to a pure density matrix from our point of view. The particle outside would no longer be entangled with anything we can see in our causal part of the universe. Then when the black hole decays, the outgoing Hawking particles of the decay would not be entangled with the untouched particle. This point of view is incompatible with quantum mechanics, since it takes a pure state to a density matrix. It is known today to be incorrect, since in models of quantum gravity when AdS/CFT works, the theory is still completely unitary. This means that the particle stays entangled with something as its partner crosses the horizon. This "thing" is whatever degrees of freedom the black hole has, those degrees of freedom that make up its entropy. When the black hole decays completely, the outgoing particles are determined by these microscopic variables, and at no point was there ever a loss of coherence in the entanglement. This point of view requires that the information about the particle that fell through the horizon is also contained in the measurable outside state of the black hole. This is t'Hoofts holographic principle as extended into Susskind's black hole complementarity, the principle that the degrees of freedom of a black hole encode the infalling matter completely in externally measurable variables. This point of view is nearly universal today, because we have model quantum gravity situations where this is clearly what happens. The details of the degrees of freedom of a four dimensional neutral black hole in our universe are not completely understood, so it is not possible to say exactly what the external particle is entangled with as the infalling particle gets to the horizon. But the basic picture is that the infalling particle doesn't fall through from the external point of view, but smears itself nonlocally on the horizon (like a string theory string getting boosted and long). The external particle is still entangled with this second representation of the infalling particle. This means that the same thing is described in two different ways, the interior and the exterior. But since no observer measures both at the same time, it is consistent with quantum mechanics to just have a unitary transformation that reconstructs the interior states from those states of the exterior that can be measured at infinity by scattering experiments. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/28328', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/7600/']} | jdg_81102 |
stackexchange | llm_judgeable_groundtruth_similarity | 124977 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Motivation As Mathematica v.11 was released earlier this month with a host of new [[experimental]] functions and a limited number of examples on curated data that do not cover all layers, options, etc., I am posting this with the intent of augmenting Mathematica's documentation via the cumulative knowledge of the coding wizards of Stack Exchange. Why read this? It includes more details and examples of than the documentation (currently). Also, you will see where there are a lot of peculiarities that may or may not be causing errors in your code. For example, if you are using NetDecoder for classes, it automatically assumes that you are using the UnitVector encoding, there is not a decoder for Booleans, etc. Layers An Overview of all NetLayers DeconvolutionLayer Example Useful Functions An Overview of Useful Net Functions (e.g. NetEncoder ) Useful User-Defined Functions Formatting Input Troubleshooting Formatting Input Disclaimer Until this point I have implemented my neural networks in Mathematica by hand. I have a good grasp on the theory behind neural networks (and the associated mathematics); however, Mathematica - being proprietary - does not make it clear as to which algorithms they choose to use to implement their network layers $*$ . Therefore I honestly am not sure of the some of these layers' underpinnings. $*$ Correction from Sebastian Mathematica uses the same definitions as all the other frameworks. Definitions have become rather standardized, as everyone wants to use the same backends, like cuDNN. Contributors Both @Sebastian and @JHM have supplied useful comments and corrections relating to Mathematica code. I have noted where they have been edited in. I also thank @Sascha for the clear example on the Deconvolution layer. In addition @EmilioPisanty has suggested updating to this new format - which I agree is spiffy - borrowed from @Mr.Wizard . @TaliesinBeynon has assisted with input formatting. Thank you for helping keep this as accurate as possible.
Now provide the response and nothing else.
| Thank you for your summary. I would like to clarify and correct a few of your points. however, Mathematica - being proprietary - does not make it clear as to which algorithms they choose to use to implement their network layers. Therefore I honestly am not sure of the some of these layers' underpinnings. We use the same definitions as all the other frameworks. Definitions have become rather standardized, as everyone wants to use the same backends, like cuDNN. Because Mathematica designed each Net's nodes / neurons / "ports" to recieve only one input If you are referring to layers, this is not true: all the loss functions take two inputs. Regarding EmbeddingLayer : This to me is odd, as NetEncoder has a specific option for Booleans; Suppose you are trying to create a vector representation of a very high-dimensional categorical input (like words). NetEncoder will produce one-hot encoded vectors of the same dimension as the number of categories. This can be absolutely massive, and make training impossible. EmbeddingLayer solves this: it maps integers directly to a low-dimensional vector subspace, and this embedding is trained. This allows things like Word2Vec to be implemented (for example). The docs should make this use-case clearer. Also regarding EmbeddingLayer : This is the only net layer that I am aware of, that the documentation specifically calls "trainable" for whatever that is worth. This is not true, see ConvolutionLayer , DotPlusLayer , etc. "Trainable" just means that it has parameters that can be modified during training. Regarding SummationLayer : This really doesn't need any explaination. It is basically the same as Total[] To be more precise, its more like Total[array, Infinity] . TotalLayer acts more like Total[{array1, array2, ...}] Other pecularities are that NetDecoder automatically assumes that your classes are in a UnitVector. Even if you specify "Index" It will produce an error. This is because your network cannot output an index: no layer allows you to do this. Also, it assumes you are giving the decoder a probability vector for the classes, which allows you to use it like a ClassifierFunction . Mathematica calls your layers ports. NetPort lets you access them. You can name them... No, layers are not called ports. The inputs and outputs of layers and containers (like NetGraph and NetChain ) are called "Ports". NetPort is a way to unambiguously refer to one of these inputs/outputs. For example, MeanSquaredLossLayer has two input ports, "Target" and "Input". Consider: NetGraph[{ElementwiseLayer[Tanh], DropoutLayer[], MeanSquaredLossLayer[]}, {1 -> NetPort[3, "Target"], 2 -> NetPort[3, "Input"]}] NetPort allows you to specify exactly which input/output you are referring to. In my mind, I think their implementation of NetGraph is kind of silly. Why? Because if you define a net and then pass it to NetGraph, it doesnt produce a graph; It does produce a graph, just the net you passed in looks like any other layer (until you click on it and see its structure). The philosophy is that you can use NetChain or NetGraph objects exactly as you would use normal layers inside other NetChain or NetGraph objects. It allows for nice definitions of things like Inception networks etc. It also solves namespacing issues elegantly when composing containers. | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/124977', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/42578/']} | jdg_81103 |
stackexchange | llm_judgeable_groundtruth_similarity | 27777071 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I ask because my compiler seems to think so, even though I don’t. echo 'int main;' | cc -x c - -Wall echo 'int main;' | c++ -x c++ - -Wall Clang issues no warning or error with this, and gcc issues only the meek warning: 'main' is usually a function [-Wmain] , but only when compiled as C. Specifying a -std= doesn’t seem to matter. Otherwise, it compiles and links fine. But on execution, it terminates immediately with SIGBUS (for me). Reading through the (excellent) answers at What should main() return in C and C++? and a quick grep through the language specs, it would certainly seem to me that a main function is required. But the verbiage from gcc’s -Wmain (‘main’ is usually a function) (and the dearth of errors here) seems to possibly suggest otherwise. But why? Is there some strange edge-case or “historical” use for this? Anyone know what gives? My point, I suppose, is that I really think this should be an error in a hosted environment, eh?
Now provide the response and nothing else.
| Since the question is double-tagged as C and C++, the reasoning for C++ and C would be different: C++ uses name mangling to help the linker distinguish between textually identical symbols of different types, e.g. a global variable xyz and a free-standing global function xyz(int). However, the name main is never mangled. C does not use mangling, so it is possible for a program to confuse the linker by providing a symbol of one kind in place of a different symbol, and have the program successfully link. That is what's going on here: the linker expects to find the symbol main, and it does. It "wires" that symbol in as if it were a function, because it does not know any better. The portion of the runtime library that passes control to main asks the linker for main, so the linker gives it the symbol main, letting the link phase complete. Of course this fails at runtime, because main is not a function. Here is another illustration of the same issue:

file x.c:

#include <stdio.h>
int foo(); // <<== main() expects this
int main()
{
    printf("%p\n", (void*)&foo);
    return 0;
}

file y.c:

int foo; // <<== external definition supplies a symbol of a wrong kind

compiling:

gcc x.c y.c

This compiles, and it would probably run, but it's undefined behavior, because the type of the symbol promised to the compiler is different from the actual symbol supplied to the linker. As far as the warning goes, I think it is reasonable: C lets you build libraries that have no main function, so the compiler frees up the name main for other uses if you need to define a variable main for some unknown reason. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/27777071', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2351351/']} | jdg_81104
stackexchange | llm_judgeable_groundtruth_similarity | 17695456 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Python 3. I've just installed a Python IDE and I am curious about the following code warning: features = { ... }for k, v in features.items(): print("%s=%s" % (k, v)) Warning is: "For Python3 support should look like ... list(features.items()) " Also there is mention about this at http://docs.python.org/2/library/2to3.html#fixers It also wraps existing usages of dict.items(), dict.keys(), and dict.values() in a call to list. Why is this necessary?
Now provide the response and nothing else.
| You can safely ignore this "extra precautions" warning: your code will work the same even without list in both versions of Python. It would run differently if you needed a list (but this is not the case): in fact, features.items() is a list in Python 2, but a view in Python 3. They work the same when used as an iterable, as in your example. Now, the Python 2 to Python 3 conversion tool 2to3 errs on the side of safety, and assumes that you really wanted a list when you use dict.items() . This may not be the case (as in the question), in which case dict.items() in Python 3 (no wrapping list ) is better (faster, and less memory-consuming, since no list is built). Concretely, this means that Python 2 code can explicitly iterate over the view: for k, v in features.viewitems() (which will be converted in Python 3 by 2to3 to features.items() ). It looks like your IDE thinks that the code is Python 2, because your for statement is very good, in Python 3, so there should be no warning about Python 3 support. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17695456', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/149818/']} | jdg_81105 |
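A short Python 3 snippet makes the difference concrete: d.items() is a live view, while list(d.items()) (what 2to3 conservatively produces) is a snapshot. For plain iteration, as in the question's loop, both behave the same, which is why the warning can be ignored there.

d = {"a": 1, "b": 2}

items_view = d.items()         # Python 3: a dynamic view of the dict
items_list = list(d.items())   # an independent list copy

d["c"] = 3
print(items_view)   # dict_items([('a', 1), ('b', 2), ('c', 3)]) -- reflects the change
print(items_list)   # [('a', 1), ('b', 2)] -- frozen at the time of the call

for k, v in d.items():          # iterating the view builds no extra list
    print("%s=%s" % (k, v))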
stackexchange | llm_judgeable_groundtruth_similarity | 1658557 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I (if it is possible at all) initialize multiple variables of different type in a C# for loop? Example: for (MyClass i = 0, int j = 1; j<3; j++,i++)
Now provide the response and nothing else.
| It can't be done. Put one of the declarations before the loop: MyClass i = 0;for (int j = 1; j < 3; j++, i++) Or for symmetry, both of them: MyClass i = 0;int j = 1;for (; j < 3; j++, i++) It's also possible that one of the variables is more primary than the other. In that case it might be neater to have one be the loop variable, and deal with the other seperately, like this: MyClass i = 0;for (int j = 0; j < 3; j++){ ... i++;} Note that if i and j were of the same type, then you could declare them both in the for-loop: for (int i = 0, j = 1; j < 3; j++, i++) | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1658557', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/143211/']} | jdg_81106 |
stackexchange | llm_judgeable_groundtruth_similarity | 10935110 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
DrRacket running R5RS says that 1### is a perfectly valid Scheme number and prints a value of 1000.0 . This leads me to believe that the pound signs (#) specify inexactness in a number, but I'm not certain. The spec also says that it is valid syntax for a number literal, but it does not say what those signs mean. Any ideas as to what the # signs in Scheme number literals signifiy?
Now provide the response and nothing else.
| The hash syntax was introduced in 1989. There was a discussion on inexact numbers on the Scheme authors mailing list, which contains several nice ideas. Some caught on and some didn't. http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1989/msg00178.html One idea that stuck was introducing the # to stand for an unknown digit. If you have a measurement with two significant digits, you can indicate with 23## that the digits 2 and 3 are known, but that the last digits are unknown. If you write 2300, then you can't see that the two zeros aren't to be trusted. When I saw the syntax I expected 23## to evaluate to 2350, but (I believe) the interpretation is implementation dependent. Many implementations interpret 23## as 2300. The syntax was formally introduced here: http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1989/msg00324.html EDIT From http://groups.csail.mit.edu/mac/ftpdir/scheme-reports/r3rs-html/r3rs_8.html#SEC52 An attempt to produce more digits than are available in the internal machine representation of a number will be marked with a "#" filling the extra digits. This is not a statement that the implementation knows or keeps track of the significance of a number, just that the machine will flag attempts to produce 20 digits of a number that has only 15 digits of machine representation: 3.14158265358979##### ; (flo 20 (exactness s)) EDIT2 Gerald Jay Sussman writes why they introduced the syntax here: http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1994/msg00096.html | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10935110', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/677242/']} | jdg_81107 |
stackexchange | llm_judgeable_groundtruth_similarity | 37833 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What would be a "reasonable" minimal number of observations to look for a trend over time with a linear regression? What about fitting a quadratic model? I work with composite indices of inequality in health (SII, RII), and have only 4 waves of the survey, so 4 points (1997, 2001, 2004, 2008). I am not a statistician, but I have the intuitive impression 4 points are not sufficient. Do you have an answer, and/or references? Thanks a lot.
Now provide the response and nothing else.
| Peter's rule of thumb of 10 per covariate is a reasonable rule. A straight line can be fit perfectly with any two points regardless of the amount of noise in the response values and a quadratic can be fit perfectly with just 3 points. So clearly in almost any circumstance, it would be proper to say that 4 points are insufficient. However, like most rules of thumb, it does not cover every situation. Cases, where the noise term in the model has a large variance, will require more samples than a similar case where the error variance is small. The required number of sample points does depend on the objectives. If you are doing exploratory analysis just to see if one model (say linear in a covariate) looks better than another (say a quadratic function of the covariate) less than 10 points may be enough. But if you want very accurate estimates of the correlation and regression coefficients for the covariates you could need more than 10 per covariate. A criterion for prediction accuracy could require even more samples than accurate parameter estimates. Note that the variance of the estimates and prediction all involve the variance of the model's error term. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/37833', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/14299/']} | jdg_81108 |
stackexchange | llm_judgeable_groundtruth_similarity | 45743 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This may be a very basic question but I am not seeing how it works. Consider the standard example of an ice skater rotating about his/her center of mass and pulling in his/her arms. The torque is zero so we have conservation of angular momentum. This implies that $\omega$ increases to keep $I\omega$ constant, but then $K_{rot}=\frac{1}{2}I\omega^2$ doesn't stay constant, it increases. This implies that there is work done, but what force is doing this work?
Now provide the response and nothing else.
| The energy comes from the ice-skater's muscles; they have to work to pull their arms in. There is no external work done on the skater - the energy is converted from the chemical potential energy stored in the skater's body to kinetic energy. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/45743', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']} | jdg_81109 |
stackexchange | llm_judgeable_groundtruth_similarity | 123226 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 3D data - a bunch of triples like {{x1, y1, z1}, {x2, y2, z2}, ...} , and I know they lie on a curve rather than a surface; in fact, I need a least squares fit of these points to a 3D straight line. That is, I am looking for six numbers ax, bx, ay, by, az, bz such that my points are as close as possible to the line {ax*t + bx, ay*t + by, az*t + bz} , with t running through reals. I could not find a way to do it in Mathematica . Does anybody know a way?
Now provide the response and nothing else.
| As it turns out, you don't need FindMinimum[] in the linear case of total least squares / orthogonal distance regression; all that is needed is a clever application of SVD: BlockRandom[SeedRandom[42, Method -> "MersenneTwister"]; (* for reproducibility *) p = RandomReal[{-2, 2}, 3]; (* point on true line *) (* direction cosines *) q = Normalize[RandomVariate[NormalDistribution[], 3]]; (* random points clustered near the line *) pts = Table[p + t q + RandomVariate[NormalDistribution[0, 1/10], 3], {t, 0, 1, 1/90}];](* orthogonal fit *)lin = InfiniteLine[Mean[pts], Flatten[Last[ SingularValueDecomposition[Standardize[pts, Mean, 1 &], 1]]]];Legended[Graphics3D[{{Directive[AbsolutePointSize[6], Brown], Point[pts]}, {Directive[AbsoluteThickness[4], ColorData[97, 1]], lin}, {Directive[AbsoluteThickness[4], ColorData[97, 3]], InfiniteLine[p, q]}}, Axes -> True], LineLegend[{ColorData[97, 1], ColorData[97, 3]}, {"orthogonal fit", "true line"}]] | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/123226', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/35000/']} | jdg_81110 |
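For readers outside Mathematica, the same orthogonal (total least squares) line fit can be sketched with NumPy's SVD. This is an illustrative translation of the idea in the answer above, not part of it; the point cloud below is synthetic and all names are invented for the demo.

import numpy as np

rng = np.random.default_rng(42)
true_point = np.array([0.3, -1.0, 2.0])          # "true" line used only to generate test data
true_dir = np.array([1.0, 2.0, -0.5])
true_dir = true_dir / np.linalg.norm(true_dir)
t = np.linspace(0.0, 1.0, 91)
pts = true_point + np.outer(t, true_dir) + rng.normal(scale=0.1, size=(91, 3))

centroid = pts.mean(axis=0)                # a point on the fitted line
_, _, vh = np.linalg.svd(pts - centroid)   # SVD of the centred cloud
direction = vh[0]                          # best-fit direction = first right singular vector

# Parametric form of the orthogonal least-squares line: p(t) = centroid + t * direction
print("point on line:", centroid)
print("direction cosines:", direction)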
stackexchange | llm_judgeable_groundtruth_similarity | 310075 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The standard interpretation of $|\psi|^2$ is taken as the probability density of the wave-function collapsing in the given infinitesimally small region. The way this probability is interpreted (at least in the textbook by Griffiths) is that if I prepare a large number of identical states and then perform a measurement on each of them, then the probability associated with $|\psi|^2$ actually represents the statistical results of the measurements made (individually) on all these wave-functions of the entire ensemble. This seems like a good enough argument to accept $|\psi|^2$ as the probability density of the wave-function collapsing in a given infinitesimally small region even if I have only one wave-function. But the No-Clone Theorem suggests that it is fundamentally impossible to make two states that are completely identical and thus, to my understanding, it makes absolutely no sense to talk about those identical wave-functions that were used to project $|\psi|^2$ as the probability density. In other words, if I can't replicate a wave-function at all, then how do I confirm in my laboratory that the probabilities obtained from $|\psi|^2$ actually represent the likelihood of the collapse happening in a given region? Edit Is the catch that the No-Clone Theorem suggests only that a given state can't be evolved into a state identical to another state, and that it allows two states to be identical if they have been so from eternity?
Now provide the response and nothing else.
| But the No-Clone Theorem suggests that it is fundamentally impossible to make two states that are completely identical[...] No, it doesn't say that. It says that it is impossible to build a quantum apparatus (modeled by a unitary operator and a state space for that apparatus) that takes any quantum state as input and then outputs the exact same state, i.e. "clones" it. It does not prohibit the existence of an apparatus that produces a never-ending stream of copies of a fixed state, and it says nothing at all about two states being "accidentally" the same. If you consider that the crucial feature of bosons is that more than one of them can exist in the same state, the no-cloning theorem in your phrasing would be in outright contradiction to what quantum mechanics actually says. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/310075', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/87745/']} | jdg_81111 |
stackexchange | llm_judgeable_groundtruth_similarity | 30452104 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to block a user permanently. I don't understand why this code is not working. This line UserManager.IsLockedOut(user.Id); always returns false instead of true . Maybe it's necessary to put this line UserManager.UserLockoutEnabledByDefault = true; in user registration stage? using (var _db = new ApplicationDbContext()){ UserStore<DALApplicationUser> UserStore = new UserStore<DALApplicationUser>(_db); UserManager<DALApplicationUser> UserManager = new UserManager<DALApplicationUser>(UserStore); UserManager.UserLockoutEnabledByDefault = true; DALApplicationUser user = _userService.GetUserByProfileId(id); bool a = UserManager.IsLockedOut(user.Id); UserManager.SetLockoutEnabled(user.Id, true); a = UserManager.IsLockedOut(user.Id); _db.SaveChanges();}
Now provide the response and nothing else.
| The line UserManager.SetLockoutEnabled(user.Id, true); is not locking or unlocking the account. This method is used to permanently enable or disable the locking out process for the given user account. As it stands, you are making a call that is basically setting this user account up to be bound by the rules of account locking. Making a call with the second parameter as false ie: UserManager.SetLockoutEnabled(user.Id, false); would allow you to setup a user account that is exempt from the rules of locking - this may be useful for an admin account. Here is the code for UserManager.IsLockedOutAsync : /// <summary>/// Returns true if the user is locked out/// </summary>/// <param name="userId"></param>/// <returns></returns>public virtual async Task<bool> IsLockedOutAsync(TKey userId){ ThrowIfDisposed(); var store = GetUserLockoutStore(); var user = await FindByIdAsync(userId).WithCurrentCulture(); if (user == null) { throw new InvalidOperationException(String.Format(CultureInfo.CurrentCulture, Resources.UserIdNotFound, userId)); } if (!await store.GetLockoutEnabledAsync(user).WithCurrentCulture()) { return false; } var lockoutTime = await store.GetLockoutEndDateAsync(user).WithCurrentCulture(); return lockoutTime >= DateTimeOffset.UtcNow;} As you can see, for a user to be classed as locked out, the lockout must be enabled as above and the user must have a LockoutEndDateUtc value that is greater than or equal to the current date. So, to "permanently" lock an account, you could do the following: using (var _db = new ApplicationDbContext()){ UserStore<DALApplicationUser> UserStore = new UserStore<DALApplicationUser>(_db); UserManager<DALApplicationUser> UserManager = new UserManager<DALApplicationUser>(UserStore); UserManager.UserLockoutEnabledByDefault = true; DALApplicationUser user = _userService.GetUserByProfileId(id); bool a = UserManager.IsLockedOut(user.Id); //user.LockoutEndDateUtc = DateTime.MaxValue; //.NET 4.5+ user.LockoutEndDateUtc = new DateTime(9999, 12, 30); _db.SaveChanges(); a = UserManager.IsLockedOut(user.Id);} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30452104', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4786962/']} | jdg_81112 |
stackexchange | llm_judgeable_groundtruth_similarity | 16046743 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to change the state of a Button from DISABLED to NORMAL when some event occurs. The button is currently created in the DISABLED state using the following code: self.x = Button(self.dialog, text="Download", state=DISABLED, command=self.download).pack(side=LEFT) How can I change the state to NORMAL ?
Now provide the response and nothing else.
| You simply have to set the state of the your button self.x to normal : self.x['state'] = 'normal' or self.x.config(state="normal") This code would go in the callback for the event that will cause the Button to be enabled. Also, the right code should be: self.x = Button(self.dialog, text="Download", state=DISABLED, command=self.download)self.x.pack(side=LEFT) The method pack in Button(...).pack() returns None , and you are assigning it to self.x . You actually want to assign the return value of Button(...) to self.x , and then, in the following line, use self.x.pack() . | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/16046743', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1762785/']} | jdg_81113 |
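A minimal self-contained sketch of the pattern described above (widget and callback names are invented for the example): the button is created and packed on separate lines, starts disabled, and is re-enabled from a callback.

import tkinter as tk

root = tk.Tk()

def enable_download():
    # Whatever event should unlock the download calls this.
    download_btn.config(state="normal")

download_btn = tk.Button(root, text="Download", state="disabled")
download_btn.pack(side="left")   # pack() returns None, so keep it on its own line

unlock_btn = tk.Button(root, text="Unlock", command=enable_download)
unlock_btn.pack(side="left")

root.mainloop()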
stackexchange | llm_judgeable_groundtruth_similarity | 99473 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I merge two trees, t and s , so as to form the square matrix below, where both trees start from the top-left (as shown) and occupy the lower and upper triangular parts (in a transposed fashion)? Trees can get larger. The diagonal contains zeros, but I will store some other information there later. Up to transpose, it doesn't matter which tree is on the bottom. t = {{1}, {1, 2}, {1, 2, 3}, {1, 2, 3, 4}}s = 10 t output {{0, 10, 10, 10, 10}, {1, 0, 20, 20, 20}, {1, 2, 0, 30, 30}, {1, 2, 3, 0, 40}, {1, 2, 3, 4, 0}} or in matrix form:
Now provide the response and nothing else.
| You are right, it can be done in a fraction of second. One can explicitly construct an array of indexes blockArray[mat_] := SparseArray[ Tuples[Range@# - {1, 0, 0}].{Rest@#, {1, 0}, {0, 1}} &@Dimensions@mat -> Flatten@mat] Timings: matrices = RandomReal[1, {48, 128, 128}];s1 = SparseArray@ ArrayFlatten@ReleaseHold@DiagonalMatrix[Hold /@ matrices]; // RepeatedTiming(* {7.56, Null} *)s2 = SparseArray[Band@{1, 1} -> matrices]; // RepeatedTiming(* {4.03, Null} *)s3 = blockArray[matrices]; // RepeatedTiming(* {0.097, Null} *)TrueQ[s1 == s2 == s3](* True *) For further acceleration you can create the internal structure of the SparseArray directly c = Compile[{{b, _Integer}, {h, _Integer}, {w, _Integer}}, Partition[Flatten@Table[k + i w, {i, 0, b - 1}, {j, h}, {k, w}], 1], CompilationTarget -> "C", RuntimeOptions -> "Speed"];blockArray2[mat_] := SparseArray @@ {Automatic, # {##2}, 0, {1, {Range[0, 1 ##, #3], c@##}, Flatten@mat}} & @@ Dimensions@mats4 = blockArray2[matrices]; // RepeatedTiming(* {0.031, Null} *)s3 == s4(* True *) | {} | {'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/99473', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/35383/']} | jdg_81114 |
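As a rough analogue outside Mathematica (not part of the answer above), SciPy can assemble the same kind of block-diagonal sparse matrix from a stack of dense blocks; the array name and sizes here are illustrative only, chosen to mirror the benchmark above.

import numpy as np
from scipy.sparse import block_diag

blocks = np.random.rand(48, 128, 128)          # 48 dense 128x128 blocks (illustrative sizes)
big = block_diag(list(blocks), format="csr")   # one sparse block-diagonal matrix
print(big.shape)                               # (6144, 6144)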
stackexchange | llm_judgeable_groundtruth_similarity | 29898364 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried to bind the MainActivity to a foreground service, then got the following Exceptions, already searched more than one hour, no idea what i did wrongly, pls save me. java.lang.ClassCastException: android.os.BinderProxy cannot be cast to com.leonard.sg.okcoin.service.SyncAndTradeService$SyncAndTradeBinder at com.leonard.sg.okcoin.MainActivity$1.onServiceConnected(MainActivity.java:50) at android.app.LoadedApk$ServiceDispatcher.doConnected(LoadedApk.java:1101) at android.app.LoadedApk$ServiceDispatcher$RunConnection.run(LoadedApk.java:1118) at android.os.Handler.handleCallback(Handler.java:725) at android.os.Handler.dispatchMessage(Handler.java:92) at android.os.Looper.loop(Looper.java:137) at android.app.ActivityThread.main(ActivityThread.java:5227) at java.lang.reflect.Method.invokeNative(Native Method) at java.lang.reflect.Method.invoke(Method.java:511) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:795) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:562) at dalvik.system.NativeStart.main(Native Method) The code in my MainActivity: private SyncAndTradeService syncAndTradeService;private boolean hasBounded = false;private Intent syncAndTradeServiceIntent;private ServiceConnection syncAndTradeServiceConnection = new ServiceConnection() { @Override public void onServiceConnected(ComponentName name, IBinder service) { SyncAndTradeService.SyncAndTradeBinder syncAndTradeBinder = (SyncAndTradeService.SyncAndTradeBinder) service; syncAndTradeService = syncAndTradeBinder.getSyncAndTradeService(); hasBounded = true; } @Override public void onServiceDisconnected(ComponentName name) { syncAndTradeService = null; hasBounded = false; }}; And i tried to do this in onCreate method: syncAndTradeServiceIntent = new Intent(this, SyncAndTradeService.class);bindService(syncAndTradeServiceIntent, syncAndTradeServiceConnection, Context.BIND_AUTO_CREATE); And this is my service code: public class SyncAndTradeService extends Service { public static final int MY_FOREGROUND_SERVICE_START_ID = 996539; private IBinder syncAndTradeBinder = new SyncAndTradeBinder(); @Override public void onCreate() { super.onCreate(); } @Override public int onStartCommand(Intent intent, int flags, int startId) { startSyncAndTradeService(); return Service.START_REDELIVER_INTENT; } @Override public IBinder onBind(Intent intent) { return syncAndTradeBinder; } @Override public boolean onUnbind(Intent intent) { return false; } @Override public void onRebind(Intent intent) { super.onRebind(intent); } @Override public void onDestroy() { super.onDestroy(); } public class SyncAndTradeBinder extends Binder { public SyncAndTradeService getSyncAndTradeService() { return SyncAndTradeService.this; } } private void startSyncAndTradeService() { startForeground(MY_FOREGROUND_SERVICE_START_ID, buildFixedNotification()); } private Notification buildFixedNotification() { Intent notificationIntent = new Intent(this, MainActivity.class); notificationIntent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP | Intent.FLAG_ACTIVITY_SINGLE_TOP); PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, notificationIntent, 0); Notification notification = new Notification.Builder(this) .setContentTitle("OKCoin Robot") .setContentText("OKCoin robot is running in background") .setSmallIcon(R.drawable.bitcoin) .setContentIntent(pendingIntent) .build(); return notification; }} And this is my declaration of my service in the Manifest.xml <service android:name=".service.SyncAndTradeService" android:exported="false" 
android:icon="@drawable/bitcoin" android:process=":SyncAndTrade"></service>
Now provide the response and nothing else.
| I couldn't wait, so I kept researching when I was free for the whole day, and fortunately found the solution; I hope this can help some beginners like me. If you run the service in the same process as your application, that means you should declare the service in the Manifest.xml without 'android:process'; if you do this, then the above works absolutely fine. The above exception will be thrown when you try to bind your app component to a service which runs in a different process, so the solution is wrapping your IBinder with AIDL. Following is the code based on the one above. Define your AIDL interface: package com.leonard.sg.okcoin.service.robot.aidl;interface ISyncAndTradeService { boolean startTradeEngine();} then change the onBind method in your service to this: @Overridepublic IBinder onBind(Intent intent) { return new ISyncAndTradeService.Stub() { @Override public boolean startTradeEngine() throws RemoteException { return false; } };} then refine the service connection class to this: private ISyncAndTradeService syncAndTradeService;private boolean hasBounded = false;private Intent syncAndTradeServiceIntent;private ServiceConnection syncAndTradeServiceConnection = new ServiceConnection() { @Override public void onServiceConnected(ComponentName name, IBinder service) { syncAndTradeService = ISyncAndTradeService.Stub.asInterface((IBinder) service); hasBounded = true; } @Override public void onServiceDisconnected(ComponentName name) { syncAndTradeService = null; hasBounded = false; }}; then everything will work fine. But this raises another problem: according to the Google documentation of the attribute 'android:process' of the service declaration, we can do this: android:process The name of the process where the service is to run. Normally, all components of an application run in the default process created for the application. It has the same name as the application package. The element's process attribute can set a different default for all components. But component can override the default with its own process attribute, allowing you to spread your application across multiple processes. If the name assigned to this attribute begins with a colon (':'), a new process, private to the application, is created when it's needed and the service runs in that process. If the process name begins with a lowercase character, the service will run in a global process of that name, provided that it has permission to do so. This allows components in different applications to share a process, reducing resource usage. But in practice, if I declare 'android:process' starting with a character, no matter lowercase or uppercase, I will get this error: DEVICE SHELL COMMAND: pm install -r "/data/local/tmp/com.leonard.sg.okcoin"pkg: /data/local/tmp/com.leonard.sg.okcoinFailure [INSTALL_PARSE_FAILED_MANIFEST_MALFORMED] so the only choice is to start with ':' or '.' | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29898364', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4185127/']} | jdg_81115 |
stackexchange | llm_judgeable_groundtruth_similarity | 237566 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been working with enhancement MOSFETs for a long time, but I have never seen any circuit using a depletion-MOSFET. What are some typical use-cases of the depletion-MOSFET?
Now provide the response and nothing else.
| Indeed, they are not very widely used, but still have a few reason to be available. As discrete devices: As a simple constant-current source if you put a resistor between source and gate, then you create a constant current source: If the current increases, it increases the voltage drop across the resistor and therefore lowers the gate voltage, which will turn the mosfet off a little. If the current decreases, the mosfet turns on a little. This will always find equilibrium and you therefore have a current source with only two components, whose current solely depends on the resistor and gate threshold (not very accurate, though). As part of the startup circuit for SMPS supplies These supplies use a controller chip on the primary side (220V or 110V). The chip needs some low voltage to run (usually 10V), and this voltage can be provided by an auxiliary winding on the transformer in order to be efficient (if you power the chip by dropping the high voltage on primary with a zener, you'll waste some power which becomes significant at low load). This is fine, but when the supply starts, there is no voltage on the auxiliary winding yet, so the controller cannot be powered and it never starts. So, somehow, you have to power the controller by dropping the high voltage, at least during startup. But, once it has started up, and the controller can be powered with the aux winding, you'd like to cut this current path which wastes power. If you do it with a depletion fet, it is very easy: basically, you just have to set its source to the supply pin of the controller, the gate to the ground of the controller, and the drain to the high voltage (this is a simplified view): This way, when the controller is unpowered, the high voltage powers the controller (no voltage at the gate), and once the controller is powered, the high voltage path is interrupted (negative voltage at the gate). Every other way to do it with an enhacement mode fet would be less efficient (more components, more complex, more wasted power). This is why most standard depletion mode fets you can find are actually high voltage parts. As an overvoltage protection element This application is limited to the protection of signals, or low-current supplies, because the depletion fets usually have very high RDSon. This is the typical circuit: Even if the signal voltage goes too high, the gate will be kept at the zener voltage. The output will therefore not be able to go above Vz+VGSthreshold, because the mosfet would then stop conducting. It actually works like a regulator and clamps the signal. You can protect IC inputs with this, the only consequence in the nominal case being the RDSon of the mosfet (lower impedance than just a resistor and a zener). Notice how the above circuit looks like a simple NPN regulator. There is one big difference, though: with the NPN regulator, the output voltage is at Vz-0.6V. With the depletion FET, the output voltage is Vz+VGSth. The clamped output is above the reference. Another example of overvoltage protection usage, with a regulator: The principle is the same as above, except we are using the regulator output directly as the reference fed to the gate (the zener can be avoided). This is where the fact the output of the FET is above the reference is useful: the reference being the regulated 5V, you know you'll have VGSth allowed for the regulator dropout. 
So, since depletion FETs can be easily obtained for high voltage ratings, you can make a regulator able to withstand several hundred volts easily (useful for mains voltage). Once again, just keep in mind this can be done for low currents only (a few tens of mA). Within integrated circuits: They were also used in logic ICs at one time (early 80's). Basically, they were used as the high-level pass element, instead of the P-type FET now used in CMOS ICs. It acted mostly as a pull-up resistor whose value became higher when the output was low, to reduce power consumption while still having low impedance at the high-level state. Example with an inverter gate: See the "depletion-load_NMOS_logic" Wikipedia entry . Additional resources There are several app notes available for more information: from Infineon , describing generic depletion FET usages (the SMPS startup circuit example comes from this doc). from IXYS , describing many current source usages. (The overvoltage protection with 7805 example circuit comes from this doc). from Maxim , describing overvoltage protection for signals. from ALD , which also gives information regarding the construction of depletion-mode FETs. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/237566', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/20404/']} | jdg_81116 |
stackexchange | llm_judgeable_groundtruth_similarity | 18020291 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using the Google Cloud Storage console to upload files; I am not using any command line tool. I want to set the Content-Encoding to gzip (the -z option) in the metadata. Please see the screenshot below: is the value 'z' correct or not? I have set the value 'z' for all css and js files and analyzed the webpage with PageSpeed Insights. PageSpeed Insights is still telling me to enable compression; please check the screenshot below. I am using the Nginx webserver with HttpGzipModule installed on Debian 7. Thanks.
Now provide the response and nothing else.
| "-z" is a feature of the gsutil command line tool -- it compresses the data locally and uploads it to GCS with Content-Encoding: gzip. It is not a feature (or property) of the HTTP protocol or Google Cloud Storage, hence simply setting the header does not achieve what you are going for. If you want to store (and serve) gzip-encoded data, you have two options: Apply gzip-compression locally, for instance with the gzip Unix tool. Then remove the .gz suffix from the file name and upload it with the "Content-Encoding: gzip" header and the appropriate Content-Type (e.g., "text/css" for css, "application/javascript" for js). Use the gsutil tool with the -z flag, and it will take care of all of the above for you. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18020291', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2646233/']} | jdg_81117 |
stackexchange | llm_judgeable_groundtruth_similarity | 12372647 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've accidentally committed a bunch of files locally, but didn't push. So basically what I want to do is remove this commit of mine, then push some other changes. I can remove the commit by backing it out, then I have to commit locally so it is actually removed locally. So my question is the following: if I do a push, will my accidental commit and its corresponding reversion log be visible publicly?
Now provide the response and nothing else.
| It sounds like hg rollback is the command you want. hg backout <REV> creates a new commit that reverses the changes made in <REV> . Both the original commit and the backout commit will remain in your history. This is one of the few options you have to fix a bad commit after you have pushed it to a public location. However in this case you have not pushed to the public yet, so there are better solutions. If the bad commit is the last commit that was done (i.e. your tip) then you can use hg rollback (under the Repository menu in TortoiseHg). This command is like "undo" for commits. If the bad commit is elsewhere in your history (but still has not been pushed to a public repo), you can also use the mq extension to rewrite that part of your history. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12372647', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/426305/']} | jdg_81118 |
stackexchange | llm_judgeable_groundtruth_similarity | 94018 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can anyone clarify gateway assignment for me? What is the difference between adding a gateway as 0.0.0.0 and assigning a specific IP address as a gateway?
Now provide the response and nothing else.
| 0.0.0.0 has the specific meaning "unspecified". This roughly translates to "there is none" in the context of a gateway. Of course, this assumes that the network is locally connected, as there is no intermediate hop. The machine will send the packet out that interface as though to a machine connected to that segment, which in Ethernet means the MAC address of the destination host will be used instead of the MAC address of the next hop gateway. As a destination, 0.0.0.0/0 is special: if there are no network bits, there can't be anything in the network number either. So, it's naturally unspecified. For prefix matching it masks off all bits, so all addresses are within 0.0.0.0/0 ; for this reason it's used to mean "default gateway" in routing tables. It is also the least-specific possible route, so selections that prioritize specificity will choose anything else available and match 0.0.0.0/0 as a last resort. However, sticking to your question, yes, it does have a special meaning. It means that the network is locally connected on that interface and no more hops are needed to get to it. | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/94018', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/47732/']} | jdg_81119 |
stackexchange | llm_judgeable_groundtruth_similarity | 374553 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can we prove that every field of characteristic 0 has at least one Discrete Valuation Ring? My effort: Let $K$ be a field of characteristic 0. Then $\mathbb{Z}$ is a subring of $K$. Let $p$ be a prime. By Theorem 10.2 in Matsumura, there exists a valuation ring $R$ of $K$ with $\mathbb{Z} \subset R$ and $m_R \cap \mathbb{Z}=p \mathbb{Z}$, where $m_R$ is the maximal ideal of $R$. If I could show that $R$ is Noetherian, or a principal ideal domain, then I would be done by Theorem 11.1 of Matsumura. But I am having a hard time proving this and besides, it seems to me that this is not the right direction. Edit: This question was motivated by the remark in Matsumura's Commutative Ring Theory p. 79, which mentions "Let $K$ be a field and $R$ a DVR of $K$..." As the answers point out, a field need not have a DVR. Then why would $K$ have a DVR in Matsumura's remark?
Now provide the response and nothing else.
| The statement is not true: $\mathbb{C}$ contains no discrete valuation ring having field of fractions $\mathbb{C}$, because a valuation of $\mathbb{C}$ must have a divisible value group. In particular this value group cannot be $\mathbb{Z}$. The statement is true for example for every finitely generated extension field of $\mathbb{Q}$. Sketch of the proof: let $L/K$ be a finite extension of fields, $v$ a valuation on $K$ and $w$ a prolongation of $v$ to $L$. Then $(w(L^\times ):v(K^\times ))\leq (L:K)$. In particular: if $v$ is discrete then $w$ is discrete. Let $K/\mathbb{Q}$ be a finitely generated extension. Since all valuations on $\mathbb{Q}$ are discrete we are done if $K/\mathbb{Q}$ is algebraic. If the extension is not algebraic, it is a finite extension of a rational function field $\mathbb{Q}(T)$ in finitely many variables $T=\{t_1,\ldots ,t_n\}$. Thus it suffices to prove that the valuations of $\mathbb{Q}$ possess a discrete prolongation to $\mathbb{Q}(T)$. Such a prolongation is the Gauss prolongation of a valuation $v$. It assigns to a polynomial the minimum of the values of its coefficients. Motivated by mr.bigproblem's answer I add the following: A field $K$ contains a discrete valuation ring $O$ with field of fractions $K$ if and only if $K$ is the fraction field of a noetherian domain properly contained in $K$. Sketch of the proof: the implication $\Rightarrow$ is obvious. If on the other hand $R$ is a noetherian domain, its integral closure $S$ in $K$ has, by the Mori-Nagata theorem, the property that all localizations $S_p$ at primes of height $1$ are discrete valuation rings. Note that $S$ itself need not be noetherian. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/374553', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/11921/']} | jdg_81120 |
stackexchange | llm_judgeable_groundtruth_similarity | 13354221 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I want a regex to match "Jump over this bridge FOOL" as well as "Jump over this bridge". How do I make "FOOL" optional (0 or 1 occurrence)?
Now provide the response and nothing else.
| You might want to have a look at a regex tutorial . Optional parts of a regex are indicated with a question mark: Jump over this bridge( FOOL)? In case you want to match any string that includes FOOL less than twice, things get a bit more complicated. Then you would be best off using the more advanced concept of a negative lookahead : ^(?!(.*FOOL){2}) This turns the logic on its head and asserts that the string doesn't contain 2 (or more) instances of FOOL . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13354221', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/661624/']} | jdg_81121 |
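Both patterns from the answer can be checked quickly with Python's re module (the test strings below are made up for the example):

import re

optional = re.compile(r"Jump over this bridge( FOOL)?")
print(bool(optional.fullmatch("Jump over this bridge")))        # True
print(bool(optional.fullmatch("Jump over this bridge FOOL")))   # True

# "Contains FOOL fewer than two times" via a negative lookahead.
fewer_than_two = re.compile(r"^(?!(.*FOOL){2})")
print(bool(fewer_than_two.search("Jump over this bridge FOOL")))        # True
print(bool(fewer_than_two.search("FOOL, jump over this bridge FOOL")))  # False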
stackexchange | llm_judgeable_groundtruth_similarity | 28177435 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I define my dictionary like this: var teamsData = Dictionary<String,Dictionary<String,Int64>>() Then, I am trying to store it in userdefaults: NSUserDefaults.standardUserDefaults().setObject(teamsData, forKey: "teamsData") but it throws the error: Type Dictionary<String,Dictionary<String,Int64>> does not conform to protocol 'Any Object'
Now provide the response and nothing else.
| A user default object can only be an instance (or a combination of instances) of NSData , NSString , NSNumber , NSDate , NSArray , or NSDictionary . Some Swift types are automatically bridged to Foundation types, e.g. Int , UInt , Float , Double and Bool are bridged to NSNumber . So this could be saved in the user defaults: var teamsData = Dictionary<String,Dictionary<String,Int>>() On 64-bit architectures, Int is a 64-bit integer, but on 32-bit architectures, Int is a 32-bit integer. The fixed-sized integer types such as Int64 are not automatically bridged to NSNumber . This was also observed in Swift - Cast Int64 to AnyObject for NSMutableArray . Therefore, to store 64-bit integers in the user defaults you have to use NSNumber explicitly: var teamsData = Dictionary<String,Dictionary<String,NSNumber>>()// Example how to add a 64-bit value:let value : UInt64 = 123teamsData["foo"] = ["bar" : NSNumber(unsignedLongLong: value)] | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28177435', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4293473/']} | jdg_81122 |
stackexchange | llm_judgeable_groundtruth_similarity | 9007797 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to build a PostGIS query that uses an alias of the JOIN statement in the FROM statement, but that is not possible. Is there another way to do it? SELECT DISTINCT ST_Buffer( ST_Centroid(geom), ST_Area(geom))FROM building AS b, ST_Transform(sg.geometry, 31467) AS geom LEFT JOIN surface_geometry AS sg ON b.id = sg.cityobject_id WHERE ST_Area(geom) < 100
Now provide the response and nothing else.
| You can introduce an extra level of subquery. So currently you have: SELECT DISTINCT ST_Buffer( ST_Centroid(geom), ST_Area(geom) )FROM building AS b, ST_Transform(sg.geometry, 31467) AS geom LEFT JOIN surface_geometry AS sg ON b.id = sg.cityobject_idWHERE ST_Area(geom) < 100 (I changed the indentation to make a bit more sense of it, although I still don't understand it: you don't seem to use the values from b anywhere). You need something like: SELECT DISTINCT ST_Buffer( ST_Centroid(geom), ST_Area(geom) )FROM ( SELECT ST_Transform(sg.geometry, 31467) AS geom FROM building AS b LEFT JOIN surface_geometry AS sg ON b.id = sg.cityobject_id) xWHERE ST_Area(geom) < 100 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9007797', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/546526/']} | jdg_81123 |
stackexchange | llm_judgeable_groundtruth_similarity | 2922637 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 2 tables: members and comments. I select all members and then join comments. But in comments I'm selecting a SUM of points, and if a user never commented, I can't get that user in the listing. So how can I make the SUM default to 0 if a user never commented, or is there some other solution: SELECT c.comment_id AS item_id, m.member_id AS member_id, m.avatar, SUM(c.vote_value) AS vote_value, SUM(c.best) AS best, SUM(c.vote_value) + SUM(c.best)*10 AS total FROM members m LEFT JOIN comments c ON m.member_id = c.author_id GROUP BY c.author_id ORDER BY m.member_id DESC LIMIT 0, 20 EDIT: I will try to explain... So there are 2 tables, members and comments. I need a listing of all users with their ranking. Comments hold all votes and best answers. So, I need a listing of all users and their score. Members table: member_id - username - avatar Comments table: comment_id - author_id - vote_value - best (0 OR 1) I also tried to select from COMMENTS and join MEMBERS, but got the same thing again :(
Now provide the response and nothing else.
| I'm not sure why you are including the comment_id in your SELECT list if you just want users and their rankings. Do you want only their ranking on that particular comment? I'll give a solution for now that assumes you just want a full member list with rankings: SELECT M.member_id, M.user_id, M.avatar, COALESCE(SUM(C.vote_value), 0) AS vote_value_sum, COALESCE(SUM(C.best), 0) AS best_sum, COALESCE(SUM(C.vote_value), 0) + SUM(C.best) * 10 AS total_valueFROM Members MLEFT OUTER JOIN Comments C ON C.author_id = M.member_idGROUP BY M.member_idORDER BY total_value DESCLIMIT 0, 20 (this assumes that vote_value and best are NOT NULL columns or that MySQL will disregard those when calculating SUM values - I believe that it does, but I haven't tested that) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2922637', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/316515/']} | jdg_81124 |
stackexchange | llm_judgeable_groundtruth_similarity | 685754 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I just wanted to understand the meaning of the following statement and whether my reading of it is correct. test -x /usr/bin/find || exit 0 Command 1 Command 2 Command 3 The output of test -x /usr/bin/find is always 0. That means the exit 0 command will be executed, meaning Commands 1, 2, and 3 will never be executed. Am I right here?
Now provide the response and nothing else.
| test -x /usr/bin/find (or [ -x /usr/bin/find ] ) does not output anything. The test will be true if /usr/bin/find is an existing executable file, and false if the pathname does not exist, or if it's not executable. If test exits successfully (with a zero exit status, signifying "no error"), the shell will execute the rest of the commands. If it exits with a failure (a non-zero exit status, signifying "some error"), exit 0 will terminate the current shell, preventing the rest of the commands from running. It would arguably be better to use exit 1 or just exit in place of exit 0 when find can't be found in /usr/bin though. Using exit 0 masks the exit status of test (which would be non-zero), and prevents the caller of this script from being notified of the failure of finding find at the given location. Related to the fact that an exit status of zero evaluates to "true" when tested as a boolean in the shell: Why the Unix command exit with non-zero value in Shell and evaluates to True when used in bash if condition? Related to using || and && in general: What are the shell's control and redirection operators? | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/685754', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/508675/']} | jdg_81125 |
stackexchange | llm_judgeable_groundtruth_similarity | 30146891 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was doing a few experiments with escape backslashes in the Python 3.4 shell and noticed something quite strange. >>> string = "\test\test\1\2\3">>> string'\test\test\x01\x02\x03'>>> string = "5">>> string'5'>>> string = "5\6\7">>> string'5\x06\x07' As you can see in the above code, I defined a variable string as "\test\test\1\2\3" . However, when I entered string in the console, instead of printing "\test\test\1\2\3" , it printed "\test\test\x01\x02\x03" . Why does this occur, and what is it used for?
Now provide the response and nothing else.
| In Python string literals, the \ character starts escape sequences. \n translates to a newline character, \t to a tab, etc. \xhh hex sequences let you produce codepoints with hex values instead, \uhhhh produce codepoints with 4-digit hex values, and \Uhhhhhhhh produce codepoints with 8-digit hex values. See the String and Bytes Literals documentation , which contains a table of all the possible escape sequences. When Python echoes a string object in the interpreter (or you use the repr() function on a string object), then Python creates a representation of the string value. That representation happens to use the exact same Python string literal syntax, to make it easier to debug your values, as you can use the representation to recreate the exact same value. To keep non-printable characters from either causing havoc or not being shown at all, Python uses the same escape sequence syntax to represent those characters. Thus bytes that are not printable are represented using suitable \xhh sequences, or if possible, one of the \c single letter escapes (so newlines are shown as \n ). In your example, you created non-printable bytes using the \ooo octal value escape sequence syntax. The digits are interpreted as an octal number to create a corresponding codepoint. When echoing that string value back, the default \xhh syntax is used to represent the exact same value in hexadecimal: >>> '\20' # Octal for 16'\x10' while your \t became a tab character: >>> print('\test') est Note how there is no letter t there; instead, the remaining est is indented by whitespace, a horizontal tab. If you need to include literal \ backslash characters you need to double the character: >>> '\\test\\1\\2\\3''\\test\\1\\2\\3'>>> print('\\test\\1\\2\\3')\test\1\2\3>>> len('\\test\\1\\2\\3')11 Note that the representation used doubled backslashes ! If it didn't, you'd not be able to copy the string and paste it back into Python to recreate the value. Using print() to write the value to the terminal as actual characters (and not as a string representation) shows that there are single backslashes there, and taking the length shows we have just 11 characters in the string, not 15. You can also use a raw string literal. That's just a different syntax, the string objects that are created from the syntax are the exact same type, with the same value. It is just a different way of spelling out string values. In a raw string literal, backslashes are just backslashes, as long as they are not the last character in the string; most escape sequences do not work in a raw string literal: >>> r'\test\1\2\3''\\test\\1\\2\\3' Last but not least, if you are creating strings that represent filenames on your Windows system, you could also use forward slashes; most APIs in Windows don't mind and accept both types of slash as separators in the filename: >>> 'C:/This/is/a/valid/path''C:/This/is/a/valid/path' | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30146891', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4250702/']} | jdg_81126 |
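The behaviour described above can be reproduced in a couple of lines in any Python 3 interpreter (the strings are just the examples from the question):

s = "\test\1\2\3"
print(repr(s))          # '\test\x01\x02\x03'  (\t is a tab, \1 \2 \3 are octal escapes)
print(len(s))           # 7

doubled = "\\test\\1\\2\\3"
print(doubled)          # \test\1\2\3
print(len(doubled))     # 11

raw = r"\test\1\2\3"
print(raw == doubled)   # True -- raw string literals keep the backslashes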
stackexchange | llm_judgeable_groundtruth_similarity | 549971 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $x,y,z$ be positive numbers such that $x+y+z=1$. Show that$$\dfrac{x^y}{y^x}+\dfrac{y^z}{z^y}+\dfrac{z^x}{x^z}\ge 3$$ My try: let $$a=\ln{\dfrac{x^y}{y^x}},b=\ln{\dfrac{y^z}{z^y}},c=\ln{\dfrac{z^x}{x^z}}$$ so $$a=y\ln{x}-x\ln{y},b=z\ln{y}-y\ln{z},c=x\ln{z}-z\ln{x}$$ and we note that $$az+bx+yc=(y\ln{x}-x\ln{y})z+(z\ln{y}-y\ln{z})x+(x\ln{z}-z\ln{x})y=0$$ so $$\dfrac{x^y}{y^x}+\dfrac{y^z}{z^y}+\dfrac{z^x}{x^z}=e^a+e^b+e^c$$ and the problem is equivalent to $$e^a+e^b+e^c\ge 3$$ But then I can't prove it. If the problem were to prove $$ze^a+xe^b+ye^c\ge 3,$$ I could prove it, because $$ze^a+xe^b+ye^c=\dfrac{z}{x+y+z}e^a+\dfrac{x}{x+y+z}e^b+\dfrac{y}{x+y+z}e^c$$ so using Jensen's inequality, we have $$ze^a+xe^b+ye^c\ge e^{\dfrac{az+bx+yc}{x+y+z}}=3$$ This problem comes from How prove this $\dfrac{x^y}{y^x}\ge (1+\ln{3})x-(1+\ln{3})y+1$? Thank you very much!
Now provide the response and nothing else.
| Another solution.Without loss of generality, we can assume that $0<x\le y\le z$ and so $x=az$, $y=bz$, $0<a\le b\le1$. By the AM-GM inequality, we find that\begin{align}\frac{x^y}{y^x}+\frac{y^z}{z^y}+\frac{z^x}{x^z}&\ge 3\sqrt[3]{x^{y-z}y^{z-x}z^{x-y}}=3\sqrt[3]{(az)^{(b-1)z}(bz)^{(1-a)z}z^{(a-b)z}}=3\sqrt[3]{a^{(b-1)z}b^{(1-a)z}}=\\&=3(a^{b-1}b^{1-a})^{z/3}\end{align}so we only need to prove that $a^{b-1}b^{1-a}\ge1$, which Hansen did in the answer below. We also see that the condition $x+y+z=1$ is not needed. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/549971', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_81127 |
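A brute-force numerical spot check of the inequality (random positive triples normalised to sum to 1). This is only a sanity check and plays no role in the proof; the sample size and the lower bound on the random values are arbitrary choices for the demo.

import random

def lhs(x, y, z):
    return x**y / y**x + y**z / z**y + z**x / x**z

worst = float("inf")
for _ in range(100_000):
    a, b, c = (random.uniform(1e-3, 1.0) for _ in range(3))
    s = a + b + c
    worst = min(worst, lhs(a / s, b / s, c / s))

print(worst)   # never drops below 3; equality is approached at x = y = z = 1/3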
stackexchange | llm_judgeable_groundtruth_similarity | 35580600 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to get data from the Google distance API using NSURLSession, but as seen in the code below, when I print response and data, I get the results as NULL. What can be the issue? Or is there any better way of fetching JSON data? NSString *urlAsString = [NSString stringWithFormat:@"https://maps.googleapis.com/maps/api/distancematrix/json?origins=Vancouver+BC|Seattle&destinations=San+Francisco|Victoria+BC&mode=bicycling&language=fr-FR&key=API-KEY"];NSURL *url = [NSURL URLWithString:urlAsString];NSURLSession *session = [NSURLSession sessionWithConfiguration:[NSURLSessionConfiguration defaultSessionConfiguration]];[[session dataTaskWithURL:[NSURL URLWithString:urlAsString] completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) { NSLog(@"RESPONSE: %@",response); NSLog(@"DATA: %@",data); }] resume];
Now provide the response and nothing else.
| You should use stringByAddingPercentEscapesUsingEncoding: on your url string, this is why you didn't get a response : the server returned an error. You should have checked the error ;) I replaced your API key in URL string, remember to put your own if you copy/paste my code :) NSString *urlAsString = [NSString stringWithFormat:@"https://maps.googleapis.com/maps/api/distancematrix/json?origins=Vancouver+BC|Seattle&destinations=San+Francisco|Victoria+BC&mode=bicycling&language=fr-FR&key=YOUR-API-KEY"];NSCharacterSet *set = [NSCharacterSet URLQueryAllowedCharacterSet];NSString *encodedUrlAsString = [urlAsString stringByAddingPercentEncodingWithAllowedCharacters:set];NSURLSession *session = [NSURLSession sessionWithConfiguration:[NSURLSessionConfiguration defaultSessionConfiguration]];[[session dataTaskWithURL:[NSURL URLWithString:encodedUrlAsString] completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) { NSLog(@"RESPONSE: %@",response); NSLog(@"DATA: %@",data); if (!error) { // Success if ([response isKindOfClass:[NSHTTPURLResponse class]]) { NSError *jsonError; NSDictionary *jsonResponse = [NSJSONSerialization JSONObjectWithData:data options:0 error:&jsonError]; if (jsonError) { // Error Parsing JSON } else { // Success Parsing JSON // Log NSDictionary response: NSLog(@"%@",jsonResponse); } } else { //Web server is returning an error } } else { // Fail NSLog(@"error : %@", error.description); }}] resume]; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35580600', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5685175/']} | jdg_81128 |
stackexchange | llm_judgeable_groundtruth_similarity | 39768 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In reference to the Manhattan Project at Los Alamos, a mathematician, Peter Lax, described his time working there as "living science fiction". He said : we were told essentially the basic thing: we are building an atomic bomb, that there are two bombs, one built of a special isotope of uranium, and a second bomb built of plutonium, which is an element that doesn’t exist in the world, except that they are manufacturing it. Was this correct, that plutonium "is an element that doesn’t exist in the world, except that they [were] manufacturing it"?
Now provide the response and nothing else.
| The Royal Society of Chemistry states: Plutonium was first made in December 1940 at Berkeley, California, by Glenn Seaborg, Arthur Wahl, Joseph Kennedy, and Edwin McMillan. They produced it by bombarding uranium-238 with deuterium nuclei (alpha particles). This first produced neptunium-238 with a half-life of two days, and this decayed by beta emission to form element 94 (plutonium). Within a couple of months element 94 had been conclusively identified and its basic chemistry shown to be like that of uranium. To begin with, the amounts of plutonium produced were invisible to the eye, but by August 1942 there was enough to see and weigh, albeit only 3 millionths of a gram. However, by 1945 the Americans had several kilograms, and enough plutonium to make three atomic bombs, one of which exploded over Nagasaki in August 1945 The US effort to build a nuclear bomb got the name Manhattan project no earlier than 1941 * , so there is no contradiction there. But plutonium does exist in nature (note that you are not asking about a specific isotope), as is e.g. shown in The Occurrence of Plutonium in Nature by Charles A. Levine and Glenn T. Seaborg ( PDF avalable here ): Plutonium has been chemically separated from seven different ores and the ratios of plutonium to uranium determined. This ratio was found to be fairly constant (approx. 10 -11 ) in pitchblende and monazite ores, ... In his autobiography ( G.T. Seaborg and E. Seaborg - Adventures in the atomic age: from Watts to Washington ), Seaborg says more about the naming of Plutonium: It was so difficult to make, from such rare materials, that we thought it would be the heaviest element ever formed. So we considered names like extremium and ultimium. Fortunately, we were spared the inevitable embarrassment that one courts when proclaiming a discovery to be the ultimate in any field by deciding to follow the nomenclatural precedents of the two prior elements. A new planet had been discovered in 1781 and, like the rest of the planets, named for a Greek or Roman deity-Uranus. A scientist who discovered a heavy new element eight years later named it after the planet: uranium. The planet Neptune was discovered in 1846, so Ed McMillan followed this precedent and named element 93 neptunium. Conveniently for us, the final planet, Pluto, had been discovered in 1930. We briefly considered the form plutium, but plutonium seemed more euphonious. So the element has existed since the formation of the Earth (and maybe earlier), but a) it was not known before December 1940 and b) it did not have the name plutonium. * I cannot find the exact date. As mentioned in this NY Times Article , that information is in the book The Manhattan Project: The Birth of the Atomic Bomb in the Words of Its Creators, Eyewitnesses, and Historians by Cynthia C. Kelly | {} | {'log_upvote_score': 7, 'links': ['https://skeptics.stackexchange.com/questions/39768', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/42192/']} | jdg_81129 |
stackexchange | llm_judgeable_groundtruth_similarity | 416685 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to determine in which of the following cases the rings are Euclidean domains and in which they are UFDs. $\mathbb{Q}[X]$ $\bigcup_{n=1}^{\infty}\mathbb{Q}[x^\frac{1}{n}]$ $\mathbb{Q}[X,Y,Z]$ $\mathbb{Z}[\frac{1}{2}]$ Here are my thoughts thus far: $\mathbb{Q}$ is a field and so $\mathbb{Q}[X]$ is a Euclidean domain. It is also a UFD (I think I have shown correctly why this is the case, so I only need comments on this if it is in fact not a UFD). I'm really struggling for ideas for the other cases. Guidance would be much appreciated.
Now provide the response and nothing else.
| 1. A univariate polynomial ring $\,F[x]\,$ over a field is $\rm ED$ (Euclidean) $\,\Rightarrow\rm PID \Rightarrow UFD$. 2. $\ \bigcup_{n=1}^{\infty}\mathbb{Q}[x^\frac{1}{n}]\,$ fails ACCP by $(x) \subsetneq (x^{1/2}) \subsetneq (x^{1/4}) \subsetneq\,\ldots\ $ so it's $\ \lnot \rm UFD\,\Rightarrow\,\lnot ED$. 3. Polynomial rings over UFDs are UFDs, thus so is $\,\Bbb Q[x,y,z],\,$ but it's $\,\lnot\rm PID\,\Rightarrow\,\lnot ED,\,$ since $\,(x,y)\,$ is not principal. 4. Localization preserves Euclidean domains. Hint: lift the Euclidean function from $\,\Bbb Z\to \Bbb Z[1/2]\,$ by ignoring all factors of $\,2,\ $ i.e. for odd $\,a,b\in\Bbb Z,\,$ $\ b 2^i\mid a 2^j \in \Bbb Z[1/2]\iff b\mid a\in \Bbb Z.\,$ If $\,b\nmid a\,$ then $\, 0 < r = a-qb < b,\,$ so $\,0 < v(r) \le r < b = v(b 2^j),\,$ where $\,v(r)\,$ is the odd part of $\,r.$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/416685', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/20801/']} | jdg_81130
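For item 2 above, the offending chain can be displayed a little more explicitly (a small added illustration; the generators are exactly the ones used in the answer): $$(x)\subsetneq(x^{1/2})\subsetneq(x^{1/4})\subsetneq\cdots,\qquad x^{1/2^{k}}=\bigl(x^{1/2^{k+1}}\bigr)^{2},$$ so each generator is a proper multiple of the next, the ascending chain of principal ideals never stabilizes, and ACCP (hence unique factorization) fails.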
stackexchange | llm_judgeable_groundtruth_similarity | 288830 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was trying to evaluate $$\int^1_0 \frac{\log(1+x)}{x}dx.$$ I expanded $\log(1+x)$ as $x -\frac{x^2}{2}+\cdots$ and got the answer. I would like to know if there is any way to do it without expanding the series.
Now provide the response and nothing else.
| Step I: Integrating by parts we get that $$\int^1_0 \frac{\log(1+x)}{x}dx=-\int^1_0 \frac{\log(x)}{x+1}dx$$ Step II: Letting $x=e^{-u}$, we have $$\int_0^{\infty}\frac{u}{e^u+1}du$$ Step III: $$\int_0^{\infty}\frac{u^{s-1}}{e^u+1}du=\Gamma(s)\cdot\eta(s)\tag1$$ that is, the product of the Gamma function and the Dirichlet eta function. Step IV: Let $s=2$ in $(1)$ and we're done. Chris. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/288830', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/52488/']} | jdg_81131
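Spelling out Step IV (added for completeness; the two special values used are standard): $$\int_0^1\frac{\log(1+x)}{x}\,dx=\Gamma(2)\,\eta(2)=1!\cdot\sum_{k\ge1}\frac{(-1)^{k-1}}{k^{2}}=\frac{\pi^{2}}{12}.$$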
stackexchange | llm_judgeable_groundtruth_similarity | 42561843 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
A Python boolean function can easily be negated with lambda functions, but it's a bit verbose and hard to read for something so basic, for example: def is_even(n): return n % 2 == 0 odds_under_50 = filter(lambda x: not is_even(x), range(50)) I'm wondering if there is a function to do this in the standard library, which might look like: odds_under_50 = filter(negate(is_even), range(50))
Now provide the response and nothing else.
| As far as I know there is no builtin function for that, or a popular library that does that. Nevertheless, you can easily write one yourself: from functools import wraps def negate(f): @wraps(f) def g(*args,**kwargs): return not f(*args,**kwargs) g.__name__ = f'negate({f.__name__})' return g You can then use: odds_under_50 = filter( negate(is_even) , range(50)) The negate function works for an arbitrary number of parameters of the given function: if you had defined is_dividable(x, n=2), then negate(is_dividable) would be a function with two arguments (one optional) that also accepts these parameters. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42561843', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2640937/']} | jdg_81132
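A self-contained usage sketch of the wrapper above (is_even and odds_under_50 come from the question; the assertion is just an illustrative check):

from functools import wraps

def negate(f):
    @wraps(f)  # keep the wrapped function's name and docstring
    def g(*args, **kwargs):
        return not f(*args, **kwargs)
    return g

def is_even(n):
    return n % 2 == 0

odds_under_50 = list(filter(negate(is_even), range(50)))
assert odds_under_50 == list(range(1, 50, 2))  # 1, 3, 5, ..., 49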
stackexchange | llm_judgeable_groundtruth_similarity | 57593294 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose a function with a mutable default argument: def f(l=[]): l.append(len(l)) return l If I run this: def f(l=[]): l.append(len(l)) return l print(f()+["-"]+f()+["-"]+f()) # -> [0, '-', 0, 1, '-', 0, 1, 2] Or this: def f(l=[]): l.append(len(l)) return l print(f()+f()+f()) # -> [0, 1, 0, 1, 0, 1, 2] Instead of the following one, which would be more logical: print(f()+f()+f()) # -> [0, 0, 1, 0, 1, 2] Why?
Now provide the response and nothing else.
| That's actually pretty interesting! As we know, the list l in the function definition is initialized only once at the definition of this function, and for all invocations of this function, there will be exactly one copy of this list. Now, the function modifies this list, which means that multiple calls to this function will modify the exact same object multiple times. This is the first important part. Now, consider the expression that adds these lists: f()+f()+f() According to the laws of operator precedence, this is equivalent to the following: (f() + f()) + f() ...which is exactly the same as this: temp1 = f() + f() # (1) temp2 = temp1 + f() # (2) This is the second important part. Addition of lists produces a new object, without modifying any of its arguments. This is the third important part. Now let's combine what we know together. In line 1 above, the first call returns [0] , as you'd expect. The second call returns [0, 1] , as you'd expect. Oh, wait! The function will return the exact same object (not its copy!) over and over again, after modifying it! This means that the object that the first call returned has now changed to become [0, 1] as well! And that's why temp1 == [0, 1] + [0, 1] . The result of addition, however, is a completely new object , so [0, 1, 0, 1] + f() is the same as [0, 1, 0, 1] + [0, 1, 2] . Note that the second list is, again, exactly what you'd expect your function to return. The same thing happens when you add f() + ["-"] : this creates a new list object, so that any other calls to f won't interfere with it. You can reproduce this by concatenating the results of two function calls: >>> f() + f() # -> [0, 1, 0, 1] >>> f() + f() # -> [0, 1, 2, 3, 0, 1, 2, 3] >>> f() + f() # -> [0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5] Again, you can do all that because you're concatenating references to the same object . | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/57593294', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10972414/']} | jdg_81133
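A minimal sketch that makes the object identity explicit (my own illustration, using the same f as in the question):

def f(l=[]):
    l.append(len(l))
    return l

a = f()        # a is the shared default list -> [0]
b = f()        # the same object, mutated again -> [0, 1]
assert a is b  # both names point at the one default list
c = a + b      # list addition builds a brand-new object
assert c is not a and c == [0, 1, 0, 1]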
stackexchange | llm_judgeable_groundtruth_similarity | 318377 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to create a circuit that allows me to toggle a source voltage from one path to another. The path should switch from the first path to the second while a second voltage is active, and go back to the first path when the second voltage is not active, as in this diagram. P.S.: Please do not suggest using a switch; I can only toggle by using a second incoming voltage.
Now provide the response and nothing else.
| Figure 1. Relay voltage source switch (the schematic was originally embedded as a CircuitLab drawing). When V2 is not present, V1 will power LOAD1. When V2 is high enough to energise the relay, V1 will power LOAD2. Figure 2. A typical relay. Choose the relay coil voltage to match V2. Choose relay contact voltage and current ratings to meet or exceed the supply voltage and load current. If using DC, check that they are adequately rated for DC. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/318377', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/89367/']} | jdg_81134
stackexchange | llm_judgeable_groundtruth_similarity | 6632188 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following Python list (can also be a tuple): myList = ['foo', 'bar', 'baz', 'quux'] I can say >>> myList[0:3] # -> ['foo', 'bar', 'baz'] >>> myList[::2] # -> ['foo', 'baz'] >>> myList[1::2] # -> ['bar', 'quux'] How do I explicitly pick out items whose indices have no specific patterns? For example, I want to select [0,2,3] . Or from a very big list of 1000 items, I want to select [87, 342, 217, 998, 500] . Is there some Python syntax that does that? Something that looks like: >>> myBigList[87, 342, 217, 998, 500]
Now provide the response and nothing else.
| list( myBigList[i] for i in [87, 342, 217, 998, 500] ) I compared the answers with Python 2.5.2: 19.7 usec: [ myBigList[i] for i in [87, 342, 217, 998, 500] ] 20.6 usec: map(myBigList.__getitem__, (87, 342, 217, 998, 500)) 22.7 usec: itemgetter(87, 342, 217, 998, 500)(myBigList) 24.6 usec: list( myBigList[i] for i in [87, 342, 217, 998, 500] ) Note that in Python 3, the 1st was changed to be the same as the 4th. Another option would be to start out with a numpy.array which allows indexing via a list or a numpy.array : >>> import numpy >>> myBigList = numpy.array(range(1000)) >>> myBigList[(87, 342, 217, 998, 500)] # -> Traceback (most recent call last): File "<stdin>", line 1, in <module> IndexError: invalid index >>> myBigList[[87, 342, 217, 998, 500]] # -> array([ 87, 342, 217, 998, 500]) >>> myBigList[numpy.array([87, 342, 217, 998, 500])] # -> array([ 87, 342, 217, 998, 500]) The tuple doesn't work the same way as those are slices. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/6632188', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/366309/']} | jdg_81135
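One way to repeat the timing comparison on a current interpreter (a sketch; the numbers quoted above came from Python 2.5.2 and will differ):

import timeit
from operator import itemgetter

myBigList = list(range(1000))
idx = [87, 342, 217, 998, 500]

candidates = {
    "listcomp": lambda: [myBigList[i] for i in idx],
    "map": lambda: list(map(myBigList.__getitem__, idx)),
    "itemgetter": lambda: list(itemgetter(*idx)(myBigList)),
    "genexpr": lambda: list(myBigList[i] for i in idx),
}
for name, fn in candidates.items():
    print(name, timeit.timeit(fn, number=100_000))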
stackexchange | llm_judgeable_groundtruth_similarity | 50667744 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This question is in reference to the code below: cost = [[1, 10, 75, 92], [-1, 0, 35, 50], [-1, -1, 0, 80], [-1, -1, -1, 0]] def min_cost(s, d): if s==d or s == d-1: return cost[s][d] mc = cost[s][d] for i in range(s+1, d): tmp = min_cost(s, i) + min_cost(i, d) if tmp < mc: mc = tmp return mc When I did a dry run of this I saw that min_cost(1,3) gets executed two times. I read in one book that if we had 10 stations in between, then min_cost(1, 3) would run 144 times. How can we figure out these numbers without actually doing a dry run? I know that using recurrence equations we can figure out the time taken by the function, but how can we say that a specific function will be executed this many times?
Now provide the response and nothing else.
| While I understand that you don't want a dry run to just count the calls, I'd like to nevertheless do it first, just to see what's going on. So, here goes: def min_cost(s, d): global count count += 1 if s==d or s == d-1: return cost[s][d] mc = cost[s][d] for i in range(s+1, d): tmp = min_cost(s, i) + min_cost(i, d) if tmp < mc: mc = tmp return mc for n in range (2, 8): cost = [[0 for i in range (n)] for j in range (n)] count = 0 min_cost(0, n-1) print (str (n) + ': ' + str (count)) The output is: 2: 1, 3: 3, 4: 9, 5: 27, 6: 81, 7: 243. So, we see that the number of calls for d-s=k is 3 to the power of (k-1). Knowing what we have to prove sometimes greatly simplifies finding the proof. Now, to the proof itself. It will be a proof by induction. First, note that the number of calls of min_cost(s, d) depends only on the value of d-s , and not on the individual values of s and d . The base is that, for d-s=1 , we get one call. For d-s>1 , we make our one call, and from it the following calls: min_cost(s, s+1) and min_cost(s+1, d); min_cost(s, s+2) and min_cost(s+2, d); ...; min_cost(s, d-1) and min_cost(d-1, d). So, for d-s=k , the number of calls f(k) is: f(k) = 1 + f(1) + f(k-1) + f(2) + f(k-2) + ... + f(k-1) + f(1) = 1 + 2 * (f(1) + f(2) + ... + f(k-1)) If, by the induction hypothesis, we have already proved that f(v) = 3^(v-1) for all v < k, then f(k) is: 1 + 2 * (3^0 + 3^1 + ... + 3^(k-2)), which is trivially 3^(k-1), completing our proof. Lastly, note that, while the presented algorithm is exponential, the underlying problem can be solved in polynomial time, most simply in O((d-s)^2) by memoizing the calls for which we already did all the work. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50667744', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4527988/']} | jdg_81136
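Since the answer closes by mentioning memoization, here is one possible memoized variant (a sketch of my own, not code from the answer); it reduces the work to one evaluation per (s, d) pair, i.e. O((d-s)^2) subproblems:

from functools import lru_cache

cost = [[1, 10, 75, 92],
        [-1, 0, 35, 50],
        [-1, -1, 0, 80],
        [-1, -1, -1, 0]]

@lru_cache(maxsize=None)
def min_cost(s, d):
    if s == d or s == d - 1:
        return cost[s][d]
    best = cost[s][d]
    for i in range(s + 1, d):
        best = min(best, min_cost(s, i) + min_cost(i, d))
    return best

print(min_cost(0, 3))  # -> 60 for the matrix above (path 0 -> 1 -> 3)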
stackexchange | llm_judgeable_groundtruth_similarity | 2366352 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am running a Sinatra app through Rack. To which file does the activity get logged? Also, how can I set the log file path?
Now provide the response and nothing else.
| It depends. Many developers set their app log file to app/servername.log or just to the current path where the Rack app is loaded. Yes, you can change its path. Usually you get a config.ru file with something like: log = File.new("sinatra.log", "a+") $stdout.reopen(log) $stderr.reopen(log) # optionally to sync logs while the server is running $stderr.sync = true $stdout.sync = true and/or configure do LOGGER = Logger.new("sinatra.log") enable :logging, :dump_errors set :raise_errors, true end in this case the log file is located under appdir/sinatra.log. But remember this code can be anywhere in your Rack app, so please search for "log" in your application directory: $ cd projectname $ grep -ri 'log' * Have fun, and post your config.ru and/or mainprojectfile.rb here. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2366352', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/127511/']} | jdg_81137
stackexchange | llm_judgeable_groundtruth_similarity | 9774315 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Java, would there be any performance impact in using the post-increment vs the pre-increment operator? (In other languages, a pre-increment can be faster than a post-increment in certain contexts.) For example, is there a performance difference between these loops? for (int idx = 0; idx < max; ++idx) { /* Note pre-increment */ f(max); } vs. for (int idx = 0; idx < max; idx++) { /* Note post-increment */ f(max); }
Now provide the response and nothing else.
| A performance question only makes sense in a context where the functional behavior is identical (since, if the functionality is different, a correct behavior is superior to a minutely-faster one), so I'm assuming that you're referring to a situation where the value of the expression is not used? That is, where the only purpose of the expression is to increment i ? In such a situation, the answer is no: no performance difference, and in fact, no difference whatsoever. I just compiled this class: public class Foo { public static void main(final String args[]) { int i = Integer.parseInt(args[0]); i++; } } and computed the MD5 checksum of the resulting Foo.class; and, similarly for a version with ++i instead. They had the same checksum, indicating that the two versions were compiled into the exact same bytecode, and would therefore perform literally identically. (Naturally, this could, in theory, depend on the compiler. A different compiler might decide to compile i++ differently from ++i even in a context where they're equivalent. But I doubt that, and it's really not worth worrying about even if it is the case.) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9774315', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1279095/']} | jdg_81138
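One way to repeat the checksum experiment described above (a sketch; it assumes javac is on the PATH, and the file names Foo_post.java / Foo_pre.java for the i++ and ++i variants are my own choice, not from the answer — each file would declare a class of the same name, differing only in the increment used):

import hashlib, pathlib, subprocess

for name in ("Foo_post", "Foo_pre"):  # i++ variant vs ++i variant
    subprocess.run(["javac", f"{name}.java"], check=True)
    digest = hashlib.md5(pathlib.Path(f"{name}.class").read_bytes()).hexdigest()
    print(name, digest)  # identical digests would mean identical bytecode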
stackexchange | llm_judgeable_groundtruth_similarity | 17156282 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In C++11, how would I go about writing a function (or method) that takes a std::array of known type but unknown size? // made up example void mulArray(std::array<int, ?>& arr, const int multiplier) { for(auto& e : arr) { e *= multiplier; } } // let's imagine these being full of numbers std::array<int, 17> arr1; std::array<int, 6> arr2; std::array<int, 95> arr3; mulArray(arr1, 3); mulArray(arr2, 5); mulArray(arr3, 2); During my search I only found suggestions to use templates, but those seem messy (method definitions in the header) and excessive for what I'm trying to accomplish. Is there a simple way to make this work, as one would with plain C-style arrays?
Now provide the response and nothing else.
| Is there a simple way to make this work, as one would with plain C-style arrays? No. You really cannot do that unless you make your function a function template (or use another sort of container, like an std::vector , as suggested in the comments to the question): template<std::size_t SIZE> void mulArray(std::array<int, SIZE>& arr, const int multiplier) { for(auto& e : arr) { e *= multiplier; } } Here is a live example. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/17156282', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1948945/']} | jdg_81139
stackexchange | llm_judgeable_groundtruth_similarity | 666526 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Show that arbitrarily close to any rational number there is a real (non-rational) number. In other words, show that for each real $\varepsilon>0$ and each rational $r\in\mathbb Q$ there exists $x\in\mathbb R\setminus\mathbb Q$ with $\left|x-r\right|\lt\varepsilon$. No idea how to prove this one. Perhaps I can define some sort of sequence and show it converges...?
Now provide the response and nothing else.
| For every $n\in\mathbb N$ you have $$\sqrt2\notin \mathbb Q \Rightarrow \dfrac{1}{n\sqrt2}\notin\mathbb Q$$ Now let $\varepsilon >0$; then $n$ can be found such that $$\dfrac{1}{n\sqrt2} \lt \varepsilon$$ Now for arbitrary $r\in\mathbb Q$ and given $\varepsilon>0$ choose $x=r+\dfrac{1}{n\sqrt2}\notin\mathbb Q$: $$\left|x-r\right| = \left| r+\frac{1}{n\sqrt2}-r\right|= \dfrac{1}{n\sqrt2} \lt \varepsilon $$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/666526', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/125459/']} | jdg_81140
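A quick numerical illustration of this construction (my own sketch; it only checks the bound for one example, it is not a proof):

import math

def irrational_within(r, eps):
    # pick n large enough that 1/(n*sqrt(2)) < eps, then shift r by that amount
    n = math.floor(1 / (eps * math.sqrt(2))) + 1
    return r + 1 / (n * math.sqrt(2))

r, eps = 3.5, 1e-6   # r plays the role of the rational number
x = irrational_within(r, eps)
assert 0 < abs(x - r) < eps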