Dataset columns (with value statistics):
source - stringclasses (1 value)
task_type - stringclasses (1 value)
in_source_id - stringlengths (1 to 8)
prompt - stringlengths (209 to 40.4k)
gold_standard_solution - stringlengths (0 to 56.7k)
verification_info - stringclasses (1 value)
metadata - stringlengths (138 to 225)
problem_id - stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
19151940
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know removing trailing whitespace can be done with a pre-commit hook. I am interested in doing it manually. I read the question here: Make git automatically remove trailing whitespace before committing - Stack Overflow The answer closest to what I want is the "automatic version" from ntc2 : (export VISUAL=: && git -c apply.whitespace=fix add -ue .) && git checkout . && git reset That command works well except it seems to be only for changes on files that are already in the repo, not new files. I have a bunch of files that are new, meaning they aren't yet in the repo. I want to remove whitespace from those files so I tried add -A instead of -u but that didn't make a difference. Now provide the response and nothing else.
To manually clean up whitespace from your last 3 commits, you can do this: git rebase --whitespace=fix HEAD~3 When I work on a topic branch, I track the upstream branch (usually by creating it like this) git checkout -b topic -t Which allows me to drop the last argument from git rebase . So once I'm done & ready to merge, I can clean the whole topic branch quickly with: git ws # aliased to rebase --whitespace=fix Note that, unlike the HEAD~3 example, this will actually rebase your changes upon the upstream branch if it's changed! (But that's also what I want, in my workflow.)
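As a quick illustration of the workflow described above (the alias setup shown here is just one possible way to get the `git ws` shorthand the answer mentions; the 3-commit range is arbitrary):

```sh
# One-time setup: alias "ws" to the whitespace-fixing rebase
git config --global alias.ws 'rebase --whitespace=fix'

# Clean up whitespace introduced by the last 3 commits, keeping their base
git ws HEAD~3

# On a branch that tracks its upstream (e.g. created with `git checkout -b topic -t`),
# the upstream argument is implied, so the whole topic branch can be cleaned with:
git ws
```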
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19151940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4704515/']}
jdg_73741
stackexchange
llm_judgeable_groundtruth_similarity
10399334
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for some examples of creating a custom constructor on my models. I want to structure the model/data differently than just setting it as attributes. Can somebody show me a basic example of how to do this? Thanks! Now provide the response and nothing else.
If you really want to override the constructor, pass a constructor property to Backbone.Model.extend() , e.g.:

var Klass = Backbone.Model.extend( {
    constructor : function ( attributes, options ) {
        // ...
    }
} );

If you want to call the built-in constructor from your custom constructor, you can do something like:

var Klass = Backbone.Model.extend( {
    constructor : function ( attributes, options ) {
        Backbone.Model.apply( this, arguments );
    }
} );

Or if you don't want to have to repeat the name of the variable containing the parent class all over the sub class, or you don't want to worry about the value of that variable changing, you can do something like the following:

var Klass;
var parent_klass = Backbone.Model.prototype;
( function ( parent_klass ) {
    Klass = parent_klass.constructor.extend( {
        constructor : function ( attributes, options ) {
            parent_klass.constructor.apply( this, arguments );
        }
    } );
} )( parent_klass );

Or if you prefer the way @Claude suggests , but repeating the sub class variable name within the sub class instead of the parent class var name:

var Klass = Backbone.Model.extend( {
    constructor : function ( attributes, options ) {
        Klass.parent_klass.constructor.apply( this, arguments );
    }
}, {
    parent_klass : Backbone.Model.prototype
} );

If you want more advice than that, you'll have to be more specific about what you want to accomplish. Anything that you just want to do after the built-in constructor functionality, you should probably do in initialize() .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10399334', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/666564/']}
jdg_73742
stackexchange
llm_judgeable_groundtruth_similarity
9032050
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here's what I'm trying to do: Get image A, and image B. Image B is a black and white mask image. Replace image A's alpha channel with image B's red channel. Draw image C on the canvas. Draw image A on top of image C. Everything seems ok until step 4. Image C isn't visible at all and where image A should be transparent there's white color.

cx.putImageData(imageA, 0, 0);
var resultData = cx.getImageData(0, 0, view.width, view.height);
for (var h=0; h<resultData.data.length; h+=4) {
    resultData.data[h+3] = imageB.data[h];
}
cx.putImageData(imageC, 0, 0);
cx.putImageData(resultData, 0, 0);

Now provide the response and nothing else.
Simon is right: the putImageData method does not pay any attention to compositing; it merely copies pixel values. In order to get compositing, we need to use drawing operations. We need to mess with the channels (turn red into alpha) with the pixel data, put that changed pixel data into an image, and then use a composite operation to get the desired masking.

//copy from one channel to another
var assignChannel = function(imageData, channelTo, channelFrom) {
    if(channelTo < 0 || channelTo > 3 || channelFrom < 0 || channelFrom > 3) {
        throw new Error("bad channel number");
    }
    if(channelTo == channelFrom)
        return;
    var px = imageData.data;
    for(var i = 0; i < px.length; i += 4) {
        px[i + channelTo] = px[i + channelFrom];
    }
};

/**============================================================================
 * this function uses 3 or 4 canvases for clarity / pedagogical reasons:
 * redCanvas has our mask image;
 * maskCanvas will be used to store the alpha channel conversion of redCanvas' image;
 * imageCanvas contains the image to be masked;
 * ctx is the context of the canvas to which the masked image will be drawn.
============================================================================**/
var drawOnTopOfRed = function(redCanvas, maskCanvas, imageCanvas, ctx) {
    var redImageData = redCanvas.getContext("2d").getImageData(0, 0, w, h);

    //assign the alpha channel
    assignChannel(redImageData, 3, 0);

    //write the mask image
    maskCanvas.getContext("2d").putImageData(redImageData, 0, 0);

    ctx.save();

    //draw the mask
    ctx.globalCompositeOperation = "copy";
    ctx.drawImage(maskCanvas, 0, 0);

    //draw the image to be masked, but only where both it
    //and the mask are opaque; see http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#compositing for details.
    ctx.globalCompositeOperation = "source-in";
    ctx.drawImage(imageCanvas, 0, 0);

    ctx.restore();
};

jsfiddle example

A doodle with the example:
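A possible way to wire the two functions above together. The canvas sizes and the free variables w and h that the answer's code relies on are assumptions for illustration, and the images are presumed to be drawn into their canvases beforehand:

```js
// Assumes image B (the mask) has been drawn into redCanvas, image A into
// imageCanvas, and image C into target before calling drawOnTopOfRed.
var w = 256, h = 256;
function makeCanvas() {
  var c = document.createElement("canvas");
  c.width = w; c.height = h;
  return c;
}
var redCanvas = makeCanvas();   // holds image B (the red/black mask)
var maskCanvas = makeCanvas();  // scratch canvas for the alpha-converted mask
var imageCanvas = makeCanvas(); // holds image A (the image to be masked)
var target = makeCanvas();      // image C is drawn here first

drawOnTopOfRed(redCanvas, maskCanvas, imageCanvas, target.getContext("2d"));
```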
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9032050', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/924677/']}
jdg_73743
stackexchange
llm_judgeable_groundtruth_similarity
23418
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is it necessary to go straight to ordering a professionally made PCBs when one wants to test a surface mount component circuit involving high rates of change of voltage (dv/dt) and current (di/dt)? That is, this question refers prototyping SMD PCBs when one must minimize interconnect length to avoid parasitic inductance. Does this requirement make readily available breakout boards useless? ( http://www.futurlec.com/SMD_Adapters.shtml ). I've etched my own PCBs, but, I have found that soldering fine pitch parts without a soldermask difficult. Have people found this to be a viable option even with leadless packages with 0.5mm pad spacing and a big thermal pad such as the QFN? For a particular example I refer you to this question: What is causing large oscillations in my DC/DC boost converter? Is this ground bounce or some other effect? This was my first attempt at SMD circuit, a DC/DC converter, and in dealing with parasitics. It was based around a small QFN and I couldn't think of any way to prototype the circuit other than to go straight to getting the board done professionally. A very tight layout was indeed critical to getting the board to work. I found that my layout wasn't sufficiently tight and I will need to make another board revision. Did I have a better option to getting the board done professionally? I am asking in case there is another option I don't know of. Does anyone try to solder 30 gauge wire wrap wire onto the small pads and wire the chip up somehow? Now provide the response and nothing else.
What's your work environment? Mentioning toner transfer makes me think you're a hobbyist (which is fine), but as a hobbyist you're doing this because it's fun. Your time takes on a different value, and your budget outlook is quite different. As a professional, I build circuit boards because it makes money for my employer. I'm paid fairly well, and it's not economically sensible for me to mess around with toner transfer and trying to solder to that board. I take my time and try to do it right the first time, send the boards out for manufacture, and move on to other projects. When the boards get back, I send them through the reflow oven or have a tech solder them up (the former is easier with soldermask, the latter is easier with silkscreen and soldermask) and test. If it works, great! If it doesn't, I revise the board accordingly and try again. Usually, the board works the first time, but if not, I revise it and send it out again. Making a toner transfer board (or, at my workplace, a board cut out with a PCB router) is valuable when there's a major time crunch and you'd rather spend extra time to make sure that your prototype for the prototype works, rather than counting on the real prototype working the first time. I'm not going to sell or mass-manufacture routed boards, and they're laid out fundamentally differently than professionally made boards: Vias are free on professional boards, and difficult, large, and time-consuming on self-made boards Soldering is much more difficult. Keepaways, plane spacing, and thermals all behave very differently without soldermask. I'll work to make soldering easy on a self-made board, but lay out a professional board differently. Trace/space is smaller on a professional board. This could lead to major layout differences on some boards. Especially with high-frequency signals, moving things closer together can change impedances and cause problems. Some parts simply can't be soldered effectively on toner-transfer boards. 144-pin QFPs, QFN and BGA parts, and other tight layouts are far, far easier with soldermask. In most cases, it's a better investment to send out for a few samples of the final product and wait for shipping than to do a toner transfer board as a prototype. If you enjoy doing toner transfer stuff, enjoy getting better at soldering, and your time isn't a part of your budget (hint: It isn't, even if you're a hobbyist - you have limited time too), then toner transfer makes some sense. If not, just get the real thing.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/23418', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/1900/']}
jdg_73744
stackexchange
llm_judgeable_groundtruth_similarity
8633959
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to understand how new instances of a Python class should be created when the creation process can either be via the constructor or via the __new__ method. In particular, I notice that when using the constructor, the __init__ method will be automatically called after __new__ , while when invoking __new__ directly the __init__ class will not automatically be called. I can force __init__ to be called when __new__ is explicitly called by embedding a call to __init__ within __new__ , but then __init__ will end up getting called twice when the class is created via the constructor. For example, consider the following toy class, which stores one internal property, namely a list object called data : it is useful to think of this as the start of a vector class. class MyClass(object): def __new__(cls, *args, **kwargs): obj = object.__new__(cls, *args, **kwargs) obj.__init__(*args, **kwargs) return obj def __init__(self, data): self.data = data def __getitem__(self, index): return self.__new__(type(self), self.data[index]) def __repr__(self): return repr(self.data) A new instance of the class can be created either using the constructor (not actually sure if that is the right terminology in Python), something like x = MyClass(range(10)) or via slicing, which you can see invokes a call to __new__ in the __getitem__ method. x2 = x[0:2] In the first instance, __init__ will be called twice (both via the explicit call within __new__ and then again automatically), and once in the second instance. Obviously I would only like __init__ to be invoked once in any case. Is there a standard way to do this in Python? Note that in my example I could get rid of the __new__ method and redefine __getitem__ as def __getitem__(self, index): return MyClass(self.data[index]) but then this would cause a problem if I later want to inherit from MyClass , because if I make a call like child_instance[0:2] I will get back an instance of MyClass , not the child class. Now provide the response and nothing else.
First, some basic facts about __new__ and __init__ :

__new__ is a constructor . __new__ typically returns an instance of cls , its first argument. By __new__ returning an instance of cls , __new__ causes Python to call __init__ .
__init__ is an initializer . It modifies the instance ( self ) returned by __new__ . It does not need to return self .

When MyClass defines:

def __new__(cls, *args, **kwargs):
    obj = object.__new__(cls, *args, **kwargs)
    obj.__init__(*args, **kwargs)
    return obj

MyClass.__init__ gets called twice. Once from calling obj.__init__ explicitly, and a second time because __new__ returned obj , an instance of cls . (Since the first argument to object.__new__ is cls , the instance returned is an instance of MyClass so obj.__init__ calls MyClass.__init__ , not object.__init__ .)

The Python 2.2.3 release notes has an interesting comment, which sheds light on when to use __new__ and when to use __init__ :

    The __new__ method is called with the class as its first argument; its responsibility is to return a new instance of that class. Compare this to __init__ : __init__ is called with an instance as its first argument, and it doesn't return anything; its responsibility is to initialize the instance. All this is done so that immutable types can preserve their immutability while allowing subclassing. The immutable types (int, long, float, complex, str, unicode, and tuple) have a dummy __init__ , while the mutable types (dict, list, file, and also super, classmethod, staticmethod, and property) have a dummy __new__ .

So, use __new__ to define immutable types, and use __init__ to define mutable types. While it is possible to define both, you should not need to do so.

Thus, since MyClass is mutable, you should only define __init__ :

class MyClass(object):
    def __init__(self, data):
        self.data = data
    def __getitem__(self, index):
        return type(self)(self.data[index])
    def __repr__(self):
        return repr(self.data)

x = MyClass(range(10))
x2 = x[0:2]
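To make the immutable/mutable split concrete, here is a small sketch (Python 3 syntax; the class names are invented for illustration):

```python
# Immutable subclass: the value is fixed at construction, so customise __new__.
class Celsius(float):
    def __new__(cls, value):
        return super().__new__(cls, value)

# Mutable subclass: plain initialisation belongs in __init__.
class Readings(list):
    def __init__(self, data):
        super().__init__(data)

t = Celsius(21.5)        # isinstance(t, float) -> True
r = Readings(range(3))   # behaves like [0, 1, 2]
```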
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8633959', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/233446/']}
jdg_73745
stackexchange
llm_judgeable_groundtruth_similarity
364878
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $F$ be a field and $F[x]$ a polynomial ring. Let $p(x)$ be an irreducible polynomial. Show that $\gcd(p(x),q(x)) = 1\Longrightarrow \exists r(x),s(x)$ such that $r(x)p(x)+s(x)q(x) = 1$. I know the proof for Bezout's identity for integers , but this proof uses the notion of absolute value, which cannot be applied to a polynomial ring. I am thinking about a way to prove the statement: Use the Euclidean algorithm to compute $\gcd(p(x),q(x))$ and show that the result is $1$, and somehow go backwards to obtain $r(x)p(x)+s(x)q(x) = 1$, but it is a bit messy, and I'm not yet able to see it completely. Please offer some help. If you have some other methods, that would be great, too. Now provide the response and nothing else.
Hint $\ $ Mimic the proof for integers: $ $ the set $\rm\,I = \{a\, p + b\, q\ :\ a,b \in F[x]\}\,$ is closed under mod (remainders), so a least (degree) element $\rm\:g\ne 0\:$ must divide every element $\rm\,f\,$ (else its remainder $\rm\,f\ mod\ g\, =\, f-q\,g\in I\,$ has smaller degree than $\rm\,g,\,$ contra leastness of $\rm\:g).\,$ Thus $\rm\,p,q\in I\,\Rightarrow\,g\mid p,q,\:$ so $\rm\:g = 1,\:$ by $\rm\:gcd(p,q) = 1.\:$ Thus $\rm\:1\in I,\:$ i.e. $\rm\:1 = a\, p + b\, q.$
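A worked instance of the hint over $\mathbb{Q}$ (the particular polynomials are chosen only for illustration): dividing $x^2+1$ by $x+1$ leaves remainder $2$, and rescaling gives the Bézout pair.

```latex
x^{2}+1 \;=\; (x-1)(x+1) + 2
\;\Longrightarrow\;
1 \;=\; \tfrac{1}{2}\,(x^{2}+1) \;-\; \tfrac{x-1}{2}\,(x+1),
\qquad r(x)=\tfrac{1}{2},\quad s(x)=-\tfrac{x-1}{2}.
```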
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/364878', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/59360/']}
jdg_73746
stackexchange
llm_judgeable_groundtruth_similarity
494763
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating a portable test socket for electrical appliances that comes equip with a RCBO, power meter and possibly emergency stop button. My test socket will operate at max 240V AC and 10A. The purpose of the RCBO is to protect myself when I'm working on or fixing electrical appliances. When searching for RCBO breakers, they seem to have different rated residual current varying from 10mA to 300mA with the most common being 30mA. I understand that in a household application where multiple devices are connected there can be some leakage current where the higher rated residual current RCBOs are used. However, in my case where the RCBO is connected close to a single appliance, which value is most suitable for me? And also to take note the 10mA RCBOs are almost triple the price of the 30mA version from my local supplier. Now provide the response and nothing else.
I am not sure why this wasn't the first thing pointed out by any of the earlier answers, but it is because as transistors are made smaller to increase speed, increase density, and reduce power consumption, the gate oxide layer is made thinner (which also increases leakage currents). A thin gate oxide layer can't withstand very high voltages so you end up with a device that only operates at very low voltages. Thin oxide layers also have more leakage so you don't want a high voltage anyways since that would just increase leakage current and increases static power consumption. Your mistake is this: Data processing, unlike power systems, isn't about power delivery; It's about data processing. So it is not that designers choose to operate at low voltages and high currents thus going against \$I^2R\$ . Yes, they are concerned about power consumption and heat due to losses, but they aren't concerned with the efficient delivery of power. A power designer has to deliver X amount of power and would increase voltage so they could decrease current while delivering that same power. A digital designer would outright decrease the "power output" if they could. Their optimizations necessitate low operating voltages which results in high leakage currents. The goal of these optimizations is to allow smaller transistors so you can pack more of them in as well as switch them faster, and when you have millions upon millions of transistors switching very frequently that results a lot of charging/discharging the gate capacitances. This dynamic current results in the high peak currents which can be tens of amps in high-speed, high density digital logic. You can see that all this current and power is undesired and unintentional. Ideally, we would like really no current at all because our concern is information, not energy/power. High voltages would also be nice for noise immunity but this runs directly counter to making transistors smaller.
{}
{'log_upvote_score': 7, 'links': ['https://electronics.stackexchange.com/questions/494763', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/113566/']}
jdg_73747
stackexchange
llm_judgeable_groundtruth_similarity
20446514
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

module.js:340
    throw err;
    ^
Error: Cannot find module './models/todo'
    at Function.Module._resolveFilename (module.js:338:15)
    at Function.Module._load (module.js:280:25)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at Object.<anonymous> (C:\Users\Basel\Desktop\Todo List\routes\api.js:1:74)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:364:17)
C:\Users\Basel\Desktop\Todo List>

Why won't this application start up? I've already tried a global npm install. Now provide the response and nothing else.
In ./models/todo , the period indicates that node will look in the same folder that api.js is in, which would look for \Todo List\routes\models\todo.js . This does not start from the root of the application. To require this, you'll need to use two periods to jump up a level, and specify the app path as well:

var todo = require('../app/models/todo');
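For illustration only (the exact relative path depends on where the models folder actually sits in this project), the same idea can also be written with an explicit __dirname so the lookup does not depend on the current working directory:

```js
// routes/api.js -- hypothetical paths, adjust to the real folder layout
var path = require('path');
var todo = require(path.join(__dirname, '..', 'models', 'todo'));
```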
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20446514', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2993058/']}
jdg_73748
stackexchange
llm_judgeable_groundtruth_similarity
15191129
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: There are dozens of posts, questions, and answers on Stack Overflow and other forums about disabling/bypassing/suppressing UAC. There are solutions as well, but perhaps not programmatic ones. I could find only one, Disabling UAC programmatically, but there is no real programmatic solution given there. Can there be a programmatic solution that saves the user from being prompted every time he/she runs a program like WAMP, where they always have to click Yes? It would be better to tell Windows that their choice is always Yes. I am sure this is possible, as I have found here that Windows provides this facility in Task Scheduler through its GUI, so it must be possible through code as well. Update: I have prepared a pure programmatic solution which is working. See my answer. Now provide the response and nothing else.
Quick description: Make a new console/window application that runs any application bypassing UAC, choosing the path of your target application in this application as guided below; compile this program once, and run it anytime.

Step by step:

1. Download Microsoft.Win32.TaskScheduler.dll from This Codeplex link
2. Make a C# application (Windows or Console) and add a reference to the above dll
3. Add New Item (Application Manifest File) to your project (this application)
4. Change <requestedExecutionLevel level="asInvoker" uiAccess="false" /> to <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
5. Write the following code in your program.cs file:

using System;
using Microsoft.Win32.TaskScheduler;

class Program
{
    static void Main(string[] args)
    {
        TaskService ts = new TaskService();
        TaskDefinition td = ts.NewTask();
        td.Principal.RunLevel = TaskRunLevel.Highest;
        //td.Triggers.AddNew(TaskTriggerType.Logon);
        td.Triggers.AddNew(TaskTriggerType.Once);
        string program_path = @"c:\wamp\wampmanager.exe"; // you can have it dynamic
        //even of user choice giving an interface in win-form or wpf application
        td.Actions.Add(new ExecAction(program_path, null));
        ts.RootFolder.RegisterTaskDefinition("anyNamefortask", td);
    }
}

6. Now compile and run your application (this app)

Now your application (e.g. WAMP) will run without prompting any UAC dialog on your desired schedule (every time you log on to Windows in my case).

Sources:
Initiated from: Can you turn off UAC for a single app? and Selectively disabling UAC for specific programs on Windows 7
Basic Idea from: Make Vista launch UAC restricted programs at startup with Task Scheduler
Basic Implementation from: Creating Scheduled Tasks
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15191129', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/992165/']}
jdg_73749
stackexchange
llm_judgeable_groundtruth_similarity
14781137
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I keep bumping into this error, which is usually caused by some mistake I have made while building an application in LightSwitch. It is usually associated with relationships. I have usually moved along and done a number of things before I publish the app and see the error, and by that time it is difficult to work out what I did wrong. Is there a way I can trace this error back to see what I need to change in the tables?

Net SqlClient Data Provider: Msg 50000, Level 16, State 127, Line 6
Rows were detected. The schema update is terminating because data loss might occur.

Thank you. Now provide the response and nothing else.
This error occurs when a change that you've made to an entity's property (in the table designer) would cause the entity's table in the published database to be dropped & recreated, and the table has data in it. This is just the way that SQL Server works; it's not under LightSwitch's control. However, LightSwitch errs on the side of caution, & doesn't permit an operation that might cause the potential loss of any data.

The types of things that might trigger this are:

- renaming a property
- changing it from required to not required, etc.
- changing a property's data type
- & even (if I remember correctly) changing the position of a property in the list of properties

You can, however, add a property to the end of the list of properties without triggering the table being dropped & recreated.

It's really not a good idea to make too many changes before attempting to publish the application, for exactly the reason you suggest - ending up not knowing what you've done. You especially want to publish after making any changes to the properties of an entity, even if you just publish to a local SQL Server instance (including SQL Express) on your development machine, to "test out" your changes before you publish to the production server.

The way I got around this (back when I still used LightSwitch's intrinsic data, ApplicationData - I now use attached data sources) is to manually make the change to the column in the database itself, using something like SSMS (SQL Server Management Studio). You can then decide to allow the (potential) data loss. Be very careful though about what you change, & make sure it's only the same as the change that you made to the property in LightSwitch. Of course, back up the database before making any changes to it. If you cause the database to be out of sync with LightSwitch you will have major problems.
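As a sketch of the kind of manual change described in the last paragraph (the table, column and type names are placeholders, and, as the answer stresses, back up the database first and mirror exactly what was changed in LightSwitch):

```sql
-- Example only: widen a column in SSMS yourself instead of letting the publish
-- step drop and recreate the whole table.
ALTER TABLE [dbo].[Customers]
    ALTER COLUMN [Notes] NVARCHAR(MAX) NULL;
```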
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14781137', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1724008/']}
jdg_73750
stackexchange
llm_judgeable_groundtruth_similarity
196733
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have code like this: class RetInterface {...}class Ret1: public RetInterface {...}class AInterface{ public: virtual boost::shared_ptr<RetInterface> get_r() const = 0; ...};class A1: public AInterface{ public: boost::shared_ptr<Ret1> get_r() const {...} ...}; This code does not compile. In visual studio it raises C2555: overriding virtual function return type differs and is not covariant If I do not use boost::shared_ptr but return raw pointers, the code compiles (I understand this is due to covariant return types in C++). I can see the problem is because boost::shared_ptr of Ret1 is not derived from boost::shared_ptr of RetInterface . But I want to return boost::shared_ptr of Ret1 for use in other classes, else I must cast the returned value after the return. Am I doing something wrong? If not, why is the language like this - it should be extensible to handle conversion between smart pointers in this scenario? Is there a desirable workaround? Now provide the response and nothing else.
Firstly, this is indeed how it works in C++: the return type of a virtual function in a derived class must be the same as in the base class. There is the special exception that a function that returns a reference/pointer to some class X can be overridden by a function that returns a reference/pointer to a class that derives from X, but as you note this doesn't allow for smart pointers (such as shared_ptr ), just for plain pointers. If your interface RetInterface is sufficiently comprehensive, then you won't need to know the actual returned type in the calling code. In general it doesn't make sense anyway: the reason get_r is a virtual function in the first place is because you will be calling it through a pointer or reference to the base class AInterface , in which case you can't know what type the derived class would return. If you are calling this with an actual A1 reference, you can just create a separate get_r1 function in A1 that does what you need. class A1: public AInterface{ public: boost::shared_ptr<RetInterface> get_r() const { return get_r1(); } boost::shared_ptr<Ret1> get_r1() const {...} ...}; Alternatively, you can use the visitor pattern or something like my Dynamic Double Dispatch technique to pass a callback in to the returned object which can then invoke the callback with the correct type.
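A self-contained sketch of the get_r1 suggestion above, using std::shared_ptr in place of boost (the class bodies are invented for illustration):

```cpp
#include <iostream>
#include <memory>

struct RetInterface { virtual ~RetInterface() = default; };
struct Ret1 : RetInterface { void only_on_ret1() const { std::cout << "Ret1\n"; } };

struct AInterface {
    virtual ~AInterface() = default;
    virtual std::shared_ptr<RetInterface> get_r() const = 0;
};

struct A1 : AInterface {
    // The virtual override keeps the base return type...
    std::shared_ptr<RetInterface> get_r() const override { return get_r1(); }
    // ...while a non-virtual helper exposes the derived type to callers holding an A1.
    std::shared_ptr<Ret1> get_r1() const { return std::make_shared<Ret1>(); }
};

int main() {
    std::shared_ptr<AInterface> a = std::make_shared<A1>();
    // Through the base interface, downcast explicitly when the derived type is needed:
    if (auto r1 = std::dynamic_pointer_cast<Ret1>(a->get_r())) r1->only_on_ret1();
}
```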
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/196733', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/19501/']}
jdg_73751
stackexchange
llm_judgeable_groundtruth_similarity
59494037
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: on a web application I want to display two different Menu, one for the Mobile, one for the Desktop browser.I use Next.js application with server-side rendering and the library react-device-detect . Here is the CodeSandox link . import Link from "next/link";import { BrowserView, MobileView } from "react-device-detect";export default () => ( <div> Hello World.{" "} <Link href="/about"> <a>About</a> </Link> <BrowserView> <h1> This is rendered only in browser </h1> </BrowserView> <MobileView> <h1> This is rendered only on mobile </h1> </MobileView> </div>); If you open this in a browser and switch to mobile view and look the console you get this error: Warning: Text content did not match. Server: " This is rendered only in browser " Client: " This is rendered only on mobile " This happen because the rendering by the server detects a browser and on the client, he is a mobile device. The only workaround I found is to generate both and use the CSS like this: .activeOnMobile { @media screen and (min-width: 800px) { display: none; }}.activeOnDesktop { @media screen and (max-width: 800px) { display: none; }} Instead of the library but I don't really like this method. Does someone know the good practice to handle devices type on an SSR app directly in the react code? Now provide the response and nothing else.
LATEST UPDATE: So if you don't mind doing it client side you can use the dynamic importing as suggested by a few people below. This will be for use cases where you use static page generation. i created a component which passes all the react-device-detect exports as props (it would be wise to filter out only the needed exports because then does not treeshake) // Device/Device.tsximport { ReactNode } from 'react'import * as rdd from 'react-device-detect'interface DeviceProps { children: (props: typeof rdd) => ReactNode}export default function Device(props: DeviceProps) { return <div className="device-layout-component">{props.children(rdd)}</div>} // Device/index.tsimport dynamic from 'next/dynamic'const Device = dynamic(() => import('./Device'), { ssr: false })export default Device and then when you want to make use of the component you can just do const Example = () => { return ( <Device> {({ isMobile }) => { if (isMobile) return <div>My Mobile View</div> return <div>My Desktop View</div> }} </Device> )} Personally I just use a hook to do this, although the initial props method is better. import { useEffect } from 'react'const getMobileDetect = (userAgent: NavigatorID['userAgent']) => { const isAndroid = () => Boolean(userAgent.match(/Android/i)) const isIos = () => Boolean(userAgent.match(/iPhone|iPad|iPod/i)) const isOpera = () => Boolean(userAgent.match(/Opera Mini/i)) const isWindows = () => Boolean(userAgent.match(/IEMobile/i)) const isSSR = () => Boolean(userAgent.match(/SSR/i)) const isMobile = () => Boolean(isAndroid() || isIos() || isOpera() || isWindows()) const isDesktop = () => Boolean(!isMobile() && !isSSR()) return { isMobile, isDesktop, isAndroid, isIos, isSSR, }}const useMobileDetect = () => { useEffect(() => {}, []) const userAgent = typeof navigator === 'undefined' ? 'SSR' : navigator.userAgent return getMobileDetect(userAgent)}export default useMobileDetect I had the problem that scroll animation was annoying on mobile devices so I made a device based enabled scroll animation component; import React, { ReactNode } from 'react'import ScrollAnimation, { ScrollAnimationProps } from 'react-animate-on-scroll'import useMobileDetect from 'src/utils/useMobileDetect'interface DeviceScrollAnimation extends ScrollAnimationProps { device: 'mobile' | 'desktop' children: ReactNode}export default function DeviceScrollAnimation({ device, animateIn, animateOut, initiallyVisible, ...props }: DeviceScrollAnimation) { const currentDevice = useMobileDetect() const flag = device === 'mobile' ? currentDevice.isMobile() : device === 'desktop' ? currentDevice.isDesktop() : true return ( <ScrollAnimation animateIn={flag ? animateIn : 'none'} animateOut={flag ? animateOut : 'none'} initiallyVisible={flag ? initiallyVisible : true} {...props} /> )} UPDATE: so after further going down the rabbit hole, the best solution i came up with is using the react-device-detect in a useEffect, if you further inspect the device detect you will notice that it exports const's that are set via the ua-parser-js lib export const UA = new UAParser();export const browser = UA.getBrowser();export const cpu = UA.getCPU();export const device = UA.getDevice();export const engine = UA.getEngine();export const os = UA.getOS();export const ua = UA.getUA();export const setUA = (uaStr) => UA.setUA(uaStr); This results in the initial device being the server which causes false detection. I forked the repo and created and added a ssr-selector which requires you to pass in a user-agent. 
which could be done using the initial props UPDATE: Because of Ipads not giving a correct or rather well enough defined user-agent, see this issue , I decided to create a hook to better detect the device import { useEffect, useState } from 'react'function isTouchDevice() { if (typeof window === 'undefined') return false const prefixes = ' -webkit- -moz- -o- -ms- '.split(' ') function mq(query) { return typeof window !== 'undefined' && window.matchMedia(query).matches } // @ts-ignore if ('ontouchstart' in window || (window?.DocumentTouch && document instanceof DocumentTouch)) return true const query = ['(', prefixes.join('touch-enabled),('), 'heartz', ')'].join('') // include the 'heartz' - https://git.io/vznFH return mq(query)}export default function useIsTouchDevice() { const [isTouch, setIsTouch] = useState(false) useEffect(() => { const { isAndroid, isIPad13, isIPhone13, isWinPhone, isMobileSafari, isTablet } = require('react-device-detect') setIsTouch(isTouch || isAndroid || isIPad13 || isIPhone13 || isWinPhone || isMobileSafari || isTablet || isTouchDevice()) }, []) return isTouch Because I require the package each time I call that hook, the UA info is updated, it also fixes to SSR out of sync warnings.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/59494037', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8562168/']}
jdg_73752
stackexchange
llm_judgeable_groundtruth_similarity
5385610
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have a SELECT tag that will filter a table based on a user choice, does the SELECT tag need to be in a FORM tag to be valid HTML5, given that the resulting functionality will not work with JS disabled? (i.e. we'll show the entire table or a 'more' link instead of doing a server-side filter on select of the form action/submit option. We may write the select in JS so it disappears from the markup if JS is disabled.) Or do all form elements need to be in a form tag regardless of usage (and therefore a null 'action' attribute value)? I know HTML5 allows almost anything, I just couldn't find a definitive answer on W3, so I thought I'd get your thoughts. Hope that makes sense. Cheers. Now provide the response and nothing else.
All the form controls can be used anywhere where phrasing content is expected, which means they can appear just about anywhere in the body of the document. If you don't need to have them submitted back to a server then there's no need for them to be associated with a form, however I've noticed that in some browsers you can't take advantage of the form validation features unless the elements can potentially be submitted. One feature new to HTML5 is that form controls no longer need to be the direct child of a form element in order to be submitted with that form, the form attribute allows you to specify the id of the form the element should be submitted with.
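A minimal sketch of both points above (element names and values are made up): a select used purely as a JS-driven filter needs no form at all, and if it ever must be submitted, the HTML5 form attribute lets it live outside the form element.

```html
<!-- JS-only filter: valid HTML5 without any form -->
<select id="filter" aria-label="Filter rows">
  <option value="">All</option>
  <option value="open">Open</option>
</select>

<!-- Submittable control outside the form, associated via the form attribute -->
<form id="f" action="/filter" method="get"></form>
<select name="status" form="f">
  <option value="open">Open</option>
</select>
```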
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5385610', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/592860/']}
jdg_73753
stackexchange
llm_judgeable_groundtruth_similarity
31768197
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Currently I am using this:

var matchedUsersFromRealm = MatchedUser.allObjects()
var matchedUsersInRealm = RLMArray(objectClassName: MatchedUser.className())
matchedUsersInRealm.removeAllObjects()
matchedUsersInRealm.addObjects(matchedUsersFromRealm)

But it just looks cumbersome rather than getting it in one line as it should (or did?). Maybe there is a better way? PS, I am working on a mixed project and somehow I can only use the Objective-C version and bridge it to my Swift project, so Realm().objects() is not available, even though it returns a Results, not an array. Now provide the response and nothing else.
You can add these extensions:

import Foundation
import RealmSwift

extension Results {
    func toArray() -> [T] {
        return self.map{$0}
    }
}

extension RealmSwift.List {
    func toArray() -> [T] {
        return self.map{$0}
    }
}

And then when fetching:

do {
    let realm = try Realm()
    let objs = realm.objects(MyObjType).toArray()
    // ...
} catch _ {
    // ...
}

(Remove do try catch if you're using Swift pre-2.0) Note that this loads everything into memory at once, which may be in some cases not desired. If you're fetching in the background, it's required though, as Realm currently doesn't support using the objects in the main thread after that (you will also have to map the array to non-Realm objects in this case).
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31768197', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4183169/']}
jdg_73754
stackexchange
llm_judgeable_groundtruth_similarity
845556
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here is some simple code:

DIR* pd = opendir(xxxx);
struct dirent *cur;
while (cur = readdir(pd))
    puts(cur->d_name);

What I get is kind of messy: including dot ( . ), dot-dot ( .. ) and file names that end with ~ . I want to do exactly the same thing as the command ls . How do I fix this, please? Now provide the response and nothing else.
This is normal. If you do ls -a (which shows all files, ls -A will show all files except for . and .. ), you will see the same output. . is a link referring to the directory it is in: foo/bar/. is the same thing is foo/bar . .. is a link referring to the parent directory of the directory it is in: foo/bar/.. is the same thing as foo . Any other files beginning with . are hidden files (by convention, it is not really enforced by anything; this is different from Windows, where there is a real, official hidden attribute). Files ending with ~ are probably backup files created by your text editor (again, this is convention, these really could be anything). If you don't want to show these types of files, you have to explicitly check for them and ignore them.
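A minimal sketch of the explicit filtering the answer describes (skipping dotfiles and names ending in ~ is a convention-based choice, as noted above):

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    DIR *pd = opendir(".");
    if (!pd) { perror("opendir"); return 1; }
    struct dirent *cur;
    while ((cur = readdir(pd)) != NULL) {
        const char *n = cur->d_name;
        size_t len = strlen(n);
        if (n[0] == '.') continue;                  /* skips ".", ".." and hidden files */
        if (len > 0 && n[len - 1] == '~') continue; /* skips editor backup files */
        puts(n);
    }
    closedir(pd);
    return 0;
}
```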
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/845556', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/92782/']}
jdg_73755
stackexchange
llm_judgeable_groundtruth_similarity
10864462
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I saw in a lot of places that it's a problem to copy the SharedPreferences file to the sd card because every manufacturer place it somewhere else. I want to backup on the sd card no matter where is the file located.Is there any way to do this? Now provide the response and nothing else.
The SharedPreferences interface contains a method called getAll() which returns a map with the key-value pairs. So instead of copying the file itself, I just serialize the map that being returned from this method and then retrieve it back afterwards. Some code: private boolean saveSharedPreferencesToFile(File dst) { boolean res = false; ObjectOutputStream output = null; try { output = new ObjectOutputStream(new FileOutputStream(dst)); SharedPreferences pref = getSharedPreferences(prefName, MODE_PRIVATE); output.writeObject(pref.getAll()); res = true; } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); }finally { try { if (output != null) { output.flush(); output.close(); } } catch (IOException ex) { ex.printStackTrace(); } } return res;}@SuppressWarnings({ "unchecked" })private boolean loadSharedPreferencesFromFile(File src) { boolean res = false; ObjectInputStream input = null; try { input = new ObjectInputStream(new FileInputStream(src)); Editor prefEdit = getSharedPreferences(prefName, MODE_PRIVATE).edit(); prefEdit.clear(); Map<String, ?> entries = (Map<String, ?>) input.readObject(); for (Entry<String, ?> entry : entries.entrySet()) { Object v = entry.getValue(); String key = entry.getKey(); if (v instanceof Boolean) prefEdit.putBoolean(key, ((Boolean) v).booleanValue()); else if (v instanceof Float) prefEdit.putFloat(key, ((Float) v).floatValue()); else if (v instanceof Integer) prefEdit.putInt(key, ((Integer) v).intValue()); else if (v instanceof Long) prefEdit.putLong(key, ((Long) v).longValue()); else if (v instanceof String) prefEdit.putString(key, ((String) v)); } prefEdit.commit(); res = true; } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (ClassNotFoundException e) { e.printStackTrace(); }finally { try { if (input != null) { input.close(); } } catch (IOException ex) { ex.printStackTrace(); } } return res;} I hope that I helped someone, and if something here is wrong please tell me. Elad
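A possible call site for the two methods above, inside the same Activity (the file name is arbitrary; getExternalFilesDir() is one choice of destination, and java.io.File must be imported if it is not already):

```java
// Back up, then later restore, the SharedPreferences named by prefName
File backup = new File(getExternalFilesDir(null), "prefs.backup");
boolean saved = saveSharedPreferencesToFile(backup);
// ...
boolean restored = loadSharedPreferencesFromFile(backup);
```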
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10864462', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/713441/']}
jdg_73756
stackexchange
llm_judgeable_groundtruth_similarity
10043963
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the difference between:

class ClassName(object):
    pass

and

class ClassName:
    pass

When I call the help function of the module of those classes, you can read __builtin__.object for the first case just under the CLASS title of help. For the second case it just shows the class name. Is there any functional difference between those classes and/or possible methods thereof? (I know that class Classname(ParentClassName) has a functional use) Now provide the response and nothing else.
In Python 2.x, when you inherit from "object" your class is a "new style" class; that was implemented back in Python 2.2 (around 2001). Not inheriting from "object" creates an "old style" class, which is maintained only for backwards compatibility.

The great benefit of "new style" classes is the unification of types across Python - prior to that, one could not properly subclass built-in types such as int, list, and dict. There was also specified a "descriptor protocol", which describes a protocol for retrieving and setting attributes in an object, giving the language a lot of flexibility. (It is most visible when one uses a Python "property" in a class.)

What actually makes the difference is not "inheriting from object" itself but, since classes in Python are also objects, that doing so changes the class's class (a class's class is known as its "metaclass"). Thus if you set the metaclass to be "type", you don't need to inherit from object to have a new style class.

It is strongly recommended that in Python 2.x all your classes are new style - using old style classes may work for some single straightforward cases, but they can generate a lot of subtle, difficult-to-find errors when you try to use properties, pickle, descriptors, and other advanced features. Above all, when you try to check the "type" of an object, it will be the same (type "instance") for all objects from old style classes, even if they are from different user-defined classes.

In Python versions 3.x all classes are new style - no need to set the metaclass.

Python's documentation "datamodel" is the "book of law" where the behavior of both class types is defined in detail (enough to allow one to reimplement the language): http://docs.python.org/reference/datamodel.html

This blog post from Guido talks about the motivations behind new style classes in a lighter language: http://python-history.blogspot.com.br/2010/06/new-style-classes.html

http://docs.python.org/release/2.5.2/ref/node33.html
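A small Python 2.x illustration of the type unification the answer describes (expected output shown in the comments):

```python
class Old:           # old-style: no base class
    pass

class New(object):   # new-style
    pass

print type(Old())          # <type 'instance'>  -- the same for every old-style class
print type(New())          # <class '__main__.New'>
print type(New()) is New   # True -- type() reports the actual class only for new-style
```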
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10043963', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1305724/']}
jdg_73757
stackexchange
llm_judgeable_groundtruth_similarity
23794
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a toy web application that can be described roughly as a per-user key-value store. After logging in, the user can view and modify her set of key-value pairs. The keys and values are stored in a database on the server. The keys or values are not shared among users in any way. I just discovered that JavaScript code included in the keys or values was not properly escaped. So if a user added a key foo with a value of bar <script>alert(1)</script> , the script would execute in the user's browser. Of course, I fixed this as soon as I noticed it. But I was left wondering: Can this be exploited to do anything harmful? Remember that the key-value pairs are ONLY ever shown to the user who created them. (The only scenario I could think of is rather far fetched: An attacker convinces the user to navigate to my web application and then to click on a rogue bookmarklet. The bookmarklet saves a key-value pair with a script that steals the user's key-value pairs. Because this is persisted to the database, the key-value pairs will be sent to the attacker every time the user loads the page. But with a rogue bookmarklet you can do almost anything anyway...) Now provide the response and nothing else.
There may be some potential for a CSRF attack, where an attacker maliciously saves a key value pair on behalf of your users. Then the executable JS in the key value pair might be able to be used for a run of the mill XSS attack to steal their cookies and potentially allow the attacker access to the application as the victim. I agree with Jimmy's comment. If there's any possibility to properly scrub the user input, that would definitely be my first priority.
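One minimal sketch of the "scrub on output" side of this (the variable names are invented; contextual output encoding or a vetted sanitiser library is the broader fix):

```js
// Render the stored value as inert text instead of injecting it as markup
var cell = document.createElement('td');
cell.textContent = valueFromServer; // "<script>alert(1)</script>" is displayed, not executed
row.appendChild(cell);
```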
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/23794', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/15869/']}
jdg_73758
stackexchange
llm_judgeable_groundtruth_similarity
67317392
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing an NFT smart contract using the OpenZeppelin ERC721Full contract. I'm able to mint NFTs, but I want to have a button that enables them to be bought. I'm trying to write this function:

function buyNFT(uint _id) public payable{
    //Get NFT owner address
    address payable _seller = ownerOf(_id);
    // aprove nft sell
    approve(_seller, _id);
    setApprovalForAll(msg.sender, true);
    //transfer NFT
    transferFrom(_seller, msg.sender, _id);
    // transfer price in ETH
    address(_seller).transfer(msg.value);
    emit NftBought(_seller, msg.sender, msg.value);
}

This does not work because function approve must be called by the owner or an already approved address. I have no clue on how a buy function should be built. I know that I must use some requirements but first I want the function to work on tests and then I'll write the requirements. How should a buy function be coded? Because the only solution I have found is to overwrite the approve function and omit the require of who can call this function. But it looks like it isn't the way it should be done. Thank you! Now provide the response and nothing else.
You can use just the _transfer() function; see my buy() function for an example of implementation. The approvals for sale can be done using a custom mapping - in my example tokenIdToPrice . If the value is non-zero, the token ID (mapping key) is for sale. This is a basic code that allows selling an NFT. Feel free to expand on my code to allow "give away for free", "whitelist buyers" or any other feature.

pragma solidity ^0.8.4;

import 'https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/token/ERC721/ERC721.sol';

contract MyToken is ERC721 {
    event NftBought(address _seller, address _buyer, uint256 _price);

    mapping (uint256 => uint256) public tokenIdToPrice;

    constructor() ERC721('MyToken', 'MyT') {
        _mint(msg.sender, 1);
    }

    function allowBuy(uint256 _tokenId, uint256 _price) external {
        require(msg.sender == ownerOf(_tokenId), 'Not owner of this token');
        require(_price > 0, 'Price zero');
        tokenIdToPrice[_tokenId] = _price;
    }

    function disallowBuy(uint256 _tokenId) external {
        require(msg.sender == ownerOf(_tokenId), 'Not owner of this token');
        tokenIdToPrice[_tokenId] = 0;
    }

    function buy(uint256 _tokenId) external payable {
        uint256 price = tokenIdToPrice[_tokenId];
        require(price > 0, 'This token is not for sale');
        require(msg.value == price, 'Incorrect value');

        address seller = ownerOf(_tokenId);
        _transfer(seller, msg.sender, _tokenId);
        tokenIdToPrice[_tokenId] = 0; // not for sale anymore
        payable(seller).transfer(msg.value); // send the ETH to the seller

        emit NftBought(seller, msg.sender, msg.value);
    }
}

How to simulate the sale:

1. The contract deployer ( msg.sender ) gets token ID 1.
2. Execute allowBuy(1, 2) , which will allow anyone to buy token ID 1 for 2 wei.
3. From a second address, execute buy(1) sending along 2 wei, to buy the token ID 1.
4. Call the (parent ERC721) function ownerOf(1) to validate that the owner is now the second address.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/67317392', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13295504/']}
jdg_73759
stackexchange
llm_judgeable_groundtruth_similarity
3399509
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Theorem : Every infinite group $G$ has a subgroup $H$ that is non-trivial ( $H \ne G, \lbrace e \rbrace$ ). Proof: This will be a proof by contradiction. So we will assume every subgroup is trivial and bring the cyclic subgroups to the table. For the non-identity $\forall x \in G$ , $\langle x \rangle \ne \lbrace e \rbrace$ . Hence $\langle x \rangle = G.$ Thus every non-identity element of $G$ must be a generator of $G$ . Then it must be possible to write any element as the exponent of another. For $\forall y \in G$ , $$y =x^n.$$ Since $x^2 \in G$ by closure, it is also a generator and we must be able to write $x$ as an exponent of $x^2$ . Yet this is not possible unless our group is finite. So our assumption must be false and our conjecture must be true. $\square$ Is my proof watertight? And is it rigorous enough? Thanks for your reviews. Now provide the response and nothing else.
The idea is right, but you should avoid claims like Yet this is not possible unless our group is finite. since this doesn't give your reason for the "unless." In your case, you perhaps could explicitly note that $G = \langle x \rangle$ implies $x$ cannot have finite order. Then show that $x \in \langle x^2\rangle$ implies $x$ has finite order.
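For the second hint, a one-line completion (under the proof's assumption that $G=\langle x\rangle=\langle x^{2}\rangle$):

```latex
x \in \langle x^{2}\rangle
\;\Longrightarrow\; x = (x^{2})^{k} = x^{2k} \text{ for some } k \in \mathbb{Z}
\;\Longrightarrow\; x^{2k-1} = e,
```

and since $2k-1$ is odd, hence nonzero, $x$ has finite order, so $G = \langle x \rangle$ is finite, a contradiction.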
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3399509', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/554493/']}
jdg_73760
stackexchange
llm_judgeable_groundtruth_similarity
53535
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: To narrow the focus a bit from an earlier question : Is a Schottky diode appropriate for reverse polarity protection? I'd like to prevent mishaps from a user connecting DC power in reverse, but I'd also like as low a voltage drop as possible. Can you explain what the reverse leakage current is and whether it would be a concern or not in this scenario? The application is a small device that operates on 9-12 volts DC at less than 100 mA. Edit: Just as an example, I am expecting users to be able to use 6 AA cell batteries in series, either alkaline or NiMH. In the latter case, the batteries are 1.2V, so the total voltage is only 7.2V. I am using a 5V voltage regulator with a dropout voltage of 1.3V, so therefore my minimum operating voltage is 6.3V. A bias protection diode with 0.7V drop is going to raise that minimum to 7.0V. As the batteries are drained, I expect to dip below the 7.0V requirement very quickly, and therefore not use the full capacity of the batteries efficiently. If a 0.3V diode is used, the minimum requirement is lowered to 6.6V, which I feel is a better fit for use with NiMH batteries. Now provide the response and nothing else.
The bigger problem you're likely to run into is operation under forward bias conditions. Schottky diodes still have a voltage drop under forward bias, say 0.25V. That means at 100mA, you're dissipating 25mW of power. Better than a standard silicon diode, but not great especially for a battery constrained device. A better way to get reverse bias protection is to use a P-Channel MOSFET. MOSFET's act more like a resistor when saturated, and it's possible to get MOSFETs with low on resistances. Let's assume we have a 1 ohm on resistance. At 100mA, that's a 0.1 V drop across the MOSFET and 10mW dissipation. 1 ohm on resistance is kind of lousy for a MOSFET, you can get some which have significantly less on resistance. I'm not entirely sure about the leakage current through MOSFET's, but I seem to remember it being quite small. To hook up the mosfet: Connect the drain to the positive battery terminal, connect the gate to the negative terminal, and connect your load to the source. For added protection you can add a zener diode and a resistor across the source/gate. A more complete explanation can be found here.
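To quantify the "significantly less on resistance" remark, here is the same comparison with an illustrative (assumed) 50 mΩ part at the question's 100 mA:

```latex
P_{\text{Schottky}} \approx 0.25\,\text{V} \times 0.1\,\text{A} = 25\,\text{mW}, \qquad
P_{\text{FET},\,1\,\Omega} = (0.1\,\text{A})^{2} \times 1\,\Omega = 10\,\text{mW}, \qquad
P_{\text{FET},\,50\,\text{m}\Omega} = (0.1\,\text{A})^{2} \times 0.05\,\Omega = 0.5\,\text{mW}.
```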
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/53535', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/2028/']}
jdg_73761
stackexchange
llm_judgeable_groundtruth_similarity
120946
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to write a Mathematica program that realizes a graphical approximation of the basins of attraction in a Magnetic pendulum subject to friction and gravity, in which the three magnets are disposed on the vertices of an equilateral triangle. This system is chaotic and has very interesting properties. The basins of attraction look something like this: A code I wrote can produce a $400 \times 400$ image such as this ( Caution: no fancy ColorFunctions involved ) in about two hours. The computation seems to be extremely slow. Is there any way of having a better rendering, say, full HD 1920x1080 resolution, for the basins of attraction of a magnetic pendulum as the one mentioned that can be run in a farely quick time on a common machine? Code Here is the code I used to produce the above image. I set the position of the magnets and define the Lagrange equations X1 = 1; X2 = -(1/2); X3 = -(1/2); Y1 = 0; Y2 = Sqrt[3]/2; Y3 = -(Sqrt[3]/2); X[1] = X1; X[2] = X2; X[3] = X3; Y[1] = Y1; Y[2] = Y2; Y[3] = Y3;Eqs[k_,c_,h_]:={ x''[t]+k x'[t]+c x[t]-Sum[(X[i]-x[t])/(h^2+(X[i]-x[t])^2+(Y[i]-y[t])^2)^(3/2),{i,3}]==0, y''[t]+k y'[t]+c y[t]-Sum[(Y[i]-y[t])/(h^2+(X[i]-x[t])^2+(Y[i]-y[t])^2)^(3/2),{i,3}]==0 } I define a function that numerically integrates the equations up until $t=100$. Sol[k_, c_, h_, xo_, yo_] := NDSolve[ Flatten[{Evaluate[Eqs[k, c, h]], x'[0] == 0, y'[0]== 0, x[0] == xo, y[0] == yo}], {x, y}, {t, 99.5, 100.5}, Method -> "Adams" ]; I define a function tt that gives a value between $\frac13, \frac 23, 1$ based on magnet proximity at time $100$ for fixed $k,c,h$ (in this case $.15$,$.2$,$.2$) and a function k that evaluates tt on a grid. tt = Compile[{{x1, _Real}, {y1, _Real}}, Module[{}, Final = ({x[100], y[100]} /. (Sol[0.15, .2, .2, x1, y1])[[1]]); Distances = Map[(Final - #).(Final - #) &, {{1, 0}, {-(1/2), Sqrt[3]/2}, {-(1/2), -(Sqrt[3]/2)}}]; Magnet = Min[Distances]; Position[Distances, Magnet][[1, 1]]/3]];k[n_, xm_, ym_, xM_, yM_] := ParallelTable[tt[xi, yi], {yi, ym, yM, Abs[yM - ym]/n}, {xi, xm, xM, Abs[xM - xm]/n}]; Finally, I rasterize the table produced by k. G = Graphics[Raster[k[400, -2, -2, 2, 2], ColorFunction -> Hue]] and, after a while, I obtain the previous image. I attempted using a dynamic energy control (i.e. using EvaluationMonitor to monitor the energy level of ther trajectory: if it falls in a potential hole NDSolve throws the position) but this did not increase the speed as much as I was hoping; it actually seems to slow the computation down. Now provide the response and nothing else.
JM commented: If you want to try things out, use Nylander's second snippet, which is using a Beeman integrator. This looks to be faster than native NDSolve[] for this specific case. Paul Nylander's code is here . Below is a modified version of his code which computes all points simultaneously using the fact that all the operations in Beeman's algorithm are Listable functions in Mathematica. The run time for the 400x400 image is around 30 seconds. n = 400; {tmax, dt} = {25, 0.05};{k, c, h} = {0.15, 0.2, 0.2};{z1, z2, z3} = N@Exp[I 2 Pi {1, 2, 3}/3];l = 2.0;z = Developer`ToPackedArray @ Table[x + I y, {y, -l, l, 2 l/n}, {x, -l, l, 2 l/n}];v = a = aold = 0 z;Do[ z += v dt + (4 a - aold) dt^2/6; vpredict = v + (3 a - aold) dt/2; anew = (z1 - z)/(h^2 + Abs[z1 - z]^2)^1.5 + (z2 - z)/(h^2 + Abs[z2 - z]^2)^1.5 + (z3 - z)/(h^2 + Abs[z3 - z]^2)^1.5 - c z - k vpredict; v += (5 anew + 8 a - aold) dt/12; aold = a; a = anew, {t, 0, tmax, dt}];res = Abs[{z - z1, z - z2, z - z3}];Image[0.2/res, Interleaving -> False]
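For readers without Mathematica, here is a rough NumPy translation of the same vectorized Beeman scheme (a sketch only; it mirrors the variable names and the k = 0.15, c = 0.2, h = 0.2 constants used above, and simply reports which magnet each starting point ends up nearest to):

import numpy as np

n, tmax, dt = 400, 25.0, 0.05
k, c, h = 0.15, 0.2, 0.2
magnets = np.exp(1j * 2 * np.pi * np.arange(1, 4) / 3)   # z1, z2, z3

lim = 2.0
xs = np.linspace(-lim, lim, n + 1)
z = xs[None, :] + 1j * xs[:, None]        # grid of starting positions as complex numbers
v = np.zeros_like(z)
a = np.zeros_like(z)
a_old = np.zeros_like(z)

def accel(z, v):
    acc = -c * z - k * v                  # spring force and friction
    for zm in magnets:
        d = zm - z
        acc += d / (h**2 + np.abs(d)**2) ** 1.5   # attraction of each magnet
    return acc

for _ in range(int(round(tmax / dt))):    # Beeman predictor-corrector steps
    z = z + v * dt + (4 * a - a_old) * dt**2 / 6
    v_pred = v + (3 * a - a_old) * dt / 2
    a_new = accel(z, v_pred)
    v = v + (5 * a_new + 8 * a - a_old) * dt / 12
    a_old, a = a, a_new

basin = np.argmin([np.abs(z - zm) for zm in magnets], axis=0)   # 0, 1 or 2 per pixel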
{}
{'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/120946', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/38750/']}
jdg_73762
stackexchange
llm_judgeable_groundtruth_similarity
46007746
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I found the haskell wiki page on space leaks, which claims to list examples of real-world leaks, which it doesn't. It doesn't really say what a space leak is; it just links to the page for memory leaks. What's a space leak? Now provide the response and nothing else.
As noted in @Rasko's answer, a space leak refers to a situation where a program or specific computation uses more (usually much more) memory than is necessary for the computation and/or expected by the programmer. Haskell programs tend to be particularly susceptible to space leaks, mostly because of the lazy evaluation model (sometimes complicated by the way IO interacts with this model) and the highly abstract nature of the language which can make it difficult for a programmer to determine exactly how a particular computation is likely to be performed. It helps to consider a specific example. This Haskell program: main = print $ sum [1..1000000000] is an idiomatic way to sum the first billion integers. Compiled with -O2 , it runs in a few seconds in constant memory (a few megabytes, basically the runtime overhead). Now, any programmer would expect a program to sum the first billion integers should run without chewing up memory, but it's actually a little surprising that this Haskell version is well behaved. After all, read literally, it constructs a list of a billion integers before summing them up, so it ought to require at least a few gigabytes (just for storage for the billion integers, not to mention the overhead of a Haskell linked list). However, lazy evaluation ensures that the list is only generated as it's needed and -- equally importantly -- optimizations performed by the compiler ensure that as list elements are added to the accumulating sum, the program recognizes they are no longer needed and allows them to be garbage collected instead of keeping them around until the end of the computation. So, at any point during the computation, only a sliding "window" into the middle of the list needs to be kept in memory -- earlier elements have been discarded, and later elements are yet to be lazily computed. (In fact, the optimizations go further than this: no list is even constructed, but this is far from obvious to the programmer.) Soooo... Haskell programmers get used to the idea that tossing around giant (or even infinite) data structures will "just work" with computations automatically using only the memory they need. But, a minor change to the program, like also printing the length of the list as proof of all the hard work we are doing: main = let vals = [1..1000000000] in print (sum vals, length vals) suddenly causes space usage to explode to dozens of gigabytes (or in the case of my laptop, to about 13Gigs before it starts swapping hopelessly and I kill it). This is a space leak. Calculating the sum and length of this list are obviously things that can be done in constant space using a "sliding window" view into the list, but the above program uses much more memory than needed. The reason, it turns out, is that once the list has been given a name vals that's used in two places, the compiler no longer allows the "used" elements to be immediately discarded. If the sum vals is evaluated first, the list is lazily generated and summed, but the entire, giant list is then kept around until length vals can be evaluated. As a more practical example, you might write a simple program to count words and characters in a file: main = do txt <- getContents print (length txt, length (words txt)) This works fine on small test files up to a couple megabytes, but it's noticeably sluggish on 10meg file, and if you try to run it on a 100meg file, it'll slowly but surely start gobbling up all available memory. 
Again, the problem is that -- even though the file contents are read lazily into txt -- because txt is used twice, the entire contents are read into memory as a Haskell String type (a memory-inefficient representation of large blocks of text) when, say, length txt is evaluated, and none of that memory can be freed until length (words txt) has also been computed. Note that: main = do txt <- getContents print $ length txt and: main = do txt <- getContents print $ length (words txt) both run quickly in constant space even on big files. As a side note, fixing the above space leak normally involves rewriting the computation so the characters and words are counted with one pass through the contents, so the compiler can determine that the contents of the file that have already been processed do not need to be kept around in memory until the end of the computation. One possible solution is: {-# LANGUAGE BangPatterns #-}import Data.Listimport Data.CharcharsWords :: String -> (Int, Int)charsWords str = let (_, chrs, wrds) = foldl' step (False, 0, 0) str in (chrs, wrds) where step (inWord, cs, ws) c = let !cs' = succ cs !ws' = if not inWord && inWord' then succ ws else ws !inWord' = not (isSpace c) in (inWord', cs', ws')main = do txt <- getContents print $ charsWords txt The complexity of this solution (use of bang ( ! ) patterns and an explicit fold instead of length and words ) illustrates how tough space leaks can be, especially for new Haskell programmers. And it's not at all obvious that using foldl' instead of foldl makes no difference (but using foldr or foldr' would be a disaster!), that the bangs before cs' and ws' are critical to avoid a space leak, but that the bang before inWord' isn't (though it slightly improves performance), etc.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46007746', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/596041/']}
jdg_73763
stackexchange
llm_judgeable_groundtruth_similarity
5466238
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to perform some analysis of scope in Python 3 source code and I'm stuck with how the nonlocal statement statement works inside a class definition. As I understand it, the class definition executes its body inside a new namespace (call it dict) and binds the class name to the result of type(name, bases, dict). Nonlocal x should work as long as it refers to a variable that is bound somewhere in the enclosing non-local scope. From this I expect the following code to compile and run: class A: v = 1 class B: nonlocal v v = 2 but this fails with SyntaxError: no binding for nonlocal 'v' found while the following code runs perfectly def A(): v = 1 class B: nonlocal v v = 2 Can anyone explain the difference here between the closure of the function definition and the class definition? Now provide the response and nothing else.
Lexical scoping applies only to function namespaces, otherwise methods defined inside a class would be able to "see" the class level attributes (which is by design - those attributes must instead be accessed as attributes of self inside the method). The same limitations that cause the class level variables to be skipped over by references from methods also keep the nonlocal keyword from working its magic. ( global does work though, since that doesn't rely on the lexical scoping machinery)
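A small, self-contained illustration of both points (the names here are made up purely for demonstration):

v = 1

class A:
    v = 2                 # class attribute, lives in A's namespace

    class B:
        global v          # 'global' works here; 'nonlocal v' would be a SyntaxError
        v = 3             # rebinds the module-level v, not an attribute of B

    def show(self):
        return v          # bare-name lookup skips the class namespace -> global v

    def show_attr(self):
        return self.v     # class-level attributes must be reached as attributes

print(v)                   # 3  (B's body rebound the global)
print(A.v)                 # 2
print(A().show())          # 3  (lexical scoping ignores the class scope)
print(A().show_attr())     # 2
print(hasattr(A.B, "v"))   # False: the assignment went to the module, not to B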
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5466238', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/681102/']}
jdg_73764
stackexchange
llm_judgeable_groundtruth_similarity
271105
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When I was about 10 years old playing with lamps, I accidentally picked up a wire completing a circuit for a lamp plugged into the wall (120 V AC U.S. standard). I'm not sure if it was the hot or neutral wire, but I had completed the wire from hand to hand. I had a shock and dropped the wire in slightly less than a second. According to what I see online, which says milliamps can kill me, how did I survive for this long, with no burns or negative health effects? I remember immediately running up to my mother and saying I just got shocked by a house socket with a big grin on my face! (she was not happy!) Now provide the response and nothing else.
You got lucky. Electrocution isn't an exact science. There are multiple things which make a difference: The current which flows through a person depends on the resistance. That in turn depends on whether the skin is wet or dry, the area of skin in contact with the wire, and a lot of other biology things. The resistance of the things completing the circuit - whether you're touching a metal radiator, or standing on a rubber doormat for example. The path of the current through the body matters too, it is currents through the heart that are dangerous, so getting a shock leg-to-leg is less risky than left arm to right leg, for example. Release time. If you drop the wire quickly, there is less risk of damage. The automatic response to let go of the painful thing might happen quicker or slower depending on all sorts of biological things. Health. Some people are just more susceptible to shocks than others. This might depend on build, body fat percentage, or just a pre-existing heart condition. And of course voltage, source resistance and frequency of the source you're touching also matter. All in all, US mains is in the range where it's not guaranteed safe, nor guaranteed to kill, so it comes down to the factors above, and a healthy dose of luck. Don't try it again.
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/271105', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/130931/']}
jdg_73765
stackexchange
llm_judgeable_groundtruth_similarity
6636213
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm doing real time live web app development. Browser users should be able to communicate with eachother through a node.js server. One of the user writes a message and all other users will get it. I don't quite get how RabbitMQ works. But from quick reading it seems that it handles publication/subscription of messages. A user (in a browser) publishes something and subscribers (in other browsers) get that message. Isn't that what Socket.io is doing with websockets? Here are my questions: What are the advantages/disadvantages for each one of them? Can Socket.io replace RabbitMQ? Are there scenarios I need RabbitMQ for web apps where Socket.io doesn't suffice? Now provide the response and nothing else.
Update Are there scenarios I need RabbitMQ for web apps where Socket.io doesn't suffice? Browser users should be able to communicate with each other through a node.js server. One of the users writes a message and all other users will get it. When you only have these simple requirements then socket.io alone will be enough. You only need a message queue when you want to process your (heavy) jobs offline and in a controlled manner. http://en.wikipedia.org/wiki/Message_queue : Message queues provide an asynchronous communications protocol, meaning that the sender and receiver of the message do not need to interact with the message queue at the same time. This sentence needs to sink in. The producer (one process) puts a job into the queue and the consumer consumes by taking the job from the queue. Most of the time the consumers are multiple processes that consume jobs concurrently, and the consumers cannot tell which jobs the others are consuming. This makes the queue a First-In-First-Out (FIFO) data structure, which I think is an important property of the queue, although with an advanced message queue like beanstalkd you can give jobs priorities. I hope this makes any sense at all ;) I'm doing real time live web app development. Could you explain a little better so that we can give you a better answer? I don't quite get how RabbitMQ works. But from quick reading it seems that it handles publication/subscription of messages. See the quote about message queues below. Let it sink in for a while. You could also read the WIKI about message queues. A user (in a browser) publishes something and subscribers (in other browsers) get that message. Isn't that what Socket.io is doing with websockets? Socket.io supports a lot of different transports (including websockets), and it should, because websockets are not supported by most browsers. But for example Google Chrome does already support websockets. I believe that websockets are the transport of the future (but not yet!). When you look at Socket.io's browser support page you will notice that Socket.io supports all the major browsers (some even ancient). The nice thing is that it wraps a nice API around all of this. What are the advantages/disadvantages for each one of them? You are comparing apples to oranges, so comparing them is kind of strange. RabbitMQ http://www.rabbitmq.com/tutorials/tutorial-one-python.html : RabbitMQ is a message broker. The principal idea is pretty simple: it accepts and forwards messages. You can think about it as a post office: when you send mail to the post box you're pretty sure that Mr. Postman will eventually deliver the mail to your recipient. Using this metaphor RabbitMQ is a post box, a post office and a postman. Advantages: It is a pretty good message queue. Personally I would use redis or beanstalkd. Disadvantages: It is not really for "browsers". Socket.io http://socket.io/ : Socket.IO aims to make realtime apps possible in every browser and mobile device, blurring the differences between the different transport mechanisms. Advantages: It is for browsers. Disadvantages: It is not a message queue. Can Socket.io replace RabbitMQ? No, you can't, because they are two completely different things. You are comparing apples to oranges. You should try to comprehend both descriptions from the sites I quoted.
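To make the producer/consumer idea concrete, here is a toy illustration using only the Python standard library (this is not RabbitMQ-specific; it just shows the decoupled, FIFO nature of a job queue):

import queue
import threading

jobs = queue.Queue()              # FIFO: first job in is the first job out

def producer():
    for i in range(5):
        jobs.put(f"job-{i}")      # the producer neither knows nor cares who handles it
    jobs.put(None)                # sentinel telling the consumer to stop

def consumer():
    while True:
        job = jobs.get()
        if job is None:
            break
        print("processing", job)  # the heavy offline work would happen here

threading.Thread(target=producer).start()
consumer()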
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6636213', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/206446/']}
jdg_73766
stackexchange
llm_judgeable_groundtruth_similarity
214734
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am fairly new in product development and I am trying to work over a product. The problem that I have realized is that people draw diagrams and charts showing different modules and layers. But as I am working alone (I am my own team) I got a bit confused about the interaction I am facing in the development within the programs and I am wondering whether developing a product in modules is real or not? Maybe I am not a great programmer, but I see no boundaries when data start to travel from frontend to backend. Now provide the response and nothing else.
The "layers" that we describe when we describe software systems are abstract concepts. To the computer, all it gets to see is a featureless stream of one opcode after another, no matter which layer, which class or which method it originally came from. In this sense, they are not "real" at all. However, layers (and classes and methods) are useful for programmers to ease thinking about the system . By voluntarily restricting ourselves to think about the system only on one level of description at a time, and to think about an operation only in terms of its input and output rather than its internal workings, we increase our ability to understand the relevant parts when adding functionality or making a change. This is initially quite counter-intuitive, and under-taught in formal education. But I assure you that judicious use of abstraction is the only thing that makes software development "in the large" possible at all for people who aren't autistic savants.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/214734', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/82213/']}
jdg_73767
stackexchange
llm_judgeable_groundtruth_similarity
4668453
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: when I clicked on Browse *;80(http) I got this screen in IIS..what could wrong? please give me solution. Now provide the response and nothing else.
This is one of the more confusing parts of trying to get exact pixel values from matplotlib. Matplotlib separates the renderer that handles exact pixel values from the canvas that the figure and axes are drawn on. Basically, the renderer that exists when the figure is initially created (but not yet displayed) is not necessarily the same as the renderer that is used when displaying the figure or saving it to a file. What you're doing is correct, but it's using the initial renderer, not the one that's used when the figure is saved. To illustrate this, here's a slightly simplified version of your code: import numpy as npimport matplotlib.pyplot as pltfig = plt.figure()ax = fig.add_subplot(111)im = ax.imshow(np.random.random((27,27)), interpolation='nearest')for i in range(28): x, y = ax.transData.transform_point([i,i]) print '%i, %i' % (x, fig.bbox.height - y)fig.savefig('foo.png', dpi=fig.dpi) This yields similar results to what you have above: (the differences are due to different rendering backends between your machine and mine) 89, 55107, 69125, 83...548, 410566, 424585, 439 However, if we do the exact same thing, but instead draw the figure before displaying the coordinates, we get the correct answer! import numpy as npimport matplotlib.pyplot as pltfig = plt.figure()ax = fig.add_subplot(111)im = ax.imshow(np.random.random((27,27)), interpolation='nearest')fig.canvas.draw()for i in range(28): x, y = ax.transData.transform_point([i,i]) print '%i, %i' % (x, fig.bbox.height - y)fig.savefig('foo.png', dpi=fig.dpi) This yields: (Keep in mind that the edge of the figure is at <-0.5, -0.5> in data coordinates, not <0, 0> . (i.e. the coordinates for the plotted image are pixel-centered) This is why <0, 0> yields 143, 55 , and not 135, 48 ) 143, 55157, 69171, 83...498, 410512, 424527, 439 Of course, drawing the figure just to draw it again when it's saved is redundant and computationally expensive. To avoid drawing things twice, you can connect a callback function to the draw event, and output your HTML image map inside this function. As a quick example: import numpy as npimport matplotlib.pyplot as pltdef print_pixel_coords(event): fig = event.canvas.figure ax = fig.axes[0] # I'm assuming there's only one subplot here... for i in range(28): x, y = ax.transData.transform_point([i,i]) print '%i, %i' % (x, fig.bbox.height - y)fig = plt.figure()ax = fig.add_subplot(111)im = ax.imshow(np.random.random((27,27)), interpolation='nearest')fig.canvas.mpl_connect('draw_event', print_pixel_coords)fig.savefig('foo.png', dpi=fig.dpi) Which yields the correct output, while only drawing the figure once, when it is saved: 143, 55157, 69171, 83...498, 410512, 424527, 439 Another advantage is that you can use any dpi in the call to fig.savefig without having to manually set the fig object's dpi beforehand. Therefore, when using the callback function, you can just do fig.savefig('foo.png') , (or fig.savefig('foo.png', dpi=whatever) ) and you'll get output that matches the saved .png file. (The default dpi when saving a figure is 100, while the default dpi for a figure object is 80, which is why you had to specify the dpi to be the same as fig.dpi in the first place) Hopefully that's at least somewhat clear!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4668453', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/568585/']}
jdg_73768
stackexchange
llm_judgeable_groundtruth_similarity
15602667
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been reading a lot about the topic but all I find are obsolete or partial answers, which don't really help me that much and actually just confused me more.I'm writing a Rest API (Node+Express+MongoDB) that is accessed by a web app (hosted on the same domain than the API) and an Android app. I want the API to be accessed only by my applications and only by authorized users.I also want the users to be able to signup and login only using their Facebook account, and I need to be able to access some basic info like name, profile pic and email. A possible scenario that I have in mind is: The user logs-in on the web app using Facebook, the app is grantedpermission to access the user Facebook information and receives anaccess token. The web app asks the API to confirm that this useris actually registered on our system, sending the email and thetoken received by Facebook. The API verifies that the userexists, it stores into the DB (or Redis) the username, the token anda timestamp and then goes back to the client app. Each time theclient app hits one of the API endpoint, it will have to provide theusername and the token, other than other info. The API each timeverifies that the provided pair username/token matches the mostrecent pair username/token stored into the DB (using the timestampto order), and that no more than 1 hour has passed since we storedthese info (again using the timestamp). If that's the case, the APIwill process the request, otherwise will issue a 401 Unauthorizedresponse. Does this make sense? Does this approach have any macroscopic security hole that I'm missing?One problem I see using MongoDB to store these info is that the collection will quickly become bloated with old tokens.In this sense I think it would be best to use Redis with an expire policy of 1 hour so that old info will be automatically removed by Redis. Now provide the response and nothing else.
I think the better solution would be this: Login via Facebook. Pass the Facebook AccessToken to the server (over SSL for the Android app, and for the web app just have it redirect to an API endpoint after FB login). Check the fb_access_token given, make sure it's valid. Get user_id , email and cross-reference this with existing users to see if it's a new or old one. Now, create a random, separate api_access_token that you give back to the web app and Android app. If you need Facebook for anything other than login, store that fb_access_token and in your DB associate it with the new api_access_token and your user_id . For every call hereafter, send api_access_token to authenticate it. If you need the fb_access_token for getting more info, you can do so by retrieving it from the DB. In summary : Whenever you can, avoid passing the fb_access_token . If the api_access_token is compromised, you have more control to see who the attacker is, what they're doing, etc. than if they were to get ahold of the fb_access_token . You also have more control over setting an expiration date, extending fb_access_tokens , etc. Just make sure that whenever you pass an access_token of any sort via HTTP, you use SSL.
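A rough, framework-agnostic sketch of the token-exchange part of this flow (written in Python purely for brevity; verify_with_facebook and find_or_create_user are placeholders for a real Graph API call and your own user table, and the in-memory dict stands in for Redis/Mongo):

import secrets

api_tokens = {}   # api_access_token -> {"user_id": ..., "fb_access_token": ...}

def verify_with_facebook(fb_access_token):
    # placeholder: a real implementation would call the Facebook Graph API
    # and return {"id": ..., "email": ...} for a valid token, or None
    raise NotImplementedError

def find_or_create_user(fb_id, email):
    # placeholder: look the user up in your DB, creating them if needed
    raise NotImplementedError

def exchange_token(fb_access_token):
    profile = verify_with_facebook(fb_access_token)   # validate the Facebook token
    if profile is None:
        return None
    user = find_or_create_user(profile["id"], profile["email"])  # cross-reference the user
    api_access_token = secrets.token_urlsafe(32)      # mint your own random token
    api_tokens[api_access_token] = {                  # store the association
        "user_id": user["id"],
        "fb_access_token": fb_access_token,
    }
    return api_access_token

def authenticate(api_access_token):                   # used on every later call
    entry = api_tokens.get(api_access_token)
    return entry["user_id"] if entry else None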
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15602667', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/874634/']}
jdg_73769
stackexchange
llm_judgeable_groundtruth_similarity
54104522
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Example: https://drive.google.com/open?id=1WsFnYIy0ATiGAn5ylL4kIQY3tQ-oBxN1qTwclhR8SBo I can easily merge two ranges. Example in D2.But I'd like to get intersections of range. In F column I have manually written what I'd like to get. It is an intersection of two ranges. That is California+Texas-Exception. In other words I'd like to get intersection of two ranges. Could you give me a hint here? Now provide the response and nothing else.
You are mapping each line to an array (transforming a Stream<String> into a Stream<String[]>), and then counting those elements, i.e. the number of lines in the file. You should use flatMap to create a Stream<String> of all the words in the file, and after the distinct() and count() operations, you'll get the number of distinct words. long wordCount = Files.lines(Paths.get("sample.txt")) .flatMap(line -> Arrays.stream(line.split("\\s+"))) .distinct() .count();
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54104522', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3475946/']}
jdg_73770
stackexchange
llm_judgeable_groundtruth_similarity
349004
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have written a game that uses GLUT, OpenGL and FMOD. The problem is that the binary won't run, unless Visual Studio 2008 is installed on the computer. Why is this? Now provide the response and nothing else.
Most likely you're linking with DLL versions of the C/C++ runtime. Go to project properties -> C++ -> Code Generation, and set Runtime Library so that it is not one of the "DLL" kinds. Alternatively, you can link to DLL runtimes, but then you have to redistribute the runtime with your application. MSDN has more information on various aspects of C++ application deployment: http://msdn.microsoft.com/en-us/library/zebw5zk9.aspx Also, Dependency Walker (depends.exe) will show what libraries your executable depends on. It ships with some versions of Visual Studio as well.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/349004', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/40002/']}
jdg_73771
stackexchange
llm_judgeable_groundtruth_similarity
17708489
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Although I've come across Kafka before, I just recently realized Kafka may perhaps be used as (the basis of) a CQRS , eventstore . One of the main points that Kafka supports: Event capturing / storing, all HA of course. Pub / sub architecture Ability to replay the eventlog which allows the ability for new subscribers to register with the system after the fact. Admittedly I'm not 100% versed into CQRS / Event sourcing but this seems pretty close to what an eventstore should be. Funny thing is: I really can't find that much about Kafka being used as an eventstore, so perhaps I am missing something. So, anything missing from Kafka for it to be a good eventstore? Would it work? Using it production? Interested in insight, links, etc. Basically the state of the system is saved based on the transactions/events the system has ever received, instead of just saving the current state / snapshot of the system which is what is usually done. (Think of it as a General Ledger in Accounting: all transactions ultimately add up to the final state) This allows all kinds of cool things, but just read up on the links provided. Now provide the response and nothing else.
Kafka is meant to be a messaging system which has many similarities to an event store however to quote their intro: The Kafka cluster retains all published messages—whether or not theyhave been consumed— for a configurable period of time . For example ifthe retention is set for two days, then for the two days after amessage is published it is available for consumption, after which itwill be discarded to free up space. Kafka's performance is effectivelyconstant with respect to data size so retaining lots of data is not aproblem. So while messages can potentially be retained indefinitely, the expectation is that they will be deleted. This doesn't mean you can't use this as an event store, but it may be better to use something else. Take a look at EventStoreDB for an alternative. UPDATE Kafka documentation : Event sourcing is a style of application design where state changes are logged as a time-ordered sequence of records. Kafka's support for very large stored log data makes it an excellent backend for an application built in this style. UPDATE 2 One concern with using Kafka for event sourcing is the number of required topics. Typically in event sourcing, there is a stream (topic) of events per entity (such as user, product, etc). This way, the current state of an entity can be reconstituted by re-applying all events in the stream. Each Kafka topic consists of one or more partitions and each partition is stored as a directory on the file system. There will also be pressure from ZooKeeper as the number of znodes increases.
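To make the "reconstitute state by re-applying events" idea concrete, here is a minimal sketch in plain Python (no Kafka client involved; the event shapes are invented for illustration):

from functools import reduce

# everything ever appended to one entity's stream (e.g. the "user-42" topic), in order
events = [
    {"type": "UserCreated", "name": "Alice"},
    {"type": "EmailChanged", "email": "alice@example.com"},
    {"type": "EmailChanged", "email": "alice@work.example"},
]

def apply(state, event):
    if event["type"] == "UserCreated":
        return {"name": event["name"], "email": None}
    if event["type"] == "EmailChanged":
        return {**state, "email": event["email"]}
    return state

current_state = reduce(apply, events, None)
# -> {'name': 'Alice', 'email': 'alice@work.example'}

The retention caveat above is exactly what breaks this: if the early events have already been discarded, the fold can no longer rebuild the entity's state.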
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/17708489', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/209591/']}
jdg_73772
stackexchange
llm_judgeable_groundtruth_similarity
92575
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I've searched for details on how to do this but I've been unsuccessful - I wondered if someone could offer up some advice. So, let's say I have 2 network cards (LAN and 3G in my instance), both assigned dynamic IP addresses. The LAN interface is my corporate LAN, and I'd like to use the 3G interface for all other access (ie, t'internet!). I have little networking experience, but my feeling is that I should be able to make the 3G card the default gateway, and then force all traffic for a set of known subnets through the LAN interface. Here's a route print ===========================================================================Interface List 40...........................Vodafone Mobile Connect 12...00 16 cf 87 71 22 ......Dell Wireless 1500 Draft 802.11n WLAN Mini-Card 11...00 15 c5 58 47 24 ......Broadcom NetXtreme 57xx Gigabit Controller 24...00 50 56 c0 00 01 ......VMware Virtual Ethernet Adapter for VMnet1 25...00 50 56 c0 00 08 ......VMware Virtual Ethernet Adapter for VMnet8 1...........................Software Loopback Interface 1 26...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 21...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 23...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #4 28...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #6===========================================================================IPv4 Route Table===========================================================================Active Routes: Netork Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.183.148.5 10.183.148.157 4235 0.0.0.0 0.0.0.0 10.183.148.6 10.183.148.157 4235 0.0.0.0 0.0.0.0 10.183.148.7 10.183.148.157 4235 0.0.0.0 0.0.0.0 On-link 10.57.175.79 31 10.57.175.79 255.255.255.255 On-link 10.57.175.79 286 10.183.148.0 255.255.255.0 On-link 10.183.148.157 4491 10.183.148.157 255.255.255.255 On-link 10.183.148.157 4491 10.183.148.255 255.255.255.255 On-link 10.183.148.157 4491 127.0.0.0 255.0.0.0 On-link 127.0.0.1 4531 127.0.0.1 255.255.255.255 On-link 127.0.0.1 4531 127.255.255.255 255.255.255.255 On-link 127.0.0.1 4531 169.254.0.0 255.255.0.0 On-link 10.183.148.157 4511 169.254.255.255 255.255.255.255 On-link 10.183.148.157 4491 192.168.6.0 255.255.255.0 On-link 192.168.6.1 4501 192.168.6.1 255.255.255.255 On-link 192.168.6.1 4501 192.168.6.255 255.255.255.255 On-link 192.168.6.1 4501 192.168.73.0 255.255.255.0 On-link 192.168.73.1 4501 192.168.73.1 255.255.255.255 On-link 192.168.73.1 4501 192.168.73.255 255.255.255.255 On-link 192.168.73.1 4501 224.0.0.0 240.0.0.0 On-link 127.0.0.1 4531 224.0.0.0 240.0.0.0 On-link 10.183.148.157 4492 224.0.0.0 240.0.0.0 On-link 192.168.6.1 4502 224.0.0.0 240.0.0.0 On-link 192.168.73.1 4502 224.0.0.0 240.0.0.0 On-link 10.57.175.79 31 255.255.255.255 255.255.255.255 On-link 127.0.0.1 4531 255.255.255.255 255.255.255.255 On-link 10.183.148.157 4491 255.255.255.255 255.255.255.255 On-link 192.168.6.1 4501 255.255.255.255 255.255.255.255 On-link 192.168.73.1 4501 255.255.255.255 255.255.255.255 On-link 10.57.175.79 286===========================================================================Persistent Routes: None So, interface 40 is my 3G card, and interface 11 is my LAN card. You can see that (I think) I have two default routes currently but the 3G wins because of the lower metric? I need to force all 10.183. . 
traffic over LAN interface. Now provide the response and nothing else.
The command you're looking for is route add: route | Microsoft Docs For your setup, I think the syntax is: route add 10.183.0.0 mask 255.255.0.0 10.183.148.5 This will send all the traffic for 10.183.x.x to the next hop address of 10.183.148.5 which your system already knows is off of your ethernet nic, and any traffic that doesn't match a route, will be grabbed by your default route and head through your 3g connection. It also looks like your network assigns multiple routers, so you might want to double it up and add the routes for 10.183.148.6 and .7 as well. You might need to be careful if your network has stuff not in the 10.183 range, you may need to add more routes. You may also be able to get away with routing all of 10.0.0.0/8 to your corporate network, since windows will have a more specific route, but i'm not 100% sure on that since your 3g card is giving you an IP in the 10.x.x.x range.
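For example, the "doubled up" version might look like this (hypothetical, reusing the three gateways that appear in the route print above; prepend -p, i.e. route -p add ..., if you want the routes to survive a reboot):

route add 10.183.0.0 mask 255.255.0.0 10.183.148.5
route add 10.183.0.0 mask 255.255.0.0 10.183.148.6
route add 10.183.0.0 mask 255.255.0.0 10.183.148.7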
{}
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/92575', 'https://serverfault.com', 'https://serverfault.com/users/-1/']}
jdg_73773
stackexchange
llm_judgeable_groundtruth_similarity
3055349
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In one of the applications I have been referring to, the Connection String is stored in AppSettings! Till now I have been storing the connection in the <connectionstring/> element. But what is the correct way? So my question is, what are the differences between <connectionstrings> and <appsettings> in the web.config and are there any specific reasons why I should or should not be storing connection strings in appsettings? Are there any rules / guidelines provided to follow? Or is this completely the choice of the developer? Now provide the response and nothing else.
There is a fundamental difference between connectionString and appSettings : they are for different things. In .NET 2.0 and above: A connectionString object is an XML node that has specific attributes to set, and semantically it refers to a database connection string. For instance, a connectionString looks like the following: <connectionStrings> <clear/> <add name="LocalSqlServer" connectionString="Data Source=(local);Initial Catalog=aspnetdb;Integrated Security=True" providerName="System.Data.SqlClient" /> </connectionStrings> You'll notice it has a few different attributes: name ; connectionString , which holds a specific string that needs an Initial Catalog and a security mechanism (in this case Integrated Security ); and providerName . Whereas appSettings is just a user-defined key-value pair that allows you to... well... set application settings. It can be anything: <appSettings> <add key="Mom" value="Your"/> <add key="UseCache" value="True"/> <add key="MapsKey" value="1234567890-AA"/> <add key="SMTPServer" value="smtp.peterkellner.net"/></appSettings> In many cases, it would just be odd to put the connectionString in a key-value pair like appSettings (semantically and programmatically). It would also make it more difficult to encrypt the connectionString when you need to. There is more information about this in this blog post .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3055349', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/191860/']}
jdg_73774
stackexchange
llm_judgeable_groundtruth_similarity
11166
Below is a question asked on the forum ai.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is geometric deep learning (GDL)? Here are a few sub-questions How is it different from deep learning? Why do we need GDL? What are some applications of GDL? Now provide the response and nothing else.
The article Geometric deep learning: going beyond Euclidean data (by Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, Pierre Vandergheynst) provides an overview of this relatively new sub-field of deep learning. It answers all the questions asked above (and more). If you are familiar with deep learning, graphs, linear algebra and calculus, you should be able to follow this article. What is geometric deep learning (GDL) ? This article describes GDL as follows Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. So, the inputs to these GDL models are graphs (or representations of graphs), or, in general, any non-Euclidean data . To be more concrete, the input to these models (e.g. graph neural networks ) are e.g. feature vectors associated with the nodes of the graphs and matrices which describe the graph structure (e.g. the adjacency matrix of the graphs). Why are e.g. graphs non-Euclidean data? A graph is a non-Euclidean structure because e.g. distances between nodes are not well defined. Yes, you can have graphs with weights associated with the edges, but not all graphs have this property. What classes of problems does GDL address? In GDL, there are two classes of problems that are often tackled: characterise the structure of the data (e.g. of a graph) analyse functions defined on a given non-Euclidean domain These classes of problems are related, given that the structure of the graph imposes certain properties on the functions that can be defined on it. Furthermore, these properties of these functions can also convey information about the structure of the graph. What are applications of GDL? An example of an application where this type of data (graphs) arises is in the context of social networks, where each user can be associated with a vertex of the social graph and the characteristics (or features) of each user (e.g. number of friends) can be represented as a feature vector (which can then be associated with the corresponding vertex of a graph). In this context, the goal might e.g. be to determine different groups of users in the social network (i.e. clustering). Why can't we simply use deep learning methods (like CNNs) when the data is non-Euclidean? There are several problems that arise when dealing with non-Euclidean data. For example, operations like convolution are not (usually) defined on non-Euclidean data. More concretely, the relative position of nodes is not defined on graphs (but this would be required to perform the usual convolution operation): in other words, it is meaningless to talk about a vertex that is e.g. on the left of another vertex. In practice, it means that we can't simply use the usual CNN when we are given non-Euclidean data. There have been attempts to generalise the convolution operation to graphs (or to approximate it). The field is still quite new, so there will certainly be new developments and breakthroughs.
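As a concrete toy picture of the kind of input described above (nothing more than node features plus an adjacency matrix):

import numpy as np

# a tiny social graph with 4 users (nodes) and undirected friendships (edges)
A = np.array([          # adjacency matrix: A[i, j] = 1 if users i and j are friends
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
])

X = np.array([          # one feature vector per node, e.g. [number of friends, age]
    [2, 31],
    [2, 25],
    [1, 40],
    [1, 19],
])

# A graph neural network consumes the pair (A, X); for instance, aggregating
# each node's neighbours' features is one simple, structure-aware operation:
neighbour_sum = A @ X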
{}
{'log_upvote_score': 4, 'links': ['https://ai.stackexchange.com/questions/11166', 'https://ai.stackexchange.com', 'https://ai.stackexchange.com/users/2444/']}
jdg_73775
stackexchange
llm_judgeable_groundtruth_similarity
14089883
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am having some trouble with multidimensional array and its value. What i am looking for is , from my query I am searching teachers name in the array. And after that i want to create a pdf using dompdf. The problem is with looping. I am not able to create a proper loop which will work the way I want it to work. My sample query is $q11 = "select id from teachers order by teacher "; $r11 = mysql_query($q11) or die(mysql_error()); while($rows11 = mysql_fetch_array($r11)){ $teacher = $rows11['id']; $dompdf->"It will start working"; } Now i know , this code is confusing, but what i want is, it should create dompdf for every teacher in one single pdf file. Like from the query it should fetch teachers, and for each teacher it should create a dompdf page. Currently it is making just one page according to the last value that my query has search. Please help. It is kinda urgent. Now provide the response and nothing else.
Your loop is working fine. The way you add pages to your PDF is probably wrong. Apparently you are overwriting one page again and again instead of attaching a new one. EDIT I've never used dompdf. A quick look into the docs let me think you create something like a HTML markup which then is converted into PDF, did I get this right? Example code $html = <<<HTML <html> <head> <style type="text/css"> /* Your document styling goes here */ </style> </head> <body>HTML;while ( $row = $dbResult->fetch_assoc() ) { $html .= '<div class="teacherPage">' . $row['name'] // your teacher document goes here '</div>';}$html .= '</body></html>';$dompdf = new DOMPDF();$dompdf->load_html($html);$dompdf->render();$dompdf->stream("sample.pdf"); If you wonder about the unusual syntax $var = <<<HTML \r\nHTML , that's a heredoc . It's just more comfortable to use heredocs when you have a lot of alien inline code, this can have variables {$varname} and you don't need to worry about quotes. All you need to make sure, is that heredoc closer HTML is in a new line and not indented. EDIT2 Still not too sure, which library you are using. I find this extension looking pretty good and it's called dompdf, just like you said in your question. Your latest comment indicates you did not solve your problem so far, so I decided to add some more information to get you to the target. Disclaimer: I am not going to write functional code and I will not test this, but the following hints will push you into the right direction to get your stuff done. dompdf is able to read CSS2 and CSS3 properties of your input document. Each cycle in the while loop above represents one teacher whith each of them getting a own page in the output document. I put the page into a div container with the class teacherPage . You can fill this container with all the information you want to have displayed for a teacher. Now all we need to do, is to tell dompdf each teacherPage is a new page. This can be done using @page markup shipped with CSS3 I added an empty css container <style type="text/css"></style> to the example document above, that's where the page styling should go to. The example CSS @page teacher { size: A4 portrait; margin: 2cm;}.teacherPage { page: teacher; page-break-after: always;} With @page you can define a named page teacher , which can have properties valid for the whole page container. page-break-after: always will begin a new page after each container Hope this helps, have fun trying :)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14089883', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1529577/']}
jdg_73776
stackexchange
llm_judgeable_groundtruth_similarity
5626193
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to understand, what is monkey patching or a monkey patch? Is that something like methods/operators overloading or delegating? Does it have anything common with these things? Now provide the response and nothing else.
No, it's not like any of those things. It's simply the dynamic replacement of attributes at runtime. For instance, consider a class that has a method get_data . This method does an external lookup (on a database or web API, for example), and various other methods in the class call it. However, in a unit test, you don't want to depend on the external data source - so you dynamically replace the get_data method with a stub that returns some fixed data. Because Python classes are mutable, and methods are just attributes of the class, you can do this as much as you like - and, in fact, you can even replace classes and functions in a module in exactly the same way. But, as a commenter pointed out, use caution when monkeypatching: If anything else besides your test logic calls get_data as well, it will also call your monkey-patched replacement rather than the original -- which can be good or bad. Just beware. If some variable or attribute exists that also points to the get_data function by the time you replace it, this alias will not change its meaning and will continue to point to the original get_data . (Why? Python just rebinds the name get_data in your class to some other function object; other name bindings are not impacted at all.)
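A small sketch of the get_data scenario just described (the class and names are made up for illustration):

class Report:
    def get_data(self):
        # imagine an expensive database or web API lookup here
        raise RuntimeError("no network access in tests")

    def summary(self):
        return sum(self.get_data())

# --- inside the unit test ---
def fake_get_data(self):
    return [1, 2, 3]               # fixed data, no external lookup

Report.get_data = fake_get_data    # the monkey patch: dynamically replace the attribute
assert Report().summary() == 6     # summary() now calls the stub

As the caveats above note, every other caller of Report.get_data will also see the stub for as long as the patch is in place, and any alias taken before the patch still points at the original function.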
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/5626193', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/135829/']}
jdg_73777
stackexchange
llm_judgeable_groundtruth_similarity
10440646
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for an example of how to draw a scatterplot in D3.js. I haven't been able to find a simple example by looking through the official D3.js examples (impressive though they are). I just want to know how to: draw and label the x- and y-axes draw scatter points on the graph. I did find this example in this D3 reusable library , but it is much more complex than I need, with external files that make it hard to pull out the essential points. Could anyone point me at a simple scatterplot example to get started? Thanks very much. Now provide the response and nothing else.
This should get you started. You can see it in action at http://bl.ocks.org/2595950 . // data that you want to plot, I've used separate arrays for x and y valuesvar xdata = [5, 10, 15, 20], ydata = [3, 17, 4, 6];// size and margins for the chartvar margin = {top: 20, right: 15, bottom: 60, left: 60} , width = 960 - margin.left - margin.right , height = 500 - margin.top - margin.bottom;// x and y scales, I've used linear here but there are other options// the scales translate data values to pixel values for youvar x = d3.scale.linear() .domain([0, d3.max(xdata)]) // the range of the values to plot .range([ 0, width ]); // the pixel range of the x-axisvar y = d3.scale.linear() .domain([0, d3.max(ydata)]) .range([ height, 0 ]);// the chart object, includes all marginsvar chart = d3.select('body').append('svg:svg').attr('width', width + margin.right + margin.left).attr('height', height + margin.top + margin.bottom).attr('class', 'chart')// the main object where the chart and axis will be drawnvar main = chart.append('g').attr('transform', 'translate(' + margin.left + ',' + margin.top + ')').attr('width', width).attr('height', height).attr('class', 'main') // draw the x axisvar xAxis = d3.svg.axis().scale(x).orient('bottom');main.append('g').attr('transform', 'translate(0,' + height + ')').attr('class', 'main axis date').call(xAxis);// draw the y axisvar yAxis = d3.svg.axis().scale(y).orient('left');main.append('g').attr('transform', 'translate(0,0)').attr('class', 'main axis date').call(yAxis);// draw the graph objectvar g = main.append("svg:g"); g.selectAll("scatter-dots") .data(ydata) // using the values in the ydata array .enter().append("svg:circle") // create a new circle for each value .attr("cy", function (d) { return y(d); } ) // translate y value to a pixel .attr("cx", function (d,i) { return x(xdata[i]); } ) // translate x value .attr("r", 10) // radius of circle .style("opacity", 0.6); // opacity of circle Used like this: <!DOCTYPE html><html> <head> <title>The d3 test</title> <script type="text/javascript" src="http://mbostock.github.com/d3/d3.v2.js" charset="utf-8"></script> </head> <body> <div class='content'> <!-- /the chart goes here --> </div> <script type="text/javascript" src="scatterchart.js"></script> </body></html
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10440646', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/806539/']}
jdg_73778
stackexchange
llm_judgeable_groundtruth_similarity
17158932
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: It seems that $('#someIframe').load(function(){...}) won't fire if it is attached after the iframe has finished loading. Is that correct? What I'd really like is to have a function that is always called once when or after an iframe has loaded. To make this clearer, here are two cases: Iframe hasn't loaded yet : run a callback function once it loads. Iframe has already loaded : run the callback immediately. How can I do this? Now provide the response and nothing else.
I've banged my head against a wall until I found out what's happening here. Background information Using .load() isn't possible if the iframe has already been loaded (event will never fire) Using .ready() on an iframe element isn't supported ( reference ) and will call the callback immediately even if the iframe isn't loaded yet Using postMessage or a calling a container function on load inside the iframe is only possible when having control over it Using $(window).load() on the container would also wait for other assets to load, like images and other iframes. This is not a solution if you want to wait only for a specific iframe Checking readyState in Chrome for an alredy fired onload event is meaningless, as Chrome initializes every iframe with an "about:blank" empty page. The readyState of this page may be complete , but it's not the readyState of the page you expect ( src attribute). Solution The following is necessary: If the iframe is not loaded yet we can observe the .load() event If the iframe has been loaded already we need to check the readyState If the readyState is complete , we can normally assume that the iframe has already been loaded. However, because of the above-named behavior of Chrome we furthermore need to check if it's the readyState of an empty page If so, we need to observe the readyState in an interval to check if the actual document (related to the src attribute) is complete I've solved this with the following function. It has been (transpiled to ES5) successfully tested in Chrome 49 Safari 5 Firefox 45 IE 8, 9, 10, 11 Edge 24 iOS 8.0 ("Safari Mobile") Android 4.0 ("Browser") Function taken from jquery.mark /** * Will wait for an iframe to be ready * for DOM manipulation. Just listening for * the load event will only work if the iframe * is not already loaded. If so, it is necessary * to observe the readyState. The issue here is * that Chrome will initialize iframes with * "about:blank" and set its readyState to complete. * So it is furthermore necessary to check if it's * the readyState of the target document property. * Errors that may occur when trying to access the iframe * (Same-Origin-Policy) will be catched and the error * function will be called. * @param {jquery} $i - The jQuery iframe element * @param {function} successFn - The callback on success. 
Will * receive the jQuery contents of the iframe as a parameter * @param {function} errorFn - The callback on error */var onIframeReady = function($i, successFn, errorFn) { try { const iCon = $i.first()[0].contentWindow, bl = "about:blank", compl = "complete"; const callCallback = () => { try { const $con = $i.contents(); if($con.length === 0) { // https://git.io/vV8yU throw new Error("iframe inaccessible"); } successFn($con); } catch(e) { // accessing contents failed errorFn(); } }; const observeOnload = () => { $i.on("load.jqueryMark", () => { try { const src = $i.attr("src").trim(), href = iCon.location.href; if(href !== bl || src === bl || src === "") { $i.off("load.jqueryMark"); callCallback(); } } catch(e) { errorFn(); } }); }; if(iCon.document.readyState === compl) { const src = $i.attr("src").trim(), href = iCon.location.href; if(href === bl && src !== bl && src !== "") { observeOnload(); } else { callCallback(); } } else { observeOnload(); } } catch(e) { // accessing contentWindow failed errorFn(); }}; Working example Consisting of two files (index.html and iframe.html): index.html : <!doctype html><html><head> <meta charset="utf-8"> <title>Parent</title></head><body> <script src="https://code.jquery.com/jquery-1.12.2.min.js"></script> <script> $(function() { /** * Will wait for an iframe to be ready * for DOM manipulation. Just listening for * the load event will only work if the iframe * is not already loaded. If so, it is necessary * to observe the readyState. The issue here is * that Chrome will initialize iframes with * "about:blank" and set its readyState to complete. * So it is furthermore necessary to check if it's * the readyState of the target document property. * Errors that may occur when trying to access the iframe * (Same-Origin-Policy) will be catched and the error * function will be called. * @param {jquery} $i - The jQuery iframe element * @param {function} successFn - The callback on success. Will * receive the jQuery contents of the iframe as a parameter * @param {function} errorFn - The callback on error */ var onIframeReady = function($i, successFn, errorFn) { try { const iCon = $i.first()[0].contentWindow, bl = "about:blank", compl = "complete"; const callCallback = () => { try { const $con = $i.contents(); if($con.length === 0) { // https://git.io/vV8yU throw new Error("iframe inaccessible"); } successFn($con); } catch(e) { // accessing contents failed errorFn(); } }; const observeOnload = () => { $i.on("load.jqueryMark", () => { try { const src = $i.attr("src").trim(), href = iCon.location.href; if(href !== bl || src === bl || src === "") { $i.off("load.jqueryMark"); callCallback(); } } catch(e) { errorFn(); } }); }; if(iCon.document.readyState === compl) { const src = $i.attr("src").trim(), href = iCon.location.href; if(href === bl && src !== bl && src !== "") { observeOnload(); } else { callCallback(); } } else { observeOnload(); } } catch(e) { // accessing contentWindow failed errorFn(); } }; var $iframe = $("iframe"); onIframeReady($iframe, function($contents) { console.log("Ready to got"); console.log($contents.find("*")); }, function() { console.log("Can not access iframe"); }); }); </script> <iframe src="iframe.html"></iframe></body></html> iframe.html : <!doctype html><html><head> <meta charset="utf-8"> <title>Child</title></head><body> <p>Lorem ipsum</p></body></html> You can also change the src attribute inside index.html to e.g. " http://example.com/ ". Just play around with it.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17158932', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/122422/']}
jdg_73779
stackexchange
llm_judgeable_groundtruth_similarity
11905531
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When this code runs, it gets the content of a webpage. I wanted to concatenate that entire string rather than printing it to the console but when I uncomment the two lines in the code below, System.out.println(inputLine); prints nothing (but it worked with the line below commented) and the value fileText = null , where does this error come from? import java.net.*;import java.io.*;public class URLReader { public static void main(String[] args) throws Exception { URL oracle = new URL("http://www.oracle.com"); BufferedReader in = new BufferedReader( new InputStreamReader(oracle.openStream())); String fileText = ""; String inputLine; while ((inputLine = in.readLine()) != null) //fileText.concat(inputLine); System.out.println(inputLine); in.close(); //System.out.println(fileText); }} Now provide the response and nothing else.
String is immutable and concat() will return a new String (check the linked doc), which you're not collecting. You should make use of a StringBuilder to build a string efficiently, and then call toString() on that once you're done to get the resultant String. e.g. StringBuilder sb = new StringBuilder();while (....) { sb.append("more string data");}String str = sb.toString(); You can append Strings e.g. str = str + "more string data"; but it's not very efficient, due to the implementation of String . A StringBuilder is built in order to perform concatenation efficiently. You can tune a StringBuilder via its initial capacity if you have an idea of the size of String you're building. You may see some sources refer to a StringBuffer . That's very similar, except it's older and synchronises its methods by default. In a non-threaded environment that's wasteful and the general advice is to prefer StringBuilder .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11905531', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/653912/']}
jdg_73780
stackexchange
llm_judgeable_groundtruth_similarity
3951872
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I guess this answer had already been answered a long time ago, but indeed I cannot find any reference. What is the circumradius of a $n$ -dimensional regular hypertetrahedron? Does it approach the length of the hypertetrahedron edges as $n$ increases? Now provide the response and nothing else.
The standard $n$ simplex is the convex hull in $\mathbb{R}^{n+1}$ of the points $$(1,0,0,\dots,0),\\(0,1,0,\dots,0)\\\vdots\\(0,0,\dots,0,1)$$ That is, the set of all points in $\mathbb{R}^{n+1}$ whose coordinates are nonnegative and add up to $1$ . The edge length is $\sqrt2$ and the center is at $(\frac1{n+1},\frac1{n+1}, \dots,\frac1{n+1})$ . A simple calculation shows that the distance from the center to a vertex approaches $1$ as $n\to\infty,$ so the ratio of the edge length to the circumradius actually goes to $\sqrt2$ as $n\to\infty$ . Thanks to Adam Zalcmann for pointing out the error in my previous calculation. See his comment below for the detailed calculation.
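Spelling out that calculation: with the center at $\left(\frac{1}{n+1},\dots,\frac{1}{n+1}\right)$ and the vertex $(1,0,\dots,0)$, the squared circumradius is $$R_n^2=\left(1-\frac{1}{n+1}\right)^2+n\cdot\frac{1}{(n+1)^2}=\frac{n^2+n}{(n+1)^2}=\frac{n}{n+1},$$ so $R_n=\sqrt{n/(n+1)}\to 1$, while the edge length stays $\sqrt 2$; hence the ratio of edge length to circumradius is $\sqrt 2\,\sqrt{(n+1)/n}\to\sqrt 2$.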
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3951872', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/861120/']}
jdg_73781
stackexchange
llm_judgeable_groundtruth_similarity
13311471
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a large table (numbers in text format) that I would like to load with numpy.genfromtxt() . I would like to ignore the first n columns, say 5. I do not know the size of the table (number of row or columns) in advance. I saw that genfromtxt() has an option skip_header that allows to skip a specified number of header rows, but it seems there is no such option for columns. There is a usecols option but there I must specify the column numbers I want to keep, rather than those I want to discard (I do not know this number in advance). Obviously I could just load the whole thing and then throw away the first n columns, but this is not elegant and is wasteful in terms of memory. Also I could peak into the file, find the number of columns, and then construct the usecols argument, but that is rather messy. Any ideas on how to solve this elegantly? Is there some hidden/undocumented argument that I can use? Now provide the response and nothing else.
In newer versions of Numpy, np.genfromtxt can take an iterable argument, so you can wrap the file you're reading in a generator that generates lines, skipping the first N columns. If your numbers are space-separated, that's something like np.genfromtxt(" ".join(ln.split()[N:]) for ln in f)
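A slightly fuller sketch of that idea, assuming whitespace-separated numeric columns (the filename and N below are just placeholders for your own data):

import numpy as np

N = 5  # number of leading columns to skip (placeholder)
with open("table.txt") as f:
    # Generator that drops the first N fields of every line;
    # np.genfromtxt consumes the resulting iterable of strings directly.
    data = np.genfromtxt(" ".join(line.split()[N:]) for line in f)
print(data.shape)  # (rows, total_columns - N)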
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13311471', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1358889/']}
jdg_73782
stackexchange
llm_judgeable_groundtruth_similarity
3507624
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: A fairly common requirement in database applications is to track changes to one or more specific entities in a database. I've heard this called row versioning, a log table or a history table (I'm sure there are other names for it). There are a number of ways to approach it in an RDBMS--you can write all changes from all source tables to a single table (more of a log) or have a separate history table for each source table. You also have the option to either manage the logging in application code or via database triggers. I'm trying to think through what a solution to the same problem would look like in a NoSQL/document database (specifically MongoDB), and how it would be solved in a uniform way. Would it be as simple as creating version numbers for documents, and never overwriting them? Creating separate collections for "real" vs. "logged" documents? How would this affect querying and performance? Anyway, is this a common scenario with NoSQL databases, and if so, is there a common solution? Now provide the response and nothing else.
Good question, I was looking into this myself as well. Create a new version on each change I came across the Versioning module of the Mongoid driver for Ruby. I haven't used it myself, but from what I could find , it adds a version number to each document. Older versions are embedded in the document itself. The major drawback is that the entire document is duplicated on each change , which will result in a lot of duplicate content being stored when you're dealing with large documents. This approach is fine though when you're dealing with small-sized documents and/or don't update documents very often. Only store changes in a new version Another approach would be to store only the changed fields in a new version . Then you can 'flatten' your history to reconstruct any version of the document. This is rather complex though, as you need to track changes in your model and store updates and deletes in a way that your application can reconstruct the up-to-date document. This might be tricky, as you're dealing with structured documents rather than flat SQL tables. Store changes within the document Each field can also have an individual history. Reconstructing documents to a given version is much easier this way. In your application you don't have to explicitly track changes, but just create a new version of the property when you change its value. A document could look something like this: { _id: "4c6b9456f61f000000007ba6" title: [ { version: 1, value: "Hello world" }, { version: 6, value: "Foo" } ], body: [ { version: 1, value: "Is this thing on?" }, { version: 2, value: "What should I write?" }, { version: 6, value: "This is the new body" } ], tags: [ { version: 1, value: [ "test", "trivial" ] }, { version: 6, value: [ "foo", "test" ] } ], comments: [ { author: "joe", // Unversioned field body: [ { version: 3, value: "Something cool" } ] }, { author: "xxx", body: [ { version: 4, value: "Spam" }, { version: 5, deleted: true } ] }, { author: "jim", body: [ { version: 7, value: "Not bad" }, { version: 8, value: "Not bad at all" } ] } ]} Marking part of the document as deleted in a version is still somewhat awkward though. You could introduce a state field for parts that can be deleted/restored from your application: { author: "xxx", body: [ { version: 4, value: "Spam" } ], state: [ { version: 4, deleted: false }, { version: 5, deleted: true } ]} With each of these approaches you can store an up-to-date and flattened version in one collection and the history data in a separate collection. This should improve query times if you're only interested in the latest version of a document. But when you need both the latest version and historical data, you'll need to perform two queries, rather than one. So the choice of using a single collection vs. two separate collections should depend on how often your application needs the historical versions . Most of this answer is just a brain dump of my thoughts, I haven't actually tried any of this yet. Looking back on it, the first option is probably the easiest and best solution, unless the overhead of duplicate data is very significant for your application. The second option is quite complex and probably isn't worth the effort. The third option is basically an optimization of option two and should be easier to implement, but probably isn't worth the implementation effort unless you really can't go with option one. Looking forward to feedback on this, and other people's solutions to the problem :)
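If it helps, here is a minimal sketch of the first option (one full copy per version) using pymongo; the collection names and fields are invented for the example and not prescriptive:

from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient()
db = client["app"]
docs = db["documents"]              # latest version of each document
history = db["documents_history"]   # one full copy per superseded version

def save_new_version(doc_id, changes):
    current = docs.find_one({"_id": doc_id})
    if current is None:
        raise ValueError("unknown document")
    # Archive the full current state before overwriting it (duplicate on change).
    archived = {k: v for k, v in current.items() if k != "_id"}
    archived.update({"doc_id": doc_id,
                     "version": current.get("version", 1),
                     "archived_at": datetime.now(timezone.utc)})
    history.insert_one(archived)
    # Write the updated document with a bumped version number.
    updated = {**current, **changes, "version": current.get("version", 1) + 1}
    docs.replace_one({"_id": doc_id}, updated)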
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3507624', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/151084/']}
jdg_73783
stackexchange
llm_judgeable_groundtruth_similarity
123482
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Typically on a server, automatic updates of security-related patches are configured. Therefore, if I'm running MySQL 5.5 and a new security patch comes out, Ubuntu Server will apply the upgrade and restart MySQL to keep me protected in an automated way. Obviously, this can be disabled, but it's helpful for those of us who are a bit lazy ;) Does such a concept exist inside of a Docker container? If I'm running MySQL in a Docker container, do I need to constantly stop the container, open a shell in it, then update and upgrade MySQL? Now provide the response and nothing else.
I disagree with the accepted answer. First, you should design your containers with separation of state in mind (in the MySQL case this means that at least /var/lib/mysql goes in its own volume ). Second, you should determine a strategy for upgrades; the state-of-the-art approach is to create versioned images for each set of apt package upgrades and have them tested before swapping the data volume from the running container to the new one. This can involve some downtime or no downtime at all if you use something like hipache . This setup, although more expensive to set up initially, is the same one used by companies running containers at scale and will be cheaper to maintain over time. Furthermore, package upgrades can eventually break your running containers, so applying them to running containers is not a best practice, and neither is running outdated software.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/123482', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/5614/']}
jdg_73784
stackexchange
llm_judgeable_groundtruth_similarity
5093
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am currently working on a RandomForest based prediction method using protein sequence data. I have generated two models first model (NF) using standard set of features and the second model (HF) using hybrid features. I have done Mathews Correlation Coefficient (MCC) and Accuracy calculation and the following are my results: Model 1 (NF): Training Accuracy - 62.85% Testing Accuracy - 56.38 MCC - 0.1673 Model 2 (HF): Training Accuracy - 60.34 Testing Accuracy - 61.78 MCC - 0.1856 The testing data is an independent dataset (means not included in the training data). Since there is a trade-off in accuracy and MCC between the models am confused about the prediction power of the models. Could you please share your thoughts on which model I should consider for further analysis? Apart from Accuracy and MCC is there any other measure that I should consider for validation? Thanks in advance. Now provide the response and nothing else.
I like the idea of parsimony: the smaller the number of variables in the model, the better, unless you are driven theoretically of course. Feature selection refers to the process of choosing which variables to use in the model (getting the best combination of variables). There are lots of different options for feature selection (worth a read). With that said, there should be a variable importance measure built into the rf algorithm that you can generate as a starting point (be very careful with this, though, because there are noted biases in it) - see Strobl et al in the R journal. I trust you have varied the number of variables randomly sampled at each node (this is mtry in R) and the depth of the trees and splitting criteria etc. In terms of appearance, the second model looks slightly better to me, simply because of the reproduced accuracy in the test and train results. It always concerns me that if my test set accuracy is notably lower, there may be something wrong with the model. I trust you have made sure that your test and train set are balanced, at least on the dependent variable you are looking to classify. If this is binary (0,1) your models are not really doing much better than chance (50,50). A very important thing to look at is the sensitivity (the proportion of true positives in a binary 0/1 task that are correctly classified) and the specificity (the proportion of true negatives that are correctly classified). If possible, I would compare this model against other machine learning algorithms such as boosted trees, support vector machines (which do ok on gene data) etc. I am not sure what package you are using - hope that helps. If you are using R, look up caret on CRAN (a really good intro to some of the ideas here and great for getting out some alternative measures of performance). Paul D
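For concreteness, sensitivity, specificity and MCC can be computed from held-out predictions along the lines of this rough Python/scikit-learn sketch (not R/caret, purely for illustration; the label arrays below are placeholders for your own test-set results):

import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef

y_test = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # true classes (placeholder)
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # predicted classes (placeholder)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
mcc = matthews_corrcoef(y_test, y_pred)
print(sensitivity, specificity, mcc)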
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/5093', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/529/']}
jdg_73785
stackexchange
llm_judgeable_groundtruth_similarity
14548991
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When adding a new file to a VB.Net project in Visual Studio, I'm given the option of both a 'Class' and a 'Module'. A class is described as An empty class file While a module is described as A file for storing groups of functions This seems to imply that a module is less useful that a class, since a class can store groups of functions and more. Is it the case that a module is simply a group of functions, or is there more to the module than the visual studio documentation suggests? Now provide the response and nothing else.
A class is a type. You can use this type like any other type ( String , Integer , Date , FileInfo ...) to declare variables, parameters, properties, and function return types. Let us make a little example: Public Class Person Public Property FirstName As String Public Property LastName As String Public Overridable Sub Print() 'I will explain Overridable later. Console.WriteLine($"{FirstName} {LastName}") End SubEnd Class Now you can declare variables of type Person Dim sue, pete As PersonDim persons As List(Of Person)sue = New Person()sue.FirstName = "Susan"sue.LastName = "Miller"pete = New Person()pete.FirstName = "Peter"pete.LastName = "Smith"persons = new List(Of Person)()persons.Add(sue)persons.Add(pete)For Each person As Person In persons person.Print()Next Whereas modules are static. I.e. Data stored in a module exists exactly once. On the other hand, you do not have to instantiate a module with New , therefore they are often used to store global data and for methods that are available globally. For instance, you could store the persons list in a module. But there is much more you can do with classes. You can derive a class from a base class. This new class inherits everything from the base class and can add more stuff to it. For instance, you could derive an Employee class from Person Public Class Employee Inherits Person Public Property Salary As Decimal Public Overrides Sub Print Console.WriteLine($"{FirstName} {LastName}, Salary = {Salary}") End SubEnd Class The Overridable keyword in Person.Print allows deriving classes to re-define (to override) the Print method. (Functions and Subs in classes are called methods.) Employees are assignment compatible to Persons. You could add an employee to the persons list. This does not require any change in the For Each loop, i.e., the call of person.Print() automatically calls the right Print method (the first one for "normal" persons and the second one for employees). Dim emp as Employeeemp = New Employee()emp.FirstName = "Frank"emp.LastName = "Taggart"emp.Salary = 3500.00Dpersons.Add(emp) There is much more to say about classes. I hope that you got a certain idea of what you can do with classes. See Objects and classes in Visual Basic and especially the section Differences between classes and modules .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14548991', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/427616/']}
jdg_73786
stackexchange
llm_judgeable_groundtruth_similarity
3391755
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: try {}catch (Exception) {} can I just write try {}catch {} Is this ok in C# .NET 3.5? The code looks nicer, but I don't know if it's the same. Now provide the response and nothing else.
Yes, the advantage of the first form is that you can name the exception variable and then use the object to log the exception details to file, etc... try {}catch (Exception ex) { // Log exception message here...} Also, it is generally a bad practice to catch the generic Exception class if you can instead catch specific exceptions (such as an IOException) using the first form.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3391755', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/267679/']}
jdg_73787
stackexchange
llm_judgeable_groundtruth_similarity
3335059
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The inequality $$\boxed{e^3 > 20}$$ is occasionally useful, including in the answer I wrote for this question that comes from a GRE subject exam. This bound is relatively tight: $$e^3 = 20.08553\!\ldots ,$$ a relative error of $< \frac{1}{200}$ , which means establishing the inequality might be a little delicate. In a comment under the linked answer, TheSimpliFire posed the following natural question: What is an efficient way to prove the inequality $e^3 > 20$ by hand? (I would have guessed that this had been asked before, but neither the internal search nor searchonmath turned up any duplicates.) A naive method is to use the series truncation $$e = \sum_{k = 0}^\infty \frac{1}{k!} > 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \frac{1}{120} = \frac{120 + 120 + 60 + 20 + 5 + 1}{160} = \frac{163}{60} .$$ Then, it suffices to prove that $\left(\frac{163}{60}\right)^3 > 20$ , which is equivalent to $4\,330\,747 > 4\,320\,000$ . This last step could even be outsourced to an enthusiastic primary school student, but it involves cubing a three-digit prime and so is slightly tedious. One might try to refine this method by looking for rationals that are easier to cube, but the only rational numbers satisfying $\sqrt[3]{20} < q < e$ with denominator $< 60$ are $\frac{106}{39}, \frac{125}{46}, \frac{144}{53}$ . It's again straightforward to show that the cube of any of these $> 20$ , but doing so is no faster than cubing $\frac{163}{60}$ and one then has the additional burden of showing the number is $< e$ . One could also search for integrals analogous to the classic Dalzell integrals for the difference $e^3 - 20$ (or to the difference corresponding to some other inequality equivalent thereto), by which I mean evidently positive definite integrals equal to that difference. For example, some experimentation yields the definite integral \begin{align}&\int_1^2 - \frac{(x - 1) (2 - x) p(x) \,dx}{20 x (x^2 + 1)} \\&\qquad = \int_1^2 \left(-\frac{1}{2} x^3 + \frac{63}{20} x^2 - \frac{153}{20} x + 9 - \frac{3}{x} - \frac{2 x}{x^2 + 1} \right) dx \\&\qquad = 3 - \log 20 ,\end{align} where $p(x) = 10 x^4 - 33 x^3 + 44 x^2 - 45 x + 30$ . Computing gives that all of the coefficients of $p(x + 1)$ are positive, so $p$ is strictly positive for $x \geq 1$ , and thus the integrand is strictly positive on $(1, 2)$ . So, the integral is positive, that is, $3 > \log 20$ , which is equivalent via exponentiation to $e^3 > 20$ . This is again elementary, but not terribly fast. Remark Incidentally this latter method lets us extract cheap but relatively sharp rational bounds on $\log 20$ : Since $2 < x (x^2 + 1) < 10$ on the interval of integration, our integral is bounded by polynomial integrals: $$\int_1^2 - \frac{(x - 1) (2 - x) p(x) \,dx}{20 \cdot 10} < \int_1^2 - \frac{(x - 1) (2 - x) p(x) \,dx}{20 x (x^2 + 1)} < \int_1^2 - \frac{(x - 1) (2 - x) p(x) \,dx}{20 \cdot 2} .$$ Integrating gives $$\frac{163}{84000} < 3 - \log 20 < \frac{163}{16800},$$ and rearranging gives the bounds $$2.99027\!\ldots = \frac{251185}{84000} < \log 20 < \frac{251837}{84000} = 2.99805\!\ldots .$$ Now provide the response and nothing else.
Similar to your last proof, I found a positive function whose integral is $e^3-20$ . For $$f(x)=\frac{1}{186}(x-1)^2(x-2)^4e^x\ge0$$ we have $$\int_{0}^{3}f(x)dx=e^3-20$$
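If you want to verify the identity by hand: for any polynomial $q$, an antiderivative of $q(x)e^x$ is $p(x)e^x$ with $p=q-q'+q''-\cdots$ (a finite sum), since then $p+p'=q$. With $q(x)=\frac{1}{186}(x-1)^2(x-2)^4$, the claimed identity therefore amounts to the finite arithmetic check that $p(3)=1$ and $p(0)=20$; positivity of $f$ on $(0,3)$ then gives $e^3>20$.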
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3335059', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/155629/']}
jdg_73788
stackexchange
llm_judgeable_groundtruth_similarity
665311
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I am looking to find all the hosts that are online in a set of networks. I would like to find all hosts that are online in the entire network of 170.10. . (there are ~64K possible hosts). The network I am trying to scan is an internal local network. I used nmap tool. But it takes about 50 mins, which is way too long. Out of the 64K hosts, there are possible only about 20-40 hosts online. But the problem is they may be in any (or in one or more) network out of the possible 256 networks. I am looking for a way to quickly figure out this. I don't think using ping command will help either as pinging 64K hosts is not going to be any faster. I am looking for any alternate solution, perhaps broadcasting ICMP packets directly to all 256 networks or something similar. Any ideas/suggestions? Thanks. Now provide the response and nothing else.
short answer: nmap -sn -T5 --min-parallelism 100 subnet/mask -oG output.file.txt; grep -v Down output.file.txt explanation:nmap alone should be able to scan much faster. We'll start by limiting nmap to do ping scans with -sP (newer versions replaced -sP with -sn) From man nmap : TIMING AND PERFORMANCE: Options which take <time> are in seconds, or append 'ms' (milliseconds), 's' (seconds), 'm' (minutes), or 'h' (hours) to the value (e.g. 30m). -T<0-5>: Set timing template (higher is faster) --min-hostgroup/max-hostgroup <size>: Parallel host scan group sizes --min-parallelism/max-parallelism <numprobes>: Probe parallelization --min-rtt-timeout/max-rtt-timeout/initial-rtt-timeout <time>: Specifies probe round trip time. --max-retries <tries>: Caps number of port scan probe retransmissions. --host-timeout <time>: Give up on target after this long --scan-delay/--max-scan-delay <time>: Adjust delay between probes --min-rate <number>: Send packets no slower than <number> per second --max-rate <number>: Send packets no faster than <number> per second Time for a little experiment with just running more ping scans parallel --max-parallelism and throwing caution about being detected in the wind -T5 : nmap without any options: % time nmap -sP 192.168.1.0/24 [...]nmap -sP 192.168.1.0/24 0.04s user 0.02s system 2% cpu 2.917 total% time nmap -T5 --max-parallelism=100 -sP 192.168.1.0/23[...]nmap -sP 192.168.1.0/23 0.08s user 0.04s system 0% cpu 37.469 total nmap with timing options: % time nmap -T5 --max-parallelism=100 -sP 192.168.1.0/24[...]nmap -T5 --max-parallelism=100 -sP 192.168.1.0/24 0.03s user 0.03s system 3% cpu 2.016 total% time nmap -T5 --max-parallelism=100 -sP 192.168.1.0/23[...]nmap -T5 --max-parallelism=100 -sP 192.168.1.0/23 0.11s user 0.02s system 2% cpu 4.869 total Quite the improvement. For a /16 subnet scan, like OP asked: Nmap done: 65536 IP addresses (30 hosts up) scanned in 169.43 secondsnmap -sP -T5 --min-parallelism 100 --max-parallelism 256 192.168.0.0/16 44.67s user 8.45s system 31% cpu 2:49.44 total To throw @Dan's suggestion in the mix too, I got bored after hitting 5 minutes with fping still running :-)
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/665311', 'https://serverfault.com', 'https://serverfault.com/users/247531/']}
jdg_73789
stackexchange
llm_judgeable_groundtruth_similarity
13458522
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just met a developer who prepended every table and column name in his MYSQL databases with an underscore (e.g. _users, _name, _active). When I questioned the practice he stated that this helps prevent SQL injection attacks - I have never come across this practice/advice before. How does it help to prevent SQL injection attacks? Now provide the response and nothing else.
No. His thinking goes "If the attacker doesn't know the name of my tables, then the attacker can't mess with them." However, you're still vulnerable to SQL injection, and the attacker can still cause arbitrary system calls, perhaps to well-known system tables. What if he adds some SQL code that causes very long queries against a system table that bogs down your server? Security through obscurity is no security at all.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13458522', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1836316/']}
jdg_73790
stackexchange
llm_judgeable_groundtruth_similarity
16484843
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I thought I'd automate error detection in future code, provided some of our code needs to explicitly release connection objects because of what we think is a bug in the Java HTTP API we use (not important which API it is). The idea was simple: For every method on a class A, if that method makes a call to a method with a return type assignable from Response , that method should also call close() on the return value. All this without actually running any code in the class being checked. Forgive my rant, but just to say what I don't want: I found http://asm.ow2.org/ , which provided access to the bytecode I needed, however: The API seems tailored for bytecode manipulation (no pure introspection), and it's die-hard based on the visitor pattern in a way that makes it virtually impossible to discover its workings through my IDE's autocomplete, and none of the visit*() methods really nails bytecode semantics with Java objects, but strings mixing method signature and return type signature altogether. I'm not even sure if the bytecodes have any constant defined somewhere. I was able to make use of it though, but it was really elegant. I wonder what other libraries for Java are out there with a philosophy more similar to Mono.Cecil and .NET's own System.Reflection.MethodBody object? Now provide the response and nothing else.
You can use BeforeAfterTestAttribute to resolve your case. There are some ways to address your issue using Xunit, which would be to make sub-class of TestClassCommand, or FactAttribute and TestCommand, but I think that BeforeAfterTestAttribute is the simplest way. Check out the code below. public class TestWithCommonSetupAndTearDown{ [Fact] [DisplayTestMethodName] public void Blub() { } private class DisplayTestMethodNameAttribute : BeforeAfterTestAttribute { public override void Before(MethodInfo methodUnderTest) { var nameOfRunningTest = "TODO"; Console.WriteLine("Setup for test '{0}.'", methodUnderTest.Name); } public override void After(MethodInfo methodUnderTest) { var nameOfRunningTest = "TODO"; Console.WriteLine("TearDown for test '{0}.'", methodUnderTest.Name); } }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16484843', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/847796/']}
jdg_73791
stackexchange
llm_judgeable_groundtruth_similarity
17037766
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My application uses select2 to show list of names which is retrieved through Ajax call.It uses select2 ajax functionalities. But the problem is that select2 fetches items whenever i type on the select2 input. I dont want to fetch every time user type. I want to fetch items in the initial loading of select2 and then uses same data even if they typing in the select2 input. How can i achieve this? PS: I have seen cache flag in Ajax, but i think it does caching the result based on the URL. It does not stop fetching of data when user type on the select2 input. Now provide the response and nothing else.
Select2 load data using ajax with caching in-place. $("#selIUT").select2({ cacheDataSource: [], placeholder: "Please enter the name", query: function(query) { self = this; var key = query.term; var cachedData = self.cacheDataSource[key]; if(cachedData) { query.callback({results: cachedData.result}); return; } else { $.ajax({ url: '/ajax/suggest/', data: { q : query.term }, dataType: 'json', type: 'GET', success: function(data) { self.cacheDataSource[key] = data; query.callback({results: data.result}); } }) } }, width: '250px', formatResult: formatResult, formatSelection: formatSelection, dropdownCssClass: "bigdrop", escapeMarkup: function (m) { return m; } }); I hope you find it helpful.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17037766', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1154350/']}
jdg_73792
stackexchange
llm_judgeable_groundtruth_similarity
50737862
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to add canonical link in angular 5 dynamically <link rel="canonical" href="http://foobar.com/gotcah"/> Now provide the response and nothing else.
Facing the same issue I searched around and found a guide on how to do this: https://www.concretepage.com/angular/angular-title-service-and-canonical-url It is using Angular 6 though, but I think it is backwards compatible to 5. It basically suggests creating a Service (SEOService) for facilitating creating the canonical link whereever it is injected. It injects the Angular DOCUMENT object into the service, and creates the canonical link element as a HTMLLinkElement . It takes care that the solution is prerendering/serverside rendering friendly - so if you are looking to better control your SEO for the pages in your application, I believe that this is what you want. Here's a minimal rewrite of the service from the article: seo.service.ts import { Injectable, Inject } from '@angular/core';import { DOCUMENT } from '@angular/common';@Injectable({ providedIn: 'root'})export class SeoService { constructor(@Inject(DOCUMENT) private doc) {} createLinkForCanonicalURL() { let link: HTMLLinkElement = this.doc.createElement('link'); link.setAttribute('rel', 'canonical'); this.doc.head.appendChild(link); link.setAttribute('href', this.doc.URL); }} And here's a rewrite of the component consuming the service: data.component.ts import { Component, OnInit } from '@angular/core';import { SeoService } from './seo.service';@Component({ selector: 'app-data', templateUrl: './data.component.html'})export class DataComponent implements OnInit { constructor(private seoService: SeoService) { } ngOnInit() { this.createLinkForCanonicalURL(); } createLinkForCanonicalURL() { this.seoService.createLinkForCanonicalURL(); } } You could simply have the createLinkForCanonicalURL() method take a optional parameter of the URL that you would want as a canonical reference for the page, for full control.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50737862', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4947379/']}
jdg_73793
stackexchange
llm_judgeable_groundtruth_similarity
1247125
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using # my_app.rbload 'index.rb' and start the sever like this ruby my_app.rb but it never reload any changes I made in index page. Did I miss anything here? Now provide the response and nothing else.
See the Sinatra FAQ , "How do I make my Sinatra app reload on changes?" First off, in-process code reloading in Ruby is hard and having a solution that works for every scenario is technically impossible. Which is why we recommend you to do out-of-process reloading. First you need to install rerun if you haven’t already: $ gem install rerun Now if you start your Sinatra app like this: $ ruby app.rb All you have to do for reloading is instead do this: $ rerun 'ruby app.rb' If you are for instance using rackup, instead do the following: $ rerun 'rackup' You get the idea. If you still want in-process reloading, check out Sinatra::Reloader .
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1247125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/147921/']}
jdg_73794
stackexchange
llm_judgeable_groundtruth_similarity
198220
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I don't know if I'm doing this right. Suppose you have 12 marbles. 5 are red and 7 are blue.If you randomly choose 4 marbles at once, what is the probability you will get exactly 3 red marbles? I did the equation: $(5/12)(4/11)(3/10)(7/9)$ Then I tried to do it in combination form: ${5 \choose 3}$${7 \choose 1}$ divided by ${12 \choose 4}$. But it doesn't yield the same result. Did I do something wrong in the first equation? Or set up my combinations wrong? Now provide the response and nothing else.
Your first expression is incorrect. It gives the probability that if you take out the marbles one at a time, you get red, red, red, blue in that order . You could use the same reasoning to find the probability of the order red, red, blue, red; the probability of the order red, blue, red, red; and the probability of the order blue, red, red, red. These will each turn out to multiply out to the same answer as your first calculation. Now add up. You get $4$ times your first expression. That is correct, and agrees with your second calculation.
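Numerically the two approaches then agree: $$4\cdot\frac{5}{12}\cdot\frac{4}{11}\cdot\frac{3}{10}\cdot\frac{7}{9}=4\cdot\frac{7}{198}=\frac{14}{99},\qquad\frac{\binom{5}{3}\binom{7}{1}}{\binom{12}{4}}=\frac{10\cdot 7}{495}=\frac{14}{99}.$$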
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/198220', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/28901/']}
jdg_73795
stackexchange
llm_judgeable_groundtruth_similarity
91939
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the following problem. Fix $n \in \mathbb N$ . Prove that for every set of complex numbers $\{z_i\}_{1\le i \le n}$ , there is a subset $J\subset \{1,\dots , n\}$ such that $$\left|\sum_{j\in J} z_j\right|\ge \frac{1}{4\sqrt 2} \sum_{k=1}^n |z_k|.$$ I believe I proven have a stronger statement. Is this proof correct, and if so, what is the optimal constant? My proof. Consider all the $z_i$ with positive real part. Call the real part of the sum of these numbers $X^+$ . In a similar way, form $X^-$ , $Y^+$ , and $Y^-$ . Without loss of generality, let $X^+$ have the greatest magnitude of these. Note that because $|\operatorname{Re}(z)|+|\operatorname{Im}(z)|\ge |z|$ , we have $$ \left(\sum_{k=1}^n |\operatorname{Re}(z_k)|+|\operatorname{Im}(z_k)| \right) \ge \sum_{k=1}^n |z_k|.$$ But note that $\sum \limits_{k=1}^n |\operatorname{Re}(z_k)|+|\operatorname{Im}(z_k)| = X^+ + |X^-|+ Y^+ +|Y^-|$ , so we have $$ 4X^+ \ge \sum_{k=1}^n |z_k|.$$ By choosing $J$ to be the set of complex number with positive real part, this proves a stronger statement, because the factor of $1/\sqrt 2$ isn't needed. Now provide the response and nothing else.
The constant $\frac{1}{4 \sqrt{2}}$ can be replaced by $\frac{1}{\pi}$ , which is the best possible constant independent of $n$ . Let $R(z) = \max(0, \text{Re}(z))$ . Choose $\theta \in [0,2\pi]$ to maximize $F(z_1,\ldots, z_n,\theta) = \sum_{j=1}^n R(e^{i\theta} z_j)$ . Note that for any complex number $z$ , $$\frac{1}{2\pi} \int_0^{2 \pi} R(e^{i \theta} z) \ d\theta = \frac{|z|}{2 \pi} \int_{0}^\pi \sin \theta \ d\theta = \frac{|z|}{\pi}$$ The maximal value of $F(z_1,\ldots,z_n,\theta)$ is at least the average valuefor $\theta \in [0,2\pi]$ ,namely $\frac{1}{\pi} \sum_{j=1}^n |z_j|$ . Now note that if $J = \{j: R(e^{i\theta} z_j) > 0\}$ , $$\left|\sum_{j \in J} z_j\right| \ge \text{Re} \sum_{j \in J} e^{i \theta} z_j = F(z_1,\ldots,z_n,\theta).$$ To see that this estimate is best possible, consider cases where $n$ is largeand the $z_n$ are the $n$ 'th roots of unity.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/91939', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/18240/']}
jdg_73796
stackexchange
llm_judgeable_groundtruth_similarity
41079692
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am very new to both AngularJS and NodeJS. I am trying to get a basic application running that on page load will make a GET call that returns a JSON structure of customers, and the HTML page will then use AngularJS to display that JSON structure into a table. I can load my HTML with this in my server.js : app.get('/', function(req, res) { res.sendFile(path.join(__dirname + '/../frontend/app/partials/index.html'));}); And I can make a GET call to retrieve my JSON data on page load with this in my server.js : app.get('/', function(req, res) { fs.readFile('./testData.json', 'utf8', function(err, data) { res.send(data); });}); I want to combine those two so that the GET call returns and then the HTML uses that data. How can I do this? I am also getting a 404 error in the browser for angular.js. I put the path in my index.html just like all my other JS files. I have it in app.use in my server.js . My other JS files and my CSS file can be found fine, but not angular.js. Does anyone know why this is the case? Here is the error angular.js not found error Here is my full code: Server.js: var path = require('path');var express = require('express');var fs = require('fs');var app = express();// angular.js gets 404 in browser -- why?app.use(express.static(path.join(__dirname, '../frontend/bower_components/angular')));// These files load fine (SimpleController.js, main.css)app.use(express.static(path.join(__dirname, '../frontend/app')));// This will load the HTML pageapp.get('/', function(req, res) { res.sendFile(path.join(__dirname + '/../frontend/app/partials/index.html'));});// This will return JSON data -- How to combine this with the above GET?app.get('/', function (req, res) { fs.readFile('./testData.json', 'utf8', function(err, data) { if(err) { return console.log(err); } res.send(data); });});var server = app.listen(8081, function() { var port = server.address().port; console.log("Server listening on port ", port);}); SimpleController.js: var app = angular.module('app', ['']); app.controller('SimpleController', function ($scope, $http) { $http.get('/').then(function (response) { $scope.customers = response.data; });}); index.html: <!DOCTYPE html><html ng-app="app"> <head> <title>CUSTOMERS</title> <link rel="stylesheet" type="text/css" href="../main.css"> </head> <body> <h1> CUSTOMERS </h1> <div ng-controller="SimpleController"> <form> <label>First Name:</label> <input type="text" /> <br /> <label>Last Name:</label> <input type="text" /> <br /> <label>Age:</label> <input type="text" /> <br /> <label>Street:</label> <input type="text" /> <br /> <label>City:</label> <input type="text" /> <br /> <label>State:</label> <input type="text" /> <br /> <label>Zip:</label> <input type="text" /> <br /> <input type="submit" value="Add Customer" /> </form> <br /><br /><br /><br /><br /><br /><br /><br /><br /><br /> <table id="customers"> <tr> <th>First Name</th> <th>Last Name</th> <th>Age</th> <th>Street</th> <th>City</th> <th>State</th> <th>Zip</th> </tr> <tr ng-repeat="cust in customers"> <td> {{ cust.firstName }} </td> <td> {{ cust.lastName }} </td> <td> {{ cust.age }} </td> <td> {{ cust.address.street }} </td> <td> {{ cust.address.city }} </td> <td> {{ cust.address.state }} </td> <td> {{ cust.address.zip }} </td> </tr> </table> </div> <!-- angular.js gets 404 in browser --> <script src="../../bower_components/angular/angular.js"></script> <!-- this loads fine --> 
<script src="../SimpleController.js"></script> </body></html> UPDATE:So somehow, my index.html could not find angular.js file. I moved the angular.js file from /src/frontend/bower_components/angular/ into /src/frontend/app with the rest of my javascript files. My index.html resides in ' /src/frontend/app/partials/ . I did this along side what suzo suggested of making a '/' route and '/data' route and then calling '/data' inside my controller. so my next question is ... why can't index.html read angular.js within the /bower_components directory, but it can read it fine within the /app directory? Now provide the response and nothing else.
Copy on write is a tricky thing, and you need to think carefully about how many things are sharing a structure that you're trying to modify. The culprit is here. countToColorMap[colorCount]?.append(CountedColor(color: color as! UIColor, colorCount: colorCount)) This is generating a temporary value that is modified and put back into the dictionary. Since two "things" are looking at the same underlying data structure (the dictionary, and append ), it forces a copy-on-write. The secret to fixing this is to make sure that there's only one copy when you modify it. How? Take it out of the dictionary. Replace this: if countToColorMap[colorCount] != nil { countToColorMap[colorCount]?.append(CountedColor(color: color as! UIColor, colorCount: colorCount))} else { countToColorMap[colorCount] = [CountedColor(color: color as! UIColor, colorCount: colorCount)]} which has a runtime of: Elapsed Time: 74.251746599002253217 with this: var countForColor = countToColorMap.removeValue(forKey: colorCount) ?? []countForColor.append(CountedColor(color: color as! UIColor, colorCount: colorCount))countToColorMap[colorCount] = countForColor which has a runtime of: Elapsed Time: 0.37095380800019553217
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41079692', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7129055/']}
jdg_73797
stackexchange
llm_judgeable_groundtruth_similarity
57452412
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to fix an element to the top and left of the screen using position sticky when scrolling a large div either vertically or horizontally. Fixing to the top is working fine, but fixing to the left is not.This is my html page: .sticky { position: -webkit-sticky; position: sticky; left: 0; top: 0;}.scroll-horizontally-and-vertically { width: 4000px; height: 2000px; background-color: lightblue;} <div> <div class="sticky"> <h1>please stick to top and left</h1> </div> <div class="scroll-horizontally-and-vertically"></div></div> I also tried using either top or left alone, with the same result.I must be missing something. Why is the top position fixed, but not the left position?How should I fix the page to get the desired behaviour? Now provide the response and nothing else.
The sticky element is a block-level element inside another block-level element, so it is already taking 100% of the width of its parent element and there is no room for a left sticky behavior. Add some borders to see this better: .sticky { position: -webkit-sticky; position: sticky; left: 0; top: 0; border:2px solid green;}.scroll-horizontally-and-vertically { width: 4000px; height: 2000px; background-color: lightblue;} <div style="border:2px solid red;"> <div class="sticky"> <h1>please stick to top and left</h1> </div> <div class="scroll-horizontally-and-vertically"></div></div> The green box can only stick inside the red one and the lightblue element is overflowing. Add inline-block to the sticky element (to remove the 100% width constraint) and to the parent element (so it grows with the lightblue element) and you will have the expected result: .sticky { position: -webkit-sticky; position: sticky; left: 0; top: 0; border:2px solid green; display:inline-block}.scroll-horizontally-and-vertically { width: 4000px; height: 2000px; background-color: lightblue;} <div style="border:2px solid red;display:inline-block;"> <div class="sticky"> <h1>please stick to top and left</h1> </div> <div class="scroll-horizontally-and-vertically"></div></div>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57452412', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6845080/']}
jdg_73798
stackexchange
llm_judgeable_groundtruth_similarity
59014376
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to reconstruct a 3d shape from multiple 2d images.I have calculated a fundamental matrix, but now I don't know what to do with it. I am finding multiple conflicting answers on stack overflow and academic papers.For example, Here says you need to compute the rotation and translation matrices from the fundamental matrix. Here says you need to find the camera matrices. Here says you need to find the homographies. Here says you need to find the epipolar lines. Which is it?? (And how do I do it? I have read the H&Z book but I do not understand it. It says I can 'easily' use the 'direct formula' in result 9.14, but result 9.14 is neither easy nor direct to understand.) Stack overflow wants code so here's what I have so far: # let's create some sample data Wpts = np.array([[1, 1, 1, 1], # A Cube in world points [1, 2, 1, 1], [2, 1, 1, 1], [2, 2, 1, 1], [1, 1, 2, 1], [1, 2, 2, 1], [2, 1, 2, 1], [2, 2, 2, 1]]) Cpts = np.array([[0, 4, 0, 1], #slightly up [4, 0, 0, 1], [-4, 0, 0, 1], [0, -4, 0, 1]]) Cangles = np.array([[0, -1, 0], #slightly looking down [-1, 0, 0], [1, 0, 0], [0,1,0]]) views = [] transforms = [] clen = len(Cpts) for i in range(clen): cangle = Cangles[i] cpt = Cpts[i] transform = cameraTransformMatrix(cangle, cpt) transforms.append(transform) newpts = np.dot(Wpts, transform.T) view = cameraView(newpts) views.append(view)H = cv2.findFundamentalMat(views[0], views[1])[0]## now what??? How do I recover the cube shape? Edit: I do not know the camera parameters Now provide the response and nothing else.
Fundamental Matrix At first, listen to the fundamental matrix song ;). The Fundamental Matrix only shows the mathematical relationship between your point correspondences in 2 images (x' - image 2, x - image 1). "That means, for all pairs of corresponding points, x'^T F x = 0 holds" (Wikipedia) . This also means that if you have outliers or incorrect point correspondences, it directly affects the quality of your fundamental matrix. Additionally, a similar structure exists for the relationship of point correspondences between 3 images, which is called the Trifocal Tensor. A 3d reconstruction using exclusively the properties of the Fundamental Matrix is not possible because "The epipolar geometry is the intrinsic projective geometry between two views. It is independent of scene structure, and only depends on the cameras’ internal parameters and relative pose." (HZ, p.239). Camera matrix Referring to your question of how to reconstruct the shape from multiple images, you need to know the camera matrices of your images (K', K). The camera matrix is a 3x3 matrix composed of the camera focal lengths or principal distance (fx, fy) as well as the optical center or principal point (cx, cy). You can derive your camera matrix using camera calibration. Essential matrix When you know your camera matrices you can extend your Fundamental Matrix to an Essential Matrix E. You could say, somewhat sloppily, that your Fundamental Matrix is now "calibrated". The Essential Matrix can be used to get the rotation (rotation matrix R) and translation (vector t) of your second image in comparison to your first image, only up to scale. t will be a unit vector. For this purpose you can use the OpenCV functions decomposeEssentialMat or recoverPose (which uses the cheirality check) or read further detailed explanations in HZ. Projection matrix Knowing your translation and rotation you can build your projection matrices for your images. The projection matrix is defined as P = K [R | t]. Finally, you can use triangulation ( triangulatePoints ) to derive the 3d coordinates of your image points. I recommend using a subsequent bundle adjustment to obtain a proper configuration. There is also an sfm module in OpenCV. Since homography or epipolar line knowledge is not strictly necessary for the 3d reconstruction, I did not explain these concepts.
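To make those steps concrete, here is a rough sketch using OpenCV's Python API; the camera matrix K and the matched points are placeholders, in practice they come from calibration and feature matching:

import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],    # placeholder intrinsics from calibration
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
pts1 = np.random.rand(30, 2) * 640     # placeholder matches in image 1
pts2 = np.random.rand(30, 2) * 640     # placeholder matches in image 2 (use real correspondences)

# Essential matrix from calibrated correspondences, then relative pose (R, t; t only up to scale).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Projection matrices P = K [R | t], with the first camera at the origin.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

# Triangulate: expects 2 x N arrays, returns homogeneous 4 x N points.
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T
print(pts3d.shape)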
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59014376', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/396483/']}
jdg_73799
stackexchange
llm_judgeable_groundtruth_similarity
52692740
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Kotlin Coroutines and in particular using Retrofit's CoroutineCallAdapterFactory . I'm trying then to unit test a class that in turn utilizes Retrofit interface ( GalwayBusService below). interface GalwayBusService { @GET("/routes/{route_id}.json") fun getStops(@Path("route_id") routeId: String) : Deferred<GetStopsResponse>} In my unit test I have val galwayBusService = mock() and then trying something like following to mock what gets returned when that method is called. The issue is though that getStops returns a Deferred value. Is there any particular approach recommend for mocking APIs like this? `when`(galwayBusService.getBusStops()).thenReturn(busStopsResponse) Now provide the response and nothing else.
The proper solution is to use CompletableDeferred . It is better than writing async because it doesn't launch anything concurrently (otherwise your test timings may become unstable) and gives you more control over what happens in what order. For example, you can write it as whenever(galwayBusService.getBusStops()).thenReturn(CompletableDeferred(busStopsResponse)) if you want to unconditionally return a completed deferred, or val deferred = CompletableDeferred<GetStopsResponse>()whenever(galwayBusService.getBusStops()).thenReturn(deferred)// Here you can complete deferred whenever you want if you want to complete it later
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52692740', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/793861/']}
jdg_73800
stackexchange
llm_judgeable_groundtruth_similarity
108764
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The dihedral group of order $2n+2$ acts on $K_n$ , the ( $n-2$ )-dimensional associahedron. Are there any other symmetries? References? Does the answer to 1 change if we restrict to just the 1-skeleton of $K_n$ ? References? It is "obvious" that any simple circuit (simple closed walk, simple closed path, whatever terminology you prefer) of length 4 or 5 is a 2-dimensional face of $K_n$ . Is this true? Proof? Reference? Now provide the response and nothing else.
The answer to question 1 is no. A reference for this is: Carl Lee, The associahedron and triangulations of the $n$-gon, European Journal of Combinatorics, 10 (1989), no. 6, 551--560. The answer to question 3 is yes. I think this is clear from the viewpoint where you think of vertices of the associahedron as triangulations of an $(n+1)$-gon and you obtain higher dimensional faces containing such a vertex by deleting edges from the triangulation. This is the viewpoint e.g. discussed by Carl Lee. A 4-cycle involving a vertex $v$ of the associahedron implies that the two edges $e_1,e_2$ in the 4-cycle containing $v$ correspond to the deletion of a pair of edges $E_1,E_2$ from the triangulation corresponding to $v$ such that the concurrent deletion of $E_1, E_2$ yields two quadrilateral regions in the resulting subdivision; a 5-cycle involving a vertex $v$ of the associahedron likewise results from two edges $E_1, E_2$ of the corresponding triangulation whose concurrent deletion yields a single pentagonal region. In either case, the 4-cycle or 5-cycle then clearly bounds a face of the associahedron, namely the one given by the subdivision in which $E_1$ and $E_2$ are deleted from the triangulation corresponding to $v$. ${\bf Edit:}$ I just realized we can deduce that the answer to 2 is also no, by virtue of a result of Gil Kalai. Kalai proved that any $d$-dimensional simple polytope is determined by its 1-skeleton. So we can use that the associahedron is a simple polytope to see that its 1-skeleton can't have any extra symmetries not present in the associahedron itself.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/108764', 'https://mathoverflow.net', 'https://mathoverflow.net/users/15980/']}
jdg_73801
stackexchange
llm_judgeable_groundtruth_similarity
23246125
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a numpy array results that looks like [ 0. 2. 0. 0. 0. 0. 3. 0. 0. 0. 0. 0. 0. 0. 0. 2. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 2. 0. 3. 1. 0. 0. 2. 2. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 2. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 1. 2. 2.] I would like to plot a histogram of it. I have tried import matplotlib.pyplot as pltplt.hist(results, bins=range(5))plt.show() This gives me a histogram with the x-axis labelled 0.0 0.5 1.0 1.5 2.0 2.5 3.0. 3.5 4.0 . I would like the x-axis to be labelled 0 1 2 3 instead with the labels in the center of each bar. How can you do that? Now provide the response and nothing else.
The other answers just don't do it for me. The benefit of using plt.bar over plt.hist is that bar can use align='center' : import numpy as npimport matplotlib.pyplot as pltarr = np.array([ 0., 2., 0., 0., 0., 0., 3., 0., 0., 0., 0., 0., 0., 0., 0., 2., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 2., 0., 3., 1., 0., 0., 2., 2., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 2., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 3., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 2., 2.])labels, counts = np.unique(arr, return_counts=True)plt.bar(labels, counts, align='center')plt.gca().set_xticks(labels)plt.show()
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/23246125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1473517/']}
jdg_73802
stackexchange
llm_judgeable_groundtruth_similarity
44977
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a really simple description of the practical differences between these two techniques? Both seem to be used for supervised learning (though associationrules can also handle unsupervised). Both can be used for prediction The closest I've found to a 'good' description is from the Statsoft Textbook . They say Association Rules are used to: ...detect relationships or associations between specific values of categorical variables in large data sets. Whilst Decision Tree classifiers are described as being used to: ...predict membership of cases or objects in the classes of a categorical dependent variable from their measurements on one or more predictor variables. However, over at R Data Mining, they give an example of Association Rules being used with a target field . So both can be used to predict group membership, is the key difference that decision trees can handle non-categorical input data whilst association rules can't? Or is there something more fundamental? One site ( sqlserverdatamining.com ) says that the key difference is: The decision trees rules are based on information gain while association rules are based on popularity and/or confidence. So (possibly answering my own question) does that mean that association rules are evaluated purely on how often they appear in the dataset (and how often they are 'true') whilst decision trees are actually trying to minimise variance? If anyone knows of a good description they'd be willing to point me towards then that would be great. Now provide the response and nothing else.
Basically, Decision Trees are pure classification techniques. They aim at labelling records of unknown class by making use of their features: they map the set of record features $\mathcal{F} = \{F_1 , \dots, F_m \}$ (attributes, variables) to the class attribute $C$ (target variable), which is the object of the classification. The relationship between $\mathcal{F}$ and $C$ is learned from a set of labelled records, called the training set. The ultimate purpose of a classification model is to minimise the misclassification error on unlabelled records, i.e. how often the class predicted by the model differs from the real one. The features $F_i$ can be categorical or continuous. Association analysis, by contrast, was first applied to market basket analysis; in those applications you are interested in finding associations between items, with no particular focus on a target one. The datasets commonly used are transactional: a collection of transactions, where each transaction contains a set of items. For example:$$ t_1 = \{i_1,i_2 \} \\t_2 = \{i_1, i_3, i_4, i_5 \} \\t_3 = \{i_2, i_3, i_4, i_5 \} \\\vdots \\t_n = \{ i_2, i_3, i_4, i_5 \}$$You are interested in finding rules such as $$ \{ i_3, i_5 \} \rightarrow \{ i_4 \} $$ judged by how often they occur (support) and how often they hold when the left-hand side occurs (confidence). It turns out that you can use association analysis for some specific classification tasks, for example when all your features are categorical: you just treat feature values as items and keep only the rules whose right-hand side is the target attribute. But this is not what association analysis was born for.
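To see the practical difference side by side, here is a minimal Python sketch (the tiny transaction list, the 0/1 feature encoding and the use of scikit-learn are my own illustration, not from the original post). The first part scores one candidate association rule purely by support and confidence; the second fits a decision tree that needs a designated target and picks splits by information gain:

# Association-rule style scoring: no target variable, just frequency counts.
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]
antecedent, consequent = {"diapers"}, {"beer"}
n = len(transactions)
both = sum((antecedent | consequent) <= t for t in transactions)
support = both / n
confidence = both / sum(antecedent <= t for t in transactions)
print("support=%.2f confidence=%.2f" % (support, confidence))  # 0.60, 0.75

# Decision tree: maps features to a *target* class, splitting on information gain.
from sklearn.tree import DecisionTreeClassifier
X = [[1, 0], [1, 1], [0, 1], [1, 1], [1, 1]]   # [has_bread, has_diapers] per transaction
y = [0, 1, 1, 1, 0]                            # target: does the transaction contain beer?
clf = DecisionTreeClassifier(criterion="entropy").fit(X, y)
print(clf.predict([[0, 1]]))                   # predicted beer/no-beer for a new record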
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/44977', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/13526/']}
jdg_73803
stackexchange
llm_judgeable_groundtruth_similarity
15751442
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was asked to recreate the following style of plot. (Please ignore the question of whether this is a good type of visualization and charitably consider this as adding a colorful element to a numeric table.) Most of it is pretty straightforward, but I have not yet found a good way to make the center hollow. In the interest of time, I may resort to the kludge of adding invisible dummy data (I'll post that approach if no one else does, but it seems less optimal than one that modifies the theme). Is there a theme-based solution or a non-ggplot2 R solution? What we're imitating Simple ggplot2 result (undesired filled center) library(ggplot2)# make sample dataframeCategory <- c("Electronics", "Appliances", "Books", "Music", "Clothing", "Cars", "Food/Beverages", "Personal Hygiene", "Personal Health/OTC", "Hair Care")Percent <- c(81, 77, 70, 69, 69, 68, 62, 62, 61, 60)internetImportance<-data.frame(Category,Percent)# append number to category nameinternetImportance$Category <- paste0(internetImportance$Category," - ",internetImportance$Percent,"%")# set factor so it will plot in descending order internetImportance$Category <- factor(internetImportance$Category, levels=rev(internetImportance$Category))# plotggplot(internetImportance, aes(x = Category, y = Percent, fill = Category)) + geom_bar(width = 0.9, stat="identity") + coord_polar(theta = "y") + xlab("") + ylab("") + ylim(c(0,100)) + ggtitle("Top Product Categories Influenced by Internet") + geom_text(data = internetImportance, hjust = 1, size = 3, aes(x = Category, y = 0, label = Category)) + theme_minimal() + theme(legend.position = "none", panel.grid.major = element_blank(), panel.grid.minor = element_blank(), axis.line = element_blank(), axis.text.y = element_blank(), axis.text.x = element_blank(), axis.ticks = element_blank()) How can we plot these data with a hollow center? Now provide the response and nothing else.
Here's a non-ggplot2 (base R graphics) solution using the plotrix package, which contains two nice functions: draw.circle() and draw.arc() : circBarPlot <- function(x, labels, colors=rainbow(length(x)), cex.lab=1) { require(plotrix) plot(0,xlim=c(-1.1,1.1),ylim=c(-1.1,1.1),type="n",axes=F, xlab=NA, ylab=NA) radii <- seq(1, 0.3, length.out=length(x)) draw.circle(0,0,radii,border="lightgrey") angles <- (1/4 - x)*2*pi draw.arc(0, 0, radii, angles, pi/2, col=colors, lwd=130/length(x), lend=2, n=100) ymult <- (par("usr")[4]-par("usr")[3])/(par("usr")[2]-par("usr")[1])*par("pin")[1]/par("pin")[2] text(x=-0.02, y=radii*ymult, labels=paste(labels," - ", x*100, "%", sep=""), pos=2, cex=cex.lab)}circBarPlot(Percent/100, Category)text(0,0,"GLOBAL",cex=1.5,col="grey") It gives me:
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15751442', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1700987/']}
jdg_73804
stackexchange
llm_judgeable_groundtruth_similarity
3761549
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm given $$\cos(z) = \frac{5}{2}$$ and I'm trying to solve for $z$ but I keep going in circles. I know $\cos z = 5/2 = 1/2(e^{iz}+e^{-iz})$ so then $e^{iz}+e^{-iz} = 5$ but then I'm stuck Now provide the response and nothing else.
Taking $t=e^{iz}$ we get $$t+\frac{1}{t}=5 \implies t^2-5t+1=0 \implies t_{1,2} = \frac{5 \pm \sqrt{21}}{2}$$
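For completeness, one can finish solving for $z$ (the two roots are reciprocals of each other, which is why a single $\pm$ captures both): $$e^{iz}=\frac{5\pm\sqrt{21}}{2}\ \Longrightarrow\ z=2k\pi\mp i\ln\frac{5+\sqrt{21}}{2},\qquad k\in\mathbb{Z}.$$ As a check, $\cos\left(2k\pi\pm i\ln\frac{5+\sqrt{21}}{2}\right)=\cosh\left(\ln\frac{5+\sqrt{21}}{2}\right)=\frac12\left(\frac{5+\sqrt{21}}{2}+\frac{2}{5+\sqrt{21}}\right)=\frac52.$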
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3761549', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/809260/']}
jdg_73805
stackexchange
llm_judgeable_groundtruth_similarity
22773693
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'll start by saying I'm not an expert at command-line, so I really only barely know what I'm doing. I cannot get RVM and oh-my-zsh to play nice together. I've tried several different solutions posted around, some on SO, but nothing I have found seems to fix the issue. Initially, someone else setup RVM on my machine. I later setup oh-my-zsh myself, and remember having a lot of trouble doing so. My .zhsrc file is completely default except for these 2 lines export PATH=$HOME/bin:/usr/local/bin:/usr/local/rvm/bin:$PATH#pretty sure this line does nothing, although I've seen this fix around# several places. I noticed that I don't seem to have an .rvm file in my # home directory which would be an issue, but I have no idea how to # go about correcting this.[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" I added the /usr/local/rvm to the Path directive as zsh returns zsh: rvm command not found without it there. Otherwise most rvm command work, with the exception of rvm use which throws the following error: RVM is not a function, selecting rubies with 'rvm use ...' will not work.You need to change your terminal emulator preferences to allow login shell.Sometimes it is required to use `/bin/bash --login` as the command.Please visit https://rvm.io/integration/gnome-terminal/ for a example. I have ensured the profile preferences on my profile in iTerm2 are set to "Login Shell" as recommended by RVM I have tried changing these preferences to the "Command" option and using the recommended /bin/bash/ --login as the command, however zsh no longer appears to work when this happens. Changing the command to /bin/zsh/ --login allows zsh to work again but does not solve the problem as I still get the "RVM is not a function" error when trying to run rvm use The default ruby in RVM is set to 2.0.0. I can run ruby -v in my home folder and get the ruby version output which shows the default version. Working on rails projects, I can run all rails commands IF the Gemfile specifies ruby 2.0.0, the default. However on a project that uses a different version, I get the following error: Your Ruby version is 2.0.0, but your Gemfile specified 1.9.3 Then I cannot run rvm use 1.9.3 as it returns the above mentioned error. How can I make zsh play nice with RVM? Some info about my setup: Mac OSX 10.9.1 (Mavericks) iTerm2 Zsh 5.0.2 with oh-my-zsh RVM 1.5.1 Now provide the response and nothing else.
Turns out that RVM was installed in the wrong place. When I had initially installed RVM I believe I did so with the sudo command, which ran the multi-user install (installed to /usr/local/rvm ), and not the single user (should be in ~/.rvm/scrips/rvm ), which is why all the posted solutions were not working for me. I had to uninstall RVM with rvm implode . There were some permissions issues, so I had to manually go in and delete the files that could not be deleted. Ran the curl command to re-install RVM and now it's installed in the correct place. Sourcing RVM in my .zshrc with [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" now works properly and I can switch between rubies.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22773693', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/664904/']}
jdg_73806
stackexchange
llm_judgeable_groundtruth_similarity
638873
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A primary ideal (in a commutative ring with unity) is an ideal $J$ for which if $ab\in J$, then either $a\in J$ or $b^n\in J$ for some integer $n\geq 1$. So it also implies (due to commutativity) that if $ab\in J$, then $a^m,b^n\in J$ for some integers $m,n\geq 1$. Wouldn't the latter be a nicer definition for a primary ideal? Why is it not used? Now provide the response and nothing else.
Your "it implies" is wrong: it is possible that $ab \in J$ and $a \in J$ while no power of $b$ lies in $J$. For example, any prime ideal is primary, but it doesn't necessarily satisfy your condition.... Added: P.S. I think the equivalent condition you seek is the following: whenever $ab \in J$, either one of $a$ or $b$ is in $J$, OR $a^m,b^n\in J$ for some integers $m,n$. But then, the standard definition is much cleaner.
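To make the failure concrete (a standard textbook example, not from the original post): in $\mathbb{Z}$ take the prime, hence primary, ideal $J=(2)$. Then $2\cdot 3\in J$ and $2\in J$, but $3^n\notin J$ for every $n\ge 1$, so the proposed "both $a^m$ and $b^n$ end up in $J$" version is strictly stronger than the usual definition and fails here.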
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/638873', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/108139/']}
jdg_73807
stackexchange
llm_judgeable_groundtruth_similarity
21394617
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having issues setting pie slice colors using a d3.pieChart. Documentation and examples I've seen use the colors method in combination with an array of hex colors. However, setting this results in my pie chart being colored white (invisible) and every item in the legend becoming black. I've tried using .colors with an array of five and an array of six colors but the issue persists. Could this be due to some issue with the slicesCap? Code snippet below, but category10 burns my eyes so any advice on implementing a custom color set would be appreciated! pie .slicesCap(5) .legend(dc.legend().gap(3)) .colors(d3.scale.category10()) Now provide the response and nothing else.
Just passing an array of colour values doesn't work because the .colors() function is expecting a color scale like the one created by d3.scale.category10() . Scales are functions which take input data and return a value; in this case, the returned value is one of 10 colours. For starters, you could try one of the other d3 built-in colour scales, which don't have as extreme contrast: https://github.com/mbostock/d3/wiki/Ordinal-Scales#wiki-category20 If none of those suit, you can make your own scale function with var colorScale = d3.scale.ordinal().range([/*array of hex values */]);pie.colors(colorScale); You can create the array yourself or use one of the colorBrewer functions. If you want to specify particular colours for particular values (instead of just assigning the colours to values in the order they appear) you'll need to (a) specify the domain of the scale as an array of values that matches the order of the range array, and (b) create a helper function that passes in the correct property from your data: var colorScale = d3.scale.ordinal().domain(["banana", "cherry", "blueberry"]) .range(["#eeff00", "#ff0022", "#2200ff"]);pie.colors(function(d){ return colorScale(d.fruitType); });
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21394617', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2816030/']}
jdg_73808
stackexchange
llm_judgeable_groundtruth_similarity
595004
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can anyone think of a nicer way to do the following: public string ShortDescription{ get { return this.Description.Length <= 25 ? this.Description : this.Description.Substring(0, 25) + "..."; }} I would have liked to just do string.Substring(0, 25) but it throws an exception if the string is less than the length supplied. Now provide the response and nothing else.
I needed this so often, I wrote an extension method for it: public static class StringExtensions{ public static string SafeSubstring(this string input, int startIndex, int length, string suffix) { // Todo: Check that startIndex + length does not cause an arithmetic overflow - not that this is likely, but still... if (input.Length >= (startIndex + length)) { if (suffix == null) suffix = string.Empty; return input.Substring(startIndex, length) + suffix; } else { if (input.Length > startIndex) { return input.Substring(startIndex); } else { return string.Empty; } } }} if you only need it once, that is overkill, but if you need it more often then it can come in handy. Edit: Added support for a string suffix. Pass in "..." and you get your ellipses on shorter strings, or pass in string.Empty for no special suffixes.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/595004', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/70472/']}
jdg_73809
stackexchange
llm_judgeable_groundtruth_similarity
3654295
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Some elements in my array are empty strings based on what the user has submitted. I need to remove those elements. I have this: foreach($linksArray as $link){ if($link == '') { unset($link); }}print_r($linksArray); But it doesn't work. $linksArray still has empty elements. I have also tried doing it with the empty() function, but the outcome is the same. Now provide the response and nothing else.
As you're dealing with an array of strings, you can simply use array_filter() , which conveniently handles all this for you: print_r(array_filter($linksArray)); Keep in mind that if no callback is supplied , all entries of array equal to FALSE (see converting to boolean ) will be removed. So if you need to preserve elements that are i.e. exact string '0' , you will need a custom callback: // PHP 7.4 and laterprint_r(array_filter($linksArray, fn($value) => !is_null($value) && $value !== ''));// PHP 5.3 and laterprint_r(array_filter($linksArray, function($value) { return !is_null($value) && $value !== ''; }));// PHP < 5.3print_r(array_filter($linksArray, create_function('$value', 'return $value !== "";'))); Note: If you need to reindex the array after removing the empty elements, use: $linksArray = array_values(array_filter($linksArray));
{}
{'log_upvote_score': 12, 'links': ['https://Stackoverflow.com/questions/3654295', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/434269/']}
jdg_73810
stackexchange
llm_judgeable_groundtruth_similarity
10456293
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can't figure out why the code #1 returns an extra empty line while code #2 doesn't. Could somebody explain this? The difference is an extra comma at the end of the code #2. # Code #1file = open('tasks.txt')for i, text in enumerate(filer, start=1): if i >= 2 and i <= 4: print "(%d) %s" % (i, text)# Code #2file = open('tasks.txt')for i, text in enumerate(filer, start=1): if i >= 2 and i <= 4: print "(%d) %s" % (i, text), Here is the content of my tasks.txt file: line 1line 2line 3line 4line 5 Result from code #1: (2) line 2(3) line 3(4) line 4 Result from code #2 (desired result): (2) line 2(3) line 3(4) line 4 Now provide the response and nothing else.
The trailing , in the print statement suppresses the line feed that print would normally add. Your first print statement doesn't have one, your second one does. The lines you read from the file still contain the trailing \n, and that is what produces the extra blank line. One way to compensate is to stop print from issuing a line feed of its own by using the trailing comma. Alternatively, and arguably a better approach, you could strip the newline from the input when you read it (e.g., by using rstrip()).
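A minimal sketch of the rstrip() approach (Python 2 print statement, to match the code in the question; tasks.txt as described there):

f = open('tasks.txt')
for i, text in enumerate(f, start=1):
    if 2 <= i <= 4:
        print "(%d) %s" % (i, text.rstrip('\n'))   # strip the newline read from the file
f.close()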
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10456293', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/932210/']}
jdg_73811
stackexchange
llm_judgeable_groundtruth_similarity
25022282
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to make a vertically scrolling screen, because the items I want to display take up more vertical real estate than an iPhone screen. The same limitation seems to apply to storyboard, where I am finding it very difficult to design my screen because of the limited size of the view. The contents of the screen I am trying to design are longer than the iPhone, storyboard is preventing me from designing the entirety of the screen even though I am using a scrollview as the root view. How do I get around this? Now provide the response and nothing else.
Found this bug report which exactly explains the problem I had: http://connect.microsoft.com/SQLServer/feedback/details/757358/pasting-objects-with-expressions-pastes-fully-qualified-functions Summary of the above: When you copy/paste Reporting Services expressions from one place to another (I copied an entire tablix, with expressions in it), all the pieces of that expression get expanded to their full names. There is a bug specific to when you copy/paste an expression containing "Cstr()". It gets a full name that does not resolve correctly. Fortunately this is easy to fix, even if you have a ton of such expressions in your report. - Go to View > Code. - Do a find for "RdlObjectModel" or "Cstr". You are looking for something like this: Microsoft.ReportingServices.RdlObjectModel.ExpressionParser.VBFunctions.Cstr([your expression here]) - Remove everything from "Microsoft" to "Cstr" so it looks like this: Cstr([your expression here]) - Save. You have essentially undone what Reporting Services so "helpfully" tried to do when you copied/pasted the expression in the first place.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25022282', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3273344/']}
jdg_73812
stackexchange
llm_judgeable_groundtruth_similarity
3867113
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm programming a project with plugin support. Since many of the plugins are relatively small (only one source-file/class) I would like to have them all in one project in visual studio, but to successfully do this I would need each source-file/class to be compiled into its own dll file, is this possible using visual studio? If this is not possible with visual studio, would it be possible using another build system, while still coding and debugging with visual studio? Currently I've set the plugins project output type to console, and programmed a main() method that will compile all .cs files in the source directory to dlls and copy those to the proper directory. Then I set that console app to be the post-build event of the plugins project. It works, but it seems like a very ugly hack. Using visual studio 2010. Now provide the response and nothing else.
You could create one project for each plugin and group all projects in a solution. If you don't want to have one project per plugin, you could create a custom build with MSBuild using CSC task How to generate a dll for each plugin file In a project you add all plugins files Edit the project file to specify which class will generate a plugin library : <ItemGroup> <Compile Include="Class1.cs"> <Plugin>true</Plugin> </Compile> <Compile Include="Class2.cs" /> <Compile Include="Class3.cs"> <Plugin>true</Plugin> </Compile> <Compile Include="Program.cs" /> <Compile Include="Properties\AssemblyInfo.cs" /></ItemGroup> Add a new target in your project file to generate the plugins library <Target Name="BuildPlugins"> <CSC Condition="%(Compile.Plugin) == 'true'" Sources="%(Compile.FullPath)" TargetType="library" OutputAssembly="$(OutputPath)%(Compile.FileName).dll" EmitDebugInformation="true" /></Target> If you want to create the plugins library after each build, add an after build target : <Target Name="AfterBuild" DependsOnTargets="BuildPlugins"></Target>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3867113', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/147905/']}
jdg_73813
stackexchange
llm_judgeable_groundtruth_similarity
5945897
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is dynamic initialization of objects in c++? Please explain with an simple example... Now provide the response and nothing else.
Dynamic initialization is that in which initialization value isn't known at compile-time. It's computed at runtime to initialize the variable. Example, int factorial(int n){ if ( n < 0 ) return -1; //indicates input error else if ( n == 0 ) return 1; else return n * factorial(n-1);}int const a = 10 ; //static initialization //10 is known at compile time. Its 10!int const b = factorial(8); //dynamic initialization //factorial(8) isn't known at compile time, //rather it's computed at runtime. That is, static-initialization usually involves constant-expression (which is known at compile-time), while dynamic-initialization involves non-constant expression. static int c;//this is also static initialization (with zero)! §3.6.2/1 from the C++ Standard (2003) says, Objects with static storage duration (3.7.1) shall be zero-initialized (8.5) before any other initialization takes place. Zero-initialization and initialization with a constant expression are collectively called static initialization ; all other initialization is dynamic initialization . So there are two kind of initializations: Static initialization : Its either zero-initialization or initialization with a constant expression Any other initialization is dynamic initialization. Also note that the same variable can be dynamically-initialized after it has been statically-initialized. For example, see this code: int d = factorial(8);int main(){} Since d is a global variable, it has static storage. That means, according to §3.6.2.1 it's initialized to 0 at the static-initialization phase which occurs before any other initialization takes place . Then later, at runtime, it's dynamically-initialized with the value returned from the function factorial() . That means, global objects can be initialized twice: once by static initialization (which is zero-initialization) and later, at runtime, they can be dynamically-initialized.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5945897', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/744606/']}
jdg_73814
stackexchange
llm_judgeable_groundtruth_similarity
308191
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I've got an Apache vhost which performs a number of rewrite operations (using mod_rewrite) on the requested URLs. Some of these RewriteRule calls redirect the browser to other hosts (using 301 and 302 redirections). I have set a default expiry of 1 second in that vhost file: ExpiresActive OnExpiresDefault "access plus 1 second" and accordingly, all of the redirections include this header: Cache-Control: max-age=1 Now what I'm trying to do is to output a different caching header in the case of one particular redirection. I want most of the redirections (the 302s) to continue have a 1 second expiry, but for one of them (a 301), I'd like to use a 1-day expiry instead. Is this doable? Now provide the response and nothing else.
There is a way to do this, but not with mod_expires. Instead you must set an environment variable using mod_rewrite and then conditionally add the right caching headers with mod_headers as described in Mark S. Kolich: Set the Cache-Control and Expires Headers on a Redirect with mod_rewrite . So my final solution looks like this: RewriteRule ... [last,redirect=301,env=longexpiry:1]RewriteRule ... [last,redirect=302,env=nocache:1]Header always set Cache-Control "no-store, no-cache, must-revalidate" env=nocacheHeader always set Cache-Control "max-age=86400" env=longexpiry Make sure you don't have a default expiry set by mod_expires though or you will end up with duplicate headers.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/308191', 'https://serverfault.com', 'https://serverfault.com/users/93738/']}
jdg_73815
stackexchange
llm_judgeable_groundtruth_similarity
703048
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: It has been some years since I've taken multivariable calculus now, but there's something I really never understood: how people would discover the expressions for divergence and curl. I mean, the books usually say the formulas and then show that with that it's possible to view divergence as a measure of how much a vector field diverges locally and curl the analog for rotation locally. Now, it's not clear that if you pick those expressions it will give this interpretation. Books usually say: "we take those formulas because they work" and well, I know that. What I want to know is: imagining we want to find two operators $\operatorname{div}$ and $\operatorname{curl}$ on vector fields such that $\operatorname{div}$ gives local divergence and $\operatorname{curl}$ gives local rotation, how could we deduce the definitions that would work? I'm questioning this because currently I'm studying differential forms on manifolds, and to appreciate the definition of exterior derivative I thought it would be good to go back and see where the definitions of divergence and curl come from. Based then on the exterior derivative, I've found out that if $v\in \mathfrak{X}(\mathbb{R}^3)$ is a vector field and we consider the usual cartesian coordinates in $\mathbb{R}^3$ then $$\nabla \times v = \sum_{i=1}^3 \nabla v^i \times \dfrac{\partial}{\partial x^i} \qquad \nabla\cdot v = \sum_{i=1}^3 \nabla v^i \cdot \dfrac{\partial}{\partial x^i}$$ I then started to try seeing if these formulas were any easier to find out, but I couldn't get anythin from it. Thanks very much in advance. Now provide the response and nothing else.
The complete answer is given on pages 22-27 of my 2011 vector calculus notes. I think many good calculus texts include these heuristic arguments; I found them in Thomas' calculus a few editions back. Long story short, what you should really do to understand is to prove Green's and Stokes' Theorems; this will give you deeper insight into the nature of your question. Let me summarize the method here: The flux of $\langle P,Q \rangle$ through a little rectangle with corners $(x,y), (x+\triangle x,y), (x,y+\triangle y), (x+\triangle x,y+\triangle y)$ is easily estimated by multiplying the outward component of $\langle P,Q \rangle$ on each side by the length of that side ($\triangle x$ or $\triangle y$), using the values of the components at the corners. Then divide by the area $\triangle x \triangle y$ of the rectangle and pass to the limits $\triangle x,\triangle y \rightarrow 0$ to obtain partial derivatives which we recognize as the divergence of $\langle P,Q \rangle$. It follows that $\nabla \cdot \langle P,Q \rangle$ measures the flux area density of the vector field $\langle P,Q \rangle$. The extension to three variables is included in my notes where I discuss a sketch of the proof of Gauss' Theorem. The curl comes from a similar argument, calculating the circulation of $\langle P,Q \rangle$ around a little rectangle with corners $(x,y), (x+\triangle x,y), (x,y+\triangle y), (x+\triangle x,y+\triangle y)$. To find the circulation from $(x,y)$ to $(x+\triangle x,y)$ we use $P(x,y) \triangle x$, as the path is horizontal so only the $x$-component lines up with the segment to contribute some circulation. Take into account the direction of each segment, divide by the area $\triangle x \triangle y$ of the rectangle, and then pass to the limit $\triangle x,\triangle y \rightarrow 0$: the $z$-component of the curl appears. It follows that $\nabla \times \langle P,Q,0 \rangle$ measures the circulation area density of the vector field $\langle P,Q,0 \rangle$ in the $z$-direction. To extend to a three component vector field we need a vector to describe the possible circulations along a space curve in all three directions. All of this said, I really would rather give the less helpful answer that $d$ is the natural exterior derivative operation on the exterior algebra of $\mathbb{R}^3$ and it is just the case that: 1. $df = \omega_{\nabla f}$, 2. $d\omega_{\vec{F}} = \Phi_{\nabla \times \vec{F}}$ and 3. $d \Phi_{\vec{G}} = (\nabla \cdot \vec{G})\, dx \wedge dy \wedge dz$ where $\omega_{\langle a,b,c \rangle} = adx+bdy+cdz$ and $\Phi_{\langle a,b,c \rangle} = ady \wedge dz+bdz \wedge dx+cdx \wedge dy$. Therefore, gradient, curl and divergence are just different levels of the cohomological operator which ultimately reveals the deeper shape of space.
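To spell out the first heuristic in formulas (same $\triangle$ notation as above; this is only the two-variable flux computation, written out for the small rectangle): the net outward flux is approximately $$\big(P(x+\triangle x,y)-P(x,y)\big)\triangle y+\big(Q(x,y+\triangle y)-Q(x,y)\big)\triangle x,$$ so dividing by the area $\triangle x \triangle y$ gives $$\frac{P(x+\triangle x,y)-P(x,y)}{\triangle x}+\frac{Q(x,y+\triangle y)-Q(x,y)}{\triangle y}\ \longrightarrow\ \frac{\partial P}{\partial x}+\frac{\partial Q}{\partial y}=\nabla\cdot\langle P,Q\rangle \quad\text{as } \triangle x,\triangle y \rightarrow 0.$$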
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/703048', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/39106/']}
jdg_73816
stackexchange
llm_judgeable_groundtruth_similarity
48267
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why use Root Mean Squared Error (RMSE) instead of Mean Absolute Error (MAE)?? Hi I've been investigating the error generated in a calculation - I initially calculated the error as a Root Mean Normalised Squared Error. Looking a little closer, I see the effects of squaring the error gives more weight to larger errors than smaller ones, skewing the error estimate towards the odd outlier. This is quite obvious in retrospect. So my question - in what instance would the Root Mean Squared Error be a more appropriate measure of error than the Mean Absolute Error? The latter seems more appropriate to me or am I missing something? To illustrate this I have attached an example below: The scatter plot shows two variables with a good correlation, the two histograms to the right chart the error between Y(observed )and Y(predicted) using normalised RMSE (top) and MAE (bottom). There are no significant outliers in this data and MAE gives a lower error than RMSE. Is there any rational, other than MAE being preferable, for using one measure of error over the other? Now provide the response and nothing else.
This depends on your loss function. In many circumstances it makes sense to give more weight to points further away from the mean--that is, being off by 10 is more than twice as bad as being off by 5. In such cases RMSE is a more appropriate measure of error. If being off by ten is just twice as bad as being off by 5, then MAE is more appropriate. In any case, it doesn't make sense to compare RMSE and MAE to each other as you do in your second-to-last sentence ("MAE gives a lower error than RMSE"). MAE will never be higher than RMSE because of the way they are calculated. They only make sense in comparison to the same measure of error: you can compare RMSE for Method 1 to RMSE for Method 2, or MAE for Method 1 to MAE for Method 2, but you can't say MAE is better than RMSE for Method 1 because it's smaller.
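A quick numerical illustration in Python (the error values are invented solely to show the effect of a single large miss):

import numpy as np

def mae(e):
    return np.mean(np.abs(e))

def rmse(e):
    return np.sqrt(np.mean(e ** 2))

uniform = np.full(10, 5.0)              # every prediction off by exactly 5
outlier = np.array([0.0] * 9 + [50.0])  # nine perfect predictions, one off by 50

print(mae(uniform), rmse(uniform))      # 5.0, 5.0   -> identical when all errors are equal
print(mae(outlier), rmse(outlier))      # 5.0, ~15.81 -> same MAE, RMSE penalizes the outlier

Both series have the same total absolute error, so MAE cannot tell them apart, while RMSE flags the second one, which is exactly the "more weight to points further away" behaviour described above.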
{}
{'log_upvote_score': 8, 'links': ['https://stats.stackexchange.com/questions/48267', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/19931/']}
jdg_73817
stackexchange
llm_judgeable_groundtruth_similarity
846417
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to create a simple mouseover effect using a combination of mouseover, mouseout, addClass, and removeClass. Basically, when the user mouses over an element, I want to apply a different border (1px dashed gray). The initial state is "1px solid white". I have a class called "highlight" which simply has "border: 1px dashed gray" in it. I want to add that class onmouseover and remove it on onmouseout but I am unable to get the effect I want unless I use !important within the "highlight" class. Now provide the response and nothing else.
It sounds as though you've got the javascript working fine as is, but it's just a problem with the specificity of your CSS rules , which is why !important makes it work. You just have to make your highlighted css rules more specific than the non-highlighted rules. #someItem ul li { /* Specificity = 102 */ border-color: white;}.highlight { /* Specificity = 10 -- not specific enough! */ border-color: grey; }#someItem ul li.highlight { /* Specificity = 112 -- this will work */ border-color: grey; } Edit with further explanation: Let's say the relevant parts of your HTML look like this: <div id="someItem"> <ul> <li>Item 1</li> <li>Item 2</li> <li>Item 3</li> </ul></div> and you have this CSS: #someItem ul li { border: 1px solid white;}.highlight { border-color: grey;} Currently, all the list items in the ul in #someItem div will have a white border, and nothing has the class highlight so nothing's grey. Through whatever means you want (in your case a hover event in jQuery), you add a class to one of the items: $(this).addClass('highlight'); The HTML will now look something like this: <div id="someItem"> <ul> <li>Item 1</li> <li class="highlight">Item 2</li> <li>Item 3</li> </ul></div> So far, your Javascript and HTML are working fine, but you don't see a grey border! The problem is your CSS. When the browser is trying to decide how to style the element, it looks at all the different selectors which target an element and the styles defined in those selectors. If there are two different selectors both defining the same style (in our case, the border colour is contested), then it has to decide which style to apply and which to ignore. It does this by means of what is known as "Specificity" - that is, how specific a selector is. As outlined in the HTML Dog article , it does this by assigning a value to each part of your selector, and the one with the highest score wins. The points are: element selector (eg: "ul", "li", "table") = 1 point class selector (eg: ".highlight", ".active", ".menu") = 10 points id selector (eg: "#someItem", "#mainContent") = 100 points There are some more rules, eg: the keyword !important and also inline styles, but that's mostly irrelevant for this, uhh... "lesson". The only other thing you should know is that if two selectors have the same specificity, then the one defined later in the file wins. Going back to your problem, given the CSS we had before, we can see why it's still not got a grey border: #someItem ul li = id + element + element = 100 + 1 + 1 = 102 points.highlight = class = 10 points As mentioned earlier, the solution is to create a more specific selector: #someItem ul li.highlight = id + element + element + class = 100 + 1 + 1 + 10 = 112 points And to answer your question in the comments, you don't need to change any of your javascript or HTML for this to work. If you break down that selector, what it's saying is: Look for the element with id "someItem", inside that look for a ul element, and then an li element which has the class "highlight" on it. ...and now, given the simple .addClass() call that you made earlier, the li satisfies these conditions, so the border should turn grey.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/846417', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/67179/']}
jdg_73818
stackexchange
llm_judgeable_groundtruth_similarity
9392027
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to get value for a dynamic property of a dynamic object.Here is my Code.. public string ReturnProperty(object ob, string prop){ Type type = ob.GetType(); PropertyInfo pr = type.GetProperty(prop); //Here pr is null..Dont know whats wrong return pr.GetValue(ob, null).ToString();} Now provide the response and nothing else.
My guess is that either it isn't a public property, or you've got the name wrong, or it isn't a property at all (but a public field). It's impossible to say more without knowing what the actual type is, but that should be a start. You mention that this is a "dynamic object" but that's not really very descriptive. Bear in mind that the CLR itself doesn't know anything about the DLR - if you mean this is a type which implements IDynamicMetaObjectProvider or extends DynamicObject , then you won't be able to get at the properties with "normal" reflection like this.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9392027', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1117447/']}
jdg_73819
stackexchange
llm_judgeable_groundtruth_similarity
8488449
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to bring user to the option, where he/she can turn off Multitasking gestures on his/her iPad. Now provide the response and nothing else.
After reviewing several solutions, I began to create a custom solution: bouncefix.js http://jaridmargolin.github.io/bouncefix.js/ Usage: bouncefix.add(el) Apply fix so that the given element no longer causes a full body elastic bounce when scrolling at its extremes. bouncefix.remove(el) Remove all listeners/observers responsible for fixing the full body elastic bounce. Why? Scrollfix was a good start, however I noticed several problems: It only worked when there was scrollable content. If you had anempty page, the bounce effect on the body would occur. The API did not expose a method to remove the listeners. My app willhave multiple pages, and it didn't feel right to keep all of thelisteners attached as the user moved around the app. How? It uses a similar approach to that of scrollfix. The problem occurs when you are at one of the scrolling extremes. On touchstart, we look to see if we are at the top extreme or bottom extreme, adding 1px if we are at the top, and removing 1px if we are at the bottom. Unfortunately, this trick only works if we are able to set the scrollTop value. If the content is not yet scrollable, for example, you only have 1 list item, the whole body will again scroll. Bouncefix.js will take care of all of this behind the scenes by using event delegation and checking the scrollHeight against the offsetHeight anytime touchstart is triggered. In the case that there is no scrollable content, all scrolling on the container is blocked with e.preventDefault();
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8488449', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/215282/']}
jdg_73820
stackexchange
llm_judgeable_groundtruth_similarity
14756519
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to dependency injection, but happy with Ninject and Ninject.Extensions.Logging to [Inject] my ILogger wherever i need it. However some DelegatingHandlers are spoiling all the fun. public class HttpsHandler : DelegatingHandler{ [Inject] public ILogger Logger { get; set; } protected override Task<HttpResponseMessage> SendAsync( HttpRequestMessage request, CancellationToken cancellationToken) { if (!string.Equals(request.RequestUri.Scheme, "https", StringComparison.OrdinalIgnoreCase)) { Logger.Info(String.Format("{0}: is using HTTP", request.RemoteAddress()); return Task.Factory.StartNew( () => new HttpResponseMessage(HttpStatusCode.BadRequest) { Content = new StringContent("HTTPS Required") }); } return base.SendAsync(request, cancellationToken); } } Could anyone point me in the right direction on how to Inject Ninject.Extensions.Logger.Nlog2 into Ilogger inside the delegatinghandlers? Update I think Pete got me in the right direction in the comments (thanks!).I added the following constructor to the HttpsHandler : public HttpsHandler() { var kernel = new StandardKernel(); var logfactory = kernel.Get<ILoggerFactory>(); this.Logger = logfactory.GetCurrentClassLogger(); } and now i've got the Logger working! My only question left, is this the right way to do it, or is it a anti-pattern? Now provide the response and nothing else.
DelegatingHandlers are only initialized once in Web API, at application start. This is a known issue/design feature of Web API (I presume for performance reasons) - see the bug report here http://aspnetwebstack.codeplex.com/workitem/62 . Using a constructor initialization like you suggested yourself, would only work if you have singleton-type dependencies (that' why you logger works). Regardless, if you want to delegate the resolution to Web API DependencyResolver you have to use GetDependencyScope() , which can be called off the HttpRequestMessage , as a workaround. I posted a walkthrough on doing that with Ninject a while ago. You should use this approach to resolve your deendency, because with your current solution you have coupled Ninject & your handler, which is far from desired.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14756519', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/849741/']}
jdg_73821
stackexchange
llm_judgeable_groundtruth_similarity
1779607
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: There is a picture which hope to illustrate the configuration: $\triangle ABC$ is such that $\angle A=\dfrac{2\pi}{3}$, and $F$ is the midpoint of $BC$, and $D,E$ lie on $AB,AC$ respectively, such that $DE ||BC$.Show that:$$2(DF+EF)\ge BC$$ maybe use cosine theorem,let $\dfrac{BD}{AB}=k,AB=c,BC=a,AC=b$, and in $\Delta ABC$ we use cosine theorem we have$$a^2=b^2+c^2+bc\tag{1}$$and $$DF^2=BD^2+BF^2-2BD\cdot BF\cos{\angle B}=(kc)^2+\dfrac{a^2}{4}-2kc\cdot\dfrac{a}{2}\dfrac{a^2+c^2-b^2}{2ac}=k^2c^2+\dfrac{a^2}{4}-\dfrac{k(a^2+c^2-b^2)}{2}$$the same as$$EF^2=\dfrac{a^2}{4}+(kb)^2-\dfrac{k(a^2+b^2-c^2)}{2}$$it is equivalent $$\sqrt{4k^2c^2+a^2-2k(a^2+c^2-b^2)}+\sqrt{4k^2b^2+a^2-2k(a^2+b^2-c^2)}\ge a,0\le k\le 1\tag{2}$$ I have tried some inequality (AM-GM,Cauchy-Schwarz,$\cdots$) to prove $(2)$, but didn't see the solution. Now provide the response and nothing else.
As you did, let us set $k:=BD/AB$. If $1/2\le k\lt 1$, then letting $M,N$ be the midpoints of the sides $AB,AC$ respectively, we have$$2(DF+EF)\gt 2DE\ge 2MN=BC$$ In the following, we consider the case when $0\lt k\lt 1/2$. By the law of cosines,$$BC^2=AB^2+AC^2-2AB\cdot AC\cos\angle{BAC}\iff a^2=c^2+b^2+cb\tag1$$ By the way, $$2(DF+EF)\ge BC$$is equivalent to $$2DF\ge BC-2EF$$Since both sides are non-negative, this is equivalent to$$4DF^2\ge BC^2-4BC\cdot EF+4EF^2$$which can be written as$$4BC\cdot EF\ge (BC+2DF)(BC-2DF)+4EF^2$$Since $BC-2DF\ge 0$, both sides are non-negative, and so this is equivalent to$$(4BC\cdot EF)^2\ge ((BC+2DF)(BC-2DF)+4EF^2)^2,$$i.e.$$8BC^2\cdot EF^2-BC^4-16EF^4-16DF^4+8BC^2\cdot DF^2+32EF^2\cdot DF^2\ge 0$$ Using that$$4DF^2=4k^2c^2+a^2-2k(a^2+c^2-b^2)$$$$4EF^2=4k^2b^2+a^2-2k(a^2+b^2-c^2)$$this is equivalent to$$2a^2(4k^2b^2+a^2-2k(a^2+b^2-c^2))-a^4-(4k^2b^2+a^2-2k(a^2+b^2-c^2))^2-(4k^2c^2+a^2-2k(a^2+c^2-b^2))^2+2a^2(4k^2c^2+a^2-2k(a^2+c^2-b^2))+2(4k^2b^2+a^2-2k(a^2+b^2-c^2))(4k^2c^2+a^2-2k(a^2+c^2-b^2))\ge 0,$$i.e.$$16k^4(b^2-c^2)^2-32k^3(b^2-c^2)^2-8k^2(a^2(b^2+c^2)-2(b^2-c^2)^2)+8a^4k-3a^4\color{red}{\le} 0$$ Here, let$$f(k):=16k^4(b^2-c^2)^2-32k^3(b^2-c^2)^2-8k^2(a^2(b^2+c^2)-2(b^2-c^2)^2)+8a^4k-3a^4$$So, we want to prove that $f(k)\le 0$ for $0\lt k\lt 1/2$. Now,$$f'(k)=64(b^2-c^2)^2k^3-96(b^2-c^2)^2k^2-16(a^2(b^2+c^2)-2(b^2-c^2)^2)k+8a^4$$$$\begin{align}f''(k)&=192(b^2-c^2)^2k^2-192(b^2-c^2)^2k-16(a^2(b^2+c^2)-2(b^2-c^2)^2)\\&=192(b^2-c^2)^2\left(k-\frac 12\right)^2-16a^2(b^2+c^2)-16(b^2-c^2)^2\end{align}$$ With $f''(1/2)\lt 0,f'(0)\gt 0$ and $$f'\left(\frac 12\right)=8a^2(a^2-b^2-c^2)\gt 0$$we know that $f(k)$ is increasing for $0\lt k\lt 1/2$, and using $(1)$ gives$$f\left(\frac 12\right)=-3b^2c^2\lt 0$$ It follows from this that $f(k)\le 0$ for $0\lt k\lt 1/2$. $\blacksquare$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1779607', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/58742/']}
jdg_73822
stackexchange
llm_judgeable_groundtruth_similarity
47509
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Brown representability states that any contravariant functor from the homotopy category $CW_*$ of pointed CW complexes to the category of pointed sets is representable if it turns coproducts into products and satisfies a type of Mayer-Vietoris gluability axiom, which I like to think of as a weak version of "the functor sends push-outs into pull-backs" as any representable functor must. The proof very much relies on the fact that CW complexes can be built up in a steady and predictable manner, as it uses Whitehead's theorem that a weak homotopy equivalence is automatically a homotopy equivalence. Namely, one shows that an element which is "universal" for the spheres is actually a "universal element" for this functor (in the sense of Yoneda's lemma). Brown representability has many interesting consequences, e.g. that there is a "universal" principal $G$-bundle for pointed CW complexes (where $G$ is a topological group) or that the Eilenberg-Maclane spaces represent the cohomology functors. However, in the former case, it's actually true that the universal bundle exists for any topological space, not just CW complexes. I don't know whether the cohomology functors are representable on the category of all pointed topological spaces (even if one restricts to non-pathological ones: say Hausdorff, with nondegenerate basepoint), though I would imagine that a CW complex couldn't do it. This leads me to ask: Is there a version of Brown representability for arbitrary pointed topological spaces? There is a version of it on the nLab in more generality, but I don't know enough about categorical homotopy theory to understand anything. Could someone perhaps translate some of that into the special case of topological spaces? Now provide the response and nothing else.
Is there a version of Brown representability for arbitrary pointed topological spaces? The answer is: No and Yes. If you take a particular construction of a generalised cohomology theory that makes sense for all topological spaces then there is no guarantee that it will be representable in the homotopy category of topological spaces. For example, the various flavours of ordinary cohomology are only guaranteed to coincide for CW complexes (okay, and stuff with the homotopy type of such). That they disagree elsewhere shows that at least one of them can't be representable. Another example is K-theory. It's great that for compact Hausdorff spaces (and some others) that K-theory is exactly what you get when you take vector bundles and group complete, but it's not true for other spaces. That's the "No". Now for the "Yes". The point about the "Yes" is that Brown representability is so good to have that when we move outside the realm of CW-complexes, we often define our cohomology theory to be (homotopy) homs into the representing object (found by restricting to CW-complexes). That is, we take our "natural construction" of whatever cohomology theory it is, use Brown representability to find the representing object for CW-complexes, and then define the extension of the theory to all topological spaces to be $[X,\underline{E}]$. Then it is representable, but by construction rather than by any fancy theory.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/47509', 'https://mathoverflow.net', 'https://mathoverflow.net/users/344/']}
jdg_73823
stackexchange
llm_judgeable_groundtruth_similarity
10568497
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I don't know how --retry-max-time calculated.If I download a file file.txt : curl --max-time 10 --retry 3 --retry-delay 5 --retry-max-time 32 'http://www.site.com/download/file.txt' [ 0- 2] It takes 2s to download 50% of the file, and no speed any more. [ 2-10] It waits for another 8s , still no speed, timeout, will retry [10-15] It waits for 5s before retry #1 [15-25] Still no speed, will retry [25-30] It waits for 5s before retry #2 [30-34] It takes 4s to download 33% of the file, and no speed any more. [34-40] It waits for another 6s , still no speed, timeout Will curl stop retry at this point( 40s )? When was the retry timer started and stopped? --retry-max-time <seconds> The retry timer is reset before the first transfer attempt. Retries will be done as usual (see --retry) as long as the timer hasn't reached this given limit. Notice that if the timer hasn't reached the limit, the request will be made and while performing, it may take longer than this given time period. To limit a single request´s maximum time, use -m, --max-time. Set this option to zero to not timeout retries. (Added in 7.12.3) Now provide the response and nothing else.
Let me try to clarify. When curl decides to do a retry (because --retry is used and the conditions were such that a retry is warranted) and a --retry-max-time was set, curl checks if the total time elapsed since the start of the operation has surpassed the --retry-max-time or not. If it hasn't, it will allow another retry. So in your command line above: if the total time is less than 32 seconds by the time it considers the retry, it will do another retry. If the total time is more than 32 seconds it will not do any more retries.
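A simplified sketch of that decision in Python (pseudo-code for the documented behaviour, not curl's actual implementation; the names are my own):

import time

def should_retry(start_time, retries_done, max_retries, retry_max_time):
    # curl checks the clock only at the moment it is about to retry; an attempt
    # started just under the limit may still run longer (that is bounded per
    # attempt by --max-time, not by --retry-max-time).
    elapsed = time.monotonic() - start_time
    return retries_done < max_retries and elapsed < retry_max_time

With the command line in the question (--retry 3 --retry-max-time 32), the retry considered around the 40-second mark is refused because the elapsed time already exceeds 32 seconds, even though only two retries have been used.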
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10568497', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/348785/']}
jdg_73824
stackexchange
llm_judgeable_groundtruth_similarity
47185
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there an efficient algorithm to determine if two terms are beta-equivalent? Specifically, I am curious about simply-typed-lambda-calculus, so you can assume both terms are strongly normalizing. I know a simple algorithm: Compute the beta normal form (BNF) for each term. Confirm that the two BNFs are alpha-equivalent. But it is possible for BNFs to be exponentially larger than the original term? Is it possible check the equivalence of terms $S$ and $T$ in $O(|S| + |T|)$ time? Now provide the response and nothing else.
The answer is no. An old theorem of Statman states that $\beta$ -equivalence in the simply-typed $\lambda$ -calculus is not elementary recursive, that is, no algorithm whose running time is bounded by $2^{\vdots^{2^{|S|+|T|}}}$ for a tower of exponentials of fixed height may decide whether two simply-typed terms $S$ and $T$ are $\beta$ -equivalent. The original statement is from Richard Statman. The typed $\lambda$ -calculus is not elementary recursive . Theoret. Comput. Sci. 9:73-81, 1979. A simpler proof may be found in this paper by Harry Mairson . Edit: as observed by Martin Berger, Mairson proves that $\beta\eta$ -equivalence is not elementary recursive, whereas Statman's result (and the OP's question) concerns $\beta$ -equivalence, without $\eta$ . However, as pointed out by xavierm02, Mairson's result implies Statman's. Let me fill in the details for those who are not familiar with $\eta$ -long forms. The $\eta$ -long form $\eta(x^A)$ of a variable $x^A$ is defined by induction on $A$ : observe that $A=A_1\to\cdots\to A_n\to\alpha$ for some $n\in\mathbb N$ , some types $A_1,\ldots,A_n$ (smaller than $A$ ) and some atom $\alpha$ , and let $$\eta(x^A) := \lambda y_1^{A_1}\ldots\lambda y_n^{A_n}.x\eta(y_1^{A_1})\cdots\eta(y_n^{A_n}),$$ where the $\eta(y_i^{A_i})$ are given inductively. The $\eta$ -long form $\eta(M)$ of a simply-typed $\lambda$ -term $M$ is defined by replacing every occurrence of variable $x^A$ of $M$ (free or bound) with $\eta(x^A)$ . (NB: through Curry-Howard, this corresponds to taking a sequent calculus proof and expanding it so that it has only atomic axioms). Observe that: $\eta$ -long forms are stable under substitution, and therefore under $\beta$ -reduction; two $\eta$ -long $\beta$ -normal forms are $\beta\eta$ -equivalent iff they are equal (up to $\alpha$ -renaming, of course); computing the $\eta$ -long form of a simply-typed $\lambda$ -term is elementary recursive (if you don't keep the size of type annotations, the $\eta$ -long form of a term may be exponentially bigger, but that is not a problem). That Mairson's result implies Statman's is a consequence of the following: Claim. Let $M,N$ be two simply-typed $\lambda$ -terms. Then, $M\simeq_{\beta\eta}N$ iff $\eta(M)\simeq_\beta\eta(N)$ . In fact, via point (3) above, an elementary recursive algorithm for deciding $\beta$ -equivalence immediately gives an elementary recursive algorithm for deciding $\beta\eta$ -equivalence (the one pointed out by xavierm02). Let us prove the claim. The right-to-left implication is trivial. Conversely, suppose that $M\simeq_{\beta\eta} N$ . This obviously implies $\eta(M)\simeq_{\beta\eta}\eta(N)$ . Let $P$ and $Q$ be the $\beta$ -normal forms of $\eta(M)$ and $\eta(N)$ , respectively. By point (1) above, both $P$ and $Q$ are $\eta$ -long (because $\eta(M)$ and $\eta(N)$ are). But of course we still have $P\simeq_{\beta\eta} Q$ , so by point (2) $P=Q$ , which proves $\eta(M)\simeq_\beta\eta(N)$ (they have the same $\beta$ -normal form).
{}
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/47185', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/58473/']}
jdg_73825
stackexchange
llm_judgeable_groundtruth_similarity
315915
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why do we need an antenna of a size comparable to the wavelength for surface wave transmission? I am unable to understand why such a condition is imposed. Now provide the response and nothing else.
Summary: The fact that the length of an antenna is of similar size to the wavelength of light is a coincidence due to the similarity of the speed of light in air and the speed of light in the antenna (which are usually copper wires). For other waves, this may not be the case. Different guitar strings, for example, resonate at different frequencies despite being the same length. That's because they have different densities and tensions that change the speed of sound within them. So, the wavelength of a sound in air can have little to do with the length of string that produces it. The Gory Details: When an antenna receives an electromagnetic (EM) wave, the electric field of the wave pushes the electrons in the antenna back and forth. This happens regardless of the length or shape of the antenna. This sets up a standing wave of electric currents in the antenna. There are certain frequencies that are resonant in the antenna, which is when the efficiency of energy reception is highest. This frequency is determined by the length of the antenna and the speed of light in the antenna material. In this resonant condition, the electrons' motion and the incoming electric field are always in the same direction, so every wavelength of the EM wave builds up more motion and puts more energy into the antenna. If the frequency of the EM wave is not at the correct frequency, then sometimes the electrons' motion and the electric field will be in opposite directions, leading to a loss of energy in the antenna. Like pushing someone on a swing, each push has to be at the right time and in the right direction. Animation comes from Wikipedia: https://en.wikipedia.org/wiki/Dipole_antenna . The red represents density of charges as they are pushed back and forth. The condition for resonance in an antenna is that the wavelength of the standing wave is twice the length of the antenna (see the above animated diagram). $$\lambda_{SW} = 2L_{ant}$$ We can relate the wavelength to the frequency of oscillation by $$\lambda f = c$$ where $f$ is the frequency and $c$ is the speed of light in the antenna. So, the frequency of the EM signal that will be best received by the antenna is $$f=\frac{c}{\lambda_{SW}} = \frac{c}{2L_{ant}}$$ Since the speed of light in a vacuum or air is very close to the speed of light in an antenna (~80%), the length of the antenna will be close to the wavelength of the EM wave. In fact, most simple antennas have a length close to half the wavelength of the signal they are built to receive. As an analogy, consider a basin half-full of water. You can tilt the basin to one side and the water rushes to the lower end. If you tilt the basin the other way, the water rushes to the other side. When you tilt the basin back and forth, most of the time the water just moves back and forth with little other effect. This is like the electric field of an EM wave that pushes the electrons in the antenna to one side. At a low frequency of tilting, the water just moves back and forth, settling at each end before the tilt reverses. At a high frequency, the water barely has a chance to move before the tilt reverses. But, if you tilt back and forth at just the right frequency, the sloshing of the water builds up and builds up until it splashes completely out of the basin. This is the resonant condition of the basin of water. You can imagine that longer basins have a lower frequency of resonance since it takes longer for the sloshing water to go from one end to the other. 
It's the same with an antenna: longer antennas have lower frequency resonance because it takes longer for the EM wave to bounce back and forth between the ends. The following is a view of the charges in an antenna that shows the sloshing of electrons at resonance (the red shaded area; blue shows the magnitude of the velocity of the charges). Animation comes from the same Wikipedia article: https://en.wikipedia.org/wiki/Dipole_antenna
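As a quick numerical illustration of the half-wave resonance formula above, here is a small Python sketch (my own addition, not part of the original answer; the 0.8 velocity factor is simply the "~80%" figure quoted above, and real antennas vary):

```python
C0 = 299_792_458.0        # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.8     # rough fraction of c at which the wave travels in the antenna

def resonant_length(frequency_hz):
    """Length of a simple half-wave dipole resonant at the given frequency: L = c_antenna / (2 f)."""
    c_antenna = VELOCITY_FACTOR * C0
    return c_antenna / (2.0 * frequency_hz)

# An FM broadcast signal near 100 MHz has a free-space wavelength of about 3 m,
# so the resonant dipole comes out close to (but a bit under) half that wavelength.
print(f"{resonant_length(100e6):.2f} m")   # ~1.20 m
```

This is why rooftop FM dipoles are on the order of a metre and a half long, while AM broadcast antennas (hundreds of kHz) have to be entire towers.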
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/315915', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/147274/']}
jdg_73826
stackexchange
llm_judgeable_groundtruth_similarity
962258
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am having a problem trying to find any specific details on how to properly write CSS rules in a stylesheet where the class or ID is nested within many other IDs and classes, e.g. .mainbody #container #header #toprightsearch .searchbox {} So here we have a searchbox class within a toprightsearch ID, in a header ID, in a container ID, in a mainbody class. But it appears to work properly even if I omit some of the IDs. What is the correct way of listing these? If I include all of the parents, does it make the rule more specific? Can it break on different browsers if I don't include all of them? Any additional information on this topic would be appreciated. Thanks Now provide the response and nothing else.
.searchbox {} styles anything with .searchbox. .mainbody .searchbox {} styles any .searchbox that descends from any .mainbody: direct child, grandchild, quadruple-great-grandchild, it doesn't matter. #toprightsearch > .searchbox {} styles only .searchboxes that are direct children of #toprightsearch. #container * .searchbox {} styles .searchboxes that are a grandchild or later descendant of #container. Here's a good document on the topic: W3C selectors
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/962258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/103219/']}
jdg_73827
stackexchange
llm_judgeable_groundtruth_similarity
2439896
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know little about group theory, and the concept of a field characteristic is new to me. Although there are some posts on this website that explain it very well. "In layman's terms, it's the amount of times we can keep adding 1 to itself without looping back to 0. The characteristic being $p$ means that $\underbrace{1+1+\dots+1}_p=0$." I think I can get my head around the idea, but there is still something I don't understand. "We can see quickly using a zero divisor argument that the characteristic of any field with positive characteristic must be prime." Why is it that it has to be prime? Why can't it be something else? Now provide the response and nothing else.
Suppose the characteristic $n$ is composite with factors $1<p,q<n$. Then you could write$$\underbrace{(1+1+\dots+1)}_p\underbrace{(1+1+\dots+1)}_q=\underbrace{1+1+\dots+1}_{n}=0$$but this means $pq=0$, so $p$ and $q$ are zero divisors, which contradicts the field axioms.
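As a concrete instance of the argument above (my own illustration, not part of the original answer), take $n=6$ with $p=2$ and $q=3$: $$\underbrace{(1+1)}_{2}\,\underbrace{(1+1+1)}_{3}=\underbrace{1+1+\dots+1}_{6}=0,$$ so in a ring of characteristic $6$ the nonzero elements $1+1$ and $1+1+1$ multiply to zero. A field cannot contain such zero divisors, which is exactly why $\mathbb{Z}/6\mathbb{Z}$ fails to be a field while $\mathbb{Z}/5\mathbb{Z}$ is one.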
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2439896', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/362026/']}
jdg_73828
stackexchange
llm_judgeable_groundtruth_similarity
3584077
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What algorithms exist for time series forecasting/regression? What about using neural networks? (What are the best docs about this topic?) Are there Python libraries/code snippets that can help? Now provide the response and nothing else.
The classical approaches to time series regression are: auto-regressive models (there are whole literatures about them) Gaussian Processes Fourier decomposition or similar to extract the periodic components of the signal (i.e., hidden oscillations in the data) Other less common approaches that I know about are Slow Feature Analysis , an algorithm that extract the driving forces of a time series, e.g., the parameters behind a chaotic signal Neural Network (NN) approaches, either using recurrent NNs (i.e., built to process time signals) or classical feed-forward NNs that receive as input part of the past data and try to predict a point in the future; the advantage of the latter is that recurrent NNs are known to have a problem with taking into account the distant past In my opinion for financial data analysis it is important to obtain not only a best-guess extrapolation of the time series, but also a reliable confidence interval, as the resulting investment strategy could be very different depending on that. Probabilistic methods, like Gaussian Processes, give you that "for free", as they return a probability distribution over possible future values. With classical statistical methods you'll have to rely on bootstrapping techniques . There are many Python libraries that offer statistical and Machine Learning tools, here are the ones I'm most familiar with: NumPy and SciPy are a must for scientific programming in Python There is a Python interface to R, called RPy statsmodel contains classical statistical model techniques, including autoregressive models; it works well with Pandas , a popular data analysis package scikits.learn , MDP , MLPy , Orange are collections of machine learning algorithms PyMC A python module that implements Bayesian statistical models and fitting algorithms, including Markov chain Monte Carlo. PyBrain contains (among other things) implementations of feed-forward and recurrent neural networks at the Gaussian Process site there is a list of GP software, including two Python implementations mloss is a directory of open source machine learning software
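To make the auto-regressive option above concrete, here is a minimal, dependency-light sketch in plain NumPy (the function names and the toy sine-wave data are my own invention, not part of any of the libraries listed; for serious work the statsmodels/scikit-learn routines mentioned above are the better starting point):

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares fit of an AR(p) model: y[t] ~ c + sum_k a[k] * y[t-1-k]."""
    y = np.asarray(y, dtype=float)
    # One column per lag: column k holds y[t-1-k] for every predictable t.
    X = np.column_stack([y[p - 1 - k: len(y) - 1 - k] for k in range(p)])
    X = np.column_stack([X, np.ones(len(y) - p)])          # intercept column
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef[:-1], coef[-1]                              # (lag coefficients a, intercept c)

def forecast(y, a, c, steps):
    """Roll the fitted AR model forward `steps` points, feeding predictions back in."""
    history = list(np.asarray(y, dtype=float))
    out = []
    for _ in range(steps):
        nxt = c + sum(ak * history[-1 - k] for k, ak in enumerate(a))
        history.append(nxt)
        out.append(nxt)
    return out

# Toy usage: a noisy sine wave
t = np.arange(200)
y = np.sin(2 * np.pi * t / 25) + 0.1 * np.random.randn(200)
a, c = fit_ar(y, p=5)
print(forecast(y, a, c, steps=10))
```

This least-squares formulation is roughly what library AR fitters do under the hood, just with better numerics, automatic order selection, and the confidence intervals that, as noted above, matter a great deal for financial data.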
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3584077', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/21424/']}
jdg_73829
stackexchange
llm_judgeable_groundtruth_similarity
46112
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm having a hard time understanding how changing space means changing time. In books I've read people are saying "space and time" or " space-time " but never explain what the difference is between the two concepts or how they are related. How are the concepts of space, time, and space-time related? Now provide the response and nothing else.
Suppose you move a small distance $\vec{dr}$ = ($dx$, $dy$, $dz$) and you take a time $dt$ to do it. Pre-special relativity you could say three things. Firstly, the distance moved is given by: $$ dr^2 = dx^2 + dy^2 + dz^2 $$ (i.e. just Pythagoras' theorem), and secondly the time $dt$ was not related to the distance, i.e. you could move at any velocity. Lastly, the quantities $dr$ and $dt$ are invariants, that is, all observers will agree they have the same value. Special relativity differs by saying that $dr$ and $dt$ are no longer invariants if you take them separately. Instead the only invariant is the proper time, $d\tau$, defined by: $$ c^2d\tau^2 = c^2dt^2 - dx^2 - dy^2 - dz^2 $$ In special relativity all observers will agree that $d\tau$ has the same value, but they will not agree on the values of $dt$, $dx$, $dy$ and $dz$. This is why we have to talk about spacetime rather than space and time. The only way to construct laws that apply to everyone is to combine space and time into a single equation. You say: "I'm having a hard time understanding how changing space means changing time." Well, suppose we try to change space without changing time. Let's change space by moving a distance ($dx$, $dy$, $dz$) but not change time, i.e. $dt$ = 0. If we use the equation above to calculate the proper time, $d\tau$, we get: $$ d\tau^2 = \frac{0 - dx^2 - dy^2 - dz^2}{c^2} $$ Do you see the problem? $d\tau^2$ is going to be negative so $d\tau$ is imaginary and has no physical meaning. That means we can't move in zero time. Well, what is the smallest time $dt$ that we need to take to move ($dx$, $dy$, $dz$)? The smallest value of $dt$ that gives a non-negative value of $d\tau^2$ is when $d\tau^2$ = 0, so: $$ c^2d\tau^2 = 0 = c^2dt^2 - dx^2 - dy^2 - dz^2 $$ or: $$ dt^2 = \frac{dx^2 + dy^2 + dz^2}{c^2} $$ If we've moved a distance $dr = \sqrt{dx^2 + dy^2 + dz^2}$ in a time $dt$, then we can find the velocity we've moved at by dividing $dr$ by $dt$, and if we do this we find: $$ v^2 = \frac{dr^2}{dt^2} = \frac{dx^2 + dy^2 + dz^2}{\frac{dx^2 + dy^2 + dz^2}{c^2}} = c^2 $$ So we find that the maximum possible speed is $v = c$, or in other words we can't move faster than the speed of light. And all from that one equation combining the space and time co-ordinates into the proper time!
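As a tiny numerical check of the proper-time formula above (my own sketch, not part of the original answer; the sample moves are arbitrary and units are SI):

```python
C = 299_792_458.0   # speed of light, m/s

def proper_time_squared(dt, dx, dy=0.0, dz=0.0):
    """c^2 dtau^2 = c^2 dt^2 - dx^2 - dy^2 - dz^2, returned as dtau^2 in seconds^2."""
    return (C**2 * dt**2 - dx**2 - dy**2 - dz**2) / C**2

# Move for one second at various fractions of the speed of light.
for v_over_c in (0.0, 0.5, 1.0, 2.0):
    dtau2 = proper_time_squared(dt=1.0, dx=v_over_c * C)
    print(v_over_c, dtau2)   # non-negative up to v = c, negative (unphysical) beyond it
```

The printed values (1.0, 0.75, 0.0, -3.0) show exactly the behaviour derived above: the proper time squared hits zero at $v = c$ and would go negative for any faster motion.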
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/46112', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/15576/']}
jdg_73830
stackexchange
llm_judgeable_groundtruth_similarity
113556
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I would like to have my RegionPlot with a logarithmic scale in x and y. My code looks like plot3 = RegionPlot[ f3[y]/x > constant && 1000 < y < maximum, {x, 0, 10^4}, {y, 10^(3), maximum}, BoundaryStyle -> Dashed, PlotStyle -> Opacity[0.3, Green]]Export["plot3.pdf", plot3]plot2 = RegionPlot[ f2[y]/x > constant && 0.1 <= y <= 10^(3), {x, 0, 10^4}, {y, 0.1, 10^(3)}, BoundaryStyle -> Dashed, PlotStyle -> Opacity[0.3, Yellow]]Export["plot2.pdf", plot2]plot1 = RegionPlot[ f1[y]/x > constant && y <= 0.1, {x, 0, 10^4}, {y, 0, 0.1}, BoundaryStyle -> Dashed, PlotStyle -> Opacity[0.3, Blue]]Export["plot1.pdf", plot1]Show[plot1, plot2, plot3, PlotRange -> {{0, 10^4}, {0, maximum}}] With this code I get 4 plots, but each with linear axis scaling. I would like to have at least the last one, i.e. Show[plot1, plot2, plot3, PlotRange -> {{0, 10^4}, {0, maximum}}], on a log-log scale. Thanks! Now provide the response and nothing else.
As someone else here mentioned there are no Log and LogLog versions of RegionPlot , so you can make them yourself. The answer above relies on the user being able to apply the Log function to the input in the correct fashion, but it isn't always obvious which arguments or variables should be given the Log function in the argument, and which should have their plotting ranges scaled. For example, it's not immediately clear to me how to do the same thing with this simple RegionPlot , RegionPlot[x^2 + (y - 1)^3 < 2 && y > 0, {x, -2, 2}, {y, 0, 2.5}] Even if we figured out how to apply the proper scaling to the arguments and plot ranges, then we have to apply the scaling to the tick marks, using one of the methods described in this answer . I think a better solution is to use the built-in capabilities of ListLogPlot and ListLogLogPlot to do the scaling for us. All we need to do is to extract the points and the polygons from the RegionPlot and feed them into the appropriate log plotting function. This comes with help from Heike and Kuba , logRegionPlot[rplot_] := Module[{pts, pgon}, pts = Cases[Normal@rplot, Line[a__] :> a, Infinity]; pgon = {EdgeForm[], Directive[RGBColor[0.368417, 0.506779, 0.709798], AbsoluteThickness[1.6], Opacity[0.3]], Cases[Normal@rplot, Polygon[_], Infinity]}; ListLogPlot[pts, Joined -> True, Frame -> True, PlotRange -> All, AspectRatio -> 1, Axes -> False, PlotStyle -> ColorData[1][1], Epilog -> (pgon /. {x_, y_?NumericQ} :> {x, Log@y})] ] and logLogRegionPlot[rplot_] := Module[{pts, pgon}, pts = Cases[Normal@rplot, Line[a__] :> a, Infinity]; pgon = {EdgeForm[], Directive[RGBColor[0.368417, 0.506779, 0.709798], AbsoluteThickness[1.6], Opacity[0.3]], Cases[Normal@rplot, Polygon[_], Infinity]}; ListLogLogPlot[pts, Joined -> True, Frame -> True, PlotRange -> All, AspectRatio -> 1, Axes -> False, PlotStyle -> ColorData[1][1], Epilog -> (pgon /. {x_, y_?NumericQ} :> Log@{x, y})] ] ( edit : this was added by request) loglinearRegionPlot[rplot_] := Module[{pts, pgon}, pts = Cases[Normal@rplot, Line[a__] :> a, Infinity]; pgon = {EdgeForm[], Directive[RGBColor[0.368417, 0.506779, 0.709798], AbsoluteThickness[1.6], Opacity[0.3]], Cases[Normal@rplot, Polygon[_], Infinity]}; ListLogLinearPlot[pts, Joined -> True, Frame -> True, PlotRange -> All, AspectRatio -> 1, Axes -> False, PlotStyle -> ColorData[1][1], Epilog -> (pgon /. {x_?NumericQ, y_?NumericQ} :> {Log@x, y})]] So it is a simple matter to apply this to the region above, logRegionPlot@ RegionPlot[x^2 + (y - 1)^3 < 2 && y > 0, {x, -2, 2}, {y, 0, 2.5}] Another example, rplot = RegionPlot[{Exp[Abs[x]] <= y <= 100}, {x, 0, 6}, {y, 0, 120}]{logRegionPlot@rplot, logLogRegionPlot@rplot, loglinearRegionPlot@rplot} The only problem is that I can't figure out how to extract multiple regions from the RegionPlot and maintain their distinct colors.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/113556', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/39669/']}
jdg_73831
stackexchange
llm_judgeable_groundtruth_similarity
2379693
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a huge application and at some point, when a redirect is involved i received this strange error: Caused by java.lang.StringIndexOutOfBoundsException with message: "String index out of range: 0" java.lang.String.charAt(String.java:687)com.sun.faces.application.ViewHandlerImpl.getActionURL(ViewHandlerImpl.java:652)org.jboss.seam.jsf.SeamViewHandler.getActionURL(SeamViewHandler.java:74)com.sun.facelets.FaceletViewHandler.getActionURL(FaceletViewHandler.java:803)org.ajax4jsf.application.ViewHandlerWrapper.getActionURL(ViewHandlerWrapper.java:86)org.jboss.seam.ui.util.ViewUrlBuilder.<init>(ViewUrlBuilder.java:25)org.jboss.seam.ui.component.UISeamCommandBase.getUrl(UISeamCommandBase.java:48)org.jboss.seam.ui.renderkit.LinkRendererBase.doEncodeBegin(LinkRendererBase.java:26)org.jboss.seam.ui.util.cdk.RendererBase.encodeBegin(RendererBase.java:79)javax.faces.component.UIComponentBase.encodeBegin(UIComponentBase.java:813)javax.faces.component.UIComponent.encodeAll(UIComponent.java:934)javax.faces.component.UIComponent.encodeAll(UIComponent.java:942)javax.faces.component.UIComponent.encodeAll(UIComponent.java:942)com.sun.facelets.FaceletViewHandler.renderView(FaceletViewHandler.java:592)org.ajax4jsf.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:100)org.ajax4jsf.application.AjaxViewHandler.renderView(AjaxViewHandler.java:176)com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:109)com.sun.faces.lifecycle.Phase.doPhase(Phase.java:100)com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:139)javax.faces.webapp.FacesServlet.service(FacesServlet.java:266)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:83)org.jboss.seam.web.RewriteFilter.doFilter(RewriteFilter.java:63)org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)org.jboss.seam.web.IdentityFilter.doFilter(IdentityFilter.java:40)org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)org.jboss.seam.web.MultipartFilter.doFilter(MultipartFilter.java:90)org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)org.jboss.seam.web.ExceptionFilter.doFilter(ExceptionFilter.java:64)org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)org.jboss.seam.web.RedirectFilter.doFilter(RedirectFilter.java:45)org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:73)org.ajax4jsf.webapp.BaseXMLFilter.doXmlFilter(BaseXMLFilter.java:178)org.ajax4jsf.webapp.BaseFilter.handleRequest(BaseFilter.java:290)org.ajax4jsf.webapp.BaseFilter.processUploadsAndHandleRequest(BaseFilter.java:388)org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:515)org.jboss.seam.web.Ajax4jsfFilter.doFilter(Ajax4jsfFilter.java:56)org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)org.jboss.seam.web.LoggingFilter.doFilter(LoggingFilter.java:60)org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)org.jboss.seam.web.HotDeployFilter.doFilter(HotDeployFilter.java:53)org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)org.jboss.se
am.servlet.SeamFilter.doFilter(SeamFilter.java:158)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:182)org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:432)org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:84)org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:157)org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:262)org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:446)java.lang.Thread.run(Thread.java:619) I've spent two hours debugging and so on, but did not find any clue...It's just weird because the error log does not tell something clear...(which string, which property is involved etc). Does anyone can guess something more from the stacktrace? Thanks. Now provide the response and nothing else.
Let's check what ViewHandlerImpl#getActionURL() does ( Mojarra is just open source ). Here's is a summarized/stripped extract of the method as it is in Mojarra 2.0.2: public String getActionURL(FacesContext context, String viewId) { if (context == null) throw new NullPointerException(); if (viewId == null) throw new NullPointerException(); if (viewId.charAt(0) != '/') throw new IllegalArgumentException(); // ... In other words, the passed-in viewId is an empty string instead of null or / or the normal path. The Mojarra viewhandler didn't expect it at all, hence this runtime exception. It can be a bug in the view handler of Mojarra that it should handle empty strings as well, but it can also be a bug in the view handler of either Ajax4jsf, Facelets or Seam that it should never pass an empty string through up. It can also be a bug in your own code that you're actually passing an empty string as view ID. If the problem isn't in your code, then I would start with upgrading what can be upgraded to see if it resolves the problem. Maybe it was already reported before as a bug and fixed in a newer release. If that doesn't help, I would check the classpath for duplicate JAR files of different versions which might have collided with each other.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2379693', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/174349/']}
jdg_73832
stackexchange
llm_judgeable_groundtruth_similarity
3473692
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Does there exist an unbounded operator $P$ on some Banach space $X$ such that $Dom(P)=X$ and $P^2=P$? If we don't require $Dom(P)=X$, we can easily construct an unbounded operator on $L^2[0,2π]$ by defining $P$, which acts on the basis as $P\exp(in\theta)=|n|+1$ for all integers $n$. Any help will be appreciated, thanks. Now provide the response and nothing else.
Let $f \ne 0$ be an unbounded linear functional on $X$. Then there is $u \in X$ such that $f(u)=1.$ Now define $P:X \to X$ by $$P(x):=f(x)u$$ $P$ will do the job: it is defined on all of $X$; it is idempotent, since $P(P(x)) = f\big(f(x)u\big)\,u = f(x)f(u)\,u = f(x)u = P(x)$; and it is unbounded, since $\|P(x)\| = |f(x)|\,\|u\|$ with $u \ne 0$ and $f$ unbounded.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3473692', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/702294/']}
jdg_73833
stackexchange
llm_judgeable_groundtruth_similarity
13997
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As a source, the book of Revelation is something of an outlier for a book of the Bible that got accepted into the canonical New Testament of most branches of Christianity: it is the only explicitly eschatological work in the New Testament, its date of composition is generally taken to be far later than the other books, its content is dramatic, and its author is not certain. According to Wikipedia: Revelation was the last book to be accepted into the Christian biblical canon, and even at the present day some Nestorian churches reject it. It was tainted because the heretical sect of the Montanists relied on it and doubts were raised over its Jewishness and authorship, and it was not until 419 that it was included in the canon. Is this the full story of why it was accepted late (reaction against the Montanists, doubts over its Jewishness and authorship), and what justification was there for its final inclusion that overcame these barriers to admission to the canon? Now provide the response and nothing else.
As a prior answer has examined where Enoch failed in canonicity, this one shall turn to the Book of Revelation to determine what factors led the church to recognize its canonicity. Though a popular genre, few apocalyptic works found their way into the New Testament canon. The most obvious exception comes to the modern world as The Revelation to John or The Apocalypse of John. Though a few books have apocalyptic sections, no other New Testament work is apocalyptic from the first to last verse. 1 Historical Background Before any work can be properly studied, its historical background must be examined. Revelation easily places itself within the apocalyptic genre by its opening verse ("the apocalypse of Jesus Christ") and by following the accepted apocalyptic characteristics with only two exceptions: pseudonymity and statements of authority. 2 Other than those two factors, Revelation contains all the characteristics of the apocalyptic genre: 1) dualism; 2) complete determinism; 3) life beyond history; and 4) imminent expectation of the eschaton. Furthermore, Revelation is also prophecy (Rev. 22:18f) and an epistle (Rev. 2 and 3). This section will speak of Revelation's author, location, date, and sitz im leben. Authorship Whereas Enoch had at least five authors working over several centuries, Revelation has one author and comes from the first century. However, scholars debate the exact identity of the John who wrote the Apocalypse. Briefly, they give four possibilities: 1) John the Apostle; 2) John the Elder; 3) John Mark; 4) an unknown/pseudonymous John. As Revelation displays no similarities with Mark's gospel in style or grammar the third option has never been a serious consideration. 3 Likewise, an unknown or pseudonymous John would be unlikely to gain major acceptance in the churches. Indeed, a pseudonymous work had little chance of becoming canonical. 4 Therefore, only John the Apostle and John the Elder remain as serious considerations. Of these two, the early church held the same author wrote the Gospel of John and the Revelation, namely, John the Apostle. Though not unanimous, this view prevailed. Judging the book on its own merits gives a similar story. From the beginning, the author identifies himself as "John" with no qualifying title. Obviously, the area knew John so well that he had no need of any further title—his own name gave authority to the work. 5 If choosing a pseudonym, the author would have wanted to qualify which John he meant. Also from internal evidence, one may deduce the author is a Palestinian Jew turned Christian. The author shows an affinity and familiarity for the Jewish Scripture that few converts would have had. Revelation contains a higher percentage of Old Testament allusions in its verses than any other New Testament work (278 allusions in 404 verses). 6 Also lending credence to the Jewishness of the author, Revelation uses many semitisms and grammatical solecisms that a Greek would be unlikely to use. 7 Those who argue for John the Elder say that Revelation contains too many differences in style and grammar from John's Gospel and Epistles. Dionysius used such an argument in A.D. 247. 8 However, the style differences may be answered by the different genre or the use of a different scribe. Furthermore, Revelation and the Gospel contain many similarities in motif, Christology, eschatology, and expressions. 9 Based on these facts, this paper concludes that John the Apostle wrote the Revelation. 
Location John wrote the book of Revelation while exiled to the Isle of Patmos. 10 From the Isle, John sent Revelation as an epistle to the seven churches under his care in Asia minor. One notes his choice of which churches to address. He did not choose only large churches or only small churches. He chose some wealthy and some poor. However, all seven lay along a major Roman trade route in the province of Asia Minor. By writing to these seven, John's book would travel fastest and spread farther. Date and Occasion Like other Apocalyptic works, Revelation comes from a time of crisis. 11 Though John writes little of history before Christ, he tells enough of the situation that the modern reader can determine the crisis with a little effort. Commentators usually fall within two dates for Revelation: A.D. 65-68 and A.D. 96. Both dates require examination here. The Late Date (A.D. 96) If Revelation is written late, the book aims to encourage Christians during an imperial persecution. Those who argue for a late date to Revelation use both internal and external evidence. As external evidence they point to the early church writers like Iraneus (A.H. 5.30.3), Victorinus of Pettau (Apocalypse 10.11), Eusebius (H.E. 3.17-18), Clement of Alexandria (Quis Dives Salvetur 42), and Origen (Matthew 16.6) who all agree John wrote during the time of Domitian. 12 Further, late-date advocates point out how several of the churches addressed in the first three chapters had historical circumstances that do not match an early date for Revelation. First, several of the churches addressed had lost their ardor for Christ, and heresies had infected others. Churches usually do not lose their ardor or find heresies in their first generation. 13 Second, John calls Laodicia rich, but an earthquake almost leveled the city in A.D. 60. The city took many years to rebuild its wealth. Third, the church at Smyrna was not founded until A.D. 64, so it cannot have endured for a long time (as Revelation 2:8-11 seems to imply) if only three years old. 14 Likewise, those who claim a late date for Revelation point out that the emperor worship described in chapters 13-20 matches best with Domitian. 15 Though some earlier emperors proclaimed themselves gods, Domitian took the title "Lord and God," usurping kyrios a title of Christ. 16 Another piece of evidence comes from the use of "Babylon" as a code word for a city in Revelation. Though early daters say Babylon refers to Jerusalem, 4 Ezra, 2 Baruch, and The Sibylline Oracles all refer to Rome as Babylon. Jews and Christians linked the cities together because both powers had sacked the holy city. 17 In a similar manner, the Pauline epistles refer to several heretical groups but never to the heresies plaguing the churches of Asia Minor, the Nicolatians. Both pieces of evidence point to a later first century date for Revelation. An Early Date Those who argue for an early date tend to see Revelation as a polemic against the Jews who rejected the Messiah. 18 They see at least partial fulfillment in the destruction of the Temple and Jerusalem in A.D. 70. Though they read Revelation 11:1-2 literally with its temple measurements, the measurements better match Ezekiel's eschatological temple. Other problems come from the early date. For instance, the early daters insist that the number 666 refers to taking the Hebrew letters of Caesar Nero(n) and adding them up in a gematria. 
Though the arithmetic does indeed come to 666, such a view supposes a knowledge of Hebrew in a thoroughly gentile audience. Likewise, with little to no justification in the Greek, they tend to read Revelation 1:7's "all the tribes of the earth will mourn Him" as "all the tribes of Israel will mourn Him." 19 Based on the evidence, this paper concludes that John wrote Revelation about A.D. 96 during Domitian's anti-Christian persecution, which John saw would expand in later years. Thus, Revelation serves to encourage to Christians facing their first major persecution that the Messiah will be victorious over their enemies. However, it also warns them of persecution for keeping the faith. Consideration of Canonicity Revelation had the longest and hardest fight of any book to be recognized as inspired. Though numerous early authors quoted and approved of it, others argued against Revelation. History of Inclusion and Exclusion Revelation appears in canonical listings as early as the Muratorian fragment in the second century. Iraneus quotes it often and approvingly, even making Revelation the basis of his “already-but- not-yet” eschatology. 20 Cyprian, Clement of Alexandira, and Origen all accept the book. Likewise, though Dionysius interprets the book allegorically and rejects apostolic authorship, he accepts it as canonical. 21 On the other hand, many in the early centuries disputed Revelation’s place in the canon. Marcion rejects Revelation because of the numerous Old Testament references in the book. Gaius and the Algoloi also reject it because the Montanists use Revelation so often. 22 Eusebius places Revelation in both his "universally acknowledged" and "spurios" lists. Both times he qualifies his judgement with "should it seem right." Chrysostom never quotes from Revelation, leaving the modern world no clue to his thoughts on the book of Revelation. Gregory of Nanzianus, and Cyril leave it out of their listings of the canon. Moreover, the Nestorian churches still leave Revelation out of their canon. 23 Revelation has never held a very secure place in the Eastern Orthodox canon. The Syriac Peshitta omits it, and the Council of Laodicia did not recognize it. 24 As late as 850, the Eastern Church listed the book as disputed. They still do not read from Revelation regularly. Greek commentaries written in the fifth and sixth centuries probably helped Revelation gain acceptance in the Eastern empire. 25 Even though the East had trouble with the book, the West recognized Revelation as inspired fairly early. Jerome, Ambrose, Rufinus, Augustine, and Innocent all accept it as canonical. 26 The Third Council of Carthage (397) listed as canonical, and at the Third Constantinople Council officially ratified Revelation in 680. The book then follows a bumpy path into the modern world. Calvin could not understand the book and refused to write a commentary. Though Luther leaves Revelation in his translation, he sees it as "unapostolic" and cannot find Christ. 27 Factors in Consideration of Canonicity As said above, the early Church did not place any book in the canon haphazardly. In summary, a canonical work has to contain adaptable wisdom on how to live at any time. 28 However, three other factors come into play: the book must 1) come from Apostolic circle in the first century; 2) contain only orthodox teachings; and 3) show inspiration. As discussed above, John the Apostle did write Revelation. 
Though debated early, the view of John the Apostle as author prevailed, much to the delight of Justin Martyr, Bishop Melito of Sardis, and Theophilus of Antioch who all defended the Apostle as author. 29 After the view became accepted, the West almost stopped arguing against Revelation. Having settled the question of authorship, Revelation also came under attack because of its theology. The Eastern church did not like the earthly eschatology taught in the book. Many in the West viewed Revelation as obscure and used it to speculate the future. To combat these positions, the East interpreted it allegorically and Augustine argued that Revelation be included only with an admonition against using the book speculatively (City of God XX.6-9). 30 Eventually, the Church recognized Revelation as inspired. Assessment of Determining Factors As with all books, inspiration determined Revelation's canonicity. Though one can never empirically prove inspiration, it can be disproven. For example, had Revelation attempted to deceive people by being pseudonymous, one could almost guarantee the non-inspiration. However, the Church became convinced of the Apostolic origin of the book. Likewise, had Revelation contained doctrines contrary to already accepted Scripture, the Church would have seen the Holy Spirit was not behind it. On the other hand, Revelation contains much to recommend its inspiration. The book more than adequately fulfilled Luther's rule of "Does it teach Christ." The book itself claims to be Scripture, and as the author was an Apostle, one is hard pressed to think the author might be wrong on such an important matter. Furthermore, the book contains adaptable wisdom on how to live at any time. Revelation has a message for both its original audience and today's reader. 31 The book encourages readers from all times to hold fast to the faith while warning of the persecution to come later. Unlike First Enoch, which showed its lack of inspiration by contradicting orthodox doctrine and presenting a faulty soteriology, Revelation fulfilled the criteria for inspiration more than adequately. The book comes from the first-century, Apostolic circle, sustains and expands orthodox teaching, and has a message for believers beyond its first audience. Beyond these three traits, Revelation has some undefinably characteristics commending it to Christian readers. End Notes 1 For a differing viewpoint, see Robert L. Thomas “A Classical Dispensationalist View of Revelation” in Four Views on the Book of Revelation , ed. C. Marvin Pate (Grand Rapids, Michigan: Zondervan Publishing House, 1998), 181 who defines the book as simple prophecy based on Revelation 22:18ff. 2 The other apocalyptic writers see themselves as heirs to the prophets and not prophets. Revelation’s claims to being a prophecy (10:11; 22:18ff) as well as its apocalyptic genre and the seven small epistles within make it unique in the ancient world. 3 Robert H. Mounce, The Book of Revelation , New International Commentary on the New Testament (Grand Rapids, Michigan: William B. Eerdmans Publishing Company, 1977), 9. 4 Bruce, The Canon of Scripture , 261; G. K. Beale The Book of Revelation: A Commentary on the Greek Text , New International Greek Testament Commentary, (Grand Rapids, Michigan: William B. Eerdmans Publishing Company, 1999), 34. 5 Collins, John J. The Apocalyptic Imagination: An Introduction to Jewish Apocalyptic Literature (Grand Rapids, Michigan: William B. Eerdman’s Publishing Company, 1998), 271. 6 B. S. 
Childs, The New Testament as Canon: An Introduction (Valley Forge, Pennsylvania: Trinity Press International, 1994), 509. 7 G. K. Beale, The Book of Revelation , 35. Edward McDowell, The Meaning and Message of the Book of Revelation . Nashville, Tennessee: Broadman Press, 1951), 11. 8 Scott Gambrill Sinclair, Revelation: A Book for the Rest of Us (Berkely, California: BIBAL Press, 1992), 25; Bruce Metzger, Breaking the Code: Understanding the Book of Revelation (Nashville, Tennessee: Abingdon Press, 1993), 14. 9 See C. Martin Pate, “A Progressive Dispensationalist View of Revelation” in Four Views on the Book of Revelation , 171-172 for a chart favorably comparing some of the key elements in John’s Gospel and Revelation. 10 Amazingly, this author found little if any dissent to this view. 11 As the nature of the crisis determines the date, this section will combine what was two sections in Enoch. 12 G. K. Beale, The Book of Revelation , 19. 13 Edward McDowell, The Meaning and Message of the Book of Revelation , 4. 14 G. K. Beale, The Book of Revelation , 16. 15 Henry Barclay Swete, Commentary on Revelation: The Greek Text with Introduction, Notes, and Indexes, (Grand Rapids, Michigan: Kregel Publications, 1977), ci. 16 G. K. Beale, The Book of Revelation , 5. 17 Ibid., 18-19. 18 Kenneth Gentry, Jr. “A Preterist View of Revelation” in Four Views , 51. 19 Ibid., 48. 20 G. K. Beale, The Book of Revelation , 20. 21 Bruce, The Canon of Scripture , 85, 191, 192, 195. 22 D. A. Carson, Douglas J. Moo, and Leon Morris. An Introduction to the New Testament (Grand Rapids, Michigan: Zondervan Publishing House, 1992), 480. 23 Bruce, The Canon of Scripture , 198, 212-215. 24 Carson, Moo, and Morris, An Introduction to the New Testament , 481. 25 Swete, Commentary on Revelation , cxvii. 26 Ibid., cxviii. 27 Carson, Moo, and Morris, An Introduction to the New Testament , 481. Luther later changed his mind. With Revelation’s references to the Lamb who was slain in chapter 5 and the vision of Christ coming in glory in chapter 19, one wonders why Luther could not find Christ in it. 28 James A. Sanders, Canon and Community , 28. 29 Donald Guthrie, New Testament Introduction (Downers Grove, Illinois: InterVarsity Press, 1990), 930. 30 Many before and since have used Revelation as a blueprint of the future (See, Marvin C. Pate, Four Views on the Book of Revelation , and Arthur W. Wainwright, Mysterious Apocalpyse (Nashville, Tennessee: Abingdon Press, 1993), 21-87 for a history of the millennial interpretation to Revelation). Such use of Revelation is wrong if, as Sinclair says, “[Revelation’s] authority came from Christian leaders who insisted Revelation must not be used for futuristic speculation” (Scott Sinclair, Revelation: A Book for the Rest of Us , 27) However, if Revelation’s authority comes not from the Church but from God, then the fourth century Church’s opinion on the book’s interpretation matters does not bind later believers. The job of the early Church was to recognize Revelation’s authority and place it in the canon, not to dictate its interpretation for all eternity. 31 For this reason, this author now holds an “already-but-not-yet” view of Revelation.
{}
{'log_upvote_score': 5, 'links': ['https://hermeneutics.stackexchange.com/questions/13997', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/6394/']}
jdg_73834
stackexchange
llm_judgeable_groundtruth_similarity
14197675
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this code: $.each(this.items, function(key, item) { item = $(item); item.css('background', 'red'); item.next().css('display', 'none'); }); Now, I need to rewrite it in pure JavaScript, so I am doing this: document.getElementById(aItems[iCount].id).style.background = 'red'; document.getElementById(aItems[iCount].id).style.display = 'none'; The problem is that display:none must be set for the next() item, not this one. How do I do this? Thank you. Now provide the response and nothing else.
Try this: item.nextSibling.style.display = 'none' Keep in mind that nextSibling might select the text content next to the element , so you may need to use: item.nextSibling.nextSibling.style.display = 'none' Another option would be to use nextElementSibling , as Bergi suggested. However, that's not supported in all browsers, yet. You can create the function yourself, though: function nextElementSibling(element) { do { element = element.nextSibling; } while (element && element.nodeType !== 1); return element;} In this case: nextElementSibling(item).style.display = 'none'; Last but not least, I'd suggest having a look at Palash' answer if you want to replace jQuery's $.each() with native JS.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14197675', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1317845/']}
jdg_73835
stackexchange
llm_judgeable_groundtruth_similarity
22957032
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am getting the correct output, and indeed, these two operations are being treated as a single transactional unit, where if one fails, both fail. In this code example I am doing a transaction of (1) insert and (2) update. The way I approach it is to nest my db operations inside the .then. My question is whether this code is only correct by accident. I am new to promises and knex. knex.transaction(function(t) { knex('foo') .transacting(t) .insert({id:"asdfk", username:"barry", email:"[email protected]"}) .then(function() { knex('foo') .where('username','=','bob') .update({email:"[email protected]"}) .then(t.commit, t.rollback) })}).then(function() { // it worked},function() { // it failed}); This works, but I feel like I am doing something wrong still. Looking for comments. Now provide the response and nothing else.
You need to return a promise from the inner query in order for the outer chain to be chained with that. You also swallow any errors because you don't rethrow them - it's better to use .catch() for this reason because it makes it more clearer what is happening - that is what would happen with normal try-catch statement. knex.transaction(function(t) { return knex('foo') .transacting(t) .insert({id:"asdfk", username:"barry", email:"[email protected]"}) .then(function() { return knex('foo') .where('username','=','bob') .update({email:"[email protected]"}); }) .then(t.commit) .catch(function(e) { t.rollback(); throw e; })}).then(function() { // it worked}).catch(function(e) { // it failed}); To understand it better, here's the synchronous version that is being "emulated": try { var t = knex.transaction(); try { knex("foo") .transacting(t) .insert({id:"asdfk", username:"barry", email:"[email protected]"}); knex("foo") .where('username','=','bob') .update({email:"[email protected]"}); t.commit(); } catch (e) { t.rollback(); // As you can see, if you don't rethrow here // the outer catch is never triggered throw e; } // It worked}catch (e) { //It failed}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22957032', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3491360/']}
jdg_73836
stackexchange
llm_judgeable_groundtruth_similarity
8271493
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hi, I am working with multiple tablet devices: iPad, Galaxy Tab, Acer Iconia, LG 3D Pad and so on. iPad - 1024 x 768 LG Pad - 1280 x 768 Galaxy Tab - 1280 x 800 I want to target the iPad only, using a CSS3 media query. Since the device width of the LG and the iPad is the same (768px), I am having trouble separating the two devices. I have tried the following to separate them, but it does not seem to be working: @media only screen and (min-device-width: 768px) and (max-device-width: 1024px) and (orientation : portrait) /* applied to lg also */@media only screen and (min-resolution: 132dpi) and (max-device-width: 1024px) and (orientation : portrait) /* applies to lg also */@media only screen and (device-aspect-ratio: 1024/768) and (orientation : portrait) /* does not work on iPad or LG */ I don't know the -webkit-device-pixel-ratio and other -webkit* options and their values to target the iPad. I don't want to use JavaScript for styles; any ideas? Now provide the response and nothing else.
Finally found a solution from : Detect different device platforms using CSS <link rel="stylesheet" media="all and (device-width: 768px) and (device-height: 1024px) and (orientation:portrait)" href="ipad-portrait.css" /><link rel="stylesheet" media="all and (device-width: 768px) and (device-height: 1024px) and (orientation:landscape)" href="ipad-landscape.css" /> To reduce HTTP call, this can also be used inside you existing common CSS file: @media all and (device-width: 768px) and (device-height: 1024px) and (orientation:portrait) { .ipad-portrait { color: red; } /* your css rules for ipad portrait */}@media all and (device-width: 1024px) and (device-height: 768px) and (orientation:landscape) { .ipad-landscape { color: blue; } /* your css rules for ipad landscape */} Hope this helps. Other references: https://developer.apple.com/library/content/documentation/AppleApplications/Reference/SafariCSSRef/Articles/OtherStandardCSS3Features.html
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8271493', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/132121/']}
jdg_73837
stackexchange
llm_judgeable_groundtruth_similarity
2308747
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Show that $ \sqrt{p}$ is irrational if $p$ is prime. I did the proof before that $ \sqrt{2}$ is irrational by contradiction: Let's assume that $\sqrt{2}$ is rational; therefore, given $m$ and $n$ sharing no common factor: $$\sqrt{2} = \frac{m}{n}$$$$2=\frac{m^2}{n^2}$$$$2n^2=m^2$$ As $2 \mid 2n^2$, it follows that $2 \mid m^2$, and that $2\mid m$. Therefore $\exists k \in Z$ s.t. $m=2k$. Going back to $2n^2=m^2$ and substituting $m=2k$ in it, we have: $$2n^2=(2k)^2$$$$2n^2=4k^2$$$$n^2=2k^2$$ Similarly, as $2 \mid 2k^2$, it follows that $2 \mid n^2$, and that $2 \mid n$. Therefore, $\exists j \in Z$ s.t. $n=2j$. Finally, if $m=2k$ and $n=2j$ with $k,j \in Z$, then $m$ and $n$ share a common factor $2$. We have a contradiction with the assumption. It follows that $\sqrt{2}$ cannot be rational, so it is irrational. This is where I am so far with my understanding. What would the best approach be for the original question? I was thinking of stating that if $p$ is composite then $\exists x,y,m,n \in Z$, all four prime numbers, s.t. $p=xy$, and then have $$\sqrt{xy}= \frac{m}{n}$$ but I don't see how I could move forward in this direction. Much appreciated Now provide the response and nothing else.
Why not just try the same thing? Assume $\sqrt{p} = \frac{m}{n}$ for coprime $m,n$. Then $p = m^2/n^2 \implies m^2 = pn^2$. Hence $p \mid m^2 \implies p \mid m$ by Euclid's Lemma. Writing $m = pk$ gives $p^2k^2 = pn^2$, so $n^2 = pk^2$ and hence $p \mid n^2 \implies p \mid n$, again by Euclid's Lemma. So $p$ divides both $m$ and $n$, contradicting the assumption that they are coprime.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2308747', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/430020/']}
jdg_73838
stackexchange
llm_judgeable_groundtruth_similarity
3789284
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am receiving a hash of hashes from another function, and some elements of the hash of hashes can be another hash. How can I test to see if something is a hash? Now provide the response and nothing else.
Depending on what you want you will need to use ref or reftype (which is in Scalar::Util , a core module). If the reference is an object, ref will return the class of the object instead of the underlying reference type, reftype will always return the underlying reference type. if (ref $var eq ref {}) { print "$var is a hash\n";}use Scalar::Util qw/reftype/;if (reftype $var eq reftype {}) { print "$var is a hash\n";}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3789284', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/193258/']}
jdg_73839
stackexchange
llm_judgeable_groundtruth_similarity
1396527
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: xor eax, eax will always set eax to zero, right? So, why does MSVC++ sometimes put it in my executable's code? Is it more efficient than mov eax, 0? 012B1002 in al,dx 012B1003 push ecx int i = 5;012B1004 mov dword ptr [i],5 return 0;012B100B xor eax,eax Also, what does it mean to do in al, dx? Now provide the response and nothing else.
Yes, it is more efficient. The encoding is shorter than that of mov eax, 0: xor eax, eax assembles to just 2 bytes (31 C0), while mov eax, 0 takes 5 bytes (B8 followed by a 4-byte immediate). The processor also recognizes the special case and treats it as a mov eax, 0 without a false read dependency on eax, so the execution time is the same.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1396527', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/130758/']}
jdg_73840