Dataset columns:
  source: stringclasses (1 value)
  task_type: stringclasses (1 value)
  in_source_id: stringlengths (1 to 8)
  prompt: stringlengths (209 to 40.4k)
  gold_standard_solution: stringlengths (0 to 56.7k)
  verification_info: stringclasses (1 value)
  metadata: stringlengths (138 to 225)
  problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
17127980
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating a website using multiple views. The title tag and the meta tags of the page get changed through a $rootScope variable, so I have something like:

<html>
<head>
  <title ng-bind="page_title"></title>
  <meta property="og:title" content="{{page_title}}">
</head>

Whenever each view gets loaded on the website, the page_title variable changes and the title and og:title tags get updated (everything works as expected). The problem is that on some views I need to load a Facebook, a Google+ and a Twitter button. I can display them properly, but if I click on any of them the page title appears as something like {{page_title}}. I've tried to delay the execution of the scripts of each button using setTimeout, but to no avail; the scripts just read whatever is written, they don't parse page_title. Does anyone know a workaround for this? Thank you. Now provide the response and nothing else.
This can't be done using JavaScript. Some people think that Facebook is reading what's currently on the page. It's not. It makes a separate request to your server using the same URL (from window.location.href) with its scraper, and the Facebook scraper does not run JavaScript. That's why you get {{page_title}} when clicking on something like a Facebook share button. Your content will have to be generated by the server, so that when Facebook hits the URL it gets the content it needs up front without the need for JavaScript. You can tackle the server-side rendering in a few ways: you can let your server-side technology render the content, or you can use the PhantomJS approach https://github.com/steeve/angular-seo .
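A minimal sketch of the "let your server render it" option, assuming a Python/Flask backend (the route, the title lookup and the crawler tokens below are illustrative placeholders, not part of the original answer):

import io  # not required; shown only if you stream content
from flask import Flask, request

app = Flask(__name__)

PAGE_TITLES = {"search": "Search", "about": "About us"}   # hypothetical titles
CRAWLER_TOKENS = ("facebookexternalhit", "Facebot", "Twitterbot")

@app.route("/pages/<name>")
def page(name):
    title = PAGE_TITLES.get(name, "My site")
    user_agent = request.headers.get("User-Agent", "")
    if any(token in user_agent for token in CRAWLER_TOKENS):
        # Scrapers don't run JS, so give them static HTML with the title filled in.
        return (
            "<html><head>"
            f"<title>{title}</title>"
            f'<meta property="og:title" content="{title}">'
            "</head><body></body></html>"
        )
    # Normal browsers get the Angular app, which sets the title client-side.
    return app.send_static_file("index.html")

The same idea works with any server stack; the PhantomJS approach linked above automates it by prerendering the JavaScript for you.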
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17127980', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/825662/']}
jdg_82341
stackexchange
llm_judgeable_groundtruth_similarity
19412462
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried implementing this formula: http://andrew.hedges.name/experiments/haversine/ The applet does well for the two points I am testing, yet my code is not working:

from math import sin, cos, sqrt, atan2

R = 6373.0

lat1 = 52.2296756
lon1 = 21.0122287
lat2 = 52.406374
lon2 = 16.9251681

dlon = lon2 - lon1
dlat = lat2 - lat1
a = (sin(dlat/2))**2 + cos(lat1) * cos(lat2) * (sin(dlon/2))**2
c = 2 * atan2(sqrt(a), sqrt(1-a))
distance = R * c

print "Result", distance
print "Should be", 278.546

The distance it returns is 5447.05546147. Why? Now provide the response and nothing else.
Edit: Just as a note, if you just need a quick and easy way of finding the distance between two points, I strongly recommend using the approach described in Kurt's answer below instead of re-implementing Haversine -- see his post for rationale. This answer focuses just on answering the specific bug OP ran into. It's because in Python, all the trig functions use radians, not degrees. You can either convert the numbers manually to radians, or use the radians function from the math module:

from math import sin, cos, sqrt, atan2, radians

# approximate radius of earth in km
R = 6373.0

lat1 = radians(52.2296756)
lon1 = radians(21.0122287)
lat2 = radians(52.406374)
lon2 = radians(16.9251681)

dlon = lon2 - lon1
dlat = lat2 - lat1

a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
c = 2 * atan2(sqrt(a), sqrt(1 - a))

distance = R * c

print("Result:", distance)
print("Should be:", 278.546, "km")

The distance is now returning the correct value of 278.545589351 km.
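For readers who just want the library route mentioned in the edit above, here is one common approach using the third-party geopy package (a sketch of the general idea, not necessarily the exact code from Kurt's answer):

from geopy.distance import geodesic   # pip install geopy

point1 = (52.2296756, 21.0122287)     # (lat, lon) in degrees
point2 = (52.406374, 16.9251681)

# geodesic() uses an ellipsoidal Earth model, so it is slightly more
# accurate than the spherical haversine formula above.
print(geodesic(point1, point2).km)    # distance in kilometres, close to the haversine result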
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/19412462', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1344854/']}
jdg_82342
stackexchange
llm_judgeable_groundtruth_similarity
10390782
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know I can avoid boxing by adding my own Equals implementation.

public struct TwoDoubles
{
    public double m_Re;
    public double m_Im;

    public TwoDoubles(double one, double two)
    {
        m_Re = one;
        m_Im = two;
    }

    public override bool Equals(object ob)
    {
        return Equals((TwoDoubles)ob);
    }

    public bool Equals(TwoDoubles ob)
    {
        TwoDoubles c = ob;
        return m_Re == c.m_Re && m_Im == c.m_Im;
    }
}

I can't call this an override as much as an overload. By the magic of the runtime it does correctly call the correct Equals() implementation based on the type of the caller. Why can't I override and change the parameter type to TwoDoubles and let boxing occur by the power of the runtime on an as-needed basis? Is it because C# doesn't support parameter contravariance (if that's the reason then why is it not supported...seems a small step from object o = new TwoDoubles() )? UPDATE Clarification: object is a part of the inheritance hierarchy of a struct. Why can we not specify a more derived type as a parameter to override an implementation from a less derived type? This would allow us to write:

public override bool Equals(TwoDoubles ob)
{
    TwoDoubles c = ob;
    return m_Re == c.m_Re && m_Im == c.m_Im;
}

which should be called when the variable is a TwoDoubles, even if said variable has been boxed into an object type. Now provide the response and nothing else.
Why can't I override and change the parameter type to TwoDoubles? Because that would not be typesafe!

class B
{
    public virtual void M(Animal animal) { ... }
}

class D : B
{
    public override void M(Giraffe animal) { ... }
}

B b = new D();
b.M(new Tiger());

And now you just passed a tiger to a method that actually only takes a giraffe! Same thing in your case. You're overriding a method that takes any object with a method that can only take a struct; that's not typesafe. Is it because C# doesn't support parameter type contravariance? No, it is because you are asking for parameter type covariance, which is not typesafe. C# does not support parameter type contravariance either, but that's not what you're asking for.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10390782', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/87796/']}
jdg_82343
stackexchange
llm_judgeable_groundtruth_similarity
711184
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Does anyone know of any examples or tutorials of an MVC view that shows parent/child data all on one form, and allows all the child records to be editable? For example, say I have a table of people and another containing the vehicles they own. One one form, I want to show every vehicle for a given person, and make the data elements editable (i.e. license plate number, car color, etc.) in case there are mistakes. I don't want to jump to a separate edit form for each vehicle. My attempts thus far have gotten me to the point where I can display the data, but I can't get it to post back to the controller. I've tried to narrow down the problem as far as I could here , but I'm still not getting it, and I think a broader example may be in order. Any ideas? Now provide the response and nothing else.
You can try something like this. Suppose you have this object:

public class Vehicle
{
    public int VehicleID { get; set; }
    public string LicencePlate { get; set; }
    public string Color { get; set; }
}

And this is the controller action that you'll use to edit vehicle details (where you'll post the form):

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult EditVehicles(int Owner, Vehicle[] vehicles)
{
    //manipulate the data, then return back to the list
    return RedirectToAction("YourAction");
}

Then you should set up your form this way:

<!-- have a form for each person, listing their vehicles -->
<form action="/EditVehicles" method="post">
    <input type="hidden" name="Owner" value="25" />

    <input type="hidden" name="Vehicles[0].VehicleID" value="10" />
    <input type="text" name="Vehicles[0].LicencePlate" value="111-111" />
    <input type="text" name="Vehicles[0].Color" value="Red" />

    <input type="hidden" name="Vehicles[1].VehicleID" value="20" />
    <input type="text" name="Vehicles[1].LicencePlate" value="222-222" />
    <input type="text" name="Vehicles[1].Color" value="Blue" />

    <input type="submit" value="Edit" />
</form>

This will help the DefaultModelBinder correctly bind the form data to your model in your controller. Thus Response.Write(vehicles[1].Color); in your controller will print "Blue". This is a very simple example, but I'm sure you get the idea. For more examples about binding forms to arrays, lists, collections and dictionaries, take a look here .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/711184', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/23935/']}
jdg_82344
stackexchange
llm_judgeable_groundtruth_similarity
34139102
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The pandas function read_csv() reads a .csv file. Its documentation is here. According to the documentation, we know:

dtype : Type name or dict of column -> type, default None. Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32} (Unsupported with engine=’python’)

and

converters : dict, default None. Dict of functions for converting values in certain columns. Keys can either be integers or column labels.

When using this function, I can call either pandas.read_csv('file', dtype=object) or pandas.read_csv('file', converters=object). Obviously the name converters says that the data will be converted, but I wonder what happens in the case of dtype. Now provide the response and nothing else.
The semantic difference is that dtype allows you to specify how to treat the values, for example, either as numeric or string type. Converters allows you to parse your input data to convert it to a desired dtype using a conversion function, e.g. parsing a string value to datetime or to some other desired dtype. Here we see that pandas tries to sniff the types:

In [2]:
t="""int,float,date,str
001,3.31,2015/01/01,005"""

df = pd.read_csv(io.StringIO(t))
df.info()

<class 'pandas.core.frame.DataFrame'>
Int64Index: 1 entries, 0 to 0
Data columns (total 4 columns):
int      1 non-null int64
float    1 non-null float64
date     1 non-null object
str      1 non-null int64
dtypes: float64(1), int64(2), object(1)
memory usage: 40.0+ bytes

You can see from the above that 001 and 005 are treated as int64 but the date string stays as str. If we say everything is object then essentially everything is str:

In [3]:
df = pd.read_csv(io.StringIO(t), dtype=object).info()

<class 'pandas.core.frame.DataFrame'>
Int64Index: 1 entries, 0 to 0
Data columns (total 4 columns):
int      1 non-null object
float    1 non-null object
date     1 non-null object
str      1 non-null object
dtypes: object(4)
memory usage: 40.0+ bytes

Here we force the int column to str and tell parse_dates to use the date_parser to parse the date column:

In [6]:
pd.read_csv(io.StringIO(t), dtype={'int':'object'}, parse_dates=['date']).info()

<class 'pandas.core.frame.DataFrame'>
Int64Index: 1 entries, 0 to 0
Data columns (total 4 columns):
int      1 non-null object
float    1 non-null float64
date     1 non-null datetime64[ns]
str      1 non-null int64
dtypes: datetime64[ns](1), float64(1), int64(1), object(1)
memory usage: 40.0+ bytes

Similarly we could've passed the to_datetime function to convert the dates:

In [5]:
pd.read_csv(io.StringIO(t), converters={'date':pd.to_datetime}).info()

<class 'pandas.core.frame.DataFrame'>
Int64Index: 1 entries, 0 to 0
Data columns (total 4 columns):
int      1 non-null int64
float    1 non-null float64
date     1 non-null datetime64[ns]
str      1 non-null int64
dtypes: datetime64[ns](1), float64(1), int64(2)
memory usage: 40.0 bytes
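To make the idea of a "conversion function" more concrete, here is one more small sketch (the column names and data below are made up for illustration and are not from the answer above):

import io
import pandas as pd

t = """id,price,discount
001,$1200,10%
002,$850,5%"""

df = pd.read_csv(
    io.StringIO(t),
    dtype={"id": object},                                # keep the leading zeros
    converters={
        "price": lambda s: float(s.lstrip("$")),         # "$1200" -> 1200.0
        "discount": lambda s: float(s.rstrip("%")) / 100,  # "10%" -> 0.10
    },
)
print(df.dtypes)   # id: object, price: float64, discount: float64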
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34139102', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1432142/']}
jdg_82345
stackexchange
llm_judgeable_groundtruth_similarity
1029433
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The following passage has been extracted from the book "Mathematical methods for Physicists": A key idea of the present chapter is that a quantity that is properly called a vector must have the transformation properties that preserve its essential features under coordinate transformation; there exist quantities with direction and magnitude that do not transform appropriately and hence are not vectors. Cross product: $\nabla \times (Vector)=Vector$ From the above equation for the cross product we can say that $\nabla$ is a vector (specifically a vector operator). However, a vector generally has a magnitude and an associated direction. In the case of $\nabla$, it might satisfy the essential features under transformation to be a vector, but I don't see whether it has a magnitude or not. Does it have a magnitude? If so, what is it? Or is it that a vector need not have a magnitude? Now provide the response and nothing else.
Let $f=f(x,y,z)$ be a scalar function and $\mathbf F=\langle F_1(x,y,z),F_2(x,y,z),F_3(x,y,z)\rangle$ be a vector field in $\mathbb{R}^3$. Then we can think of $f$ or $\mathbf F$ (as appropriate) as the inputs to the operators grad, div, curl, and even laplacian with the resulting outputs indicated: \begin{align}f\longrightarrow &\ \color{blue}{{\LARGE\boxed{\nabla}}} \longrightarrow \langle f_x,f_y,f_z\rangle\\\mathbf F\longrightarrow &\ \color{blue}{{\LARGE\boxed{\nabla\cdot}}} \longrightarrow {\partial F_1\over \partial x}+{\partial F_2\over \partial y}+{\partial F_3\over \partial z}\\\mathbf F\longrightarrow &\ \color{blue}{{\LARGE\boxed{\nabla\times}}} \longrightarrow \left\langle {\partial F_3\over \partial y}-{\partial F_2\over \partial z},{\partial F_1\over \partial z}-{\partial F_3\over \partial x},{\partial F_2\over \partial x}-{\partial F_1\over \partial y}\right\rangle\\f\longrightarrow &\ \color{blue}{{\LARGE\boxed{\nabla\cdot\nabla}}} \longrightarrow f_{xx}+f_{yy}+f_{zz}.\\\end{align} Thus $\nabla$ is not a vector, but rather indicates an operator whose action on the input $f$ results in the output $\langle f_x,f_y,f_z\rangle$. Similarly for the others. If you find the del notation counterproductive, just abandon that notation/nomenclature for this: \begin{align}f\longrightarrow &\ \color{blue}{{\LARGE\boxed{\text{grad}}}} \longrightarrow \langle f_x,f_y,f_z\rangle\\\mathbf F\longrightarrow &\ \color{blue}{{\LARGE\boxed{\text{div}}}} \longrightarrow {\partial F_1\over \partial x}+{\partial F_2\over \partial y}+{\partial F_3\over \partial z}\\\mathbf F\longrightarrow &\ \color{blue}{{\LARGE\boxed{\text{curl}}}} \longrightarrow \left\langle {\partial F_3\over \partial y}-{\partial F_2\over \partial z},{\partial F_1\over \partial z}-{\partial F_3\over \partial x},{\partial F_2\over \partial x}-{\partial F_1\over \partial y}\right\rangle\\f\longrightarrow &\ \color{blue}{{\LARGE\boxed{\text{lap}}}} \longrightarrow f_{xx}+f_{yy}+f_{zz}.\\\end{align}
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1029433', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/102669/']}
jdg_82346
stackexchange
llm_judgeable_groundtruth_similarity
2655117
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to craft a case-insensitive regex (for JavaScript) that matches street names, even if each word has been abbreviated. For example: n univ av should match N Univ ersity Av e king blv should match Martin Luther King Jr. Blv d ne 9th should match both NE 9th St and 9th St NE Bonus points (JK) for a "replace" regex that wraps the matched text with <b> tags. Now provide the response and nothing else.
You got: "n univ av"

You want: "\bn.*\buniv.*\bav.*"

So you do:

var regex = new RegExp("n univ av".replace(/(\S+)/g, function(s) { return "\\b" + s + ".*" }).replace(/\s+/g, ''), "gi");

Voilà! But I'm not done, I want my bonus points. So we change the pattern to:

var regex = new RegExp("n univ av".replace(/(\S+)/g, function(s) { return "\\b(" + s + ")(.*)" }).replace(/\s+/g, ''), "gi");

And then:

var matches = regex.exec("N University Ave");

Now we got:

matches[0] => the entire expression (useless)
matches[odd] => one of our matches
matches[even] => additional text not on the original match string

So, we can write:

var result = '';
for (var i=1; i < matches.length; i++) {
    if (i % 2 == 1)
        result += '<b>' + matches[i] + '</b>';
    else
        result += matches[i];
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2655117', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/204900/']}
jdg_82347
stackexchange
llm_judgeable_groundtruth_similarity
627518
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to install Linux on my Lenovo IdeaPad 1 11ADA05. I have prepared the installer on an SD card. It boots into the SD card fine, but when I try to install the distro, it can't find my SSD. My SSD model is an eMMC card 64gb Ramexal SSD. I have tried multiple different distros, including Arch, Manjaro, Ubuntu, Mint, Kubuntu, and GNU Guix. Here's the output of fdisk -l

Disk /dev/loop0: 81.81 MiB, 85786624 bytes, 167552 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop1: 537.95 MiB, 564084736 bytes, 1101728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop2: 1.31 GiB, 1404850176 bytes, 2743848 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop3: 656.67 MiB, 688570368 bytes, 1344864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 29.72 GiB, 31914983424 bytes, 62333952 sectors
Disk model: MassStorageClass
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot   Start     End Sectors  Size Id Type
/dev/sda1  *         64 5496075 5496012  2.6G  0 Empty
/dev/sda2       5496076 5504267    8192    4M ef EFI (FAT-12/16/32)

Is there any way to get my SSD to work or am I stuck on Windows? Now provide the response and nothing else.
5.10.15 doesn't solve this problem; I still have the same error. Intel bugs have been really annoying since kernel > 4.19.85 (November 2019!). As a workaround, the i915 GuC needs to be enabled, as mentioned in the Arch Linux wiki (https://wiki.archlinux.org/index.php/Intel_graphics#Enable_GuC_/_HuC_firmware_loading), and loaded before other modules. To summarise:

Add the GuC parameter to the kernel parameters by editing /etc/default/grub:

GRUB_CMDLINE_LINUX="i915.enable_guc=2"

Add the GuC option to the i915 module by creating an /etc/modprobe.d/i915.conf file with:

options i915 enable_guc=2

Add i915 to /etc/mkinitcpio.conf:

MODULES=(i915)

Rebuild the kernel initramfs (needs a reboot after a successful build):

# mkinitcpio -P

Remove xf86-video-intel (the driver is already in the kernel):

# pacman -Rscn xf86-video-intel
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/627518', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/427024/']}
jdg_82348
stackexchange
llm_judgeable_groundtruth_similarity
187374
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I don't have any experience working as a programmer - I just code as a hobby so far. Some years ago, I heard a programmer say that I should focus on making my code just work despite being slow (it was for a game, so it was more noticeable if my code was slow or not) and then I would spend my time improving it to make it fast and better. I wonder how often is that advice applied in actual working environments. Do you make something that (barely) works and then spend the rest of the time improving it, or do you make it "perfect" (or acceptable) from the very beginning without looking back? I'm assuming that the difference between these is that one is more about writing the first solution that comes to your mind, and the other is about carefully planning and thinking what you're going to write - supposing that the latter is, well, significantly slower than the former. Now provide the response and nothing else.
You have heard only part of the idea. Several iterative methods, such as Test-Driven Development (TDD), recommend that you should make your code work (for TDD: pass the tests) as early as possible. The point is that you can gain feedback earlier in the programming process and adapt your design to the real needs. In actual working environments, this pattern of making a workable but imperfect implementation, gaining information with it, and then improving and gaining new information is used heavily when you do any Agile programming. The crucial point is that you don't do it for the sake of rework, but because the requirements are not clear at the project start and there is simply no way to get an acceptable result from the start because "acceptable" isn't defined yet. The concept has very little to do with less careful planning and thinking. It just acknowledges that planning and thinking are often quite useless when you and the users cannot see the half-finished product and get feedback early.
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/187374', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/13833/']}
jdg_82349
stackexchange
llm_judgeable_groundtruth_similarity
39034
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Found in the lab, it looks like a round bottom flask with an extremely thin neck. Effectively a glass sphere about 60 mm in diameter with what looks like a capillary tube as the neck. Not sure whether it is a standard COTS item or custom glass. Now provide the response and nothing else.
We used to have glassware like that in our lab. It is basically an "unfinished" piece of glassware. In a nutshell, it was used to handle (collect, transport) gases. It was easy and inexpensive for the glassblower to make several of those bulbs at a time; he'd make a variety of sizes. We'd keep them in a drawer and then before we used them we'd do some glassblowing ourselves and add the expensive part - a stopcock with a ground glass joint. Then, having already measured the volume, we could attach one to a vacuum line (no vacuum) with the stopcock open and feed a gas into the vacuum line, close the stopcock and remove the bulb. Knowing the pressure, temperature and volume, we knew how many moles of gas were in the bulb. We could then attach the bulb to a reaction flask and open the stopcock, so that a known amount of gas was used in a reaction. Here is a simple picture, imagine the tube on the bottom is the vac line. These bulbs were also used to trap gaseous reaction products by attaching the bulb with its stopcock and joint to a vac line along with a reaction flask. The bulb was cooled to the desired temperature and when the reaction produced gases, they could be collected in the cooled bulb.
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/39034', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/9473/']}
jdg_82350
stackexchange
llm_judgeable_groundtruth_similarity
8012
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Recently I had a small crash that broke a plastic flange from my AC system. I checked around to be sure nothing else is broken, and it really looks like this is the only broken thing. The compressor has emptied itself since this caused a big leak. So my question is: is there anything I should worry about if I don't repair this before spring (in about 4 months)? I was told I should at least remove the fuses to stop the compressor from working. Should I do this? Edit: this is on a VW Passat 1998 1.8T. Now provide the response and nothing else.
It's best to repair it now if you can. Moisture is the enemy of an AC system and the moisture in the air is enough to cause a problem. This can cause corrosion on the inside of the components in the AC system. Causing you problems down the road. You also need a functioning AC system for the defroster to operate effectively. The AC compressor is run to dehumidify the air to help clear the windshield. As for the fuse there is no need to pull it, the AC compressor won't come on if it's low on refrigerant.
{}
{'log_upvote_score': 4, 'links': ['https://mechanics.stackexchange.com/questions/8012', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/3725/']}
jdg_82351
stackexchange
llm_judgeable_groundtruth_similarity
3650892
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am curious what else the new keyword does in the background, apart from changing what the this scope refers to. For example, if we compare using the new keyword to make a function set properties and methods on an object with just making a function return a new object, is there anything extra that the new object does? And which is preferred if I don't wish to create multiple objects from the function constructor?

var foo2 = function () {
    var temp = "test";
    return {
        getLol: function () { return temp; },
        setLol: function(value) { temp = value; }
    };
}();

var foo = new function () {
    var temp = "test";
    this.getLol = function () { return temp; }
    this.setLol = function(value) { temp = value; }
}();

The firebug profiler tells me using the new keyword is slightly faster (2ms instead of 3ms); on large objects is new still significantly faster? [Edit] Another matter: on really large object constructors, is having a return at the bottom of the function (it will have a large amount of local functions) or having a few this.bar = ... at the top of the function more readable? What is considered a good convention?

var MAIN = new function() {
    this.bar = ...
    // Lots of code
}();

var MAIN2 = function() {
    // Lots of code
    return {
        bar: ...
    }
}();

Now provide the response and nothing else.
Quoting Douglas Crockford from the Good Parts book (page 47), to answer the title of this question: If the new operator were a method instead of an operator, it could be implemented like this:

Function.method('new', function () {
    // Create a new object that inherits from the
    // constructor's prototype.
    var that = Object.create(this.prototype);

    // Invoke the constructor, binding -this- to
    // the new object.
    var other = this.apply(that, arguments);

    // If its return value isn't an object,
    // substitute the new object.
    return (typeof other === 'object' && other) || that;
});

The Function.method method is implemented as follows. This adds an instance method to a class ( Source ):

Function.prototype.method = function (name, func) {
    this.prototype[name] = func;
    return this;
};

Further reading:
Mozilla Dev Center: Object.create()
Mozilla Dev Center: Function.apply()
Douglas Crockford: Classical Inheritance in JavaScript
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3650892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/419970/']}
jdg_82352
stackexchange
llm_judgeable_groundtruth_similarity
39720225
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I add to a Jenkins pipeline an old-style post-build task which sends email when the build fails? I cannot find "Post-build actions" in the GUI for a pipeline. I know that I can wrap the entire build script in a try/catch; however, this seems ugly when the build script is large, and it continues to send emails even when the job was aborted manually. I would like to achieve the same functionality as with the previous email-ext based post-build action.

try {
    // Do sth
} catch(e) {
    emailext body: '$DEFAULT_CONTENT',
             recipientProviders: [
                 [$class: 'CulpritsRecipientProvider'],
                 [$class: 'DevelopersRecipientProvider'],
                 [$class: 'RequesterRecipientProvider']
             ],
             replyTo: '$DEFAULT_REPLYTO',
             subject: '$DEFAULT_SUBJECT',
             to: '$DEFAULT_RECIPIENTS'
    throw err
}

Now provide the response and nothing else.
This answer worked on my Jenkins ver. 2.96. Jenkins pipeline email not sent on build failure - Stack Overflow

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'echo "Fail!"; exit 1'
            }
        }
    }
    post {
        always {
            echo 'This will always run'
        }
        success {
            echo 'This will run only if successful'
        }
        failure {
            mail bcc: '',
                 body: "<b>Example</b><br>Project: ${env.JOB_NAME} <br>Build Number: ${env.BUILD_NUMBER} <br> URL de build: ${env.BUILD_URL}",
                 cc: '',
                 charset: 'UTF-8',
                 from: '',
                 mimeType: 'text/html',
                 replyTo: '',
                 subject: "ERROR CI: Project name -> ${env.JOB_NAME}",
                 to: "[email protected]";
        }
        unstable {
            echo 'This will run only if the run was marked as unstable'
        }
        changed {
            echo 'This will run only if the state of the Pipeline has changed'
            echo 'For example, if the Pipeline was previously failing but is now successful'
        }
    }
}

Official documentation for the mail step
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39720225', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/342473/']}
jdg_82353
stackexchange
llm_judgeable_groundtruth_similarity
6699374
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an image with 3 channels (img) and another one with a single channel (ch1).

Mat img(5,5,CV_64FC3);
Mat ch1 (5,5,CV_64FC1);

Is there any efficient way (not using for loop) to copy the first channel of img to ch1 ? Now provide the response and nothing else.
In fact, if you just want to copy one of the channels or split the color image in 3 different channels, CvSplit() is more appropriate (I mean simple to use).

Mat img(5,5,CV_64FC3);
Mat ch1, ch2, ch3;

// "channels" is a vector of 3 Mat arrays:
vector<Mat> channels(3);

// split img:
split(img, channels);

// get the channels (dont forget they follow BGR order in OpenCV)
ch1 = channels[0];
ch2 = channels[1];
ch3 = channels[2];
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6699374', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/456984/']}
jdg_82354
stackexchange
llm_judgeable_groundtruth_similarity
11758391
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Lin2Sql DataContext that I am using to get all my data from a sql database however I am struggling to find a way to successfully Mock this so that I can create relevant Unit Tests. In my data access objects that I am wanting to test I am refreshing the context each time and I am finding it difficult to find a simple suitable way to mock this. Any help with this matter will be greatly appreciated. Now provide the response and nothing else.
Mocking the linq-to-sql context is indeed a huge task. I usually work around it by letting my unit tests run against a separate database copy, with data specially crafted to fit the unit tests. (I know it can be argued that it's no longer unit tests, but rather integration tests, but I don't care as long as I get the code tested). To keep the database in a known state I wrap each test in a TransactionScope which is rolled back at the end of the test. That way the state of the database is never changed. A sample test method looks like this:

[TestMethod]
public void TestRetire()
{
    using (TransactionScope transaction = new TransactionScope())
    {
        Assert.IsTrue(Car.Retire("VLV100"));
        Assert.IsFalse(Car.Retire("VLV100"));

        // Deliberately not commiting transaction.
    }
}

The code is from a blog post about the method I wrote some time ago: http://coding.abel.nu/2011/12/using-transactions-for-unit-tests/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11758391', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1255043/']}
jdg_82355
stackexchange
llm_judgeable_groundtruth_similarity
24200909
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: resolve property of $routeProvider allows to execute some jobs BEFORE corresponding view is rendered. What if I want to display a spinner while those jobs are executed in order to increase user experience? Indeed, otherwise the user would feel the application has been blocked since no view elements were displayed for some milliseconds > 600 for instance. Of course, there was the way to define a global div element out of the current view to display in order to display the spinner thanks to the $scope.$rootChangeStart function. But I don't want to hide the whole page with just a poor spinner in the middle. I want some pages of my webapp differ regarding the way the loading is displayed. I came across this interesting post containing the exact issue I described above: That approach results in a horrible UI experience. The user clicks on a button to refresh a list or something, and the entire screen gets blanketed in a generic spinner because the library has no way of showing a spinner just for the view(s) that are actually affected by the state change. No thanks. In any case, after I filed this issue, I realised that the "resolve" feature is an anti-pattern. It waits for all the promises to resolve then animates the state change. This is completely wrong - you want your transition animations between states to run parallel to your data loads, so that the latter can be covered up by the former. For example, imagine your have a list of items, and clicking on one of them hides the list and shows the item's details in a different view. If we have an async load for the item details that takes, on average, 400ms, then we can cover up the load almost entirely in most cases by having a 300ms "leave" animation on the list view, and a 300ms "enter" animation on the item details view. That way we provide a slicker feel to the UI and can avoid showing a spinner at all in most cases. However, this requires that we initiate the async load and the state change animation at the same moment. If we use "resolve", then the entire async animation happens before the animation starts. The user clicks, sees a spinner, then sees the transition animation. The whole state change will take ~1000ms, which is too slow. "Resolve" could be a useful way to cache dependencies between different views if it had the option not to wait on promises, but the current behaviour, of always resolving them before the state change starts makes it almost useless, IMO. It should be avoided for any dependencies that involve async loads. Should I really stop using resolve to load some data and rather start loading them in the corresponding controller directly? So that I can update the corresponding view as long as the job is executed and in the place I want in the view, not globally. Now provide the response and nothing else.
You can use a directive that listens on $routeChangeStart and for example shows the element when it fires:

app.directive('showDuringResolve', function($rootScope) {
  return {
    link: function(scope, element) {
      element.addClass('ng-hide');

      var unregister = $rootScope.$on('$routeChangeStart', function() {
        element.removeClass('ng-hide');
      });

      scope.$on('$destroy', unregister);
    }
  };
});

Then you place it on the specific view's loader, for example:

View 1:

<div show-during-resolve class="alert alert-info">
  <strong>Loading.</strong> Please hold.
</div>

View 2:

<span show-during-resolve class="glyphicon glyphicon-refresh"></span>

The problem with this solution (and many other solutions for that matter) is that if you browse to one of the routes from an external site there will be no previous ng-view template loaded, so your page might just be blank during resolve. This can be solved by creating a directive that will act as a fallback-loader. It will listen for $routeChangeStart and show a loader only if there is no previous route. A basic example:

app.directive('resolveLoader', function($rootScope, $timeout) {
  return {
    restrict: 'E',
    replace: true,
    template: '<div class="alert alert-success ng-hide"><strong>Welcome!</strong> Content is loading, please hold.</div>',
    link: function(scope, element) {

      $rootScope.$on('$routeChangeStart', function(event, currentRoute, previousRoute) {
        if (previousRoute) return;

        $timeout(function() {
          element.removeClass('ng-hide');
        });
      });

      $rootScope.$on('$routeChangeSuccess', function() {
        element.addClass('ng-hide');
      });

    }
  };
});

The fallback loader would be placed outside the element with ng-view:

<body>
  <resolve-loader></resolve-loader>
  <div ng-view class="fadein"></div>
</body>

Demo of it all: http://plnkr.co/edit/7clxvUtuDBKfNmUJdbL3?p=preview
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24200909', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/985949/']}
jdg_82356
stackexchange
llm_judgeable_groundtruth_similarity
9497120
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Need a Java-based solution or, at the worst, command-line for Linux. I tried to use Ghostscript:

gs -sDEVICE=pdfwrite -dPDFA -dBATCH -dNOPAUSE -dUseCIEColor \
   -sProcessColorModel=DeviceCMYK -sPDFACompatibilityPolicy=1 \
   -sOutputFile=downgraded.pdf leon_range_my12_w22_brochure.pdf

but I got a lot of errors... Now provide the response and nothing else.
Here's an example of how you can downsample all (color, gray and mono) images to 72dpi with a Ghostscript commandline:

gs \
  -o downsampled.pdf \
  -sDEVICE=pdfwrite \
  -dDownsampleColorImages=true \
  -dDownsampleGrayImages=true \
  -dDownsampleMonoImages=true \
  -dColorImageResolution=72 \
  -dGrayImageResolution=72 \
  -dMonoImageResolution=72 \
  -dColorImageDownsampleThreshold=1.0 \
  -dGrayImageDownsampleThreshold=1.0 \
  -dMonoImageDownsampleThreshold=1.0 \
   input.pdf

Update: The *ImageDownsampleThreshold=1.0 parameters enforce that all images with a resolution higher than the target resolution of 72 dpi will be downsampled. If this parameter is not given (or set to a different value), the default values will be used: *ImageDownsampleThreshold=1.5. This default value will only downsample images with a value of 108 dpi (or higher) and leave the other ones untouched.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9497120', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/866185/']}
jdg_82357
stackexchange
llm_judgeable_groundtruth_similarity
25483735
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my current application I need to maintain one config.properties file, and from this properties file I need to get the data in my Java file. I have placed the properties file and ConfigUtil.java (which accesses the values in that properties file) in the same location. But when I am running the application it's giving FileNotFoundException. I am not able to figure out why this is not loading the properties file when both are inside the same folder. My ConfigUtil.java code is:

public class ConfigUtil {

    private Properties properties;
    InputStream inputStream = null;

    public Properties getUrl() {
        properties = new Properties();
        try {
            inputStream = new FileInputStream("config.properties");
            properties.load(inputStream);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (inputStream != null) {
                try {
                    inputStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
        return properties;
    }
}

and config.properties is also in the same folder.
Location of config.properties is: /app/src/main/java/config.properties
Location of ConfigUtil.java is: /app/src/main/java.configutils.java

Now provide the response and nothing else.
Step 1: Create a .properties file in the assets folder; if you don't have an assets folder, please create one under main.

config.properties

name=User Name
age=User Age
ok=Click

Step 2: Create a Util.java file to read the properties file.

Util.java

package javaant.com.propertiesfile;

import android.content.Context;
import android.content.res.AssetManager;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

/**
 * Created by Nirmal Dhara on 12-07-2015.
 */
public class Util {

    public static String getProperty(String key, Context context) throws IOException {
        Properties properties = new Properties();
        AssetManager assetManager = context.getAssets();
        InputStream inputStream = assetManager.open("config.properties");
        properties.load(inputStream);
        return properties.getProperty(key);
    }
}

Step 3: Use the variables from the properties file by calling Util.getProperty(,getApplicationContext()).

MainActivity.java

package javaant.com.propertiesfile;

import android.app.Activity;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.view.ViewGroup;
import android.widget.Button;
import android.widget.ImageButton;
import android.widget.TextView;

import java.io.IOException;

public class MainActivity extends Activity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // read values from the properties file
        TextView txtname = (TextView) findViewById(R.id.txtname);
        TextView txtage = (TextView) findViewById(R.id.txtage);
        Button btnok = (Button) findViewById(R.id.btnok);

        try {
            txtname.setText(Util.getProperty("name", getApplicationContext()));
            txtage.setText(Util.getProperty("age", getApplicationContext()));
            btnok.setText(Util.getProperty("ok", getApplicationContext()));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Please download the complete code from here: http://javaant.com/how-to-use-properties-file-in-android/#.VwzippN96Hs
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25483735', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2775139/']}
jdg_82358
stackexchange
llm_judgeable_groundtruth_similarity
41511
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to Deutsche Welle , Austria's Chancellor, Sebastian Kurz, has announced that there will be new elections after the vice chancellor's involvement in the corruption scandal: Austrian Chancellor Kurz has requested new elections after a corruption scandal brought down his vice chancellor. Footage showed the far-right deputy allegedly offering contracts to a supposed Russian investor. Kurz praised the fact that his Austrian People's Party (ÖVP) had been able to see through many of its campaign promises with the help of Strache's far-right Freedom Party of Austria (FPÖ), but the chancellor said that "enough is enough" after the latest FPÖ embarrassment. I am wondering why go so far (new elections) instead of simply replacing the vice chancellor. Or if the alliance can no longer be maintained, why not negotiate to create another alliance? Question: Why isn't Austria's vice chancellor simply replaced instead of going for new elections? Now provide the response and nothing else.
For context: the video in question showed Strache and a colleague of his with a woman purporting to be the niece of a Russian oligarch. She promised to influence the election in their favor and in turn wanted to gain work orders from the state, as well as take over a popular, far-right newspaper which could then further help the FPÖ in the election. Strache also spread rumors about Kurz concerning sex and drugs, and talked about illegal campaign finances. Kurz explained his reasoning in detail. The important parts of his speech (translated from the German):

"In terms of substance, we have implemented over the last two years exactly what we promised in the election campaign... But you have probably all also followed how, over the last two years, I had to be prepared to endure a lot and put up with a lot for these substantive successes: from the 'rat poem', to the closeness to far-right extremist groups, to the ever-recurring 'isolated cases'. [...] Nevertheless, for the sake of getting the work done, I did not terminate the coalition at the first misstep. But after yesterday's video I must say quite honestly: enough is enough. [...] What is said about me in this video, from insults to very crude accusations and insinuations, is all really beside the point. [...] What is truly serious and problematic, however, are the ideas about abuse of power, the ideas about how to handle Austrian taxpayers' money, and of course the attitude towards the media landscape in our country. With this behaviour the FPÖ damages the reform project and the path of change. It also damages the reputation of our country... Above all – and from my point of view this is the worst part – in today's talks with several representatives of the Freedom Party I did not get the impression that there is a genuine will to change the Freedom Party at all levels, beyond the two resignations. And I believe that, after the events of the last few days, that would be more than necessary... [...] I believe that this is currently not possible with anyone. The FPÖ cannot do it, the Social Democrats do not share my substantive positions, and the small parties are too small to really be able to provide support."

To summarize, Kurz is happy about the work he did with the FPÖ, but he had to endure a lot – racism, closeness to far-right extremists, other recurring issues. Despite this, he wanted to keep working with the FPÖ until the video. The insults and insinuations in the video are only incidental; what is problematic for him is the abuse of power, the handling of taxpayer money, and the handling of the press. This – according to Kurz – hurts the image of his country. Additionally, he did not get the impression that members of the FPÖ want to change the party for the better. As to why he didn't negotiate a new alliance: the FPÖ is out, the Social Democrats don't share his political ideas, and the other parties are too small. According to Die Zeit, there were some negotiations about keeping the coalition, but Kurz wanted the far-right minister of the interior, Herbert Kickl, to step down as well. Kickl, like the FPÖ in general, had ties to Putin, which may have been one of the problems; Germany, for example, reduced its cooperation with Austrian intelligence services out of fear that information would be passed to Russia.
There is some speculation that the ÖVP could take over enough voters of the disgraced FPÖ to govern without them. Keeping the coalition, on the other hand, could further associate the ÖVP with the far-right and corrupt FPÖ and thus hurt them. tl;dr According to Kurz himself, the problem was that the corruption of high-level FPÖ politicians hurt the image of Austria, and that the FPÖ had little interest in changing this image. Kurz cannot find a new coalition because the only party large enough does not share his political ideas. The breaking of the coalition with the FPÖ can also help prevent further damage to the ÖVP as well as Austria because of their association with the far-right and corrupt FPÖ.
{}
{'log_upvote_score': 5, 'links': ['https://politics.stackexchange.com/questions/41511', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/11278/']}
jdg_82359
stackexchange
llm_judgeable_groundtruth_similarity
51492
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Heatwaves have been in the news in recent years. I found it amazing to consider that the energy from the Sun is enough to kill you, when only a tiny fraction of its radiation reaches Earth. Then I wondered, what fraction of the Sun's energy reaches Earth, exactly? I think the way to approach this is to answer: What fraction of the Sun's sky does the Earth take up? Can anyone tell me what the answer is? How do you calculate it? Is this the right way to approach the question? Now provide the response and nothing else.
The calculation of the apparent size of Earth as seen from the Sun is pretty straightforward. Just consider the triangle formed by your observation point, the center of the object, and a point on its rim; that triangle has a 90° angle at the center of the object you view. You can apply the tangent law (actually a sine, as the Sun is a sphere and not a disk) to the angle between the line to the center and the line to the rim of the object: $$ \tan \alpha^\prime = \frac{r}{d}$$ For small angles the tangent is nearly identical to the angle itself, so you can approximate the viewing angle $\alpha = 2\alpha^\prime$ of an object with diameter $2r$ as $$ \alpha \approx \frac{2r}{d}.$$ This angle of course is given in radians; if you want to convert it to degrees you have to apply the usual conversion $360° = 2\pi$. For Earth ( $r = 6371\,\mathrm{km}$ ) seen from the Sun ( $d=149{,}600{,}000\,\mathrm{km}$ ) you get a half-angle of $$\alpha^\prime = 4.259\cdot 10^{-5}\,\mathrm{rad} = 0.00244° = 8.78'',$$ i.e. a full viewing angle of about $17.6''$. Compare a disk of angular radius $\alpha^\prime$ to the whole sphere of the sky ( $4\pi$ steradians): that is a tiny fraction. But knowing the solar constant at Earth ( $S = 1370\,\mathrm{W/m^2}$ ) and Earth's cross-section $A=\pi r^2$, this allows us to calculate the overall energy output $L$ (called luminosity in astrophysics) radiated by the Sun: $$L = \frac{4\pi}{\pi(\alpha^\prime)^2} \cdot \left( S\cdot A\right) = 3.85\cdot 10^{26}\,\mathrm{W}.$$ That number, obtained using only ballpark figures, corresponds surprisingly well with the value found on Wikipedia, $3.82\cdot 10^{26}\,\mathrm{W}$. For perspective: the primary energy production and consumption of the whole world in the year 2021 was around 14,800 million tonnes of oil equivalent, each million tonnes corresponding to 11.63 TWh. Break that down and it means that Earth alone, subtending this tiny viewing angle from the Sun, already receives about 9000x as much energy from the Sun as the whole of humankind consumes. And the Sun as a whole produces 2 billion times more energy than Earth receives, every second. Thus if we could tap and store the complete solar output for just one second, we'd have enough to cover the world's current energy consumption for about $2\cdot 10^{13}$ seconds, i.e. roughly 600,000 years.
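For readers who want to check the arithmetic, here is a short script reproducing the ballpark numbers above (same constants as in the answer; the printed values are approximate):

import math

r_earth = 6371e3        # Earth's radius in m
d = 149.6e9             # Sun-Earth distance in m
S = 1370.0              # solar constant in W/m^2

alpha_half = r_earth / d                     # half-angle in radians (small-angle approximation)
print(alpha_half)                            # ~4.26e-5 rad
print(math.degrees(alpha_half) * 3600)       # ~8.8 arcseconds

A = math.pi * r_earth**2                     # Earth's cross-section in m^2
P_earth = S * A                              # power intercepted by Earth, ~1.7e17 W
L_sun = 4 * math.pi / (math.pi * alpha_half**2) * P_earth
print(L_sun)                                 # ~3.85e26 W, the solar luminosity
print(L_sun / P_earth)                       # ~2.2e9, the "2 billion" factor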
{}
{'log_upvote_score': 5, 'links': ['https://astronomy.stackexchange.com/questions/51492', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/48582/']}
jdg_82360
stackexchange
llm_judgeable_groundtruth_similarity
347750
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Any idea why the following doesn't work? It hangs with no output. desktop$ ssh myserver "sudo ausearch -k my_key" However, the following works. It outputs the auditing history of this key from auditd. desktop$ ssh myservermyserver$ sudo ausearch -k my_key The following also works. (Meaning, sudo is not currently set to require a password.) desktop$ ssh myserver "sudo ls" Now provide the response and nothing else.
Since you log on as a normal user you might not have /sbin in your $PATH, which means that ausearch might not be found. To test this, specify /sbin/ausearch manually on your command line. For some commands you also require ssh to acquire a tty; you accomplish this with the -t flag, so to try this out, type:

ssh -t myserver "sudo /sbin/ausearch -k my_key"

To fully emulate a logged-in session you can also call sudo with the -i flag, and then you can probably omit the /sbin (since it worked in your logged-in session), like this:

ssh -t myserver "sudo -i ausearch -k my_key"
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/347750', 'https://serverfault.com', 'https://serverfault.com/users/54795/']}
jdg_82361
stackexchange
llm_judgeable_groundtruth_similarity
44751469
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just noticed this lint error: Call requires API Level 24 (current min is 19) java.util.map#foreach when I use the extension function forEach on a MutableMap in Kotlin.This didn't happen when I wrote the line, but its there now.And I'm not seeing this error on my other machine. Now provide the response and nothing else.
What you are using is not kotlin.collections.MutableMap.forEach. What you are using seems to be Map.forEach from Java 8. Refer to this article: http://blog.danlew.net/2017/03/16/kotlin-puzzler-whose-line-is-it-anyways/ This seems to be a common mistake. Java 8 is well supported from Android API level 24. In short, do not use forEach like

map.forEach { t, u -> Log.i("tag", t + u) }

(this is the Java 8 API, which does not work for Android API level <= 23). Use it like

map.forEach { (t, u) -> Log.i("tag", t + u) }

(this is the Kotlin API). Note the difference: not two parameters, but just one parameter that is a pair.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/44751469', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4449252/']}
jdg_82362
stackexchange
llm_judgeable_groundtruth_similarity
48066237
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to output a table and it's content which can be updated via Ajax.So I'm planning to use vue-tables-2 ( https://github.com/matfish2/vue-tables-2 ) which is a Vue.js plugin. Using the Laravel way, I'm writing a Vue.js custom component, so how can I use the vue-tables-2 plugin inside my component? Here an example of how to use the plugin https://jsfiddle.net/matfish2/jfa5t4sm/ . Unfortunately in the example is not wrapping the plugin inside a component. Now provide the response and nothing else.
You have two ways available to make a third-party component available to your custom Vue component:

1. Import (ES6) and use locally

In your component's script block, put this on top:

import { ServerTable, ClientTable, Event } from 'vue-tables-2'

In your component VM, add this to the components property:

export default {
  data () {
    return {
      /* data properties here */
    }
  },
  components: {
    ServerTable,
    ClientTable,
    Event
  }
}

You can now use <v-server-table>, <v-client-table>, etc. in your component template.

2. Import (ES6) globally in your application entry point:

import { ServerTable, ClientTable, Event } from 'vue-tables-2'

Then make those parts of vue-tables-2 that your application repeatedly needs available to your main Vue file and all child components:

Vue.use(ClientTable, [options = {}], [useVuex = false], [theme = 'bootstrap3'], [template = 'default']);

Or/And:

Vue.use(ServerTable, [options = {}], [useVuex = false], [theme = 'bootstrap3'], [template = 'default']);

This can also be found on the vue-tables-2 documentation GitHub page.

Note: When you are not using a build system like webpack in your Vue application, there's a third way:

3. Make globally available when not using webpack or the likes

Put this in your HTML before including your application script:

<script src="/path/to/vue-tables-2.min.js"></script>

This will expose a global VueTables object, so in your application entry point you can

Vue.use(VueTables.ClientTable);

If you use the global way, you don't have to declare those third-party components in the components object of your custom component.

Why would I pick either method over the other?

Importing globally has the advantage that you write less code and follow the DRY principle (don't repeat yourself). So this does make sense if your whole application at many points needs that plugin/third-party Vue component. It does, though, have the downside that it makes your custom components harder to reuse across several applications/projects, because they no longer declare their own dependencies. Also, if your own custom components at some point get removed from your application for whatever reason, your application will still include the vue-tables-2 package, which might not be needed any more. In this scenario it will uselessly increase your bundle size.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48066237', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1190011/']}
jdg_82363
stackexchange
llm_judgeable_groundtruth_similarity
17282155
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was recently asked during an interview to write, using just bit shift operators, some code that would tell you if a number is divisible by 8. Apparently the code is very short - does anyone have a clue? Note: if you aren't required to use shifts, you'd test the low n bits for being all zero with a mask like x&7 == 0 , or more generally x & ((1UL << n) - 1) == 0 . How can I tell if a number is a multiple of four using only the logic operator AND? Now provide the response and nothing else.
With any integer represented in binary, the remainder of division by any power of two is simply the value of the low-order bits, so 0b11001110 divided by 0b1000 has remainder 0b110. So in order to check for divisibility by 8 you need to check whether the three low-order bits are all zero: if (((x >> 3) << 3) == x) divisibleBy8 = true; Right-shifting clears the bottom three bits, the left shift restores the magnitude, and the result is then compared to the original number. As others have pointed out, if you know the bit width of the integer you can do this: if (!(x << 29)) divisibleBy8 = true; Replace that 29 by 61 for 64-bit integers, etc. Apparently in Java you can do this: if ((x << -3) == 0) divisibleBy8 = true; (note the comparison against 0: the shifted value is zero exactly when the low three bits are zero), because negative shifts such as -3 are interpreted as bit_width - 3, so it works with both 32- and 64-bit integers. (You don't need all the brackets; I've included them for clarity.) Just for completeness: these are all pretty bad ways to test for divisibility by 8. Doing if (!(x & 7)) is clearer and almost certainly as fast or faster.
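A small C sketch pulling the shift and mask variants together (the function names and test values are mine, purely for illustration):

#include <stdio.h>
#include <stdint.h>

/* Divisible by 8 using only shifts: shifting right then left zeroes the low
   three bits, so the result equals n only when those bits were already 0. */
static int divisible_by_8_shift(uint32_t n) {
    return ((n >> 3) << 3) == n;
}

/* The plain mask check, for comparison. */
static int divisible_by_8_mask(uint32_t n) {
    return (n & 7u) == 0;
}

int main(void) {
    uint32_t tests[] = { 0, 7, 8, 24, 100, 1024 };
    for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; i++)
        printf("%u -> shift: %d, mask: %d\n", (unsigned) tests[i],
               divisible_by_8_shift(tests[i]), divisible_by_8_mask(tests[i]));
    return 0;
}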
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17282155', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1285948/']}
jdg_82364
stackexchange
llm_judgeable_groundtruth_similarity
2115214
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using link_to with an image_tag like the following: <%= link_to image_tag("Search.png", :border=>0, :class => 'dock-item'), :action => 'search', :controller => 'pages'%><span>Search</span></a> which results in the following HTML: <a href="/pages/search"><img alt="Search" border="0" class="dock-item" src="/images/Search.png?1264132800" /></a><span>Search</span></a> I want the class="dock-item" to go to the <a> tag instead of the img tag. How can I change this? Update: <%= link_to image_tag("Search.png", :border=>0), :action => 'search', :controller => 'pages', :class => 'dock-item' %> results in <a href="/pages/search?class=dock-item"><img alt="Search" border="0" src="/images/Search.png?1264132800" /></a> Now provide the response and nothing else.
Hi, you can try doing this: link_to image_tag("Search.png", border: 0), {action: 'search', controller: 'pages'}, {class: 'dock-item'} or even link_to image_tag("Search.png", border: 0), {action: 'search', controller: 'pages'}, class: 'dock-item' Note that the position of the curly braces is very important, because if you miss them out, Rails will assume they form a single hash of parameters (read more about this here ). According to the API for link_to : link_to(name, options = {}, html_options = nil) the first parameter is the string to be shown (or it can be an image_tag as well); the second is the parameter for the URL of the link; the last item is the optional parameter for declaring the HTML options, e.g. class, onchange, etc. Hope it helps! =)
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2115214', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/44286/']}
jdg_82365
stackexchange
llm_judgeable_groundtruth_similarity
32367041
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My app has a toolbar that should be present on every view. Currently, I do the following in my onCreate() method for each Activity I have: Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);setSupportActionBar(toolbar); Does this need to be done in every onCreate() method in every Activity or is there a simpler way? Also, as a side question, how can I implement a "back" feature in the toolbar that takes the user back one action if they click it? Now provide the response and nothing else.
Create a base class for your activities: public abstract class BaseActivity extends AppCompatActivity { Toolbar toolbar; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(getLayoutResource()); configureToolbar(); } protected abstract int getLayoutResource(); private void configureToolbar() { toolbar = (Toolbar) findViewById(R.id.toolbar); if (toolbar != null) { setSupportActionBar(toolbar); getSupportActionBar().setDisplayHomeAsUpEnabled(true); } } @Override public boolean onOptionsItemSelected(MenuItem item) { switch (item.getItemId()) { case android.R.id.home: FragmentManager fm = getSupportFragmentManager(); if (fm != null && fm.getBackStackEntryCount() > 0) { fm.popBackStack(null, FragmentManager.POP_BACK_STACK_INCLUSIVE); } else { finish(); } return true; default: return super.onOptionsItemSelected(item); } }} Then have each Activity extend this BaseActivity to get the Toolbar and the back behaviour. Lastly, don't forget to include the Toolbar in each activity layout. Edit: Override the getLayoutResource() method in each Activity and return that activity's layout id. public class MainActivity extends BaseActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); } @Override public int getLayoutResource() { return R.layout.activity_main; }}
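For reference, a minimal layout sketch that each activity layout could include (the file name, theme attributes and the LinearLayout wrapper are assumptions for illustration, not taken from the question):

<!-- res/layout/activity_main.xml (sketch) -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <android.support.v7.widget.Toolbar
        android:id="@+id/toolbar"
        android:layout_width="match_parent"
        android:layout_height="?attr/actionBarSize"
        android:background="?attr/colorPrimary" />

    <!-- the rest of the screen's content goes here -->

</LinearLayout>

A common refinement is to put the Toolbar in its own toolbar.xml and pull it into every layout with <include layout="@layout/toolbar" />, so the id always matches the findViewById(R.id.toolbar) call in BaseActivity.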
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32367041', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5294977/']}
jdg_82366
stackexchange
llm_judgeable_groundtruth_similarity
309651
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to development on the iPhone. Just about every sample project (as well as the default project templates) has one or more delegates and controllers. Can someone give me a breakdown of what the delegates are responsible for vs. what the controllers are supposed to do? Now provide the response and nothing else.
The simplest way I can think of to differentiate the two is: A delegate is a protocol (interface) that defines methods that an object implements in order to receive specific messages from other objects. Delegate objects are most often used to receive asynchronous callbacks such as user input or I/O. A controller is an object that usually contains UI elements (views, controls, etc.) and data, and it both receives and sends messages to the various objects within it. In many cases, a controller is also a delegate and can implement several delegate protocols to receive events from multiple objects. Keep in mind that many UI elements and controls let you pass events back to the controller by linking them to an IBAction method in Interface Builder. This is very handy as it doesn't require extra code to implement delegates. However, some other APIs such as ABPeoplePickerNavigationController or NSURLConnection have no visualization in Interface Builder and so must use delegates to handle their events.
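An illustrative Objective-C sketch of the relationship (all names here are invented for the example; this is not an actual framework API):

#import <UIKit/UIKit.h>

// The delegate side: just a protocol listing the callback methods.
@protocol DownloaderDelegate <NSObject>
- (void)downloaderDidFinish:(NSData *)data;
@end

// Some object that does asynchronous work and reports back via its delegate.
@interface Downloader : NSObject
@property (nonatomic, assign) id<DownloaderDelegate> delegate; // use weak under ARC
- (void)start;
@end

@implementation Downloader
- (void)start {
    // real code would do the work asynchronously; here we call back immediately
    [self.delegate downloaderDidFinish:[NSData data]];
}
@end

// The controller side: owns views and data, and adopts the delegate protocol.
@interface MyViewController : UIViewController <DownloaderDelegate>
@end

@implementation MyViewController
- (void)downloaderDidFinish:(NSData *)data {
    // update the views this controller owns with the received data
}
@end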
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/309651', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/678/']}
jdg_82367
stackexchange
llm_judgeable_groundtruth_similarity
17295169
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I want to extract the type of a const reference (like double from const double&), do I have to use : typename std::remove_cv<typename std::remove_reference<Type>::type>::type or typename std::remove_reference<typename std::remove_cv<Type>::type>::type ? Now provide the response and nothing else.
Use remove_reference first. remove_cv removes only top-level cv-qualifiers, and a reference type has none (cv-qualification applied to a reference really applies to the referenced type, so remove_cv on const double& leaves it unchanged). An example that shows the difference: #include <iostream>#include <type_traits>template<typename T>using Remove_cv_ref = std::remove_cv<typename std::remove_reference<T>::type>;template<typename T>using Remove_ref_cv = std::remove_reference<typename std::remove_cv<T>::type>;int main(){ std::cout << std::is_same<typename Remove_cv_ref<const int&>::type, int>::value; // 1 std::cout << std::is_same<typename Remove_ref_cv<const int&>::type, int>::value; // 0} Live demo.
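If you need this in more than one place, a small alias template bakes the correct order in once; C++11 is enough, and the name deliberately mirrors what C++20 later standardized as std::remove_cvref_t:

#include <type_traits>

template <typename T>
using remove_cvref_t =
    typename std::remove_cv<typename std::remove_reference<T>::type>::type;

static_assert(std::is_same<remove_cvref_t<const double&>, double>::value,
              "const and reference both stripped");
static_assert(std::is_same<remove_cvref_t<volatile int&&>, int>::value,
              "also handles rvalue references and volatile");

int main() {}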
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17295169', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/882932/']}
jdg_82368
stackexchange
llm_judgeable_groundtruth_similarity
3672289
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a very simple method scheduled to run every 10 seconds like this: @Component public class SimpleTask { @Scheduled(fixedRate=10000) public void first() { System.out.println("Simple Task " + new Date()); }} Config: <task:annotation-driven executor="myExecutor" scheduler="myScheduler" /><task:executor id="myExecutor" pool-size="5" /> <task:scheduler id="myScheduler" pool-size="10" /> My problem is that my method is being invoked 3 times every 10 seconds. It should be invoked just once. What am I doing wrong? I use Spring Source ToolSuite with SpringSource tc Server 6. Now provide the response and nothing else.
I had this same problem. One of the causes is a bug in Spring 3.0.0. I upgraded to 3.0.5 and the repetition went down to only two. The other cause was that my class with the @Scheduled method was getting instantiated twice. This happened because the context config was getting loaded twice. In web.xml I was pointing my ContextLoaderListener and DispatcherServlet at the same context config file: ...<listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class></listener><servlet><servlet-name>spring</servlet-name><servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class><init-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/applicationContext.xml</param-value></init-param><load-on-startup>1</load-on-startup></servlet>... WEB-INF/applicationContext.xml is the default context config for the ContextLoaderListener. So make sure that your ContextLoaderListener and your DispatcherServlet are using different context files. I ended up creating a /WEB-INF/spring-servlet.xml without any bean definitions and it worked flawlessly.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3672289', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/439203/']}
jdg_82369
stackexchange
llm_judgeable_groundtruth_similarity
46803441
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was wondering what is the simplest way (ideally with CSS) to achieve this effect where the viewport rotates 180 degrees. Example here: https://www.zakgroup.co.uk/projects/agenzia-indipendente-di-ricerca (Note: This site is no longer live so don't waste your time looking for it like I did.) Now provide the response and nothing else.
First of all, I am not sure Nginx is supposed to respect the http_proxy and https_proxy variables; I didn't find any documentation about it. So I assume your issue is related to Nginx not using a proxy at all. That means you need something which actually uses the proxy. This is where socat comes to the rescue. Running a socat forwarder: If you have a transparent proxy, then run socat TCP4-LISTEN:8443,reuseaddr,fork TCP:<proxyserver>:<proxyport> And if you have a CONNECT proxy, then use socat TCP4-LISTEN:8443,reuseaddr,fork PROXY:yourproxy:backendserver:443,proxyport=<yourproxyport> Then in your nginx config use location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host-Real-IP $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-Pcol http; proxy_intercept_errors on; proxy_set_header Host backend.mygreat.server.com; proxy_pass https://127.0.0.1:8443; proxy_redirect https://backend.mygreat.server.com https://mygreat.server.com; } You probably want to use a systemd service to launch socat, so it runs on startup and is handled as a service.
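A minimal systemd unit sketch for that last step (the unit name, binary path and proxy address are assumptions; adjust them to your environment):

# /etc/systemd/system/socat-proxy-forwarder.service
[Unit]
Description=socat forwarder so nginx can reach its upstream through the proxy
After=network-online.target
Wants=network-online.target

[Service]
# Transparent-proxy variant; swap in the PROXY:... form for a CONNECT proxy.
ExecStart=/usr/bin/socat TCP4-LISTEN:8443,reuseaddr,fork TCP:proxy.example.com:3128
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now socat-proxy-forwarder.service.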
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46803441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8793754/']}
jdg_82370
stackexchange
llm_judgeable_groundtruth_similarity
28444556
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have just moved to Laravel 5 and I'm receiving errors from Laravel as an HTML page. Something like this: Sorry, the page you are looking for could not be found. 1/1 NotFoundHttpException in Application.php line 756: Persona no existe in Application.php line 756 at Application->abort('404', 'Person doesnt exists', array()) in helpers.php line When I worked with Laravel 4 everything was fine: the errors were in JSON format, so I could parse the error message and show a message to the user. An example of a JSON error: {"error":{"type":"Symfony\\Component\\HttpKernel\\Exception\\NotFoundHttpException","message":"Person doesnt exist","file":"C:\\xampp\\htdocs\\backend1\\bootstrap\\compiled.php","line":768}} How can I achieve that in Laravel 5? Sorry for my bad English, thanks a lot. Now provide the response and nothing else.
I came here earlier searching for how to throw JSON exceptions anywhere in Laravel and the answer set me on the correct path. For anyone that finds this searching for a similar solution, here's how I implemented it app-wide: Add this code to the render method of app/Exceptions/Handler.php if ($request->ajax() || $request->wantsJson()) { return new JsonResponse($e->getMessage(), 422);} Add this to the method to handle objects: if ($request->ajax() || $request->wantsJson()) { $message = $e->getMessage(); if (is_object($message)) { $message = $message->toArray(); } return new JsonResponse($message, 422);} And then use this generic bit of code anywhere you want: throw new \Exception("Custom error message", 422); It will convert all errors thrown during an Ajax request into JSON responses, ready to be used any which way you want :-)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28444556', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4471534/']}
jdg_82371
stackexchange
llm_judgeable_groundtruth_similarity
5387
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Boggle is a word game played with 16 dice and a 4x4 tray. This question is inspired by a Stack Overflow question about Boggle that I decided to solve using Mathematica . In addition to Mathematica , I thought I'd use some of the new version's graph-related functionality, which I've never really explored in the past. It seemd like a natural fit, but the further I got into my program, the more awkward everything seemed. First, the board from the orginal question: F X I EA M L OE W B XA S T U I use a mix of features to turn this into a Graph ; with ImportString I was able to copy and past from my browser directly into a string literal in my notebook. makeBoggleBoard[s_String] := makeBoggleBoard@ImportString[s]makeBoggleBoard[mat : {{__String} ..}] /; MatrixQ[mat] := With[{dims = Dimensions@mat, cPatt = {_Integer, _Integer}, dPatt = Alternatives[{1, 0}, {0, 1}, {1, -1}, {1, 1}] }, With[{coords = Tuples[Range /@ dims]}, With[{ vertexRules = Thread[coords -> Thread[{Range@Length@coords, Flatten@mat}]], edgePattern = {c1 : cPatt, c2 : cPatt} /; MatchQ[c1 - c2, dPatt] }, Graph@Cases[Tuples[coords, 2], c : edgePattern :> ((UndirectedEdge @@ c) /. vertexRules)]]]] This already seems a little bit clunky. Most of the difficulty comes about from trying to associate a letter with each vertex in my graph while keeping all the vertices distinct. However, this mostly seems to work well. Also, it will help to winnow the dictionary so it only contains valid words (they must have more than three letters and only use letters on the board). This part, at least, is pretty easy and quick: makeBoggleDictionary[board_Graph] := With[{chars = ToLowerCase@DeleteDuplicates@(VertexList@board)[[All, -1]]}, DictionaryLookup[chars ~~ chars ~~ chars ~~ (chars ...)]]; Now it's time to traverse the graph, finding all the words along the way. We need to traverse each possible path from each vertex, check to see if the path so far spells a word, and if it does, collect it. In order to keep performance reasonably in hand, we want to cull paths that can't possible spell a word as quickly as possible. Here's the function I came up with: findWordsInBoggleBoard[graph_Graph, dict : {__String}] := With[{ makeWord = ToLowerCase@StringJoin[#[[All, 2]]] &, lookup = Function[pattern, Flatten[StringCases[dict, StartOfString ~~ pattern ~~ EndOfString]]] }, Module[{extendPaths}, extendPaths[v_, {}] := With[{adj = DeleteCases[VertexList@NeighborhoodGraph[graph, v], v]}, Join @@ (extendPaths[#, {{v}}] & /@ adj)]; extendPaths[v_, paths_] := Module[{ extended = Append[#, v] & /@ paths, nexts, strings, feasible, adj = DeleteCases[VertexList@NeighborhoodGraph[graph, v], v]}, strings = makeWord /@ extended; Scan[ Sow, lookup[Alternatives @@ strings] ]; feasible = Pick[ extended, Function[string, MatchQ[ Select[lookup[string ~~ __], StringLength@# >= 3 &], {__String}]] /@ strings]; nexts = DeleteCases[ {#, Select[feasible, Function[path, FreeQ[path, #]]]} & /@ adj, {{_, _}, {}}]; extendPaths @@@ nexts ]; Reap[Scan[extendPaths[#, {}] &, VertexList@graph]] /. {{Null, {}} -> {}, {Null, {words : {__String}}} :> Union@words}]] The performance is sort of acceptable (it takes about 3 seconds to find all the words in the sample board), but the entire approach I'm taking here seems very ugly. 
In particular, the repeated use of NeighborhoodGraph to find adjacent vertices in the recursion for extendPaths seems faintly ridiculous, and the whole approach feels quite low-level compared to some of the other graph functions. Can anyone suggest some possible ways to speed this up? EDIT : Part of what I'm interested in seeing is whether Mathematica's graph functions are a good fit for this problem, though of course I'm happy to see the good, fast implementations that people have posted. Now provide the response and nothing else.
Preview and comparative results The implementation below may be not the most "minimal" one, because I don't use any of the built-in functionality ( DictionaryLookup with patterns, Graph -related functions, etc), except the core language functions. However, it uses efficient data structures, such as Trie, linked lists, and hash tables, and arguably maximally avoids the overheads typical in Mathematica programming. The combined use of Trie, linked lists, and recursion allows the main function to copy very little. The use of trie data structure allows me to be completely independent of the system DictionaryLookup function. Why is this critical here? Because the nature of the problem makes only a single last letter important for the next traversal step, and constructing the whole word (containing all previous letters) just to check that it exists is a waste, and this is arguably the reason why other solutions are both much slower and do not scale so well.Also, the preprocessing step, while rather costly (takes about 6 seconds on my machine), has to be done only once, to initialize the "boggle engine" (moreover, the resulting trie can be stored in e.g. .mx file for later reuse, avoiding this overhead for subsequent uses), while in other posted solutions some preprocessing has to be done for every particular board. The main message I want to deliver is that, for the top-level Mathematica code, the choice of efficient data structures is crucial. Our Mathematica programming instincts demand that we reuse as much of the built-in functionality as possible, but one always has to question how well the existing functionality matches the problem. In this particular case, my opinion is that neither the built-in Graph - related functions nor the DictionaryLookup with patterns bring much to the table. To the opposite, these functions force us to use unnatural for this problem data representations and/or algorithms, and this is what leads to the slowdowns. I may be over-emphasizing this point, but this was exactly the essence of the question. Now, some timing comparisons (note that for the solution of @R.M., I had to include the pieces defining adjnodes , letters and dict variables, into the timing measurements): Board 4x4 (the original one): Pillsy 3.3 sec R.M. 1.4 sec L.S. 0.04 sec Board 5x5: "E I S H R B D O I O T R O E X Z U Y Q S I A S U M" Pillsy 18.8 sec R.M. 7.6 sec L.S. 0.05 sec Board 7x7 "E I E G E O T A O B A U R A N E I P L A Y O O I I C A T I I F U N L A S T I N G E W U H L E O X S" Pillsy 373.8 sec R.M. 191.5 sec L.S. 0.18 sec So, you can see that for larger boards, the difference between the running times is even more dramatic, hinting that the solutions have different computational complexities. I took the trouble to perform and present all these timings because I think that this problem is an important counterexample to the "conventional wisdom" to favor shorter implementations utilizing built-ins over the hand-written top-level mma code. While I agree that in general this is a good strategy, one has to always examine the case at hand. To my mind, this problem presents one notable exception to this rule. Implementation The following solution will not use Mathematica graphs, but will be about 100 times faster (than the timings you cite), and will rely on this post . 
I will borrow a function which builds the word tree from there: ClearAll[makeTree];makeTree[wrds : {__String}] := makeTree[Characters[wrds]];makeTree[wrds_ /; MemberQ[wrds, {}]] := Prepend[makeTree[DeleteCases[wrds, {}]], {} -> {}];makeTree[wrds_] := Reap[If[# =!= {}, Sow[Rest[#], First@#]] & /@ wrds, _, #1 -> makeTree[#2] &][[2]] Its use is detailed in the mentioned post. Now, here is a helper function which will produce rules for vertex number to letter conversion, and adjacency rules: Clear[getLetterAndAdjacencyRules];getLetterAndAdjacencyRules[letterMatrix_?(MatrixQ[#, StringQ] &)] := Module[{a, lrules, p, adjRules}, lrules = Thread[Range[Length[#]] -> #] &@Flatten[letterMatrix]; p = ArrayPad[ Partition[Array[a, Length[lrules]], Last@Dimensions@letterMatrix], 1 ]; adjRules = Flatten[ ListConvolve[{{1, 1, 1}, {1, 2, 1}, {1, 1, 1}}, p] /. Plus -> List /. {left___, 2*v_, right___} :> {v -> {left, right}} /. a[x_] :> x]; Map[Dispatch, {lrules, adjRules}] ]; It is pretty ugly but it does the job. Next comes the main function, which will find all vertex sequences which result in valid dictionary words: EDIT Apparently, there is a problem with Module -generated inner functions. I used Module in getVertexSequences initially, but, because in my benchmarks I happened to use a previous incarnation of it with a different name (where I did not yet modularize the inner functions), I did not see the difference. The difference is an order of magnitude slow-down . Therefore, I switched to Block , to get back the performance I claimed (You can replace back the Block with Module to observe the effect). This is likely related to this issue , and is something anyone should be aware of IMO, since this is quite insidious. END EDIT Clear[getVertexSequences];getVertexSequences[adjrules_, letterRules_, allTree_, n_] :=Block[{subF, f, getWordsForStartingVertex}, (* A function to extract a sub-tree *) subF[v_, tree_] := With[{letter = v /. letterRules}, With[{res = letter /. tree}, res /; res =!= letter]]; subF[_, _] := {}; (* Main function to do the recursive traversal *) f[vvlist_, {{} -> {}, rest___}] := f[Sow[vvlist], {rest}]; f[_, {}] := Null; f[vvlist : {last_, prev_List}, subTree_] := Scan[ f[{#, vvlist}, subF[#, subTree]] &, Complement[last /. adjrules, Flatten[vvlist]] ]; (* Function to post-process the result *) getWordsForStartingVertex[v_] := If[# === {}, #, Reverse[Map[Flatten, First@#], 2] ] &@Reap[f[{v, {}}, subF[v, allTree]]][[2]]; (* Call the function on every vertex *) Flatten[Map[getWordsForStartingVertex, Range[n]], 1]] At the heart of it, there is a recursive function f , which acts very simply. The vvlist variable is a linked list of already visited vertices. The second argument is a sub-tree of the main word tree, which corresponds to the sequence of already visited vertices (converted to letters. To understand better what the sub-tree is, see the mentioned post ). When the sub-tree starts with {} -> {} , this means (by the way word tree is constructed), that the sequence of vertices corresponds to a valid word, so we record it. In any case, if the subtree is not {} , we Scan our function recursively on adjacent vertices, removing from them those we already visited. The final functions we need are the one to convert vertex sequences to words, and the one to construct the trie data structure. Here they are: Clear[wordsFromVertexSequences];wordsFromVertexSequences[vseqs_List, letterRules_] := Map[StringJoin, vseqs /. 
letterRules];ClearAll[getWordTree];getWordTree[minLen_Integer: 1, maxLen : (_Integer | Infinity) : Infinity] := makeTree[ Select[ToLowerCase@DictionaryLookup["*"], minLen <= StringLength[#] <= maxLen &]]; The function to bring this all together: ClearAll[getWords];getWords[board_String, wordTree_] := getWords[ToLowerCase@ImportString@board, wordTree];getWords[lboard_, wordTree_] := Module[{lrules, adjrules}, {lrules, adjrules} = getLetterAndAdjacencyRules[lboard ]; wordsFromVertexSequences[ getVertexSequences[adjrules, lrules, wordTree, Times @@ Dimensions[lboard]], lrules ]]; Illustration First, construct a full tree of all words in a dictionary. This preprocessing step can take a little while: largeTree = getWordTree[]; Now, construct the word matrix: wmat = ToLowerCase@ImportString@ "F X I E A M L O E W B X A S T U" {{"f", "x", "i", "e"}, {"a", "m", "l", "o"}, {"e", "w", "b","x"}, {"a", "s", "t", "u"}} Next, construct the rules for vertex-to-letter conversion and adjacency rules: ({lrules,adjrules} = getLetterAndAdjacencyRules[wmat])//Short[#,3]& {Dispatch[{1->f,2->x,3->i,4->e,5->a,6->m,7->l,8->o,9->e,10->w,11->b, 12->x,13->a,14->s,15->t,16->u},-DispatchTables-], Dispatch[{1->{2,5,6},<<14>>,16->{11,12,15}},<<1>>]} We are now ready to use our function: (seqs = getVertexSequences[adjrules,lrules,largeTree,16])//Short//AbsoluteTiming {0.0185547,{{1,5},{1,5,2},{1,5,6,9},{1,6},<<89>>,{15,14}, {15,16,11},{15,16,11,14},{15,16,12}}} Note that it took very little time to get the result. We can finally convert it to words: wordsFromVertexSequences[seqs,lrules]//Short {fa,fax,fame,fm,xi,xml,xl,<<84>>,twas,tb,ts,tub,tubs,tux} The way to call a final function: (* Do this only once per session *)$largeTree = getWordTree[3];board = ToLowerCase@ImportString@"F X I E A M L O E W B X A S T U"getWords[board, $largeTree] {fax,fame,xml,imf,eli,elm,elma,<<59>>,stub,twa,twa,twas,tub,tubs,tux} (note that the result differs from that in illustration section, since I am now using the word tree with words with less than 3 letters excluded - using the $largeTree rather than largeTree now). Discussion Of course, I was a bit cheating in the sense that the preprocessing time takes a while, but this has to be done only once. My main point is that I think, the Trie data structure (my interpretation of it) is the right one here, and coupled with linked lists and hash tables ( Dispatch -ed rules), it leads to a rather simple solution. The essence of the solution is expressed in function f , which is just a few lines long and more or less self-documenting. And, also, the solution itself turns out quite fast (especially given that this uses just the top-level mma, no packed arrays, Compile , etc). EDIT 2 To address the question in your edit, and generally the question on applicability of Mathematica's new Graph functionality to this problem: I think, that while you can use new Graphs to solve the problem, it is not a natural choice here. I may be wrong, of course, but these are my reasons: The graph traversal you need for this problem does not fit directly into either one of DepthFirstScan and BreadthFirstScan built-in graph-traversal functions. Rather, it is a kind of enumeration of all possible depth-first traversals starting at a given vertex. Those traversals should stop as soon as it becomes clear that no words can be constructed by going to any of the adjacent vertices. This can be also achieved in DepthFirstScan through the use of Catch and Throw , but it is rather inelegant, and will also induce an overhead. 
The general ideology of DepthFirstScan and BreadthFirstScan is somewhat similar to a visitor design pattern used for a tree traversal. The idea is that the traversal is done for you, while you have to supply the functions to be called on tree (or graph) nodes. This approach works well when your traversal matches exactly the one implemented by the pattern. For example, most of the time, a tree is traversed depth-first. However, I had many chances to observe (in other languages) that as soon as I have to modify the traversal even slightly, using the tools like that creates more problems than it solves. The main question to ask yourself is this: does you traversal (sequence of visited vertices) depend on the content of the vertices (information you get during the traversal)? If yes, then it is more than likely that custom general traversal functions will not give you a good solution, because you then need more control over the way traversal is performed. The whole idea of visitor pattern (used for tree traversals) and the like is that you can separate the traversal itself from the information-processing during the traversal, and it's just not true for data-dependent traversals, where you can not really decouple traversal from the data-processing of the tree (or graph) nodes. I think that we should separate cases where graphs represent just a useful abstraction to think about the problem, from those where the problem can be solved by means of more or less standard graph-theoretical functionality (in particular that present in Mathematica), once it is reformulated in an appropriate way. The case at hand clearly looks to me like belonging to the first category.
{}
{'log_upvote_score': 7, 'links': ['https://mathematica.stackexchange.com/questions/5387', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/531/']}
jdg_82372
stackexchange
llm_judgeable_groundtruth_similarity
21895853
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an app that's using background downloads with the new NSURLSession APIs. When a download cancels or fails in such a way that NSURLSessionDownloadTaskResumeData is provided, I store the data blob so that it can be resumed later. A very small amount of the time I am noticing a crash in the wild: Fatal Exception: NSInvalidArgumentExceptionInvalid resume data for background download. Background downloads must use http or https and must download to an accessible file. The error occurs here, where resumeData is the NSData blob and session is an instance of NSURLSession : if (resumeData) { downloadTask = [session downloadTaskWithResumeData:resumeData]; ... The data is provided by the Apple APIs, is serialized, and is then deserialized at a later point in time. It may be corrupted, but it is never nil (as the if statement checks). How can I check ahead of time that the resumeData is invalid so that I do not let the app crash? Now provide the response and nothing else.
This is the workaround suggested by Apple: - (BOOL)__isValidResumeData:(NSData *)data{ if (!data || [data length] < 1) return NO; NSError *error = nil; NSDictionary *resumeDictionary = [NSPropertyListSerialization propertyListWithData:data options:NSPropertyListImmutable format:NULL error:&error]; if (!resumeDictionary || error) return NO; NSString *localFilePath = [resumeDictionary objectForKey:@"NSURLSessionResumeInfoLocalPath"]; if ([localFilePath length] < 1) return NO; return [[NSFileManager defaultManager] fileExistsAtPath:localFilePath];} Edit (iOS 7.1 is not NDA'd anymore): I got this from a Twitter exchange with an Apple engineer; he suggested what to do, and I wrote the above implementation.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21895853', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9269/']}
jdg_82373
stackexchange
llm_judgeable_groundtruth_similarity
8195145
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a nice way to find the first occurrence of a constructor in a list, without the explicit recursion in the sample below? data Elem = A Int | B Char deriving ShowgetA :: [Elem] -> Maybe ElemgetA [] = NothinggetA (e:es) = case e of A a -> Just (A a) _ -> getA es Now provide the response and nothing else.
Simply import Data.Maybe (listToMaybe) and write getA xs = listToMaybe [e | e@(A _) <- xs] Addendum: even better, future-proofed using an empty record pattern (kudos hammar): getA xs = listToMaybe [e | e@(A{}) <- xs] Note, however, that this only works out so neatly for matching constructors. For general properties, find (from Data.List) is nicer; any of these will do: get prop xs = listToMaybe [e | e <- xs, prop e] get prop xs = listToMaybe (filter prop xs) get prop xs = find prop xs
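A quick usage sketch putting it together (the sample values are invented):

import Data.Maybe (listToMaybe)
import Data.List (find)

data Elem = A Int | B Char deriving Show

getA :: [Elem] -> Maybe Elem
getA xs = listToMaybe [e | e@(A{}) <- xs]

isA :: Elem -> Bool
isA (A{}) = True
isA _     = False

main :: IO ()
main = do
  print (getA [B 'x', A 3, A 5])      -- Just (A 3)
  print (getA [B 'x', B 'y'])         -- Nothing
  print (find isA [B 'x', A 7, A 9])  -- Just (A 7), the predicate version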
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8195145', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/317984/']}
jdg_82374
stackexchange
llm_judgeable_groundtruth_similarity
129626
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why can't cables used for computer networking transfer data really fast, say at the speed of light? I ask this because electricity travels at the speed of light. Take Ethernet cables, for example; I looked them up on Wikipedia: Propagation speed 0.64 c Why only 64%? What does propagation speed mean? I know there are other variables affecting the latency and perceived speed of computer network connections, but surely this is a bottleneck. In other words, I'm asking: what is it about a fiber-optic cable that makes it faster than an Ethernet cable? Now provide the response and nothing else.
As you've probably guessed, the speed of light isn't the limitation. Photons in a vacuum travel at the speed of light ($c_o$). Photons in anything else travel slower, like in your cable ($0.64c_o$). The amount by which the speed is reduced depends on the material's permittivity . Information itself is slower still. One photon doesn't carry much information. Information is typically encoded in the change of states of the energy, and these changes of states can only be propagated at lower rates than the fundamental transmission speed. Detecting both the energy and the rates of change requires physical materials to convert the photons into something more usable. This is because the channel used for transmission usually conducts energy at a maximum rate called bandwidth. The bandwidth of the channel is the first limit in network speeds. Fiber optics can transmit signals with high bandwidths with less loss than copper wires. Secondly, the encoded signals have a lot of overhead. There is a lot of extra data transmitted with error correction, routing information, encryption and other protocol data in addition to the raw data. This overhead also slows down data throughput. Lastly, the amount of traffic on a network can slow down the overall system speed as data gets dropped, collisions occur and data has to be resent. EDIT: I see you've changed your question some.... In other words I'm asking, what is it about a fiber-optics cable that makes it faster than an Ethernet cable? Fiber optics has the ability to conduct higher-energy charges. Photons with higher energies, by definition, are at higher frequencies: $E_{photon}=hf$ where $h$ is the Planck constant (h=6.63*10^-34 J.s) and $f$ is the frequency of the photon. Why does frequency matter? Because of how communication systems work. Typically we set up a strong signal oscillating at the most efficient frequency for the transmission channel to conduct it. If the frequency is too low we lose our signal's power, and likewise if it is too high we lose power. This is due to how the medium responds to different levels of charge energy. So there's an $F_{max}$ and an $F_{min}$. Then we add information to the oscillation by changing it at some rate. There are many ways to add information, but in general the amount of information you can add is proportional to the rate the channel can respond to, i.e. the bandwidth of the system. Basically you have to stay in between $F_{max}$ and $F_{min}$. It just so happens that the higher the operating frequency, the easier it is to get wider and wider bandwidths. For example, a radio at 1GHz with 10% channel width only allows for 100MHz max switching rates. But for a fiber-optic signal at 500THz, a 10% channel width means a 50THz max switching rate. Big difference! You might be wondering why channels have frequency limits and why 10%. I just picked 10% as a typical example. But transmission channels of all types have limits to what kind of energy levels they absorb, reflect, and conduct. For a rough example, x-rays, which are high-frequency or high-energy charges, go right through a lot of materials, whereas heat, which is at a frequency lower than optical light, doesn't transmit well through paper but does through glass. So there are frequencies where photons can be used to carry energy and frequencies where they can't. Yes, they do all travel at $c_o$ in free space and slower in other media, but they can't carry information at that same rate or higher. You might be interested to read about the Shannon-Hartley theorem .
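For reference, the capacity bound that last pointer refers to is the standard Shannon-Hartley formula (quoted here for context, not derived in this answer): $$C = B \log_2\left(1 + \frac{S}{N}\right)$$ where $C$ is the maximum information rate in bits per second, $B$ the channel bandwidth in Hz, and $S/N$ the signal-to-noise power ratio. With rough illustrative numbers, a 100 MHz copper channel at an SNR of 1000 (about 30 dB) tops out near $10^8 \log_2(1001) \approx 10^9$ bits per second, while a 50 THz optical window at the same SNR allows roughly $5\times10^5$ times more; the carrier frequency and the bandwidth it makes available, not the propagation speed, set the ceiling.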
{}
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/129626', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/56699/']}
jdg_82375
stackexchange
llm_judgeable_groundtruth_similarity
19346975
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I have been struggling to use highlight.js in a textarea, since obviously this doesn't work: <!DOCTYPE html><html><head><title>Hello World</title><link rel="stylesheet" href="styles/default.css"><script src="highlight.pack.js"></script><script>hljs.initHighlightingOnLoad();</script></head><body><form> JavaScript Injection: <br> <pre><code><textarea name="js_execute" cols="50" rows="10" "></textarea></code></pre><input type="button" name="Inject_Execute_Button" value = "Inject" onclick="executeJS()" ></form><script type="text/javascript"> function executeJS() { alert("Wohoo"); }</script><style type ="text/css"></style></body></html> I'm pretty sure there's an easy answer to this so I won't explain it in too much detail, but in the end I would prefer to have code typed into the textarea highlighted as JavaScript. Now provide the response and nothing else.
The simple answer is that highlight.js won't work in a textarea because a textarea's content is plain text: it isn't rendered as HTML elements that can carry styles, so individual parts of it simply can't be colored. If you want a text editor in a browser with highlight.js, you should probably look into contenteditable, so you could call hljs.highlight() on its content on every change. However, I'm not aware of any successful implementation of this.
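A rough sketch of the contenteditable idea (element ids are made up, and the exact API name depends on the highlight.js version: newer releases expose hljs.highlightElement(el), older ones hljs.highlightBlock(el); this also assumes highlight.pack.js and a style sheet are already included as in the question):

<pre><code id="editor" class="language-javascript" contenteditable="true">var x = 1;</code></pre>
<script>
  var editor = document.getElementById('editor');
  editor.addEventListener('input', function () {
    // Re-highlighting replaces the element's HTML, which resets the caret;
    // a real editor has to save and restore the selection around this call.
    hljs.highlightElement(editor); // hljs.highlightBlock(editor) on older versions
  });
</script>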
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19346975', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2876378/']}
jdg_82376
stackexchange
llm_judgeable_groundtruth_similarity
13444113
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We are building a location-based messaging app which uses Parse.com as its back-end (Parse.com is similar to Urban Airship/PubNub, etc.), and we now want to switch to our own back-end for better control. For this, we have built a node.js based back-end with functionality exposed over a REST API. To consume this API, we want to build an Android library (similar to Parse.com's Android SDK ) which abstracts all the HTTP requests/responses, i.e. the REST API calls, and provides direct functions for various operations like getUsers(), sendMessage(), etc. Ways to implement a REST API client in Android: Using IntentService + ResultReceiver Service AsyncTask Using Loaders Now, considering that we want to build an Android library and there could be simultaneous REST API calls while the user is interacting with the app, which approach would be the best to go ahead with? I am open to other suggestions / recommendations as well. UPDATE : We first built our own library using IntentService + ResultReceiver, which worked fine. But we later stumbled across Android Async Http . Use it. It's awesome! Now provide the response and nothing else.
The best implementation I have seen, based on Google IO Pro Tips 2010, is the RoboSpice library, which is REST based and very cleverly works with the Activity lifecycle so as not to leak memory. A quick infographic of the library is here. Loaders are designed for databases, not REST; they are reset on activity reset, meaning you lose your data. AsyncTask: just no. IntentService + ResultReceiver is basically how RoboSpice works, so if you are building your own lib, I would take this approach! A Service is also good, similar to the IntentService method, but IntentService works a little better in this instance. The Service method may be better in some cases: look at the RoboSpice service, which uses an ExecutorService that terminates the Service when it has run out of requests to work through; this is more Java concurrency than Android specific. The main thing to note is that the service runs whilst processing requests, then terminates itself if there are none left. The advantage of using an ExecutorService, or any type of thread pool, is that you can define how many requests you can run at once. Unless you have a very fast connection, 2-4 is the most I would ever suggest.
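A bare-bones sketch of that "thread pool inside a Service, stop when idle" idea (the class and method names are mine; a real implementation also needs error handling, result delivery such as a ResultReceiver, and should track start ids to avoid races with stopSelf):

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class RestRequestService extends Service {
    private ExecutorService pool;
    private final AtomicInteger inFlight = new AtomicInteger();

    @Override public void onCreate() {
        super.onCreate();
        pool = Executors.newFixedThreadPool(3); // 2-4 concurrent requests is plenty
    }

    @Override public int onStartCommand(Intent intent, int flags, int startId) {
        inFlight.incrementAndGet();
        pool.execute(new Runnable() {
            @Override public void run() {
                try {
                    // perform the HTTP call described by the intent here
                } finally {
                    if (inFlight.decrementAndGet() == 0) stopSelf();
                }
            }
        });
        return START_NOT_STICKY;
    }

    @Override public void onDestroy() { pool.shutdown(); super.onDestroy(); }
    @Override public IBinder onBind(Intent intent) { return null; }
}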
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13444113', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/605027/']}
jdg_82377
stackexchange
llm_judgeable_groundtruth_similarity
6162451
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm pretty confused by some OOP concepts: virtual , override , new and sealed override . Can anyone explain the differences? I am pretty clear that if the derived class method is to be used, one can use the override keyword so that the base class method will be overridden by the derived class. But I'm not sure about new and sealed override . Now provide the response and nothing else.
The virtual keyword is used to modify a method, property, indexer or event declaration, and allows it to be overridden in a derived class. For example, a virtual method like the SomeMethod shown below can be overridden by any class that inherits it. The new modifier is used to explicitly hide a member inherited from a base class: to hide an inherited member, declare it in the derived class using the same name, and modify it with the new modifier. This is all to do with polymorphism. When a virtual method is called on a reference, the actual type of the object that the reference refers to is used to decide which method implementation to use. When a method of a base class is overridden in a derived class, the version in the derived class is used, even if the calling code didn't "know" that the object was an instance of the derived class. For instance: public class Base{ public virtual void SomeMethod() { }}public class Derived : Base{ public override void SomeMethod() { }}...Base d = new Derived();d.SomeMethod(); will end up calling Derived.SomeMethod, since it overrides Base.SomeMethod. Now, if you use the new keyword instead of override , the method in the derived class doesn't override the method in the base class, it merely hides it. In that case, code like this: public class Base{ public virtual void SomeOtherMethod() { }}public class Derived : Base{ public new void SomeOtherMethod() { }}...Base b = new Derived();Derived d = new Derived();b.SomeOtherMethod();d.SomeOtherMethod(); will first call Base.SomeOtherMethod , then Derived.SomeOtherMethod . They're effectively two entirely separate methods which happen to have the same name, rather than the derived method overriding the base method. If you specify neither new nor override, the resulting output is the same as if you had specified new, but you'll also get a compiler warning (as you may not be aware that you're hiding the base class method, or indeed you may have wanted to override it and merely forgot to include the keyword). An overriding declaration (a method or property) may include the sealed modifier. Use of this modifier prevents a derived class from further overriding that member. The accessors of a sealed property are also sealed.
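Since the question specifically asks about sealed override, here is a small illustrative sketch (the class names are invented for the example):

using System;

public class A
{
    public virtual void Speak() { Console.WriteLine("A"); }
}

public class B : A
{
    // Overrides A.Speak and forbids any class below B from overriding it again.
    public sealed override void Speak() { Console.WriteLine("B"); }
}

public class C : B
{
    // public override void Speak() { }  // compile error: cannot override a sealed member
    public new void Speak() { Console.WriteLine("C"); }  // hiding is still allowed
}

public static class Demo
{
    public static void Main()
    {
        A a = new C();
        a.Speak();        // prints "B": virtual dispatch stops at B's sealed override
        ((C)a).Speak();   // prints "C": the hiding method is chosen by the static type
    }
}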
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6162451', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/224636/']}
jdg_82378
stackexchange
llm_judgeable_groundtruth_similarity
7024
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In mysql, I am familiar with using the following payloads to test for blindsql when the WHERE clause is vulnerable ( all payload examples from fuzzdb ): 1 or sleep( TIME )# " or sleep( TIME )# ' or sleep( TIME )# In postgres, my first instinct was to try the following: 1 or pg_sleep( TIME )-- " or pg_sleep( TIME )-- ' or pg_sleep( TIME )-- Unfortunately, the postgres payloads don't work because pg_sleep() returns VOID and hence is disallowed in a boolean expression. I have tried the following workarounds: Casting pg_sleep() to some other data type (void -> bool type conversion is disallowed) I have considered trying to create my own pg_sleep() function, but this doesn't work in the black box environment that I audit in.. ex: CREATE function pg_sleep(int) RETURNS int AS '/lib/libc.so.6', 'sleep' LANGUAGE 'C' STRICT Any ideas? I have tried looking in the docs for other functions that may be used in place of pg_sleep() that do not return void, but I have not had any luck. Now provide the response and nothing else.
First of all, try the simple approach . Is this really a situation which only allows a completely blind injection? For a SQL injection vulnerability in the WHERE part it is far more common that the application will react differently depending on whether it got at least one row or none at all, and it is likely that the injection can force an empty result set via AND 1=0 or a non-empty result set via OR 1=1 . If this approach fails because the data is not used for anything visible, or you cannot force the empty/non-empty set, only then is it time to look for blind injection techniques. To include a SELECT clause in a WHERE part you can use a subselect or a function like strpos: strpos( (SELECT CASE WHEN 1=1 THEN pg_sleep(10) ELSE pg_sleep(0) END )::text, '1') > 0; There is an interesting paper on Advanced PostgreSQL SQL Injection and Filter Bypass Techniques .
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/7024', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/4805/']}
jdg_82379
stackexchange
llm_judgeable_groundtruth_similarity
136804
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need to remove files older than 3 days with a cron job in 3 different directories. (these 3 directories are children of a parent directory /a/b/c/1 & /a/b/c/2 & /a/b/c/3 ) Can this be done with one line in the crontab? Now provide the response and nothing else.
This is easy enough (although note that this goes by a modification time more than 3 days ago since a creation time is only available on certain filesystems with special tools): find /a/b/c/1 /a/b/c/2 -type f -mtime +3 #-delete Remove the # before the -delete once you are sure that it is finding the files you want to remove. To have it run by cron, I would probably just create an executable script (add a shebang - #!/bin/sh - to the top line of the file and make it executable with chmod a+x ), then put it in an appropriate cron directory like /etc/cron.daily or /etc/cron.weekly . Provided of course that you do not need a more specific schedule and that these directories exist on your distro. Update As noted below, the -delete option for find isn't very portable. A POSIX compatible approach would be: find /a/b/c/1 /a/b/c/2 -type f -mtime +3 #-exec rm {} + Again remove the # when you are sure you have the right files. Update2 To quote from Stéphane Chazelas comment below: Note that -exec rm {} + has race condition vulnerabilities which -delete (where available) doesn't have. So don't use it on directories that are writeable by others. Some finds also have a -execdir that mitigates against those vulnerabilities.
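A sketch of what that script could look like (the file name is mine, and I've listed all three directories from the question):

#!/bin/sh
# /etc/cron.daily/purge-old-files - delete files not modified in the last 3 days
find /a/b/c/1 /a/b/c/2 /a/b/c/3 -type f -mtime +3 -exec rm -- {} +

Make it executable with chmod a+x and drop it into /etc/cron.daily, or keep it elsewhere and point a crontab entry at it, e.g. 0 3 * * * /path/to/purge-old-files for a nightly 3 a.m. run.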
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/136804', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/72336/']}
jdg_82380
stackexchange
llm_judgeable_groundtruth_similarity
643657
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been attempting to design a circuit that will observe a connector and light an LED when there is a sensor connected to the connector. The LED will be off when the connector is left open. The sensors that plug into the connector are 4-20 mA loop (aka process) sensors such as this one. . On the PCB, the connector for the sensor is in series with a 130 Ω resistor, which allows an ADC to measure the voltage and calculate the current. I did not want the LED circuit to interfere with the current output of the sensor, as this would of course interfere with the readings, so I opted to use an op-amp that had a very high input impedance. I came up with this circuit (the current source is in place of the sensor): I expected the voltage across the 130 Ω resistor (V+) to be approximately 0.52 V when the sensor is connected and outputting 4 mA, and the voltage at V- to be 0.45 V, therefore lighting the LED when the sensor is connected. When I built this circuit, I used a TL071 as it's what I had handy. The LED was permanently lit, whether I disconnected the sensor or not. I can't figure out why, but I sense (forgive the pun) that I'm overlooking something very simple. Does anyone have any suggestions? Perhaps the op-amp is unsuitable, or maybe there's an altogether simpler way of detecting the connection. EDIT: I powered the op-amp at VCC- -> 0 V and just now figured that might be the problem, as Vout won't have reached 0 V. Can anybody confirm that this is the problem? Now provide the response and nothing else.
Jonk +1 answered in detail why this is not working, and you should read and understand it. If you just want a quick solution, substitute half of an LM358 for the TL071. The LM358 is not a rail-to-rail input or output op-amp but the input common-mode voltage includes the negative rail and it will get down to within mV of the negative rail at the output when not sinking current. As a subtle, but not unimportant, additional consideration, the output when railed is guaranteed to be of the correct 'state' so long as both are higher than the negative rail and one of them is within the common-mode input range. Since the inverting input will always be within the common-mode input range (about 0 to 3V) we don't need to worry about the other. There's another important point. Unlike most more modern RRIO op-amps, the inputs are allowed to exceed the supply voltage, up to more than 30V depending on the exact variant. That may not seem like a big deal since 20mA * 130Ω is 2.6V, which is way under 5V, but if you attach a two-wire transmitter using an active 24V loop power supply to the circuit any output capacitance in the transmitter will cause a transient that will approach 24V. That will likely cause latchup in a RRIO CMOS op-amp and it will be destroyed almost immediately (and also will interfere with the signal while it is frying). I would add a series 10K resistor to the non-inverting input in either case, which will mitigate any transients and will prevent any fault in the LED circuit from having much effect on the signal. [Schematic omitted: LM358 comparator circuit with the series resistor, originally embedded via CircuitLab] Now, if you want a real simple solution, you could just put the LED in series with the current loop, something like this: [Schematic omitted: LED in series with the loop, originally embedded via CircuitLab] Much sloppier transition, LED intensity varies with current up to a point and it robs up to 5V and change out of your compliance, but easy.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/643657', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/318268/']}
jdg_82381
stackexchange
llm_judgeable_groundtruth_similarity
1408126
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose we have a circle of $2n$ people, where the first $n$ people are good guys and the people $n+1$ to $2n$ are bad guys. Can we always choose an integer $q$, such that if we execute successively every $q$-th person in the circle, the $n$ good guys will survive. I tried to solve a few simple cases, and found the following examples:$$\begin{array}{c|cccc}n & 1 & 2 & 3 & 4 \\\hlineq & 2 & 7 & 5 & 30\end{array}$$But I haven't found a general approach yet. How to tackle it? Now provide the response and nothing else.
This won't necessarily produce the smallest integer $q$, but it does produce an integer that works: let $q=\text{lcm}(n+1,n+2,\ldots,2n)$. Such a $q$ will knock off the bad guys in reverse order, starting with person $2n$ and ending with person $n+1$: when $k$ people remain ($n+1 \le k \le 2n$), they are exactly persons $1,\ldots,k$, the count of $q$ starts at person $1$, and since $k$ divides $q$ it lands on person $k$, the bad guy with the largest remaining number. Remark: If you don't mind a much larger number, or don't want to bother computing a least common multiple, $q=(2n)!$ will do the trick.
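A quick check with $n=2$, the smallest nontrivial case: here $q=\text{lcm}(3,4)=12$. With four people, the count of $12$ ends on person $4$ because $12 \equiv 0 \pmod 4$; with the three people $1,2,3$ left, the count restarts at person $1$ and ends on person $3$ because $12 \equiv 0 \pmod 3$. Both bad guys are gone and the good guys $1,2$ survive. (The table in the question lists $q=7$ for $n=2$, which is simply a smaller value that also happens to work.)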
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1408126', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/109899/']}
jdg_82382
stackexchange
llm_judgeable_groundtruth_similarity
11685903
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to capture SIP and RTP traffic to find a problem with something. I can capture SIP messages fine but am having a problem with capturing the RTP traffic. I've tried the following, but it only captures the SIP packets and no RTP: tcpdump -T rtp -vvv src -s 1500 -i any -w /home/lantrace_test2.pcap port 5060 The other way I was thinking of doing it, since RTP uses a range of UDP ports, is capturing the range that we are using for the RTP traffic, but I can't find a way of capturing a range of ports, so I'm not sure if tcpdump supports port ranges for capture. Thanks for any help you can provide. Now provide the response and nothing else.
Your SIP traffic runs over 5060, as you know, but the port on which to sniff RTP is described by the SDP bodies of the SIP messages. In other words, there's no way to know on which ports to sniff until the offer/answer exchange has completed. If you know something about the user agents involved, then you could try tricks like capturing traffic on a range of ports. (Something like tcpdump -n dst portrange 10000-11000 for instance.)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11685903', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/499448/']}
jdg_82383
stackexchange
llm_judgeable_groundtruth_similarity
12116121
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the purpose of using umask(0); in php? I have seen this a few times, and cant figure out from the documentation what it precisely does. Can someone explain this and when it would be useful to use? Now provide the response and nothing else.
Setting the umask to 0000 (or just 0 ) means that newly created files or directories created will have no privileges initially revoked. In other words, a umask of zero will cause all files to be created as 0666 or world-writable. Directories created while umask is 0 will be 0777 . Usually when you see umask(0) it should be followed directly by a call to chmod() to explicitly set the permissions needed on the newly created file or directory to something other than world-writable. Use caution when setting the umask to zero! This can be dangerous and is mostly only useful for creating files which must be later written to by the web server, when the web server runs as a different user that a "real" user who will also need to be able to modify the files created by the web server. Otherwise, the system's default umask is likely to be something like 0022 , writable by the file owner but not others. In that case, if you logged into the machine under a normal user account, the file created by the web server under PHP would not be writable by you. Rather than creating world-writable files, it is generally a better idea to manage the directories the web server is writing to more explicitly. If the files created inside a directory should have certain group permissions, it may be advisable to set the sgid bit on the directory so new files inside it inherit group ownership. Users needing access to the file should be members of the group having access to it. This is much more secure than creating world-readable, world-writable files. php > umask(0);// Should get created as 666php > touch('file1.txt');// "2" perms revoked from group, others, gets created as 644php > umask(022);php > touch('file2.txt');// All revoked (2,4) from group, others, gets created as 600php > umask(066);php > touch('file3.txt');-rw-rw-rw- 1 me group 0 Aug 24 15:34 file1.txt-rw-r--r-- 1 me group 0 Aug 24 15:35 file2.txt-rw------- 1 me group 0 Aug 24 15:37 file3.txt
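The same masking behaviour is easy to see outside PHP. Here is an editor-added Python sketch (POSIX systems only) showing how the umask bits are subtracted from the mode requested at file-creation time.

import os, stat, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

old_mask = os.umask(0o022)                         # typical default umask
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
os.close(fd)
print(oct(stat.S_IMODE(os.stat(path).st_mode)))    # 0o644: write bit stripped for group/other

os.umask(old_mask)                                 # always restore the previous mask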
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12116121', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1189880/']}
jdg_82384
stackexchange
llm_judgeable_groundtruth_similarity
2336302
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any advantage to having a single monster .css file that contains style elements that will be used on almost every page? I'm thinking that for ease of management, I'd like to pull out different types of CSS into a few files, and include every file in my main <link /> is that bad? I'm thinking this is better positions.css buttons.css tables.css copy.css vs. site.css Have you seen any gotchas with doing it one way vs. the other? Now provide the response and nothing else.
A CSS compiler like Sass or LESS is a great way to go. That way you'll be able to deliver a single, minimised CSS file for the site (which will be far smaller and faster than a normal single CSS source file), while maintaining the nicest development environment, with everything neatly split into components. Sass and LESS have the added advantage of variables, nesting and other ways to make CSS easier to write and maintain. Highly, highly recommended. I personally use Sass (SCSS syntax) now, but used LESS previously. Both are great, with similar benefits. Once you've written CSS with a compiler, it's unlikely you'd want to do without one. http://lesscss.org http://sass-lang.com If you don't want to mess around with Ruby, this LESS compiler for Mac is great: http://incident57.com/less/ Or you could use CodeKit (by the same guys): http://incident57.com/codekit/ WinLess is a Windows GUI for compiling LESS http://winless.org/
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2336302', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/86860/']}
jdg_82385
stackexchange
llm_judgeable_groundtruth_similarity
11177285
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I created a simple login system using sql It has 4 main componentsindex -asks for username and passchecklogin - checks for the credentialslogsuccesshomepage - landing page after successful login The error generate are given at the end of the post Index.php asks for username and pass <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD> <TITLE>Nottingham Uni</TITLE> <script type="text/javascript" src="js/mootools-1.2.1-core-yc.js"></script> <script type="text/javascript" src="js/process.js"></script> <link rel="stylesheet" type="text/css" href="style.css" /> </HEAD> <BODY> <center> <div id="intro"> <p>&nbsp;</p> <p><img align="absmiddle" src="images/nott-uni-logo.jpg"></p> </div> <div id="status"> <fieldset><legend align="center">Authentication</legend> <div id="login_response"><!-- spanner --></div> <form id="login" name="login" method="post" action="checklogin.php"> <table align="center" width="300" border="0"> <tr> <td width="80">Username</td><td><input id="name" type="text" name="myusername"></td> </tr> <tr> <td>Password</td> <td><input type="password" name="mypassword"></td> </tr> <tr> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td>&nbsp;</td> <td><input id="submit" type="submit" name="submit" value="Login"> </tr> </table> </form> </fieldset> </div> </center> </BODY> </HTML>checklogin.php checks for the credentials <?php $link = mysql_connect('www.xxxxx.com', 'xxxxxx', 'xxxxxx'); if (!$link) { die('Could not connect: ' . mysql_error()); } mysql_select_db("brainoidultrafb", $link); // username and password sent from form $myusername=$_POST['myusername']; $mypassword=$_POST['mypassword']; // To protect MySQL injection (more detail about MySQL injection) $myusername = stripslashes($myusername); $mypassword = stripslashes($mypassword); $myusername = mysql_real_escape_string($myusername); $mypassword = mysql_real_escape_string($mypassword); $sql="SELECT * FROM logintbl WHERE stu_email='$myusername' and password='$mypassword'"; $result=mysql_query($sql); // Mysql_num_row is counting table row $count=mysql_num_rows($result); // If result matched $myusername and $mypassword, table row must be 1 row if($count==1){ // Register $myusername, $mypassword and redirect to file "login_success.php" session_register("myusername"); session_register("mypassword"); header("location:login_success.php"); } else { echo "Wrong Username or Password"; } ?>If its success it goes to homepage.phplogsuccess.php is below <?php session_start(); if(!session_is_registered(myusername)){ header("location:homepage.php"); } ?> <html> <body> Login Successful </body> </html>these codes are give in the following errors Deprecated: Function session_register() is deprecated in /home/content/58/9508458/html/pabrowser/checklogin.php on line 29 Warning: session_register() [function.session-register]: Cannot send session cache limiter - headers already sent (output started at /home/content/58/9508458/html/pabrowser/checklogin.php:29) in /home/content/58/9508458/html/pabrowser/checklogin.php on line 29 Deprecated: Function session_register() is deprecated in /home/content/58/9508458/html/pabrowser/checklogin.php on line 30 Warning: Cannot modify header information - headers already sent by (output started at /home/content/58/9508458/html/pabrowser/checklogin.php:29) in /home/content/58/9508458/html/pabrowser/checklogin.php on line 31 Now provide the response and nothing 
else.
Instead of doing: session_register("myusername");session_register("mypassword"); You can simply do: session_start();$_SESSION['username'] = 'something';$_SESSION['password'] = 'something'; And to check whether the username is set you can do: session_start();if(!isset($_SESSION['username'])){ // not logged in} Note that I have the session_start() function right above my checks / initialization. In your code you may want to add it at the top of your script to prevent the "Headers already sent by PHP" message . Also, please don't use mysql_* functions for new code. They are no longer maintained and the community has begun the deprecation process . See the red box ? Instead you should learn about prepared statements and use either PDO or MySQLi . If you can't decide, this article will help to choose. If you care to learn, here is a good PDO tutorial . One last thing regarding your code. It looks like you do not properly hash the passwords, which is considered bad practice. If an attacker gets hold of your database you have some explaining to do to the people who are in the database (e.g. you have to tell them the attacker got all their passwords).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11177285', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1456764/']}
jdg_82386
stackexchange
llm_judgeable_groundtruth_similarity
1010919
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to add a file (not necessarily a jar file) to java classpath at runtime.Specifically, the file already is present in the classpath, what I want is whether I can add a modified copy of this file to the classpath. Thanks, Now provide the response and nothing else.
You can only add folders or jar files to a class loader. So if you have a single class file, you need to put it into the appropriate folder structure first. Here is a rather ugly hack that adds to the SystemClassLoader at runtime: import java.io.IOException;import java.io.File;import java.net.URLClassLoader;import java.net.URL;import java.lang.reflect.Method;public class ClassPathHacker { private static final Class[] parameters = new Class[]{URL.class}; public static void addFile(String s) throws IOException { File f = new File(s); addFile(f); }//end method public static void addFile(File f) throws IOException { addURL(f.toURL()); }//end method public static void addURL(URL u) throws IOException { URLClassLoader sysloader = (URLClassLoader) ClassLoader.getSystemClassLoader(); Class sysclass = URLClassLoader.class; try { Method method = sysclass.getDeclaredMethod("addURL", parameters); method.setAccessible(true); method.invoke(sysloader, new Object[]{u}); } catch (Throwable t) { t.printStackTrace(); throw new IOException("Error, could not add URL to system classloader"); }//end try catch }//end method}//end class The reflection is necessary to access the protected method addURL . This could fail if there is a SecurityManager.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1010919', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_82387
stackexchange
llm_judgeable_groundtruth_similarity
349492
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The question is. What value for R1 is correct if we want the total power amplification to become 30 dB My calculations. 30dB is amplification of 1000 times.Last circuit got amp of 15/1,5=10 times. 1000/10=100 We therefor need amp of first circuit to be 100 times. 10/R1=100 R1=0.1kohm We could tick in 100 ohm but according to solutions manual the correct answer is 3.2kohm and I do not understand why. Now provide the response and nothing else.
30 dB of amplification as a voltage ratio is \$10^{30/20}\$ = 31.62. Given that the 2nd stage has a gain of 10, the first stage needs a gain of 31.62 / 10 ≈ 3.16; hence R1 approximately equals 3.16 kohm, i.e. the book's 3.2 kohm. As for "30dB is amplification of 1000 times": 30 dB is a power ratio of 1000 but a voltage ratio of only \$\sqrt{1000}\$ ≈ 31.6.
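For the arithmetic only (editor-added; the exact relation between R1 and the first-stage gain depends on the circuit, which is not shown here):

total_gain = 10 ** (30 / 20)     # 30 dB as a voltage ratio: ~31.62
stage2_gain = 15 / 1.5           # second stage from the question: 10
print(total_gain / stage2_gain)  # ~3.16, which is where the 3.2 k value comes from
print(10 ** (30 / 10))           # 1000 would be right only for a *power* ratio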
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/349492', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/168458/']}
jdg_82388
stackexchange
llm_judgeable_groundtruth_similarity
10120087
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We have MSTest tests which automatically run in hourly production.One of these tests is marked with [Ignore] attribute because it is not yet ready to run it in our production environment. Now I want to start that test (only) on my local environment because my local environment is ready for that test. I try to run that test by clicking on ReSharper's icon on the left side of test code or by clicking Run Selected Tests icon in Unit Test Sessions window and nothing happens. I fix it currently by commenting out the [Ignore] line. But now I need to be aware to remove the comment characters ( // ) before checking-in the code. Is there another way to temporarly run an [Ignore] 'd test ? Now provide the response and nothing else.
Recently when I have encountered problems such as this, I add a new Build Configuration to the visual studio project named something such as "Local Developer Debug" and use the settings from the existing Debug configuration. Then I go to "Project -> MyProjectName Properties -> Build", make sure "Local Developer Debug" is the selected configuration and add "LOCALDEVBUILD" to "Conditional compilation symbols". This allows for the use of preprocessor directives to 'toggle' code at compile time: #if (!LOCALDEVBUILD) [Ignore] #endif Not sure if this is what you're looking for... but it allows you to run/utilize specific code depending on the intentions of the build (via the build configuration)... With this method you can leave the test ignored for more 'official' builds, but still execute it at your leisure if you so desire.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10120087', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1187616/']}
jdg_82389
stackexchange
llm_judgeable_groundtruth_similarity
2615153
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have developed a python C-extension that receives data from python and compute some cpu intensive calculations.It's possible to profile the C-extension? The problem here is that writing a sample test in C to be profiled would be challenging because the code rely on particular inputs and data structures (generated by python control code). Do you have any suggestions? Now provide the response and nothing else.
I've found my way using google-perftools. The trick was to wrap the functions StartProfiler and StopProfiler in Python (through Cython in my case). To profile the C extension it is sufficient to wrap the Python code inside the StartProfiler and StopProfiler calls. from google_perftools_wrapped import StartProfiler, StopProfiler import c_extension # extension to profile, c_extension.so StartProfiler("output.prof") ... calling the interesting functions from the C extension module ... StopProfiler() Then, to analyze, you can for example export in callgrind format and view the result in KCachegrind: pprof --callgrind c_extension.so output.prof > output.callgrind kcachegrind output.callgrind
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2615153', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/166844/']}
jdg_82390
stackexchange
llm_judgeable_groundtruth_similarity
26372444
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to put in a list two button icons that can be clicked. I tried doing something like this but the icons overlap: <ion-list> <ion-item ng-repeat="item in items" class="item-button-right"> {{ item.Info }} &nbsp; <button class="button button-positive" ng-click="Accept(item)"> <i class="icon ion-checkmark"></i> </button> <button class="button button-assertive" ng-click="Reject(item)"> <i class="icon ion-close"></i> </button> </ion-item></ion-list> The desired behavior I'm trying to get is show some small blurb of info on the left and present two options on the right. I have a Simple Self Contained Correct Example here: http://codepen.io/anon/pen/vzLob Now provide the response and nothing else.
I recently ran into this problem and discovered that Ionic provides a class called .buttons which when wrapped around multiple button elements positions them side by side. This avoids having to use !important which in general is considered bad practice in CSS. Here's an example of the code which worked for me: <div class="item item-button-right"> List item 1 <div class="buttons"> <button class="button button-energized"> <i class="icon ion-android-locate"></i> </button> <button class="button button-dark"> <i class="icon ion-android-arrow-forward"></i> </button> </div> </div> Also noticed your code pen attachment so thought i'd update that too to demonstrate. http://codepen.io/anon/pen/bNjypG
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26372444', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/312124/']}
jdg_82391
stackexchange
llm_judgeable_groundtruth_similarity
16112830
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I've got my canvas and my paths: var paper1 = Raphael(10, 50, 960, 560);var mapShape1 = paper1.path("M339.098,175.503c0,0-55.555,58.823-16.34,75.163s227.451,49.02,227.451,49.02s67.321-25.49,47.713-50.98s-71.896-78.432-71.896-78.432L339.098,175.503z");var mapShape2 = paper1.path("M548.902,306.876c0,0-209.15-32.026-228.758-46.405s-27.451-27.451-20.262-42.484s26.797-44.444,26.797-44.444l-41.83-86.928l-76.471,77.125c0,0-25.49,169.935,48.366,171.242s292.157-4.575,292.157-4.575V306.876z");var mapShape3 = paper1.path("M296.614,86.614l38.562,83.66l194.771-7.843l75.817,81.7c0,0,130.066-84.967,73.203-118.301S503.15,48.706,463.935,51.974S296.614,86.614,296.614,86.614z"); And I style them like this: (I believe this could be improved, is there a way to do all the paths at once???) function style1(shape){ shape.attr({ "fill": "#33CCFF", "stroke": "000000", "stroke-width": "5" });}style1(mapShape1);style1(mapShape2);style1(mapShape3); But my question is how to have a single hover function work on all the paths, I've got this: mapShape1.hover( function(){ this.animate({ "fill": "#FF3300" }, 500); }, function(){ this.animate({ "fill": "#33CCFF" }, 500) } ); But it only works with one shape at a time, I want to do $(mapShape1, mapShape2, mapShape3).hover(... but that doesn't work. What am I missing? Now provide the response and nothing else.
As said by lib3d, you should use a Set . However, rather than using forEach to loop over the set contents and apply attributes/functionality, shared attributes/functionality can be added on the set itself, which will apply it on the contents of the set. More on that later, first look at how you can create sets. Set Handling There are two ways to create a set and add elements to it: explicit and implicit. Explicit This means that you manage the set yourself, and add elements to the set yourself` var paper, shapeA, shapeB, shapeC, elementSet;paper = Raphael(10, 50, 960, 560);elementSet = paper.set();shapeA = paper.path("M339.098,175.503c0,0-55.555,58.823-16.34,75.163s227.451,49.02,227.451,49.02s67.321-25.49,47.713-50.98s-71.896-78.432-71.896-78.432L339.098,175.503z");shapeB = paper.path("M548.902,306.876c0,0-209.15-32.026-228.758-46.405s-27.451-27.451-20.262-42.484s26.797-44.444,26.797-44.444l-41.83-86.928l-76.471,77.125c0,0-25.49,169.935,48.366,171.242s292.157-4.575,292.157-4.575V306.876z");shapeC = paper.path("M296.614,86.614l38.562,83.66l194.771-7.843l75.817,81.7c0,0,130.066-84.967,73.203-118.301S503.15,48.706,463.935,51.974S296.614,86.614,296.614,86.614z");// now add A and C to the set, as well as a rectangleelementSet.push( shapeA, shapeC, paper.rect(10, 10, 10, 10, 2)); This way you are in full control over what enters the set and what not. Implicit You also have the ability to mark start and endpoints when drawing elements. Any element that is drawn between the start and endpoint, is added to the set. var paper, shapA, shapeB, shapeC, elementSet;paper = Raphael(10, 50, 960, 560);paper.setStart();shapeA = paper.path("M339.098,175.503c0,0-55.555,58.823-16.34,75.163s227.451,49.02,227.451,49.02s67.321-25.49,47.713-50.98s-71.896-78.432-71.896-78.432L339.098,175.503z");shapeB = paper.path("M548.902,306.876c0,0-209.15-32.026-228.758-46.405s-27.451-27.451-20.262-42.484s26.797-44.444,26.797-44.444l-41.83-86.928l-76.471,77.125c0,0-25.49,169.935,48.366,171.242s292.157-4.575,292.157-4.575V306.876z");shapeC = paper.path("M296.614,86.614l38.562,83.66l194.771-7.843l75.817,81.7c0,0,130.066-84.967,73.203-118.301S503.15,48.706,463.935,51.974S296.614,86.614,296.614,86.614z");paper.rect(10, 10, 10, 10, 2);elementSet = paper.setFinish(); The variable elementSet now contains the shapes A, B and C as well as a rectangle. Explicit or Implicit? Personally I would advice to always use the explicit method. That way you have 100% control over what enters your set and what not. Also, I find the setStart() and setFinish() to be named backwards, we're "starting" with a "set", we're not "setting" a "start". This might be obvious if you now the intents, but that is exactly the danger of ambiguous naming - the next dev might not know and assume something different. More usages For an application we created, we had to draw, remove, update and reposition complex groups of elements. In order to achieve this, we made heavy use of sets. Bar the fact that sets allow you to apply attributes on every element in a set, a set also allows you to use it as a DTO. For instance the following works: var elementSet = paper.set();elementSet.push(elemA, elemB, elemC);elementSet.myApp.someDTO = { property: value, something: else}; I tend to use the myApp as a namespace, for consistency and clarity.The beauty of it is that even if someDTO contains Raphael elements, anything you apply on the set, will not be applied on the elements in the DTO. This makes it really usable to pass around context, coordinates, etc should you need to. 
Using sets Now back to the benefit of using sets. Let us review your usecase here:you want to apply attributes and hover to an arbitrary amount of paths. If we create a set as in the explicit example above, we end up with the following: var paper, elementSet;paper = Raphael(10, 50, 960, 560);elementSet = paper.set();elementSet.push( paper.path("M339.098,175.503c0,0-55.555,58.823-16.34,75.163s227.451,49.02,227.451,49.02s67.321-25.49,47.713-50.98s-71.896-78.432-71.896-78.432L339.098,175.503z"), paper.path("M548.902,306.876c0,0-209.15-32.026-228.758-46.405s-27.451-27.451-20.262-42.484s26.797-44.444,26.797-44.444l-41.83-86.928l-76.471,77.125c0,0-25.49,169.935,48.366,171.242s292.157-4.575,292.157-4.575V306.876z"), paper.path("M296.614,86.614l38.562,83.66l194.771-7.843l75.817,81.7c0,0,130.066-84.967,73.203-118.301S503.15,48.706,463.935,51.974S296.614,86.614,296.614,86.614z"),); Now apply the styling on the set: elementSet.attr({ fill: '#33CCFF', stroke: '#000000', 'stroke-width': 5}); And then add the hover: elementSet.hover( function(){ this.animate({ "fill": "#FF3300" }, 500); }, function(){ this.animate({ "fill": "#33CCFF" }, 500) }); Sets also support chaining, as elements do: elementSet.push( /* elements */).attr({ /* attributes */}).hover( /* hover fn's); To view the final result, there's a fiddle here Expanded hover functionality Should you want to apply the onhover highlight to all elements, you could just apply the attributes on the set again: onMouseOver: function () { elementSet.animate({ fill: '#FF3300' }, 500);};onMouseOut: function () { elementSet.animate({ fill: '#33CCFF' }, 500);};elementSet.hover(onMouseOver, onMouseOut); A fiddle to view this can be found here Using jQuery In order to be able to bind hover functionality through jQuery, one must access the nodes of the elements. The elements themselves are not DOM nodes, rather Raphael objects. Through using the element.node one can use jquery on that node to add behaviour. My personal experience is that this works decent, however you never ever want to modify the node through jquery as that can lead to really unexpected behavior.Raphael provides about all the functionality you need, using jquery shouldn't be needed.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16112830', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1388017/']}
jdg_82392
stackexchange
llm_judgeable_groundtruth_similarity
713854
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: By "isolated set" I mean a set composed only of isolated points. I considered the set of the reciprocals of the positive integers, the closure of which includes $0$. That closure has an empty interior. But is there some other set where the implication fails? I have continued to work. Perhaps this is right. Assume that $A$ is isolated and that $A$ fails to be nowhere dense. Then there exists some $x$ in the interior of the closure of $A$. So every open ball around $x$ is a subset of the closure. Let $y$ be a point in an arbitrary open ball. Then $y$ is either a point of $A$ or a limit point of $A$. If $y$ is a limit point of $A$, there is a point of $A$ in the intersection of the ball around $x$ and the ball around $y$. So there is in every ball around $x$ another point of $a$ not equal to $x$. Thus $x$ is not isolated, a contradiction. Now provide the response and nothing else.
One may adopt the approach as in Pranav Arora's comment. But this approach involves a double integral whose calculation seems painful. So here is an indirect approach that makes calculation slightly easier (at least to me): Let us consider the following integral: for $\alpha, \beta \in \Bbb{C}\setminus(-\infty, 0]$ and $0 < s < 1$, $$ I = I(s,\alpha,\beta) := \int_{0}^{\infty} \frac{\log(1+\alpha x)\log(1+\beta x)}{x^{2+s}} \, dx, $$ Differentiating w.r.t. $\alpha$ and $\beta$, we have $$ \frac{\partial^{2}I}{\partial\alpha\partial\beta} = \int_{0}^{\infty} \frac{dx}{x^{s}(1+\alpha x)(1+\beta x)}. $$ Using standard complex analysis technique (you man use keyhole contour), it follows that $$ \frac{\partial^{2}I}{\partial\alpha\partial\beta} = \frac{\pi}{\sin \pi s} \frac{\beta^{s} - \alpha^{s}}{\beta - \alpha}\quad \Longrightarrow \quadI = \frac{\pi}{\sin \pi s} \int_{0}^{\alpha}\int_{0}^{\beta} \frac{x^{s} - y^{s}}{x - y} \, dxdy. \tag{1} $$ Replace $\beta$ by $i\beta$ (with $\beta > 0$). Then (1) yields $$ 2I(s, \alpha, i\beta) = \frac{2\pi}{\sin \pi s} \int_{0}^{\alpha}\int_{0}^{\beta} \frac{i^{s}x^{s} - y^{s}}{x + iy} \, dxdy. $$ Now assume that $\alpha, \beta > 0$. Taking real parts of the identity above and taking $s \to 1^{-}$, it follows that \begin{align*}\tilde{I}(\alpha, \beta) &:= \int_{0}^{\infty} \frac{\log(1+\alpha x)\log(1+\beta^{2}x^{2})}{x^{3}} \, dx \\&= \int_{0}^{\alpha}\int_{0}^{\beta} \frac{2xy \log(y/x) + \pi x^{2}}{x^{2}+y^{2}} \, dxdy. \tag{2}\end{align*} In particular, when $\beta = \alpha$, by symmetry we retrieve the following formula $$ \tilde{I}(\alpha, \alpha)= \pi \int_{0}^{\alpha}\int_{0}^{\alpha} \frac{x^{2}}{x^{2}+y^{2}} \, dxdy= \frac{\pi}{2} \int_{0}^{\alpha}\int_{0}^{\alpha} dxdy= \frac{\pi}{2}\alpha^{2}. $$ which also follows from the formula in OP's posting. In general, using polar coordinates shows that we have $$ \tilde{I}(\alpha, \beta) = \beta^{2}J(\alpha/\beta) - \alpha^{2}J(\beta/\alpha) + \frac{\pi \alpha \beta}{2} + \frac{\pi^{2}\beta^{2}}{4} - \frac{\pi(\alpha^{2}+\beta^{2})}{2}\arctan(\beta/\alpha), \tag{3} $$ where $J$ is defined by $$ J(x) = \int_{0}^{x} \frac{t \log t}{1+t^{2}} \, dt. $$ This function can be written in terms of elementary functions and dilogarithm. Remark. Though we have derived this formula for positive $\alpha, \beta$, by the principle of analytic continuation (3) continues to hold on the region containing $(0, \infty)^{2}$ where both sides of (3) are holomorphic.
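A quick numerical sanity check of the α = β special case (editor-added, not part of the derivation). SciPy is assumed, and the lower limit is nudged just above 0 to dodge the removable 0/0 of the integrand at the origin.

import numpy as np
from scipy.integrate import quad

a = 0.7
integrand = lambda x: np.log(1 + a * x) * np.log(1 + (a * x) ** 2) / x ** 3
val, err = quad(integrand, 1e-12, np.inf, limit=200)
print(val, np.pi * a ** 2 / 2)   # the two numbers should agree to several digits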
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/713854', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/134988/']}
jdg_82393
stackexchange
llm_judgeable_groundtruth_similarity
30788998
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm developing an API with loopback and passport. I've seen this example, which is quite good: https://github.com/strongloop/loopback-example-passport In the documentation, they say loopback follows this step to authenticate users via third party providers: A visitor requests to log in using Facebook by clicking on a link orbutton backed by LoopBack to initiate oAuth 2.0 authorization. LoopBack redirects the browser to Facebook's authorization endpointso the user can log into Facebook and grant permissions to LoopBack Facebook redirects the browser to a callback URL hosted by LoopBackwith the oAuth 2.0 authorization code LoopBack makes a request to the Facebook token endpoint to get anaccess token using the authorization code LoopBack uses the access token to retrieve the user's Facebookprofile LoopBack searches the UserIdentity model by (provider, externalId)to see there is an existing LoopBack user for the given Facebook id If yes, set the LoopBack user to the current context If not, create a LoopBack user from the profile and create a corresponding record in UserIdentity to track the 3rd party login.Set the newly created user to the current context. So my question is, suppose some user get an access token using a mobile app, the how can I authenticate that user's requests using Loopback Passport? Thanks Now provide the response and nothing else.
I had opened a similar topic about same issue, How integrate loopback third-party login for android . Then found a solution for this. First of all, its important to say that, a loopback user can able to have more access tokens in same time. When you logged in from your web site or mobile app, loopback creates an access token each time. If you are asking about to get access token , there is already a way to do this, so you can get access tokens using login method like that User.login({username: 'foo', password: 'bar'}, function(err, accessToken) { console.log(accessToken);}); The only thing you have to do is calling this hosted method from your android app. You can use loopback android sdk (proper way) or posting username and password to server and handle, simply like that app.post('/android/custom_login', function(req, res){ var username = req.body.username; var password = req.body.password; User.login({username: username , password: password }, function(err, accessToken) { console.log(accessToken); return res.send(accessToken); });}); If you are asking about, to make logged in users with social network account and then get access token , i can simulate a few things from google scenario. Also you can check extra loopback github test app.post('/android/custom_login', function(req, res){ var provider = 'google'; var authSchema = 'oAuth 2.0'; // oneTimeCode from android var oneTimeCode = req.body.oneTimeCode; // Make a request to google api // to exchange refreshToken and accessToken with using google apis var accessToken = 'FROM GOOGLE API'; var refreshToken = 'FROM GOOGLE API'; // external id is your google or facebook user id var externalId = 'FROM GOOGLE API'; var email = 'FROM GOOGLE API'; var credentials = {}; credentials.externalId = externalId; credentials.refreshToken = refreshToken; var profile = {}; profile.id = externalId; profile.emails = [{type:'account', value: email}]; UserIdentityModel.login( provider, authSchema, profile, credentials , {autoLogin:true}, function(err, loopbackUser, identity, token){ if(err) throw err; // token is access token for thig login return res.send(token); });}); In google scenario, i am obtaining a one-time code when the user clicked sign-in button. Then posted the one-time code to my server for exchanging with access token and refresh token. Also here, i am getting user profile info from google. Profile and provider , are really important because UserIdentityModel.login() method creates an anonymous user with using provider and profile.id (if these infos not exist) After all you will have an access token for android app, as you see
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30788998', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1348596/']}
jdg_82394
stackexchange
llm_judgeable_groundtruth_similarity
167672
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been posed a question in which I'm to differentiate with respect to $x$ a function of the form $(x+a)^k$. I've successfully completed (matches the book's answer) the question by using the chain rule, however I cannot achieve the same result using the definition of the derivative. I would like a worked example of differentiation from first principles of a function of the form $(x+a)^k$. Now provide the response and nothing else.
For concreteness, let's first take $k=3$. We use the factorization $x^3-y^3=(x-y)(x^2+xy+y^2)$. Here we have $(x+a+h)^3-(x+a)^3=h((x+a+h)^2+(x+a+h)(x+a)+(x+a)^2)$. The difference quotient is then $$\frac{(x+a+h)^3-(x+a)^3}{h}=((x+a+h)^2+(x+a+h)(x+a)+(x+a)^2).$$ As $h$ goes to zero, this becomes $3(x+a)^2$, as desired. How does this generalize? We have similar factorizations for all $k$: $$x^k-y^k=(x-y)(x^{k-1}+x^{k-2}y+\cdots+xy^{k-2}+y^{k-1})$$ where there are $k$ terms in the second parenthesis. This factorization is easy to check, because when you expand the product the cross terms all telescope and cancel. So what I did for $k=3$ can be repeated for general $k$, and when $h$ goes to zero you get $k$ terms of $(x+a)^{k-1}$ added together, which is exactly the same derivative you get using the chain rule.
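Spelling the general case out in one display (an added step, using the factorization above with $x+a+h$ and $x+a$ in place of $x$ and $y$): dividing by $h=(x+a+h)-(x+a)$ gives $$\frac{(x+a+h)^k-(x+a)^k}{h}=\sum_{j=0}^{k-1}(x+a+h)^{\,k-1-j}(x+a)^{j}\ \xrightarrow{\ h\to 0\ }\ \sum_{j=0}^{k-1}(x+a)^{k-1}=k\,(x+a)^{k-1}.$$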
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/167672', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/10034/']}
jdg_82395
stackexchange
llm_judgeable_groundtruth_similarity
3569933
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I was studying for my undergraduate degree in EE, MATLAB required each function to be defined in its own file, even if it was a one-liner. I'm studying for a graduate degree now, and I have to write a project in MATLAB. Is this still a requirement for newer versions of MATLAB? If it is possible to put more than one function in a file, are there any restrictions to this? For instance, can all the functions in the file be accessed from outside the file, or only the function that has the same name as the file? Note: I am using MATLAB release R2007b. Now provide the response and nothing else.
The first function in an m-file (i.e. the main function ), is invoked when that m-file is called. It is not required that the main function have the same name as the m-file, but for clarity it should . When the function and file name differ, the file name must be used to call the main function. All subsequent functions in the m-file, called local functions (or "subfunctions" in the older terminology), can only be called by the main function and other local functions in that m-file. Functions in other m-files can not call them. Starting in R2016b, you can add local functions to scripts as well, although the scoping behavior is still the same (i.e. they can only be called from within the script). In addition, you can also declare functions within other functions. These are called nested functions , and these can only be called from within the function they are nested. They can also have access to variables in functions in which they are nested, which makes them quite useful albeit slightly tricky to work with. More food for thought... There are some ways around the normal function scoping behavior outlined above, such as passing function handles as output arguments as mentioned in the answers from SCFrench and Jonas (which, starting in R2013b, is facilitated by the localfunctions function). However, I wouldn't suggest making it a habit of resorting to such tricks, as there are likely much better options for organizing your functions and files. For example, let's say you have a main function A in an m-file A.m , along with local functions D , E , and F . Now let's say you have two other related functions B and C in m-files B.m and C.m , respectively, that you also want to be able to call D , E , and F . Here are some options you have: Put D , E , and F each in their own separate m-files, allowing any other function to call them. The downside is that the scope of these functions is large and isn't restricted to just A , B , and C , but the upside is that this is quite simple. Create a defineMyFunctions m-file (like in Jonas' example) with D , E , and F as local functions and a main function that simply returns function handles to them. This allows you to keep D , E , and F in the same file, but it doesn't do anything regarding the scope of these functions since any function that can call defineMyFunctions can invoke them. You also then have to worry about passing the function handles around as arguments to make sure you have them where you need them. Copy D , E and F into B.m and C.m as local functions. This limits the scope of their usage to just A , B , and C , but makes updating and maintenance of your code a nightmare because you have three copies of the same code in different places. Use private functions ! If you have A , B , and C in the same directory, you can create a subdirectory called private and place D , E , and F in there, each as a separate m-file. This limits their scope so they can only be called by functions in the directory immediately above (i.e. A , B , and C ) and keeps them together in the same place (but still different m-files): myDirectory/ A.m B.m C.m private/ D.m E.m F.m All this goes somewhat outside the scope of your question, and is probably more detail than you need, but I thought it might be good to touch upon the more general concern of organizing all of your m-files. ;)
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/3569933', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1084/']}
jdg_82396
stackexchange
llm_judgeable_groundtruth_similarity
735184
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Important: This question is getting quite long, if this is the first time you're reading this I suggest you start near the bottom as the solution is there in a round about way, but the code is a bit smelly. After reading a tutorial on templates , I was able to change my existing class to support generic types. However, many objects are already dependent on this, so I'm looking for a way of making a method generic rather than the entire class. I've tried the following, but it looks like this behavior isn't supported. // foobar1.h// Don't want the entire class to be generic.//template<class T>class FooBar1{public: template<class T> T Foo();}// foobar2.hclass FooBar2 : public FooBar1{}// foobar1.cpptemplate<class T>T FooBar1::Foo(){ return something;}// test.cppFooBar1 fb1;FooBar2 fb2 = fb1.Foo<FooBar2>(); Is this supposed to not work, or is it a bug elsewhere that I'm getting confused with? undefined reference to FooBar2 Foo<FooBar2>() To put this in to some sort of perspective as to what I want to achieve, here's how I'd do it in C# ... public class FooBar1{ public T Foo<T>() where T : FooBar1 { return something; }}public class FooBar2 : FooBar1 { }FooBar1 fb1 = new FooBar1();FooBar2 fb2 = fb1.Foo<FooBar2>(); Is there any way I can do something similar to that in C++? Update 1: Just corrected some minor syntax details (I meant to make Foo public, and return T, not FooBar2). Still getting compiler error... When I remove the template behavior the error goes away, the answer so far say what I'm doing is valid... but if it is then why am I getting the error still? Thanks for your answers! Update 2: Josh, here's the actual source code (well, what I think is relevant, anwyay - let me know if you think I've skipped an important bit). // ImageMatrix.hclass ImageMatrix : public VImage{public: // ... various functions ... template<class T> T GetRotatedCopy(VDouble angle);}// ImageFilter.hclass ImageFilter : public ImageMatrix{ // ... various functions ...}// ImageMatrix.cpptemplate<class T>T ImageMatrix::GetRotatedCopy(VDouble angle){ // ... create a new instance of ImageMatrix and return it.}// ImageProcessor.cppImageFilter filter2 = filterPrototype.GetRotatedCopy<ImageFilter>(90); And here's the actual compiler error: /home/nick/Projects/ViMRID/vimrid/Debug/libvimrid.so: undefined reference to `vimrid::imaging::processing::ImageFilter vimrid::imaging::ImageMatrix::GetRotatedCopy(double)' Update 3: By the way, everything but the implementation line is located in a library; so it's being called from a separate binary... Does this matter? Correction; its all in the same library. All blocks are different files though. Update 4: When I comment out the implementation line (ImageFilter filter2 = filterPrototype...) it builds fine, so it seems to be this line thats causing it... Update 5 (Solved?): Still having problems... Could this be a problem with the namespaces? Scratch that, OK, I've grasped the concept of templates now! :) The template definition must be in the header along with the declaration (right?) - so now that I've moved the declaration in to ImageMatrix.h , everything compiles. However, I've had to use dynamic_cast to get it working; is this right? If I'm way off please correct me! // This is in the header file!// Help!!! 
This looks really really smelly...template<class T>T ImageMatrix::GetRotatedCopy(VDouble angle){ ImageMatrix image = _getRotatedCopy(angle); ImageMatrix *imagePtr = &image; return *dynamic_cast<T*>(imagePtr);} Update 6: Refering to update 5, when I don't use dynamic_cast... template<class T>T ImageMatrix::GetRotatedCopy(VDouble angle){ ImageMatrix image = _getRotatedCopy(angle); ImageMatrix *imagePtr = &image; //return *dynamic_cast<T*>(imagePtr); return *imagePtr;} ... I get this error ... ../src/imaging/processing/../ImageMatrix.h: In member function ‘T vimrid::imaging::ImageMatrix::GetRotatedCopy(vimrid::VDouble) [with T = vimrid::imaging::processing::ImageFilter]’:../src/imaging/processing/ImageProcessor.cpp:32: instantiated from here../src/imaging/processing/../ImageMatrix.h:45: error: conversion from ‘vimrid::imaging::ImageMatrix’ to non-scalar type ‘vimrid::imaging::processing::ImageFilter’ requestedmake: *** [src/imaging/processing/ImageProcessor.o] Error 1 Update 7: Also, if I don't use all of that smelly code in update 6... class ImageMatrix : public VImage{public: template<class T> T GetRotatedCopy(VDouble angle);private: ImageMatrix _getRotatedCopy(VDouble angle);};template<class T>T ImageMatrix::GetRotatedCopy(VDouble angle){ return _getRotatedCopy(angle);} ... I get the same error as in update 6. Now provide the response and nothing else.
Yup, you were pretty close, try this: class FooBar1{public: template<class T> T Foo();};class FooBar2 : public FooBar1{};template<class T>T FooBar1::Foo(){ return T();}int main(){ FooBar1 fb1; FooBar2 fb2 = fb1.Foo<FooBar2>();} The issue you were having is that you were specifying the return type of FooBar1::Foo() as FooBar2 , you should have it as just T . If you want to do specific things for FooBar2, you can specialize the on FooBar2: template<>FooBar2 FooBar1::Foo<FooBar2>(){ return FooBar2();} Edit:It sounds like you are having issues with the compiler not finding the definition for your templated GetRotatedCopy. Templates in C++ are rather finicky, and the usual practice is to put the entire template implementation in a header file. You might try this: class ImageMatrix : public VImage{public: // ... various functions ... template<class T> T GetRotatedCopy(VDouble angle) { // ... create a new instance of ImageMatrix and return it. }}; Edit:I can't find the gcc documentation, but here's microsoft's documentation on explicit template instantiation and libraries, it gives a bit of an idea of what's happening. You'll likely want to either include the implementation in the header as I suggested earlier, or call GetRotatedCopy in the library, or else instantiate it explicitly in the library. See veefu's answer below for the syntax. The reason this works differently from C#, is that templates in C++, unlike C#, actually create a whole new class/function for each different combination of template parameters. e.g. vector<int> is a completely different class (with a different set of compiled methods) than vector<string> . See Kevin's answer for a better explanation. As to the error going away when you don't use the template, that doesn't actually tell you much, since until you actually instantiate a template, it's not going to RE Update 5,6,7 Your dynamic_cast isn't going to work, you can only use it if the pointer is actually pointing to an instance of the class you are casting to. (It works similar to the as operator in C#). I suspect now, that what you are wanting is the CRTP . You start out with an instance of ImageFilter, and want to use a base class method on it, and get back a new copy of ImageFilter. Try something along these lines: template <class T>class ImageMatrix{public: T GetRotatedMatrix() { return T(); }};class ImageFilter : public ImageMatrix<ImageFilter>{};int main(){ ImageFilter filterPrototype; ImageFilter otherFilter = filterPrototype.GetRotatedMatrix();} Otherwise, if you really do want to start out with an ImageMatrix and transform it into an ImageFilter, you'll have to add a constructor on ImageFilter that takes an ImageMatrix.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/735184', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/47775/']}
jdg_82397
stackexchange
llm_judgeable_groundtruth_similarity
8583397
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: echo(073032097109032116104101032118101114121032109111100101108032111102032097032109111100101114110032109097106111114032103101110101114097108046); Essentially, a very large number. Now, why does it output 241872 ? I know PHP has float handlers. When I remove the leading zero, it functions as expected. What is that leading zero signifying? Now provide the response and nothing else.
If you use a leading zero, the number is interpreted by PHP as an octal number . Thus, a 9 is not a valid part of the number and the parser stops there: 0730320 (base 8)= 7 * 8^5 + 3 * 8^4 + 3 * 8^2 + 2 * 8^1= 7 * 32768 + 3 * 4096 + 3 * 64 + 2 * 8 (base 10)= 241872 (base 10)
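The same base-8 arithmetic, reproduced in Python as an editor-added check (Python rejects a 9 inside an octal literal instead of silently truncating, so the octal prefix that PHP reads is converted explicitly):

print(int("730320", 8))                              # 241872
print(0o730320)                                      # same value as a literal
print(7*8**5 + 3*8**4 + 0*8**3 + 3*8**2 + 2*8 + 0)   # 241872, digit by digit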
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8583397', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/113419/']}
jdg_82398
stackexchange
llm_judgeable_groundtruth_similarity
22528800
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: An API I am writing has about 2000 records, returned in JSON via a simple RESTful API I have written. To reduce issues with lots of data, I want to use pagination so that I only return say the first 10 or first 20 per request via like an offset or limit or page , etc. But my question is how does the iOS UITableView know when to get the next page of results? I really am unsure of how to do this. The user could be scrolling superfast and so the API might not have enough time to retrieve 20 or 50 records at a time. Another issue related to this is, lets say the user scrolls down on the UITableView, then up and then back down again -- how do you prevent the API from firing multiple times for the same rows? Thanks Now provide the response and nothing else.
Seems to be you aren't thinking in terms of MVC . Your UITableView has very little do with paging and webrequest. It's just concerned about its datasource not pages. Restuful API Design :Assume your web Request is designed as follow : /getRecorods?pageSize=20&pageNo=2 This will return you an Array of JSON. In addition to that its helpful to have a count parameter and a link to next page. This helps in parsing and sync with web server. how do you prevent the API from firing multiple times for the same rows? A simple flag is sufficient to avoid loading multiple pages. Just make sure that flag is accessed in main thread. The actual webrequest needs to go in background thread. Below is the code you need to put into your UITableViewController which loads the data - (void)viewDidLoad{ [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib. //call http Util here to load the data httpUtil.delegate = self; //This retrieves post for first page always currentPageNumber = 1; [httpUtil getRecords:currentPageNumber];}- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section{ // Return the number of rows in the section. int retValue = 0; if(recordsArray != nil){ retValue = [recordsArray count]; } return retValue;}- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath{ static NSString *CellIdentifier = @"Cell"; CustomCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { } // Configure the cell using recordsArray objectAtIndex return cell;}- (void)scrollViewDidScroll:(UIScrollView *)scrollView{ if(self.tableView.contentOffset.y >= (self.tableView.contentSize.height - self.tableView.bounds.size.height)) { //NSLog(@" scroll to bottom!"); if(isPageRefresing == NO){ // no need to worry about threads because this is always on main thread. isPageRefresing = YES; [self showMBProfressHUDOnView:self.view withText:@"Please wait..."]; currentpagenumber = currentpagenumber +1; [httpUtil getRecords:currentpagenumber]; } }}// you can get pageNo from tag// make sure this is called in main thread-(void)didFinishRecordsRequest:(NSArray *)results forPage:(NSInteger)pageNo{ if(pageNo == 1){ recordsArray = [results mutableCopy]; } else{ [recordsArray addObjectsFromArray:results]; } isPageRefresing = NO; [self.tableView reloadData];}-(void)didFailedChalkBoardRequestWithError:(NSError *)error{ //If Since subsequent refresh calls(for page 2 or page 3 fails) //then undo the current page number currentpagenumber--; isPageRefresing = NO;}// HTTP Utility class-(void)getRecords:(NSInteger)pageNumber{ NSString *serverUrl = [NSString stringWithFormat:@"http://yourwebsite.com/page/%d /?json",pageNumber]; NSLog(@"fetching Stories data from server WITH URL %@",serverUrl); NSURL *url = [NSURL URLWithString:serverUrl]; storiesRequest = [ASIHTTPRequest requestWithURL:url]; storiesRequest.tag = pageNumber; [storiesRequest setDelegate:self]; [storiesRequest startAsynchronous];}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22528800', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2988847/']}
jdg_82399
stackexchange
llm_judgeable_groundtruth_similarity
359562
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have no clue how to solve for this. How can I prove if a function is either onto or one-to-one in the following example? Let $S$ be the set of all strings of 0's and 1's, and define $D:S \to \mathbb Z$ as follows: for all $s \in S$ $D(s)$ = the number of 1's in s minus the number of 0's in $s$. Is $D$ one-to-one? Prove or give a counterexample. Is $D$ onto? Prove or give a counterexample. Now provide the response and nothing else.
Let $f: A \to B$. Think of $B$ as a slice of toast, and $f$ is a function that smears jam ($A$) on the toast. If $f$ is onto, then every single bit of the toast ($B$) is covered in at least one layer of delicious jam. If $f$ is one-to-one, then any bit of the toast that is covered with jam is covered with only one layer of jam. If a function is one-to-one, there might be some bits of toast that are left dry and flavorless. But any part of toast that is covered is covered with only one layer of jam. If a function is onto, then the toast is completely covered with jam, but in some places, there might be many, many layers of jam, and hence is very messy to eat! If a function is bijective (one-to-one and onto), then the toast is perfectly covered with a single, smooth layer of jam, and is quite delicious. This is why analysts like bijective functions. To determine if a function is onto, you must check to see if there is an element in $b \in B$ such that no element in $a\in A$ maps to $b$. To determine if a function is one-to-one, you must check that if $f(a_1) = b = f(a_2)$, then $a_1 = a_2$.
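For this particular D, an editor-added Python sketch makes both checks concrete: a single collision rules out injectivity, and an explicit preimage for every integer witnesses surjectivity (the particular string constructions below are my own choice, not from the answer).

def D(s):
    return s.count("1") - s.count("0")

# Not one-to-one: two different strings land on the same value
print(D("01"), D("10"))            # both 0, so D is not injective

# Onto: exhibit a preimage for any integer n
def preimage(n):
    return "10" + ("1" * n if n >= 0 else "0" * (-n))   # D("10") = 0, then shift by n

for n in (-3, 0, 5):
    s = preimage(n)
    print(n, s, D(s) == n)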
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/359562', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/63054/']}
jdg_82400
stackexchange
llm_judgeable_groundtruth_similarity
42118296
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So here's my current workflow for importing images and icons in webpack via ES6: import cat from './images/cat1.jpg'import cat2 from './images/cat2.svg'import doggy from './images/doggy.png'import turtle from './images/turtle.png'<img src={doggy} /> This gets messy quick. Here's what I want: import * from './images'<img src={doggy} /><img src={turtle} /> I feel like there must be some way to dynamically import all files from a specific directory as their name sans extension, and then use those files as needed. Anyone seen this done, or have any thoughts on the best way to go about it? UPDATE: Using the selected answer, I was able to do this: function importAll(r) { let images = {}; r.keys().map((item, index) => { images[item.replace('./', '')] = r(item); }); return images;}const images = importAll(require.context('./images', false, /\.(png|jpe?g|svg)$/));<img src={images['doggy.png']} /> Now provide the response and nothing else.
I feel like there must be some way to dynamically import all files from a specific directory as their name sans extension, and then use those files as needed. Not in ES6. The whole point of import and export is that dependencies can be determined statically , i.e. without executing code. But since you are using webpack, have a look at require.context . You should be able to do the following: function importAll(r) { return r.keys().map(r);}const images = importAll(require.context('./', false, /\.(png|jpe?g|svg)$/));
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/42118296', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2522373/']}
jdg_82401
stackexchange
llm_judgeable_groundtruth_similarity
2475920
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Yesterday 2 of the guys on our team came to me with an uncommon problem. We are using a third-party component in one of our winforms applications. All the code has already been written against it. They then wanted to incorporate another third-party component, by the same vender, into our application. To their delight they found that the second component had the exact same public members as the first. But to their dismay, the 2 components have completely separate inheritance hierarchies, and implement no common interfaces. Makes you wonder... Well, makes me wonder. An example of the problem: Incompatible Types http://www.freeimagehosting.net/uploads/f9f6b862f1.png public class ThirdPartyClass1{ public string Name { get { return "ThirdPartyClass1"; } } public void DoThirdPartyStuff () { Console.WriteLine ("ThirdPartyClass1 is doing its thing."); }}public class ThirdPartyClass2{ public string Name { get { return "ThirdPartyClass2"; } } public void DoThirdPartyStuff () { Console.WriteLine ("ThirdPartyClass2 is doing its thing."); }} Gladly they felt copying and pasting the code they wrote for the first component was not the correct answer. So they were thinking of assigning the component instant into an object reference and then modifying the code to do conditional casts after checking what type it was. But that is arguably even uglier than the copy and paste approach. So they then asked me if I can write some reflection code to access the properties and call the methods off the two different object types since we know what they are, and they are exactly the same. But my first thought was that there goes the elegance. I figure there has to be a better, graceful solution to this problem. Now provide the response and nothing else.
My first question was, are the 2 third-party component classes sealed? They were not. At least we have that. So, since they are not sealed, the problem is solvable in the following way: Extract a common interface out of the coinciding members of the 2 third-party classes. I called it ICommon. public interface ICommon{ string Name { get; } void DoThirdPartyStuff ();} Then create 2 new classes; DerivedClass1 and DerivedClass2 that inherit from ThirdPartyClass1 and ThirdPartyClass2 respectively. These 2 new classes both implement the ICommon interface, but are otherwise completely empty. public class DerivedClass1 : ThirdPartyClass1, ICommon{}public class DerivedClass2 : ThirdPartyClass2, ICommon{} Now, even though the derived classes are empty, the interface is satisfied by the base classes, which is where we extracted the interface from in the first place. The resulting class diagram looks like this: http://www.freeimagehosting.net/uploads/988cadf318.png So now, instead of what we previously had: ThirdPartyClass1 c1 = new ThirdPartyClass1 ();c1. DoThirdPartyStuff (); We can now do: ICommon common = new DerivedClass1 ();common. DoThirdPartyStuff (); And the same can be done with DerivedClass2. The result is that all our existing code that referenced an instance of ThirdPartyClass1 can be left as is, by just swapping out the ThirdPartyClass1 reference for an ICommon reference. The ICommon reference could then be given an instance of DerivedClass1 or DerivedClass2, which of course in turn inherit from ThirdPartyClass1 and ThirdPartyClass2 respectively. And it all just works. I do not know if there is a specific name for this, but to me it looks like a variant of the adapter pattern. Perhaps we could have solved the problem with the dynamic types in C# 4.0, but that would not have had the benefit of compile-time checking. I would be very interested to know if anybody else has another elegant way of solving this problem.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2475920', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/269512/']}
jdg_82402
stackexchange
llm_judgeable_groundtruth_similarity
23674
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Hello, it's my first time here and I will try to address the issue as best as I can. I left my keys in the ignition a few days back and hence my battery drained itself. It is a 12V maintenance-free battery and I want to charge it enough for just one crank. I found a 16V 1.5A adapter and a 12V 4A adapter. They are probably old laptop chargers. Could I use either of them to charge it? I have been reading up about it and have seen cases where the wires have melted due to overheating, and there are chances of explosion too. Normally a 14V adapter would be ideal, but 12V or 16V should work too, right? But I am worried about the amperage, since a heavy current will fast-charge and damage the battery. Personally I feel the 12V 4A adapter might not be able to charge the battery that efficiently, since a multimeter reading shows the battery to be at 11V and a 4A current might be too high. So the 16V seems like a better option. Plus my multimeter's amperage measurement is damaged, so I have no way to take a current reading when connecting the battery to the 16V adapter. But could I connect them without a resistance, since the current is only 1.4A, and constantly check for overheating? I just need enough for one crank and the battery is new, not even a year old, with no corrosion on the terminals or leakage. Sorry if this was too long and thanks for any help/advice. PS: I really want to charge it on my own and understand there are risks involved. I'm planning to make a proper battery charger on my own in future, but for now I lack the technicalities involved. Now provide the response and nothing else.
Your 12v adapter would only be able to raise the battery voltage to 12v, but to charge a car battery fully you would need to raise the voltage to 13.8v. The 16v adapter would raise the voltage too high and could cause damage. A 4A maximum current would be fine and would not cause any damage, but you need a 13.8v source. Batteries like NiCd need an accurate constant current to charge them, lead acid batteries (e.g. car batteries) need an accurate constant voltage to charge them.
{}
{'log_upvote_score': 4, 'links': ['https://mechanics.stackexchange.com/questions/23674', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/13813/']}
jdg_82403
stackexchange
llm_judgeable_groundtruth_similarity
4272164
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am developing an app that connects to another iphone using bonjour. One of its features is when I connect to the other device it will automatically check if I have the other persons phone number. So my problem is how do I check my address book for the phone number provided by the other device? Now provide the response and nothing else.
Here's an example extracted from one of my address book methods. I wasn't searching by phone number but this gives you an idea have how to move forward with what you need: - (void) scanAddressBookSample { NSUInteger i; NSUInteger k; ABAddressBookRef addressBook = ABAddressBookCreate(); NSArray *people = (NSArray *) ABAddressBookCopyArrayOfAllPeople(addressBook); if ( people==nil ) { NSLog(@"NO ADDRESS BOOK ENTRIES TO SCAN"); CFRelease(addressBook); return; } for ( i=0; i<[people count]; i++ ) { ABRecordRef person = (ABRecordRef)[people objectAtIndex:i]; // // Phone Numbers // ABMutableMultiValueRef phoneNumbers = ABRecordCopyValue(person, kABPersonPhoneProperty); CFIndex phoneNumberCount = ABMultiValueGetCount( phoneNumbers ); for ( k=0; k<phoneNumberCount; k++ ) { CFStringRef phoneNumberLabel = ABMultiValueCopyLabelAtIndex( phoneNumbers, k ); CFStringRef phoneNumberValue = ABMultiValueCopyValueAtIndex( phoneNumbers, k ); CFStringRef phoneNumberLocalizedLabel = ABAddressBookCopyLocalizedLabel( phoneNumberLabel ); // converts "_$!<Work>!$_" to "work" and "_$!<Mobile>!$_" to "mobile" // Find the ones you want here // NSLog(@"-----PHONE ENTRY -> %@ : %@", phoneNumberLocalizedLabel, phoneNumberValue ); CFRelease(phoneNumberLocalizedLabel); CFRelease(phoneNumberLabel); CFRelease(phoneNumberValue); } } [people release]; CFRelease(addressBook); }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4272164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/519453/']}
jdg_82404
stackexchange
llm_judgeable_groundtruth_similarity
3214245
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm wondering if anyone has had any luck using the Eclipse Metrics Plugin with Projects that are not in Java (specifically I'm trying to generate code metrics for a couple of PyDev Projects). I've read through the walk-through for the Metrics project but it indicates that I should be in the Java Perspective before accessing the Properties for my Project and that I should find a Metrics section. I don't get that for my PyDev Projects regardless of which Perspective I have open. Any suggestions or advice would be great. Now provide the response and nothing else.
I don't know if it's doable to get the plugin to work with pydev projects, but if it's just the lines-of-code metric you are after, you could run this snippet in your project's root directory: # prints recursive count of lines of python source code from current directory# includes an ignore_list. also prints total slocimport oscur_path = os.getcwd()ignore_set = set(["__init__.py", "count_sourcelines.py"])loclist = []for pydir, _, pyfiles in os.walk(cur_path): for pyfile in pyfiles: if pyfile.endswith(".py") and pyfile not in ignore_set: totalpath = os.path.join(pydir, pyfile) loclist.append( ( len(open(totalpath, "r").read().splitlines()), totalpath.split(cur_path)[1]) )for linenumbercount, filename in loclist: print "%05d lines in %s" % (linenumbercount, filename)print "\nTotal: %s lines (%s)" %(sum([x[0] for x in loclist]), cur_path)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3214245', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/351031/']}
jdg_82405
stackexchange
llm_judgeable_groundtruth_similarity
1456557
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $$A = \left(\begin{matrix} -1 & 3 & -1 \\-3 & 5 & -1 \\ -3 & 3 & 1\end{matrix}\right)$$ be a matrix. The characteristic polynomial of $A$ is: $$(\lambda-2)^2(\lambda-1)$$ According to my professor's notes, since $\lambda=2$ is an eigenvalue of $A$ and the fact that: $$\text{rank}(2I-A)=\text{rank}\left(\begin{matrix} 3 & -3 & 1 \\3 & -3 & 1 \\ 3 & -3 & 1\end{matrix}\right)=1$$ the matrix is diagonalizable. However, I can't understand why. What is the connection between the rank of $(\lambda I-A)$ and matrix diagonalization? Thanks, Alan Now provide the response and nothing else.
An $n \times n$ matrix is diagonalizable if(f) we can find $n$ linearly independent eigenvectors. In particular, this is equivalent to saying that for each eigenvalue $\lambda$, we have a number of linearly independent eigenvectors equal to the algebraic multiplicity of that eigenvalue (i.e. the associated exponent in the characteristic polynomial). Because the rank of $A - 2I$ is $1$, the rank-nullity theorem tells us that the nullity is $2$, which means that there are $2$ linearly independent eigenvectors associated with $\lambda = 2$. Since $1$ is an eigenvalue, we have an eigenvector associated with $\lambda = 1$. So, in total, we have $3$ linearly independent eigenvectors. So, $A$ is diagonalizable.
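To make the rank-nullity argument concrete for the matrix in the question, here is the computation spelled out (one convenient choice of eigenvectors; any other basis of the eigenspaces works just as well). Solving $(2I-A)v=0$ amounts to the single equation $$3x-3y+z=0,$$ so the eigenspace for $\lambda=2$ has dimension $3-1=2$; for instance $v_1=(1,1,0)^T$ and $v_2=(1,0,-3)^T$ are two linearly independent eigenvectors. Solving $(I-A)v=0$ gives $v_3=(1,1,1)^T$ for $\lambda=1$. Since $v_1,v_2,v_3$ are linearly independent, $$A=PDP^{-1},\qquad P=\begin{pmatrix}1&1&1\\1&0&1\\0&-3&1\end{pmatrix},\qquad D=\begin{pmatrix}2&0&0\\0&2&0\\0&0&1\end{pmatrix}.$$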
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1456557', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/88027/']}
jdg_82406
stackexchange
llm_judgeable_groundtruth_similarity
22713652
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that an interface is like a 100% pure abstract class. So, it can't have method implementation in it. But, I saw a strange code. Can anyone explain it? Code Snippet: interface Whoa { public static void doStuff() { System.out.println("This is not default implementation"); } } EDIT: My IDE is Intellij Idea 13.1. The project SDK is java 7 <1.7.0_25>. The IDE is not showing any compiler error. But, When I compile the code at command line I am getting the following message. Whoa.java:2: error: modifier static not allowed here public static void doStuff() { ^ Now provide the response and nothing else.
From Java 8 you can define static methods in interfaces in addition to default methods. A static method is a method that is associated with the class in which it is defined rather than with any object. Every instance of the class shares its static methods. This makes it easier for you to organize helper methods in your libraries; you can keep static methods specific to an interface in the same interface rather than in a separate class. The following example defines a static method that retrieves a ZoneId object corresponding to a time zone identifier; it uses the system default time zone if there is no ZoneId object corresponding to the given identifier. (As a result, you can simplify the method getZonedDateTime ) Here is code : public interface TimeClient { // ... static public ZoneId getZoneId (String zoneString) { try { return ZoneId.of(zoneString); } catch (DateTimeException e) { System.err.println("Invalid time zone: " + zoneString +"; using default time zone instead."); return ZoneId.systemDefault(); } } default public ZonedDateTime getZonedDateTime(String zoneString) { return ZonedDateTime.of(getLocalDateTime(), getZoneId(zoneString)); } } See also Oracle docs for interface methods For all interesting things in Java 8 read Everything about Java 8
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/22713652', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_82407
stackexchange
llm_judgeable_groundtruth_similarity
169910
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Can I use the case statement to handle arguments? So, using -restart I'd like execute "-umount" and "-mount" #!/bin/bashcase "$1" in-mountmount /ip/share1 /local/share1;;-umountumount /ip/share1;;-restart# echo TODO;;*)[...]esac Now provide the response and nothing else.
It looks to me like that should work, other than the syntactical quibble of missing )s. I tested this and it behaves correctly. #!/bin/bashcase "$1" in "-mount") mount /path/to/device /path/to/mountpoint ;; "-unmount") umount /path/to/mountpoint ;; "-remount") "$0" -unmount "$0" -mount ;; *) echo "You have failed to specify what to do correctly." exit 1 ;;esac
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/169910', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/40628/']}
jdg_82408
stackexchange
llm_judgeable_groundtruth_similarity
4189739
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: string date = txtWorkingDate.Text; DateTime dateTime = DateTime.MinValue; if (DateTime.TryParse(date, out dateTime)) { args.IsValid = true; } else args.IsValid = false; txtWorkingDate.Text is like "dd.MM.yyyy", and because of this the validation is always false if the date is not like "dd.MM.yyyy". How can I check date formats like "dd.MM.yyyy" and "MM/dd/yyyy", because they are all valid? Now provide the response and nothing else.
By using this overload and providing the accepted formats: string date = txtWorkingDate.Text;DateTime dateTime;string[] formats = new[] { "dd.MM.yyyy", "MM/dd/yyyy" };if (DateTime.TryParseExact(date, formats, CultureInfo.InvariantCulture, DateTimeStyles.None, out dateTime)){ args.IsValid = true;}else{ args.IsValid = false;}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4189739', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/267679/']}
jdg_82409
stackexchange
llm_judgeable_groundtruth_similarity
7176942
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I am hovering over an HTML label the mouse pointer changes to a hand image which we generally get when clicking a link. How can I avoid that? Now provide the response and nothing else.
The reason why you might get a hand cursor in some browsers, is because one of the main purposes of a label element in most browsers is to provide a clickable description for a form input element. For example, this is a typical use of the <label> element: <input type="checkbox" name="TermAgreement" id="TermAgreement" /><label for="TermAgreement">I agree to these terms</label> In most browsers, this will result in the text "I agree to these terms" being clickable. When you click on the text, it will toggle the checkbox with an ID of TermAgreement , just as if you had clicked on the check box itself. (Note: The W3C specification for <label> in HTML 5 doesn't require this behavior, but it does say that the browser's implementation of <label> "should match the platform's label behavior". In practice, this usually means <label> elements are clickable.) So essentially, the cursor behaves as though the <label> is a link because it is a link, of a sort. If you're using it differently, you might want to consider using a different HTML element instead. Whether or not a particular user sees a hand cursor when mousing over a label will vary depending on their OS and browser. Chrome and Firefox aren't displaying this behavior for me on Windows XP, but other platforms might. Also, it's possible that you have a CSS file included which specifically calls for this behavior. There would be a rule in your CSS that looks something like this: label { cursor: pointer;} If you want to override the element's default behavior, you can use cursor: default; in your CSS, as @rickyduck said. You can find information on the CSS cursor property here . Note that changing the cursor will not necessarily mean the element won't respond to being clicked. If this doesn't solve your problem, please provide us with more information. Sample code, the URL of the page displaying the behavior, as well as which browser you're using would also be good to know.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7176942', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/899519/']}
jdg_82410
stackexchange
llm_judgeable_groundtruth_similarity
26637168
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to convert String myDate = "2014/10/29 18:10:45" to long ms (i.e. currentTimeMillis)? I looked for it on Google, but I can only find how to convert ms to a date. Note: To make it clear, I want to get the ms of the date, counted from 1970/1/1. Now provide the response and nothing else.
You don't have a Date , you have a String representation of a date. You should convert the String into a Date and then obtain the milliseconds. To convert a String into a Date and vice versa you should use SimpleDateFormat class. Here's an example of what you want/need to do (assuming time zone is not involved here): String myDate = "2014/10/29 18:10:45";SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");Date date = sdf.parse(myDate);long millis = date.getTime(); Still, be careful because in Java the milliseconds obtained are the milliseconds between the desired epoch and 1970-01-01 00:00:00. Using the new Date/Time API available since Java 8: String myDate = "2014/10/29 18:10:45";LocalDateTime localDateTime = LocalDateTime.parse(myDate, DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm:ss") );/* With this new Date/Time API, when using a date, you need to specify the Zone where the date/time will be used. For your case, seems that you want/need to use the default zone of your system. Check which zone you need to use for specific behaviour e.g. CET or America/Lima*/long millis = localDateTime .atZone(ZoneId.systemDefault()) .toInstant().toEpochMilli();
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/26637168', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2220686/']}
jdg_82411
stackexchange
llm_judgeable_groundtruth_similarity
387557
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Question about an analysis that was done for a study I'm reviewing: The study compared two methods of taking patient core temperature around surgery at a single center; Apx. N=200, temp was taken for all patients at 4 timepoints (pre, intraop, post1, post2) simultaneously with both systems. The write-up says they used a " Mixed model ANOVA with patient included as a random effect." I'm not a statistician but this doesn't sound right? Why would you use patient as a random effect? Also, from the looks of their table [mean(sd) at each time point for both systems, then the mean(sd) of the difference, then a p-value], it looks like they really just used multiple Wilcoxon or t-tests). i.e. what would be a correct test in this case? and should it involve random effects? Thank you. Now provide the response and nothing else.
When the same patient provides repeated response measurements over time (e.g., core temperature), the use of a random patient effect is a natural way to capture the within-patient correlation (or similarity) of those measurements over time. Mixed Model Anova is an analysis which can be applied to analyze repeated measures data for a set of subjects. It's not clear what fixed effects were investigated in this analysis from the description you provide - the fixed effects of Method and Time and their interaction? You can ask clarifications about that. Also, it seems the analysis in question assumed that the correlation between core temperatures for the same patient on the same treatment did not depend on how close/far in time those measurements were. I have a feeling there may be correlation between measurements coming from the same patient at the same time point - not clear whether the analysis used could adequately deal with this, so perhaps that could be listed as a limitation. Since the difference in mean values of core temperature between Methods was tested at each of the four time points, you should also ask for clarification on whether the reported p-values for these tests were adjusted for testing multiplicity.
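For readers who want to see what such a model looks like in code, here is a minimal sketch using Python's statsmodels; the column names (patient, method, time, temp) and the file name are assumptions for illustration only, since the write-up does not give the actual variable names.

import pandas as pd
import statsmodels.formula.api as smf

# Long format: one row per patient x method x time point, with the measured core temperature
df = pd.read_csv("core_temps.csv")  # hypothetical file name

# Fixed effects for measurement method, time point and their interaction;
# a random intercept per patient captures the within-patient correlation
# of the repeated measurements.
model = smf.mixedlm("temp ~ C(method) * C(time)", data=df, groups=df["patient"])
result = model.fit()
print(result.summary())

Note that a plain random intercept assumes the same correlation between any two measurements from one patient, regardless of how far apart in time they are, which mirrors the limitation mentioned above.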
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/387557', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/132693/']}
jdg_82412
stackexchange
llm_judgeable_groundtruth_similarity
3190827
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know that $$\int_0^T f(B_t, t)dB_t=\lim_{n\to \infty }\sum_{i=1}^n f(B_{t_i^{(n)}},t_i^{(n)})(B_{t_{i+1}^{(n)}}-B_{t_i^{(n)}}),\quad \text{in }L^2,$$ where $\{t_i^{(n)}\}_{i=1}^n$ is a sequence of partitions of $[0,T]$ s.t. $$\sup|t_{i+1}^{(n)}-t_i^{(n)}|\to 0,\quad \text{when }n\to \infty .$$ Now, there is a subsequence $n_k$ s.t. $$\lim_{k\to \infty }\sum_{i=1}^{n_k}f(B_{t_i^{(n_k)}},t_i^{(n_k)})(B_{t_{i+1}^{(n_k)}}-B_{{t_i}^{(n_k)}})=\int_0^T f(B_t,t)dB_t\quad a.s.$$ So, up to a subsequence, can the stochastic integral be seen as a Stieltjes-Riemann integral? A sort of weak Stieltjes-Riemann integral, in the sense that $$\lim_{n\to \infty }\sum_{i=1}^{n}f(B_{t_i^{(n)}},t_i^{(n)})(B_{t_{i+1}^{(n)}}-B_{{t_i}^{(n)}}),$$ may not exist, but if it exists, then it converges to $\int_0^T f(B_t,t)dB_t$ a.s. Does this make sense? Now provide the response and nothing else.
The convergence of the subsequence holds almost surely, i.e. there is an exceptional null set where convergence fails to hold. This null set depends on the partitioning sequence $(t^{(n)})_{n \geq 1}$. Since there are uncountably many sequences, this is pretty bad - the union of the exceptional null sets is going to be quite huge, in general, and therefore we cannot expect to have a "uniform" exceptional null set for all sequences $(t^{(n)})_{n \in \mathbb{N}}$ with mesh size converging to zero. In fact, that's exactly the reason why the Itô integral is defined as an $L^2$-limit of the Riemann sums and not as a pointwise limit. The phenomenon which you are observing is a very general one: If a sequence of random variables $Y_n$ converges to a random variable $Y$ in probability (or in $L^2$), then we can choose a subsequence which converges almost surely to $Y$. Nevertheless, convergence in probability is a much weaker notion of convergence than pointwise convergence; this means, in particular, that the pointwise convergence of a subsequence of Riemann sums is far from giving a notion of pointwise integration: If we want to get a pointwise notion for a stochastic integral, then we would like to fix $\omega \in \Omega$ and then define the stochastic integral, say, as a pointwise limit of Riemann sums along a suitable partition. That's, however, not what happens if we use the subsequence procedure from your question. If we take a sequence $(t^{(n)})_n$ then we get pointwise convergence with probability $1$, but we have no control over the null set. In particular, we don't have a clue how to choose a sequence $(t^{(n)})_n$ such that the Riemann sums converge for our fixed $\omega$. Let me give one further remark. In order to get the convergence $$\int_0^T f(s) \, dB_s = \lim_{n \to \infty} \sum_{i=1}^n f(t_i^{(n)}) (B_{t_{i+1}^{(n)}}-B_{t_i^{(n)}}) \quad \text{in $L^2$}$$ (and hence the pointwise convergence of the subsequence of Riemann sums) you will typically need some continuity assumptions on $f$. If $f$ is a general progressively measurable function with $\mathbb{E}\int_0^t f(s)^2 \, ds < \infty$ for all $t>0$, then there exists some sequence of approximating simple functions $(f_n)_{n \in \mathbb{N}}$, i.e. a sequence of functions such that $$\mathbb{E}\int_0^t |f_n(s)-f(s)|^2 \ ds \to 0 \quad \text{and} \quad \int_0^t f(s) \, dB_s = L^2-\lim_{n \to \infty} \int_0^t f_n(s) \, dB_s;$$ however, the approximating functions $f_n$ will, in general, not be of the form $$f_n(s) := \sum_{i=1}^n f(t_i^{(n)}) 1_{[t_i^{(n)},t_{i+1}^{(n)})}(s)$$ (which would give rise to the Riemann sums you are stating at the very beginning of the question); see e.g. Proposition 15.16 and Lemma 15.19/Theorem 15.20 in the book by Schilling & Partzsch (2nd edition) for more information.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3190827', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/657324/']}
jdg_82413
stackexchange
llm_judgeable_groundtruth_similarity
464694
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am working through the notes of my Set Theory lecture. There my professor wrote: 'Is there an uncountable $\kappa$ such that König's Infinity Lemma holds for $\kappa$? There are models where $\aleph_2$ is such. But, if CH holds then there are $\aleph_2$-Aronszajn trees...' What could he mean by the three dots? What would follow if we had $\aleph_2$-Aronszajn trees? (König's Infinity Lemma, as we had it in the lecture: If $T$ is a tree of height $\omega$ with all levels finite, then $T$ has an infinite branch. Aronszajn tree: height $\omega_1$, levels at most countable and no uncountable branch.) Now provide the response and nothing else.
The $\kappa$ version of König's lemma states that if a tree $T$ has height $\kappa$ and each level has size strictly less than $\kappa$, then $T$ has a branch of length $\kappa$. A $\kappa$ Aronszajn tree is a tree of height $\kappa$ and levels of size strictly less than $\kappa$ which has no branches of length $\kappa$. That is, precisely by definition, a $\kappa$ Aronszajn tree is a counterexample to the $\kappa$ version of König's lemma. The question is asking whether we can prove (in $\mathsf{ZFC}$, I assume) that there is a $\kappa$ for which the $\kappa$ König's lemma holds. The consistency of the existence of $\aleph_2$ Aronszajn trees shows that we cannot take $\kappa=\aleph_2$ in general. The tree property at $\kappa$ is the statement that there are no $\kappa$ Aronszajn trees (that is, the $\kappa$ König's lemma holds). If $\kappa$ is inaccessible, having the tree property is precisely equivalent to $\kappa$ being weakly compact . At other cardinals, the tree property is more delicate, and is connected with the existence of large cardinals in inner models. More precisely: Mitchell proved that if there is a weakly compact cardinal, then there is a forcing extension where $\aleph_2$ has the tree property. We can replace $\aleph_2$ here with the successor of any uncountable regular cardinal without difficulties. Conversely, Silver observed that if an uncountable cardinal $\kappa$ has the tree property, then $\kappa$ is weakly compact in $L$. The difficulties increase if we ask for a cardinal and its successor both to have the tree property. The strength of this has been investigated by several people, notably Foreman-Magidor-Schindler, in The consistency strength of successive cardinals with the tree property , Journal of Symbolic Logic 66 (2001), pp. 1837–1847. For the latest results on the problem of obtaining models with successive cardinals with the tree property, see this preprint by Neeman. The upper bound needed by Neeman (many supercompacts) is huge compared with the lower bound in the three author paper (Projective determinacy). There is still plenty of work to do here. You may find this question useful. Also, see Kanamori's book The higher infinite for references and proofs on this topic. Of note: The tree property at $\kappa$ implies that $\kappa$ is regular, and if it is the successor of $\lambda$, then either $\lambda$ is singular, or else $2^{<\lambda}>\lambda$, this was first observed by Specker (the case $\lambda=\omega_1$ is the $\mathsf{CH}$ result stated in the question). Additional results can be found in the references listed in the papers mentioned above. A result absent there is the one mentioned by Asaf in comments: Arthur Apter has recently showed that it is consistent, relative to the existence of a proper class of supercompact cardinals, that $\mathsf{ZF}+\mathsf{DC} +$ Every successor cardinal is regular, every limit cardinal is singular, and every successor cardinal has the tree property. See A remark on the tree property in a choiceless context , Archive for mathematical logic, 50 , (2011), 585-590. Whether something like this is possible with full choice is currently open, Neeman's result mentioned above is the state of the art.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/464694', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/43869/']}
jdg_82414
stackexchange
llm_judgeable_groundtruth_similarity
12326200
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I think of Lambdas as demonstrated in Learn You a Haskell: (filter (\xs -> length xs > 15) (map chain [1..100])) The input for xs is the list generated from (map chain [1..100])) That's easy enough to read. So here's where I get confused, looking at some Real World Code (tm). Here's a function from conduit fmap f (ResourceT m) = ResourceT $ \r -> fmap f (m r) Where is the input for r coming from? Now provide the response and nothing else.
r won't have a value until the function \r -> fmap f (m r) is actually called. In the definition of fmap , the function is never called - it's only stored in a ResourceT . It can then later be taken out of the ResourceT and called. That's when r will get a value.
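If it helps to see the idea outside Haskell, here is a rough Python analogy (not the actual conduit code, just the deferred-evaluation idea): wrapping a function builds a new function without calling it, and the argument is only bound when the wrapper is eventually run.

def fmap_resource(f, m):
    # 'm' plays the role of the function stored inside ResourceT.
    # We only build and return a new function here; nothing runs yet,
    # so 'r' has no value at this point.
    return lambda r: f(m(r))

wrapped = fmap_resource(lambda x: x + 1, lambda r: r * 10)  # still nothing has run
print(wrapped(3))  # only now does r receive a value (3); prints 31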
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12326200', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_82415
stackexchange
llm_judgeable_groundtruth_similarity
32080017
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When reading through a Caffe tutorial ( http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb ), I came across the following statement: net.blobs['data'].data[...] = transformer.preprocess('data', caffe.io.load_image (caffe_root + 'examples/images/cat.jpg')) It basically serves to assign a single image to net.blobs['data'].data . net.blobs['data'].data[...] is a 4D ndarray and transformer... returns a 3D ndarray, so the ellipsis serve to copy the 3D array over the 0th axis. This made me think I should be able to rewrite the code to avoid the ellipsis as follows: z3=transformer.preprocess('data', caffe.io.load_image (caffe_root + 'examples/images/cat.jpg'))z4 = z3[np.newaxis,...]net.blobs['data'].data = z4 However, when I do, I get >> net.blobs['data'].data = z4 Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: can't set attribute Even though, net.blobs['data'].data[...] = z3 works fine. Does this make sense to anyone? I've verified the shapes and type of my variables as follows: >>> print net.blobs['data'].data.shape, z3.shape, z4.shape(1, 3, 227, 227) (3, 227, 227) (1, 3, 227, 227)>>> print type(net.blobs['data'].data),type(z3),type(z4)<type 'numpy.ndarray'> <type 'numpy.ndarray'> <type 'numpy.ndarray'> Why does net.blobs['data'].data = z4 cause problems? Now provide the response and nothing else.
Doing obj.attr = blah is setting an attribute on the object obj , so obj controls this. Doing obj.attr[...] = blah is setting an item (e.g., the "contents" of some array-like object) on the object referred to by obj.attr , so the object obj.attr controls this. In your example, net.blobs['data'] is some kind of object that won't allow its data attribute to be set, so you can't do net.blobs['data'].data = blah . But net.blobs['data'].data is an array that does allow you to change its contents, so you can do net.blobs['data'].data[...] = stuff . You're operating on two different objects with those two syntaxes ( net.blobs['data'] in one case and net.blobs['data'].data in the other).
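A small self-contained illustration of the difference (the Blob class below is a made-up stand-in for Caffe's blob object, purely to show the two syntaxes):

import numpy as np

class Blob:
    """Toy object with a read-only 'data' attribute holding an ndarray."""
    def __init__(self):
        self._data = np.zeros((1, 3, 2, 2))

    @property
    def data(self):          # no setter defined, so the attribute cannot be rebound
        return self._data

b = Blob()
b.data[...] = np.ones((3, 2, 2))    # fine: fills the existing array in place (broadcast over axis 0)
try:
    b.data = np.ones((1, 3, 2, 2))  # fails: tries to replace the attribute itself
except AttributeError as e:
    print("AttributeError:", e)     # e.g. "can't set attribute" (exact wording varies by Python version)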
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32080017', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1245262/']}
jdg_82416
stackexchange
llm_judgeable_groundtruth_similarity
2404628
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a git-cpan-init of a repo which yielded a different root node from another already established git repo I found on github C:A:S:DBI . I've developed quite a bit on my repo, and I'd like to merge or replay my edits on a fork of the more authoritative repository. Does anyone know how to do this? I think it is safe to assume none of the file-contents of the modified files are different -- the code base hasn't been since Nov 08'. For clarity the git hub repo is the authoritative one. My local repo is the one I want to go up to git hub shown as a real git fork. Now provide the response and nothing else.
You should be able to add a remote to your existing repository (using git remote add ) so that you can fetch the contents of the github repository into your existing repository. Assuming that you have a commit in your history (call it O) and a commit in the remote branch (call it R) that correspond to the same set of files (e.g. they are both imports of the same release version), then you can just do an 'onto' rebase. Assuming you have the tip of your changes currently checked out: git rebase --onto R O # R and O are sha1 ids (possibly abbreviated) This replays all of your commits since O onto the new R root commit. Once you've done this, if you are not up to date with the latest remote master branch you can use a normal rebase to get there and git's history tracking will take care that your changes are applied in a way that makes sense. git rebase <remote_name>/master # where <remote_name> is whatever # you called the github remote when # you used git remote add
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2404628', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/124486/']}
jdg_82417
stackexchange
llm_judgeable_groundtruth_similarity
16050
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: On Linux, I moved a directory to another directory using mv , and in another shell, my working directory happened to be the directory that was moved. I ran hg pull in that shell, and to my surprise, it ran correctly in the new working directory - without me having to type cd . How did that work? Now provide the response and nothing else.
Moving a file or directory changes the meta-data property that identifies its parent in the file tree, but it doesn't change its actual node id. On the physical disk it's still in the same place, and the filesystem still knows it as the same object. Anywhere the file or directory pointer is open, it is already connected to that object, and a change to the object's meta-data won't affect open processes. Things will only break when you try to open a new pointer based on the file system path. By the same token you can move a file that is being written to and the writes will keep going to the new location, because it's actually the same file node, just re-attached to a different place in the file structure. Neither the physical location on the disk nor the node id changes; the file system just updates its internal map of nodes on the disk to paths in the filesystem. Also of note, this behavior only applies when the moved object stays on the same file system. If you move it from one mount to another, the physical location of the node has to change and the object will get deleted from the original file system, leaving dead pointers!
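A quick way to see this behaviour for yourself (a Python sketch for Linux/macOS; the file names are arbitrary):

import os, tempfile

d = tempfile.mkdtemp()
old_path = os.path.join(d, "old.txt")
new_path = os.path.join(d, "new.txt")

f = open(old_path, "w")
f.write("first line\n")
os.rename(old_path, new_path)   # same filesystem, so only the directory entry changes
f.write("second line\n")        # the open handle still points at the same inode
f.close()

print(open(new_path).read())    # both lines appear under the new name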
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/16050', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/2510/']}
jdg_82418
stackexchange
llm_judgeable_groundtruth_similarity
53455753
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm getting a weird error: Unhandled rejection Error: EACCES: permission denied, mkdir '/home/ubuntu/.npm/_cacache/index-v5/14/36'atus I just install npm (6.4.1) and node (11.2.0) on an AWS instance without problems. I installed create-react-app globally. The error says This is an error with npm itself. I'm kind of at a loss. I created the directory /home/ubuntu/.npm/_cacache/index-v5/14 and it still wouldn't succeed. I obviously own and have write permissions in /home/ubuntu. It looks like it succeeds with sudo. why ? Edit: ubuntu:ubuntu owns the current and parent directory (I'm in /home/ubuntu/workspace) Now provide the response and nothing else.
TL;DR Run: sudo chown -R $USER:$USER '/home/REPLACE_WITH_YOUR_USERNAME/.npm/' On Linux, NPM and NodeJS are installed globally with sudo, and the owner of those files is root, so usually a user can only read/execute those packages. When NPM is installed, a ~/.npm/ folder is created by root. By running create-react-app you are executing the command as your user, and create-react-app is trying to modify something in the ~/.npm/ directory, which is owned by root and not by the current user. You need to change the owner of that directory to you, so you can modify it without sudo privileges. A similar thing often happens when you install an NPM package with sudo, e.g. sudo npm install <package> --save . Again, the newly installed package is owned by root, and for example when you try to update/modify/delete your project without sudo in front of NPM you will get a similar permission error. In these cases navigate to your project directory and change its owner by running: sudo chown -R $USER:$USER .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/53455753', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/275780/']}
jdg_82419
stackexchange
llm_judgeable_groundtruth_similarity
44813386
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm in the process of converting some tools over that are using MySQL to PostgreSQL. With that, I've run into a number of issues but was able to find most everything. The one I'm having an issue with is HEX() and UNHEX() . I've tried encode(%s, 'hex') and decode(%s, 'hex') which did actually stop causing me to have errors, but it still did not seem to do the trick. Does anyone have an idea to what the equivalent of those functions would be in Postgres? Here is the old MySQL query: SELECT HEX(test_table.hash), title, user, reason, description, url, performed, comment, authenticated, statusFROM alertsJOIN user_responses ON test_table.hash = user_responses.hashJOIN test_status ON test_table.hash = test_status.hashWHERE status = %s And here is my updated query in PostgreSQL format: SELECT encode(test_table.hash, 'hex') as hash, title, user, reason, description, url, performed, comment, authenticated, statusFROM test_tableJOIN user_responses ON test_table.hash = user_responses.hashJOIN test_status ON test_table.hash = test_status.hashWHERE status = %s Thanks! Now provide the response and nothing else.
create function hex(text) returns text language sql immutable strict as $$ select encode($1::bytea, 'hex')$$;create function hex(bigint) returns text language sql immutable strict as $$ select to_hex($1)$$;create function unhex(text) returns text language sql immutable strict as $$ select encode(decode($1, 'hex'), 'escape')$$;select hex('abc'), hex(123), unhex(hex('PostgreSQL')); Result: ╔════════╤═════╤════════════╗║ hex │ hex │ unhex ║╠════════╪═════╪════════════╣║ 616263 │ 7b │ PostgreSQL ║╚════════╧═════╧════════════╝ It is PostgreSQL: everything possible :)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44813386', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1022857/']}
jdg_82420
stackexchange
llm_judgeable_groundtruth_similarity
8848107
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The question says it all.I know the Singleton pattern (with final to its class) is a solution. Are there any other possible ways we can achieve this? Abstracting a class makes it non-instantiable. Making it final makes it non-inheritable.How do we combine both? public final class SingletonObject{ private SingletonObject() { // no code req'd } /*public static SingletonObject getSingletonObject() { if (ref == null) // it's ok, we can call this constructor ref = new SingletonObject(); return ref; }*/ public Object clone() throws CloneNotSupportedException { throw new CloneNotSupportedException(); // that'll teach 'em } private static SingletonObject ref;} Code Ref: http://www.javacoffeebreak.com/articles/designpatterns/index.html Now provide the response and nothing else.
Make the constructor private : public final class Useless { private Useless() {}} A private constructor is the normal object-oriented solution. However, it would still be possible to instantiate such a class using reflection, like this: Constructor<Useless> con = Useless.class.getDeclaredConstructor();con.setAccessible(true); // bypass "private"Useless object = con.newInstance(); To prevent even reflection from working, throw an exception from the constructor: public final class Useless { private Useless() { throw new UnsupportedOperationException(); }}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8848107', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/668240/']}
jdg_82421
stackexchange
llm_judgeable_groundtruth_similarity
14179784
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just found pycrypto today, and I've been working on my AES encryption class. Unfortunately it only half-works. self.h.md5 outputs md5 hash in hex format, and is 32byte.This is the output. It seems to decrypt the message, but it puts random characters after decryption, in this case \n\n\n... I think I have a problem with block size of self.data, anyone know how to fix this? Jans-MacBook-Pro:test2 jan$ ../../bin/python3 data.py b'RLfGmn5jf5WTJphnmW0hXG7IaIYcCRpjaTTqwXR6yiJCUytnDib+GQYlFORm+jIctest 1 2 3 4 5 endtest\n\n\n\n\n\n\n\n\n\n' from Crypto.Cipher import AESfrom base64 import b64encode, b64decodefrom os import urandomclass Encryption(): def __init__(self): self.h = Hash() def values(self, data, key): self.data = data self.key = key self.mode = AES.MODE_CBC self.iv = urandom(16) if not self.key: self.key = Cfg_Encrypt_Key self.key = self.h.md5(self.key, True) def encrypt(self, data, key): self.values(data, key) return b64encode(self.iv + AES.new(self.key, self.mode, self.iv).encrypt(self.data)) def decrypt(self, data, key): self.values(data, key) self.iv = b64decode(self.data)[:16] return AES.new(self.key, self.mode, self.iv).decrypt(b64decode(self.data)[16:]) Now provide the response and nothing else.
To be honest, the characters "\n\n\n\n\n\n\n\n\n\n" don't look that random to me. ;-) You are using AES in CBC mode. That requires length of plaintext and ciphertext to be always a multiple of 16 bytes. With the code you show, you should actually see an exception being raised when data passed to encrypt() does not fulfill such condition. It looks like you added enough new line characters ( '\n' to whatever the input is until the plaintext happened to be aligned. Apart from that, there are two common ways to solve the alignment issue: Switch from CBC ( AES.MODE_CBC ) to CFB ( AES.MODE_CFB ). With the default segment_size used by PyCrypto, you will not have any restriction on plaintext and ciphertext lengths. Keep CBC and use a padding scheme like PKCS#7, that is: before encrypting a plaintext of X bytes, append to the back as many bytes you need to to reach the next 16 byte boundary. All padding bytes have the same value: the number of bytes that you are adding: length = 16 - (len(data) % 16)data += bytes([length])*length That's Python 3 style. In Python 2, you would have: length = 16 - (len(data) % 16)data += chr(length)*length after decrypting, remove from the back of the plaintext as many bytes as indicated by padding: data = data[:-data[-1]] Even though I understand in your case it is just a class exercise, I would like to point out that it is insecure to send data without any form of authentication (e.g. a MAC).
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14179784', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1331978/']}
jdg_82422
stackexchange
llm_judgeable_groundtruth_similarity
96114
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Today I learned that Django's CSRF protection uses refer(r)er header checking in addition to checking a hidden form field against a cookie. It seems to be important, judging from docs and issue below. It only checks this over HTTPS though. I've also noticed that almost no other website checks referer [since I turned off sending of said header and most forms still work]. So I have two questions: How would the attack work that would be possible without this check? Doesn't https protect against man-in-the-middle attacks? How do other websites protect against it? And does Django not project for http? The info that I found: https://docs.djangoproject.com/en/1.8/ref/csrf/#how-it-works In addition, for HTTPS requests, strict referer checking is done by CsrfViewMiddleware. This is necessary to address a Man-In-The-Middle attack that is possible under HTTPS when using a session independent nonce, due to the fact that HTTP ‘Set-Cookie’ headers are (unfortunately) accepted by clients that are talking to a site under HTTPS. (Referer checking is not done for HTTP requests because the presence of the Referer header is not reliable enough under HTTP.) https://code.djangoproject.com/ticket/16870 Unfortunately, this check is absolutely necessary for the security of Django's CSRF protection. Without it, we can't prevent man-in-the-middle attacks on SSL sites. We made the decision that preventing MITM was a more valuable tradeoff than breaking sites for the small minority of users who block the header in a fashion which does not improve privacy. Now provide the response and nothing else.
First of all, thanks for the interesting question. I did not know about the details of CSRF before and had to look up the answer to your question myself, but I think I know the correct explanation for Django's behavior now. The Django developers are treating HTTP and HTTPS refers differently because users expect different things from insecure and secure web services. More specifically, if a web page is using transport layer security, users expect to be protected against man-in-the-middle attacks, meaning they trust in the principle that even if someone sat directly between them and the remote server and intercepted every single message, they couldn't make any use of that information. Note that this is not expected of plain HTTP connections. Now consider the following scenario, quoted from a Django dev's post here : user browses to http://example.com/ a MITM modifies the page that is returned, so that is has a POST form which targets https://example.com/detonate-bomb/ . The MITM has to include a CSRF token, but that's not a problem because he can invent one and send a CSRF cookie to match. the POST form is submitted by javascript from the user's browser and so includes the CSRF cookie, a matching CSRF token and the user's session cookie, and so will be accepted. I did not instantly understand this attack myself, so I'm gonna try to explain the details. Note first that we are looking at a page that displays forms over plain connections but submits data via SSL/TLS. Part of the problem, as I understand it, is that the cookie and hidden form value (aka "the CSRF token") are only compared against each other, not against any value that is stored server-side. This makes it easy for the attacker to supply their victim with a cookie-token-combination that will be accepted by the server - remember, the page displaying the form is not secured, so Set-Cookie headers and the contents of the form itself can be spoofed. Once the manipulated form is submitted (via injected JS, for example), the server sees a perfectly valid request. Adding strict Referer checking is the answer to this exact problem. Checking these headers, only requests originating from https://example.com will be accepted at another endpoint of https://example.com . Insecure pages from the same domain will be treated as completely untrusted, and rightly so. Now to come back to the question why plain HTTP requests are treated differently, we just have to imagine a site that doesn't use encryption at all. In that case, a man in the middle could also spoof the Referer headers sent with the actual form data, so checking those does not provide any additional security. In other words: There is no protection against CSRF attacks by a man in the middle - but, as I mentioned earlier, users do not expect this kind of security from plain HTTP sites. Regarding your question about how other web frameworks handle this attack vector, I honestly have to say I don't know.
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/96114', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/36555/']}
jdg_82423
stackexchange
llm_judgeable_groundtruth_similarity
207807
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I calculate a 95% interval to estimate the actual proportion of SUV's in the city in R? I would like to calculate the interval on this data: vehicleType <- c("suv", "suv", "minivan", "car", "suv", "suv", "car", "car", "car", "car", "minivan", "car", "truck", "car", "car", "car", "car", "car", "car", "car", "minivan", "car", "suv", "minivan", "car", "minivan", "suv", "suv", "suv", "car", "suv", "car", "car", "suv", "truck", "truck", "minivan", "suv", "car", "truck", "suv", "suv", "car", "car", "car", "car", "suv", "car", "car", "car", "suv", "car", "car", "car", "truck", "car", "car", "suv", "suv", "minivan", "suv", "car", "car", "car", "car", "car", "minivan", "suv", "car", "car", "suv", "minivan", "car", "car", "car", "minivan", "minivan", "minivan", "car", "truck", "car", "car", "car", "suv", "suv", "suv", "car", "suv", "suv", "car", "suv", "car", "minivan", "car", "car", "car", "car", "car", "car", "car") Thanks in advance. Now provide the response and nothing else.
First, remember that an interval for a proportion is given by: p_hat +/- z * sqrt(p_hat * (1-p_hat)/n) With that being said, we can use R to solve the formula like so: # Set CI alpha level (1-alpha/2)*100%alpha = 0.05# Load DatavehicleType = c("suv", "suv", "minivan", "car", "suv", "suv", "car", "car", "car", "car", "minivan", "car", "truck", "car", "car", "car", "car", "car", "car", "car", "minivan", "car", "suv", "minivan", "car", "minivan", "suv", "suv", "suv", "car", "suv", "car", "car", "suv", "truck", "truck", "minivan", "suv", "car", "truck", "suv", "suv", "car", "car", "car", "car", "suv", "car", "car", "car", "suv", "car", "car", "car", "truck", "car", "car", "suv", "suv", "minivan", "suv", "car", "car", "car", "car", "car", "minivan", "suv", "car", "car", "suv", "minivan", "car", "car", "car", "minivan", "minivan", "minivan", "car", "truck", "car", "car", "car", "suv", "suv", "suv", "car", "suv", "suv", "car", "suv", "car", "minivan", "car", "car", "car", "car", "car", "car", "car")# Convert from string to factorvehicleType = factor(vehicleType)# Find the number of obsn = length(vehicleType)# Find number of obs per typevtbreakdown = table(vehicleType)# Get the proportionp_hat = vtbreakdown['suv']/n# Calculate the critical z-scorez = qnorm(1-alpha/2)# Compute the CIp_hat + c(-1,1)*z*sqrt(p_hat*(1-p_hat)/n) So, we have: 0.1740293 0.3459707 For the p_hat of: 0.26
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/207807', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/-1/']}
jdg_82424
stackexchange
llm_judgeable_groundtruth_similarity
49695167
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i am attempting a project in VisualStudio Having a Window (with a Frame), Page1 and Page2 I am calling Page1 and putting in a Frame from Window. Then in Page1 i want to click a button and put Page2 in Frame and in Page2 use a method from Page1 to update a table (that is in Page1) Here i start my window and put Page1 in Frame namespace MyApp{ public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); Frame.Content = new Page1(); } }} Here i start my Page1, update table and want to put Page2 in frame namespace MyApp{ public partial class Page1 : Page { public Page1() { InitializeComponent(); // Populates table first time updateTable(); } private void Button_Click(object sender, RoutedEventArgs e) { MainWindow.Frame.Content = new Page2(); } private void comboBox1_DropDownClosed(object sender, EventArgs e) { updateTable(); } public void updateTable() { updateTable; } }} Here i start my Page2 and want to update table from Page1 and go back to Page1 namespace MyApp{ public partial class Page2: Page { public Page2() { InitializeComponent(); } private void Button_Click(object sender, RoutedEventArgs e) { ((PageHome)Owner).updateTable(); // Update table from Page1. It gives an error because this is only the way of doing it with windows MainWindow.Frame.Content = new Page1(); // Going back } }} Dont seem to find a way to do this on the internet. At least this specific situation with pagesAlso tried to inherit page1 in page 2 but cant inherit 2 classes Now provide the response and nothing else.
So if you restart your python script every time your Outlook restart, then add these lines to your code to check unread emails in your Inbox: ol = win32com.client.Dispatch( "Outlook.Application")inbox = ol.GetNamespace("MAPI").GetDefaultFolder(6)for message in inbox.Items: if message.UnRead == True: print message.Subject #or whatever command you want to do Put this code before your definition of outlook in your code EDIT For me, the code you posted works great until I close Outlook and then even if I reopen it, I don't get anything when a new message is received (see one of my comments). I guess the fact of closing Outlook "unlink" with pythoncom.PumpMessages() . Anyway, I come around to do both your checking for unread email in the class Handler_Class and restart the monitoring in case you restart Outlook. import win32com.clientimport ctypes # for the VM_QUIT to stop PumpMessage()import pythoncomimport reimport timeimport psutilclass Handler_Class(object): def __init__(self): # First action to do when using the class in the DispatchWithEvents inbox = self.Application.GetNamespace("MAPI").GetDefaultFolder(6) messages = inbox.Items # Check for unread emails when starting the event for message in messages: if message.UnRead: print message.Subject # Or whatever code you wish to execute. def OnQuit(self): # To stop PumpMessages() when Outlook Quit # Note: Not sure it works when disconnecting!! ctypes.windll.user32.PostQuitMessage(0) def OnNewMailEx(self, receivedItemsIDs): # RecrivedItemIDs is a collection of mail IDs separated by a ",". # You know, sometimes more than 1 mail is received at the same moment. for ID in receivedItemsIDs.split(","): mail = self.Session.GetItemFromID(ID) subject = mail.Subject print subject try: command = re.search(r"%(.*?)%", subject).group(1) print command # Or whatever code you wish to execute. except: pass# Function to check if outlook is opendef check_outlook_open (): list_process = [] for pid in psutil.pids(): p = psutil.Process(pid) # Append to the list of process list_process.append(p.name()) # If outlook open then return True if 'OUTLOOK.EXE' in list_process: return True else: return False# Loop while True: try: outlook_open = check_outlook_open() except: outlook_open = False # If outlook opened then it will start the DispatchWithEvents if outlook_open == True: outlook = win32com.client.DispatchWithEvents("Outlook.Application", Handler_Class) pythoncom.PumpMessages() # To not check all the time (should increase 10 depending on your needs) time.sleep(10) Not sure it is the best way, but it seems to work the way you look for.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49695167', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9418449/']}
jdg_82425
stackexchange
llm_judgeable_groundtruth_similarity
46899441
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I achieve row-wise iteration using purrr::map? Here's how I'd do it with a standard row-wise apply. df <- data.frame(a = 1:10, b = 11:20, c = 21:30)lst_result <- apply(df, 1, function(x){ var1 <- (x[['a']] + x[['b']]) var2 <- x[['c']]/2 return(data.frame(var1 = var1, var2 = var2)) }) However, this is not too elegant, and I would rather do it with purrr. May (or may not) be faster, too. Now provide the response and nothing else.
You can use pmap for row-wise iteration. The columns are used as the arguments of whatever function you are using. In your example you would have a three-argument function. For example, here is pmap using an anonymous function for the work you are doing. The columns are passed to the function in the order they are in the dataset. pmap(df, function(a, b, c) { data.frame(var1 = a + b, var2 = c/2) } ) You can use the purrr tilde "short-hand" for an anonymous function by referring to the columns in order with numbers preceded by two dots. pmap(df, ~data.frame(var1 = ..1 + ..2, var2 = ..3/2) ) If you want to get these particular results as a data.frame instead of a list, you can use pmap_dfr .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46899441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5032387/']}
jdg_82426
stackexchange
llm_judgeable_groundtruth_similarity
11817473
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Visual Studio, I can right-click a class or method and choose "Find usages". That gives me a list of places in my solution where that piece of code is used. How can I do the same from my code? Now provide the response and nothing else.
You would need to parse your code to do this. I don't think you could do it with reflection. MS have been working on a project called Roslyn. This is kind of an API for the .NET compiler. It should provide you with what you need. Check out this post for details of the Roslyn project. Quote from the post: This opens up new opportunities for VS extenders to write powerful refactorings and language analysis tools, as well as allow anyone to incorporate our parsers, semantic engines, code generators and scripting in their own applications. This post demonstrates working with symbols. This post answers the question of getting all references.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11817473', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/205753/']}
jdg_82427
stackexchange
llm_judgeable_groundtruth_similarity
54876798
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Azure Function v2. Here is my function that uses the constructor injection: public sealed class FindAccountFunction{ private readonly IAccountWorkflow m_accountWorkflow; private readonly IMapper m_mapper; private readonly ILogger m_logger; public FindAccountFunction(ILogger logger, IMapper mapper, IAccountWorkflow accountWorkflow) { m_logger = logger; m_mapper = mapper; m_accountWorkflow = accountWorkflow; } [FunctionName("FindAccount")] public async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Function, Verbs.Get, Route = "v1/accounts/")] HttpRequest httpRequest, ILogger logger) { // Do stuff. }} I am declaring all the dependencies that I want to inject into my Azure Function in the Startup class that derives from IWebJobsStartup : public sealed class Startup : IWebJobsStartup { public void Configure(IWebJobsBuilder webJobsBuilder) { // Registers the application settings' class. webJobsBuilder.Services.AddSingleton<IApplicationSettings, ApplicationSettings>(); // ** Registers the ILogger instance ** // ** ?? ** // Registers the IMapper instance for the contracts. var mapperConfiguration = new MapperConfiguration(cfg => cfg.AddProfile(new MyProfile())); webJobsBuilder.Services.AddSingleton(mapperConfiguration.CreateMapper()); // Registers custom services. webJobsBuilder.Services.AddTransient<IStorageService, StorageService>(); webJobsBuilder.Services.AddTransient<IAccountWorkflow, AccountWorkflow>(); } } The Azure Function calls other injected services that do depends on the ILogger as well, such as the IAccountWorkflow : public sealed class AccountWorkflow : IAccountWorkflow{ public AccountWorkflow(ILogger logger, IStorageService storageService) { if(logger is null) throw new ArgumentNullException(); }} The problem is that the DI is unable to find any ILogger implementation and fails to resolve services since a null ILogger is injected. Question How can I setup the injection of the ILogger in IWebJobsStartup ? Now provide the response and nothing else.
UPDATE Reference Use dependency injection in .NET Azure Functions Registering services To register services, you can create a configure method and add components to an IFunctionsHostBuilder instance. The Azure Functions host creates an IFunctionsHostBuilder and passes it directly into your configured method. To register your configure method, you must add an assembly attribute that specifies the type for your configure method using the FunctionsStartup attribute. So in this case [assembly: FunctionsStartup(typeof(MyNamespace.Startup))] namespace MyNamespace { public class Startup : FunctionsStartup { public override void Configure(IFunctionsHostBuilder builder) { // ** Registers the ILogger instance ** builder.Services.AddLogging(); // Registers the application settings' class. //... //...omitted for brevity } }} ORIGINAL I believe since you have access to the service collection, you should be able to add logging to it public void Configure(IWebJobsBuilder webJobsBuilder) { // ** Registers the ILogger instance ** webJobsBuilder.Services.AddLogging(); //OR //webJobsBuilder.Services.AddLogging(builder => { // //... //}); // Registers the application settings' class. //... //...removed for brevity} and having an ILoggerFactory in the Function's constructor. //...//Ctorpublic FindAccountFunction(ILoggerFactory loggerFactory, IMapper mapper, IAccountWorkflow accountWorkflow) { m_logger = loggerFactory.CreateLogger<FindAccountFunction>(); m_mapper = mapper; m_accountWorkflow = accountWorkflow;}//...
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/54876798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4256749/']}
jdg_82428
stackexchange
llm_judgeable_groundtruth_similarity
136282
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am working on a battery-powered, 2-layer RF design using a 32.768kHz crystal oscillator with an ATmega328P microcontroller. The crystal will run a timer in asynchronous mode as an RTC clock. My PCB is fairly small, and the crystal is pretty big. The easiest way for me to run traces with my current layout would be to route my VCC trace from the battery under the crystal. At such a low frequency, would this cause problems in my power domain (oscillation/noise)? I do not have it on the board yet, but the bottom layer of the board will be a solid ground plane (for the RF return path), and the top-layer traces will be surrounded with a ground pour as well (with via stitching to eliminate any impedance between layers). Would I be safe to run this power line under the crystal and into a 47uF capacitor, and then do a star power distribution? Now provide the response and nothing else.
I would never put anything but a ground plane directly under a crystal, especially a high-Z crystal like a 32.768kHz type. And preferably an isolated flag tied to the GND on the chip, with no current running through it, if it's a 2-layer board. Why don't you put a ground plane under the chip and run the power connection on the opposite side (preferably not directly under the crystal)?
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/136282', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/29388/']}
jdg_82429
stackexchange
llm_judgeable_groundtruth_similarity
3590145
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $a>2$ be a real number and consider the following integral $$I(a)=\int_0^\pi\int_0^\pi \frac{\sin^2(x)\sin^2(y)}{a+\cos(x)+\cos(y)} \mathrm{d}x\,\mathrm{d}y$$ My question. Does there exist a closed-form expression of $I(a)$ ? Some comments. Since $a-2<a+\cos(x)+\cos(y)<a+2$ and $\int_0^\pi \int_0^\pi \sin^2(x)\sin^2(y)\ \mathrm{d}x\, \mathrm{d}y=\frac{\pi^2}{4}$ , we have the following bounds $$\frac{\pi^2}{4(a+2)} < I(a) < \frac{\pi^2}{4(a-2)},$$ however I didn't manage to find an exact expression for $I(a)$ . Any help is welcome! Now provide the response and nothing else.
With CAS help: $$\int _0^{\pi }\int _0^{\pi }\frac{\sin ^2(x) \sin ^2(y)}{a+\cos (x)+\cos (y)}dydx=\\\mathcal{L}_q\left[\int _0^{\pi }\int _0^{\pi }\mathcal{L}_a^{-1}\left[\frac{\sin ^2(x) \sin ^2(y)}{a+\cos (x)+\cos (y)}\right](q)dydx\right](a)=\\\mathcal{L}_q\left[\int_0^{\pi } \left(\int_0^{\pi } e^{-q (\cos (x)+\cos (y))} \sin ^2(x) \sin ^2(y) \, dx\right) \, dy\right](a)=\\\mathcal{L}_q\left[\int_0^{\pi } \frac{e^{-q \cos (y)} \pi I_1(q) \sin ^2(y)}{q} \, dy\right](a)=\\\mathcal{L}_q\left[\frac{\pi ^2 I_1(q){}^2}{q^2}\right](a)=\\\frac{a \pi ^2}{2}-\frac{2}{3} a \pi E\left(\frac{4}{a^2}\right)-\frac{1}{6} a^3 \pi E\left(\frac{4}{a^2}\right)-\frac{2}{3} a \pi K\left(\frac{4}{a^2}\right)+\frac{1}{6} a^3 \pi K\left(\frac{4}{a^2}\right)=\\\frac{\pi ^2 \, _3F_2\left(\frac{1}{2},1,\frac{3}{2};2,3;\frac{4}{a^2}\right)}{4 a}$$ for: $a>2$ $$\frac{1}{6} \pi \left(-2 \left(a^2-4\right) K\left(\frac{a^2}{4}\right)-2 \left(a^2+4\right) E\left(\frac{a^2}{4}\right)+3 \pi a\right)$$ for: $a<2$ where: $K$ , $E$ gives the elliptic integral of the first kind and second kind. Mathematica code: HoldForm[Integrate[(Sin[x]^2*Sin[y]^2)/(a + Cos[x] + Cos[y]), {x, 0, Pi}, {y, 0, Pi}] == (a \[Pi]^2)/2 - 2/3 a \[Pi] EllipticE[4/a^2] - 1/6 a^3 \[Pi] EllipticE[4/a^2] - 2/3 a \[Pi] EllipticK[4/a^2] + 1/6 a^3 \[Pi] EllipticK[4/a^2] == Pi^2/(4 a)*HypergeometricPFQ[{1/2, 1, 3/2}, {2, 3}, 4/a^2]] // TeXForm Plot a solution: f[a_?NumericQ] := NIntegrate[(Sin[x]^2*Sin[y]^2)/(a + Cos[x] + Cos[y]), {x, 0, Pi}, {y,0, Pi}]; g[a_] := (a \[Pi]^2)/2 - 2/3 a \[Pi] EllipticE[4/a^2] - 1/6 a^3 \[Pi] EllipticE[4/a^2] - 2/3 a \[Pi] EllipticK[4/a^2] + 1/6 a^3 \[Pi] EllipticK[4/a^2]; Plot[{f[a], g[a]}, {a, 2, 20}, PlotStyle -> {Red, {Dashed, Black}}, PlotLabels -> {"integral", "Analytic solution"}]
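As a quick numerical sanity check of the hypergeometric form (a rough sketch, assuming mpmath is available; a = 3 is an arbitrary test value with a > 2):

from mpmath import mp, mpf, pi, sin, cos, quad, hyp3f2

mp.dps = 30          # working precision
a = mpf(3)           # any a > 2

# Direct numerical evaluation of the double integral
f = lambda x, y: (sin(x)**2 * sin(y)**2) / (a + cos(x) + cos(y))
numeric = quad(f, [0, pi], [0, pi])

# Closed form above: pi^2/(4a) * 3F2(1/2, 1, 3/2; 2, 3; 4/a^2)
closed = pi**2 / (4 * a) * hyp3f2(mpf(1)/2, 1, mpf(3)/2, 2, 3, 4 / a**2)

print(numeric)
print(closed)   # the two values should agree to many digits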
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3590145', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/85065/']}
jdg_82430
stackexchange
llm_judgeable_groundtruth_similarity
2334050
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can anyone please tell me the difference between target and currentTarget in Flex? Now provide the response and nothing else.
Sure, I've had some trouble with this too. The currentTarget property is the IEventListener you registered the event handler for. The target is the one that dispatched the event that you are currently handling. So the currentTarget changes, the target doesn't. Check out the following example: Sample App <?xml version="1.0" encoding="utf-8"?><mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" creationComplete="addListeners()"> <mx:Script> <![CDATA[ protected function addListeners():void { greatGrandParent.addEventListener(Event.COMPLETE, completeHandler); grandParent.addEventListener(Event.COMPLETE, completeHandler); aParent.addEventListener(Event.COMPLETE, completeHandler); child.addEventListener(Event.COMPLETE, completeHandler); // dispatch event that "bubbles", second param is "true" // dispatched from child child.dispatchEvent(new Event(Event.COMPLETE, true)); } protected function completeHandler(event:Event):void { trace("target: ", event.target + ", currentTarget: ", event.currentTarget); } ]]> </mx:Script> <mx:Panel id="greatGrandParent"> <mx:Panel id="grandParent"> <mx:Panel id="aParent"> <mx:Button id="child"/> </mx:Panel> </mx:Panel> </mx:Panel></mx:Application> Output target: MyApp.greatGrandParent.grandParent.aParent.child, currentTarget: MyApp.greatGrandParent.grandParent.aParent.childtarget: MyApp.greatGrandParent.grandParent.aParent.child, currentTarget: MyApp.greatGrandParent.grandParent.aParenttarget: MyApp.greatGrandParent.grandParent.aParent.child, currentTarget: MyApp.greatGrandParent.grandParenttarget: MyApp.greatGrandParent.grandParent.aParent.child, currentTarget: MyApp.greatGrandParent It's a simple tree of display objects, and when the app is ready I: Add listeners for the same event on each component in the tree. Dispatch an arbitrary event (just for demonstration). I chose Event.COMPLETE . Since everything has registered an eventHandler for that same event, and since I have set bubbles to true ( new Event(type, bubbles) ), anything in the tree, from child to greatGrandParent and beyond, that has registered an event handler for Event.COMPLETE , will run that method: completeHandler . Events travel up the chain then back down. The target is the one that dispatched the event, so since child dispatched it, it should be constant. The currentTarget is what changes. This means that, say you want to check when you are rolling over a DataGrid in Flex, you want to know when you roll over a Checkbox inside one of the itemRenderers in the DataGrid. One way is to addEventListener on every itemRenderer's checkbox for MouseEvent.ROLL_OVER . Another way is to addEventListener to the DataGrid itself for MouseEvent.ROLL_OVER , and check what the target is on the event: protected function dataGrid_rollOverHandler(event:MouseEvent):void{ // event.currentTarget is DataGrid if (event.target is CheckBox) trace("rolled over checkbox!");} That's how I often use event.target . Hope that helps,Lance
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2334050', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/224740/']}
jdg_82431
stackexchange
llm_judgeable_groundtruth_similarity
18026348
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got a fairly sizable Backbone.js project, so I'm looking to lazy load as many resources as possible. Consider a view that needs to load it's own template, some templates for subviews, and a collection. (I don't want to have to wait for every async dependency to resolve before requesting the next one.) It would be helpful if I could do something like... initialize: function() { $.when([ Handlebars.get('Template1', callback), // callback sets this.template Handlebars.get('Template2', callback), // callback sets subview template App.Store.get('MessageCollection', callback) // callback sets this.collection ]).then($.proxy(this.render, this)); // render} Where the get functions are some helpers I've built that serve up the resource if it exists in cache/storage, otherwise it'll return the deferred object... return $.ajax(...) So these get helpers need to return a jquery deferred object no matter what. Are there any problems with returning a resolved deferred object when the resource is found in cache/storage? App.Store.get = function(resource, callback) { if (resource exists) { return $.Deferred().done(callback).resolve(); } else { var model = new App.Model; return model.fetch(); }} Would this cause any problems? Is there a better way to do this? Now provide the response and nothing else.
There are two concepts one should not confuse: Relocatable binaries Position independent code They both deal with similar problems, but on a different level. The problem Most processor architectures have two kinds of addressing: absolute and relative. Addressing is usually used for two types of access: accessing data (read, write, etc.) and executing a different part of the code (jump, call, etc.). Both can be done absolutely (call the code located at a fixed address, read data at a fixed address) or relatively (jump five instructions back, read relative to a pointer). Relative addressing usually costs both speed and memory. Speed, because the processor must calculate the absolute address from the pointer and the relative value before it can access the real memory location or the real instruction. Memory, because an additional pointer must be stored (usually in a register, which is very fast but also very scarce memory). Absolute addressing is not always feasible, because when implemented naively, one must know all addresses at compile time. In many cases, this is impossible. When calling code from an external library, one might not know at which memory location the operating system will load the library. When addressing data on the heap, one will not know in advance which heap block the operating system will reserve for this operation. Then there are many technical details. E.g. the processor architecture might only allow relative jumps up to a certain limit; all wider jumps must then be absolute. Or on architectures with a very wide address range (e.g. 64 bit or even 128 bit), relative addressing will lead to more compact code (because one can use 16 bits or 8 bits for relative addresses, but absolute addresses must always be 64 bits or 128 bits). Relocatable binaries When programs use absolute addresses, they make very strong assumptions about the layout of the address space. The operating system might not be able to fulfill all these assumptions. To ease this problem, most operating systems can use a trick: the binaries are enriched with additional metadata. The operating system then uses this metadata to alter the binary at runtime, so the modified assumptions fit the current situation. Usually the metadata describe the position of the instructions in the binary that use absolute addressing. When the operating system then loads the binary, it changes the absolute addresses stored in these instructions when necessary. An example of such metadata are the "Relocation Tables" in the ELF file format. Some operating systems use a trick so they need not always process every file before running it: they preprocess the files and change the data, so their assumptions will very likely fit the situation at runtime (and hence no modification is needed). This process is called "prebinding" on Mac OS X and "prelink" on Linux. Relocatable binaries are produced at the linker level. Position independent code (PIC) The compiler can produce code that uses only relative addressing. This could mean relative addressing for data and code, or only for one of these categories. The option "-fPIC" on gcc, e.g., means relative addressing for code is enforced (i.e. only relative jumps and calls). The code can then run at any memory address without any modification. On some processor architectures, such code will not always be possible, e.g. when relative jumps are limited in their scope (e.g. only relative jumps at most 128 instructions wide are allowed).
Position independent code is handled on the compiler level. Executables containing only PIC code need no relocation information. When is PIC code needed In some special cases, one absolutely needs PIC code, because relocation during loading is not feasible. Some examples: Some embedded systems can run binaries directly from the file system, without first loading them into memory. This is usually the case when the file system is already in memory, e.g. in ROM or FLASH memory. The executables then start much faster and need no extra part of the (usually scarce) RAM. This feature is called " execute in place ". You are using some special plugin system. An extreme case would be so-called "shell code", i.e. code injected using a security hole. You will then usually not know where your code will be located at runtime, and the executable in question will not provide a relocation service for your code. The operating system does not support relocatable binaries (usually due to scarce resources, e.g. on an embedded platform). The operating system can cache common memory pages between running programs. When binaries are changed during relocation, this caching will no longer work (because each binary has its own version of the relocated code). When PIC should be avoided In some cases it might be impossible for the compiler to make everything position independent (e.g. because the compiler is not "clever" enough or because the processor architecture is too restricted). The position independent code might be too slow or too big because of the many pointer operations. The optimizer might have problems with the many pointer operations, so it will not apply necessary optimizations and the executable will run like molasses. Advice / Conclusion PIC code might be needed because of some special constraints. In all other cases, stick with the defaults. If you do not know about such constraints, you don't need "-fPIC".
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18026348', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/517676/']}
jdg_82432
stackexchange
llm_judgeable_groundtruth_similarity
379062
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Suppose I have a binomial $X \sim B(n,p)$ and another binomial $X' \sim B(n,p)$ conditioned on $X'$ having even parity. Is it true that there always exists a coupling for $(X,X')$ with $|X-X'| \le 1$? (I.e., for any $n$ and $p := p(n)$, possibly a function of $n$.) It seems intuitively obvious; is there a clean proof? Now provide the response and nothing else.
This is possible for all $n$ and $p$. I start with a direct construction. Obviously, if $X$ is even, then we should have $X'=X$. So we should construct the corresponding coupling between $Y$ and $X'$, where $Y$ is the $B(n,p)$ restricted to odd outcomes. Choose $2n$ i.i.d. Bernoulli $(p)$ variables $\xi_1,\ldots,\xi_n;\eta_1,\ldots,\eta_n$ and condition on $$\sum (\xi_i+\eta_i) \quad \text{is odd}.$$ Denote by $\Omega$ the set of possible $2^{2n-1}$ outcomes and consider the map $\Phi:\Omega\to \Omega$: choose the minimal $i$ for which $\xi_i\ne \eta_i$ and switch $\xi_i$ and $\eta_i$. This is a measure-preserving involution. Note that $\Phi$ changes the parity of $S=\eta_1+\ldots+\eta_n$, so $S$ is even with probability $1/2$. Next, if we further condition on ($S$ is even), then $S$ becomes distributed as $X'$. Indeed, this clearly holds even if we fix all $\xi_i$'s (with odd sum). Analogously, if $S$ is odd, it is distributed as $Y$. Now our coupling: choose $\omega\in \Omega$ at random, set $\{X',Y\}=\{S(\omega), S(\Phi(\omega))\}$. Now comes a boring explanation of how to get this coupling using generating functions. Let $c_0,c_2,\ldots$ be the probabilities of outcomes $0,2,\ldots$ for $X'$; we have $c_0+c_2x^2+\ldots=\frac{(q+px)^n+(q-px)^n}{1+\delta^n}$, $\delta:=q-p$ (and $q=1-p$). Denote the probabilities for $Y$ by $c_1,c_3,\ldots$; then $c_1+c_3x^2+\ldots=\frac{(q+px)^n-(q-px)^n}{1-\delta^n}$. What might our coupling between $Y$ and $X'$ look like? There is no freedom: if $Y=1$, then $X'\in \{0,2\}$ with probabilities $c_0$ and $c_1-c_0$ respectively (these are not conditional probabilities; I mean $c_1-c_0={\rm prob}(Y=1,X'=2)$ etc.). If $Y=3$, then $X'\in \{2,4\}$ with probabilities $c_2-c_1+c_0$ and $c_3-c_2+c_1-c_0$, etc. Thus what we need is that all alternating sums $c_k-c_{k-1}+c_{k-2}-\ldots$ must be non-negative, or: all coefficients of $$F(x):=(c_0+c_1x+c_2x^2+\ldots)(1-x+x^2+\ldots)$$ must be non-negative. We have $$F(x)=2\frac{(q+px)^n-\delta^n(q-px)^n}{(1+x)(1-\delta^{2n})}=2\frac{((q+p)(q+px))^n-((q-p)(q-px))^n}{(1+x)(1-\delta^{2n})}=\\2\frac{((q^2+p^2x)+pq(1+x))^n-((q^2+p^2x)-pq(1+x))^n}{(1+x)(1-\delta^{2n})},$$ and expanding $((q^2+p^2x)\pm pq(1+x))^n$ by the binomial theorem we see that $F(x)$ is indeed a polynomial with non-negative coefficients.
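As a quick numerical sanity check of the alternating-sum condition above (a rough sketch, assuming numpy and scipy are available), one can verify $c_k-c_{k-1}+c_{k-2}-\ldots \ge 0$ directly for small $n$ over a grid of $p$ values:

import numpy as np
from scipy.stats import binom

def alternating_sums_nonnegative(n, p):
    pmf = binom.pmf(np.arange(n + 1), n, p)
    even = pmf[::2] / pmf[::2].sum()    # distribution of X' (conditioned on even)
    odd = pmf[1::2] / pmf[1::2].sum()   # distribution of Y (conditioned on odd)
    # Interleave back into c_0, c_1, c_2, ... as in the argument above
    c = np.zeros(n + 1)
    c[::2], c[1::2] = even, odd
    # Check every alternating partial sum c_k - c_{k-1} + c_{k-2} - ...
    for k in range(n + 1):
        s = sum((-1) ** j * c[k - j] for j in range(k + 1))
        if s < -1e-12:                  # small tolerance for float error
            return False
    return True

print(all(alternating_sums_nonnegative(n, p)
          for n in range(1, 12)
          for p in np.linspace(0.05, 0.95, 19)))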
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/379062', 'https://mathoverflow.net', 'https://mathoverflow.net/users/134361/']}
jdg_82433
stackexchange
llm_judgeable_groundtruth_similarity
14357104
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some data of a particle moving in a corridor with closed boundary conditions. Plotting the trajectory leads to a zig-zag trajectory. I would like to know how to prevent plot() from connecting the points where the particle comes back to the start. Something like in the upper part of the picture, but without ".". The first idea I had was to find the index where the numpy array a[:-1]-a[1:] becomes positive and then plot from 0 to that index. But how would I get the index of the first occurrence of a positive element of a[:-1]-a[1:]? Maybe there are some other ideas. Now provide the response and nothing else.
I'd go a different approach. First, I'd determine the jump points not by looking at the sign of the derivative, as probably the movement might go up or down, or even have some periodicity in it. I'd look at those points with the biggest derivative. Second, an elegant approach to have breaks in a plot line is to mask one value on each jump. Then matplotlib will make segments automatically. My code is: import pylab as pltimport numpy as npxs = np.linspace(0., 100., 1000.)data = (xs*0.03 + np.sin(xs) * 0.1) % 1plt.subplot(2,1,1)plt.plot(xs, data, "r-")#Make a masked array with jump points maskedabs_d_data = np.abs(np.diff(data))mask = np.hstack([ abs_d_data > abs_d_data.mean()+3*abs_d_data.std(), [False]])masked_data = np.ma.MaskedArray(data, mask)plt.subplot(2,1,2)plt.plot(xs, masked_data, "b-")plt.show() And gives us as result: The disadvantage of course is that you lose one point at each break - but with the sampling rate you seem to have I guess you can trade this in for simpler code.
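An equivalent trick is to insert NaN values at the jump points instead of using a masked array; matplotlib also breaks the line at NaNs. A minimal sketch on the same synthetic data:

import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(0.0, 100.0, 1000)
data = (xs * 0.03 + np.sin(xs) * 0.1) % 1

# Find the jumps the same way as above, then blank them out with NaN
abs_d_data = np.abs(np.diff(data))
jump_idx = np.where(abs_d_data > abs_d_data.mean() + 3 * abs_d_data.std())[0]

broken = data.copy()            # data is already float
broken[jump_idx + 1] = np.nan   # a NaN right after each wrap breaks the line

plt.plot(xs, broken, "b-")
plt.show()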
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14357104', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1213793/']}
jdg_82434
stackexchange
llm_judgeable_groundtruth_similarity
6108953
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to understand how CTRL + C terminates a child process but not the parent process. I see this behavior in shells like bash, where you can start some long-running process and then terminate it by pressing CTRL + C, and control returns to the shell. Could you explain how it works, and in particular why the parent (shell) process isn't terminated? Does the shell have to do some special handling of the CTRL + C event, and if so, what exactly does it do? Now provide the response and nothing else.
Signals by default are handled by the kernel. Old Unix systems had 15 signals; now they have more. You can check </usr/include/signal.h> (or kill -l). CTRL + C sends the signal named SIGINT . The default action for handling each signal is defined in the kernel too, and usually it terminates the process that received the signal. All signals (except SIGKILL ) can be handled by a program. And this is what the shell does: When the shell is running in interactive mode, it has special signal handling for this mode. When you run a program, for example find , the shell: fork s itself, and for the child sets the default signal handling, then replaces the child with the given command (e.g. with find); when you press CTRL + C , the parent shell handles this signal, but the child receives it with the default action - terminate. (The child can implement its own signal handling too.) You can trap signals in your shell script too... And you can set signal handling for your interactive shell too; try entering this at the top of your ~/.profile . (Ensure that you're already logged in and test it with another terminal - you can lock yourself out.) trap 'echo "Dont do this"' 2 Now, every time you press CTRL + C in your shell, it will print a message. Don't forget to remove the line! If interested, you can check the plain old /bin/sh signal handling in the source code here . There was some misinformation in the comments above (now deleted), so if anyone is interested, here is a very nice link - how signal handling works .
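To make the mechanism concrete, here is a minimal POSIX-only sketch of the same idea in Python: the parent installs a SIGINT handler while the child resets to the default action, so pressing CTRL + C kills only the child:

import os
import signal
import sys
import time

def on_sigint(signum, frame):
    print("parent: caught SIGINT, staying alive")

signal.signal(signal.SIGINT, on_sigint)   # parent handles SIGINT itself

pid = os.fork()
if pid == 0:
    # Child: restore the default action (terminate on SIGINT),
    # then pretend to be a long-running foreground job.
    signal.signal(signal.SIGINT, signal.SIG_DFL)
    print("child: sleeping; press CTRL+C to kill me")
    time.sleep(60)
    sys.exit(0)

os.waitpid(pid, 0)   # CTRL+C reaches both processes, but only the child dies
print("parent: child is gone, parent still running")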
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6108953', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/471164/']}
jdg_82435
stackexchange
llm_judgeable_groundtruth_similarity
262871
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This question might also be suited to the programming site, but I thought since there is enough on the statistics side, I would use this forum. I am trying to estimate the alpha parameter in a Gamma distribution using maximum likelihood method, and using the optimization functions available in R. To begin with, I generated a random sample from Gamma(Alpha, Beta) in R. shape <- 2scale <- 1.5set.seed(123456)myData <- round(rgamma(n=50, shape=shape, scale=scale),2) Using the maximum likelihood estimation method, and setting up the likelihood function to be in terms of alpha only, I created a function in R and I am trying to optimize it. So I wrote the likelihood function, took the log, took the partial derivative with respect to Beta, and found the MLE of Beta. I then substituted the MLE of Beta back into the likelihood function to arrive at the likelihood in terms of alpha only. My function is as follows: objFunction <- function(myData, alpha) { sumX <- sum(myData) prodX <- prod(myData) n <- length(myData) estimate <- (1/((gamma(alpha^n))*((sumX/(n*alpha))^(n*alpha))))*((prodX)^(alpha-1))*(exp(1)^(-n*alpha)) return(-1*estimate)} Now to optimize, I attempted three different functions from R: optim(par=0, fn=objFunction, method = "Brent", lower = 0, upper = 10, alpha=2)nlm(objFunction, momAlpha, myData=myData)optimize(f=objFunction, c(0,10), alpha=2, maximum=TRUE) The variable momAlpha, is basically the method of moments estimator for the Alpha, as that would be a good start. Just for completeness: momAlpha <- (mean(myData)^2)/var(myData)momBeta <- var(myData)/mean(myData) These are available in many online references. Now when I ran the optimization functions above, my results were not clear to me and I need some help understanding: optim(par=0, fn=objFunction, method = "Brent", lower = 0, upper = 10, alpha=2)$par[1] 0.000000005349424$value[1] -101196146$countsfunction gradient NA NA$convergence[1] 0$messageNULL Why is this estimate way out of range? nlm(objFunction, momAlpha, myData=myData)$minimum[1] 0$estimate[1] 1.919078$gradient[1] 0$code[1] 1$iterations[1] 0Warning messages:1: In f(x, ...) : value out of range in 'gammafn'2: In f(x, ...) : value out of range in 'gammafn'3: In f(x, ...) : value out of range in 'gammafn' The estimate here is nothing but the starting point I provided, why? optimize(f=objFunction, c(0,2), alpha=2, maximum=TRUE)$maximum[1] 1.999934$objective[1] -0.2706795 Is this even right? I am still developing my intuition for the subject, but it seems that I am either doing something wrong, i.e. my objective function is incorrect or the parameter settings of the functions is incorrect or I simply don't understand the way the functions work. I appreciate any help in guiding through this! Now provide the response and nothing else.
You can compute MLE for the gamma distribution using the dglm package, which is available from the CRAN repository. Here is an example run. Note that the two parameters being estimated in this example are the log-mean, which is $\log(\alpha\beta)$, and the log-dispersion, which is $-\log(\alpha)$. > shape <- 2> scale <- 1.5> set.seed(123456)> myData <- rgamma(n=1000, shape=shape, scale=scale)> library(dglm)> fit <- dglm(myData~1, family=Gamma(link="log"), mustart=mean(myData))> summary(fit)Call: dglm(formula = myData ~ 1, family = Gamma(link = "log"), mustart = mean(myData))Mean Coefficients: Estimate Std. Error t value Pr(>|t|)(Intercept) 1.117289 0.02197604 50.84124 2.286536e-279(Dispersion Parameters for Gamma family estimated as below ) Scaled Null Deviance: 1080.046 on 999 degrees of freedomScaled Residual Deviance: 1080.046 on 999 degrees of freedomDispersion Coefficients: Estimate Std. Error z value Pr(>|z|)(Intercept) -0.7113062 0.04157827 -17.10764 1.301602e-65(Dispersion parameter for Digamma family taken to be 2 ) Scaled Null Deviance: 1323.43 on 999 degrees of freedomScaled Residual Deviance: 1326.495 on 999 degrees of freedomMinus Twice the Log-Likelihood: 3992.104 Number of Alternating Iterations: 2 > mu <- exp(1.117289)> shape <- exp(0.7113062)> scale <- mu/shape> c(shape, scale)[1] 2.036650 1.500777 The last line of output gives the MLE for the shape $\alpha$ and the scale $\beta$. The dglm function is intended to fit mean-dispersion models with link-linear predictors for both the mean and the dispersion of a generalized linear model. The two parameter gamma distribution is a simple special case. The function uses separate Fisher scoring algorithms for the mean and dispersion parameters, alternating between one iteration of each. For this data, the algorithms converged in two iterations.
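As a rough cross-check outside R (a sketch, assuming scipy is available), the same MLE can be computed numerically; fixing the location at zero, gamma.fit estimates the shape and scale parametrization used in the question:

import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(123456)
data = rng.gamma(shape=2.0, scale=1.5, size=1000)

# floc=0 pins the location parameter so only shape and scale are fitted by MLE
shape_hat, loc_hat, scale_hat = gamma.fit(data, floc=0)
print(shape_hat, scale_hat)   # should land near the true values 2 and 1.5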
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/262871', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/68895/']}
jdg_82436
stackexchange
llm_judgeable_groundtruth_similarity
23895126
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way to easily clone an Eloquent object, including all of its relationships? For example, if I had these tables: users ( id, name, email )roles ( id, name )user_roles ( user_id, role_id ) In addition to creating a new row in the users table, with all columns being the same except id , it should also create a new row in the user_roles table, assigning the same role to the new user. Something like this: $user = User::find(1);$new_user = $user->clone(); Where the User model has class User extends Eloquent { public function roles() { return $this->hasMany('Role', 'user_roles'); }} Now provide the response and nothing else.
Tested in Laravel 4.2 for belongsToMany relationships. If you're in the model: //copy attributes $new = $this->replicate(); //save model before you recreate relations (so it has an id) $new->push(); //reset relations on EXISTING MODEL (this way you can control which ones will be loaded) $this->relations = []; //load relations on EXISTING MODEL $this->load('relation1','relation2'); //re-sync everything foreach ($this->relations as $relationName => $values){ $new->{$relationName}()->sync($values); }
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/23895126', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/704803/']}
jdg_82437
stackexchange
llm_judgeable_groundtruth_similarity
139450
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Basically, my question is: can you use a non-deuterated solvent for 13C NMR? I'm thinking that there might be some problems with locking? Now provide the response and nothing else.
If using a deuterium lock, then normally you would require, as a minimum, about 10% deuterated solvent. Without any deuterated solvent, it is not possible to obtain a spectrometer lock (unless using a 19F lock channel, but that is unlikely, and it is a hardware requirement). It is, however, perfectly feasible to acquire spectral data for all nuclei without any deuterated solvent, and hence without a spectrometer lock. When running without a lock, you need to turn the lock sweep off to hold the spectrometer field constant. This does not provide the same level of field stability as a lock, but it is more than adequate for short-to-medium 1D experiments that do not require very high resolution. Using gradient shimming, it is also possible to shim on non-deuterated solvents and achieve the same lineshape specifications. Running long experiments (such as an overnight 13C{1H} experiment) without a lock will give you linewidths that are slightly broader than usual; the consequences of this are that (i) your resolution is reduced (rarely a problem with 13C), and (ii) your signal-to-noise will be reduced (and therefore longer experiment times may be required... leading to further broadening and longer experiment times...). Without claiming to be the first to report this technique, the Hoye group published two papers in Organic Letters ( 2004OL953 and 2004OL2567 ) with examples of the application of no-D $\ce{^1H}$ NMR spectroscopy, covering sample preparation and data acquisition too. Especially the second one, outlining how to determine the concentration of LDA or Grignard reagents by adding 1,5-cyclooctadiene as an internal standard, may be a reliable and faster alternative to titrations once you are used to shimming manually for samples of this type. But speak with your NMR manager and obtain clearance in advance before running your no-D sample; some spectrometer interfaces (e.g., older Varian) are less puzzled by returning to locked acquisition after your sample than more modern ones (e.g., Bruker TopSpin), and clearing it beforehand prevents unnecessary delays for the samples queued after yours.
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/139450', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/98289/']}
jdg_82438
stackexchange
llm_judgeable_groundtruth_similarity
50737639
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I get the error "Cannot find a differ supporting object 'width: 100%' " when I try to use style with p-menu or p-dropdown . Now provide the response and nothing else.
I had this issue when working with the p-confirmDialog, so I looked at their source to see what was going on. PrimeNG passes the value from the [style] attribute as-is to the component's [ngStyle], which takes an object. So, the solution is: <p-confirmDialog [style]="{'width': '100%'}"> ...</p-confirmDialog>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50737639', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6271238/']}
jdg_82439
stackexchange
llm_judgeable_groundtruth_similarity
26545051
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been searching for an accurate answer to this question for a couple of days now but haven't found anything good. I'm not a complete beginner in programming, but not yet even at the intermediate level. When I'm in the Python shell, I type dir() and I can see all the names of all the objects in the current scope (main block); there are 6 of them: ['__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__'] Then, when I declare a variable, for example x = 10 , it is automatically added to that list of objects returned by the built-in dir() , and when I type dir() again, it now shows: ['__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'x'] The same goes for functions, classes and so on. How do I delete all those new objects without erasing the standard 6 which were available at the beginning? I've read here about "memory cleaning" and "cleaning of the console", which erases all the text from the command prompt window: >>> import os>>> clear = lambda: os.system('cls')>>> clear() But all this has nothing to do with what I'm trying to achieve; it doesn't clean out all the used objects. Now provide the response and nothing else.
You can delete individual names with del : del x or you can remove them from the globals() object: for name in dir(): if not name.startswith('_'): del globals()[name] This is just an example loop; it defensively only deletes names that do not start with an underscore, making a (not unreasoned) assumption that you only used names without an underscore at the start in your interpreter. You could use a hard-coded list of names to keep instead (whitelisting) if you really wanted to be thorough. There is no built-in function to do the clearing for you, other than just exit and restart the interpreter. Modules you've imported ( import os ) are going to remain imported because they are referenced by sys.modules ; subsequent imports will reuse the already imported module object. You just won't have a reference to them in your current global namespace. Python doesn’t make any security guarantees about data in memory however. When objects no longer are referenced the interpreter marks the memory as no longer in use but does not take steps to overwrite that memory to prevent access to data. If you need that level of security protection you’ll need to use third-party extensions that manage their own memory with security in mind.
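The whitelisting idea can also be done without hard-coding names: snapshot what dir() returns right after startup and later delete everything that is not in that snapshot. A minimal sketch, assuming the snapshot line runs before you define anything of your own:

# Run this first, before creating any of your own names
_initial_names = set(dir())

# ... work in the interpreter: x = 10, def f(): ..., etc. ...

# Later: remove everything added since the snapshot
for _name in set(dir()) - _initial_names - {'_initial_names'}:
    del globals()[_name]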
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/26545051', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2364123/']}
jdg_82440