Dataset schema (column name: type, value range):
source: stringclasses, 1 value
task_type: stringclasses, 1 value
in_source_id: stringlengths, 1 to 8
prompt: stringlengths, 209 to 40.4k
gold_standard_solution: stringlengths, 0 to 56.7k
verification_info: stringclasses, 1 value
metadata: stringlengths, 138 to 225
problem_id: stringlengths, 9 to 10
stackexchange
llm_judgeable_groundtruth_similarity
19843025
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I try to execute two functions in javascript, but the second function have to wait for first function was finished. View: <button id="DefinirEstab" class="btn btn-success btn-lg" data-toggle="modal" data-target="#myModal" onclick="saveContrato(); CountEstab()"> Define </button> First function js: function saveContrato() {$("#formContrato").ajaxSubmit({ type: "POST", url: $("#formContrato").attr("action"), clearForm: false, success: function () { $("#Serie").attr("disabled", true) }});$("#DefinirEstab").attr("onclick", "CountEstab()"); } Second function js (wait for first was finished): function CountEstab() { //Guardar os dados na tabela serviço var Serie = $("#Serie").val(); var NumDoc = $("#NumDoc").val(); $.getJSON("/Contrato/saveCabecalhoServicos", { serie: Serie, numDoc: NumDoc, idForn: window.IdFornecedor, idFilial: e });} else { alert("Falta de dados obrigatórios preenchidos!!");} } Now provide the response and nothing else.
The part "Any access to an a variable must be done through protected methods in the sub classes. ..." just means that the subclasses have to call protected methods that are defined in the superclass. Since these methods are protected they can be accessed by the subclasses. First you would define a base class like this:

    public class Base {
        private int x;               // field is private

        protected int getX() {       // define getter
            return x;
        }

        protected void setX(int x) { // define setter
            this.x = x;
        }
    }

Then you would use it in your child class like this:

    class Child extends Base {
        void foo() {
            int x = getX(); // we can access the method since it is protected.
            setX(42);       // this works too.
        }
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19843025', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2964365/']}
jdg_80641
stackexchange
llm_judgeable_groundtruth_similarity
153035
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: On the project I am working on, I have 3 line-level audio signals, and I need to determine on an arduino if a non-silent signal exists on each line and act appropriately (in my case, "unplugging" a device from mains voltage). My first over-engineered idea is to amplify each signal, add a dc bias, then do some expensive operations on the arduino to determine if an audio signal exists. Each audio signal is a mono signal consisting of a ground wire and an AC, low voltage, audio wire. Now I don't remember much from my analog class from university, but my gut instinct is this can be simply done using analog componenets where I can feed the result (an psuedo digital signal where high would be above voltage x and low would be below voltage y) and do much easier computation on the microcontroller to determine if a signal exists. The frequency of the audio signal is between 10 to 22,000 HzThe peak to peak voltage (of the consumer level line level) is 894mV, but the circuit should be able to handle as low as 10mV peak to peak Any ideas? Now provide the response and nothing else.
This would be easier if you were detecting signals at your -10 dBV maximum level. But since you want this to detect levels as low as 10 mV, you need to use comparators. You get 4 comparators in a 14-pin package (LM339). A couple of things to note:

1) You need to ensure that the voltage differential you want to measure is greater than the worst-case input offset voltage of your chosen comparator. A quick check with my old National Semiconductor datasheet says that the worst-case offset voltage for the LM339 is 2 mV.

2) These comparators are open-collector. They need pullup resistors to whatever supply voltage you need (+5 V or whatever). This is actually an advantage because it makes adding a peak-hold detector easy.

3) You need to bias the input to the comparators HIGHER than your maximum expected negative peak signal and LOWER than your maximum expected positive peak signal. I usually set the bias to the power supply mid-point.

4) This is going to be really expensive (grin). Total BOM cost should be about a dollar for 4 channels.

I'll do a quick schematic here. Note that you will need to use 1% resistors in the input bias and comparator reference section to get anything close to 10 mV sensitivity.

simulate this circuit – Schematic created using CircuitLab

Turns out that about the best sensitivity this will do is about 11 mV peak (22 mV P-P), and it will tolerate input signals up to about 2.5 V peak. You can improve the sensitivity by changing the bottom resistor on the reference voltage divider to the same 22.1K resistor used everywhere else and adding a low-value pot in series with that resistor. Adjust the pot for your desired sensitivity.

Note that this detector includes peak hold. The output goes LOW every time the detected audio exceeds the reference voltage and decays back towards +5 V when the audio goes away. The time constant is currently close to 1 second but can be easily changed by modifying the RC network values at the output pins.
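As a side note on the levels involved, here is a small Python sketch (an editorial illustration, not part of the original answer) that converts the consumer line level quoted in the question (-10 dBV) to peak and peak-to-peak voltage, and compares the 10 mV detection target against the LM339's 2 mV worst-case input offset mentioned above:

    import math

    # Consumer "line level" is nominally -10 dBV (dB relative to 1 V RMS).
    level_dbv = -10.0
    v_rms = 10 ** (level_dbv / 20)   # ~0.316 V RMS
    v_peak = v_rms * math.sqrt(2)    # ~0.447 V peak for a sine wave
    v_pp = 2 * v_peak                # ~0.894 V peak-to-peak (the 894 mV in the question)

    # Detection target from the question vs. worst-case comparator offset.
    target_pp = 0.010                # 10 mV peak-to-peak
    lm339_offset = 0.002             # 2 mV worst-case input offset quoted above

    print(f"-10 dBV is {v_pp*1000:.0f} mV p-p ({v_peak*1000:.0f} mV peak)")
    print(f"10 mV p-p is {target_pp/2*1000:.0f} mV peak, vs {lm339_offset*1000:.0f} mV worst-case offset")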
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/153035', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/4712/']}
jdg_80642
stackexchange
llm_judgeable_groundtruth_similarity
39268792
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm reading a big file with hundreds of thousands of number pairs representing the edges of a graph. I want to build 2 lists as I go: one with the forward edges and one with the reversed. Currently I'm doing an explicit for loop, because I need to do some pre-processing on the lines I read. However, I'm wondering if there is a more pythonic approach to building those lists, like list comprehensions, etc. But, as I have 2 lists, I don't see a way to populate them using comprehensions without reading the file twice. My code right now is: with open('SCC.txt') as data: for line in data: line = line.rstrip() if line: edge_list.append((int(line.rstrip().split()[0]), int(line.rstrip().split()[1]))) reversed_edge_list.append((int(line.rstrip().split()[1]), int(line.rstrip().split()[0]))) Now provide the response and nothing else.
I would keep your logic as it is the Pythonic approach, just not split/rstrip the same line multiple times:

    with open('SCC.txt') as data:
        for line in data:
            spl = line.split()
            if spl:
                i, j = map(int, spl)
                edge_list.append((i, j))
                reversed_edge_list.append((j, i))

Calling rstrip when you have already called it is redundant in itself, even more so when you are splitting, as that would already remove the whitespace; splitting just once means you save doing a lot of unnecessary work.

You can also use csv.reader to read the data and filter empty rows once you have a single whitespace delimiting:

    from csv import reader

    with open('SCC.txt') as data:
        edge_list, reversed_edge_list = [], []
        for i, j in filter(None, reader(data, delimiter=" ")):
            i, j = int(i), int(j)
            edge_list.append((i, j))
            reversed_edge_list.append((j, i))

Or if there are multiple whitespaces delimiting you can use map(str.split, data):

    for i, j in filter(None, map(str.split, data)):
        i, j = int(i), int(j)

Whatever you choose will be faster than going over the data twice or splitting the same lines multiple times.
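To make the "split once" point concrete, here is a rough timing sketch (an editorial illustration, not from the answer; the sample line content is made up):

    import timeit

    line = "123456 654321\n"

    def split_many():
        # mimics the original code: rstrip/split the same line repeatedly
        return ((int(line.rstrip().split()[0]), int(line.rstrip().split()[1])),
                (int(line.rstrip().split()[1]), int(line.rstrip().split()[0])))

    def split_once():
        i, j = map(int, line.split())
        return (i, j), (j, i)

    print(timeit.timeit(split_many, number=100000))
    print(timeit.timeit(split_once, number=100000))  # expected to be several times faster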
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39268792', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4856945/']}
jdg_80643
stackexchange
llm_judgeable_groundtruth_similarity
1275321
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: First, I'm a freshman student of physics, not of mathematics, so please excuse my ignorance of mathematics :) Well, I'm reading the book "Huygens and Barrow, Newton and Hooke" by Vladimir Arnold, and one excerpt (at the page 43) called my attention greatly: "he used Taylor's formula for calculating derivatives rather than using the derivatives for the expansion of functions" ("he" refers to Sir Isaac Newton). My main question is: How to obtain the series of the common elementary functions (trigonometric, exponential...) without using derivatives? And, for example, could calculus be developed in a way that we would get $\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$ and also the one for cosine and then use this series to find the derivative of sine by seeing that it reduces to the series of the cosine? P.S. I don't know English very well, I just hope the question asked is that I wanted to ask! Thanks in advance to everyone who answer. Now provide the response and nothing else.
It is often not hard to solve an equation like this in the form of a series. E.g., let $y = \sin x$ be defined as the solution of $y'' + y = 0$, $y(0) = 0$, $y'(0) = 1$. Let's assume that the Taylor expansion for $y(x)$ is
$$y(x) = \sum_{k=0}^\infty c_k x^k$$
and that it converges for all $x$. Collecting $x^k$ terms in the differential equation we have
$$(k + 2)(k + 1) c_{k + 2} + c_k = 0,\quad c_{k+2} = -\frac{c_k}{(k+1)(k+2)}.$$
From the initial conditions we have
$$c_0 = 0,\qquad c_1 = 1.$$
Hence our expansion for $\sin x$ is
$$\sin x = x - \frac{x^3}{6} + \dots$$
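To see the recurrence in action, here is a short Python sketch (an editorial illustration, not part of the original answer): it builds the coefficients from $c_{k+2} = -c_k/((k+1)(k+2))$ with $c_0 = 0$, $c_1 = 1$, sums the partial series, and compares it with math.sin, so no derivative of $\sin$ is ever used.

    import math

    def sin_series_coeffs(n_terms=16):
        # c_0 = 0, c_1 = 1, then c_{k+2} = -c_k / ((k+1)(k+2))
        c = [0.0, 1.0]
        for k in range(n_terms - 2):
            c.append(-c[k] / ((k + 1) * (k + 2)))
        return c

    def eval_series(coeffs, x):
        return sum(ck * x**k for k, ck in enumerate(coeffs))

    x = 0.7
    print(eval_series(sin_series_coeffs(), x))  # ~0.6442176...
    print(math.sin(x))                          # agrees to many digits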
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1275321', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/239357/']}
jdg_80644
stackexchange
llm_judgeable_groundtruth_similarity
12669674
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: Set active tab style with AngularJS I'm using AngularJS and trying to add a "current" class to my menu whenever that tab's content is being displayed. This is what I have so far, and it works fine when loading the page: HTML: <ul id="nav"> <li><a href="#/one" class="highlighttab">One</a></li> <li><a href="#/two" class="highlighttab">Two</a></li></ul> JS: myModule.directive('highlighttab', function($location) { var currentPath = "#" + $location.path(); return { restrict: 'C', link: function(scope, element, attrs) { var href = element.attr("href"); if (currentPath == href) { element.addClass("current"); } } };}); This will add the 'current' class to the correct <a> tag when the page url is #/one or /#two The problem is if I click the second tab, the class does not get added to it. I imagine I need some way to get the code inside the directive to be re-run when the URL changes. Is there any way to do that ? Now provide the response and nothing else.
Using kfis's code from https://stackoverflow.com/a/12631214/1684860 in the comments, I have this working now using $scope.$watch on location.path().

    myModule.directive('highlighttab', ['$location', function(location) {
        return {
            restrict: 'C',
            link: function($scope, $element, $attrs) {
                var elementPath = $attrs.href.substring(1);
                $scope.$location = location;
                $scope.$watch('$location.path()', function(locationPath) {
                    (elementPath === locationPath) ? $element.addClass("current") : $element.removeClass("current");
                });
            }
        };
    }]);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12669674', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/308097/']}
jdg_80645
stackexchange
llm_judgeable_groundtruth_similarity
14156608
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just want to use ProtocolBuffers on Android Platform,but seems the Android jar package was so large that make my apk more and more large then using it before. Does any one has some resolution for it? Now provide the response and nothing else.
NDK_PROJECT_PATH is an environment variable, so you don't have to include it in the Android.mk file. Is ndk-build launched in the project directory? For more info read the docs in docs/HOWTO.html in the NDK folder, where I read:

    Starting with NDK r4, you can simply place the file under $PROJECT/jni/ and launch the 'ndk-build' script from your project tree. If you want to use 'ndk-build' but place the file to a different location, use a GNU Make variable override as:

        ndk-build NDK_APPLICATION_MK=/path/to/your/Application.mk
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14156608', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1948420/']}
jdg_80646
stackexchange
llm_judgeable_groundtruth_similarity
57972538
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my view controller I have a table view in which I am loading multiple tableview cells. Some have UITextfield,Labels,Radio and check buttons. Now I have done all the part showing and entering data in tableview cell but not able to check whether any one field is left empty.On button click in my controller I need to check this validation if all fields are non-empty. Or How can I get data from each field on button click.Here is my cellForRowAt indexpath code that will give idea of diff cell I am loading. How can I validate if my uitextfield are empty and radio button are checked. func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { switch indexPath.row { case 0: let cell = tableView.dequeueReusableCell(withIdentifier: "StartEndDateCell", for: indexPath) as! StartEndDateCell //cell with 2 textfield return cell case 1: let cell = tableView.dequeueReusableCell(withIdentifier: "NameTableViewCell", for: indexPath) as! NameTableViewCell //cell with single textField return cell case 2: let cell = tableView.dequeueReusableCell(withIdentifier: "StartEnddateTableViewCell", for: indexPath) as! StartEnddateTableViewCell //cell open UIDATEPICKER here on click of textfield contains 2 textfield return cell case 3: let cell = tableView.dequeueReusableCell(withIdentifier: "AmountTableViewCell", for: indexPath) as! AmountTableViewCell //cell with single textfield with numeric keyboard return cell case 4: let cell = tableView.dequeueReusableCell(withIdentifier: "MaxFixTableViewCell", for: indexPath) as! MaxFixTableViewCell return cell //cell with radio buttons case 5: let cell = tableView.dequeueReusableCell(withIdentifier: "infoTableViewCell", for: indexPath) as! infoTableViewCell //for oneTimeLabel let tapGestureFrequency : UITapGestureRecognizer = UITapGestureRecognizer.init(target: self, action: #selector(freqLblClick(tapGestureFrequency:))) tapGestureFrequency.delegate = self as? UIGestureRecognizerDelegate tapGestureFrequency.numberOfTapsRequired = 1 cell.oneTimeLabel.isUserInteractionEnabled = true cell.oneTimeLabel.tag = 1 cell.oneTimeLabel.addGestureRecognizer(tapGestureFrequency) //for onLabel let tapGestureOnDebit : UITapGestureRecognizer = UITapGestureRecognizer.init(target: self, action: #selector(lblClickDebit(tapGestureOnDebit:))) tapGestureOnDebit.delegate = self as? UIGestureRecognizerDelegate tapGestureOnDebit.numberOfTapsRequired = 1 cell.onLabel.isUserInteractionEnabled = true cell.onLabel.tag = 2 cell.onLabel.addGestureRecognizer(tapGestureOnDebit) return cell case 6: let cell = tableView.dequeueReusableCell(withIdentifier: "RemarksTableViewCell", for: indexPath) as! RemarksTableViewCell return cell case 7: let cell = tableView.dequeueReusableCell(withIdentifier: "InformTableViewCell", for: indexPath) as! InformTableViewCell return cell default: let cell = tableView.dequeueReusableCell(withIdentifier: "checkTableViewCell", for: indexPath) as! checkTableViewCell return cell } } Now provide the response and nothing else.
Basic Example Below is a State class of a StatefulWidget, where: a ListView is wrapped in a RefreshIndicator numbersList state variable is its data source onRefresh calls _pullRefresh function to update data & ListView _pullRefresh is an async function, returning nothing (a Future<void> ) when _pullRefresh 's long running data request completes, numbersList member/state variable is updated in a setState() call to rebuild ListView to display new data import 'package:flutter/material.dart';import 'dart:math';class PullRefreshPage extends StatefulWidget { const PullRefreshPage(); @override State<PullRefreshPage> createState() => _PullRefreshPageState();}class _PullRefreshPageState extends State<PullRefreshPage> { List<String> numbersList = NumberGenerator().numbers; @override Widget build(BuildContext context) { return Scaffold( body: RefreshIndicator( onRefresh: _pullRefresh, child: ListView.builder( itemCount: numbersList.length, itemBuilder: (context, index) { return ListTile( title: Text(numbersList[index]), ); },), ), ); } Future<void> _pullRefresh() async { List<String> freshNumbers = await NumberGenerator().slowNumbers(); setState(() { numbersList = freshNumbers; }); // why use freshNumbers var? https://stackoverflow.com/a/52992836/2301224 }}class NumberGenerator { Future<List<String>> slowNumbers() async { return Future.delayed(const Duration(milliseconds: 1000), () => numbers,); } List<String> get numbers => List.generate(5, (index) => number); String get number => Random().nextInt(99999).toString();} Notes If your async onRefresh function completes very quickly, you may want to add an await Future.delayed(Duration(seconds: 2)); after it, just so the UX is more pleasant. This gives time for the user to complete a swipe / pull down gesture & for the refresh indicator to render / animate / spin indicating data has been fetched. FutureBuilder Example Here's another version of the above State<PullRefreshPage> class using a FutureBuilder, which is common when fetching data from a Database or HTTP source: class _PullRefreshPageState extends State<PullRefreshPage> { late Future<List<String>> futureNumbersList; @override void initState() { super.initState(); futureNumbersList = NumberGenerator().slowNumbers(); } @override Widget build(BuildContext context) { return Scaffold( body: FutureBuilder<List<String>>( future: futureNumbersList, builder: (context, snapshot) { return RefreshIndicator( child: _listView(snapshot), onRefresh: _pullRefresh, ); }, ), ); } Widget _listView(AsyncSnapshot snapshot) { if (snapshot.hasData) { return ListView.builder( itemCount: snapshot.data.length, itemBuilder: (context, index) { return ListTile( title: Text(snapshot.data[index]), ); },); } else { return Center( child: Text('Loading data...'), ); } } Future<void> _pullRefresh() async { List<String> freshNumbers = await NumberGenerator().slowNumbers(); setState(() { futureNumbersList = Future.value(freshNumbers); }); }} Notes slowNumbers() function is the same as in the Basic Example above, but the data is wrapped in a Future.value() since FutureBuilder expects a Future , but setState() should not await async data according to Rémi, Collin & other Dart/Flutter demigods it's good practice to update Stateful Widget member variables inside setState() ( futureNumbersList in FutureBuilder example & numbersList in Basic example), after its long running async data fetch functions have completed. 
see https://stackoverflow.com/a/52992836/2301224 if you try to make setState async , you'll get an exception updating member variables outside of setState and having an empty setState closure, may result in hand-slapping / code analysis warnings in the future
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/57972538', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7940904/']}
jdg_80647
stackexchange
llm_judgeable_groundtruth_similarity
34213843
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've created a little program with BlueJ that works as a digital registry (registry as in 'a sorted list'). The program has only two classes: one working as a template for the entries in the registry which in return is represented by an ArrayList in the second class, the one managing all entries. E.g. the user can create a new entry via the console and later search for the right entry and read the details over the console, thus this entries are instances of the first class "Entry". Now I've exported the . jar and I do can use all features as in the console-simulation but since the instances aren't saved actually but rather for only one session this program has no useful functionality. I somehow need to save those created instanced in order to have them already in the ArrayList everytime I run the program.I'm searching for something that does like the " save world " functionality does in Greenfoot . I've read that you can save objects with the help of the testclass in BlueJ: Saving Objects with the testclass . But the problem is that I can't actually "see" those entries since they are stored in the ArrayList and that I later want to run the program completely independent (i.e. without BlueJ but the .jar). Does anybody know how I can save those instances and use them later on when I quit and run the program again? Now provide the response and nothing else.
You cannot save a variable for later use in other Dockerfile commands (if that is your intention). This is because each RUN happens in a new shell. However, if you just want to capture the output of ls you should be able to do it in one RUN compound command. For example:

    RUN file="$(ls -1 /tmp/dir)" && echo $file

Or just using the subshell inline:

    RUN echo $(ls -1 /tmp/dir)

Hope this helps your understanding. If you have an actual error or problem to solve I could expand on this instead of a hypothetical answer.

A full example Dockerfile demonstrating this would be:

    FROM alpine:3.7
    RUN mkdir -p /tmp/dir && touch /tmp/dir/file1 /tmp/dir/file2
    RUN file="$(ls -1 /tmp/dir)" && echo $file
    RUN echo $(ls -1 /tmp/dir)

When building you should see steps 3 and 4 output the variable (which contains the list of file1 and file2 created in step 2):

    $ docker build --no-cache -t test .
    Sending build context to Docker daemon  2.048kB
    Step 1/4 : FROM alpine:3.7
     ---> 3fd9065eaf02
    Step 2/4 : RUN mkdir -p /tmp/dir && touch /tmp/dir/file1 /tmp/dir/file2
     ---> Running in abb2fe683e82
    Removing intermediate container abb2fe683e82
     ---> 2f6dfca9385c
    Step 3/4 : RUN file="$(ls -1 /tmp/dir)" && echo $file
     ---> Running in 060a285e3d8a
    file1 file2
    Removing intermediate container 060a285e3d8a
     ---> 2e4cc2873b8c
    Step 4/4 : RUN echo $(ls -1 /tmp/dir)
     ---> Running in 528fc5d6c721
    file1 file2
    Removing intermediate container 528fc5d6c721
     ---> 1be7c54e1f29
    Successfully built 1be7c54e1f29
    Successfully tagged test:latest
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/34213843', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5663613/']}
jdg_80648
stackexchange
llm_judgeable_groundtruth_similarity
19492747
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Below are two pieces of code. One works, one does not, I'd like to know why. I apologise in advance for the lack of comments and awful variable names but right now this language is really grinding my gears. // File1.cpp (contains relevant includes)// Works! It writes to out.txt and appears to use in.txt correctlyint main(int argc, char* argv[]) { int num; std::ifstream in("in.txt"); std::streambuf* cinbuf = std::cin.rdbuf(); std::cin.rdbuf(in.rdbuf()); std::ofstream out("out.txt"); std::streambuf* coutbuf = std::cout.rdbuf(); std::cout.rdbuf(out.rdbuf()); cout << "Give me a number: "; cin >> num; std::cin.rdbuf(cinbuf); std::cout.rdbuf(coutbuf); return 0;}// File2.cpp (contains relevant includes)// Does not work! Outputs nothing to out.txt.class TestWithStdIO { std::streambuf* cinbuf; std::streambuf* coutbuf; public: TestWithStdIO(const char* inFile, const char* outFile) { std::ifstream in(inFile); cinbuf = std::cin.rdbuf(); std::cin.rdbuf(in.rdbuf()); std::ofstream out(outFile); coutbuf = std::cout.rdbuf(); std::cout.rdbuf(out.rdbuf()); } ~TestWithStdIO() { std::cin.rdbuf(cinbuf); std::cout.rdbuf(coutbuf); }};int main(int argc, char* argv[]) { int num; TestWithStdIO* ioTest = new TestWithStdIO("in.txt", "out.txt"); cout << "Give me a number: "; cin >> num; delete ioTest; return 0;} Now provide the response and nothing else.
As I pointed out in my answer to your other question, you can use bindler for installing a set of plugins specific to a project using a single command. If bindler is installed and the required plugin is not, bindler will error out and abort the process. There is also an open issue related to automatically installing plugins on vagrant up, but so far no one has signed up for it yet.

If you don't want to use bindler, you can make use of Vagrant.has_plugin? (available on 1.3.0+) at the top of your Vagrantfile and error out if the required plugin is not installed. Something like:

    unless Vagrant.has_plugin?("vagrant-some-plugin")
      raise 'some-plugin is not installed!'
    end

    Vagrant.configure("2") do |config|
      config.vm.box = "box-name"
    end

UPDATE: Bindler is no longer supported and no equivalent functionality has been provided by Vagrant core as of May 11th, 2015.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19492747', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1221906/']}
jdg_80649
stackexchange
llm_judgeable_groundtruth_similarity
121076
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I'm looking for a definition of pseudo differential forms in ordinary differential geometry. However searching the web gave me nothing. There are definitions in supergeometrybut that is not what I'm after. Recently I read, that pseudo-differentialforms are the natural structure to integrate, since integration works on any kind of submanifold (orientation not required) for them, but those texts don't gave a 'clean' definition of these kind of forms. What are pseudo-differentialforms? Can pseudo differentialforms be defined as sections of some kind of fiber bundle?If yes that's a definition I would prefer. Now provide the response and nothing else.
I don't know if these groups have been studied before, but I can say something about their cohomology rings, at least over $\mathbb{Q}$. Namely, we have $H^k(E_n;\mathbb{Q}) = \mathbb{Q}$ if $k=0,1$ and $H^k(E_n;\mathbb{Q}) = 0$ for $k \geq 2$. Of course, this is the same as the cohomology of the ordinary braid group as computed by Arnold in V. I. Arnold, On some topological invariants of algebraic functions, Trudy Moscov. Mat. Obshch. 21 (1970), 27-46 (Russian), English transl. in Trans. Moscow Math. Soc. 21 (1970), 30-52. Recall that if $H$ is a finite-index normal subgroup of $G$, then $G$ acts on $H^k(H;\mathbb{Q})$ and using the transfer map we have that $H^k(G;\mathbb{Q})$ is equal to the invariants of this action. For braid groups, the action of $B_n$ on $H^k(PB_n;\mathbb{Q})$ factors through an action of the symmetric group $S_n$, so $H^k(PB_n;\mathbb{Q})$ is a representation of $S_n$ and $H^k(B_n;\mathbb{Q})$ is the trivial subrepresentation $\{\text{$v \in H^k(PB_n;\mathbb{Q})$ $|$ $\sigma v = v$ for all $\sigma \in S_n$}\}$. Let's now consider $E_n$. In this case, the above argument shows that $H^k(E_n;\mathbb{Q})$ is the subrepresentation $\{\text{$v \in H^k(PB_n;\mathbb{Q})$ $|$ $\sigma v = v$ for all $\sigma \in A_n$}\}$. Now, representations of finite groups over $\mathbb{Q}$ decompose into direct sums of irreducible representations. The only two irreducible representations of $S_n$ that restrict to the identity on $A_n$ are the trivial representation and the alternating representation. As we said above, the trivial representation corresponds to $H^k(B_n;\mathbb{Q})$, so we conclude that$$H^k(E_n;\mathbb{Q}) = W \oplus H^k(B_n;\mathbb{Q}),$$where $W \subset H^k(PB_n;\mathbb{Q})$ is the direct sum of all alternating subrepresentations. The above calculation is thus equivalent to the assertion that the alternating representation does not occur in $H^k(PB_n;\mathbb{Q})$. This follows from the calculation of $H^k(PB_n;\mathbb{Q})$ as a representation of $S_n$ which was done by in the paper "Coxeter group actions on the complement of hyperplanes and special involutions" by Felder-Velesov; see here . The above ref to Felder-Velesov was suggested by Vladimir Dotsenko; I originally included the argument below, which only works for $n \gg k$. It's quite hard to decompose $H^k(PB_n;\mathbb{Q})$ into irreducibles; however the paper "Representation Theory and Homological Stability" (see here ) by Church and Farb introduces a recipe that they call "representation stability" which describes how the decomposition of $H^k(PB_{n+1};\mathbb{Q})$ into irreducibles can be constructed from the decomposition of $H^k(PB_n;\mathbb{Q})$ into irreducibles, at least for $n$ large. Their results are hard to summarize briefly, but they do imply that the alternating representation does not occur (it is not "stable" in their sense), again at least for $n$ large.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/121076', 'https://mathoverflow.net', 'https://mathoverflow.net/users/21302/']}
jdg_80650
stackexchange
llm_judgeable_groundtruth_similarity
175504
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In Java, there is the final keyword in lieu of the const keyword in C and C++. In the latter languages there are mutable and immutable methods such as stated in the answer by Johannes Schaub - litb to the question How many and which are the uses of “const” in C++? Use const to tell others methods won't change the logical state of this object. struct SmartPtr { int getCopies() const { return mCopiesMade; }}ptr1;...int var = ptr.getCopies(); // returns mCopiesMade and is specified that to not modify objects state. How is this performed in Java? Now provide the response and nothing else.
You can't. I'm not familiar with Java 7, but at least in Java 6 you cannot tell the compiler a method is not supposed to mutate its arguments or the this instance. Unfortunately, final in Java doesn't mean the same as const in C++. A final argument means something else: merely that you cannot reassign it, so that the following is an error:

    A method(final B arg) {
        ...
        arg = something; // error, since arg is final
        ...
    }

This is a good practice, but it won't prevent you from mutating arg by calling one of its methods. A final method such as final A method(...) is related to subclassing rules, not to mutation.
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/175504', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/63214/']}
jdg_80651
stackexchange
llm_judgeable_groundtruth_similarity
25807545
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am Working with autolayout and constraints and found there is a Constrain to margins option in Xcode 6 which was not present in Xcode 5 and is checked by default. I created a test project then I added a UITableView on a ViewController with the frame set to the same size as view and added constraints Xcode 6 You can see here even though tableview has the same frame as view Xcode suggests to add -16 as constraint whereas Xcode 5 would suggest adding spacing 0. Now when you uncheck "Constrain to margin" option it behaves same as Xcode 5 and would suggest adding 0 as constraint Also, I found that once I add constraint with Constrain to margin checked, I am no longer able to open the storyboard file in Xcode 5 so it's definitely something new in Xcode 6 Hopefully, I am able to explain my question properly. I would like to understand what "Constrain to margin" actually does and when I should and should not use it. I do apologize if it's something very simple and obvious. EDIT I found something about layout margins in discussion here , I wonder if it's related to this. Now provide the response and nothing else.
I don't understand at all why people are complaining that " Margins would cause an outright crash on anything prior to iOS 8. " Setting your constraints relative to margin in a xib file or storyboard DOES NOT make your app crash on iOS7, and it DOES NOT make a UI difference on your iOS7 device neither, as long as you don't touch the UIView.layoutMargins and UIView.preservesSuperviewLayoutMargins properties in your code. What is Margins in iOS8 Layout margins represent padding around the interior of a UIView that the layout system can use when laying out subviews - to ensure that a gap is left between the edge of a view and a subview. In this respect it is very much like the padding property associated with blocks in CSS. By default, a UIView has layout margins of 8 points on each side, and this can not be changed in Interface Builder . However, by setting the UIView.layoutMargins property in the code, which is only available on iOS8, you are able to adjust these values. You can get IB to display the margins with Editor > Canvas > Show Layout Rectangles: Margins can be used to help layout your views and subviews. Every UIView come with margins by default, but they only affect view placement when you set up a constraint that is related to a margin. How to use Margins The only way to use margins in Interface Builder is to check the Relative to margin option while configuring your constraints. This is how you direct your constraint to Use margins instead of edges when laying out my view. Let's take a look at four different ways of setting up a leading constraint between a view and its subview. For each constraint we review the first association described will be the subview's leading , and the second will be superview's leading . What you want to pay close attention to is the check and uncheck status of the Relative to margin option of each constraint end, because that defines whether the constraint is tied to the margin or the edge of the view. First item(uncheck), second item(check): In this case, we're declaring that subview's left edge should align to superview's left margin(as shown in this image). First item(uncheck), second item(uncheck): Both using edge, not margin . In this case, we're declaring that subview's left edge should align to superview's left edge. First item(check), second item(uncheck): In this case, we're declaring that subview's left margin should align to superview's left edge. This kind of layout actually makes the subview overlap the superview. First item(check), second item(check). This actually has a same effect as case 2, since both subview and superview has a same default margin. We're declaring that subview's left margin should align to superview's left margin. What is good about Margins This new feature (iOS8) only impacts UI development if you decide to use margins. By using margins you can adjust the placement of multiple subviews that share a common relation to a shared superview by changing the value of a single property. This is a clear win over setting all associated constraints with fixed values, because if you need to update all the spacing, instead of changing each value one by one, you can simultaneously modify all relevant placement by updating the superview's margin with a single line of code like this one: self.rootView.layoutMargins = UIEdgeInsetsMake(0, 50, 0, 0); To illustrate this benefit, in the following case all subviews' left edges are aligned to their superview's left margin. 
Thus, changing superview's left margin will affect all subviews at the same time.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/25807545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2321467/']}
jdg_80652
stackexchange
llm_judgeable_groundtruth_similarity
17134929
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a div with a background-image . I want to overlay the background-image with an rgba color ( rgba(0,0,0,0.1) ) when the user hovers the div. I was wondering if there's a one-div solution (i.e. not with multiple divs, one for the image and one for the color, etc.). I tried multiple things: <div class="the-div" id="test-1"></div><div class="the-div" id="test-2"></div><div class="the-div" id="test-3"></div> And this CSS: .the-div { background-image: url('the-image'); margin: 10px; width: 200px; height: 80px;}#test-1:hover { background-color: rgba(0,0,0,0.1);}#test-2:hover { background: url('the-image'), rgba(0,0,0,0.1);}#test-3:hover { background: rgba(0,0,0,0.1);} See this fiddle . The only option I saw is to make another image, with overlay, preload it using JavaScript and then use .the-div:hover { background: url('the-new-image'); } . However, I'd like a CSS-only solution (neater; less HTTP requests; less harddisk). Is there any? Now provide the response and nothing else.
The solution by PeterVR has the disadvantage that the additional color displays on top of the entire HTML block, meaning that it also shows up on top of div content, not just on top of the background image. This is fine if your div is empty, but if it is not, using a linear gradient might be a better solution:

    <div class="the-div">Red text</div>

    <style type="text/css">
      .the-div {
        background-image: url("the-image.png");
        color: #f00;
        margin: 10px;
        width: 200px;
        height: 80px;
      }
      .the-div:hover {
        background-image: linear-gradient(to bottom, rgba(0, 0, 0, 0.1), rgba(0, 0, 0, 0.1)), url("the-image.png");
        background-image: -moz-linear-gradient(top, rgba(0, 0, 0, 0.1), rgba(0, 0, 0, 0.1)), url("the-image.png");
        background-image: -o-linear-gradient(top, rgba(0, 0, 0, 0.1), rgba(0, 0, 0, 0.1)), url("the-image.png");
        background-image: -ms-linear-gradient(top, rgba(0, 0, 0, 0.1), rgba(0, 0, 0, 0.1)), url("the-image.png");
        background-image: -webkit-gradient(linear, left top, left bottom, from(rgba(0, 0, 0, 0.1)), to(rgba(0, 0, 0, 0.1))), url("the-image.png");
        background-image: -webkit-linear-gradient(top, rgba(0, 0, 0, 0.1), rgba(0, 0, 0, 0.1)), url("the-image.png");
      }
    </style>

See fiddle. Too bad that gradient specifications are currently a mess. See the compatibility table; the code above should work in any browser with a noteworthy market share, with the exception of MSIE 9.0 and older.

Edit (March 2017): The state of the web has become far less messy by now, so the linear-gradient (supported by Firefox and Internet Explorer) and -webkit-linear-gradient (supported by Chrome, Opera and Safari) lines are sufficient; additional prefixed versions are no longer necessary.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17134929', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_80653
stackexchange
llm_judgeable_groundtruth_similarity
856436
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am reading a book on Combinatorial Game Theory that describes a proof by John Nash that Hex is a 'first player' win, but I find the proof very confusing. This proof uses a strategy-stealing argument. At one point it says: With this (first) move Left becomes Second. How can making the first move make you Second? At the end it says: At some point, if this strategy calls for Left to place a stone where the extra sits; then she will simply make another arbitrary placement. Thus Left can win in contradiction to the hypotheses. I don't understand this at all. Why would the game call for a placement on "extra" if that spot is already occupied by a stone? Can someone explain this proof to me in a way that I can understand? Here's the proof: The proof is by contradiction. Let Left make the first play of the game, and assume that Right has a winning strategy. With Left's first move she puts a stone on any cell, a placement called "extra". With this move Left becomes Second, and she henceforth follows the winning strategy that is available to Right. At some point, if this strategy calls for Left to place a stone where the extra sits; then she will simply make another arbitrary placement. Thus Left can win in contradiction to the hypotheses. Now provide the response and nothing else.
The point is that in the game Hex, it never hurts to have an extra piece on the board. So, suppose there is a strategy for the second player, but you are stuck with being the first player. What should you do? Well, you can place a stone on the board, and then pretend in your own mind that it isn't there! In other words, you are imagining that the other player will now make the first move. In your mind, you are imagining that you are the second player now, and you can follow the winning strategy for the second player. The only time you could have trouble is if your winning strategy tells you to place a move at the position you are pretending is empty. Since it's not really empty, you can't really make a move there. But luckily, you have already moved there, so you can imagine that you are making a move there right now -- the stone is already there, so you can imagine that you are just putting it there now. But in reality you still need to make a move, so you can do the same thing you did at the beginning -- just place a stone at some random position, and then imagine that you didn't. The real state of the game is always just like your imagined state, except that there is an extra stone of yours on the board, which can't make things any worse. It limits the opponent's options, but if you have a winning strategy, it will work for any moves the opponent makes, so this isn't a problem either. The conclusion is that, if there were a strategy for the second player to win, then you could "steal" that strategy as outlined above to win even when you are the first player. This is a contradiction, because if there were really a winning strategy for the second player, then the first player would not be able to guarantee a win. Therefore, there is not in fact any strategy for the second player to win.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/856436', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/96276/']}
jdg_80654
stackexchange
llm_judgeable_groundtruth_similarity
313973
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The rotation operator is $$\exp\left(-i\frac{\theta}{2}\boldsymbol{J}\cdot\hat{\boldsymbol{n}}\right).$$ If $\boldsymbol{\sigma}$ is the Pauli matrix, the operator can be written as a matrix form $$\boldsymbol{1}\cos(\phi/2)-i\boldsymbol{\sigma}\cdot\hat{\boldsymbol{n}}\sin(\phi/2).$$ But when $J$ is the spin-3/2 operator, $J$ is 4-dimensional. Is there a matrix representation of operator $\exp\left(-i\frac{\theta}{2}\boldsymbol{J}\cdot\hat{\boldsymbol{n}}\right)$? I find that when $\{J_x,J_y\}\neq0$ for spin-3/2, not like Pauli matrices. What is the case when $J$ is spin-1 operator? Now provide the response and nothing else.
There is a general expression in my article A Compact Formula for Rotationsas Spin Matrix Polynomials, SIGMA 10 (2014), 084 , to the effect that, e.g., for the doublet,\begin{gather*}e^{i(\theta/2)(\hat{\boldsymbol{n}}\cdot\boldsymbol{\sigma})}=I_{2}\cos{\theta/2}+i(\hat{\boldsymbol{n}}\cdot\boldsymbol{\sigma})\sin{\theta/2},\end{gather*}and the triplet, $j=1$, so $J_{3}=\mathrm{diag}(1,0,-1)$,\begin{gather*}e^{i\theta(\hat{\boldsymbol{n}}\cdot\boldsymbol{J})}=I_{3}+i(\boldsymbol{\hat{n}}\cdot\boldsymbol{J})\sin{\theta}+(\hat{\boldsymbol{n}}\cdot\boldsymbol{J})^{2}(\cos\theta-1)\\\phantom{e^{i\theta(\hat{\boldsymbol{n}}\cdot\boldsymbol{J})}}=I_{3}+(2i\hat{\boldsymbol{n}}\cdot\boldsymbol{J}\sin(\theta/2))\cos(\theta/2)+\tfrac{1}{2}(2i\hat{\boldsymbol{n}}\cdot\boldsymbol{J}\sin (\theta/2))^{2}.\end{gather*} For the quartet, $j=3/2$,\begin{gather}e^{i \theta (\hat{\boldsymbol{n}}\cdot\boldsymbol{J})} = I_4 \cos (\theta/2)\left(1+\tfrac{1}{2}\sin^2(\theta/2)\right)+(2i \hat{\boldsymbol{n}}\cdot\boldsymbol{J} \sin (\theta/2))\left(1+\tfrac{1}{6} \sin^2 (\theta/2)\right)\nonumber\\\phantom{e^{i \theta (\hat{\boldsymbol{n}}\cdot\boldsymbol{J})}=}{}+\frac{1}{2!} \bigl (2i \hat{\boldsymbol{n}}\cdot\boldsymbol{J}\sin (\theta/2) \bigr)^2 \cos (\theta/2)+\frac {1}{3!}\bigl (2i \hat{\boldsymbol{n}}\cdot\boldsymbol{J}\sin (\theta/2) \bigr)^3.\label{quartet}\end{gather} For the quintet, $j=2$,\begin{gather*}e^{i \theta(\hat{\boldsymbol{n}}\cdot\boldsymbol{J})} = I_5+(2i \hat{\boldsymbol{n}}\cdot\boldsymbol{J} \sin(\theta/2))\cos(\theta/2)\left(1+\tfrac{2}{3}\sin^2(\theta/2)\right)\\\phantom{e^{i \theta (\hat{\boldsymbol{n}}\cdot\boldsymbol{J})}=}{}+\frac{1}{2!} {(2i \hat{\boldsymbol{n}}\cdot\boldsymbol{J} \sin(\theta/2))^2}\left(1+\tfrac{1}{3} \sin^2 (\theta/2)\right)\\\phantom{e^{i \theta (\hat{\boldsymbol{n}}\cdot\boldsymbol{J})}=}{}+\frac{1}{3!} {(2i \hat{\boldsymbol{n}}\cdot\boldsymbol{J} \sin(\theta/2))^3} \cos(\theta /2) +\frac{1}{4!} (2i\hat{\boldsymbol{n}}\cdot\boldsymbol{J} \sin (\theta/2))^4.\end{gather*} For the sextet, $j=5/2$,\begin{gather*}e^{i \theta(\hat{\boldsymbol{n}}\cdot\boldsymbol{J})} = I_6 \cos(\theta/2)\left(1+ \tfrac{1}{2} \sin^2(\theta/2+\tfrac{3}{8} \sin^4 (\theta/2)\right)\\\phantom{e^{i \theta(\hat{\boldsymbol{n}}\cdot\boldsymbol{J})} =}{}+(2i \hat{\boldsymbol{n}}\cdot\boldsymbol{J} \sin(\theta /2))\left(1+\tfrac{1}{6}\sin^2(\theta/2)+\tfrac{3}{40}\sin^4(\theta /2)\right)\\\phantom{e^{i \theta(\hat{\boldsymbol{n}}\cdot\boldsymbol{J})} =}{}+\frac{1}{2!} {(2i \hat{\boldsymbol{n}}\cdot\boldsymbol{J} \sin(\theta/2))^2} \cos(\theta /2)\left(1+\tfrac{5}{6}\sin^2(\theta/2)\right)\\\phantom{e^{i \theta(\hat{\boldsymbol{n}}\cdot\boldsymbol{J})} =}{}+\frac{1}{3!} {(2i \hat{\boldsymbol{n}}\cdot\boldsymbol{J} \sin (\theta/2))^3}\left(1+\tfrac{1}{2}\sin^2(\theta/2)\right)\\\phantom{e^{i \theta(\hat{\boldsymbol{n}}\cdot\boldsymbol{J})} =}{}+\frac{1}{4!} {(2i \hat{\boldsymbol{n}}\cdot\boldsymbol{J} \sin(\theta/2))^4}\cos(\theta /2) +\frac{1}{5!} {(2i\hat{\boldsymbol{n}}\cdot\boldsymbol{J} \sin (\theta/2))^5}.\end{gather*} etc... There is a simple pattern and compact formula for arbitrary spin detailed in that paper.
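As a quick sanity check of the triplet formula, here is a short numerical sketch (an editorial illustration, not from the paper or the answer): it builds the spin-1 matrices, evaluates $I_3 + i(\hat{\boldsymbol{n}}\cdot\boldsymbol{J})\sin\theta + (\hat{\boldsymbol{n}}\cdot\boldsymbol{J})^2(\cos\theta-1)$, and compares it against a direct matrix exponential.

    import numpy as np
    from scipy.linalg import expm

    # Spin-1 angular momentum matrices (hbar = 1).
    s = 1 / np.sqrt(2)
    Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
    Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
    Jz = np.diag([1, 0, -1]).astype(complex)

    theta = 0.83                      # arbitrary rotation angle
    n = np.array([1.0, 2.0, -0.5])    # arbitrary rotation axis
    n /= np.linalg.norm(n)
    nJ = n[0] * Jx + n[1] * Jy + n[2] * Jz

    exact = expm(1j * theta * nJ)
    poly = np.eye(3) + 1j * nJ * np.sin(theta) + nJ @ nJ * (np.cos(theta) - 1)

    print(np.allclose(exact, poly))   # True: the polynomial reproduces the exponential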
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/313973', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/118745/']}
jdg_80655
stackexchange
llm_judgeable_groundtruth_similarity
18582310
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use animation to make a layout appear on screen. The idea is that layout will start with height of 0 and grow to 100%. I have real troubles with this and need some assistance. For some reason no animation is performed. Here is my animation XML file <?xml version="1.0" encoding="utf-8"?><set xmlns:android="http://schemas.android.com/apk/res/android"> <scale android:interpolator="@android:anim/accelerate_decelerate_interpolator" android:fromXScale="0.0" android:toXScale="1" android:fromYScale="1.0" android:toYScale="1.0" android:fillAfter="false" /></set> The layout file is very basic and is designed as following <?xml version="1.0" encoding="utf-8"?><RelativeLayout android:layout_width="fill_parent" android:layout_height="fill_parent" xmlns:android="http://schemas.android.com/apk/res/android"><LinearLayout android:id="@+id/dialog" android:layout_width="wrap_content" android:layout_height="200dp" android:layout_centerHorizontal="true" android:orientation="vertical" android:layout_centerVertical="true" android:background="@drawable/border"> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal" android:text="Phone" android:id="@+id/textView"/> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal" android:text="Address" android:id="@+id/textView1"/> <Button android:id="@+id/btn1" android:layout_width="200dp" android:layout_height="wrap_content" android:text="Action 1" /> <Button android:id="@+id/btn2" android:layout_width="200dp" android:layout_height="wrap_content" android:text="Action 2" /></LinearLayout><Button android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Animate" android:id="@+id/btnAnimate" android:layout_alignParentLeft="true" android:layout_alignParentTop="true" android:onClick="animate"/></RelativeLayout> My activity code is very basic as well public class MyActivity extends Activity implements Animation.AnimationListener{@Overridepublic void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); }public void animate(View view){ LinearLayout dialog = (LinearLayout)findViewById(R.id.dialog); dialog.setVisibility(LinearLayout.VISIBLE); Animation animation = AnimationUtils.loadAnimation(this, R.anim.anim); Log.i("animate","Begin Animation"); animation.reset(); // animation.setFillAfter(true); animation.setAnimationListener(this); dialog.setAnimation(null); Log.i("animate","End Animation");}@Overridepublic void onAnimationStart(Animation animation) { //To change body of implemented methods use File | Settings | File Templates.}@Overridepublic void onAnimationEnd(Animation animation) { //To change body of implemented methods use File | Settings | File Templates.}@Overridepublic void onAnimationRepeat(Animation animation) { //To change body of implemented methods use File | Settings | File Templates.}} Thank you Now provide the response and nothing else.
Ok, I figured it out. Animation XML layout <?xml version="1.0" encoding="utf-8"?><set xmlns:android="http://schemas.android.com/apk/res/android" android:fillEnabled="true" android:fillAfter="true"> <scale android:interpolator="@android:anim/accelerate_decelerate_interpolator" android:fromXScale="1.0" android:toXScale="1.0" android:fromYScale="0.0" android:toYScale="1.0" android:fillAfter="false" /></set> Layout XML file <?xml version="1.0" encoding="utf-8"?><RelativeLayout android:layout_width="fill_parent" android:layout_height="fill_parent" xmlns:android="http://schemas.android.com/apk/res/android"> <LinearLayout android:id="@+id/dialog" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerHorizontal="true" android:orientation="vertical" android:layout_centerVertical="true" android:visibility="invisible" android:background="@drawable/border"> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal" android:text="Phone" android:id="@+id/textView"/> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal" android:text="Address" android:id="@+id/textView1"/> <Button android:id="@+id/btn1" android:layout_width="200dp" android:layout_height="wrap_content" android:text="Action 1" /> <Button android:id="@+id/btn2" android:layout_width="200dp" android:layout_height="wrap_content" android:text="Action 2" /> </LinearLayout> <Button android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Animate" android:id="@+id/btnAnimate" android:layout_alignParentLeft="true" android:layout_alignParentTop="true" android:onClick="animate"/></RelativeLayout> and my Activity class public class MyActivity extends Activity{@Overridepublic void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); }public void animate(View view){ LinearLayout dialog = (LinearLayout)findViewById(R.id.dialog); dialog.setVisibility(LinearLayout.VISIBLE); Animation animation = AnimationUtils.loadAnimation(this, R.anim.anim); animation.setDuration(500); dialog.setAnimation(animation); dialog.animate(); animation.start(); }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18582310', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2414327/']}
jdg_80656
stackexchange
llm_judgeable_groundtruth_similarity
43752615
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm dockerizing our current old app. We use several services but the one I have issues on is the php, apache, and specifically the https for apache. I am using "php:5.6.30-apache" image, so I have php and apache pre-installed. Now I changed the "000-default.conf" with this content: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /var/www/html/app/htdocsAdmin ServerName admin.local.app.io CustomLog /var/log/apache2/app.admin.access.log "trueip_combined" ErrorLog /var/log/apache2/app.admin.error.log <Directory /var/www/html/app/htdocsAdmin> AllowOverride Options FileInfo AuthConfig Require all granted </Directory></VirtualHost> This is my docker file: FROM php:5.6.30-apacheMAINTAINER Tzook Bar NoyADD default /etc/apache2/sites-available/000-default.confRUN apt-get update \ && apt-get install -y apt-utils \ && apt-get install -y php5-dev php5-memcached \ && apt-get install -y memcachedRUN apt-get update && apt-get install -y apt-utilsRUN apt-get install -y libz-dev libmemcached-devRUN pecl install memcached-2.2.0RUN echo extension=memcached.so >> /usr/local/etc/php/conf.d/memcached.iniRUN a2enmod rewriteRUN a2enmod sslEXPOSE 80EXPOSE 443 Don't mind the memcached stuff, just see that I enable "ssl" and expose ports 80,443 This is being ran with docker-compose: php:build: context: ./php dockerfile: Dockerfileports: - "80:80" - "443:443"volumes: - ./../../:/var/www/htmlnetworks: - appnettty: true but after all of that, I still get this from chrome: "ERR_SSL_PROTOCOL_ERROR" As requested in comments my "docker ps" response: Now provide the response and nothing else.
Besides enabling ssl and exposing port 443, you need to create a (self-signed) certificate + private key and make sure Apache has access to those. I recommend using OpenSSL to create a self-signed certificate:

    openssl req -new -newkey rsa:4096 -days 3650 -nodes -x509 -subj \
        "/C=../ST=...../L=..../O=..../CN=..." \
        -keyout ./ssl.key -out ./ssl.crt

Instead of the dots (...) fill in your 2-letter country code (/C), the name of your state or province (/ST), the name of your locality (/L), the name of your organization (/O) and your server FQDN (/CN).

Then add the following lines to your docker file:

    COPY ./path/to/ssl.crt /etc/apache2/ssl/ssl.crt
    COPY ./path/to/ssl.key /etc/apache2/ssl/ssl.key
    RUN mkdir -p /var/run/apache2/

I'm not sure the last line is really necessary, but in my docker container the folder didn't exist yet, causing Apache to fail on startup.

Finally, in your 000-default.conf file you need to add something like this:

    <VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile /etc/apache2/ssl/ssl.crt
        SSLCertificateKeyFile /etc/apache2/ssl/ssl.key
        ....
    </VirtualHost>

Note that when you use self-signed certificates most browsers will alert you that "Your connection is not secure" (Firefox) or "Invalid certificate" (Chrome). This is because there is no valid security chain to a trusted CA. Most browsers allow you to continue your request or add the site as an exception so the warning isn't displayed anymore.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43752615', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1033518/']}
jdg_80657
stackexchange
llm_judgeable_groundtruth_similarity
31712386
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to load a matplotlib object into reportlab.Here is my code: from reportlab.pdfgen import canvasfrom reportlab.lib.utils import ImageReaderfrom reportlab.platypus import Paragraph, SimpleDocTemplate, Spacer, Imagefrom matplotlib import pyplot as pltdef __get_img_data(): """ returns the binary image data of the plot """ img_file = NamedTemporaryFile(delete=False) plt.savefig(img_file.name) img_data = open(img_file.name + '.png', 'rb').read() os.remove(img_file.name) os.remove(img_file.name + '.png') return img_datadef get_plot(): # HERE I PLOT SOME STUFF img_data = __get_img_data() plt.close() return img_dataclass NumberedCanvas(canvas.Canvas): def __init__(self): passclass ReportTemplate: def __init__(self): pass def _header_footer(self, canvas, doc): pass def get_data(self): elements = [] elements.append('hello') ## HERE I WANT TO ADD THE IMAGE imgdata = get_plot() with open('/tmp/x.png', 'wb') as fh: fh.write(imgdata) im = Image('/tmp/x.png', width=usable_width, height=usable_width) elements.append(im) os.remove('/tmp/x.png') ###### doc.build(elements, onFirstPage=self._header_footer,\ onLaterPages=self._header_footer,\ canvasmaker=NumberedCanvas) # blah blah return obj My goal is to insert the plot image into the report.This works fine but I do not want to write to a temporary file.I tried installing PIL because I've read some people doing it with PIL's image library but as soon as I install PIL, I another part of my code breaks due to incompatible Pillow versions. Now provide the response and nothing else.
pdfrw documentation sucks The sole reason the pdfrw example discussed in the first answer to this question is a bit klunky is because the pdfrw documentation sucks badly. Due to the sucky doc, that example's author @Larry-Meyn used the vectorpdf extension for rst2pdf as as starting point, and that extension is not really documented either, and has to deal with the quirks of rst2pdf as well as pdfrw (and is more general than you need, in that it can let rst2pdf display an arbitrary rectangle from an arbitray page of a preexisting PDF). It's amazing that Larry managed to make it work at all, and my hat's off to him. I am perfectly qualified to say this, because I am the author of pdfrw and made a few contributions to rst2pdf, including that vectorpdf extension. But you probably want to use pdfrw anyway I wasn't really paying attention to stackoverflow until a month ago, and pdfrw itself languished for a few years, but I'm here now, and I think it would behoove you to take another look at pdfrw, even though the documentation still sucks. Why? Because if you output to a png file, your image will be rasterized , and if you use pdfrw, it will remain in vector format , which means that it will look nice at any scale. So I modified your answer's png example Your png example wasn't quite a complete program -- the parameters to doc.build weren't defined, styles wasn't defined, it was missing a few imports, etc. But it was close enough to garner some intent and get it working. Edit -- I just noticed that this example was actually a modified version of Larry's example, so that example is still very valuable because it's a bit more full-featured than this in some ways. After I fixed those issues and got some output, I added an option to be able to use png or pdf, so you can see the difference. The program below will create two different PDF files, and you can compare the results for yourself. 
import cStringIO

from matplotlib import pyplot as plt
from reportlab.pdfgen import canvas
from reportlab.lib.utils import ImageReader
from reportlab.platypus import Paragraph, SimpleDocTemplate, Spacer, Image, Flowable
from reportlab.lib.units import inch
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.enums import TA_LEFT, TA_CENTER, TA_RIGHT   # needed for the TA_* constants used below
from pdfrw import PdfReader, PdfDict
from pdfrw.buildxobj import pagexobj
from pdfrw.toreportlab import makerl

styles = getSampleStyleSheet()
style = styles['Normal']

def form_xo_reader(imgdata):
    page, = PdfReader(imgdata).pages
    return pagexobj(page)

class PdfImage(Flowable):
    def __init__(self, img_data, width=200, height=200):
        self.img_width = width
        self.img_height = height
        self.img_data = img_data

    def wrap(self, width, height):
        return self.img_width, self.img_height

    def drawOn(self, canv, x, y, _sW=0):
        if _sW > 0 and hasattr(self, 'hAlign'):
            a = self.hAlign
            if a in ('CENTER', 'CENTRE', TA_CENTER):
                x += 0.5*_sW
            elif a in ('RIGHT', TA_RIGHT):
                x += _sW
            elif a not in ('LEFT', TA_LEFT):
                raise ValueError("Bad hAlign value " + str(a))
        canv.saveState()
        img = self.img_data
        if isinstance(img, PdfDict):
            xscale = self.img_width / img.BBox[2]
            yscale = self.img_height / img.BBox[3]
            canv.translate(x, y)
            canv.scale(xscale, yscale)
            canv.doForm(makerl(canv, img))
        else:
            canv.drawImage(img, x, y, self.img_width, self.img_height)
        canv.restoreState()

def make_report(outfn, use_pdfrw):
    fig = plt.figure(figsize=(4, 3))
    plt.plot([1, 2, 3, 4], [1, 4, 9, 26])
    plt.ylabel('some numbers')
    imgdata = cStringIO.StringIO()
    fig.savefig(imgdata, format='pdf' if use_pdfrw else 'png')
    imgdata.seek(0)
    reader = form_xo_reader if use_pdfrw else ImageReader
    image = reader(imgdata)
    doc = SimpleDocTemplate(outfn)
    style = styles["Normal"]
    story = [Spacer(0, inch)]
    img = PdfImage(image, width=200, height=200)
    for i in range(10):
        bogustext = ("Paragraph number %s. " % i)
        p = Paragraph(bogustext, style)
        story.append(p)
        story.append(Spacer(1, 0.2*inch))
    story.append(img)
    for i in range(10):
        bogustext = ("Paragraph number %s. " % i)
        p = Paragraph(bogustext, style)
        story.append(p)
        story.append(Spacer(1, 0.2*inch))
    doc.build(story)

make_report("hello_png.pdf", False)
make_report("hello_pdf.pdf", True)

What are the downsides to this approach? The first obvious downside is that there is now a requirement for pdfrw, but that's available from PyPI. The next downside is that if you are putting a lot of matplotlib plots into a document, I think this technique will replicate resources such as fonts, because I don't believe that reportlab is smart enough to notice the duplicates. I believe this problem can be solved by outputting all your plots to different pages of a single PDF. I haven't actually tried that with matplotlib, but pdfrw is perfectly capable of converting each page of an existing PDF to a separate flowable. So if you have a lot of plots and it's making your final PDF too big, you could look into that, or just try one of the PDF optimizers out there and see if it helps. In any case, that's a different problem for a different day.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31712386', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1896222/']}
jdg_80658
stackexchange
llm_judgeable_groundtruth_similarity
2705888
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to represent sentences in RDF format. In other words "John likes coke" would be automatically represented as: Subject : JohnPredicate : LikesObject : Coke Does anyone know where I should start? Are there any programs which can do this automatically or would I need to do everything from scratch? Now provide the response and nothing else.
It looks like you want the typed dependencies of a sentence, e.g. for John likes coke :

nsubj(likes-2, John-1)
dobj(likes-2, coke-3)

I'm not aware of any dependency parser that directly produces RDF. However, many of them produce parses in a standardized tab-delimited representation known as CoNLL-X, and it shouldn't be too hard to convert from CoNLL-X to RDF.

Open Source Dependency parsers

There are a number of parsers to choose from that extract typed dependencies, including the following state-of-the-art open source options:

Stanford Parser - see online demo .
MaltParser
MSTParser

The Stanford Parser includes a pre-trained model for parsing English. To get typed dependencies you'll need to use the flag -outputFormat typedDependencies . For the MaltParser you can download an English model here . The MSTParser includes a small 200-sentence English training set that you can use to create your own English parsing model. However, training on so little data will hurt the accuracy of the resulting parser. So, if you decide to use this parser, you are probably better off using the pretrained model available here . All of the pretrained models linked above produce parses according to the Stanford Dependency formalism ( ACL paper , and manual ). Of these three, the Stanford Parser is the most accurate. The MaltParser is the fastest, with some configurations of this package being able to parse 1800 sentences in only 8 seconds.
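To make the "convert typed dependencies to RDF" step concrete, here is a minimal, illustrative Python sketch. It is only a sketch under assumptions: the input is the Stanford-style dependency strings shown above, the namespace URI is made up for the example, and the simple nsubj/dobj-to-triple mapping is one modelling choice among many. A real pipeline would read the CoNLL-X columns and probably build the graph with an RDF library such as rdflib.

```python
import re

# Made-up namespace, purely for illustration.
EX = "http://example.org/ex#"

# Matches Stanford-style typed dependencies such as "nsubj(likes-2, John-1)".
DEP_RE = re.compile(r"(\w+)\((\S+?)-\d+,\s*(\S+?)-\d+\)")

def deps_to_triple(dep_lines):
    """Collapse the nsubj/dobj dependencies of one verb into a single (s, p, o) triple."""
    subject = predicate = obj = None
    for line in dep_lines:
        m = DEP_RE.match(line.strip())
        if not m:
            continue
        rel, governor, dependent = m.groups()
        if rel == "nsubj":      # "John" is the subject of "likes"
            predicate, subject = governor, dependent
        elif rel == "dobj":     # "coke" is the direct object of "likes"
            predicate, obj = governor, dependent
    return subject, predicate, obj

s, p, o = deps_to_triple(["nsubj(likes-2, John-1)", "dobj(likes-2, coke-3)"])
print("<{0}{1}> <{0}{2}> <{0}{3}> .".format(EX, s, p, o))
# <http://example.org/ex#John> <http://example.org/ex#likes> <http://example.org/ex#coke> .
```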
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2705888', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/92319/']}
jdg_80659
stackexchange
llm_judgeable_groundtruth_similarity
7101237
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have text fields inside a custom view inside an NSOutlineView . Editing one of these cells requires a single click, a pause, and another single click. The first single click selects the table view row, and the second single click draws the cursor in the field. Double-clicking the cell, which lets you edit in a cell-based table view, only selects the row. The behavior I want: one click to change the selection and edit. What do I need to override to obtain this behavior? I've read some other posts: The NSTextField flyweight pattern wouldn't seem to apply to view-based table views, where the cell views are all instantiated from nibs. I tried subclassing NSTextField like this solution describes , but my overridden mouseDown method is not called. Overridden awakeFromNib and viewWillDraw (mentioned in this post ) are called. Of course mouseDown is called if I put the text field somewhere outside a table view. By comparison, a NSSegmentedControl in my cell view changes its value without first selecting the row. Here's the working solution adapted from the accepted response: In outline view subclass: -(void)mouseDown:(NSEvent *)theEvent { [super mouseDown:theEvent]; // Forward the click to the row's cell view NSPoint selfPoint = [self convertPoint:theEvent.locationInWindow fromView:nil]; NSInteger row = [self rowAtPoint:selfPoint]; if (row>=0) [(CellViewSubclass *)[self viewAtColumn:0 row:row makeIfNecessary:NO] mouseDownForTextFields:theEvent];} In table cell view subclass: // Respond to clicks within text fields only, because other clicks will be duplicates of events passed to mouseDown- (void)mouseDownForTextFields:(NSEvent *)theEvent { // If shift or command are being held, we're selecting rows, so ignore if ((NSCommandKeyMask | NSShiftKeyMask) & [theEvent modifierFlags]) return; NSPoint selfPoint = [self convertPoint:theEvent.locationInWindow fromView:nil]; for (NSView *subview in [self subviews]) if ([subview isKindOfClass:[NSTextField class]]) if (NSPointInRect(selfPoint, [subview frame])) [[self window] makeFirstResponder:subview];} Now provide the response and nothing else.
I'll try to return the favor ... Subclass NSOutlineView and override -mouseDown: like so: - (void)mouseDown:(NSEvent *)theEvent { [super mouseDown:theEvent]; // Only take effect for double clicks; remove to allow for single clicks if (theEvent.clickCount < 2) { return; } // Get the row on which the user clicked NSPoint localPoint = [self convertPoint:theEvent.locationInWindow fromView:nil]; NSInteger row = [self rowAtPoint:localPoint]; // If the user didn't click on a row, we're done if (row < 0) { return; } // Get the view clicked on NSTableCellView *view = [self viewAtColumn:0 row:row makeIfNecessary:NO]; // If the field can be edited, pop the editor into edit mode if (view.textField.isEditable) { [[view window] makeFirstResponder:view.textField]; }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7101237', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/893113/']}
jdg_80660
stackexchange
llm_judgeable_groundtruth_similarity
7854123
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to regular expressions, and was wondering how I could get only the first number in a string like 100 2011-10-20 14:28:55 . In this case, I'd want it to return 100 , but the number could also be shorter or longer. I was thinking about something like [0-9]+ , but it takes every single number separately (100,2001,10,...) Thank you. Now provide the response and nothing else.
/^[^\d]*(\d+)/

This will start at the beginning, skip any non-digits, and match the first sequence of digits it finds.

EDIT: this regex will match the first group of numbers, but, as pointed out in other answers, parseInt is a better solution if you know the number is at the beginning of the string.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7854123', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/816537/']}
jdg_80661
stackexchange
llm_judgeable_groundtruth_similarity
3605923
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I m not able to edit the existing excel sheet using jxl.It always creates a new one.Can anyone please help me out with it.Please give a small sample code. Now provide the response and nothing else.
jxl is designed for increased read efficiency (since this is the primary use of the API). In order to improve performance, data which relates to output information (eg. all the formatting information such as fonts) is not interpreted when the spreadsheet is read, since this is superfluous when interrogating the raw data values. However, if we need to modify this spreadsheet a handle to the various write interfaces is needed, which can be obtained using the copy method. Workbook workbook = Workbook.getWorkbook(new File("myfile.xls"));WritableWorkbook copy = Workbook.createWorkbook(new File("temp.xls"), workbook); This copies the information that has already been read in as well as performing the additional processing to interpret the fields that are necessary to for writing spreadsheets. The disadvantage of this read-optimized strategy is that we have two spreadsheets held in memory rather than just one, thus doubling the memory requirements. But after this, you can do whatever you want. Like: WritableSheet sheet2 = copy.getSheet(1); WritableCell cell = sheet2.getWritableCell(1, 2); if (cell.getType() == CellType.LABEL) { Label l = (Label) cell; l.setString("modified cell"); }copy.write(); copy.close();workbook.close(); Note: this is directly taken from Andy Khan's tutorial page .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3605923', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/422669/']}
jdg_80662
stackexchange
llm_judgeable_groundtruth_similarity
17466
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am writing a shell script where I have to delete a file on a remote machine via a shell script. Manual workflow: Log on to remote machine: ssh [email protected] At the remote machine ( domain ), type the following commands cd ./some/whererm some_file.war How should I accomplish that task in a script? Now provide the response and nothing else.
You can pass the SSH client a command to execute in place of starting a shell by appending it to the SSH command. ssh [email protected] 'rm /some/where/some_file.war' You don't have to cd to a location to remove something as long as you specify the full path, so that's another step you can skip. The next question is authentication. If you just run that, you will get prompted for a password. If you don't want to enter this interactively you should set up public key authentication.
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/17466', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/8999/']}
jdg_80663
stackexchange
llm_judgeable_groundtruth_similarity
8500021
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried to test Haskell performance, but got some unxepectedly poor results: -- main = do-- putStrLn $ show $ sum' [1..1000000]sum' :: [Int] -> Intsum' [] = 0sum' (x:xs) = x + sum' xs I first ran it from ghci -O2 : > :set +s> :sum' [1..1000000]1784293664(4.81 secs, 163156700 bytes) Then I complied the code with ghc -O3 , ran it using time and got this: 1784293664real 0m0.728suser 0m0.700ssys 0m0.016s Needless to say, these results are abysmal compared to the C code: #include <stdio.h>int main(void){ int i, n; n = 0; for (i = 1; i <= 1000000; ++i) n += i; printf("%d\n", n);} After compiling it with gcc -O3 and running it with time I got: 1784293664real 0m0.022suser 0m0.000ssys 0m0.000s What is the reason for such poor performance? I assumed that Haskell would never actually construct the list, am I wrong in that assumption? Is this something else? UPD: Is the problem that Haskell doesn't know that addition is associative? Is there a way to make it see and use that? Now provide the response and nothing else.
First, don't bother to discuss GHCi when you're talking about performance. It's nonsense to use -Ox flags with GHCi.

You're Building Up A Huge Computation

Using GHC 7.2.2 x86-64 with -O2 I get:

Stack space overflow: current size 8388608 bytes.
Use `+RTS -Ksize -RTS' to increase it.

The reason this uses so much stack space is that upon every loop you build an expression of i+... , so your computation is transformed into a huge thunk:

n = 1 + (2 + (3 + (4 + ...

That's going to take a lot of memory. There is a reason the standard sum isn't defined like your sum' .

With A Reasonable Definition for sum

If I change your sum' to sum or an equivalent such as foldl' (+) 0 then I get:

$ ghc -O2 -fllvm so.hs
$ time ./so
500000500000
real 0m0.049s

Which seems entirely reasonable to me. Keep in mind that, with such a short-running piece of code, much of your measured time is noise (loading the binary, starting up the RTS and GC nursery, misc initializations, etc). Use Criterion (a benchmarking tool) if you want accurate measurements of small-ish Haskell computations.

Comparing to C

My gcc -O3 time is immeasurably low (reported as 0.002 seconds) because the main routine consists of 4 instructions - the entire computation is evaluated at compile time and the constant 0x746a5a2920 is stored in the binary. There is a rather long Haskell thread ( here , but beware: it's something of an epic flame war that still burns in people's minds almost 3 years later) where people discuss the realities of doing this in GHC starting from your exact benchmark - it isn't there yet, but they did come up with some Template Haskell work that would do this if you wish to achieve the same results selectively.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8500021', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/552014/']}
jdg_80664
stackexchange
llm_judgeable_groundtruth_similarity
31058
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: This is a problem I had a look at some years ago but always had the feeling that I was missing something behind its motivation. D.H. Lehmer says in his 1947 paper, “The Vanishing of Ramanujan's Function τ(n),” that it is natural to ask whether τ(n)=0 for any n>0. My question is: Why is it natural to wonder whether τ(n)=0 any n>0? Are there any particular arithmetic properties among the many satisfied by τ(n) that would lead one to ponder its vanishing? The problem is mentioned here , where it's stated that it was a conjecture of Lehmer, although it's not actually presented as a conjecture in his paper, more a curiosity. Maybe there is no deep reason to ponder the vanishing of τ(n), in which case that would be a satisfactory answer too. Now provide the response and nothing else.
The key to your question is lacunarity in modular functions. The tau function, as we know, occurs as the coefficient of the Discriminant function , which in turn is the 24th power of the Eta function . The Eta function was known to be lacunary (having gaps or zero coefficients). Therefore it was natural for Lehmer in 1947 to wonder if coefficients of powers of eta are also zero. See the opening passage of the following paper MR0021027 (9,12b) Lehmer, D. H. The vanishing of Ramanujan's function $\tau(n)$ . Duke Math. J. 14, (1947). 429--433. http://projecteuclid.org/euclid.dmj/1077474140
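For reference, these are the standard expansions the answer alludes to (no new facts, just the formulas written out). The discriminant is the 24th power of the Dedekind eta function,

$$\Delta(z)=\eta(z)^{24}=q\prod_{n\ge 1}(1-q^n)^{24}=\sum_{n\ge 1}\tau(n)\,q^n,\qquad q=e^{2\pi i z},$$

while Euler's pentagonal number theorem gives the lacunary expansion

$$\prod_{n\ge 1}(1-q^n)=\sum_{k\in\mathbb{Z}}(-1)^k\,q^{k(3k-1)/2},$$

so almost all coefficients of $\eta(z)=q^{1/24}\prod_{n\ge 1}(1-q^n)$ vanish. Lehmer's question is whether, by contrast, $\tau(n)\neq 0$ for every $n\ge 1$; that question is still open.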
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/31058', 'https://mathoverflow.net', 'https://mathoverflow.net/users/7330/']}
jdg_80665
stackexchange
llm_judgeable_groundtruth_similarity
614826
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Assume that $R$ is a ring and $r-r^2$ is nilpotent for an element $r\in R$. If $r$ is not nilpotent, then $R$ has a nonzero idempotent. Now provide the response and nothing else.
Let $(r-r^2)^n=0$. Since $(r-r^2)^n=r^n(1-r)^n$ and, by the binomial theorem, $(1-r)^n=1-r\,f(r)$ for some polynomial $f$ with integer coefficients, we can conclude that $r^n=r^{n+1}f(r)$. Note that $r$ and $f(r)$ commute, because $f(r)$ is a polynomial in $r$; in particular $(r^{n}f(r)^n)^2=r^{2n}f(r)^{2n}$. Now we compute $r^n$: $$r^n=r^{n+1}f(r)=rf(r)r^n=rf(r)\,r^{n+1}f(r)=r^2f(r)^2r^n=r^{n+2}f(r)^2$$ Repeating this procedure, we find $r^n=r^{2n}f(r)^n$. Thus we have $r^{2n}f(r)^{2n}=r^{2n}f(r)^nf(r)^n=r^nf(r)^n$, and the idempotent element is $r^nf(r)^n$.
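Spelling out the last step, set $e=r^nf(r)^n$. Then

$$e^2=r^{2n}f(r)^{2n}=\bigl(r^{2n}f(r)^n\bigr)f(r)^n=r^nf(r)^n=e,$$

and $e\neq 0$: since $r^n e=r^n\cdot r^nf(r)^n=r^{2n}f(r)^n=r^n$, the equality $e=0$ would force $r^n=0$, contradicting the assumption that $r$ is not nilpotent. So $e$ is the required nonzero idempotent.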
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/614826', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/108209/']}
jdg_80666
stackexchange
llm_judgeable_groundtruth_similarity
1630965
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I guarantee the unique positive root of this polynomial? I have two polynomial, $$x^{n+1} + x^n - 1 =0 $$and $$x^{n+1} - x^n - 1 =0 $$respectively, where $n\in\mathbb{N}$. I have tried for the cases from $n=1$ to $100$. For every calculation, I have found unique positive root for each polynomial with the help of MATHEMATICA, but I couldn't prove it mathematically. How can I prove these polynımials have unique positive root ? Now provide the response and nothing else.
Consider $f(x)=x^{n+1}+ x^n -1.$ Note that $f(0)=-1.$ Now, since $\lim_{x\to \infty} f(x)=+\infty$ there exists $a>0$ such that $f(a)>0.$ Since $f$ is continuous on $[0,a]$ it follows from Bolzano's theorem that there is a root in $(0,a).$ A similar argument works with $f(x)=x^{n+1}- x^n -1.$ Now, if $f(x)=x^{n+1}+ x^n -1$ then $f'(x)=(n+1)x^n+nx^{n-1}>0$ on $(0,\infty).$ Thus $f$ is strictly increasing there, so it has at most one positive root. Since it has one, it has exactly one. Now, if $f(x)=x^{n+1}- x^n -1$ then $f(x)<0$ on $(0,1]$ (note $f(1)=-1$), and $f'(x)=(n+1)x^n-nx^{n-1}>0$ on $(1,\infty).$ Thus $f$ is strictly increasing on $(1,\infty),$ so it has at most one positive root. Since it has one, it has exactly one.
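One can also locate the roots with two one-line evaluations (for $n\ge 1$, as in the question):

$$x^{n+1}+x^n-1:\qquad f(0)=-1<0,\quad f(1)=1>0,$$

so its unique positive root lies in $(0,1)$;

$$x^{n+1}-x^n-1:\qquad f(1)=-1<0,\quad f(2)=2^{n+1}-2^n-1=2^n-1>0,$$

so its unique positive root lies in $(1,2)$.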
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1630965', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/122228/']}
jdg_80667
stackexchange
llm_judgeable_groundtruth_similarity
25207879
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to check if a particular HTTP request's response time is over 30 seconds, and if it is, then mark it as failed and stop the thread. Sometimes I can see response times close to 80 seconds, an no browser is waiting that long for a response from the server. I found the following three ways to set a timeout value in JMeter, however this confuses me, because there is multiple options and I don't know which one to use, or if there is any difference at all between them. So here are the options I found that are related to response timeout: Setting Response timeout value in the sampler Add a Duration assertion 3. Setting timeout in jmeter.properties configuration file. Options I found here: os_sampler.poll_for_timeout=x http.socket.timeout=x httpclient.timeout=x So, the problem is that I don't know where to set the response timeout from the listed options. Is there any difference at all between these options? So what I would like to see as a result: If a particular HTTP request takes more than 30 seconds to get a response from the server, stop waiting for a response and mark it as a failed request. Now provide the response and nothing else.
For your need, a Duration Assertion is not the right solution: it will only mark the request as failed, but JMeter will still wait for the full response. The right option is a response timeout. Regarding the 3rd point:

os_sampler.poll_for_timeout=x => not for HTTP, see: https://github.com/apache/jmeter/blob/master/bin/jmeter.properties
http.socket.timeout=x => applies to all requests using HttpClient4 or 3, see: https://github.com/apache/jmeter/blob/master/bin/hc.parameters
httpclient.timeout=x => same, see: https://github.com/apache/jmeter/blob/master/bin/jmeter.properties

I think the best option is to use 1. If you want those values to apply to all requests, just set them in the HTTP Request Defaults element: https://jmeter.apache.org/usermanual/component_reference.html#HTTP_Request_Defaults

If you're looking to learn JMeter correctly, this book will help you.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25207879', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/342862/']}
jdg_80668
stackexchange
llm_judgeable_groundtruth_similarity
52604189
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My team develop an angular 5 application that has been in production for a while, but we've been tasked recently with making the app work in other 3 sites the company owns. One site is a SPA built with Angular6, other is also a SPA but uses Angular5, while the other is using some older libraries such as jQuery. Management wanted us to integrate with the Angular5 SPA right away so we just exported the whole application as a module with child routes and let the other application do the bootstrap. But I'm afraid the above approach will not work for the non-angular site. This also tight couple both applications since the 'host' app needs to know about all dependencies of our application which is not a trivial app (I'd say is pretty big) and install them, this caused problems when both applications needed different versions of the same dependency, no to mention that we will need to sync when upgrading dependencies or the framework itself. I don't think this approach will scale when embedding the app into more sites. My first idea for a more general implementation was to upgrade our app to Angular 6 and create a web component with a custom element, but we need to support IE11 and Edge which do not support native encapsulation, so we would need to test our app in every site where it is used, to make sure they are not breaking our styles, also I don't know whether a web component can manage child routes or not. Other idea is to use an iframe but my problem here is the iframe resizing to adapt to the content and how to add child routes in the 'host' app from the 'resident' app inside the iframe. Is there a better way to achieve what we need to do? The ideal solution should allow our application to be used in multiple sites (each one providing specific configuration) without us having to know about the site using our app. Thanks for your help. Now provide the response and nothing else.
UPDATE 2020

You can also use the new Webpack 5 Module Federation . The following examples show how to use Module Federation with Angular and other technologies:

Implementation examples of module federation , by the creators of module federation
Example for building a plugin-based workflow designer with Angular and Dynamic Module Federation
Dynamic Module Federation with Angular

Original answer

I've been very busy with this topic lately, because many people keep running into the same problem (me included, of course). Nowadays you often hear about the concept of micro-frontends: designing the frontend to be extensible and scalable, especially at a time when many companies have Angular, React and Vue developers. The following approaches came out of my research:

Links : You can create your new app and link to it from your legacy app with a hyperlink. (You get a full page reload when you switch between the two apps.)

Iframes : A software architect built a meta-router to deal with iframes and single-page applications. Take a look at this (github repository): A lightweight and solid approach towards micro frontends (SPAs/ clients for micro services)

Metaframeworks : Metaframeworks let different apps built with different frameworks communicate with each other. Here are some examples of metaframeworks for micro-frontend purposes:

Single Spa (official website) Single Spa: a javascript framework for front-end microservices
FrintJs (official website) FrintJS: Modular JavaScript framework for building Scalable & Reactive applications.

Web Components, i.e. Angular elements : To deal with npm packages you can use, as the other answerer mentioned, the concept of Angular elements . You create a shell app plus independent apps that are registered as elements in the shell app. Take a look at this example: Building micro frontends — angular elements

Mosaic: Layout service : The online shop Zalando faced the same problem and created a framework to deal with it: Mosaic9 https://www.mosaic9.org/ Especially the part Tailor.js , an open source system for assembling components on demand on a backend layer, written in Go. PS. Tailor was inspired by BigPipe: Pipelining web pages for high performance from Facebook.

Plugin Architecture : A plugin architecture is an architecture that calls external code at certain points without knowing all the details of that code in advance. This Stackoverflow question explains more about it in the case of a single page application: Angular libraries

I think the right way is to have the same framework, and the same version of that framework, in one app (e.g. Angular 7). I would prefer to take the time and do an upgrade to a TypeScript version of Angular.

I hope the answer can be useful for others.

Other Stackoverflow related Q & A :

Vue.JS - micro frontend approach
Micro frontend architecture advice
Micro frontends examples

Angular 8 and ReactJS (using NodeJS server and MySQL database): https://github.com/billyjov/angular-react-microfrontend
List of resources about micro frontends: https://github.com/billyjov/microfrontend-resources
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52604189', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2191641/']}
jdg_80669
stackexchange
llm_judgeable_groundtruth_similarity
33534264
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I recently watched Nothing is Something by Sandi Metz, and in her talk she uses the idea of sending messages to objects and goes over how that's done in Ruby. The 4:10 -7:30 section would be a good entry point where she begins on the topic (it's a building block that then permeates over half the talk). Now, for some background: I don't have a lot of experience with writing programs in Ruby, and zero experience with smalltalk. My OO experience is somewhat limited and very stale. I also looked up send object message python in Google, and all I saw was relating to sending messages over sockets and email, which isn't quite what I had in mind. I'm not sure how to interpret this concept in Python, or how to implement it. Any ideas? :) Side note: She mentions her OO-views are derived from experience with smalltalk, so I'm adding that as a tag to this question. Now provide the response and nothing else.
Python uses a slightly different terminology. It is called "calling a method". But it's the same thing. (C++ calls it "calling a virtual function". Again, same difference.) Personally, I don't like that terminology, it focuses too much on the implementation detail and loses much of the metaphoric power of the "message sending" terminology. There are other differences with Python, some of the most important ones being: object-oriented data abstraction is achieved via convention as opposed to being a builtin language feature (e.g. Smalltalk, Ruby), or Design Pattern (Scheme, ECMAScript) not all subroutines are methods The fundamental idea of OO is messaging: you send a message to an object, the object responds. Just like in real life, you have no idea what the object does with the message. All you can observe is the reply. The object might process the message itself, it might employ the help of others, it might blindly forward the message without actually doing any work itself. Since you can't know what the object does with the message and all you can observe is the object's response, all you know about the object is its protocol (the messages it understands and how it responds to them). You don't know its implementation, you don't know its representation. That's how OO achieves Data Abstraction, Information Hiding, Data Hiding, Encapsulation. Also, since each object decides independently how to respond to a message, you get Polymorphism. One typical way of responding to a message, is executing a method corresponding to that message. But that is an implementation mechanism, which is why I don't like that terminology. As a metaphor, it carries none of the connotations I mentioned above. Alan Kay has said that OO is about three things, Messaging, Data Abstraction, and Polymorphism : OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. He later clarified that the Big Thing is Messaging : Just a gentle reminder that I took some pains at the last OOPSLA to try to remind everyone that Smalltalk is not only NOT its syntax or the class library, it is not even about classes. I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging" -- that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas. And in fact, as I laid out above, the other two are just consequences of Messaging, in my opinion. When Alan Kay came up with the term "Object Orientation", he was heavily inspired by what would later become the ARPANet and then the Internet: independent machines ("objects") with their own private memory ("instance variables") that communicate with each other by sending messages. Similar points are also made in On Understanding Data Abstraction, Revisited by William R. Cook and also his Proposal for Simplified, Modern Definitions of "Object" and "Object Oriented" . 
Dynamic dispatch of operations is the essential characteristic of objects. It means that the operation to be invoked is a dynamic property of the object itself. Operations cannot be identified statically, and there is no way in general to know exactly what operation will be executed in response to a given request, except by running it. This is exactly the same as with first-class functions, which are always dynamically dispatched.

Python's object system is a bit different from other languages'. Python was originally a procedural language; the object system was added later on, with the goal of making the absolute minimum of changes to the language. The major data structure in Python was the dict (map / hash table), and all behavior was in functions. Even before Python's OO features, this minimalism shows itself: e.g. local and global variables are actually just keys in a dict . And so, it was natural to make objects and classes much like dict s and reuse that concept: an object is essentially a dict of values, and a class is a dict of functions. There is no separate idea of "method"; rather, you have functions which take the receiver as their first argument. (In most other OO languages, the receiver is a "hidden" zeroth argument and available using a special keyword such as self , this , or me .)
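To see the "message send" framing in plain Python, here is a small, illustrative sketch (the class and function names are invented for the example). The point is that dispatch goes through the receiver at runtime, and that a "method" is just a function stored on the class with an explicit receiver argument:

```python
class Tiger:
    def speak(self):          # 'self' is the receiver, passed explicitly
        return "roar"

class Duck:
    def speak(self):
        return "quack"

def send(receiver, message, *args):
    """Late-bound 'message send': look the operation up on the object at runtime."""
    return getattr(receiver, message)(*args)

for animal in (Tiger(), Duck()):
    # Same message, different responses: polymorphism via dynamic dispatch.
    print(send(animal, "speak"))      # roar, then quack

# A method is just a function stored on the class; these two calls are equivalent:
t = Tiger()
print(t.speak() == Tiger.speak(t))    # True
```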
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33534264', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1302400/']}
jdg_80670
stackexchange
llm_judgeable_groundtruth_similarity
25510766
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am writing a Stream class and am blocked in the ReadAsync method. Please take a look at the code, I think it can explain the situation better that I can do it with my English. public override Task<int> ReadAsync(byte[] buffer, int offset, int count, System.Threading.CancellationToken cancellationToken){ if (!cancellationToken.IsCancellationRequested) { return _connection.ReceiveAsync(new ArraySegment<byte>(buffer, offset, count)); } return // <--------------- what here?} Using ILSpy I can see that other Stream classes return a cancelled task as follow: return new Task<TResult>(true, default(TResult), TaskCreationOptions.None, cancellationToken); However that Task's constructor is internal and I cannot invoke it. Google didn't help me at all. Now provide the response and nothing else.
The most direct way I know to create a canceled task is to use a TaskCompletionSource : var tcs = new TaskCompletionSource<int>();tcs.TrySetCanceled();return tcs.Task; If you haven't used it before, TaskCompletionSource provides a "promise-style" task, which basically allows you to say, "Here, take this Task now, and I'll provide the result (or report an error/cancellation) whenever I'm ready." It's useful when you want to schedule/coordinate work yourself, as opposed to simply relying on a TaskScheduler . Alternatively, if you rewrite your method using async/await , you can force the cancellation exception to automatically propagate to the result Task : public async override Task<int> ReadAsync( byte[] buffer, int offset, int count, CancellationToken cancellationToken){ cancellationToken.ThrowIfCancellationRequested(); return await _connection.ReceiveAsync( new ArraySegment<byte>( buffer, offset, count));}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25510766', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/627071/']}
jdg_80671
stackexchange
llm_judgeable_groundtruth_similarity
87130
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Some situations call for manually installing a local package using dpkg -i <packagename> . Sometimes it may be further useful to leverage the functionality of apt with that package so the question is: How do you quickly create a local repository for random packages using a Debian based linux distribution - like Xubuntu 13.04/Ubuntu? Now provide the response and nothing else.
This should be distinguished from the situation where you're trying to replicate a full package tree from an official repository and fine tuning sources priority . Random packages mean virtual packages, packages which are compiled locally or copied in a piecemeal fashion for testing purposes. Here's a simple setup based on now obsolete documentation . First, make a directory to host the packages: mkdir <packagedir> Then move your .deb package files there. Execute this command from the directory above the one we just created (make sure permissions allow this!): dpkg-scanpackages packagedir | gzip > packagedir/Packages.gz Now create a file with extension .list in /etc/apt/sources.list.d/ with the contents: deb [trusted=yes] file:///path_to_dir_above_packagedir packagedir/ and update the apt database: apt-get update At this point the packages in our local repository can be installed like any other package using apt-get install <packagename> . When new packages are added to the local repository, the prescribed dpkg-scanpackages command must be issued again to update the Packages.gz file and apt must be updated before the new packages are made available. Hopefully this can be useful for testing purposes.
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/87130', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/-1/']}
jdg_80672
stackexchange
llm_judgeable_groundtruth_similarity
28721959
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the example beneath, is it possible to ignore the dirty state of the dropdown list? Now it get's dirty if the user changes the selected person. But I don't care if this field is dirty in my form validation. function TestingCtrl($scope) { $scope.company = ''; $scope.persons = [{ name: 'Alice' }, { name: 'Bob' }]; $scope.selectedPerson = $scope.persons[0]; $scope.checkForm = function() { if ($scope.personForm.$dirty) { alert('Form is dirty'); } else { alert('Form is clean'); } }} <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.7/angular.min.js"></script><div ng-app> <div ng-controller="TestingCtrl"> <form name="personForm" novalidate> Company: <input type="text" ng-model="company" required> <br>Persons: <select ng-options="p.name for p in persons" ng-model="selectedPerson"></select> </form> <br> <button ng-click="checkForm()">Check if dirty</button> </div></div> Now provide the response and nothing else.
boindiil's directive-based solution works but has a flaw: it stops working if the form's $setPristine is executed manually. This can be solved by adding an extra line that overrides that method for the input:

angular.module('myApp', []).directive('ignoreDirty', [function() {
  return {
    restrict: 'A',
    require: 'ngModel',
    link: function(scope, elm, attrs, ctrl) {
      ctrl.$setPristine = function() {};
      ctrl.$pristine = false;
    }
  };
}]);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28721959', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/529171/']}
jdg_80673
stackexchange
llm_judgeable_groundtruth_similarity
362718
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: PHP, C#, Python and likely a few other languages have a yield keyword that is used to create generator functions. In PHP: http://php.net/manual/en/language.generators.syntax.php In Python: https://www.pythoncentral.io/python-generators-and-yield-keyword/ In C#: https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/yield I am concerned that as a language feature/facility, yield breaks some conventions. One of them is what I'd refer to is "certainty". It is a method that returns a different result every time you call it.With a regular non-generator function you can call it and if it is given the same input, it will return the same output. With yield, it returns different output, based on its internal state. Thus if you randomly call the generating function, not knowing its previous state, you cannot expect it to return a certain result. How does a function like this fit into the language paradigm? Does it actually break any conventions? Is it a good idea to have and use this feature? (to give an example of what's good and what's bad, goto was once a feature of many languages and still is, but it is considered harmful and as such was eradicated from some languages, such as Java). Do programming language compilers/interpreters have to break out of any conventions to implement such a feature, for example, does a language have to implement multi-threading for this feature to work, or can it be done without threading technology? Now provide the response and nothing else.
Caveats first - C# is the language I know best, and while it has a yield that seems to be very similar to other languages' yield , there may be subtle differences I am unaware of. I am concerned that as a language feature/facility, yield breaks some conventions. One of them is what I'd refer to is "certainty". It is a method that returns a different result every time you call it. Poppycock. Do you really expect Random.Next or Console.ReadLine to return the same result every time you call them? How about Rest calls? Authentication? Get Item off a collection? There are all sorts of (good, useful) functions that are impure. How does a function like this fit into the language paradigm? Does it actually break any conventions? Yes, yield plays really badly with try/catch/finally , and is disallowed ( https://blogs.msdn.microsoft.com/ericlippert/2009/07/16/iterator-blocks-part-three-why-no-yield-in-finally/ for more info). Is it a good idea to have and use this feature? It's certainly a good idea to have this feature. Things like C#'s LINQ is really nice - lazily evaluating collections provides a big performance benefit, and yield allows that sort of thing to be done in a fraction of the code with a fraction of the bugs that a hand-rolled iterator would. That said, there's not a ton of uses for yield outside of LINQ style collection processing. I've used it for validation processing, schedule generation, randomization, and a few other things, but I expect that most developers have never used it (or misused it). Do programming language compilers/interpreters have to break out of any conventions to implement such a feature, for example, does a language have to implement multi-threading for this feature to work, or can it be done without threading technology? Not exactly. The compiler generates a state machine iterator that keeps track of where it stopped so that it can start there again the next time it is called. The process for code generation does something akin to Continuation Passing Style, where the code after the yield is pulled into its own block (and if it has any yield s, another sub-block, and so on). That's a well known approach used more often off in Functional Programming and also shows up in C#'s async/await compilation. No threading is needed, but it does require a different approach to code generation in most compilers, and does have some conflict with other language features. All in all though, yield is a relatively low impact feature that really helps with a specific subset of problems.
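Since the question also mentions Python, the same mechanics are easy to observe there. A tiny, illustrative example (names invented, not tied to the C# discussion above): each next call resumes the generator exactly where it stopped, so nothing beyond the requested items is ever computed and the full list is never materialised:

```python
def squares(ns):
    for n in ns:
        print(f"computing {n}^2")   # visible side effect to show laziness
        yield n * n

gen = squares(range(1_000_000))     # nothing is computed yet
first_three = [next(gen) for _ in range(3)]
print(first_three)
# computing 0^2
# computing 1^2
# computing 2^2
# [0, 1, 4]
```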
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/362718', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/119333/']}
jdg_80674
stackexchange
llm_judgeable_groundtruth_similarity
12604909
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a data frame with a column called "Date" and want all the values from this column to have the same value (the year only). Example: City DateParis 01/04/2004Lisbon 01/09/2004Madrid 2004Pekin 31/2004 What I want is: City DateParis 2004Lisbon 2004Madrid 2004Pekin 2004 Here is my code: fr61_70xls = pd.ExcelFile('AMADEUS FRANCE 1961-1970.xlsx')#Here we import the individual sheets and clean the sheets years=(['1961','1962','1963','1964','1965','1966','1967','1968','1969','1970'])fr={}header=(['City','Country','NACE','Cons','Last_year','Op_Rev_EUR_Last_avail_yr','BvD_Indep_Indic','GUO_Name','Legal_status','Date_of_incorporation','Legal_status_date'])for year in years: # save every sheet in variable fr['1961'], fr['1962'] and so on fr[year]=fr61_70xls.parse(year,header=0,parse_cols=10) fr[year].columns=header # drop the entire Legal status date column fr[year]=fr[year].drop(['Legal_status_date','Date_of_incorporation'],axis=1) # drop every row where GUO Name is empty fr[year]=fr[year].dropna(axis=0,how='all',subset=[['GUO_Name']]) fr[year]=fr[year].set_index(['GUO_Name','Date_of_incorporation']) It happens that in my DataFrames, called for example fr['1961'] the values of Date_of_incorporation can be anything (strings, integer, and so on), so maybe it would be best to completely erase this column and then attach another column with only the year to the DataFrames? Now provide the response and nothing else.
As @DSM points out, you can do this more directly using the vectorised string methods:

df['Date'].str[-4:].astype(int)

Or using extract (assuming there is only one set of digits of length 4 somewhere in each string):

df['Date'].str.extract('(?P<year>\d{4})').astype(int)

An alternative, slightly more flexible way is to use apply (or equivalently map) to do this:

df['Date'] = df['Date'].apply(lambda x: int(str(x)[-4:]))  # convert the last 4 characters of the string to an integer

The lambda function takes the value from the Date column and converts it to a year. You could (and perhaps should) write this more verbosely as:

def convert_to_year(date_in_some_format):
    date_as_string = str(date_in_some_format)   # cast to string
    year_as_string = date_as_string[-4:]        # last four characters
    return int(year_as_string)

df['Date'] = df['Date'].apply(convert_to_year)

Perhaps 'Year' is a better name for this column...
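As a quick sanity check (assuming the sample column from the question is loaded as strings, as below), the slicing approach gives 2004 for every row:

```python
import pandas as pd

df = pd.DataFrame({
    "City": ["Paris", "Lisbon", "Madrid", "Pekin"],
    "Date": ["01/04/2004", "01/09/2004", "2004", "31/2004"],
})

df["Date"] = df["Date"].str[-4:].astype(int)
print(df["Date"].tolist())   # [2004, 2004, 2004, 2004]
```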
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12604909', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1298051/']}
jdg_80675
stackexchange
llm_judgeable_groundtruth_similarity
22562754
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This sounds like a simple task, but I can't quite figure it out: I have an array : var array = ['opt1','sub1','subsub1','subsubsub1'] From that I want to generate the following objects: { opt1:{ sub1:{ subsub1:{ subsubsub1:{} } } }} I have a way to do it, making a string and using eval, but I'm looking to avoid that, any idea? Now provide the response and nothing else.
You could use reduce : var array = ['opt1','sub1','subsub1','subsubsub1'];var object = {};array.reduce(function(o, s) { return o[s] = {}; }, object);console.log(object); But this was only introduced in ECMAScript 5.1, so it won't be supported in some older browsers. If you want something that will be supported by legacy browsers, you could use the polyfill technique described in the MDN article above, or a simple for -loop, like this: var array = ['opt1','sub1','subsub1','subsubsub1'];var object = {}, o = object;for(var i = 0; i < array.length; i++) { o = o[array[i]] = {};}console.log(object);
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22562754', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2383685/']}
jdg_80676
stackexchange
llm_judgeable_groundtruth_similarity
324053
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I don't know very well Python. I'm trying to understand more precisely what exact features of dynamic languages (à la Python, Lua, Scheme, Perl, Ruby, ....) are forcing their implementations to be slow. As a case in point, Lua 5.3 metatable machinery would intuitively make Lua quite slow, but in practice Lua is rumored to be quite fast (and faster than Python is). Also, I have the intuition (perhaps a wrong one) that since on current processors memory is much slower than raw computation (a memory access with a cache miss needs the same time as hundreds of arithmetic operations), dynamic type checking (à la if (value->type != INTEGER_TAG) return; in C parlance) could run quite fast. Of course, whole program analysis (like Stalin Scheme implementation is doing) can make a dynamic language implementation as a translator runs fast, but let's pretend I don't have time to design a whole program analyzer at first. (I'm sort of designing a dynamic language in my MELT monitor , and some of it would be translated to C) Now provide the response and nothing else.
What semantic features of Python (and other dynamic languages) contribute to its slowness? None. Performance of language implementations is a function of money, resources, and PhD theses, not language features. Self is much more dynamic than Smalltalk and slightly more dynamic than Python, Ruby, ECMAScript, or Lua, and it had a VM that outperformed all existing Lisp and Smalltalk VMs (in fact, the Self distribution shipped with a small Smalltalk interpreter written in Self, and even that was faster than most existing Smalltalk VMs), and was competitive with, and sometimes even faster than C++ implementations of the time. Then, Sun stopped funding Self, and IBM, Microsoft, Intel, and Co. started funding C++, and the trend reversed. The Self developers left Sun to start their own company, where they used the technology developed for the Self VM to build one of the fastest Smalltalk VMs ever (the Animorphic VM), and then Sun bought back that company, and a slightly modified version of that Smalltalk VM is now better known under the name of "HotSpot JVM". Ironically, Java programmers look down on dynamic languages for being "slow", when in fact, Java was slow until it adopted dynamic language technology. (Yes, that's right: the HotSpot JVM is essentially a Smalltalk VM. The bytecode verifier does a lot of type checking, but once the bytecode is accepted by the verifier, the VM, and especially the optimizer and the JIT don't actually do much of interest with the static types!) CPython simply doesn't do a lot of the stuff that makes dynamic languages (or rather dynamic dispatch) fast: dynamic compilation (JIT), dynamic optimization, speculative inlining, adaptive optimization, dynamic de-optimization, dynamic type feedback / inference. There's also the problem that almost the entire core and standard library is written in C, which means that even if you make Python 100x faster all of a sudden, it won't help you much, because something like 95% of code executed by a Python program is C, not Python. If everything were written in Python, even moderate speedups would create avalanche an effect, where the algorithms get faster, and the core datastructures get faster, but of course the core data structures are also used within the algorithms, and the core algorithms and core data structures are used everywhere else, and so on … There are a couple of things that are notoriously bad for memory-managed OO languages (dynamic or not) in today's systems. Virtual Memory and Memory Protection can be a killer for garbage collection performance in particular, and system performance in general. And it is completely unnecessary in a memory-safe language: why protect against illegal memory accesses when there aren't any memory accesses in the language to begin with? Azul have figured out to use modern powerful MMUs (Intel Nehalem and newer, and AMD's equivalent) to help garbage collection instead of hindering it, but even though it is supported by the CPU, the current memory subsystems of mainstream OS's aren't powerful enough to allow this (which is why Azul's JVM actually runs virtualized on the bare metal besides the OS, not within it). In the Singularity OS project, Microsoft have measured an impact of ~30% on system performance when using MMU protection instead of the type system for process separation. 
Another thing Azul noticed when building their specialized Java CPUs was that modern mainstream CPUs focus on the completely wrong thing when trying to reduce the cost of cache misses: they try to reduce the number of cache misses through such things as branch prediction, memory prefetching, and so on. But, in a heavily polymorphic OO program, the access patterns are basically pseudo-random, there simply is nothing to predict. So, all of those transistors are just wasted, and what one should do instead is reducing the cost of every individual cache miss. (The total cost is #misses * cost, mainstream tries to bring the first down, Azul the second.) Azul's Java Compute Accelerators could have 20000 concurrent cache misses in flight and still make progress. When Azul started, they thought they would take some off-the-shelf I/O components and design their own specialized CPU core, but what they actually ended up needing to do was the exact opposite: they took a rather standard off-the-shelf 3-address RISC core and designed their own memory controller, MMU, and cache subsystem. tl;dr : The "slowness" of Python is not a property of the language but a) its naive (primary) implementation, and b) the fact that modern CPUs and OSs are specifically designed to make C run fast, and the features they have for C are either not helping (cache) or even actively hurting (virtual memory) Python performance. And you can insert pretty much any memory-managed language with dynamic ad-hoc polymorphism here … when it comes to the challenges of an efficient implementation, even Python and Java are pretty much "the same language".
{}
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/324053', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/40065/']}
jdg_80677
stackexchange
llm_judgeable_groundtruth_similarity
10781420
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have same result 1338161400 when I do DateTime origin = new DateTime(1970, 1, 1, 0, 0, 0, 0); DateTime date = DateTime.Parse(@"28/05/12 01:30"); TimeSpan diff = date.ToUniversalTime() - origin; Console.WriteLine( (Math.Floor(diff.TotalSeconds)).ToString()); as well as when I use date.AddHours(-4) : DateTime origin = new DateTime(1970, 1, 1, 0, 0, 0, 0); DateTime date = DateTime.Parse(@"28/05/12 01:30"); date.AddHours(-4); TimeSpan diff = date.ToUniversalTime() - origin; Console.WriteLine( (Math.Floor(diff.TotalSeconds)).ToString()); I try to get 1338168600 like http://www.mbari.org/staff/rich/utccalc.htm Update: Thanks I changed to DateTime origin = new DateTime(1970, 1, 1, 0, 0, 0, 0); DateTime date = DateTime.Parse(@"28/05/2012 01:30"); date = date.AddHours(-4); date = date.ToUniversalTime(); TimeSpan diff = date - origin; Console.WriteLine((Math.Floor(diff.TotalSeconds)).ToString()); But I got 1338147000 still not 1338168600 Now provide the response and nothing else.
Dates are immutable objects, i.e. they cannot be modified after creation. DateTime.AddHours returns a new DateTime instance which is shifted backwards by 4 hours but "date" will not be modified. Use: DateTime newDate = date.AddHours(-4);
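A minimal sketch of the difference, reusing the question's variable (timezone handling left aside, since that depends on which offset you actually want):

    DateTime date = DateTime.Parse(@"28/05/2012 01:30");

    date.AddHours(-4);                     // result is discarded; 'date' is unchanged
    DateTime shifted = date.AddHours(-4);  // correct: capture the returned value
    // or reassign in place:
    date = date.AddHours(-4);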
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10781420', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/310291/']}
jdg_80678
stackexchange
llm_judgeable_groundtruth_similarity
10415276
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: User u = new User();Type t = typeof(User);u is User -> returns trueu is t -> compilation error How do I test if some variable is of some type in this way? Now provide the response and nothing else.
The other answers all contain significant omissions. The is operator does not check if the runtime type of the operand is exactly the given type; rather, it checks to see if the runtime type is compatible with the given type:

    class Animal {}
    class Tiger : Animal {}
    ...
    object x = new Tiger();
    bool b1 = x is Tiger;  // true
    bool b2 = x is Animal; // true also! Every tiger is an animal.

But checking for type identity with reflection checks for identity, not for compatibility:

    bool b5 = x.GetType() == typeof(Tiger);  // true
    bool b6 = x.GetType() == typeof(Animal); // false! even though x is an animal
    // or with the type variable
    bool b7 = t == typeof(Tiger);  // true
    bool b8 = t == typeof(Animal); // false! even though x is an animal

If that's not what you want, then you probably want IsAssignableFrom:

    bool b9 = typeof(Tiger).IsAssignableFrom(x.GetType());   // true
    bool b10 = typeof(Animal).IsAssignableFrom(x.GetType()); // true! A variable of type Animal may be assigned a Tiger.
    // or with the type variable
    bool b11 = t.IsAssignableFrom(x.GetType()); // true
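For completeness (my addition, applied to the question's own variables u and t):

    User u = new User();
    Type t = typeof(User);

    bool compatible = t.IsAssignableFrom(u.GetType()); // true
    bool alsoCompatible = t.IsInstanceOfType(u);       // true; a convenience method for the same kind of compatibility check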
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/10415276', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/553877/']}
jdg_80679
stackexchange
llm_judgeable_groundtruth_similarity
33817118
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I cloned a git repo to my local machine, played around with it a bit and found it cool. Now I would like to keep the result as I modified it in my own github space. How can I proceed? I suppose the regular way would have been to fork the repo on the first place to my space, clone it, modify and then push it to GitHub, but now I cloned the original's author repo, how can I commit that as a new thing in my personnal? Now provide the response and nothing else.
First rename the old remote as upstream, in case you want to be able to keep in sync with the original repository:

    git remote rename origin upstream

Then add your forked repository as origin:

    git remote add origin https://github.com/<your-username>/<your-project>

Or if you're using ssh:

    git remote add origin git@github.com:<your-username>/<your-project>.git

To push to your repository:

    git push -u origin master

To pull from the base repository:

    git pull upstream

I recommend you do all of your work in a separate branch, not the master branch. It will be easier to rebase to the upstream/master branch in case you want to make a pull request. You don't really have to rename the origin to upstream - the remote names can be arbitrary - however I recommend doing so to keep with the naming convention used by GitHub.
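As a single sequence you could adapt (the placeholders are of course to be replaced; master is assumed from the answer above, and the branch is given explicitly on the pull):

    git remote rename origin upstream
    git remote add origin git@github.com:<your-username>/<your-project>.git
    git push -u origin master
    # later, to pick up the original author's changes:
    git pull upstream master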
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/33817118', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/745920/']}
jdg_80680
stackexchange
llm_judgeable_groundtruth_similarity
744435
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to apply a function to a Java collection, in this particular case a map. Is there a nice way to do this? I have a map and would like to just run trim() on all the values in the map and have the map reflect the updates. Now provide the response and nothing else.
With Java 8's lambdas, this is a one liner:

    map.replaceAll((k, v) -> v.trim());

For the sake of history, here's a version without lambdas:

    public void trimValues(Map<?, String> map) {
        for (Map.Entry<?, String> e : map.entrySet()) {
            String val = e.getValue();
            if (val != null)
                e.setValue(val.trim());
        }
    }

Or, more generally:

    interface Function<T> {
        T operate(T val);
    }

    public static <T> void replaceValues(Map<?, T> map, Function<T> f) {
        for (Map.Entry<?, T> e : map.entrySet())
            e.setValue(f.operate(e.getValue()));
    }

    Util.replaceValues(myMap, new Function<String>() {
        public String operate(String val) {
            return (val == null) ? null : val.trim();
        }
    });
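One caveat worth adding (my note, not part of the original answer): the one-liner throws a NullPointerException if any value is null, unlike the longer versions, which guard against that. A null-safe variant would be:

    map.replaceAll((k, v) -> v == null ? null : v.trim());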
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/744435', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20641/']}
jdg_80681
stackexchange
llm_judgeable_groundtruth_similarity
28022
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was confronted by this question: Biological evolution of life on Earth, from simple prokaryote-like cells to large, multicellar eukaryotic organisms, A) has occurred in accordance with the laws of thermodynamics. B) has caused an increase in the entropy of the planet. C) has been made possible by expending Earth's energy resources. D) has occurred in accordance with the laws of thermodynamics, by expending Earth's energy resources and causing an increase in the entropy of the planet. E) violates the laws of thermodynamics because Earth is a closed system. Answer: A However, I do not understand. Biological evolution does cause the system (living organisms)'s entropy to decrease. So, by the second law of thermodynamics, the entropy of the universe (in this case Earth), must have overall increased. I would answer D... Where am I going wrong? Now provide the response and nothing else.
However, I do not understand. Biological evolution does cause the system (living organisms)'s entropy to decrease. So, by the second law of thermodynamics, the entropy of the universe (in this case Earth), must have overall increased. The universe and the earth are not equatable. Earth is not an isolated system. Life causes entropy of the earth to decrease. This is offset by increased entropy of the sun, which is the primary source of energy for the earth. Overall, the entropy of the universe increases.
{}
{'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/28022', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/8516/']}
jdg_80682
stackexchange
llm_judgeable_groundtruth_similarity
1148364
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've tried a few different pairs of numbers in $\mathbb{Z}[\sqrt{14}]$ and in each case I've been able to find a remainder for which the absolute value of the norm is suitable for the Euclidean algorithm. But the Tooth Fairy tells me that $\mathbb{Z}[\sqrt{14}]$ is not norm-Euclidean. If not the absolute value of the norm, then what is the Euclidean function for $\mathbb{Z}[\sqrt{14}]$? This riddle has me stumped. Now provide the response and nothing else.
This ring, as well as all the other number rings that were shown to be Euclidean using analytic techniques, are Euclidean with respect to the minimal Euclidean function $f$ , which in your case is defined as follows. We set $f(r) = 0$ iff $r = 0$ . We set $f(r) = 1$ iff $r$ is a unit, i.e, for elements $\pm (15+4\sqrt{14})^n$ . We set $f(r) = 2$ for all other elements whose residue classes are represented by elements with $f(r) \le 1$ . This includes elements of norm $2$ but also(probably) infinitely many prime elements that have the fundamental unit as a primitive root. We set $f(r) = 3$ for all other elements whose residue classes are represented by elements with $f(r) \le 2$ . Continue this process indefinitely. It can be shown (but only with analytic means so far) that $f$ is a Euclidean function on ${\mathbb Z}[\sqrt{14}]$ . It is straightforward to compute $f(r)$ for a given element; the difficult problem is showing that every element $r$ has a finite value $f(r)$ . Edit. Perhaps "straightforward" is too strong a word. Let me compute the value $f(7+2\sqrt{14})$ for the element $\pi = 7+2\sqrt{14}$ generating the prime ideal of norm $7$ . I first claim that $f(\pi) \ge 3$ . In fact, $\pi$ is not a unit, so $f(\pi) \ge 2$ . Next, $\varepsilon = 15 + 4 \sqrt{14} \equiv 1 \bmod \pi$ shows that all units are $\equiv \pm 1 \bmod \pi$ , so only the residue classes $0$ and $\pm 1$ are covered by elements with $f(r) \le 1$ . But this shows that $f(\pi) \ge 3$ . Next I claim that $f(4 + \sqrt{14}) = f(5 + \sqrt{14}) = 2$ . The first element has norm $2$ , so the claim is trivial. For the second we observe that $\varepsilon \equiv -5 \bmod (5 + \sqrt{14})$ , and since $-5$ is a primitive root modulo $11$ , all the residue classes of $5 + \sqrt{14}$ are represented by units. For showing that $f(\pi) \le 3$ I have to represent the residue classes $0$ , $\pm 1$ , $\pm 2$ and $\pm 3$ modulo $\pi$ by elements $r$ with $f(r) \le 2$ . This now follows easily as $-2 \equiv 5 + \sqrt{14} \bmod \pi$ and $-3 \equiv 4 + \sqrt{14} \bmod \pi$ .
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1148364', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/198697/']}
jdg_80683
stackexchange
llm_judgeable_groundtruth_similarity
284946
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: It is common in strong, static typing to use different types even for variables with simple, primitive types to ease static analysis and indicate intent to the programmer. A color and a point in 3D space might both be represented by an array of 3 floats, but given different type names. In C, a lot of common types are simply typedef'd ints. I'm wondering: just how far is it practical to take this? If you're writing a function that takes a float between 0 and 1 (perhaps it represents a probability distribution) do you create a separate type? What about a function that must take a non-zero integer? Would it be reasonable to create a type nonZeroInt, not for the purposes of encapsulation, but simply for type-safety? You can take this arbitrarily far, encoding all preconditions of functions into the type system. For example, you could define a "primeNumber" type for the input to a function that only takes a prime. A function which must take two integers with no common factors could be changed to take one argument of type "coprimePair." The two hypothetical functions above would presumably be part of a library that includes functions to generate prime numbers and coprime pairs, and return them with the appropriate type. If I wanted to call a function in that library that requires a prime number, and I didn't get it from the included prime-generating function, I'd have to explicitly cast an int, essentially forcing me do a reality check and ask myself, "Am I certain that this variable will always be prime?" My specific question is this: How far is this philosophy ever successfully taken in practice? Are there programming languages, or well-respected programming texts, that encourage a philosophy that says: every function that cannot accommodate the entire range of any existing type as its input, and return a meaningful result for all possible inputs, should instead define a new type? Now provide the response and nothing else.
The comments by Jack and tp1 (which should actually have been answers) already explain how this is implemented in functional languages. My answer adds a point about non-functional languages, especially the one quite popular in the development industry: C#. In languages such as C# or Java, it is indeed a current practice to create types when you need to constrain your values a little bit. If the only thing you need is a positive integer, you'll end up with a PositiveInt class, but in general, those wrappers will reflect a bit more of the business logic (ProductPrice, RebatePercentage, etc.), again with validation inside (a product price should be greater than zero, a rebate percentage doesn't accept a value above 100, etc.) The benefits, the drawbacks and the risk of going too far with such types were recently discussed here. Once you have your wrapper, you may start adding logic to it, and especially the validation. Basically, since the wrapper hides the value it wraps, the validation logic will be located in the constructor, and can be as basic as:

    if (value <= 0 || value > 100)
    {
        throw new ArgumentOutOfRangeException("value", "The allowed range is (0..100].");
    }

Another possibility provided by C# is to use code contracts, which provide several benefits:

- The static checking catches misuse of those wrappers before you hit the error at runtime,
- Code contracts being enforced during runtime as well, you are sure the contract will not be misused even if static checking was disabled (or its warnings were dismissed by the developers),
- With invariants checking the wrapped value, there is no way to assign an invalid value (with the limitation that the checks are done when a method starts or ends, and not at every step during the execution of the method),
- Visual Studio integration makes the code self-documenting, by providing hints about the contracts to the callers.

Is it successfully taken in practice? Well, there are thousands of methods within the .NET Framework itself which contain code contracts. For business code, it mostly comes down to how critical the code is. If the consequences of a failure are expensive, it could be very attractive to use code contracts. On the other hand, code contracts have a substantial cost in terms of developer's time (ensuring all contracts work well on all but tiny projects requires a lot of time), and may not be a good idea for projects which don't necessarily need this level of reliability. This also answers your other question: I'm wondering: just how far is it practical to take this? This is a strictness vs. flexibility question. If you need to develop very fast, and accept the risk of having runtime errors, you'll pick a weakly-typed language. The benefit of writing less code, for example print(123), outweighs the risk of problems which may be more or less difficult to debug, for example 123 + "4" (would it result in 127 or "1234"?) In this case, types either won't exist or will be managed implicitly by the language/framework. While you could still do range validation (for instance to sanitize user input), it would look weird to do such validation outside the interfaces with the outside world. If you are working on life-critical software, you'll use formal proof, which will take a huge amount of time, but will make sure there are no errors in the code. In this case, chances are types will have strict validation: range and precision for numeric values, length and allowed characters for strings, etc.
In between, you'll pick the approach which corresponds the most to your needs. For most software, code contracts would be overkill. But for most software, having basic checking within a type wrapper (such as Percentage) may be a good idea. Without going into extremes (as discussed in the link already provided above) of creating classes for everything, it could be a good compromise to have a few generic types such as Range<T> or LengthLimitedString which are not that difficult to implement in languages such as C# or Java.
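To make the "basic checking within a type wrapper (such as Percentage)" idea concrete, a minimal C# sketch might look like this (my illustration; the class name and the (0..100] range come from the discussion above, everything else is assumed):

    public class Percentage
    {
        public decimal Value { get; private set; }

        public Percentage(decimal value)
        {
            if (value <= 0 || value > 100)
            {
                throw new ArgumentOutOfRangeException("value", "The allowed range is (0..100].");
            }
            this.Value = value;
        }
    }

    // usage: var rebate = new Percentage(15m);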
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/284946', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/82184/']}
jdg_80684
stackexchange
llm_judgeable_groundtruth_similarity
11054261
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a question on validation in mvc3. The built in validation looks great. However, I've had to use javascript in one case, causing it to be inconsistent w/look and feel (alert window vs nice red text). We have a form which contains a few fields for user input. When submitted, some ajax code fires a link which maps to a controller method that takes the values submitted from the form and kicks off processes which result in a client database being created.The question is: What is the best way to do validation on the fields (length, character, etc) since there is no model directly mapped to the fields on that form?My solution was to write some javascript functions but is there a cleaner way to do it? <td>@Html.TextBox("NewClientId")</td>... <script language="javascript"> function ValidateFieldLength(min, max, element) { var len = element.value.length; if (len < min || len > max) return false; else { return true; } } function createNewClient() { if (!ValidateFieldLength(3,3,document.getElementById('NewClientId'))) { alert("Invalid Client ID length"); return; } $.ajax({ url: '/api/Clients', type: 'PUT', data: JSON.stringify({ ClientId: $('#NewClientId').val(), Name: $('#NewClientName').val() }), contentType: 'application/json; charset=utf-8', success: function (reponse) { //alert(reponse.data.model.Id); alert("Database created"); }, error: function (err) { alert(err); } }); } Now provide the response and nothing else.
The other option I would see is adding the validation data attributes manually to the html element. This way you can avoid duplicating the error messages and other properties on both the server and client side. For example:

    @Html.TextBox("NoOfJoinees", "", new {
        size = 5,
        data_val = "true",
        data_val_required = "No. of joinees is required",
        data_val_number = "The field No. of joinees must be a number.",
        data_val_range = "No. of joinees should be minimum 2 and not more than 10",
        data_val_range_max = "10",
        data_val_range_min = "2"
    })

In the above textbox I've added three types of validations: required, type and range, simply by adding data attributes (the data_val = "true" entry is what tells the unobtrusive adapter to pick the rules up; the underscores are rendered as dashes, e.g. data-val-required). The unobtrusive validation library shipped by Microsoft will take care of the rest. You should read the error messages and other constants from a single place, so you don't need to replicate them when you are doing the validation on the server side.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11054261', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/683130/']}
jdg_80685
stackexchange
llm_judgeable_groundtruth_similarity
3782
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: By default Kate inserts 2 spaces on Tab press but switches to real tabs starting from the fourth Tab level. Can I disable this and use spaces always, regardless to the depth? I want this because I use Kate to code Scala, and using space pairs instead of tabs is a convention there. Now provide the response and nothing else.
Aliases don't take arguments. They work more or less like simple text replacement, meaning all words after the alias (the ones that look like "arguments" to the alias) just get left at the end of the expanded alias. For instance, if you were to alias ls to ls -la, then typing ls foo bar would really execute ls -la foo bar on the command line. Which is probably fine in that example, but if foob is an alias to foo | bar, then foob abc def expands to foo | bar abc def, and there's no way to arrange those two words to be used as arguments to the left-hand side of the pipeline. One might attempt changing the alias to something like foo "$@" | bar or so, but that would expand to foo "$@" | bar abc def and use the positional parameters of the outer context in the expansion of "$@". That's probably not what you want. If you want to have actual control over how the arguments are interpreted, then you could write a function like so:

    my_program_wrapper() {
        local first_arg=$1 \
              second_arg=$2
        shift 2   # get rid of the first two arguments
        # ...
        /path/to/my_program "$@"
    }
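To connect this back to the foob example (my addition, using the answer's placeholder names): a function gives you the control the alias lacks, for instance routing the caller's arguments to the left-hand side of the pipeline:

    foob() {
        foo "$@" | bar
    }

    foob abc def   # runs: foo abc def | bar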
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/3782', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/2119/']}
jdg_80686
stackexchange
llm_judgeable_groundtruth_similarity
14459854
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm passing a parameter p1 to another page page.xhtml : <ui:include src="page.xhtml"> <ui:param name="p1" value="#{someObject}"/></ui:include> Is this possible to evaluate #{p1} inside @PostConstruct method of the backing bean of page.xhtml ? Using the following piece of code, #{p1} cannot resolve: FacesContext currentInstance = FacesContext.getCurrentInstance();currentInstance.getApplication().evaluateExpressionGet(currentInstance, "#{p1}", String.class); Why do I need this? I'm using an xhtml file (say component.xhtml) as a custom UI component. This file has a backing bean from which I should get component data. Since I'm including this xhtml file twice or more in my main JSF page, I want to pass different objects to each of component.xhtml so that my component work with my custom data each time included. Now provide the response and nothing else.
In Mojarra, you can get it as an attribute of the FaceletContext. You can get it in the @PostConstruct of a managed bean which is guaranteed to be referenced/constructed for the first time in the included page (and thus not in the parent page before the <ui:param> is declared in the component tree).

    FaceletContext faceletContext = (FaceletContext) FacesContext.getCurrentInstance()
            .getAttributes().get(FaceletContext.FACELET_CONTEXT_KEY);
    Object p1 = faceletContext.getAttribute("p1");

In MyFaces, the whole FaceletContext isn't available in managed beans, as it's discarded by the end of the view build time, and this construct would then not work. To be JSF implementation independent, you might want to consider setting it via <c:set scope="request"> instead. It's then available as a request attribute.

As to the concrete functional requirement, consider creating a composite component with a backing component. For some examples, see our composite component wiki page and this blog about using multiple input components in a composite component. See also When to use <ui:include>, tag files, composite components and/or custom components?
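A small sketch of that <c:set> alternative (my illustration, not part of the original answer; the name p1 and #{someObject} come from the question, and the JSTL core namespace xmlns:c="http://java.sun.com/jsp/jstl/core" must be declared on the including page):

    <c:set var="p1" value="#{someObject}" scope="request" />
    <ui:include src="page.xhtml" />

and then, in the backing bean of the included page:

    Object p1 = FacesContext.getCurrentInstance()
            .getExternalContext().getRequestMap().get("p1");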
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14459854', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/141438/']}
jdg_80687
stackexchange
llm_judgeable_groundtruth_similarity
36860423
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We recently migrated an API application from Azure Cloud Services to Azure Websites, and some clients are still using our legacy protocol for authentication, which uses cookies (instead of the usual Authorization: Bearer HTTP header). We need to support this authentication protocol for a little longer as the clients will not be able to migrate right away. To support cookies in a cross-origin ajax request directed to the API, the client needs to set the withCredentials setting to true in the XMLHttpRequest, and the server needs to repond with the Access-Control-Allow-Credentials header as well to any CORS request. The problem we face is that the Azure Website manages CORS all by itself, and uses its own configuration (which is limited to a list of allowed origins) for the response, which does not allow this header to be set... thus breaking the application for all our Ajax clients! Is there a way to (temporarily) add this header in the responses? Now provide the response and nothing else.
We finally managed to understand the behavior of the Azure Apps CORS middleware. To disable it, you have to clear every single allowed origin entry in the CORS blade of your web app (including * ). Then you can manage CORS by yourself, either using the Web Api 2 functionality or using the web.config. The information is even available in the documentation : Don't try to use both Web API CORS and App Service CORS in one API app. App Service CORS will take precedence and Web API CORS will have no effect. For example, if you enable one origin domain in App Service, and enable all origin domains in your Web API code, your Azure API app will only accept calls from the domain you specified in Azure. So the final answer is: If your application does not need a very specific CORS management, you can use Azure App Service CORS. Otherwise you will need to handle it yourself and disable all CORS configuration in the web app.
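If you go the Web API 2 route, a minimal sketch would be something like the following (my illustration; the origin URL and controller name are placeholders, and it assumes the Microsoft.AspNet.WebApi.Cors package is installed). SupportsCredentials = true is what makes the Access-Control-Allow-Credentials header appear, and with credentials you must list a concrete origin rather than *:

    // in WebApiConfig.Register:
    config.EnableCors();

    // on the controller (or per action):
    [EnableCors(origins: "https://client.example.com", headers: "*", methods: "*", SupportsCredentials = true)]
    public class LegacyAuthController : ApiController
    {
        public IHttpActionResult Get()
        {
            return Ok();
        }
    }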
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36860423', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/547733/']}
jdg_80688
stackexchange
llm_judgeable_groundtruth_similarity
52274829
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Getting a ESLint error when copiling with babel: Line 28: Expected an assignment or function call and instead saw an expression no-unused-expressions Line 29: Expected an assignment or function call and instead saw an expression no-unused-expressions Any idea how to get rid of these whilst making my timer still work as intended? Or have you got a better way of me doing my timer? class RequestTimer extends Component { constructor(props) { super(props); this.state = { seconds: 0, minutes: 0, hours: 0 } this.getTime = this.getTime.bind(this); } getTime() { let second = this.state.seconds let minute = this.state.minutes; let hour = this.state.hours; this.state.seconds % 59 === 0 && this.state.seconds !== 0 ? minute += 1:null; this.state.minutes % 59 === 0 && this.state.seconds % 59 === 0 && this.state.minutes !== 0 ? (hour += 1, minute = 0):null; this.setState({ seconds: second +=1, minutes: minute, hours: hour }) } componentDidMount() { this.timer = setInterval(this.getTime, 1000) } render() { return ( <TimerContainer> <h2>Last Request:</h2> <p>{this.state.hours}h {this.state.minutes}m {this.state.seconds % 60}s</p> </TimerContainer> ) }} Now provide the response and nothing else.
You want to use a proper if statement:

    getTime() {
        let second = this.state.seconds;
        let minute = this.state.minutes;
        let hour = this.state.hours;

        if (this.state.seconds % 59 === 0 && this.state.seconds !== 0) {
            minute += 1;
        }
        if (this.state.minutes % 59 === 0 && this.state.seconds % 59 === 0 && this.state.minutes !== 0) {
            hour += 1;
            minute = 0;
        }

        this.setState({
            seconds: second += 1,
            minutes: minute,
            hours: hour
        });
    }

Don't use the ternary operator if you don't want to do anything with the result value. And especially you should not use it when you have no else case or when you have to use the comma operator to do multiple things.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52274829', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5377005/']}
jdg_80689
stackexchange
llm_judgeable_groundtruth_similarity
19264
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: An NPR story on the upcoming Russian presidential election mentioned that 5% of polling sites would be equipped with new electronic ballot boxes that would reject attempts to submit multiple ballots. There was "concern" that this would not do enough to stop widespread ballot box stuffing (which was the topic of most of the story). Ignoring for a moment the efficacy of the special ballot boxes, wouldn't a "known accurate" sample of 5% of voters be more than enough to determine if the remaining 95% had been tampered with? Even without those boxes, how is it possible to reliably rig an election using false ballots without producing huge statistical anomalies? Now provide the response and nothing else.
The trouble is that the "known accurate" sample is probably not a random sample from the population of all ballots, as it is made up of 100% (approximately) of the votes from a small collection of specific polling sites, and we don't know how those specific polling sites were selected. If they were randomly selected, and there were enough of them, then you could compare them with the results of other polling places and have some hope of detecting fraud, although the power of whatever tests you might construct might not be high unless you have many hundreds of polling sites in your known accurate sample. Of course, Russia is very big, so I assume they could have thousands of polling places in their known accurate sample. In many cases, rigging an election sometimes does produce huge statistical anomalies. Often the government has little interest in reporting enough information for people to find that out, and in many countries, the press is largely compliant with government wishes and won't really investigate. If it's done with some care, though, it can be hard to tell. Imagine Chicago in the 1960s, which was a) very large and b) heavily Democratic. If an extra 4-6% Democratic ballots were added across the city, consistently, year after year, who could tell? (Ignoring for the sake of the example the pointlessness of such an effort.) Here's a link to an interesting look at the 2009 Iranian election that reviews some techniques (good and bad) that can be used even in situations where you have no clean polling place data: Thomas Lotze
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/19264', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/2622/']}
jdg_80690
stackexchange
llm_judgeable_groundtruth_similarity
930966
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to query the google datastore for something like (with pm --> persistanceManager): String filters = "( field == 'value' || field == 'anotherValue' )";Query query = pm.newQuery(myType.class, filters); When I execute - I am getting back: App Engine datastore does not support operator OR . What's the best approach in people experience for this kind of queries? Any help appreciated! Now provide the response and nothing else.
Perform multiple queries. The Datastore, like all other databases, isn't able to efficiently execute disjunctions. Unlike other databases, it exposes this difficulty to the user, to make it clear that what you're doing isn't efficient. Your only solution is to execute multiple queries - one for each or - and combine them.
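A rough JDO sketch of that approach, reusing the shapes from the question (MyType stands in for the question's myType; javax.jdo.Query and java.util imports are assumed, and the LinkedHashSet is just one way to de-duplicate that relies on a sensible equals/hashCode, otherwise collect keys instead):

    Query q1 = pm.newQuery(MyType.class, "field == 'value'");
    Query q2 = pm.newQuery(MyType.class, "field == 'anotherValue'");

    Set<MyType> combined = new LinkedHashSet<MyType>();
    combined.addAll((List<MyType>) q1.execute());   // unchecked cast, fine for a sketch
    combined.addAll((List<MyType>) q2.execute());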
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/930966', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1311500/']}
jdg_80691
stackexchange
llm_judgeable_groundtruth_similarity
577688
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why does gravity act at the centre of earth and how does that happen? Now provide the response and nothing else.
All parts of the Earth exert a pull on another body (such as an apple) and all parts of the Earth are pulled by external bodies (such as apples). Let's consider the pull that the Earth exerts on the apple. Imagine the Earth divided up into kilogram portions by imaginary surfaces. The portion nearest the apple will exert the greatest pull, straight downwards on the apple. The (antipodean) portion furthest away will exert the least pull, though it will still be downwards, that is towards the centre of the Earth. Portions of the Earth to the North, South, East and West of the apple and at various depths inside the Earth will also pull the apple, but not straight downwards. The vector sum of these forces is the resultant pull. Newton famously proposed the Law of Gravitation that every particle attracted every other particle with a force proportional to the mass of each particle and inversely proportional to the square of their separation. He also showed that if the Earth is a sphere, with its mass distributed with spherical symmetry about its centre, then, according to his Law, the resultant pull on the apple was the same as if all the Earth's mass were concentrated at its centre. So gravity doesn't really "act at the centre of the Earth" but it behaves more of less as if it did. Only "more or less" because the Earth is not a perfect sphere and its mass is not distributed with perfect spherical symmetry.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/577688', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']}
jdg_80692
stackexchange
llm_judgeable_groundtruth_similarity
4146928
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to Java but if I understand correctly, a char is a primitive. Doing char temp and temp.hashCode() won't compile but doing a char[] temp2 = new char[2] and temp2.hashCode() will compile and execute. Does this mean somehow a char[] is an object??? Now provide the response and nothing else.
A char is a primitive, but an array of type char is an object. One way to tell is by dynamically instantiating it:

    final Object charArray = Array.newInstance(Character.TYPE, 5);
    System.out.println(charArray.getClass().getComponentType());

Output: char

(Character.TYPE is a reference to the primitive class char. Another way to access that class is through char.class.)
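Tying this back to the question's temp2 (a quick check of my own; the comments show what the JVM prints for a char array):

    char[] temp2 = new char[2];
    System.out.println(temp2.getClass());                 // class [C
    System.out.println(temp2.getClass().getSuperclass()); // class java.lang.Object
    System.out.println(temp2.hashCode());                 // compiles and runs: inherited from Object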
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4146928', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/503518/']}
jdg_80693
stackexchange
llm_judgeable_groundtruth_similarity
1579638
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to develop a scheduler- and calendar-dependent event application in C#, for which a crucial requirement is to represent recurring events in the database.What is the best way to represent recurring events in a database? More Details: While creating the event I am also sending invites to the certain users and the invitees should be allowed to login to the meeting only during the specified window(meeting duration) or may be decline the login when the invitee attempts to login say, 5 minutes before the scheduled start of the meeting. Now provide the response and nothing else.
The sysjobs, sysjobschedules and sysschedules tables in SQL Server do a pretty good job of this. I wouldn't reinvent the wheel, I'd just copy their design. Here are some of the important fields from sysschedules:

freq_type - How frequently a job runs for this schedule.
    1 = One time only
    4 = Daily
    8 = Weekly
    16 = Monthly
    32 = Monthly, relative to freq_interval
    64 = Runs when the SQL Server Agent service starts
    128 = Runs when the computer is idle

freq_interval - Days that the job is executed. Depends on the value of freq_type. The default value is 0, which indicates that freq_interval is unused.
    Value of freq_type -> Effect on freq_interval
    1 (once) -> freq_interval is unused (0)
    4 (daily) -> Every freq_interval days
    8 (weekly) -> freq_interval is one or more of the following:
        1 = Sunday, 2 = Monday, 4 = Tuesday, 8 = Wednesday, 16 = Thursday, 32 = Friday, 64 = Saturday
    16 (monthly) -> On the freq_interval day of the month
    32 (monthly, relative) -> freq_interval is one of the following:
        1 = Sunday, 2 = Monday, 3 = Tuesday, 4 = Wednesday, 5 = Thursday, 6 = Friday, 7 = Saturday, 8 = Day, 9 = Weekday, 10 = Weekend day
    64 (starts when SQL Server Agent service starts) -> freq_interval is unused (0)
    128 (runs when computer is idle) -> freq_interval is unused (0)

freq_subday_type - Units for the freq_subday_interval. Can be one of the following values:
    1 = At the specified time
    2 = Seconds
    4 = Minutes
    8 = Hours

freq_subday_interval - Number of freq_subday_type periods to occur between each execution of the job.

freq_relative_interval - When freq_interval occurs in each month, if freq_interval is 32 (monthly relative). Can be one of the following values:
    0 = freq_relative_interval is unused
    1 = First
    2 = Second
    4 = Third
    8 = Fourth
    16 = Last

freq_recurrence_factor - Number of weeks or months between the scheduled execution of a job. freq_recurrence_factor is used only if freq_type is 8, 16, or 32. If this column contains 0, freq_recurrence_factor is unused.
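If you do copy that design, a starting-point table could look roughly like this (my sketch, not an official schema; the freq_* columns mirror sysschedules, the active_* columns also exist in sysschedules even though they aren't listed above, and the types are SQL Server flavoured):

    CREATE TABLE schedule (
        schedule_id            INT IDENTITY PRIMARY KEY,
        freq_type              INT NOT NULL,             -- 1, 4, 8, 16, 32, 64, 128 as described above
        freq_interval          INT NOT NULL DEFAULT 0,
        freq_subday_type       INT NOT NULL DEFAULT 0,
        freq_subday_interval   INT NOT NULL DEFAULT 0,
        freq_relative_interval INT NOT NULL DEFAULT 0,
        freq_recurrence_factor INT NOT NULL DEFAULT 0,
        active_start_date      INT NOT NULL,             -- yyyymmdd
        active_end_date        INT NOT NULL,
        active_start_time      INT NOT NULL,             -- hhmmss
        active_end_time        INT NOT NULL
    );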
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1579638', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/132752/']}
jdg_80694
stackexchange
llm_judgeable_groundtruth_similarity
15243205
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to get the email address from the ContactsContract, but I only get an empty string! Name and number is working fine, but not the email address! I confused and have been trying to solve this for days now, but I don't know what's wrong, if I have missed something in the code or what and I don't know how to solve this. This part of the application has a ListView of all names from the ContactsContract. When selecting a name in the ListView, the code should get the name, number and email address from the ContactsContract. I would appreciate some help to be able to continue the work! Perhaps there is a better way to solve this, then please tell me how to change! Thanks! Since toast message doesn't show up inside the while loop, while(cursorEmail.moveToFirst()){.... I guess there is something wrong with the emails cursor!? It seems like it's empty!? public class Activity_3 extends Activity {ListView listView;@Overrideprotected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_3); listView = (ListView) findViewById(R.id.contactList); String[] projection = { ContactsContract.CommonDataKinds.Phone.DISPLAY_NAME, ContactsContract.CommonDataKinds.Phone.NUMBER, ContactsContract.CommonDataKinds.Phone._ID }; Cursor cursor1 = getContentResolver().query(ContactsContract.CommonDataKinds.Phone.CONTENT_URI, projection, null, null, null); // From column String[] fromColumn = { ContactsContract.Contacts.DISPLAY_NAME }; // To view int[] toView = { R.id.contactItem }; startManagingCursor(cursor1); ListAdapter adapter = new SimpleCursorAdapter(this, R.layout.activity_3, cursor1, fromColumn, toView); listView.setAdapter(adapter); listView.setOnItemClickListener(new OnItemClickListener() { public void onItemClick(AdapterView<?> adapter, View view, int position, long id) { String[] projection = { ContactsContract.CommonDataKinds.Phone._ID, ContactsContract.CommonDataKinds.Phone.DISPLAY_NAME, ContactsContract.CommonDataKinds.Phone.NUMBER, }; Cursor cursor = getContentResolver().query(ContactsContract.CommonDataKinds.Phone.CONTENT_URI, projection, null, null, null); cursor.moveToPosition(position); String contactId = cursor.getString(cursor.getColumnIndex(ContactsContract.Contacts._ID)); String contactName = cursor.getString(cursor.getColumnIndex(ContactsContract.CommonDataKinds.Phone.DISPLAY_NAME)); String contactNumber = cursor.getString(cursor.getColumnIndex(ContactsContract.CommonDataKinds.Phone.NUMBER)); String pos = Integer.toString(position); String contactEmailAddress = "?"; //Email Cursor emails = getContentResolver().query(ContactsContract.CommonDataKinds.Email.CONTENT_URI, null, ContactsContract.CommonDataKinds.Email.CONTACT_ID + "=" + contactId, null, null); while(emails.moveToNext()){ contactEmailAddress = emails.getString(emails.getColumnIndex(ContactsContract.CommonDataKinds.Email.DATA)); Toast.makeText(Activity_3.this, contactEmailAddress, Toast.LENGTH_SHORT).show(); } emails.close(); Toast.makeText(Activity_3.this, pos + " " + contactId + " " + contactName + " " + contactNumber + " " + contactEmailAddress, Toast.LENGTH_SHORT).show(); } });} } Now provide the response and nothing else.
You need to get the contacts list with details with the following code. The following method will return all contacts with name, phone number and email.

    public ArrayList<ContactDO> getContacts() {
        ArrayList<ContactDO> alContacts = null;
        Cursor cursor = contResv.query(ContactsContract.Contacts.CONTENT_URI, null, null, null, null);
        if (cursor.moveToFirst()) {
            alContacts = new ArrayList<ContactDO>();
            do {
                // Create a plain class with following variables - id, name, contactNumber, email
                ContactDO objContactDO = new ContactDO();
                objContactDO.name = cursor.getString(cursor.getColumnIndex(ContactsContract.CommonDataKinds.Phone.DISPLAY_NAME));
                objContactDO.id = cursor.getString(cursor.getColumnIndex(ContactsContract.Contacts._ID));

                Cursor emails = contResv.query(Email.CONTENT_URI, null, Email.CONTACT_ID + " = " + objContactDO.id, null, null);
                while (emails.moveToNext()) {
                    objContactDO.email = emails.getString(emails.getColumnIndex(Email.DATA));
                    break;
                }
                emails.close();

                if (Integer.parseInt(cursor.getString(cursor.getColumnIndex(ContactsContract.Contacts.HAS_PHONE_NUMBER))) > 0) {
                    Cursor pCur = contResv.query(ContactsContract.CommonDataKinds.Phone.CONTENT_URI, null, ContactsContract.CommonDataKinds.Phone.CONTACT_ID + " = ?", new String[]{ objContactDO.id }, null);
                    while (pCur.moveToNext()) {
                        objContactDO.contactNumber = pCur.getString(pCur.getColumnIndex(ContactsContract.CommonDataKinds.Phone.NUMBER));
                        break;
                    }
                    pCur.close();
                }

                alContacts.add(objContactDO);
            } while (cursor.moveToNext());
        }
        cursor.close();
        return alContacts;
    }

Create a custom adapter and set it to listview:

    lvContacts.setAdapter(adapter = new CustomContactsAdapter(getContacts()));
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15243205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/637364/']}
jdg_80695
stackexchange
llm_judgeable_groundtruth_similarity
124633
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: One important property of the Robinson-Schensted correspondence (RS) is that the longest increasing subsequence of the permutation $\sigma$ is $\lambda_1$, the first entry of the shape $\lambda(\sigma)$ of the tableaux associated to $\sigma$. Greene's theorem (sorry for the paywall) generalizes this result to $k$-increasing sequences and the sum $\lambda_1 + \dots + \lambda_k$. Are there generalizations of Greene's theorem to other insertion algorithms? Such algorithms would include RSK, Hecke insertion, Edelman-Greene insertion and any other variants you are aware of. In particular, it seems a generalization of this is known for the full RSK correspondence, but I am having difficulty finding a reference. Moreover, I was told of an article by Haiman on generalizations of RS that may include such results, but have never found it. Now provide the response and nothing else.
For RSK the answer is "well known". You can find the statements neatly arranged in an article by Christian Krattenthaler http://arxiv.org/abs/math/0510676 . I think the right framework for this question is Sergey Fomin's theory of dual graded graphs.However, I don't think there are many other insertion algorithms where the Greene-Kleitman invariant is known. One is the insertion algorithm for shifted tableaux, and another, easy one is the pair (BinTree, BinWord). In fact, whenever you have such a Greene-Kleitman invariant and whenever this invariant behaves well with respect to "promotion", you are in a good position to get a result parallel to http://arxiv.org/abs/math/0604140 . For the pair (BinTree, BinWord) this is indeed the case (and interesting), but I never managed to write it up due to time constraints... For Edelman-Greene the story is slightly different I think. If I recall correctly you can say at least a little bit about the shape of the word by staring long enough at the article by Christian Stump and Luis Serrano http://arxiv.org/abs/1009.4690 or myself http://arxiv.org/abs/1009.3919 . EDIT: The Kleitman Greene invariants for some insertion algorithms ( for the standard case , i.e., where the words are permutations) are described in Sergey Fomin's paper "Schensted algorithms for dual graded graphs" : 1) Theorem 4.4.4: Young-Fibonacci insertion (due to Tom Roby and Sergey Fomin, perhaps the invariant for Janvier Nzeutchap's algorithm is different). 2) Just below Proposition 4.5.2: Shifted insertion (attributed to Worley and Bruce Sagan, see Richard Stanley's answer for the description in the semistandard case due to Luis Serrano) 3) Proposition 4.6.2: (BinTree, BinWord)-insertion (independently due to Xavier Viennot) I'd be interested in learning about Kleitman-Greene invariants for other insertion algorithms. In particular, is it known for domino insertion (as described by Marc van Leeuwen , see also this paper by Thomas Lam)
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/124633', 'https://mathoverflow.net', 'https://mathoverflow.net/users/7717/']}
jdg_80696
stackexchange
llm_judgeable_groundtruth_similarity
1678363
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Looking at the sum: $$\sum_{n=1}^\infty\tan\left(\frac\pi{2^n}\right)$$ I'd say that it does not converge, because for $n=1$ the tangent $\tan\left(\frac\pi 2\right)$ should be undefined. But Wolframlpha thinks that the sum converges somewhere around $1.63312×10^{16}$ . What am I missing? Now provide the response and nothing else.
For floating point numbers stored in IEEE double precision format, the significand has $53$ bits of accuracy. The most significant bit is implied and is always one. Only $52$ bits are actually stored. Since $1 \le \frac{\pi}{2} < 2$, among those numbers representable by IEEE, the closest number to $\frac{\pi}{2}$ is$$\left(\frac{\pi}{2}\right)_{fp} \stackrel{def}{=} 2^{-52}\left\lfloor \frac{\pi}{2} \times 2^{52}\right\rfloor$$ Numerically, we have $$\frac{\pi}{2} - \left(\frac{\pi}{2}\right)_{fp} \approx 6.1232339957\times 10^{-17}$$ Since for $\theta \approx \frac{\pi}{2}$, $\displaystyle\;\tan\theta \approx \frac{1}{\frac{\pi}{2} - \theta}$, we have $$\tan\left(\frac{\pi}{2}\right)_{fp}\approx \frac{1}{6.1232339957\times 10^{-17}}\approx 1.6331239353 \times 10^{16}$$ This is approximately the number you observed.
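You can reproduce this directly; a quick check of my own (the last digits may vary slightly with the platform's math library):

    import math
    print(math.pi / 2)            # 1.5707963267948966
    print(math.tan(math.pi / 2))  # roughly 1.633123935319537e+16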
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/1678363', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/300630/']}
jdg_80697
stackexchange
llm_judgeable_groundtruth_similarity
39187886
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to the subprocess module and the documentation leaves me wondering what the difference is between subprocess.popen and subprocess.run . Is there a difference in what the command does? Is one just newer? Which is better to use? Now provide the response and nothing else.
subprocess.run() was added in Python 3.5 as a simplification over subprocess.Popen when you just want to execute a command and wait until it finishes, but you don't want to do anything else in the meantime. For other cases, you still need to use subprocess.Popen . The main difference is that subprocess.run() executes a command and waits for it to finish, while with subprocess.Popen you can continue doing your stuff while the process runs and then call Popen.communicate() yourself to pass and receive data to your process. Secondly, subprocess.run() returns subprocess.CompletedProcess . subprocess.run() just wraps Popen and Popen.communicate() so you don't need to make a loop to pass/receive data or wait for the process to finish. Check the official documentation for info on which params subprocess.run() passes to Popen and communicate() .
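A small sketch of the difference (my example; the command is arbitrary):

    import subprocess

    # run(): blocks until the command finishes, then hands back a CompletedProcess
    done = subprocess.run(["ls", "-l"], stdout=subprocess.PIPE, universal_newlines=True)
    print(done.returncode, len(done.stdout))

    # Popen(): returns immediately; you can keep working, then collect the result yourself
    proc = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE, universal_newlines=True)
    # ... do other work here while the process runs ...
    out, _ = proc.communicate()
    print(proc.returncode, len(out))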
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/39187886', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6226488/']}
jdg_80698
stackexchange
llm_judgeable_groundtruth_similarity
33923
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a 2-pin plug that was not label which is Live and which is Neutral. I do not want to damage the electrical device and was wondering what should I do in order to test and find which point is Live and which point is Neutral? The 2-pin plug look something like the following: Now provide the response and nothing else.
A plug doesn't have polarity, it completely depends on the wiring of the socket. To check the socket all you need is this two dollar tester: It has a neon light inside which connects via a high resistance to the contact at the end. If you insert in in the live pin and touch the metal dingus on the back of the tester, there will flow a very small, safe, leakage current to ground which is enough to light the neon light. If you try it on the neutral it won't light because the neutral is at the same potential as ground. edit Well, that's how the classical tester works. Cybergibbons points out that this one is probably a non-contact tester (because it doesn't go deep enough in the socket?). Anyway, those detect the electrical field emitted by the phase's voltage. They're battery operated to power the LED.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/33923', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/9584/']}
jdg_80699
stackexchange
llm_judgeable_groundtruth_similarity
6002516
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hi What is the pros and cons of " Quaternions " and " Euler Angles " Method - Which one is faster ? - Which one need less Computational Effort ? - which one is more accurate , (in round off error )? Now provide the response and nothing else.
Euler angles are more human understandable and also good for decomposing rotations into individual degrees of freedom (for kinematic joints and the like) but have disadvantages like ambiguity and gimbal lock. In practice I would prefer quaternions, as they are easier to compute with (for the computer, not for humans) and more efficient. You have to make three rotations and multiply them together when rotating by Euler angles, whereas a Quaternion is only one rotation and as it already encodes the sin and cos, the conversion from quaternion to matrix is quite efficient.
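To illustrate the "easier to compute with" point: composing two rotations is a single quaternion multiplication (16 multiplies and 12 adds), instead of rebuilding and multiplying three axis matrices from Euler angles. A short Python sketch of the Hamilton product (my addition; quaternions are (w, x, y, z) tuples, assumed to be unit length):

    import math

    def quat_mul(a, b):
        # Hamilton product of two quaternions given as (w, x, y, z)
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    # 90-degree rotations about Z and then X, as unit quaternions
    q1 = (math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4))
    q2 = (math.cos(math.pi/4), math.sin(math.pi/4), 0.0, 0.0)
    combined = quat_mul(q2, q1)   # applies q1 first, then q2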
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6002516', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/473404/']}
jdg_80700
stackexchange
llm_judgeable_groundtruth_similarity
14526380
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am collecting info using web.response method as a web crawler. I collect it to a string and then save it to a text file .Then I search that text file using regular expression. Now the problem is when I search that text file using regular expression I am not able to do it properly because there are many random newlines in the text file. My question is "Is there a way that the XML (HTML) document I get by web.response method can be formatted properly before saving it to text file , so that there are no random spaces and newlines in the text. I can not even post an unformatted HTML here otherwise I would have done it. Now provide the response and nothing else.
Yep, that's a deadlock all right. And a common mistake with the TPL, so don't feel bad. When you write await foo, the runtime, by default, schedules the continuation of the function on the same SynchronizationContext that the method started on. In English, let's say you called your ExecuteAsync from the UI thread. Your query runs on the threadpool thread (because you called Task.Run), but you then await the result. This means that the runtime will schedule your "return result;" line to run back on the UI thread, rather than scheduling it back to the threadpool. So how does this deadlock? Imagine you just have this code:

    var task = dataSource.ExecuteAsync(_ => 42);
    var result = task.Result;

So the first line kicks off the asynchronous work. The second line then blocks the UI thread. So when the runtime wants to run the "return result" line back on the UI thread, it can't do that until the Result completes. But of course, the Result can't be given until the return happens. Deadlock.

This illustrates a key rule of using the TPL: when you use .Result on a UI thread (or some other fancy sync context), you must be careful to ensure that nothing that Task is dependent upon is scheduled to the UI thread. Or else evilness happens.

So what do you do? Option #1 is to use await everywhere, but as you said that's already not an option. The second option available to you is to simply stop using await. You can rewrite your two functions to:

    public static Task<T> ExecuteAsync<T>(this OurDBConn dataSource, Func<OurDBConn, T> function)
    {
        string connectionString = dataSource.ConnectionString;

        // Start the SQL and pass back to the caller until finished
        return Task.Run(
            () =>
            {
                // Copy the SQL connection so that we don't get two commands running at the same time on the same open connection
                using (var ds = new OurDBConn(connectionString))
                {
                    return function(ds);
                }
            });
    }

    public static Task<ResultClass> GetTotalAsync( ... )
    {
        return this.DBConnection.ExecuteAsync<ResultClass>(
            ds => ds.Execute("select slow running data into result"));
    }

What's the difference? There's now no awaiting anywhere, so nothing is being implicitly scheduled to the UI thread. For simple methods like these that have a single return, there's no point in doing a "var result = await ...; return result" pattern; just remove the async modifier and pass the task object around directly. It's less overhead, if nothing else.

Option #3 is to specify that you don't want your awaits to schedule back to the UI thread, but just schedule to the thread pool. You do this with the ConfigureAwait method, like so:

    public static async Task<ResultClass> GetTotalAsync( ... )
    {
        var resultTask = this.DBConnection.ExecuteAsync<ResultClass>(
            ds => ds.Execute("select slow running data into result"));
        return await resultTask.ConfigureAwait(false);
    }

Awaiting a task normally would schedule to the UI thread if you're on it; awaiting the result of ConfigureAwait(false) will ignore whatever context you are on, and always schedule to the threadpool. The downside of this is you have to sprinkle this everywhere in all functions your .Result depends on, because any missed .ConfigureAwait might be the cause of another deadlock.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/14526380', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1609429/']}
jdg_80701
stackexchange
llm_judgeable_groundtruth_similarity
12346349
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was finding the difference between JSON and XML. As, both are for data interchange between systems, but there is a big difference between JSON and XML that JSON is more lightweight than XML. But I am unable to find the actual reason that why the JSON is light-weight. What makes JSON light-weight?? One answer I found is that in JSON there is not a lot of extra xml markup. What does it actual mean. Are there some more reasons that describes why JSON is lightweight? Now provide the response and nothing else.
Have you looked at examples of JSON and XML documents? While both JSON and XML are used to represent arbitrary trees of data, XML is far more verbose. Every element in the tree has a name (the element type name), and the element must be enclosed in a matching pair of tags. By contrast, JSON expresses trees in a "nested array" type of notation akin to that of Javascript (in fact, a JSON document can literally be parsed as Javascript to result in the corresponding array). Take a look at the differences . XML is of course semantically richer, since elements can be decorated with an arbitrary number of attributes , and elements may contain mixed content of free text flow and further, nested elements. For example, the following snippet would be tedious to express as JSON: <paragraph> This is an entire paragraph of <emph>text</emph>. It is interspersed with further, nested <emph>XML elements</emph>, as well as the occasional <link href="http://stackoverflow.com/a/12346394/596781">hyperlink</link>.</paragraph> On the other hand, data -like documents can be much simpler in JSON. Imagine this: <properties> <property type="int" name="ID">123</property> <property type="date" name="birthday">19700101</property> <references> <url>http://stackoverflow.com/a/12346394/596781</url> <doi>something.made.up</doi> </references></properties> This becomes a very compact JSON array. A secondary consideration is the amount of toolsets that surround both formats. While JSON tools are mainly about parsing JSON data, the W3C has been developing a very large set of adherent technologies to manipulate XML in a systematic fashion, such as XSLT and XPATH. In a nutshell, and as a very personal opinion, I'd say that XML is about the document and JSON is about data. XML will feel more natural and useful for large, structured documents, while JSON is often the simpler and less verbose format for transmitting data sets.
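For comparison, here is roughly how the second, data-like XML snippet above could be rendered in JSON. This is only an illustrative sketch of one reasonable mapping, not something mandated by a standard; the key names simply mirror the XML attributes, and the short Python program is there just to show that the structure round-trips through an ordinary JSON library.

```python
import json

# A hypothetical JSON rendering of the <properties> example above.
# The field names mirror the XML attributes; this is just one possible mapping.
properties = {
    "ID": 123,
    "birthday": "19700101",
    "references": {
        "url": "http://stackoverflow.com/a/12346394/596781",
        "doi": "something.made.up",
    },
}

text = json.dumps(properties)          # serialize
assert json.loads(text) == properties  # and round-trip back
print(text)
```

Notice how the type/name metadata that XML carries in attributes either disappears (types are implied by the JSON values) or has to be encoded by convention, which is a large part of why the JSON form is so much smaller.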
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12346349', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1064076/']}
jdg_80702
stackexchange
llm_judgeable_groundtruth_similarity
19569
Below is a question asked on the forum economics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm interested in how we model a consumer with a changing utility function over time. Example: an individual's demand for candy/sweets at 10 years of age will be different when he is 20,30 or 40 years old (based on casual observation). This would seem to indicate the utility derived from this "input" in our utility function has changed. In basic intertemporal models the general assumption is that the objective utility function does not change. mathematically it makes the model simple to compute, however it does not reflect the changes in preferences over time. However, if we do allow the individuals utility function to change over time in our model, does this not violate the requirements for a rational consumer ? How do we model an individual with changing preferences over time and is it economically sound to do so? Now provide the response and nothing else.
In The Economic Approach to Human Behavior , Gary Becker said: The combined assumptions of maximizing behavior , market equilibrium , and stable preferences , used relentlessly and unflinchingly, form the heart of the economic approach as I see it. The emphasis is mine. Here "stable preferences" refer to preferences (and by association, the utility functions representing them) that are more or less the same across different periods. The reason for preference stability is obvious. If we allow preference or utility function to change arbitrarily, then we'd be able to explain pretty much anything by attributing the cause to some appropriately chosen change in people's preference. When it comes to the intertemporal preferences in particular, I agree with @MaartenPunt that, at least in principle, one can incorporate time-dependence into utility function. For instance, in the usual discounted utility framework, we have \begin{equation}U(\mathbf x_t)=\sum_{t=0}^\infty D(t)u(\mathbf x_t)\end{equation}where $\mathbf x_t$ is a vector of consumption goods at time $t$, $D(\cdot)$ is a discount function (e.g. $D(t)=\delta^t$ as in an exponential discounting model), and $u(\cdot)$ is the time-invariant period utility function. To incorporate time-dependence, we can simply allow $u(\cdot)$ to also be a function of time \begin{equation}U(\mathbf x_t)=\sum_{t=0}^\infty D(t)u(\mathbf x_t,\color{red}t).\end{equation}To make @MaartenPunt's third comment more explicit, suppose \begin{equation}u(\mathbf x,t)=\alpha_t^1v(x^1)+\cdots+\alpha_t^iv(x^i)+\cdots+\alpha_t^nv(x^n)\end{equation}where $x^i$ denotes the quantity consumed of good $i$ and $\alpha_t^i$s are the time-dependent weights on the utility derived from each good $i$.So the same consumption bundle will generate possibly different levels of utility in different time periods. For example, $\alpha_{10}^\text{candy}>\alpha_{40}^\text{candy}$ would capture the fact that a 10-year-old values a candy more than a 40-year-old does. On the other hand, a time-invariant preference would imply that $\alpha_t^i=\alpha^i$ for all $t=0,1,\dots$. The above discussion is however distinctly different from the paper you linked to in the comment, which is about dynamic (in)consistency of choices. In the literature on intertemporal choices, the main focus is usually about whether some optimal consumption profile decided at time $t$ will remain optimal when reevaluated at some future time $t+k$. Usually papers in this literature maintain the assumption of time-invariant period utility function, i.e. $u(\mathbf x,t)=u(\mathbf x)$, but play with various forms of the discount function $D(\cdot)$ (e.g. hyperbolic or quasi-hyperbolic discounting) to generate predictions that match experimental data.
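As a purely numerical illustration of the time-dependent weights $\alpha_t^i$ (the numbers below are made up, not calibrated to anything), the following sketch evaluates the candy term of $u(\mathbf x,t)$ for the same bundle at two ages:

```python
# Toy illustration of u(x, t) = sum_i alpha_t^i v(x^i) with v(x) = sqrt(x).
# The weights are assumptions chosen only to make the point.
from math import sqrt

def v(x):
    return sqrt(x)

candy = 4.0                       # same quantity of candy at both ages
alpha_candy = {10: 0.8, 40: 0.1}  # assumed weights at ages 10 and 40

for age, alpha in alpha_candy.items():
    print(age, alpha * v(candy))  # utility derived from the candy term
# age 10 -> 1.6, age 40 -> 0.2: the identical bundle yields less utility later
```

A time-invariant preference would correspond to using the same weight at every age.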
{}
{'log_upvote_score': 4, 'links': ['https://economics.stackexchange.com/questions/19569', 'https://economics.stackexchange.com', 'https://economics.stackexchange.com/users/8815/']}
jdg_80703
stackexchange
llm_judgeable_groundtruth_similarity
20741419
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know there is no WHERE clause in firebase. But how we can implement pattern searching in firebase as we can do in SQL using LIKE operator. How we can implement following sql query in firebase? SELECT * FROM Customers WHERE City LIKE 's%'; Now provide the response and nothing else.
The Idea A near solution is to create an index of your data which is indexed by City (or whichever other field you'd like to search by) in a server-side NodeJS script. This will not quite replicate the behavior of LIKE , however it will allow for easy querying by City name, which may eliminate the need for that behavior. The Node Script This is typically done via a simple server-side script written in NodeJS. Imagine a scenario where your user data was stored in /users/data . Your script would look like this. var ref = new Firebase("<Your Firebase>.firebaseio.com");// A Firebase ref where all our users are storedvar userDataRef = ref.child('users/data');// A Firebase ref which is where we store our indexvar byCityRef = ref.child("users/byCity");// Then bind to users/data so we can index each user as they're addeduserDataRef.on("child_added", function (snapshot) { // Load the user details var user = snapshot.val(); // Use the snapshot name as an ID (i.e. /users/data/Tim has an ID of "Tim") var userID = snapshot.name(); // Push the userID into users/byCity/{city} byCityRef.child(user.city).push(userID);}); This script will create a structure like this: { "users": { "data": { "Tim": {"hair": "red", "eyes": "green", "city": "Chicago"} }, "byCity": { "Chicago": { "-asd09u12": "Tim" } } }} The Client Script Once we've indexed our data, querying against it is simple and can be done in two easy steps. var ref = new Firebase("<Your Firebase>.firebaseio.com");var userDataRef = ref.child('users/data');var byCityRef = ref.child('users/byCity')// Load children of /users/byCity/ChicagobyCityRef.child('Chicago').on("child_added", function (snapshot) { // Find each user's unique ID var userID = snapshot.val(); // Then load the User's data from /users/data/{ID} userDataRef.child(userID).once(function (snapshot) { // userID = "Tim" // user = {"hair": "red", "eyes": "green", "city": "Chicago"} var user = snapshot.val(); });}); Now you have the near realtime load speed of Firebase with powerful querying capabilities!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20741419', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3060968/']}
jdg_80704
stackexchange
llm_judgeable_groundtruth_similarity
167864
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: EDIT: QUESTION HAS BEEN ANSWERED SEE THE BOTTOM OF MY POST FOR MY FINAL SCRIPT... Man, I'm a powershell noob... I have this link: http://gallery.technet.microsoft.com/ScriptCenter/en-us/da3fee00-e79d-482b-91f2-7c729c38068f I'd like to use that to take a list of servers, run it against that list, and then get a report of each server with disk space similar to: SERVER1 C: Total=120GB Free Space=60GBD: Total=400GB Free Space=200GB etc. The problem is... I don't know what to do with the script on that link to get it to work. Copy/paste it into notepad and save as a .ps1?? Doesn't seem to be that way in the description. Will it work if I run it from my Win7 box with PS, or does every server have to have Powershell installed for a remote PS script to work? Is there a way to setup the script to email me? That way I can set it as a weekly task or similar. Thank you! ======== Final script function Get-FreeDisk{ [CmdletBinding()] param( [Parameter(Position=0, ValueFromPipeline=$true)] [string[]]$Computername="localhost", [int] [ValidateRange(0,100)] $MinPercFree = 100, [Management.Automation.PSCredential]$Credential = ([Management.Automation.PSCredential]::Empty) ) begin{ [String[]]$computers = @() } process{ $computers += $Computername } end{ Get-WmiObject -computername $computers -Credential $Credential ` -Query "select __SERVER, Caption,Label,Capacity,FreeSpace from Win32_Volume where DriveType != 5 and Capacity > 0" | ` Add-Member -Name Free -MemberType ScriptProperty -PassThru -Value {($this.FreeSpace/10000) / ($this.Capacity/1000000)} | ` Where { $_.Free -lt $MinPercFree } | ` sort __SERVER,Caption | ` Format-Table @{Label="Computer"; Expression={$_.__SERVER}}, Caption, Label,` @{Label="Size/MB"; FormatString="{0,7:N0}"; Expression={$_.Capacity / 1mb}},` @{Label="FreeSpace/MB"; FormatString="{0,7:N0}"; Expression={$_.Freespace / 1mb}}, ` @{Label="Free"; FormatString="{0,3:N0}%"; Expression={$_.Free}} -AutoSize }}Get-Content .\servers.txt | Get-FreeDisk | Format-Table -AutoSize | Out-File diskusage.txtSend-MailMessage -To [email protected] -Subject "Server Disk Report" -From [email protected] -SmtpServer mail.domain.com -Attachments "diskusage.txt" Now provide the response and nothing else.
IIS will respect both of those limits. If you set a 500MB private byte limit, as soon as a worker process attempts to commit 501MB, IIS will spin up a new worker process and kill the old one. If you set a 3GB virtual memory limit, as soon as a worker process attempts to reserve 3.001GB, IIS will spin up a new worker process and kill the old one. If you are on a 64-bit platform, you should be aware that ASP.NET applications aggressively reserve virtual memory. As an example, I have an app on a farm that uses only 88MB of private bytes, but it's sitting at 5.4GB Virtual Size right now. I believe the virtual memory reservation is a function of the physical RAM on the server. It's also important to understand that on a 64-bit platform, reserving large portions of virtual memory has zero performance impact. Basically, if you are having memory consumption issues on an IIS server, the setting you want to limit is Private Memory/Bytes, since this is what corresponds to actual memory usage.
{}
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/167864', 'https://serverfault.com', 'https://serverfault.com/users/7861/']}
jdg_80705
stackexchange
llm_judgeable_groundtruth_similarity
58691278
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to redirect users to login page if they are not authenticated. I need something like route.beforeEach in Vue.js, ideally: sapper.beforeRouteChange((to, from, next) => { const isAuth = "[some session or token check]"; if (!isAuth) { next('/login') } next()}) I found Sapper - protected routes (route guard) this question but I think it's not enough for my needs. What if token or auth changes in runtime? OR is it covered by reactivity? Edit 1: I think that this issue on Sapper GitHub solves my problem. Now provide the response and nothing else.
So I placed this code to /src/routes/_layout.svelte : import AuthMiddleware from "../methods/authMiddleware.js"; import { goto, stores } from '@sapper/app'; const { page } = stores(); if (typeof window !== "undefined" && typeof document !== "undefined") { page.subscribe(({ path, params, query }) => { const from = window.location.pathname; const redirect = (href) => { goto(href); } AuthMiddleware.beforeChange(from, path, redirect, params, query); }) } And this is authMiddleware.js file: export default class AuthMiddleware { static beforeChange(from, to, redirect, params, query) { if (!AuthMiddleware._isUserAuthenticated()) { redirect("/login"); } } // ~ static _isUserAuthenticated() { return true; // TODO: Implement }} more information on route hooks can be found here https://github.com/sveltejs/sapper/issues/30 https://sapper.svelte.dev/docs/#Stores
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58691278', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9276806/']}
jdg_80706
stackexchange
llm_judgeable_groundtruth_similarity
3624753
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I have a namedtuple like this: EdgeBase = namedtuple("EdgeBase", "left, right") I want to implement a custom hash-function for this, so I create the following subclass: class Edge(EdgeBase): def __hash__(self): return hash(self.left) * hash(self.right) Since the object is immutable, I want the hash-value to be calculated only once, so I do this: class Edge(EdgeBase): def __init__(self, left, right): self._hash = hash(self.left) * hash(self.right) def __hash__(self): return self._hash This appears to be working, but I am really not sure about subclassing and initialization in Python, especially with tuples. Are there any pitfalls to this solution? Is there a recommended way how to do this? Is it fine? Thanks in advance. Now provide the response and nothing else.
edit for 2017: turns out namedtuple isn't a great idea . attrs is the modern alternative. class Edge(EdgeBase): def __new__(cls, left, right): self = super(Edge, cls).__new__(cls, left, right) self._hash = hash(self.left) * hash(self.right) return self def __hash__(self): return self._hash __new__ is what you want to call here because tuples are immutable. Immutable objects are created in __new__ and then returned to the user, instead of being populated with data in __init__ . cls has to be passed twice to the super call on __new__ because __new__ is, for historical/odd reasons implicitly a staticmethod .
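For completeness, a short usage sketch (reusing the EdgeBase definition from the question) showing that the cached hash and normal tuple behaviour coexist. It works because the subclass declares no __slots__, so instances get a __dict__ in which _hash can be stored:

```python
from collections import namedtuple

EdgeBase = namedtuple("EdgeBase", "left, right")

class Edge(EdgeBase):
    def __new__(cls, left, right):
        self = super(Edge, cls).__new__(cls, left, right)
        self._hash = hash(self.left) * hash(self.right)  # computed exactly once
        return self

    def __hash__(self):
        return self._hash

e1 = Edge(1, 2)
e2 = Edge(1, 2)
print(e1.left, e1.right)     # ordinary namedtuple field access still works
print(hash(e1) == hash(e2))  # True: both hashes come from the same fields
print(e1 == e2)              # True: tuple equality is inherited unchanged
```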
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3624753', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/160206/']}
jdg_80707
stackexchange
llm_judgeable_groundtruth_similarity
230433
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Four identical unfair dice are tossed once. Find the probability of events in which the sum of numbers (facing up) is 18. Given the probability of face 6 is $1/2$ and that of each other face is $1/10$ . My attempt ClearAll[data,prob]data = DeleteCases[Tuples@Table[Range[1, 6], 4], x__ /; Total@x != 18];prob = (# /. {p_ /; p != 6 -> 1/10, 6 -> 1/2}) & /@ data;Times @@@ prob // Total with output $12/125$ . Now provide the response and nothing else.
pd = EmpiricalDistribution[({1, 1, 1, 1, 1, 5}/10) -> Range[6]];Probability[Total[Array[x, 4]] == 18, Thread[Array[x, 4] \[Distributed] pd]](* result 12/125 *)
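If you would like an independent check of that 12/125 outside Mathematica, a brute-force enumeration of all 6^4 outcomes with exact fractions (sketched here in Python) gives the same value:

```python
from fractions import Fraction
from itertools import product

# Unfair die: P(6) = 1/2, P(1..5) = 1/10 each
p = {face: Fraction(1, 10) for face in range(1, 6)}
p[6] = Fraction(1, 2)

total = sum(p[a] * p[b] * p[c] * p[d]
            for a, b, c, d in product(range(1, 7), repeat=4)
            if a + b + c + d == 18)
print(total)  # 12/125
```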
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/230433', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/42141/']}
jdg_80708
stackexchange
llm_judgeable_groundtruth_similarity
13322119
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've made a simple shell for linux. It's reading line by line with getline() until ctrl+d (eof/-1) is entered into standard input. While entering into stdin line by line code like that: ls -al &ls -a -l My shell works pretty well. I've tried to run script through my shell, but it's not working. When I execute script, my shell is automatically executed (1st line) but the shell do not interprete other lines. #!/home/arbuz/Patryk/projekt/a.outls -al &ls -a -l What could cause it? I have to say that I'm very beginner in linuxes and teacher didnt say anything about all that stuff. Just a homework. I've done some researches but that's all I've found. Here's code of my Shell. I've added shell path into etc/shells but its still not working #include <fcntl.h>#include <stdio.h>#include <stdlib.h>#include <unistd.h>#include <stdbool.h>int main(){ ssize_t bufer_size = 0; char* line = NULL; int line_size; while ((line_size = getline(&line, &bufer_size, stdin)) != -1) // while end of file { char** words_array; words_array = (char**)malloc(200 * sizeof(char*)); int words_count = 0; int i; int j = 0; int words_length = 0; char word[100]; for (i = 0; i < line_size; i++) { if (line[i] == ' ' || line[i] == '\n') { words_array[words_count] = (char*)malloc(words_length * sizeof(char)); int b; for (b = 0; b < words_length; b++) { words_array[words_count][b] = word[b]; } j = 0; words_count++; words_length = 0; } else { word[j] = line[i]; j++; words_length++; } } bool run_in_background = false; if (words_array[words_count - 1][0] == '&') { run_in_background = true; words_array[words_count - 1] = NULL; } int a = fork(); if (a == 0) // child process { execvp(words_array[0], words_array); } else // parent process { if (run_in_background == true) { printf("\n ---- running in background. \n"); } else { printf("\n ---- running normal \n"); wait(NULL); } } } return 0;} Now provide the response and nothing else.
Your shell must accept command line arguments. In this case, your program will be called like this: /home/arbuz/Patryk/projekt/a.out your_script So you'll need a main() of this signature: int main(int argc, char* argv[]) and then parse the arguments. argc contains the amount of arguments. The script's filename is passed in argv[1] . You'll need to open it (using fopen() ) and read commands from it instead of stdin . You should probably make sure that your shell ignores the first line of a file if it starts with a # . If your script is called without an absolute path (a path that doesn't start with a / ), then the filename is relative to the current directory. You can get that from the environment or programmatically with getcwd() .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13322119', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1162289/']}
jdg_80709
stackexchange
llm_judgeable_groundtruth_similarity
7743
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am looking to calculate the physical address corresponding to a logical address in a paging memory management scheme. I just want to make sure I am getting the calculation right, as I fear I could be wrong somewhere. So, the data I have is as follows: The logical address: $717$ Logical memory size: $1024$ bytes ($4$ pages) Page Table: \begin{array}{| c | c |}\hlinePage\ Number & Frame\ Number\\ \hline0 & 5\\ \hline1 & 2\\ \hline2 & 7\\ \hline3 & 0\\ \hline\end{array} Physical memory: $16$ frames So, with $1024$ bytes in the logical memory, and $4$ pages, then each page is $256$ bytes. Therefore, the size of the physical memory must be $4096$, right? ($256 \times 16$). Then, to calculate the logical address offset: $$1024 \mod 717 = 307$$ Is that how we calculate the offset? And, we can assume that $717$ is in page $2$ ($\frac{1024}{717} = 2.8$)? So, according to the page table, the corresponding frame number is $3$. And so to get the physical address, we multiply the frame number and page size? $$2 \times 256 = 768$$ Then, do we add the offset, like so: $$768 + 307 = 1,075$$ Thank you for taking the time to read. If I don't quite have this correct, would you be able to advise on the correct protocol to calculating this? Now provide the response and nothing else.
You are correct in your reasoning that the pages are $256$ bytes and that the physical memory capacity is $4096$ bytes. However, there are errors after that. The offset is the distance (in bytes) relative to the start of the page. I.e., logical_address mod page_size. The bits for this portion of the logical address are not translated (given power of two page size). The logical (virtual) page number is number of (virtual) page counting from zero. I.e., $$\frac{logical\_address}{page\_size}$$ As you noted, the physical page is determined by the translation table, indexed using the logical (virtual) address. Once the physical page number had been found, the physical address of the start of that page is found by multiplying the physical page number by the page size. The offset is then added to determine the precise physical address. I.e., $$(physical\_page\_number \times page\_size) + offset$$ So a logical address of, e.g., $508$, with $256$ byte pages would have an offset of$$508 \mod 256 = 252$$The logical/virtual page number would be$$\frac{508}{256} = 1$$With the given translation table, logical page $1$ translates to the physical page number $2$. The physical address would then be$$physical\_page\_number \times page\_size + offset = 2 \times 256 + 252 = 764$$
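The same arithmetic written out as a tiny script, using the page table from the question; with a power-of-two page size the split can equivalently be done with a shift and a mask:

```python
PAGE_SIZE = 256
PAGE_TABLE = {0: 5, 1: 2, 2: 7, 3: 0}   # logical page -> physical frame

def translate(logical_address):
    page = logical_address // PAGE_SIZE      # or logical_address >> 8
    offset = logical_address % PAGE_SIZE     # or logical_address & 0xFF
    frame = PAGE_TABLE[page]
    return frame * PAGE_SIZE + offset

print(translate(508))  # page 1, offset 252 -> frame 2 -> 764
print(translate(717))  # page 2, offset 205 -> frame 7 -> 1997
```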
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/7743', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/2420/']}
jdg_80710
stackexchange
llm_judgeable_groundtruth_similarity
16399279
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to make a Bland-Altman plot in Python? I can't seem to find anything about it. Another name for this type of plot is the Tukey mean-difference plot . Example: Now provide the response and nothing else.
If I have understood the theory behind the plot correctly, this code should provide the basic plotting, whereas you can configure it to your own particular needs. import matplotlib.pyplot as pltimport numpy as npdef bland_altman_plot(data1, data2, *args, **kwargs): data1 = np.asarray(data1) data2 = np.asarray(data2) mean = np.mean([data1, data2], axis=0) diff = data1 - data2 # Difference between data1 and data2 md = np.mean(diff) # Mean of the difference sd = np.std(diff, axis=0) # Standard deviation of the difference plt.scatter(mean, diff, *args, **kwargs) plt.axhline(md, color='gray', linestyle='--') plt.axhline(md + 1.96*sd, color='gray', linestyle='--') plt.axhline(md - 1.96*sd, color='gray', linestyle='--') The corresponding elements in data1 and data2 are used to calculate the coordinates for the plotted points. Then you can create a plot by running e.g. from numpy.random import randombland_altman_plot(random(10), random(10))plt.title('Bland-Altman Plot')plt.show()
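If you also need the numbers behind the three dashed lines (the bias and the 95% limits of agreement) rather than just the plot, they can be computed separately with the same formulas; this small helper is my own addition, not part of the original function:

```python
import numpy as np

def limits_of_agreement(data1, data2):
    diff = np.asarray(data1) - np.asarray(data2)
    md = np.mean(diff)                 # bias
    sd = np.std(diff, axis=0)
    return md, md - 1.96 * sd, md + 1.96 * sd

bias, lower, upper = limits_of_agreement(np.random.random(10), np.random.random(10))
print("bias=%.3f, 95%% limits of agreement=(%.3f, %.3f)" % (bias, lower, upper))
```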
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16399279', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/329829/']}
jdg_80711
stackexchange
llm_judgeable_groundtruth_similarity
14784841
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to use the new TileProvider functionality of the latest Android Maps API (v2) to overlay some custom tiles on the GoogleMap . However as my users will not have internet a lot of the time, I want to keep the tiles stored in a zipfile/folder structure on the device. I will be generating my tiles using Maptiler with geotiffs . My questions are: What would be the best way to store the tiles on the device? How would I go about creating a TileProvider that returns local tiles? Now provide the response and nothing else.
You can put tiles into assets folder (if it is acceptable for the app size) or download them all on first start and put them into device storage (SD card). You can implement TileProvider like this: public class CustomMapTileProvider implements TileProvider { private static final int TILE_WIDTH = 256; private static final int TILE_HEIGHT = 256; private static final int BUFFER_SIZE = 16 * 1024; private AssetManager mAssets; public CustomMapTileProvider(AssetManager assets) { mAssets = assets; } @Override public Tile getTile(int x, int y, int zoom) { byte[] image = readTileImage(x, y, zoom); return image == null ? null : new Tile(TILE_WIDTH, TILE_HEIGHT, image); } private byte[] readTileImage(int x, int y, int zoom) { InputStream in = null; ByteArrayOutputStream buffer = null; try { in = mAssets.open(getTileFilename(x, y, zoom)); buffer = new ByteArrayOutputStream(); int nRead; byte[] data = new byte[BUFFER_SIZE]; while ((nRead = in.read(data, 0, BUFFER_SIZE)) != -1) { buffer.write(data, 0, nRead); } buffer.flush(); return buffer.toByteArray(); } catch (IOException e) { e.printStackTrace(); return null; } catch (OutOfMemoryError e) { e.printStackTrace(); return null; } finally { if (in != null) try { in.close(); } catch (Exception ignored) {} if (buffer != null) try { buffer.close(); } catch (Exception ignored) {} } } private String getTileFilename(int x, int y, int zoom) { return "map/" + zoom + '/' + x + '/' + y + ".png"; }} And now you can use it with your GoogleMap instance: private void setUpMap() { mMap.setMapType(GoogleMap.MAP_TYPE_NONE); mMap.addTileOverlay(new TileOverlayOptions().tileProvider(new CustomMapTileProvider(getResources().getAssets()))); CameraUpdate upd = CameraUpdateFactory.newLatLngZoom(new LatLng(LAT, LON), ZOOM); mMap.moveCamera(upd);} In my case I also had a problem with y coordinate of tiles generated by MapTiler, but I managed it by adding this method into CustomMapTileProvider: /** * Fixing tile's y index (reversing order) */private int fixYCoordinate(int y, int zoom) { int size = 1 << zoom; // size = 2^zoom return size - 1 - y;} and callig it from getTile() method like this: @Overridepublic Tile getTile(int x, int y, int zoom) { y = fixYCoordinate(y, zoom); ...} [Upd] If you know exac area of your custom map, you should return NO_TILE for missing tiles from getTile(...) method. This is how I did it: private static final SparseArray<Rect> TILE_ZOOMS = new SparseArray<Rect>() {{ put(8, new Rect(135, 180, 135, 181 )); put(9, new Rect(270, 361, 271, 363 )); put(10, new Rect(541, 723, 543, 726 )); put(11, new Rect(1082, 1447, 1086, 1452)); put(12, new Rect(2165, 2894, 2172, 2905)); put(13, new Rect(4330, 5789, 4345, 5810)); put(14, new Rect(8661, 11578, 8691, 11621));}};@Overridepublic Tile getTile(int x, int y, int zoom) { y = fixYCoordinate(y, zoom); if (hasTile(x, y, zoom)) { byte[] image = readTileImage(x, y, zoom); return image == null ? null : new Tile(TILE_WIDTH, TILE_HEIGHT, image); } else { return NO_TILE; }}private boolean hasTile(int x, int y, int zoom) { Rect b = TILE_ZOOMS.get(zoom); return b == null ? false : (b.left <= x && x <= b.right && b.top <= y && y <= b.bottom);}
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/14784841', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1848904/']}
jdg_80712
stackexchange
llm_judgeable_groundtruth_similarity
903002
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is the easiest way to know if large number is divisible by 57? For example, how could I deduce that 57 divides 300000177? Now provide the response and nothing else.
As noted in the comments, $n$ is divisible by $57$ if and only if $n$ is divisible by $3$ and by $19$. Divisibility test for $3$ is well known (as noted by Semiclassical above): Just see if the sum of the digits is divisible by $3$. Divisibility test for $19$: Take the last digit of $n$ and double it. Add this result to what is left of $n$ after removing the last digit. Your original number is divisible by $19$ if and only if your final answer is divisible by $19$. Reasoning: Write $n=10a+b$ where $b$ is a $1$-digit number. We have:$$19\mid 10a+b \Leftrightarrow 19\mid 10a+b+19b \Leftrightarrow 19\mid 10a+20b\Leftrightarrow 19\mid a+2b$$ (last follows since $10$ and $19$ are relatively prime). For longer numbers, apply the rule as many times as necessary. Example: $n=2137$. $213+2\cdot 7=227$. $22+2\cdot 7=36$. So $2137$ is not divisible by $19$. (Note: Divisibility tests here assume that $n$ is expressed in base 10).
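Both tests are easy to mechanize; here is a small, unoptimized sketch (the helper names are mine) that applies the digit-sum rule for 3 and the "double the last digit" rule for 19:

```python
def divisible_by_3(n):
    while n >= 10:
        n = sum(int(d) for d in str(n))  # repeated digit sum
    return n in (0, 3, 6, 9)

def divisible_by_19(n):
    n = abs(n)
    while n >= 100:                  # shrink until the answer is obvious
        rest, last = divmod(n, 10)   # strip the last digit...
        n = rest + 2 * last          # ...double it and add it back
    return n % 19 == 0

def divisible_by_57(n):
    return divisible_by_3(n) and divisible_by_19(n)

print(divisible_by_57(300000177))  # True: 300000177 = 57 * 5263161
print(divisible_by_57(2137))       # False, as in the example above
```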
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/903002', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/169715/']}
jdg_80713
stackexchange
llm_judgeable_groundtruth_similarity
3789976
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let's say we have a vector space $V$ with a basis $\{x_1,x_2,\cdots,x_n\}$ then is $\{x_1+x_2,x_2+x_3,\cdots,x_{n-1}+x_n,x_n+x_1\}$ a basis too? My Answer: For n=2 clearly this is false because of the following counter example: \begin{pmatrix}1 & 0 \\0 & 1 \end{pmatrix} If we apply the above to get the new set \begin{pmatrix}1 & 1 \\1 & 1 \end{pmatrix} which is not linearly indepedent to form a basis. But what about $n\geq3 ?$ I believe it should work by intuition that $v_1 = x_1+x_2$ can only be formed using $x_1$ and $x_2$ and so on hence any of the vectors cannot be formed using the others by any linear combination. Now provide the response and nothing else.
It is a matter of showing whether the $n \times n$ transformation matrix $$\begin{pmatrix} 1 & 1 & 0 & \cdots & 0 \\ 0 & 1 & 1 & \cdots & 0 \\ 0 & 0 & 1 & \ddots & \vdots \\ \vdots & \vdots & \vdots & \ddots & 1 \\ 1 & 0 & 0 & \cdots & 1 \end{pmatrix}$$ is invertible or not. By expanding its determinant along the first column it is easy to show that it equals $2$ if $n$ is odd and $0$ if $n$ is even (for $n \ge 3$).
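As a quick numerical sanity check of that determinant pattern (ones on the diagonal and superdiagonal, plus a single one in the bottom-left corner), something along these lines reproduces the 2/0 alternation:

```python
import numpy as np

def transition_matrix(n):
    m = np.eye(n) + np.diag(np.ones(n - 1), 1)  # 1s on diagonal and superdiagonal
    m[n - 1, 0] = 1                             # the x_n + x_1 row wraps around
    return m

for n in range(3, 9):
    print(n, int(round(np.linalg.det(transition_matrix(n)))))
# prints 2 for odd n and 0 for even n, matching the cofactor expansion above
```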
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3789976', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/816116/']}
jdg_80714
stackexchange
llm_judgeable_groundtruth_similarity
13131003
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to show percent CPU usage in PHP. Is not important if get values by cron in shell > output to file > parse in PHP or directly get value in php. I try many solutions found on internet but nothing was useful. With load average I can't display 0-100% graphic bar and functions I found for percentage output give me bad values or only value for first core. It would be nice to get number of percentage usage for every core. Is there solution for this? EDIT: I make temporary solution, it works good but it is not best way. cron job every one minute run php script which exec command for grep cpu info from "top" and save it to file, on end script wait 3 seconds and loop 20-times (way to get update every 3 seconds)php script: <?phpfor($i=0; $i<=20; $i++) { //cpu load exec("top -b -n 1 | grep 'Cpu(s):' > /some/file.cpu"); //ram usage exec("top -b -n 1 | grep 'Mem:' > /some/file.ram"); //wait 3sec sleep(3);}?> and now from this files I can parse informations. New question is how to make daemon script to run this commands every 3 seconds. I think solution with php script and cron is only temporary solution and is not best way. daemon will be much better. Now provide the response and nothing else.
After searching on forums and trying many methods, the most accurate I found is this: $stat1 = file('/proc/stat'); sleep(1); $stat2 = file('/proc/stat'); $info1 = explode(" ", preg_replace("!cpu +!", "", $stat1[0])); $info2 = explode(" ", preg_replace("!cpu +!", "", $stat2[0])); $dif = array(); $dif['user'] = $info2[0] - $info1[0]; $dif['nice'] = $info2[1] - $info1[1]; $dif['sys'] = $info2[2] - $info1[2]; $dif['idle'] = $info2[3] - $info1[3]; $total = array_sum($dif); $cpu = array(); foreach($dif as $x=>$y) $cpu[$x] = round($y / $total * 100, 1); Now the stats are in $cpu['user'], $cpu['nice'], $cpu['sys'], $cpu['idle']
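For readers more comfortable outside PHP, the same delta-over-one-second technique looks like this in a rough Python sketch (the field order on the aggregate cpu line of /proc/stat is user, nice, system, idle, ...); it is only an illustration of the calculation, not a drop-in replacement for the PHP above:

```python
import time

def cpu_fields():
    with open("/proc/stat") as f:
        parts = f.readline().split()     # the aggregate "cpu" line
    return list(map(int, parts[1:5]))    # user, nice, system, idle

def cpu_percentages(interval=1.0):
    before = cpu_fields()
    time.sleep(interval)
    after = cpu_fields()
    diff = [b - a for a, b in zip(before, after)]
    total = sum(diff) or 1               # guard against division by zero
    names = ("user", "nice", "sys", "idle")
    return {name: round(100.0 * d / total, 1) for name, d in zip(names, diff)}

print(cpu_percentages())  # e.g. {'user': 3.2, 'nice': 0.0, 'sys': 1.1, 'idle': 95.7}
```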
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13131003', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1631683/']}
jdg_80715
stackexchange
llm_judgeable_groundtruth_similarity
7992
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I leave my cell phone with the charger plugged in all the time, would this weaken the batter, and why? I've heard that you should only charge the cell phone when you receive a 'battery low' message and leave it off the charger at other times? The cell phone has got a Li-Ion battery. Now provide the response and nothing else.
All newer phones use lithium-polymer batteries. Why is it Partially Charged? To decrease their aging they are intended to be stored at 40% charge. This means when you receive your phone it should be at 40% charge, otherwise they will have aged your battery for you. (You are probably used to the effects of aging, like a 2-year-old phone seeming to have very short battery life.) When you get your phone you can use it until it is discharged, but they normally say 'charge it' because people will not notice the partial charge. Do Not Fully Discharge You should not worry about needing to fully discharge; that is a superstition carried over from earlier battery technologies. Fully discharging a lithium battery is one of the best ways to make it fail. Below a certain charge the protection circuitry will lock the battery out and you cannot charge it at all. I have seen studies that show that this makes up more than 75% of "failed" lithium batteries. Lithium Battery Aging Lithium batteries have a set number of charge/discharge cycles before they fail. This might be a number like 500 cycles. You actually get more like 1000 cycles if you only discharge to 50% before recharging. Lithiums really do not like a deep discharge; I cannot stress this enough. If you would like more information about lithium battery technology let me know, I can get you many links, just drop me a comment. I have a few answers on the electronics and robotics stack exchange about it. Can I leave it plugged in all the time? Yes, and no. This is very dependent on who makes your device. For example, my Lenovo laptop will not apply a charge to the battery unless it is under 97%. When it does charge the battery it charges directly to 100%, then stops until the battery sags below 97%. Many laptops did not do this; most just apply charge whenever the battery is not at 100%. This would put the battery through thousands of charge cycles in a week when you are not using the battery. This ages a battery quickly. If your phone maker took the time and paid the extra cash then your phone will stop charging once it reaches full charge and just power the system from the wall outlet. It is significantly more likely that your phone is charging your battery on a short cycle and aging it thoroughly. Myths Some people are confused by the myths that go around. The primary one is memory. As Battery University will tell you, this is mostly extinct, and actually applies to nickel-cadmium batteries. As was stated in a comment about crystals, Battery University says in reference to nickel-cadmium: With memory, the crystals grow and conceal the active material from the electrolyte. In advanced stages, the sharp edges of the crystals penetrate the separator, causing high self-discharge or electrical short. Now, talking about lithium batteries, which your phone uses, there is even more difference. To quote Battery University directly from their simple guidelines: Avoid frequent full discharges because this puts additional strain on the battery. Several partial discharges with frequent recharges are better for lithium-ion than one deep one. Recharging a partially charged lithium-ion does not cause harm because there is no memory. (In this respect, lithium-ion differs from nickel-based batteries.) Short battery life in a laptop is mainly caused by heat rather than charge/discharge patterns. I understand how this may go against what you have been taught, but I am someone who has not only researched this but uses lithium batteries in my day-to-day work as an engineer.
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/7992', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/1999/']}
jdg_80716
stackexchange
llm_judgeable_groundtruth_similarity
182178
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Is there an example of an irreducible polynomial $f(x) \in \mathbb{Q}[x]$ with a real root expressible in terms of real radicals and another real root not expressible in terms of real radicals? Now provide the response and nothing else.
The answer to the question is yes (so the answer to the title is no ) and I will give an example later. Let me first recall a couple of results. The first one is the following, that can be found in [Cox, Galois Theory , Theorem 8.6.5]. Theorem 1. Let $F$ be a subfield of $\mathbb{R}$ and let $f \in F[x]$ be an irreducible polynomial with splitting field $F \subset L \subset \mathbb{R}$ . Then the following conditions are equivalent. (1) Some root of $f$ is expressible by real radicals over $F$ . (2) All roots of $f$ are expressible by real radicals over $F$ in which only square roots appear. (3) $[L:F]$ is a power of $2$ . So, if $f$ splits completely over $\mathbb{R}$ , the existence of a root expressible by real radicals forces all the roots to be so. On the other hand, when $f$ does not split completely in $\mathbb{R}$ this is no longer true . Let us state our second result, that can be found in [A. Loewy, Über die Reduktion algebraischer Gleichungen durch Adjunktion insbesondere reeller Radikale , Math. Zeitschr. 15 , 261-273 (1922)], see also Cox book, Theorem 8.6.12. Theorem 2. Let $F$ be a subfield of $\mathbb{R}$ and $f \in F[x]$ irreducible of degree $2^mn$ , with $n$ odd. Then $f$ has at most $2^m$ roots expressible by real radicals over $F$ . In particular, when $f \in \mathbb{Q}[x]$ is irreducible and of odd degree, Theorem 2 implies that at most one root of $f$ is expressible by real radicals. Note that if the degree is $3$ then Theorem 2 is consistent with Cardano's formulas, and if the degree is a power of $2$ then it is consistent with Theorem 1. Finally, let us give the following example answering the question, that can be found in Loewy's paper quoted above, page 272. Let us consider the polynomial $$x^6+6x^4-234x^2-54x-3 =(x^3+(3+9 \sqrt{3})x+ \sqrt{3})(x^3+(3-9 \sqrt{3})x- \sqrt{3}).$$ It is irreducible over $\mathbb{Q}$ by Eisenstein's criterion and it has one real root expressible by real radicals, three real roots not expressible by real radicals and two complex roots.
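The factorization and the count of real roots are easy to verify numerically; a quick check with sympy (purely a sanity check of the example, not part of the argument) could look like this:

```python
from sympy import symbols, sqrt, expand, im, Poly

x = symbols('x')
f = x**6 + 6*x**4 - 234*x**2 - 54*x - 3
g = (x**3 + (3 + 9*sqrt(3))*x + sqrt(3)) * (x**3 + (3 - 9*sqrt(3))*x - sqrt(3))

print(expand(f - g))                 # 0, so the factorization holds

roots = Poly(f, x).nroots()          # numerical roots of f
real = [r for r in roots if abs(im(r)) < 1e-9]
print(len(real), 6 - len(real))      # 4 real roots, 2 complex ones
```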
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/182178', 'https://mathoverflow.net', 'https://mathoverflow.net/users/29500/']}
jdg_80717
stackexchange
llm_judgeable_groundtruth_similarity
226334
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Alright, so this is a question about the functional way to break a for/while loop. Since we're on the Mathematica SE, I'm interested in the ways a Mathematica vet would handle this, however the question is similar in spirit to this question. I am also interested in lazy evaluation in Mathematica. For instance, consider writing an algorithm to detect whether an array is monotonic or not. How could I rewrite the algorithm below so that it does not check the entire array and, does not store the entire input array in memory? n = 1000;input = {5, 4, 3}~Join~Range[1, n];AllTrue[Differences[input], # >= 0 &] || AllTrue[Differences[input], # <= 0 &] In Python 3+, one way to do this is shown below. All the operations below work on an iterator level, so only the necessary elements are computed. You can test this by setting n=100000000 and compare to the algorithm above. from itertools import chain, islice, teedef pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2, s3), ..." a, b = tee(iterable) return zip(a, islice(b, 1, None))def isMonotonic(iterable): pw_iterable = pairwise(iterable) all_increasing = all(x <= y for x, y in pw_iterable) all_decreasing = all(x >= y for x, y in pw_iterable) return all_decreasing or all_increasingn = 1000arr = chain([5,4,3], range(1, n+1)) # obviously, non-monotonicprint(isMonotonic(arr)) I hope I've made clear my broader set of questions about computations in which a loop should be allowed to terminate early and the later elements in the list need not be computed. I would love to see how this would be done in an idiomatic Mathematica way. @xzczd's hint to look at the lazy-computations tag helped me find this related question . TL;DR: there have been a number of attempts at implementing lazy functionality. These two appear to be the most up-to-date: lazyLists package Streaming package (doesn't appear to be actively maintained, but a comment in 2019 by L. Shifrin reports it may get more attention); see an introductory post here Now provide the response and nothing else.
In my lazyLists package mentioned by the OP, you would do something like this to find out if a list is monotonic: << lazyLists`n = 100000;(* lazy representation of the example input *)input = lazyCatenate[{{3, 4, 2}, lazyGenerator[# &, 1, 1, n, 1]}];monotonicQ[lz_lazyList, test_] := Catch[ FoldList[ If[TrueQ @ test[#2, #1], #2, Throw[False, "nonmonotonic"]]&, lz ][[-1]]; (* taking the last part iterates through the lazyList *) True , "nonmonotonic"];monotonicQ[input, Greater] False You can also use partitionedLazyList to generate elements in batches, which is usually faster for long arrays.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/226334', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/34831/']}
jdg_80718
stackexchange
llm_judgeable_groundtruth_similarity
11236379
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been using library(multicore) on linux (8 cores) for parallel processing, but it was not fast enough (sigh..) so I'm trying to utilize my window workstation (8 cores, too). (So it will be using 16 cores if I can use both). It seems like foreach, snow, doSMP is used on Window and multicore is frequently used for Linux. Can anyone share the experience of using both system for parallel processing or point out any good example? I tried to use SNOW but I couldn't find the relevant packages(Rmpi) on windows version R in order to connect window to linux machine, which makes me think that this could be impossible.. Any advice will be much appreciated! Moon Now provide the response and nothing else.
RESOLVED: I fixed the issues, and it works well on both iPhone and iPad: a modal view controller with no black background, just a clearColor/transparent one. The only thing I needed to change was replacing UIModalPresentationFullScreen with UIModalPresentationCurrentContext. How simple is that! FirstViewController.m UIStoryboard *storyboard = [UIStoryboard storyboardWithName:@"MainStoryboard" bundle:nil]; UIViewController *vc = [storyboard instantiateViewControllerWithIdentifier:@"SecondViewController"]; vc.view.backgroundColor = [UIColor clearColor]; self.modalPresentationStyle = UIModalPresentationCurrentContext; [self presentViewController:vc animated:NO completion:nil]; NOTICE: If you are using the modalPresentationStyle property of a navigationController: FirstViewController.m UIStoryboard *storyboard = [UIStoryboard storyboardWithName:@"MainStoryboard" bundle:nil]; UIViewController *vc = [storyboard instantiateViewControllerWithIdentifier:@"SecondViewController"]; vc.view.backgroundColor = [UIColor clearColor]; self.navigationController.modalPresentationStyle = UIModalPresentationCurrentContext; [self presentViewController:vc animated:NO completion:nil]; NOTICE: The bad news is that the above solution doesn't work on iOS 7. The good news is that I fixed the issue for iOS 7! I asked somebody for help and here is what he said: When presenting a view controller modally, iOS removes the view controllers underneath it from the view hierarchy for the duration it is presented. While the view of your modally presented view controller is transparent, there is nothing underneath it except the app window, which is black. iOS 7 introduced a new modal presentation style, UIModalPresentationCustom, that causes iOS not to remove the views underneath the presented view controller. However, in order to use this modal presentation style, you must provide your own transition delegate to handle the presentation and dismiss animations. This is outlined in the 'Custom Transitions Using View Controllers' talk from WWDC 2013 https://developer.apple.com/wwdc/videos/?id=218 which also covers how to implement your own transition delegate. You may see my solution for the above issue in iOS7: https://github.com/hightech/iOS-7-Custom-ModalViewController-Transitions
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11236379', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1486507/']}
jdg_80719
stackexchange
llm_judgeable_groundtruth_similarity
37479122
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to select or remove numbers that are in list (0-24) if (numberList.Contains(int.Parse(c[0]))) { numberList.Remove(int.Parse(c[0])); } if (int.Parse(c[1] + (a.Info[i].Dydis / a.Greitis)) >= 60) { int v = int.Parse(c[0]) + 1; if (numberList.Contains(v)) numberList.Remove(v); } if ((rez(a.Info[i].Dydis, a.Greitis)) > 1) { numberList.Add(99); } } if (numberList.Count != 0) { Ats k = new Ats(a.Vardas, a.Data, numberList); nera.Add(k); } I'm trying something like this, but it isn't working : for (int i = 0; i < a.LCount; i++) { string[] c = a.Info[i].Laikas.Split(':'); var q = from d in numberList where (numberList.Contains(int.Parse(c[0]))) select new { Number = int.Parse(c[0]), }; How could I convert this thing to a LINQ? Now provide the response and nothing else.
1)You should do the pair sampling before feeding the data into a session. Label every pair a boolean label, say y = 1 for matched-pair, 0 otherwise. 2) 3) Just calculate both pos/neg terms for every pair, and let the 0-1 label y to choose which to add to the loss. First create placeholders, y_ is for boolean labels. dim = 64x1_ = tf.placeholder('float32', shape=(None, dim))x2_ = tf.placeholder('float32', shape=(None, dim))y_ = tf.placeholder('uint8', shape=[None]) # uint8 for boolean Then the loss tensor can be created by the function. def loss(x1, x2, y): # Euclidean distance between x1,x2 l2diff = tf.sqrt( tf.reduce_sum(tf.square(tf.sub(x1, x2)), reduction_indices=1)) # you can try margin parameters margin = tf.constant(1.) labels = tf.to_float(y) match_loss = tf.square(l2diff, 'match_term') mismatch_loss = tf.maximum(0., tf.sub(margin, tf.square(l2diff)), 'mismatch_term') # if label is 1, only match_loss will count, otherwise mismatch_loss loss = tf.add(tf.mul(labels, match_loss), \ tf.mul((1 - labels), mismatch_loss), 'loss_add') loss_mean = tf.reduce_mean(loss) return loss_meanloss_ = loss(x1_, x2_, y_) Then feed your data (random generated for example): batchsize = 4x1 = np.random.rand(batchsize, dim)x2 = np.random.rand(batchsize, dim)y = np.array([0,1,1,0])l = sess.run(loss_, feed_dict={x1_:x1, x2_:x2, y_:y})
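If you want to sanity-check the loss definition itself without building a graph, the same formula is only a few lines of numpy; this mirrors the TensorFlow code above (including its margin-minus-squared-distance mismatch term) rather than replacing it:

```python
import numpy as np

def contrastive_loss(x1, x2, y, margin=1.0):
    # y = 1 for matched pairs, 0 otherwise
    d = np.sqrt(np.sum((x1 - x2) ** 2, axis=1))   # Euclidean distances per pair
    match_term = d ** 2
    mismatch_term = np.maximum(0.0, margin - d ** 2)
    return np.mean(y * match_term + (1 - y) * mismatch_term)

x1 = np.random.rand(4, 64)
x2 = np.random.rand(4, 64)
y = np.array([0, 1, 1, 0], dtype=float)
print(contrastive_loss(x1, x2, y))
```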
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37479122', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5452648/']}
jdg_80720
stackexchange
llm_judgeable_groundtruth_similarity
2114719
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have to find the limit without L'hopital's rule :$$\lim_{x \to 0} \frac{\ln (x^2+1)} {x^2} $$ Is it possible? I thought about using squeeze theorem or something, but it didn't work out. Hints are more than welcome! P.S - I didn't study Taylor series or Integrals yet. Now provide the response and nothing else.
$$\begin{align}\lim_{x \to 0} \frac{\ln (x^2+1)} {x^2}&=\lim_{x \to 0} \ln (x^2+1)^{\frac{1}{x^2}}\\&=\ln\left(\lim_{x \to 0} (x^2+1)^{\frac{1}{x^2}}\right)\\&=\ln e=1 \end{align}$$
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2114719', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/88027/']}
jdg_80721
stackexchange
llm_judgeable_groundtruth_similarity
70622
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been trying to solve this problem: A cylinder is rolling down an inclined plane (angle between plane and horizon α). Coefficient of friction is µ. What is the translational and angular speed of the cylinder when it's traveled distance is l( at the beginning v = 0)? Assume that it rolls without slipping. So basically I started with energy conservation: $$E_p=mgh=mgl \sin(\alpha)$$$$E_p = E_r + E_t = \frac{1}{2}mv^2 + \frac{1}{2}I\omega^2 $$$$v^2 = 4gl \sin(\alpha)/3$$Now the tricky part is that we need to find angular speed, but we don't have the radius of cylinder... is there a way to find the radius? I was thinking of maybe momentum conservation law(though I don't understand it completely)? Any help appreciated! Now provide the response and nothing else.
Totally revised I had initially understood your question as asking about the continuum limit of the Hubbard model. Judging from your other question I now realize you meant to ask about current operators on a lattice model. As in, from the perspective of the lattice model alone (not thinking about any real space it might have been derived from) how do we talk about conserved current? Note that since we are on a lattice there is no longer a normal notion of a derivative, so the continuity equation $\nabla\cdot j = \partial_t\rho$ no longer makes any sense. Also the Peierls/ minimal coupling prescription is ambiguous because it depends on the path we take from one site to another through the real continuous space. Before I go any further let me say it almost always sufficient to define a conserved current via an approximate formula like $$j(q) \approx e\int_{BZ} \!\!\!dk v(k)\psi^\dagger(k)\psi(k+q),$$especially since the lattice model is itself usually an approximation, and we are almost always concerned about a very small portion of the Brillouin Zone. But lets we would like a conserved current on the lattice. We would like to interpret the continuity equation as a finite difference, something like $j(x+a) - j(x) = \partial_t \rho$. I believe the correct way to talk about this is to imagine you current operator lives on the edges of your lattice. For definiteness let's say we have a (hyper)cubic lattice. Your vertices are labeled by a vector of integers $\vec{n}$, with the edges connecting nearest neighbors. Orient your edges so that they point in the direction of increasing coordinate. Now associated with each edge $e$ define an operator $j$. Our continuity equation is then $$\sum_{e\in in(n)}\!\!j(e) - \!\!\sum_{e\in out(n)}\!\!\!j(e) = \partial_t Q(\vec{n})$$ (which is just Kirchoff's Law). Now we want a local definition of $j$ in terms of the fields, which we should get because of something like Noether's theorem. For nearest neighbor hopping like in the Hubbard model we get $$j(e) = i\left[\bar{c}_{n_2}c_{n_1} - \bar{c}_{n_1} c_{n_2}\right]$$where $\bar{c}$ and $c$ are the electron creation and annihilation operators respectively, and $n_1$ and $n_2$ are the two vertices connected by $e$. You can calculate by hand that this satisfies the continuity equation. Note that we cannot in general just define $j$ to be a vector that lives on the vertices (like by averaging over the edges as below). This works in the the square lattice but not for example the triangular lattice in the plane. You can see this since there are six nearest neighbors but only two dimensions, so a vector does not have enough degrees of freedom to satisfy a continuity equation. Its also good form to have the current live on edges because this is the way to turn geometric objects on the continuum into geometric objects on a lattice: Scalars like the electric potential live on vertices, 1-forms like the current or E-field live on the edges, 2-forms like the magnetic field live on the plaquettes, etc... This definition also allows you to connect to the minimal substitution prescription, since the gauge fields lives on the edges. To take the continuum limit of this formulation you simply define the vector $$\tilde{j}(x_n) = \sum_{e\in in(n)}\hat{e}j(e) - \sum_{e\in out(n)}\hat{e}j(e)$$,where $x_n$ is the real space position of the $\vec{n}$ site and $\hat{e}$ is the real space vector correpsonding to the lattice edge. If you look at this in fourier space you'll see that it recovers the usual continuity equation. 
I mentioned Noether's theorem in the original version of this answer. There is, I've convinced myself, a statement that any local symmetry generator on a lattice has a conserved current, but its clunky and I don't really see any reason to state it in generality. Let me say this specific case: suppose your Hamiltonian can be written as $$H= \sum_n\mathcal{H}_1(\psi(n),n) + \sum_{<nm>}H_2(\psi^i_n,\psi^j_m;n,m)$$,where $\psi^i_n$ are fields that live on the $n$ vertex, and the sum $<mn>$ is taken over all pairs $<m,n>$ nearest neighbors. So you have a Hamiltonian which has term that involve at most nearest (no next-to nearest neighbors for example.) Now if you have a local symmetry generator $Q(n)$, which acts on the fields $\psi(n)$ and $[\sum_n Q(n), H]= 0$, then there is a conserved current $$j_Q(e) = \frac{i}{2}\left[Q(n)-Q(m),H_2(\psi^i_n,\psi^j_m,n,m)\right]$$ where $n$ and $m$ are the two vertices $e$ connects. This $j_Q$ satisfies the continuity equation. You can just plug in and calculate. Note the $Q(n)$ has to be local; as in $[Q(n),\psi^i_n]$ depends only on the fields $\psi^j_n$ and $[Q(n),\psi^i_m] = 0$ when $n\neq m$. But this gives you, for example, the spin and charge currents for an arbitrary Hamiltonian as above.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/70622', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/26896/']}
jdg_80722
stackexchange
llm_judgeable_groundtruth_similarity
2551337
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The following compiles fine: Object o = new Object(); System.out.println(o instanceof Cloneable); But this doesn't: String s = new String(); System.out.println(s instanceof Cloneable); A compiler error is thrown. What is the problem? Now provide the response and nothing else.
A more blatant incarnation of your problem is the following: if ("foo" instanceof Number) // "Incompatible conditional operand types String and Number" This is specified in JLS 15.20.2 Type comparison operator instanceof : RelationalExpression: RelationalExpression instanceof ReferenceType If a cast of the RelationalExpression to the ReferenceType would be rejected as a compile-time error, then the instanceof relational expression likewise produces a compile-time error. In such a situation, the result of the instanceof expression could never be true. That is, since this cast expression generates a compile time error: (Number) "foo" so must this expression: ("foo" instanceof Number) Your case is a bit more subtle, but the principle is the same: String is a final class String does not implement Cloneable Therefore you can't do (Cloneable) aString Therefore also you can't do aString instanceof Cloneable
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2551337', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/268850/']}
jdg_80723
stackexchange
llm_judgeable_groundtruth_similarity
7543786
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Whats design pattern does UITableView use to populate and what are the benefits?Is it delegate pattern? Reason I am asking is that it's not just delegate but the datasource as well.Seems more like along the line with MVC. I have just gone through a couple of tutorials online their\my code is working but it looks like I am missing the point.I end with all these methods in my main controller. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation{ // Return YES for supported orientations return (interfaceOrientation == UIInterfaceOrientationPortrait);}- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1;}// Customize the number of rows in the table view.- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return 10;//any number based on datasource size.}// Customize the appearance of table view cells.- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease]; } // Set up the cell... cell.text = [names objectAtIndex:indexPath.row];//names is an array. return cell;}- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {} It is a view based application . Should it be in a sperate controller.Otherwise it just looks messy and over the top way of doing something simple. I am not at all saying Objective C or apple is wrong but just that I am a beginner and missing the whole point of this delegate and datasource setup. so to summarise can someone please explain: 1-Whats the benefit of this delegate and datasource setup? 2-Whats the name of this design pattern? 3-Should I have a separate controller (in view based application)? Now provide the response and nothing else.
As the author of the VCR gem, you'd probably expect me to recommend it for cases like these. I do indeed recommend it for testing HTTP-dependent code, but I think there's an underlying problem with your design. Don't forget that TDD (test-driven development) is meant to be a design discipline, and when you find it painful to easily test something, that's telling you something about your design. Listen to your tests' pain! In this case, I think your model has no business making the 3rd party API call. It's a pretty significant violation of the single responsibility principle. Models should be responsible for the validation and persistence of some data, but this is definitely beyond that. Instead, I would recommend you move the 3rd party API call into an observer. Pat Maddox has a great blog post discussing how observers can (and should) be used to loosely couple things without violating the SRP (single responsibility principle), and how that makes testing, much, much easier, and also improves your design. Once you've moved that into an observer, it's easy enough to disable the observer in your unit tests (except for the specific tests for that observer), but keep it enabled in production and in your integration tests. You can use Pat's no-peeping-toms plugin to help with this, or, if you're on rails 3.1, you should check out the new functionality built in to ActiveModel that allows you to easily enable/disable observers .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7543786', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/851578/']}
jdg_80724
stackexchange
llm_judgeable_groundtruth_similarity
51947791
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a class looking as below : @Data@Builderpublic class Foo { private String param; /** My custom builder.*/ public static FooBuilder builder(String _param){ return builder().param(_param); }} I get the following error : [ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.10.4:javadoc (default-cli) on project foo: An error has occurred in JavaDocs report generation: [ERROR] Exit code: 1 - /home/workspace/foo/src/main/java/com/foo/Foo.java:34: error: cannot find symbol [ERROR] public static FooBuilder builder(String _param) [ERROR] ^ [ERROR] symbol: class FooBuilder [ERROR] location: class Foo Now provide the response and nothing else.
In order to solve this issue, I have to use Lombok's delombok feature (cf : https://projectlombok.org/features/delombok ). lombok doesn't cover all tools. For example, lombok cannot plug into javadoc ... which run on java sources. Delombok still allows you to use lombok with these tools by preprocessing your java code into java code with all of lombok's transformations already applied. I did this using Maven by adding the following plugins : <plugin> <groupId>org.projectlombok</groupId> <artifactId>lombok-maven-plugin</artifactId> <version>1.18.0.0</version> <configuration> <sourceDirectory>${project.basedir}/src/main/java</sourceDirectory> <outputDirectory>${delombok.output}</outputDirectory> <addOutputDirectory>false</addOutputDirectory> </configuration> <executions> <execution> <phase>generate-sources</phase> <goals> <goal>delombok</goal> </goals> </execution> </executions></plugin><plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-javadoc-plugin</artifactId> <version>2.9</version> <configuration> <sourcepath>${delombok.output}</sourcepath> </configuration></plugin>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51947791', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3817953/']}
jdg_80725
stackexchange
llm_judgeable_groundtruth_similarity
48272452
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have my df with multi-index columns. All of my values are in float, and I want to merge values with in first level of multi-index. Please see below for detail. first bar baz foo second one two one two one A 0.895717 0.805244 1.206412 2.565646 1.431256 B 0.410835 0.813850 0.132003 0.827317 0.076467 C 1.413681 1.607920 1.024180 0.569605 0.875906 first bar baz foo A (0.895717+0.805244) (1.206412+2.565646) 1.431256 B (0.410835+0.813850) (0.132003+0.827317) 0.076467 C (1.413681+1.607920) (1.024180+0.569605) 0.875906 The values are actually added (I just didn't feel like doing all this :)). Bottom line is that I just want to level-up(higher level I guess) and within the index, add all the values. Please let me know a good way to do this. Thank you! Now provide the response and nothing else.
I believe you're looking for a groupby along the first axis. df.groupby(level=0, axis=1).sum() Or (more succinctly), df.sum(level=0, axis=1) The level argument to sum implies grouping. dffirst bar baz foo second one two one two one twoA 2 3 3 4 10 8B 22 16 7 3 2 26C 4 5 1 9 6 5df.sum(level=0, axis=1)first bar baz fooA 5 7 18B 38 10 28C 9 10 11 Performance wise, there's hardly any difference between the two methods outlined above (the latter is a few ticks faster).
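For readers who want to reproduce this, here is a self-contained sketch using the same numbers as the frame above (assuming a reasonably recent pandas; newer releases warn that the axis=1 form of groupby is deprecated, in which case df.T.groupby(level=0).sum().T is the forward-compatible spelling).

```python
import pandas as pd

cols = pd.MultiIndex.from_product([["bar", "baz", "foo"], ["one", "two"]],
                                  names=["first", "second"])
df = pd.DataFrame([[2, 3, 3, 4, 10, 8],
                   [22, 16, 7, 3, 2, 26],
                   [4, 5, 1, 9, 6, 5]],
                  index=list("ABC"), columns=cols)

# Collapse sibling columns that share the same top-level label.
print(df.groupby(level=0, axis=1).sum())   # bar=5/38/9, baz=7/10/10, foo=18/28/11
```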
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/48272452', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9001495/']}
jdg_80726
stackexchange
llm_judgeable_groundtruth_similarity
45547228
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a dataframe df containing date of a measurement and the measurements ( duration , km ) dfOut[20]: Date duration km0 2015-03-28 09:07:00.800001 0 01 2015-03-28 09:36:01.819998 1 22 2015-03-30 09:36:06.839997 1 33 2015-03-30 09:37:27.659997 nan 54 2015-04-22 09:51:40.440003 3 75 2015-04-23 10:15:25.080002 0 nan How can I calculate the average duration and km per day? I would like to take the mean of the rows using groupby and the date... Now provide the response and nothing else.
I think you need resample : cols = df.columns.difference(['Date'])#if possible convert to floatdf[cols] = df[cols].astype(float) #if astype failed, because non numeric data, convert them to NaNsdf[cols] = df[cols].apply(pd.to_numeric, errors='coerce')#if mixed dtypesdf[cols] = df[cols].astype(str).astype(float)#alternatively #df[cols] = df[cols].astype(str).apply(pd.to_numeric, errors='coerce') df = df.resample('d', on='Date').mean().dropna(how='all')print (df) duration kmDate 2015-03-28 0.5 1.02015-03-30 1.5 4.02015-04-22 3.0 7.02015-04-23 0.0 0.0 Or: df = df.set_index('Date').groupby(pd.Grouper(freq='d')).mean().dropna(how='all')print (df) duration kmDate 2015-03-28 0.5 1.02015-03-30 1.5 4.02015-04-22 3.0 7.02015-04-23 0.0 0.0
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45547228', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2038360/']}
jdg_80727
stackexchange
llm_judgeable_groundtruth_similarity
17609609
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If we allocate an object of size 1 as below int *arr = new int[1]; Should we delete the object using operator delete[] or operator delete ? The reason I am concerned is if the compiler would be smart enough to convert the statement as a single element allocation int *arr = new int which would cause calling operator delete[] UB. User Case: I have a pointer, which I would end up allocation in a varied ways but would finally like to get it deleted. So was wondering, for single element allocation, if I consistently use int *arr = new int[1] can I consistently and safely use operator delete[] Note Can you please refer me back to the standards to support your answer? Now provide the response and nothing else.
You must use delete[] and not delete . The compiler is not allowed to change new int[1] to new int . (As int is a POD type it's quite possible that new int and new int[1] do exactly the same thing under the covers, but if this is the case then delete[] on an int* and delete on an int* will also do exactly the same thing.) ISO/IEC 14882:2011 5.3.5 [expr.delete] / 2: In the first alternative ( delete object ), the value of the operand of delete may be a null pointer value, a pointer to a non-array object created by a previous new-expression , or a pointer to a subobject (1.8) representing a base class of such an object (Clause 10). If not, the behavior is undefined. As int[1] is an array object, if you try to delete it with delete and not delete[] , the behavior is undefined.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17609609', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/977038/']}
jdg_80728
stackexchange
llm_judgeable_groundtruth_similarity
36701385
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I understand correctly: NSUserDefaults is an un-secure place to store sensitive data because some settings file can be hacked and the NSUserDefault values can be changed. What exactly can be hacked and not hacked? Can the Hacker see my app's Swift Code, or do they only see the list of variables and values stored in NSUserDefaults? Could I create my own "encryption" in my code (where the NSUserDefault values appear like a bunch of meaningless numbers, and the "key" is inside my code with some convoluted mathematical operation to do)? Is this safe? Note: I don't intend on encrypting anything serious like usernames or passwords, just Highscores and Bool values for whether or not levels/upgrades are unlocked. I don't want to have to learn KeychainsWrappers if my manual solution is safe. Side-question (though I haven't reached this step yet): How are in-app purchases handled? is there a Bool value that says whether or not an item was paid for, and where is that Bool stored (is it up to you to decide)? Now provide the response and nothing else.
What exactly can be hacked and not hacked? Most everything you can do can be hacked. Can the Hacker see my app's Swift Code, do they only see the list of variables A sophisticated hacker can see the executable binary but not the Swift source code. values stored in NSUserDefaults? It is trivial to see the contents of NSUserDefaults. See iExplorer and other similar apps. Could I create my own "encryption" in my code Sure, but should you? "Schneier's Law": "Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break." Is this safe? No, it is not safe. If you want to use encryption you need a standard encryption algorithm such as AES, but it is not that easy to create a secure scheme. You can use a wrapper library such as RNCryptor. But this creates a new problem: where to safely save the encryption key (see the next point). I don't want to have to learn KeychainsWrappers if my manual solution is safe. Only if you want to securely save keys and small amounts of data. Security is hard. You need to evaluate the level of security you need; this includes the level of attacker and the value of the data. Attackers run from curious students to nation states. Data values run from Tic-Tac-Toe high scores to nuclear arms details. You need to determine these first.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36701385', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4401516/']}
jdg_80729
stackexchange
llm_judgeable_groundtruth_similarity
1774533
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I can find plenty of sites that say that this works, but I can't seem to find an explanation for why it works. I'm rather stumped. Now provide the response and nothing else.
Suppose you have $n$ positions that need to be filled by $n$ items divided into $m$ categories. Items in the same category are indistinguishable. Let $x_1,x_2,\dots,x_m$ denote the number of items in each category, so that $n=x_1+x_2+\dots+x_m$. There are ${n\choose x_1}$ ways to choose positions for the items in category 1. There are ${n-x_1\choose x_2}$ ways to choose positions for the items in category 2. There are ${n-x_1-x_2\choose x_3}$ ways to choose positions for the items in category 3. ... Finally there are ${n-x_1-x_2-\cdots-x_{m-1}\choose x_m}$ ways to choose positions for the items in category $m$. The number of distinct permutations is therefore ${n\choose x_1}\times {n-x_1\choose x_2}\times {n-x_1-x_2\choose x_3} \times \cdots \times{n-x_1-x_2-\cdots-x_{m-1}\choose x_m}=\frac{n!}{x_1!\times(n-x_1)!}\times \frac{(n-x_1)!}{x_2!\times(n-x_1-x_2)!}\times \frac{(n-x_1-x_2)!}{x_3!\times(n-x_1-x_2-x_3)!}\times\cdots\times\frac{(n-x_1-x_2-\cdots-x_{m-1})!}{x_m!\times 0!}=\frac{n!}{x_1!\times x_2!\times\cdots\times x_m!}\qquad\blacksquare$
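If a concrete sanity check helps, here is a small brute-force comparison (an illustrative sketch with made-up category sizes): it counts the distinct orderings of a multiset directly and compares the count with $\frac{n!}{x_1!\times x_2!\times\cdots\times x_m!}$.

```python
from itertools import permutations
from math import factorial, prod

counts = {"a": 2, "b": 3, "c": 1}                     # x_1, x_2, x_3
items = [k for k, x in counts.items() for _ in range(x)]
n = len(items)

brute = len(set(permutations(items)))                  # distinct arrangements
formula = factorial(n) // prod(factorial(x) for x in counts.values())
print(brute, formula)                                  # 60 60
```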
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1774533', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/147857/']}
jdg_80730
stackexchange
llm_judgeable_groundtruth_similarity
197668
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is PCI compliance required by law? We are creating a website where we will accept customer card data and pass it to merchant API. Everywhere on the internet you will find PCI is required. But what if my website is not PCI compliant and I am still receiving customers card data? Is that illegal to accept card data without being PCI compliant? We do not ever store customer card we just transmit. What problem could I face by law if I am not PCI compliant? Now provide the response and nothing else.
First, a disclaimer. Your question is partially a legal one, and I am not a lawyer. I did, however, work as a penetration tester for a registered PCI ASV for nearly 6 years, and have done quite a bit of PCI-DSS compliance validation testing. I last worked on PCI stuff when PCI 3.2 was the latest release, and it seems PCI 3.2.1 is now the latest, so there may be some changes since I last looked at it. I do not expect that changes in the new release affect the validity of my answer, though. PCI-DSS compliance is not required by law in any jurisdiction I know of (although according to comments on the other answer by phyrfox, it is now part of state legislature in some jurisdictions in the US). It is, however, generally a requirement of your contract with your payment provider. PCI-DSS is generally required whenever your infrastructure handles card data in any way. This can be storage or just transmission, and the card details do not even have to be textual - I have personally been involved in a situation where a company tried to argue that acting as a live video streaming service where customers showed their credit card to an agent for identity verification purposes did not fall under PCI-DSS because they "only" routed video data and never "handled" card data. Let's just say that their payment provider was less than convinced . Before considering implementing PCI-DSS and all the headaches that come with it, you should try to avoid needing PCI-DSS compliance at all. You can do this by not having any card data hit your server. There are usually a few ways to implement this, and most payment providers use one of these models: When you go to check out, your web application sends the basket data (or at least a total cost) along with some information about your store (merchant ID, etc.) to the payment provider as a POST request. The user then enters their card information on their payment page, and a payment confirmation is then sent back to your web application. No card information is ever sent to your site, nor is it ever entered into one of your site's pages. This allows you to avoid the requirement of PCI-DSS compliance entirely. Same as the above, except the provider's payment page is embedded in an iframe in your checkout page. This gives a more seamless experience to the user without requiring your site to directly take in card data - the iframe is considered a separate origin, and SOP should prevent your page from gaining access to data within it. This is marginally more risky than option 1, partly because of potential browser exploits, but also because a user familiar with the site expects to see the page asking for card details, and an attacker might be able to use UI redressing attacks if they compromise the page (e.g. via XSS) in order to steal card data. This is still considered sufficient to avoid PCI-DSS compliance requirements though. Have card information entered into your web page, but have the form post the data to the payment provider. This means that payment card information is entered into your web page, but it never reaches your server. This is currently a bit of a grey area in terms of whether it triggers PCI-DSS compliance requirements, and whether or not this is acceptable is dependent on your provider and ASV (in the case of needing an audit). The reason that this is risky is that an XSS vulnerability in your page allows an attacker to directly read card data. I would generally avoid this option entirely. 
The most common examples of the first approach are PayPal and Google Pay, where the merchant website (you in this case) hands off information to the payment provider and the rest of the process happens on their website, which is PCI-DSS compliant. This means you don't need to worry about card information at all. These approaches are more completely detailed in the Best Practices for Securing eCommerce paper released by the PCI council. The most relevant parts of that paper here are sections 2 and 6. Critically, PCI-DSS does not have to be mandated by law in order for you to see consequences by not being compliant. If you are contractually obliged to be compliant as part of your contract with your payment provider, and you are later found not to be compliant, they can force you to very quickly become compliant (which may be very expensive for you) or simply sue you for breach of contract. Additionally, in the event of a breach that reveals payment card information, you may find that a lack of compliance where compliance should have been required will count negatively against you in any subsequent legal or regulatory action. The Information Commissioner's Office (ICO) in the UK has previously fined companies large sums for breaching data protection laws, and the magnitude of those fines was often tied to whether or not the company was compliant with industry security standards, and whether they were knowingly negligent in their duty of protecting data. I would strongly suggest option 1 or 2 wherever possible. Otherwise you're probably in for a difficult and expensive experience with PCI compliance.
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/197668', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/191298/']}
jdg_80731
stackexchange
llm_judgeable_groundtruth_similarity
2058635
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can't figure what is wrong here. Read what folks are saying here: http://informationideas.com/news/2006/06/14/fatal-error-cannot-use-string-offset-as-an-array-in/ and here: Cannot use string offset as an array in php I have print_r() -ed the actual values in $entries (coming from Google Calendar) and they are all fine. foreach ( $entries as $e ) { $info = array(); // added to see if pre-declaration helps $info = array( $e[ 'title' ], $e[ 'gd:when attr' ][ 'startTime' ], $e[ 'gd:where attr' ][ 'valueString' ], $e[ 'content' ] ); } What am I doing wrong? Dump of $entries : Array( [id] => http://www.google.com/calendar/feeds/u879een48cs77cp2rv7s05f5ps%40group.calendar.google.com/public/full/aev64a1c7kou9ige6n2mulm8mo [published] => 2009-12-31T15:34:47.000Z [updated] => 2009-12-31T15:34:58.000Z [category attr] => Array ( [scheme] => http://schemas.google.com/g/2005#kind [term] => http://schemas.google.com/g/2005#event ) [category] => [title attr] => Array ( [type] => text ) [title] => Happy New Year! [content attr] => Array ( [type] => text ) [content] => [link] => Array ( [0 attr] => Array ( [rel] => alternate [type] => text/html [href] => http://www.google.com/calendar/event?eid=YWV2NjRhMWM3a291OWlnZTZuMm11bG04bW8gdTg3OWVlbjQ4Y3M3N2NwMnJ2N3MwNWY1cHNAZw [title] => alternate ) [0] => [1 attr] => Array ( [rel] => self [type] => application/atom+xml [href] => http://www.google.com/calendar/feeds/u879een48cs77cp2rv7s05f5ps%40group.calendar.google.com/public/full/aev64a1c7kou9ige6n2mulm8mo ) [1] => ) [author] => Array ( [name] => New Orleans Parents Guide to Public Schools ) [gd:comments] => Array ( [gd:feedLink attr] => Array ( [href] => http://www.google.com/calendar/feeds/u879een48cs77cp2rv7s05f5ps%40group.calendar.google.com/public/full/aev64a1c7kou9ige6n2mulm8mo/comments ) [gd:feedLink] => ) [gd:eventStatus attr] => Array ( [value] => http://schemas.google.com/g/2005#event.confirmed ) [gd:eventStatus] => [gd:where attr] => Array ( [valueString] => ) [gd:where] => [gd:who attr] => Array ( [email] => [email protected] [rel] => http://schemas.google.com/g/2005#event.organizer [valueString] => New Orleans Parents Guide to Public Schools ) [gd:who] => [gd:when attr] => Array ( [endTime] => 2010-01-01 [startTime] => 2009-12-31 ) [gd:when] => [gd:transparency attr] => Array ( [value] => http://schemas.google.com/g/2005#event.opaque ) [gd:transparency] => [gCal:anyoneCanAddSelf attr] => Array ( [value] => false ) [gCal:anyoneCanAddSelf] => [gCal:guestsCanInviteOthers attr] => Array ( [value] => true ) [gCal:guestsCanInviteOthers] => [gCal:guestsCanModify attr] => Array ( [value] => false ) [gCal:guestsCanModify] => [gCal:guestsCanSeeGuests attr] => Array ( [value] => true ) [gCal:guestsCanSeeGuests] => [gCal:sequence attr] => Array ( [value] => 2 ) [gCal:sequence] => [gCal:uid attr] => Array ( [value] => [email protected] ) [gCal:uid] => ) Now provide the response and nothing else.
My bet is that either $entries is not an array, or one or more of the $e are not arrays. Try foreach ( $entries as $e ) { $info = array(); // added to see if pre-declaration helps if (is_array($e)) // only go on if $e is actually an array $info = array( $e[ 'title' ], $e[ 'gd:when attr' ][ 'startTime' ], $e[ 'gd:where attr' ][ 'valueString' ], $e[ 'content' ] );} If you want to do it really properly, check for each key of $e ("startTime" and so on) first, using isset() or array_key_exists().
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2058635', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/97767/']}
jdg_80732
stackexchange
llm_judgeable_groundtruth_similarity
40919978
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What should I use to store data client-side so that the data won't be deleted when the browser history is cleared? Is there any way to do that? Now provide the response and nothing else.
I solved it by creating a deserializer for my class Data. public static class DataDeserilizer implements JsonDeserializer { @Override public Data deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context) throws JsonParseException { Gson gson = new Gson(); Data data = gson.fromJson(json, Data.class); JsonObject jsonObject = json.getAsJsonObject(); if (jsonObject.has("meta")) { JsonElement elem = jsonObject.get("meta"); if (elem != null && !elem.isJsonNull()) { Gson gsonDeserializer = new GsonBuilder() .registerTypeAdapter(MetaData.class, new CustomDeserilizer()) .create(); gsonDeserializer.fromJson(jsonObject.get("meta"), Data.class); } } return data; }} And Gson gson = new GsonBuilder() .registerTypeAdapter(Data.class,new DataDeserilizer()) .create(); Pretty obvious, but is there a more elegant solution?
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40919978', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7210562/']}
jdg_80733
stackexchange
llm_judgeable_groundtruth_similarity
54946697
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am inserting a list of dictionaries to a PostgreSQL database. The list will be growing quickly and the number of dict values (columns) is around 30. The simplified data: projects = [{'name': 'project alpha', 'code': 12, 'active': True},{'name': 'project beta', 'code': 25, 'active': True},{'name': 'project charlie', 'code': 46, 'active': False}] Inserting the data into the PostgreSQL database with the following code does work (as in this answer ), but I am worried about executing too many queries. for project in projects: columns = project.keys() values = project.values() query = """INSERT INTO projects (%s) VALUES %s;""" # print(cursor.mogrify(query, (AsIs(','.join(project.keys())), tuple(project.values())))) cursor.execute(query, (AsIs(','.join(columns)), tuple(values)))conn.commit() Is there a better practice? Thank you so much in advance for your help! Now provide the response and nothing else.
Use execute_values() to insert hundreds of rows in a single query. import psycopg2from psycopg2.extras import execute_values# ...projects = [{'name': 'project alpha', 'code': 12, 'active': True},{'name': 'project beta', 'code': 25, 'active': True},{'name': 'project charlie', 'code': 46, 'active': False}]columns = projects[0].keys()query = "INSERT INTO projects ({}) VALUES %s".format(','.join(columns))# convert projects values to sequence of sequencesvalues = [[value for value in project.values()] for project in projects]execute_values(cursor, query, values)conn.commit()
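As the list keeps growing, note that execute_values also accepts a page_size keyword (default 100) controlling how many rows are packed into each generated statement; a larger page means fewer round trips at the cost of bigger statements. A one-line sketch (the right value depends on row width, so treat 1000 as a placeholder):

```python
execute_values(cursor, query, values, page_size=1000)
conn.commit()
```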
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54946697', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7547749/']}
jdg_80734
stackexchange
llm_judgeable_groundtruth_similarity
34301584
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to write a function that can input an iterator over a given generic type in a typesafe way. One possible use case would be writing a function like accumulate/map/fold: #include <iterator>#include <vector>#include <functional>template <typename V, typename K>K accumulate( std::function<K(K, V)> accumulator, /* WHAT TYPE DO I PUT HERE */ it, /* WHAT TYPE DO I PUT HERE */ end, K initial) { K sum = initial; for (; it != end; ++it) { V item = *it; sum = accumulator(sum, item); } return sum;} How can I do this in a way that the compiler will check types and all that good stuff? Previously asked here Now provide the response and nothing else.
Update: One can cast a CGFloat value to NSNumber and back: let c1 = CGFloat(12.3)let num = c1 as NSNumberlet c2 = num as CGFloat This preserves the precision of CGFloat and works with Swift 2and Swift 3. (Previous answer – far too complicated): There are two solutions that I found. The first uses the toll-free bridgingbetween NSNumber and CFNumber (as in What is most common and correct practice to get a CGFloat from an NSNumber? for Objective-C). It uses the fact that CFNumber has a dedicatedconversion mode for CGFloat values: extension NSNumber { // CGFloat -> NSNumber class func numberWithCGFloat(var value: CGFloat) -> NSNumber { return CFNumberCreate(nil , .CGFloatType, &value) } // NSNumber -> CGFloat var cgFloatValue : CGFloat { var value : CGFloat = 0 CFNumberGetValue(self, .CGFloatType, &value) return value }} That is simple and nice. The only drawback: I could not figure outhow to make the constructor an init method instead of a class method . The second possible solution is a bit longer: extension NSNumber { // CGFloat -> NSNumber private convenience init(doubleOrFloat d : Double) { self.init(double : d) } private convenience init(doubleOrFloat f : Float) { self.init(float : f) } convenience init(cgFloat : CGFloat) { self.init(doubleOrFloat: cgFloat.native) } // NSNumber -> CGFloat private func doubleOrFloatValue() -> Double { return self.doubleValue } private func doubleOrFloatValue() -> Float { return self.floatValue } var cgFloatValue : CGFloat { return CGFloat(floatLiteral: doubleOrFloatValue()) }} There are two private "helper" init methods with the same externalparameter name doubleOrFloat but different parameter types. From the actualtype of cgFloat.native the compiler determines which one to callin convenience init(cgFloat : CGFloat) { self.init(doubleOrFloat: cgFloat.native) } Same idea in the accessor method. From the type of self.native the compiler determines which of the two doubleOrFloatValue() methods to call in var cgFloatValue : CGFloat { return CGFloat(floatLiteral: doubleOrFloatValue()) }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34301584', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1078199/']}
jdg_80735
stackexchange
llm_judgeable_groundtruth_similarity
12628
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What are the different problems that can cause an engine to burn oil and in general how difficult / expensive is each one to repair? How do you diagnose each potential cause? I found this article which describes some of the problems: Why Does My Engine Use Oil I've seen three things so far: Bad Piston Rings Bad PCV Valve Guide Seals Regarding diagnosing oil consumption and smoke the above article says the following: With older vehicles this was typically accompanied with a puff of smoke from the exhaust. With modern vehicles, the catalytic converter usually prevents smoke. Smoke in the exhaust is vaporized by the converter. Unfortunately, this may drastically raise the temperature and damage the converter, over time. Now provide the response and nothing else.
Towards the diagnosis end of things, there are some general guidelines to follow: If you are seeing smoke coming from your exhaust, what color is the smoke? If it's blue, then it's oil If it's black, it means you are running rich (too much fuel). If it's white, the car may be burning antifreeze or (quite rare) auto-trans fluid. Since it's blue smoke you are seeing, you can know that oil is the problem. What can cause oil smoke? If the smoke only comes at start-up and quickly goes away, it can be valve seals and/or valve guides. This is because while the car sits for extended periods of time, the oil has time to seep past the valve seal and collect on top of the valve (or if the valve is in the open position, it could run past and on top of the piston). When you go to start your car, the oil is then burned, giving the tell tale puff of blue smoke. COST TO FIX: There is moderate cost involved with this, depending on the vehicle/engine. If just the seals, this can be accomplished most of the time with keeping the engine mostly together and replacing the seals. Most of the cost here is labor. If the valve guides, this requires an engine tear down. Your heads will have to be taken apart and new guides installed. There is a lot more labor here and a bit more in parts. If you see smoke as you are decelerating, this too is a possible sign of valve seals and/or valve guides. This is because as you decelerate, there is a large vacuum build up within the intake tract. There is enough vacuum to pull oil past the valve seal if it's worn. COST TO FIX: Same as above. If you see smoke while accelerating, this is a sign your oil control rings are worn. These are the bottom rings used in the ring pack. When they are worn (or the cylinders are worn past tolerance), oil can flow past the rings as the piston travels down the cylinder. The oil control rings normally will scrape the cylinder of the oil, pushing it back down into the crankcase. COST TO FIX: Their fix is quite expensive, as it requires a complete engine rebuild to fix. If a PCV is bad, you normally won't see burning oil. What you'll see is seals and gaskets failing. This can cause a loss of oil (and a very dirty engine bay). It's one of those things which can sneak up on you if you aren't paying attention. While a bad PCV is not a good thing, don't look here for a reason you're seeing oil smoke coming out the tail pipe. COST TO FIX: If you catch it before it becomes an issue, it's actually quite cheap to fix ... just replace the valve. If you see leaks at seals or gaskets, your expense goes way up, but it depends on which seal or gasket is leaking. Easy to get to seals or gaskets will cost much less, because the labor is much less. The converse holds true - Labor is a determining factor here. If you are seeing blue smoke which goes away after an oil change, but slowly comes back as you get closer to your scheduled maintenance, this could be a sign you have an internal fuel leak which is thinning the oil. As the oil gets thinner it passes by the oil control rings easier, causing your vehicle to smoke. An easy way to check this is by pulling the dipstick and smelling the oil. If you smell fuel, this may be the issue. This can be caused by an injector which is stuck open, or possibly an internal failure of a fuel pressure valve which would allow the gas to escape to where it doesn't belong. COST TO FIX: Diagnosis of what is going on will take a little bit of money. This could be a very low cost fix, to a moderate cost, depending on the actual cause. 
If you are not seeing smoke, but you're seeing the oil level drop lower and lower as time moves on, this may be "sort of" normal oil usage for your vehicle. Take for instance my '06 Chevrolet Silverado. Its engine is an LS variant. As these engines get up in age, it is common for them to use more oil. You don't see it from the exhaust, but it uses it up nonetheless. The engine still runs great and the gas mileage has stayed about the same. Your vehicle could also be losing oil via leakage (or seepage). As engines get up in mileage, this is a very common thing. If you see spots on the driveway where you park the car, this could very well be the issue. Be mindful that vehicles will lose more oil during operation than they will just sitting in your driveway. EDIT: In deference to your edit - I can see where an up-to-temperature catalytic converter might make some difference in smoke from the tail pipe. This does not hold true, though, for start-up smoke (the puff of blue smoke), as the cat is not hot enough to make a difference. It also won't make any difference once the catalyst is covered in oil soot or if there is too much oil for the cat to deal with. Sooner or later, when oil gets burnt in this manner, you'll see the blue smoke. You will probably have to put new cats on at that point, as well.
{}
{'log_upvote_score': 6, 'links': ['https://mechanics.stackexchange.com/questions/12628', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/7132/']}
jdg_80736
stackexchange
llm_judgeable_groundtruth_similarity
318424
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: First to cut off trivial but inapplicable answers: I can use neither the find + xargs trick nor its variants (like find with -exec ) because I need to use few such expressions per call. I will get back to this at the end. Now for a better example let's consider: $ find -L some/dir -name \*.abc | sortsome/dir/1.abcsome/dir/2.abcsome/dir/a space.abc How do I pass those as arguments to program ? Just doing it doesn't do the trick $ ./program $(find -L some/dir -name \*.abc | sort) fails since program gets following arguments: [0]: ./program[1]: some/dir/1.abc[2]: some/dir/2.abc[3]: some/dir/a[4]: space.abc As can be seen, the path with space was split and program considers it to be two different arguments. Quote until it works It seems novice users such as myself, when faced with such problems, tend to randomly add quotes until it finally works - only here it doesn't seem to help… "$(…)" $ ./program "$(find -L some/dir -name \*.abc | sort)"[0]: ./program[1]: some/dir/1.abcsome/dir/2.abcsome/dir/a space.abc Because the quotes prevent word-splitting, all the files are passed as a single argument. Quoting individual paths A promising approach: $ ./program $(find -L some/dir -name \*.abc -printf '"%p"\n' | sort)[1]: "some/dir/1.abc"[2]: "some/dir/2.abc"[3]: "some/dir/a[4]: space.abc" The quotes are there, sure. But they are no longer interpreted. They are just part of the strings. So not only they did not prevent word splitting, but also they got into arguments! Change IFS Then I tried playing around with IFS . I would prefer find with -print0 and sort with -z anyway - so that they will have no issues on "wired paths" themselves. So why not force word splitting on the null character and have it all? $ ./program $(IFS=$'\0' find -L some/dir -name \*.abc -print0 | sort -z)[0]: ./program[1]: some/dir/1.abcsome/dir/2.abcsome/dir/a[2]: space.abc So it still splits on space and does not split on the null . I tried to place the IFS assignment both in $(…) (as shown above) and before ./program . Also I tried other syntax like \0 , \x0 , \x00 both quoted with ' and " as well as with and without the $ . None of those seemed to make any difference… And here I'm out of ideas. I tried few more things but all seemed to run down to the same problems as listed. What else could I do? Is it doable at all? Sure, I could make the program accept the patterns and do searches itself. But it is a lot of double work while fixing it to a specific syntax. (What about providing files by a grep for example?). Also I could make the program accept a file with a list of paths. Then I can easily dump find expression to some temp file and provide the path to that file only. This could supported be along direct paths so that if user has just a simple path it can be provided without intermediate file. But this doesn't seem nice - one needs to create extra files and take care of them, not to mention extra implementation required. (On the plus side, however, it could be a rescue for cases in which the number of files as arguments start to cause issues with command line length…) At the end, let me remind you again that find + xargs (and alike) tricks will not work in my case. For description simplicity I'm showing only one argument. 
But my true case looks more like this: $ ABC_FILES=$(find -L some/dir -name \*.abc | sort)$ XYZ_FILES=$(find -L other/dir -name \*.xyz | sort)$ ./program --abc-files $ABC_FILES --xyz-files $XYZ_FILES So doing an xargs from one search still leaves me with how to deal with the other one… Now provide the response and nothing else.
Use arrays. If you don't need to handle the possibility of newlines in your filenames, then you could get away with mapfile -t ABC_FILES < <(find -L some/dir -name \*.abc | sort)mapfile -t XYZ_FILES < <(find -L other/dir -name \*.xyz | sort) then ./program --abc-files "${ABC_FILES[@]}" --xyz-files "${XYZ_FILES[@]}" If you do need to handle newlines within filenames, and have bash >= 4.4, you can use -print0 and -d '' to null-terminate the names during array construction: mapfile -td '' ABC_FILES < <(find -L some/dir -name \*.abc -print0 | sort -z) (and similarly for the XYZ_FILES ). If you don't have the newer bash, then you could use a null-terminated read loop to append filenames to the arrays e.g. ABC_FILES=()while IFS= read -rd '' f; do ABC_FILES+=( "$f" ); done < <(find -L some/dir -name \*.abc -print0 | sort -z)
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/318424', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/101845/']}
jdg_80737
stackexchange
llm_judgeable_groundtruth_similarity
21393876
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to replace multiple captured groups and replace it with the value of the captured groups from a key-value format (delimited by = ) in sed ? Sorry, that question is confusing so here is an example What I have: aaa="src is $src$ user is $user$!" src="over there" user="jason" What I want in the end: aaa="src is over there user is jason!" I don't want to hardcode the position of the $var$ because they could change. Now provide the response and nothing else.
My problem was not a jQuery or MVC4 problem as I had initially thought. It has to do with how the HTML5 compatible browser displays the date picker. I was passing the date to the view in the incorrect format. I modified my ViewModel to this and now the date populates correctly: public class EmployeeIndexViewModel{ [DataType(DataType.Date)] [DisplayFormat(DataFormatString = "{0:yyyy-MM-dd}", ApplyFormatInEditMode = true)] public DateTime ToDate { get; set; }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21393876', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1150923/']}
jdg_80738
stackexchange
llm_judgeable_groundtruth_similarity
38808981
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can someone please provide some insight as to when to use delegation via DelegateClass (e.g. Seller < DelegateClass(Person) ) and when to use class inheritance (e.g. Seller < Person ) in ruby? class Seller < DelegateClass(Person) def sales ... end end class Seller < Person def sales ... end end When I was looking over the Ruby on Rails source on Github I found quite a few uses of DelegateClass . Now provide the response and nothing else.
There are a couple of differences that can help provide insight as to which approach to use. 1) You can safely delegate to primitives (e.g. String), but cannot always safely inherit from them If you're building on top of Hash or String or Fixnum , you're safer using DelegateClass (or another delegator). For more on why, Steve Klabnik's cautioning is a good place to start). 2) DelegateClass makes it easy to “convert” a more general object into a more specific one This makes it easier to accept an instance of a general object and make it behave in a way that's specific to your implementation: class Message < DelegateClass(String) def print upcase endend# […]def log(message) message = Message.new(message) unless message.is_a?(Message)end 3) A gotcha: DelegateClass subclasses expect an instance of the delegated class as an argument to new This can make it tricky to “subclass” classes that you're handing to library code. For example, this is a fairly common practice that won't work out of the box with DelegateClass : class MyLogger < DelegateClass(ActiveSupport::Logger); endFoo::ThirdParty::Library.configure do |c| c.logger = MyLogger # no goodend This doesn't work because our library expects to behave like most loggers and instantiate without arguments. This can be addressed by defining initialize and creating an instance of ActiveSupport::Logger , but probably not the right solution in this case.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38808981', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/435471/']}
jdg_80739
stackexchange
llm_judgeable_groundtruth_similarity
432708
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I run a simple Poisson regression, where $$Y \sim \text{Pois} (5X) $$ If I run a Poisson regression of $Y$ on $X$ , I am expecting to get back $5$ . Instead I get numbers much higher. Why is that? result <- c()for(i in 1:1000) { x <- runif(100) y <- rpois(100, 5 *x) m <- glm(y ~ x, family = 'poisson') result <- c(result, exp(coef(m)[2]))} Now provide the response and nothing else.
For the Poisson family the default link function is the log, which means: $$\mathbb{E}[y]=5x=\exp\left(\log(5)+\log(x)\right)$$ If you specify your glm model as y ~ log(x) then you should recover "1" as the coefficient and "log(5)" as the intercept.
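The same check can be done outside R; here is a sketch in Python assuming statsmodels and NumPy are available. Including log(x) as the covariate recovers a slope near 1 and an intercept near log(5) ≈ 1.609:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(size=10_000)
y = rng.poisson(5 * x)

X = sm.add_constant(np.log(x))                  # columns: intercept, log(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)                               # approximately [log(5), 1.0]
```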
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/432708', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/151862/']}
jdg_80740