Dataset columns:
- source: string class (1 distinct value)
- task_type: string class (1 distinct value)
- in_source_id: string, length 1 to 8
- prompt: string, length 209 to 40.4k
- gold_standard_solution: string, length 0 to 56.7k
- verification_info: string class (1 distinct value)
- metadata: string, length 138 to 225
- problem_id: string, length 9 to 10
stackexchange
llm_judgeable_groundtruth_similarity
5765780
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Heres the code: #include <cstdlib>#include <iostream>using namespace std;class classA{ protected: void setX(int a); private: int p;};classA:: classA(){ //error here. p = 0;}void classA:: setX(int a){ p = a; }int main(){ system("PAUSE"); return EXIT_SUCCESS;} Now provide the response and nothing else.
You forgot to declare the constructor in the class definition. Declare it in the public section of the class (if you want clients to create instances with it):

class classA
{
    public:
        classA();   // you forgot this!
    protected:
        void setX(int a);
    private:
        int p;
};

Now you can write its definition outside the class, which you have already done.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5765780', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/538293/']}
jdg_84541
stackexchange
llm_judgeable_groundtruth_similarity
35497598
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My json array: [{"id":"7","name":"hello"},{"id":"7","name":"shan"},{"id":"7","name":"john"}{"id":"7","name":"hello"}] I want to get a new array that matches a regular expression on name starting with a letter. I am using regexp but i don't know how to implement it. Here is my code: var newitem=_.filter(result,item=>item.name='hello'); console.log(newitem); But it returns with only strict match of name . Please help me to modify the above so that the result is a new array as described. Expected output when a usertype letter h it shows only the row {"id":"7","name":"hello"} Now provide the response and nothing else.
To check whether the name starts with a letter, you can use RegExp#test with a regex:

var newItem = _.filter(result, obj => /^[a-zA-Z]/.test(obj.name));

The regex ^[a-zA-Z] checks whether the name starts with a letter of the alphabet.

var arr = [
    { "id": "7", "name": "hello" },
    { "id": "7", "name": "shan" },
    { "id": "7", "name": "jhon" },
    { "id": "7", "name": "hello" },
    { id: 10, name: '$haun' }
];
var newItem = _.filter(arr, obj => /^[a-zA-Z]/.test(obj.name));
console.log(newItem);

<script src="https://cdnjs.com/libraries/lodash.js/"></script>

The same thing can be written with Array#filter:

arr.filter(obj => /^[a-zA-Z]/.test(obj.name));

var arr = [
    { "id": "7", "name": "hello" },
    { "id": "7", "name": "*shan" },
    { "id": "7", "name": "jhon" },
    { "id": "7", "name": "hello" },
    { id: 10, name: '$haun' }
];
var newItem = arr.filter(obj => /^[a-zA-Z]/.test(obj.name));
console.log(newItem);
document.body.innerHTML = '<pre>' + JSON.stringify(newItem, 0, 4) + '</pre>';

Update: "when a user types the letter h it shows only the row" - for that you can use

_.filter(result, obj => /^h/.test(obj.name));

Add the i (case-insensitive) flag to match regardless of case; /^h/i will match both h and H.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35497598', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4225893/']}
jdg_84542
stackexchange
llm_judgeable_groundtruth_similarity
318518
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I trying to understand the manpage of the dd program, which mentions: Sending a USR1 signal to a running 'dd' process makes it print I/O statistics to standard error and then resume copying. $ dd if=/dev/zero of=/dev/null& pid=$! $ kill -USR1 $pid; sleep 1; kill $pid What does pid=$! mean? Is this an assignment of a variable, which gets the pid of dd ?And is eventually used in the $pid variable? Also why do they use sleep and kill ? Is this the way to use -USR1 ? Now provide the response and nothing else.
dd if=/dev/zero of=/dev/null &

The trailing & means run the preceding command in the background. (Disclaimer: this is an oversimplified statement.) Refer to this: $! is the PID of the most recent background command. So pid=$! assigns the most recent background PID to the variable pid, which here is dd's PID.

Also, why do they use sleep and kill? You need kill $pid (with no signal specified, kill's default is TERM, i.e. process termination) to terminate the dd process after you are done testing; otherwise the dd process just stays in the background consuming CPU. Check your platform's system monitor to see it. By contrast, kill -USR1 $pid prints I/O statistics but does not terminate the process. Without the sleep 1, the dd process may get terminated by the last statement, kill $pid, before it has a chance to write the statistics output to your terminal. The commands run in order, but the trap-and-write operation (kill -USR1 $pid) may be slower than the terminate operation (kill $pid), so the sleep 1 delays kill $pid to make sure the statistics output has finished printing.

Is this the way to use -USR1? Just man dd:

Sending a USR1 signal to a running 'dd' process makes it print I/O statistics to standard error and then resume copying.

And man 7 signal:

SIGUSR1   30,10,16   Term   User-defined signal 1
SIGUSR2   31,12,17   Term   User-defined signal 2

Combining both statements, USR1 is a user-defined signal for which dd installs a handler, giving the user a way to make it print I/O statistics on the fly. It is a program-specific handler; it does not mean you can kill -USR1 other_program_pid and expect statistics output. You might also be interested in this: Why does SIGUSR1 cause a process to be terminated?
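If it is easier to see as a script, here is a rough Python sketch of the same sequence (GNU dd on Linux assumed; the timings are only illustrative):

import os
import signal
import subprocess
import time

# Start dd copying in the background, like `dd if=/dev/zero of=/dev/null &`
dd = subprocess.Popen(["dd", "if=/dev/zero", "of=/dev/null"])

time.sleep(2)                      # let it copy for a while
os.kill(dd.pid, signal.SIGUSR1)    # ask dd to print I/O statistics to stderr and keep copying
time.sleep(1)                      # give dd time to finish writing the statistics
dd.terminate()                     # plain SIGTERM, equivalent to `kill $pid`
dd.wait()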
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/318518', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/106182/']}
jdg_84543
stackexchange
llm_judgeable_groundtruth_similarity
580
Below is a question asked on the forum earthscience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The enlightening image below is of a lightning strike slowed down at 10,000 frames per second. It can be seen that the most intense flash produced from the lightening occurs in the direction from the ground up. Why does this final "ground-up" strike occur and why is it so much brighter and faster than the initial part of strike heading towards the ground? Now provide the response and nothing else.
Does lightning strike from the sky down, or the ground up? The answer is both. Cloud-to-ground lightning comes from the sky down, but the part you see comes from the ground up. A typical cloud-to-ground flash lowers a path of negative electricity (that we cannot see) towards the ground in a series of spurts. Objects on the ground generally have a positive charge. Since opposites attract, an upward streamer is sent out from the object about to be struck. When these two paths meet, a return stroke zips back up to the sky. It is the return stroke that produces the visible flash, but it all happens so fast - in about one-millionth of a second - so the human eye doesn't see the actual formation of the stroke. Source: National Severe Storms Laboratory The reason is that when cloud-to-ground strike approaches the ground, the presence of opposite charges on the ground enhances the strength of the electric field and the "downward leader" strike creates bridge for the "return stroke"; this per the wiki page for Lightning . Cloud to cloud and Intra-Cloud Lightning Might be worth also noting that cloud-to-ground is not as common as Cloud to cloud (CC) and Intra-Cloud (IC ) : Lightning discharges may occur between areas of cloud without contacting the ground. When it occurs between two separate clouds it is known as inter-cloud lightning, and when it occurs between areas of differing electric potential within a single cloud it is known as intra-cloud lightning. Intra-cloud lightning is the most frequently occurring type. Ground-to-Cloud Appears that ground-to-cloud is possible, though normally only a result of a man-made object creating "unnatural" electric potential, and is the least common type of lightning.
{}
{'log_upvote_score': 7, 'links': ['https://earthscience.stackexchange.com/questions/580', 'https://earthscience.stackexchange.com', 'https://earthscience.stackexchange.com/users/51/']}
jdg_84544
stackexchange
llm_judgeable_groundtruth_similarity
201359
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Richard Dawkins has described Ronald Fisher as "the father of modern statistics and experimental design", a line which is quoted in Fisher's Wikipedia biography . And also Anders Hald called him "a genius who almost single-handedly created the foundations for modern statistical science" in his book A History of Mathematical Statistics . I just wonder what exactly he did so people give him such a high evaluation? Now provide the response and nothing else.
It is very difficult to write an answer to the question What were the main statistical contributions of Ronald Fisher? since there are already numerous excellent works on this topic, created by excellent writers, including great statisticians e.g.: Hotelling, 1951, The Impact of R. A. Fisher on Statistics Savage, 1976, On Rereading R. A. Fisher Yates, 1964, Sir Ronald Fisher and the Design of Experiments Yates, 1962, Sir Ronald Aylmer Fisher (1890 - 1962) Pearce, 1979, Experimental Design: R. A. Fisher and Some Modern Rivals Efron, 1998, R. A. Fisher in the 21st Century These works are very difficult to match in a few simple lines on an internet Q&A board. On top of that it is not quite easy to grasp the entirety of ideas from Fisher, as Efron wrote in his work on Fisher: One difficulty in assessing the importance of Fisherian statistics is that it's hard to just say what it is. Fisher had an amazing number of important ideas and some of them, like randomization inference and conditionality, are contradictory. It's a little as if in economics Marx, Adam Smith and Keynes turned out to be the same person. Fisher was a pioneer Already a simple, but very good, source to Fisher's contribution is Wikipedia. Just reading the article on the history of statistics (or you can use any other text) will give you some insight in the amount and importance of Fisher's contributions. You will also see that it is partly time, location and luck that made Fisher a great contributor. Fisher was an important and influential statistician in the early 20th century when the basic foundations of applied statistics were created and the field was relatively small (comparable to the period of the 18th and 19th century in mathematics). The first journal of statistics and the first statistics department at a university had just been started when Fisher entered the stage. Before the beginning of the 20th century, there were mostly methods to do regression and several ideas about distributions of residual terms and errors, used in such fields as astronomy. Concepts of measurement errors and probability of results. This type of mathematics and logic (more close to pure mathematics, and... seen as more noble, and less condemned by serious mathematicians of that time), became applied more widely to fields of Fisher's choice: genetics, evolution, biology, agriculture. Since Fisher, an excellent mathematician, provided major contributions to these early developments (or may even be considered as the major driver for these developments), his work has been placed at an important position in the history of statistics. Basic concepts and tools If you look at the topics in an introduction book on statistics (specifically the mathematical concepts, or inference) you might consider Fisher as the dominating contributor. It is also Fisher who wrote the first, and most influential, introduction to statistics books : Statistical Methods for Research Workers (1925) The Design of Experiments (1935) (using the tea cup experiment to explain among others, randomization, the use of latin squares, null hypothesis, significance, sensitivity/power, and basically everything; Yates provides a historic background to this work) Note that online versions of these books exist SMRW and partially DE (see readings October 29 b) . 
From 1912 to 1925, Fisher: helped to improve the chi-square test (where Pearson and others were wrong about the number of degrees of freedom for many years), provided an exact test to calculate the p-value for goodness of fit with low number of observations (which was named after him as the Fisher's exact test ), wrote a proof (as an undergraduate) for Gosset's 'student's distribution' (and developed it further during his work on small observation numbers , such as ideas of using $N-1$ degrees of freedom instead of the sample size $N$ when using sample statistics) ( see historic description by Fisher's daughter Joan Fisher Box ), developed analysis of variance and the F-distribution (also named after him), and (another "little" thing that he did as an undergraduate) was developing the basics and concepts for maximum likelihood ( Aldrich's R. A. Fisher and the Making of Maximum Likelihood ). So roughly this covers most of the basic inferential tools that current introduction texts use. While doing this work on statistics Fisher tackled major problems in genetics that make people like Richard Dawkins admire him so much. Terminology Fisher introduced many concepts and terms and improved statistical language. Two recent questions on this Q&A site relate to Fisher. The question why so many variables are squared in statistics and why we so often the $L_2$ norm instead of the $L_1$ . It is Fisher who "proved" that the $L_2$ norm is a better (more efficient) estimator than the $L_1$ norm (assuming a perfect Gaussian distribution, which Fisher agreed later is not always true for 'real' errors), and introduced the terms deriving it as an 'efficient statistic' and a 'sufficient statistic' while doing so, as well as introducing the term 'variance' (in his 1920 paper A mathematical observation of the methods of determining the accuracy of observation by the mean error and the mean square error ). Foundations In the 1922 paper On the Mathematical foundations of theoretical statistics Fisher provides a short and simple overview of the main concepts, just to name the list of definitions: 'centre of location', 'consistency', 'distribution', 'efficiency', 'estimation', 'intrinsic accuracy', 'isostatistical regions', 'likelihood', 'location', 'optimum', 'scaling', 'specification', 'sufficiency', 'validity' . It requires a historian to see what Fisher contributed here in the sense of being the originator of concepts, and this also relates to Efron's statement. It is difficult to grasp what exactly is contributed by whom. But certainly Fisher helped to improve the statistical language and thinking. In that article Fisher starts mentioning the problem of applying terms like 'mean' and 'variance' to both the true distribution value as well as the estimated value. (I will try to avoid to put Fisher somewhere in a 'school' such as frequentist or Bayesian. I'd say he was just 'sufficiently' practical to whatever question was at hand). Advanced concepts In his further work Fisher developed early concepts of linear discriminant analysis: what linear function of the four measurements $X=\lambda_1 x_1 +\lambda_2 x_2 + \lambda_3 x_3 + \lambda_4 x_4$ will maximize the ratio of the difference between the specific means to the standard deviations within the species? The Use of Multiple Measurements in Taxonomic Problems, 1936 and the concept of estimation by likelihood that Fisher explored further, and has two concepts named after him, Fisher information and Fisher score . 
See Theory of statistical estimation, 1925, Two new properties of mathematical likelihood, 1934, and The logic of inductive inference, 1935. More links: R.A. Fisher Guide, by John Aldrich - an enormous source, if not the largest, with information on Fisher and many further references. Michael Hardy's answer on MathOverflow to a question about the greatest mathematicians: https://mathoverflow.net/a/173374
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/201359', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/95083/']}
jdg_84545
stackexchange
llm_judgeable_groundtruth_similarity
17917
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am reading the Wikipedia article for process management . My focus is on Linux. I cannot figure out the relation and differences between system call, message passing and interrupt, in their concepts and purposes. Are they all for processes to make requests to kernel for resources and services? Some quotes from the article and some other: There are two possible ways for an OS to regain control of the processor during a program’s execution in order for the OS to perform de-allocation or allocation: The process issues a system call (sometimes called a software interrupt); for example, an I/O request occurs requesting to access a file on hard disk. A hardware interrupt occurs; for example, a key was pressed on the keyboard, or a timer runs out (used in pre-emptive multitasking). There are two techniques by which a program executing in user mode can request the kernel's services: * System call* Message passing an interrupt is an asynchronous signal indicating the need for attention or a synchronous event in software indicating the need for a change in execution. A hardware interrupt causes the processor to save its state of execution and begin execution of an interrupt handler. Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt. Now provide the response and nothing else.
All modern operating systems support multitasking. This means that the system is able to execute multiple processes at the same time; either in pseudo-parallel (when only one CPU is available) or nowadays with multi-core CPUs being common in parallel (one task/core). Let's take the simpler case of only one CPU being available. This means that if you execute at the same time two different processes (let's say a web browser and a music player) the system is not really able to execute them at the same time. What happens is that the CPU is switching from one process to the other all the time; but this is happening extremely fast, thus you never notice it. Now let's assume that while those two processes are executing, you press the reset button (bad boy). The CPU will immediately stop whatever is doing and reboot the system. Congratulations: you generated an interrupt. The case is similar when you are programming and want to ask for a service from the CPU. The difference is that in this case you execute software code -- usually library procedures that are executing system calls (for example fopen for opening a file). Thus, 1 describes two different ways of getting attention from the CPU. Most modern operating systems support two execution modes: user mode and kernel mode. By default an operating system runs in user mode. User mode is very limited. For example, all I/O is forbidden; thus, you are not allowed to open a file from your hard disk. Of course this never happens in real, because when you open a file the operating system switches from user to kernel mode transparently. In kernel mode you have total control of the hardware. If you are wondering why those two modes exist, the simplest answer is for protection. Microkernel-based operating systems (for example MINIX 3) have most of their services running in user mode, which makes them less harmful. Monolithic kernels (like Linux) have almost all their services running in kernel mode. Thus a driver that crashes in MINIX 3 is unlikely to bring down the whole system, while this is not unusual in Linux. System calls are the primitive used in monolithic kernels (shared data model) for switching from user to kernel mode. Message passing is the primitive used in microkernels (client/server model). To be more precise, in a message passing system programmers also use system calls to get attention from the CPU. Message passing is visible only to the operating system developers. Monolithic kernels using system calls are faster but less reliable, while microkernels using message passing are slower but have better fault isolation. Thus, 2 mentions two different ways of switching from user to kernel mode. To revise, the most common way of creating a software interrupt, aka trap, is by executing a system call. Interrupts on the other hand are generated purely by hardware. When we interrupt the CPU (either by software or by hardware) it needs to save somewhere its current state -- the process that it executes and at which point it did stop -- otherwise it will not be able to resume the process when switching back. That is called a context switch and it makes sense: Before you switch off your computer to do something else, you first need to make sure that you saved all your programs/documents, etc so that you can resume from the point where you stopped the next time you'll turn it on :) Thus, 3 explains what needs to be done after executing a trap or an interrupt and how similar the two cases are.
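As a small illustration of "library procedures that are executing system calls", here is a sketch in Python (Linux with glibc assumed; the library name differs on other platforms) that calls write(2) directly through libc - the same kind of system call that higher-level I/O functions eventually reach:

import ctypes

# Load the C library; "libc.so.6" is a Linux/glibc assumption
libc = ctypes.CDLL("libc.so.6", use_errno=True)

msg = b"written by the write(2) system call\n"

# write(fd, buf, count): fd 1 is stdout. This is where the switch from
# user mode to kernel mode happens; the kernel performs the I/O and returns.
written = libc.write(1, msg, len(msg))
print("bytes written:", written)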
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/17917', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/674/']}
jdg_84546
stackexchange
llm_judgeable_groundtruth_similarity
299596
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Can anyone explain me why the voltage of the red circle is 1.85V? I know how the 25k-25k divider works but just don't know how the +5V-200k network affects the result. Now provide the response and nothing else.
Try to use superposition principle. First set 3.3V to 0V and solve for \$V_O'\$ simulate this circuit – Schematic created using CircuitLab $$V_O' = 5V*\frac{25k||25k}{25k||25k + 200k} = 5V*\frac{12.5k}{12.5k + 200k} = 0.2941V $$Next turn on 3.3V and turn off 5V source (set to 0V) simulate this circuit $$V_O''= 3.3V * \frac{25k||200k}{25k+25k||200k} = 3.3V\frac{22.22k}{22.22k + 25k} = 1.552V$$ and finally we have the answer $$V_O=V_O'+V_O'' = 0.294 + 1.552V = 1.846V$$ Or try to do nodal analysis. $$\frac{3.3V - V_O}{25k\Omega} + \frac{5V - V_O}{200k\Omega}=\frac{V_O}{25k\Omega} $$ And solve for \$V_O\$
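A quick numeric check of the result, as a Python sketch (the component values and topology are the ones assumed in the formulas above: 3.3 V through one 25k to the node, 25k from the node to ground, and 5 V through 200k to the node):

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

R1, R2, R3 = 25e3, 25e3, 200e3   # 3.3 V -> R1 -> node, node -> R2 -> ground, 5 V -> R3 -> node
V1, V2 = 3.3, 5.0

# Superposition: each source acting alone
vo_from_5v  = V2 * parallel(R1, R2) / (parallel(R1, R2) + R3)
vo_from_3v3 = V1 * parallel(R2, R3) / (R1 + parallel(R2, R3))
print(vo_from_5v + vo_from_3v3)   # about 1.846

# Nodal analysis: (V1 - Vo)/R1 + (V2 - Vo)/R3 = Vo/R2, solved for Vo
vo = (V1 / R1 + V2 / R3) / (1 / R1 + 1 / R2 + 1 / R3)
print(vo)                          # same, about 1.846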
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/299596', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/121144/']}
jdg_84547
stackexchange
llm_judgeable_groundtruth_similarity
26879391
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've updated to the new Xcode 6.1 and server 4.0. I was able to get continuous integration on the simulators but not the devices in the previous version. Now that I've updated, I'm not getting any continuous integration. I'm getting the following 2 errors: Terminated xcodebuild since it produced no output for too long The xcodebuild task completed with uncaught signal. Any ideas? Now provide the response and nothing else.
When I changed my signing certificate on my Xcode CI machine I could see that the bots got stuck on signing the build products. Checking running processes on the Xcode CI machine showed me what executable was trying to access my signing certificate /usr/bin/codesign --force --sign 043...DE2A --entitlements /Library/Develop... I opened Keychain and found my signing certificate in the System keychain, cmd + i to show info and added codesign to the list of executables that could access it without authenticating.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26879391', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3011530/']}
jdg_84548
stackexchange
llm_judgeable_groundtruth_similarity
65664
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Bob and Alice have together a sum of $103$ years old. In $4$ years, Alice will have $2$ times the age of Bob. What age will they have ? I'm trying different equation and doing my substitution but I can't get a valid answer (a integer). $$ x+y = 103 ,$$ $$ (x+4) + (y+4) = 2x $$ etc.. Thanks for help, it's appreciated ! Now provide the response and nothing else.
Consider an analogous problem: Art and Bill's answers currently have $31$ total upvotes. If both receive $4$ more upvotes then Art will have twice as many upvotes as Bill. How many is that? Let $\rm\:A,B\:$ be the upvotes Art, Bill have after the $4$ upvotes. We are given $\rm\:A = 2\:B\:,\:$ and before the $4$ upvotes they had sum $31\:,\:$ i.e. $\rm\: A\!-\!4\:+\:B\!-\!4\: =\: 31\:.\:$ Hence $\rm\ 39 = A + B = 2\:B + B = 3\:B\:.\:$ Therefore $\rm\:B = 13\:$ and $\rm\:A = 2\:B = 26\:.\:$ Extra credit: Whose answer will be accepted?
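For what it's worth, the same arithmetic for the analogous upvote problem written out as a tiny Python check (variable names are mine):

total_before = 31                    # Art's and Bill's upvotes before the extra votes
total_after = total_before + 2 * 4   # both gain 4 upvotes -> 39

bill = total_after // 3              # A = 2*B, so A + B = 3*B  ->  B = 13
art = 2 * bill                       # 26
print(art, bill)                     # 26 and 13 now, i.e. 22 and 9 before the upvotes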
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/65664', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/16304/']}
jdg_84549
stackexchange
llm_judgeable_groundtruth_similarity
3594388
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Ruby code with different classes in a few files. In one file, I start the execution. This file require s my other files. Is this a good way to start a ruby code? When I run the code from a symbolic link, for example DIR2/MyRubyCode is a link to the main file DIR1/MyRubyCode.rb , then my requires will fail. I solved the problem by adding the path DIR1 to $LOAD_PATH before the require , but I think there would be much better ways to do it. Do you have any suggestions about that? Now provide the response and nothing else.
If you want to check whether a Ruby file is being require'd or executed with ruby MyRubyCode.rb, check the __FILE__ constant:

# If the first argument to `ruby` is this file.
if $0 == __FILE__
  # Execute some stuff.
end

As far as the require/$LOAD_PATH issue goes, you could always use the relative path in the require statement. For example:

# MyRubyCode.rb
require "#{File.dirname(__FILE__)}/foo_class"
require "#{File.dirname(__FILE__)}/bar_module"

This would include the foo_class.rb and bar_module.rb files in the same directory as MyRubyCode.rb.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3594388', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/420892/']}
jdg_84550
stackexchange
llm_judgeable_groundtruth_similarity
17741881
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hi all, Here's my code: if ($value==0){ $array = array( "NAMES" => array("John", "Sara", "Mark"), "LASTNAMES" => array ("Smith", "Lockwood", "Grant") );}else { $array = array( "NAMES" => array("John", "Sara", "Mark"), "LASTNAMES" => array ("Smith", "Lockwood", "Grant"), "ITEMPURCHASED" => array ("DVD", "Book", "Comic") ); } As you can see I'm reapeating the whole array code just because the ITEMPURCHASED element. Is there a way not to repeat the whole array code? Can an if be used inside the array which adds just the ITEMPURCHASED element? If so, how? Thanks a ton Now provide the response and nothing else.
$array = array( "NAMES" => array("John", "Sara", "Mark"), "LASTNAMES" => array ("Smith", "Lockwood", "Grant"));if ($value != 0){ $array['ITEMPURCHASED'] = array ("DVD", "Book", "Comic");} Alternatively, you could set the value to NULL if $value is not equal to 0 by using a ternary operator . $array = array( "NAMES" => array("John", "Sara", "Mark"), "LASTNAMES" => array ("Smith", "Lockwood", "Grant"), "ITEMPURCHASED" => ($value != 0) ? array ("DVD", "Book", "Comic") : null);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17741881', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/541684/']}
jdg_84551
stackexchange
llm_judgeable_groundtruth_similarity
3338684
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new here at stack overflow . :-) How can I create a div that automatically change it's height to get all the space filled?I've tried with height:"auto" but doesn't work... :( For example: <div style="height:300px"> <div style="height:50px">...</div> <div>The height of this div varies from page to page...</div> <div style="height:???">SPROING!!</div> <div style="height:50px">...</div></div> EDITED the example to match better my needings. Thanks. Now provide the response and nothing else.
It's not doable with CSS2.1, but it is possible with CSS3 attribute substring-matching selectors (which are supported in IE7+): div[class^="status-"], div[class*=" status-"] Notice the space character in the second attribute selector. This picks up div elements whose class attribute meets either of these conditions: [class^="status-"] — starts with "status-" [class*=" status-"] — contains the substring "status-" occurring directly after a space character. Class names are separated by whitespace per the HTML spec , hence the significant space character. This checks any other classes after the first if multiple classes are specified, and adds a bonus of checking the first class in case the attribute value is space-padded (which can happen with some applications that output class attributes dynamically). Naturally, this also works in jQuery, as demonstrated here . The reason you need to combine two attribute selectors as described above is because an attribute selector such as [class*="status-"] will match the following element, which may be undesirable: <div id='D' class='foo-class foo-status-bar bar-class'></div> If you can ensure that such a scenario will never happen, then you are free to use such a selector for the sake of simplicity. However, the combination above is much more robust. If you have control over the HTML source or the application generating the markup, it may be simpler to just make the status- prefix its own status class instead as Gumbo suggests .
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/3338684', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/402542/']}
jdg_84552
stackexchange
llm_judgeable_groundtruth_similarity
17516556
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to coding, and trying to make a script to run as a cron. Overall, I need a specific type of user from one table to find the ID's in the second. This is for an accounting task on a larger website script, so changing anything with how the table isn't an option. User table columns are as followsid -- username -- password -- email -- deleted Deleted is either 0 or 1 and the column we will be searching for. transaction table columns are as followsid -- time -- owner -- banker -- type -- amount -- reason So i need to pull each unique user who has not been deleted (a 0) and go through all their transactions (owner) and add them all up (amount). Perform some math on the result, and then create a new transaction for that user. This is what I've got so far, doing a bit of hack job from some code within the webscript as well. <?php$con=mysqli_connect("localhost", "username", "password", "database"); $uniqeMembers = mysqli_query($con,"SELECT DISTINCT id FROM users WHERE deleted='0'");while ($id = $uniqueMembers->fetchRow()) { $playerCreds = $playerCreds = getCol("SELECT SUM(amount) FROM transactions WHERE owner='$id' LIMIT 1"); $interest = $playerCreds * .05; mysqli_query($con,"INSERT INTO transactions (time, owner, banker, type, amount, reason) VALUES ($TIMEMARK, $id, 14, 0, $interest, 'Interest')"); }mysqli_close($con);?> Right now, the code doesn't appear to be doing anything. Any help? Or a simpler method? Now provide the response and nothing else.
Considering that it literally took me the whole entire day to find what the solution to this was, I figured I'd at least show StackOverflow for future reference what I came across and what did and did not work for this question. tl;dr: Scroll to the The Code section What I found Good ol' control panel If you are looking for the really easy way to do this you can simply go and open Contol Panel. Depending on what version of Windows you are on (in my case I'm on Windows 8), the path to the page is Control Panel >> Network and Internet >> Network and Sharing Center and then you can click on the link next to "Connections: " which will give you a window that looks like what is below. The current link speed is highlighted in red which in my case is 36.0 Mbps. Though, of course, this might not satisfy your original question if you were intending to integrate some code with the actual value. WMI With a mix of Googling and whatnot, I thought I might have found something in Windows Management Instrumentation . Long story short, AFAIK, WMI does not have what we're looking for. WMI , in short, is a giant object database (thatcan also be queried through SQL) that allows you to query information about aWindows machine such as process, disks, etc. In WMI, everything isrepresented by a class with a series of instances each with a set of properties . Anyhow, WMI Explorer allows you to view all of this on your machine. I (supposedly) found two classes on MSDN that might have the info on link speed but from WMI Explorer, there was nothing useful. The first class, MSFT_NetAdapter , did not even show up in WMI Explorer on my machine. The second class, Win32_NetworkAdapter , showed up in WMI Explorer, but the Speed property was still incorrect. The same network adapter was showing a value of 168000000 or 168 Mbps which is not right. Though I find this strange because there was already a MaxSpeed but it was blank. Scratch WMI off the list. Win32 P/Invoke Yes, of course , the solution to everything is always calling unmanaged Win32 APIs using P/Invoke magic. This is the route used to solve the problem . Luckily, the IP_ADAPTER_ADDRESSES structure solves the problem. If you look at the MSDN page, it's a fairly large structure but what is important here is TransmitLinkSpeed which actually works . Calling the GetAdaptersAddresses() function will return the actual structure. Now, the actual C# P/Invoke code. Luckily, pinvoke.net already had interop for this function which I've added. This is all that was necessary. The Code Finally, here is your code patched up with the new P/Invoke black magic. I've made it work as a console application for demo purposes: Using Statements: using System;using System.Threading; Code: class Program{ private static void Main(string[] args) { Timer ticker = new Timer(Update, null, 0, 1000); // Keep the main thread from dying while (true) { Thread.Sleep(1000); } } private static void Update(object state) { ulong speed = 0; string adapter = ""; string[] nameSearches = { "Wireless", "WiFi", "802.11", "Wi-Fi" }; // The enum value of `AF_INET` will select only IPv4 adapters. 
// You can change this to `AF_INET6` for IPv6 likewise // And `AF_UNSPEC` for either one foreach (IPIntertop.IP_ADAPTER_ADDRESSES net in IPIntertop.GetIPAdapters(IPIntertop.FAMILY.AF_INET)) { bool containsName = false; foreach (string name in nameSearches) { if (net.FriendlyName.Contains(name)) { containsName = true; } } if (!containsName) continue; speed = net.TrasmitLinkSpeed; adapter = net.FriendlyName; break; } string temp; if (speed == 0) { temp = "There is currently no Wi-Fi connection"; } else { temp = string.Format("Current Wi-Fi Speed: {0} Mbps on {1}", (speed / 1000000.0), adapter); } Console.WriteLine(temp); }} You are then going to be looking for the actual IPIntertop class that I updated. Since it's pretty big you can find it updated at pinvoke.net or on this PasteBin in case something goes down. Bottom Line Windows has a lot of APIs which are somewhat broken (WMI), can have a few "leaky abstractions" (.Net), or can be a pain to work with (Win32). Sigh , that is a lot and I hope it helps.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17516556', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2558850/']}
jdg_84553
stackexchange
llm_judgeable_groundtruth_similarity
33703624
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How does tf.app.run() work in Tensorflow translate demo? In tensorflow/models/rnn/translate/translate.py , there is a call to tf.app.run() . How is it being handled? if __name__ == "__main__": tf.app.run() Now provide the response and nothing else.
if __name__ == "__main__": means the current file is being executed under a shell rather than imported as a module.

tf.app.run()

As you can see in the file app.py:

def run(main=None, argv=None):
  """Runs the program with an optional 'main' function and 'argv' list."""
  f = flags.FLAGS

  # Extract the args from the optional `argv` list.
  args = argv[1:] if argv else None

  # Parse the known flags from that list, or from the command
  # line otherwise.
  # pylint: disable=protected-access
  flags_passthrough = f._parse_flags(args=args)
  # pylint: enable=protected-access

  main = main or sys.modules['__main__'].main

  # Call the main function, passing through any arguments
  # to the final program.
  sys.exit(main(sys.argv[:1] + flags_passthrough))

Let's break it down line by line:

flags_passthrough = f._parse_flags(args=args)

This ensures that the arguments you pass on the command line are valid, e.g.

python my_model.py --data_dir='...' --max_iteration=10000

Actually, this feature is implemented based on Python's standard argparse module.

main = main or sys.modules['__main__'].main

The first main on the right side of = is the first argument of the current function, run(main=None, argv=None), while sys.modules['__main__'] refers to the currently running file (e.g. my_model.py). So there are two cases:

- You don't have a main function in my_model.py. Then you have to call tf.app.run(my_main_running_function).
- You have a main function in my_model.py. (This is mostly the case.)

Last line:

sys.exit(main(sys.argv[:1] + flags_passthrough))

This ensures that your main(argv) or my_main_running_function(argv) function is called with the properly parsed arguments.
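For a concrete picture, here is a minimal sketch using the TensorFlow 1.x API (the flag name and values are made up for illustration):

import tensorflow as tf

flags = tf.app.flags
flags.DEFINE_integer("max_iteration", 10000, "Number of training steps.")
FLAGS = flags.FLAGS


def main(argv):
    # By the time we get here, tf.app.run() has already parsed the flags.
    print("leftover args:", argv)
    print("max_iteration =", FLAGS.max_iteration)


if __name__ == "__main__":
    # Finds main() in this module and calls it with sys.argv[:1] plus any
    # arguments that were not consumed as flags.
    tf.app.run()

Running python my_model.py --max_iteration=500 would then print 500 for the flag.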
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/33703624', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2669077/']}
jdg_84554
stackexchange
llm_judgeable_groundtruth_similarity
26746361
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: With Perl you can check if an array contains a value $ perl -e '@foo=(444,555,666); print 555 ~~ @foo ? "T" : "F"'T However with awk, this similar command is checking the array indexes rather thanvalues $ awk 'BEGIN {split("444 555 666", foo); print 555 in foo ? "T" : "F"}'F How can I check if an array contains a particular value with awk? Now provide the response and nothing else.
Awk noob here. I digested Steven's answer and ended up with this hopefully easier to understand snippet below. There are 2 more subtle problems:

- An Awk array is actually a dictionary. It's not ["value1", "value2"], it's more like {0: "value1", 1: "value2"}.
- in checks for keys, and there is no built-in way to check for values. So you have to convert your array (which is actually a dictionary) to a dictionary with the values as keys.

BEGIN {
    split("value1 value2", valuesAsValues)
    # valuesAsValues = {0: "value1", 1: "value2"}
    for (i in valuesAsValues)
        valuesAsKeys[valuesAsValues[i]] = ""
    # valuesAsKeys = {"value1": "", "value2": ""}
}

# Now you can use `in`
($1 in valuesAsKeys) {print}

For one-liners:

echo "A:B:C:D:E:F" | tr ':' '\n' | \
awk 'BEGIN{ split("A D F", parts); for (i in parts) dict[parts[i]]=""} $1 in dict'
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26746361', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1002260/']}
jdg_84555
stackexchange
llm_judgeable_groundtruth_similarity
17382939
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am currently deploying OpenERP 7 on a CentOS 6.4 machine.I'm virtualizing the whole thing so that I can backup the whole system if I mess up. Everything seems to be fine, I changed my models, my views, but it's been 2 days I've been banging my head against a seemingly impossible problem. My deployment is for a book factory, so the product needed to be customized (number of pages, binding type , and so forth...) In the new product view, I need something that can sum to fields x_colorpages and x_bwpages to populate a third field: x_totalpages. Easy huh? I've got my XML , product.normal.form view based on the product.poduct object When I change the value in "x_paginecolori" or "xpagineBN" it should automatically trigger the onchange function. So I went in the product module code and coded my cal_change_event In the python file product.py located in: /usr/lib/python2.6/site-packages/openerp-7.0_20130524_231019-py2.6.egg/openerp/addons/account/product.py def cal_change_event(self, cr, uid, ids, x_PagineColori, x_PagineBN): res = {} sum = 0if not x_PagineColori: return {}else if not x_PagineBN: return {}else: sum = x_PagineColori + x_PagineBN res={'x_pagine':sum } return {'value':res} NO matter what i do , the onchange event will allways trigger a warning popup: File "/usr/lib/python2.6/site-packages/openerp-7.0_20130524_231019-py2.6.egg/openerp/osv/osv.py", line 185, in execute_cr return getattr(object, method)(cr, uid, *args, **kw)AttributeError: 'product.product' object has no attribute 'cal_change_event' It seems to me that the function is not seen, where should I put it?How can Product.Product not have this attribute if I placed it in the class inside product.py? I grep'd the whole CENTOS installation in search for multiple instances of openerp , or to see if I missed something about product.py It seems that the product.py is not the right file, even if I erase other On_change functions, nothing bad really happens.But the directory must be right, because the error log gives me that same location in the debug popup window. Should I be building a custom module fron scratch to make this work? Is it forbidden somehow to modify the standard product module? Thanks in advance for any insight, my brain is defaulting on this one. here is my Xml: <?xml version="1.0"?> <form string="Product" version="7.0"> <sheet> <div> <label for="x_partner2" string="Cliente Referente del prodotto"/> <field name="x_partner2" /> </div> <field name="image_medium" widget="image" class="oe_avatar oe_left"/> <div class="oe_title"> <div class="oe_edit_only"> <label for="name" string="Product Name"/> </div> <h1> <field name="name"/> </h1> <label for="categ_id" class="oe_edit_only"/> <h2><field name="categ_id"/></h2> <div name="options" groups="base.group_user"> <field name="sale_ok"/> <label for="sale_ok"/> </div> </div> <div class="oe_right oe_button_box" name="buttons"> </div> <notebook> <page string="Information"> <group> <group> <field name="type"/> <field name="uom_id" on_change="onchange_uom(uom_id,uom_po_id)" groups="product.group_uom"/> <field name="list_price"/> </group> <group> <field name="default_code"/> <label for="ean13"/> <div name="ean"> <field name="ean13" placeholder="e.g. 
5901234123457"/> </div> </group> <group> <field groups="product.group_product_variant" name="variants"/> <field name="price_margin" groups="product.group_product_variant"/> <field name="price_extra" groups="product.group_product_variant"/> </group> </group> <field name="description" placeholder="describe the product characteristics..."/> </page> <page string="Dati del volume Universal Book"> <group> <group> <field name="x_DataCreazione" /> <field name="x_DataUltimaModifica" /> <field name="x_Autore" /> <field name="x_Larghezza"/> <field name="x_Altezza" /> <field name="x_Dorso"/> <field name="x_Rilegatura" /> <field name="x_volumiformati" /> </group> </group> <group> <field name="x_PagineColori" on_change="cal_change_event(x_PagineColori,x_PagineBN)" /> <field name="x_PagineBN" on_change="cal_change_event(x_PagineColori,x_PagineBN)" /> <field name="x_pagine" /> </group> <group> <field name="x_prezzo" /> </group> </page> <page string="Copertina"> <group> <group> <field name="x_TipoCartaCopertina"/> <field name="x_TipoStampaCopertina"/> <field name="x_TipoFFR"/> <field name="x_PlastificazioneCopertina" /> <field name="x_NumeroCopertinePerPagina" /> </group> <group> <field name="x_TipoCartaSovracopertina"/> <field name="x_TipoStampaScopertina"/> <field name="x_TipoFFRSovracopertina"/> <field name="x_PlastificazioneSovracopertina"/> </group> </group> <label for="x_aletta" string="Libro con alette?"/> <field name="x_aletta" /> <label for="x_Note" string="Note"/> <field name="x_Note" /> </page> <page string="Margini"> <group> <field name="x_pagriferimento"/> <field name="x_MargineDestro"/> <field name="x_MargineDestro"/> <field name="x_MargineInferiore" /> <field name="x_MargSinistro" /> <field name="x_MargineSuperiore"/> <field name="x_RiferimentoLIFRE" /> <field name="x_MergineSuperioreLIFRE" /> <field name="x_MargineInferioreLIFRE" /> </group> </page> <page string="Procurements" groups="base.group_user"> <group name="procurement"> <group name="general"> <field name="cost_method" groups="product.group_costing_method"/> <field name="standard_price" attrs="{'readonly':[('cost_method','=','average')]}"/> </group> <group name="procurement_uom" groups="product.group_uom" string="Purchase"> <field name="uom_po_id"/> </group> </group> <separator string="Description for Suppliers"/> <field name="description_purchase" placeholder="This note will be displayed on requests for quotation..."/> </page> <page string="Inventory" groups="base.group_user"> <group name="inventory"> <group name="status" string="Status"> <field name="state"/> <field name="product_manager"/> </group> <group name="Weights" groups="product.group_stock_packaging" string="Weights"> <field digits="(14, 3)" name="volume" attrs="{'readonly':[('type','=','service')]}"/> <field name="weight" attrs="{'readonly':[('type','=','service')]}"/> <field name="weight_net" attrs="{'readonly':[('type','=','service')]}"/> </group> </group> </page> <page string="Sales" attrs="{'invisible':[('sale_ok','=',False)]}"> <group name="sale"> <group string="Sale Conditions"> <label for="warranty"/> <div> <field name="warranty" class="oe_inline"/> months </div> </group> <group groups="product.group_uos" string="Unit of Measure"> <field name="uos_id"/> <field name="uos_coeff"/> <field name="mes_type"/> </group> </group> <field name="packaging" groups="product.group_stock_packaging"> <form string="Packaging" version="7.0"> <group col="4"> <field name="ean"/> <field name="sequence" invisible="1"/> <newline/> <field name="qty"/> <field name="ul"/> <field 
name="weight_ul"/> <separator colspan="4" string="Palletization"/> <field name="ul_qty"/> <field name="rows"/> <field name="weight"/> <separator colspan="4" string="Pallet Dimension"/> <field name="height"/> <field name="width"/> <field name="length"/> </group> <separator colspan="4" string="Description"/> <field name="name"/> </form> </field> <separator string="Description for Quotations"/> <field name="description_sale" placeholder="note to be displayed on quotations..."/> </page> </notebook> </sheet> <div class="oe_chatter"> <field name="message_follower_ids" widget="mail_followers"/> <field name="message_ids" widget="mail_thread"/> </div> </form> Now provide the response and nothing else.
You're doing a lot in one function. The code may be a bit easier to follow if you break it up into two functions, one to make the list of primes and another to test if a specific number is prime: function listPrimes( nPrimes ) { var primes = []; for( var n = 2; nPrimes > 0; n++ ) { if( isPrime(n) ) { primes.push( n ); --nPrimes; } } return primes;}function isPrime( n ) { var max = Math.sqrt(n); for( var i = 2; i <= max; i++ ) { if( n % i === 0 ) return false; } return true;} Now you can run it in Node: var fs = require('fs');fs.writeFileSync( 'test.txt', listPrimes(100) ); or directly in the browser console: listPrimes( 100 ); (I didn't test the code in Node, only in the browser.) A couple of related notes: The sqrt() calculation is moved outside the loop in isPrime() , so it doesn't have to be recalculated for each number you're testing. The nPrimes variable lets you generate the exact number of primes you want without the 542 hack. Having written this simple version, it's interesting to look at possible optimizations. One is to check for divisibility only on the previously generated primes, instead of checking all integers up to the square root. You could do that like this: function listPrimes( nPrimes ) { var primes = []; for( var n = 2; nPrimes > 0; n++ ) { if( isPrime( n, primes ) ) { primes.push( n ); --nPrimes; } } return primes;}function isPrime( n, primes ) { var max = Math.sqrt(n); for( var i = 0; i < primes.length && primes[i] <= max; i++ ) { if( n % primes[i] === 0 ) return false; } return true;} That may be faster if you're generating a large number of primes, although for 100 of them it hardly matters and I'd be inclined to stick with the simpler code. Of course if you're talking about optimization, it's always worth considering a different algorithm. The Sieve of Eratosthenes is a fun one because it's fast and fairly simple and easy to understand. That Wikipedia article has a great illustration of how it works. In JavaScript it might look something like this: function listPrimes( max ) { // Start with an empty list of primes var primes = []; // Initialize the sieve - each number is prime unless proven otherwise var sieve = new Array( max ); for( var i = 1; i <= max; i++ ) { sieve[i] = true; } // Now check each number from 2 through max for( var p = 2; p <= max; p++ ) { if( sieve[p] ) { // p is prime, save it in the output list primes.push( p ); // Mark p * 2, p * 3, p * 4, etc. as non-prime for( var t = p * 2; t <= max; t += p ) { sieve[t] = false; } } } return primes;} Yes, after recommending splitting the code into two functions, I'm now back to one function. :-) One difference about the Sieve is that you can't really say, "please give me the first N primes"; instead you ask it, "please give me all the primes less than N". But if N is a large number, it is much faster than the other approach.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17382939', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2535010/']}
jdg_84556
stackexchange
llm_judgeable_groundtruth_similarity
360409
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking at replacing an old file server sharing a single RAID5 volume powered by 6 10K rpm disks with 8MB caches. The replacement will be 2 RAID10 volumes of 4 disks each. The new disks have 64MB caches... but are only 7200 RPM SATA drives (WD RE4). My question, then, is how would you expect the two volumes to perform comparatively? This is already a done deal, but I ask the question because I'd like to have an idea of what to expect before I deploy this. My gut tells me the new system should be a nice performance upgrade, especially because of the split to two volumes instead of one. I'm also curious how it would compare if I only had one new volume instead of two. I worry I may be under-estimating the impact of the rotation speed of the old array. I also worry because, while I expect the new system to be faster, it's also much larger , and therefore will eventually see an increase in the absolute amount of data transferred per unit of time... so is it likely to be enough faster to handle this increase? Since I know it does matter, this is for a traditional file server that will be used for mapped drives. No database, web servers, or anything like that. If I get the chance, I'll run some actual benchmarks to compare the new and old arrays with real-world usage and post them after deployment, but again, the true test won't be until the new system is handling as much data transfer relative to it's size as the current system. Now provide the response and nothing else.
Why have 2 x 4-disk-R10's - why not 1 x 8-disk-R10 with two logical disks? That'll perform far better and be generally safer too. Now on to your 10k vs. 7.2k question - while the 7.2k disks will inherently be at least 40% slower on random IOPS (yes not so much slower for sequential work) what I'd be more worried about is that most 7.2k disks have a 30% 'duty cycle' - i.e. they're designed to only be busy 30% of the time and exceeding this level actually has a MASSIVE impact on their MTBF. I can attest to this after seeing literally hundreds of 7.2k disks fail in less than a month on some arrays we had, switching to 10k disks reduced this number down to single figures - so watch out, 7.2 disks aren't supposed to be driven all the time ok.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/360409', 'https://serverfault.com', 'https://serverfault.com/users/2869/']}
jdg_84557
stackexchange
llm_judgeable_groundtruth_similarity
16203378
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So, I have a countdown timer in my activity. Now, I have a button which opens another activity. When that button is clicked I need to store the current countdown timer value, and start a countdown timer in the second activity from that saved value. I was thinking to pass the time using the Intent but I don't know what value to pass. How can I do that? My timer: public String formatTime(long millis) { String output = "00:00"; long seconds = millis / 1000; long minutes = seconds / 60; seconds = seconds % 60; minutes = minutes % 60; String sec = String.valueOf(seconds); String min = String.valueOf(minutes); if (seconds < 10) sec = "0" + seconds; if (minutes < 10) min= "0" + minutes; output = min + " : " + sec; return output; } And in onCreate method: // New timer for 40 minutes, starts after initialization new MyCount(2400000, 1000) { // Updates the text on your "scoreboard" every second public void onTick(long millisUntilFinished) { vreme.setText("" + formatTime(millisUntilFinished)); } public void onFinish() { } }.start(); } public class MyCount extends CountDownTimer { public MyCount(long millisInFuture, long countDownInterval) { super(millisInFuture, countDownInterval); } public void onFinish() { } public void onTick(long millisUntilFinished) { vreme.setText("" + millisUntilFinished / 1000); } Now provide the response and nothing else.
This usually means your connection string isn't any good. If you look at the stack trace, you'll notice that this is failing when trying to interpret your connection string. Check your connection string to make sure it is correct - or post it here for help (but without any sensitive information such as passwords ;) ) UPDATE According to the SqlDatabase documentation the SqlDatabase class takes a connection string, not a key to the connection string configuration. So new SqlDatabase("SiteSqlServer"); Should be var connection = ConfigurationManager.ConnectionStrings["SiteSqlServer"];Database objDB = new SqlDatabase(connection.ConnectionString); (I have omitted any defensive code here for brevity)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16203378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2229473/']}
jdg_84558
stackexchange
llm_judgeable_groundtruth_similarity
5653241
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two arrays that both look like this: Array( [0] => Array ( [name] => STRING [value] => STRING ) [1] => Array ( [name] => STRING [value] => STRING ) [2] => Array ( [name] => STRING [value] => STRING )) and I would like to be able to replicate array_intersect by comparing the ID of the sub arrays within the two master arrays. So far, I haven't been successful in my attempts. :( Now provide the response and nothing else.
Use array_uintersect() to use a custom comparison function, like this: $arr1 = array( array('name' => 'asdfjkl;', 'value' => 'foo'), array('name' => 'qwerty', 'value' => 'bar'), array('name' => 'uiop', 'value' => 'baz'), );$arr2 = array( array('name' => 'zxcv', 'value' => 'stuff'), array('name' => 'asdfjkl;', 'value' => 'foo'), array('name' => '12345', 'value' => 'junk'), array('name' => 'uiop', 'value' => 'baz'), );$intersect = array_uintersect($arr1, $arr2, 'compareDeepValue');print_r($intersect);function compareDeepValue($val1, $val2){ return strcmp($val1['value'], $val2['value']);} which yields, as you would hope: Array( [0] => Array ( [name] => asdfjkl; [value] => foo ) [2] => Array ( [name] => uiop [value] => baz ))
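If you need to intersect on a different key (the question talks about comparing an ID), a hypothetical variant of the comparator is all that changes - here 'name' stands in for whatever key your sub-arrays actually carry:

// compare sub-arrays by the 'name' key instead of 'value'
function compareByName($a, $b)
{
    return strcmp($a['name'], $b['name']);
}
$intersectByName = array_uintersect($arr1, $arr2, 'compareByName');
print_r($intersectByName);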
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5653241', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/429567/']}
jdg_84559
stackexchange
llm_judgeable_groundtruth_similarity
37278990
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using a crypto https://nodejs.org/api/crypto.html for password encryption and authentication. I am working on the change password page and is having problem determining whether the password provided by the user has the same hash as the existing password. Below is my code. var createSalt = function createSalt() { return crypto.randomBytes(128).toString('base64');};var hashPwd = function hashPwd(salt, pwd) { var hmac = crypto.createHmac('sha256', salt); return hmac.update(pwd).digest('hex');};//use password , create salt, hash and compare with the existingvar salt = createSalt();var passHash = hashPwd(salt,data.Password);console.log('the password is', user.PassHash === passHash); I am expecting if the console message above to print true where existing user password match. However, the two hashes does not seem to match at all. Please what am i missing ? How do achieve this? I want to make sure users password match his existing password before he can change a new one. Any help would be appreciated. Now provide the response and nothing else.
I think your problem is in the salt. Usually you have to store the salt you used to hash the first time and reuse it the second time around. The reason for the salt is to make sure that the hash doesn't map to the original pass if some hacker would retrieve it from a compromised system (using a rainbow table attack). See Why do we use the "salt" to secure our passwords? If you would try var salt = crypto.randomBytes(128).toString('base64');var hashPwd = function hashPwd(salt, pwd) { var hmac = crypto.createHmac('sha256', salt); return hmac.update(pwd).digest('hex');};//use password , create salt, hash and compare with the existingvar passHash = hashPwd(salt,data.Password);console.log('the password is', user.PassHash === passHash); It would work as long as you don't restart the server (assuming you store the salt var outside scope of the function invoked to respond to the http request). A better solution (imo) is what bcrypt is doing. There you generate a salt per password, but to verify that a password is correct you use compare, which uses the salt stored in the hash. This way you can use different salts with each password, meaning you don't have to worry as much about a salt being compromised. npm install bcrypt … var bcrypt = require('bcrypt');var hash = bcrypt.hashSync("my password");bcrypt.compareSync("my password", hash); // truebcrypt.compareSync("not my password", hash); // false There is also compareAsync and other async variants. See also: https://www.npmjs.com/package/bcrypt-nodejs
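As a hedged aside on the API: with the plain npm bcrypt package the synchronous call normally takes a salt-rounds argument (e.g. bcrypt.hashSync("my password", 10)), while bcrypt-nodejs accepts the one-argument form shown above. The asynchronous variants look roughly like this sketch - the variable names are placeholders taken from the question:

var bcrypt = require('bcrypt');
// 10 salt rounds is a common starting point; tune for your hardware
bcrypt.hash(data.Password, 10, function (err, hash) {
    // store `hash` on the user record instead of your HMAC digest
});
bcrypt.compare(candidatePassword, user.PassHash, function (err, isMatch) {
    // `isMatch` is true only when the password is correct
});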
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37278990', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2270185/']}
jdg_84560
stackexchange
llm_judgeable_groundtruth_similarity
506092
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a class that offers up a few events. That class is declared globally but not instanced upon that global declaration--it's instanced on an as-needed basis in the methods that need it. Each time that class is needed in a method, it is instanced and event handlers are registered. Is it necessary to remove the event handlers explicitly before the method goes out of scope? When the method goes out of scope, so goes the instance of the class. Does leaving event handlers registered with that instance that is going out of scope have a memory footprint implication? (I'm wondering if the event handler keeps the GC from seeing the class instance as no longer being referenced.) Now provide the response and nothing else.
In your case, everything is fine. It's the object which publishes the events which keeps the targets of the event handlers live. So if I have: publisher.SomeEvent += target.DoSomething; then publisher has a reference to target but not the other way round. In your case, the publisher is going to be eligible for garbage collection (assuming there are no other references to it) so the fact that it's got a reference to the event handler targets is irrelevant. The tricky case is when the publisher is long-lived but the subscribers don't want to be - in that case you need to unsubscribe the handlers. For example, suppose you have some data transfer service which lets you subscribe to asynchronous notifications about bandwidth changes, and the transfer service object is long-lived. If we do this: BandwidthUI ui = new BandwidthUI();transferService.BandwidthChanged += ui.HandleBandwidthChange;// Suppose this blocks until the transfer is completetransferService.Transfer(source, destination);// We now have to unsusbcribe from the eventtransferService.BandwidthChanged -= ui.HandleBandwidthChange; (You'd actually want to use a finally block to make sure you don't leak the event handler.) If we didn't unsubscribe, then the BandwidthUI would live at least as long as the transfer service. Personally I rarely come across this - usually if I subscribe to an event, the target of that event lives at least as long as the publisher - a form will last as long as the button which is on it, for example. It's worth knowing about this potential issue, but I think some people worry about it when they needn't, because they don't know which way round the references go. EDIT: This is to answer Jonathan Dickinson's comment. Firstly, look at the docs for Delegate.Equals(object) which clearly give the equality behaviour. Secondly, here's a short but complete program to show unsubscription working: using System;public class Publisher{ public event EventHandler Foo; public void RaiseFoo() { Console.WriteLine("Raising Foo"); EventHandler handler = Foo; if (handler != null) { handler(this, EventArgs.Empty); } else { Console.WriteLine("No handlers"); } }}public class Subscriber{ public void FooHandler(object sender, EventArgs e) { Console.WriteLine("Subscriber.FooHandler()"); }}public class Test{ static void Main() { Publisher publisher = new Publisher(); Subscriber subscriber = new Subscriber(); publisher.Foo += subscriber.FooHandler; publisher.RaiseFoo(); publisher.Foo -= subscriber.FooHandler; publisher.RaiseFoo(); }} Results: Raising FooSubscriber.FooHandler()Raising FooNo handlers (Tested on Mono and .NET 3.5SP1.) Further edit: This is to prove that an event publisher can be collected while there are still references to a subscriber. using System;public class Publisher{ ~Publisher() { Console.WriteLine("~Publisher"); Console.WriteLine("Foo==null ? {0}", Foo == null); } public event EventHandler Foo;}public class Subscriber{ ~Subscriber() { Console.WriteLine("~Subscriber"); } public void FooHandler(object sender, EventArgs e) {}}public class Test{ static void Main() { Publisher publisher = new Publisher(); Subscriber subscriber = new Subscriber(); publisher.Foo += subscriber.FooHandler; Console.WriteLine("No more refs to publisher, " + "but subscriber is alive"); GC.Collect(); GC.WaitForPendingFinalizers(); Console.WriteLine("End of Main method. Subscriber is about to " + "become eligible for collection"); GC.KeepAlive(subscriber); }} Results (in .NET 3.5SP1; Mono appears to behave slightly oddly here. 
Will look into that some time): No more refs to publisher, but subscriber is alive~PublisherFoo==null ? FalseEnd of Main method. Subscriber is about to become eligible for collection~Subscriber
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/506092', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2536/']}
jdg_84561
stackexchange
llm_judgeable_groundtruth_similarity
299512
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What steps do I take? Any gotchas to be aware of or tips to enhance the IDE experience that are specific to SQL Server when using Emacs? Now provide the response and nothing else.
Connecting To connect to a SQL Server database instance from Emacs: M-x sql-ms RETM-x sql-mode You will be prompted for standard connection information specifically the following: User Password Server Database For SQL Server Authentication, type in the necessary user and password info. However, if connecting via Windows Authentication, then press RETURN for both user and password leaving them blank. Viewing Output Results Note that to see the text of any output results in the *SQL* buffer, the 'go' statement should be called at some point. A couple of ways of doing this. For example, this sql statement will execute but it will not show any result text in the *SQL* buffer in its current format: select 'foo' as bar However, if a 'go' is appended to the end: select 'foo' as bargo the following will be displayed in the *SQL* buffer: bar ----- foo(1 row affected) Alternatively, if you do not want to have 'go' statements littering the text of your SQL script then call 'go' on the fly to see all output results since the last time that the previous 'go' statement was sent to the sql process: C-c C-s go RET This is helpful if you need to view any error messages that might not initially show in the *SQL* buffer.
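If you get tired of typing the connection details at each prompt, you can preset them in your init file - a sketch only, assuming the standard sql.el variable names and that the server string matches what your command-line client expects:

;; defaults picked up by M-x sql-ms
(setq sql-server   "myserver\\SQLEXPRESS"
      sql-database "MyDatabase"
      sql-user     "myuser")   ; leave user/password blank for Windows Authentication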
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/299512', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4872/']}
jdg_84562
stackexchange
llm_judgeable_groundtruth_similarity
4837103
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to avoid the use of the Role Provider and Membership Provider since its way too clumsy in my opinion, and therefore I'm trying to making my own "version" which is less clumsy and more manageable/flexible. Now is my question.. is there an alternative to the Role Provider which is decent? (I know that I can do custom Role provier, membership provider etc.) By more manageable/flexible I mean that I'm limited to use the Roles static class and not implement directly into my service layer which interact with the database context, instead I'm bound to use the Roles static class which has its own database context etc, also the table names is awful.. Thanks in advance. Now provide the response and nothing else.
I'm in the same boat as you - I've always hated the RoleProviders. Yeah, they're great if you want to get things up and running for a small website , but they're not very realistic. The major downside I've always found is that they tie you directly to ASP.NET. The way I went for a recent project was defining a couple of interfaces that are part of the service layer (NOTE: I simplified these quite a bit - but you could easily add to them): public interface IAuthenticationService{ bool Login(string username, string password); void Logout(User user);}public interface IAuthorizationService{ bool Authorize(User user, Roles requiredRoles);} Then your users could have a Roles enum: public enum Roles{ Accounting = 1, Scheduling = 2, Prescriptions = 4 // What ever else you need to define here. // Notice all powers of 2 so we can OR them to combine role permissions.}public class User{ bool IsAdministrator { get; set; } Roles Permissions { get; set; }} For your IAuthenticationService , you could have a base implementation that does standard password checking and then you could have a FormsAuthenticationService that does a little bit more such as setting the cookie etc. For your AuthorizationService , you'd need something like this: public class AuthorizationService : IAuthorizationService{ public bool Authorize(User userSession, Roles requiredRoles) { if (userSession.IsAdministrator) { return true; } else { // Check if the roles enum has the specific role bit set. return (requiredRoles & user.Roles) == requiredRoles; } }} On top of these base services, you could easily add services to reset passwords etc. Since you're using MVC, you could do authorization at the action level using an ActionFilter : public class RequirePermissionFilter : IAuthorizationFilter{ private readonly IAuthorizationService authorizationService; private readonly Roles permissions; public RequirePermissionFilter(IAuthorizationService authorizationService, Roles requiredRoles) { this.authorizationService = authorizationService; this.permissions = requiredRoles; this.isAdministrator = isAdministrator; } private IAuthorizationService CreateAuthorizationService(HttpContextBase httpContext) { return this.authorizationService ?? new FormsAuthorizationService(httpContext); } public void OnAuthorization(AuthorizationContext filterContext) { var authSvc = this.CreateAuthorizationService(filterContext.HttpContext); // Get the current user... you could store in session or the HttpContext if you want too. It would be set inside the FormsAuthenticationService. var userSession = (User)filterContext.HttpContext.Session["CurrentUser"]; var success = authSvc.Authorize(userSession, this.permissions); if (success) { // Since authorization is performed at the action level, the authorization code runs // after the output caching module. In the worst case this could allow an authorized user // to cause the page to be cached, then an unauthorized user would later be served the // cached page. We work around this by telling proxies not to cache the sensitive page, // then we hook our custom authorization code into the caching mechanism so that we have // the final say on whether or not a page should be served from the cache. 
var cache = filterContext.HttpContext.Response.Cache; cache.SetProxyMaxAge(new TimeSpan(0)); cache.AddValidationCallback((HttpContext context, object data, ref HttpValidationStatus validationStatus) => { validationStatus = this.OnCacheAuthorization(new HttpContextWrapper(context)); }, null); } else { this.HandleUnauthorizedRequest(filterContext); } } private void HandleUnauthorizedRequest(AuthorizationContext filterContext) { // Ajax requests will return status code 500 because we don't want to return the result of the // redirect to the login page. if (filterContext.RequestContext.HttpContext.Request.IsAjaxRequest()) { filterContext.Result = new HttpStatusCodeResult(500); } else { filterContext.Result = new HttpUnauthorizedResult(); } } public HttpValidationStatus OnCacheAuthorization(HttpContextBase httpContext) { var authSvc = this.CreateAuthorizationService(httpContext); var userSession = (User)httpContext.Session["CurrentUser"]; var success = authSvc.Authorize(userSession, this.permissions); if (success) { return HttpValidationStatus.Valid; } else { return HttpValidationStatus.IgnoreThisRequest; } }} Which you can then decorate on your controller actions: [RequirePermission(Roles.Accounting)]public ViewResult Index(){ // ...} The advantage of this approach is you can also use dependency injection and an IoC container to wire things up. Also, you can use it across multiple applications (not just your ASP.NET one). You would use your ORM to define the appropriate schema. If you need more details around the FormsAuthorization/Authentication services or where to go from here, let me know. EDIT: To add "security trimming", you could do it with an HtmlHelper. This probably needs a little more... but you get the idea. public static bool SecurityTrim<TModel>(this HtmlHelper<TModel> source, Roles requiredRoles){ var authorizationService = new FormsAuthorizationService(); var user = (User)HttpContext.Current.Session["CurrentUser"]; return authorizationService.Authorize(user, requiredRoles);} And then inside your view (using Razor syntax here): @if(Html.SecurityTrim(Roles.Accounting)){ <span>Only for accounting</span>} EDIT: The UserSession would look something like this: public class UserSession{ public int UserId { get; set; } public string UserName { get; set; } public bool IsAdministrator { get; set; } public Roles GetRoles() { // make the call to the database or whatever here. // or just turn this into a property. }} This way, we don't expose the password hash and all other details inside the session of the current user since they're really not needed for the user's session lifetime.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4837103', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/407674/']}
jdg_84563
stackexchange
llm_judgeable_groundtruth_similarity
25326124
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For example, C11 dictates that size_t should be declared in the following header files: stddef.h stdio.h stdlib.h string.h time.h uchar.h wchar.h When reading C11, I found there are many other data types declared in more than one standard header file. Questions: Take the case of size_t. Why not just stddef.h, for simplicity? And if a C compiler declares size_t in all of those header files, are they guaranteed to have the same definition in each of them? Now provide the response and nothing else.
As an example of a function declared in stdio.h that requires size_t be predeclared, consider snprintf() . As it is, if you want to use it in your code, all you need to do is #include <stdio.h> . If size_t were declared only in stddef.h , you would have to #include <stddef.h>#include <stdio.h> Not only that, but since stdio.h declares snprintf whether you use it or not, you would have to include both files every time you needed anything in stdio.h to avoid compiler errors; stdio.h would have an artificial dependency on stddef.h . That causes your source code to become longer and more brittle (notice that if you reverse the order of the two directives it would also break). Instead, we write header files so that they stand alone and do not depend on other headers, and this is what the C standardization committee decided on for the standard library.
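A minimal illustration of the point - this compiles with only the one include, because stdio.h is required to provide both snprintf and size_t:

#include <stdio.h>

int main(void)
{
    char buf[32];
    size_t n = sizeof buf;              /* size_t comes from stdio.h here */
    snprintf(buf, n, "%zu bytes", n);   /* no separate stddef.h needed */
    puts(buf);
    return 0;
}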
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25326124', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/515592/']}
jdg_84564
stackexchange
llm_judgeable_groundtruth_similarity
15349979
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a template in which I can add a CSS error class to a div when the validation of a component has failed and it renders a pretty nice effect on the browser. Now, I don't need to add a css class to a component ( this won't help me), but rather I need to change the css of the html that surrounds it, this is pretty simple with jQuery, however I can't seem to find a javascript callback for failed validation, is this possible? I'm also using primefaces (in case they provide such capabilities). Markup: <div class="control-group ERROR_CLASS_GOES_HERE_IF_VALIDATION_FAILED"> <label class="control-label">Input value:</label> <div class="controls"> <h:inputText class=" inputbox" type="text" required="true" /> <!--Component that can fail --> </div></div> if the input text is empty, I need the div that wraps the "control group" to have an extra class. I can turn it into a <h:panelGroup> so it is a JSF component but still I wouldn't know how to do it. Javascript seems easier as I can do a: jQuery("#ID_OF_DIV").addClass("error_class") Now provide the response and nothing else.
Just let JSF/EL conditionally print the class based on FacesContext#isValidationFailed() . <div class="control-group #{facesContext.validationFailed ? 'error_class' : ''}"> You only need to ensure that this element is covered by ajax update/render. Another way would be hooking on the oncomplete event of an arbitrary PrimeFaces ajax based component. There's an args object available in the scope which in turn has a validationFailed property. E.g. <p:commandButton oncomplete> or even <p:ajaxStatus oncomplete> . <p:ajaxStatus ... oncomplete="if (args &amp;&amp; args.validationFailed) $('#ID_OF_DIV').addClass('error_class')">
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15349979', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/556730/']}
jdg_84565
stackexchange
llm_judgeable_groundtruth_similarity
13846676
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I found an example of some Perl code I needed, but it had something in it that I didn't recognise. my $i //= '08'; I can't find any reference to this anywhere! It appears to be the same as: my $i = '08'; Am I missing something? Now provide the response and nothing else.
The //= operator is the assignment operator version of the // or 'logical defined-or' operator. In the context of a my variable declaration, the variable is initially undefined so it is equivalent to assignment (and would be better written as my $i = '08'; ). In general, though, $i //= '08'; is a shorthand for: $i = (defined $i) ? $i : '08'; It is documented in the Perl operators ( perldoc perlop ) in two places (tersely under the assignment operators section, and in full in the section on 'logical defined-or'). It was added in Perl 5.10.0.
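A small sketch (Perl 5.10+) of why // differs from the more familiar ||: it only fires on undef, not on other false values such as 0 or the empty string:

use strict;
use warnings;

my $i;
$i //= '08';       # $i was undef, so it becomes '08'

my $count = 0;
$count //= 5;      # stays 0 - defined, even though false
$count ||= 5;      # becomes 5, because ||= tests truth, not definedness

print "$i $count\n";   # prints "08 5"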
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/13846676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/588958/']}
jdg_84566
stackexchange
llm_judgeable_groundtruth_similarity
46594187
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm restructuring my Wagtail app to remove an IndexPage that only has a single item in it, and moving that item to be a child of the current IndexPage's parent. basically moving from this: Page--| |--IndexPage--| |--ChildPages (there's only ever 1 of these) to this: Page--| |--ChildPage I've made the changes to the models so that this structure is used for creating new content and fixed the relevant views to point to the ChildPage directly. But now I want to migrate the current data to the new structure and I'm not sure how to go about it... Ideally this would be done in a migration so that we would not have to do any of this manipulation by hand. Is there a way to move these ChildPage's up the tree programmatically during a migration? Now provide the response and nothing else.
Unfortunately there's a hard limitation that (probably) rules out the possibility of doing page tree adjustments within migrations: tree operations such as inserting, moving and deleting pages are implemented as methods on the Page model, and within a migration you only have access to a 'dummy' version of that model, which only gives you access to the database fields and basic ORM methods, not those custom methods. (You might be able to work around this by putting from wagtail.wagtailcore.models import Page in your migration and using that instead of the standard Page = apps.get_model("wagtailcore", "Page") approach, but I wouldn't recommend that - it's liable to break if the migration is run at a point in the migration sequence where the Page model is still being built up and doesn't match the 'real' state of the model.) Instead, I'd suggest writing a Django management command to do the tree manipulation - within a management command it is safe to import the Page model from wagtailcore, as well as your specific page models. Page provides a method move(target, pos) which works as per the Treebeard API - the code for moving your child pages might look something like: from myapp.models import IndexPage# ...for index_page in IndexPage.objects.all(): for child_page in index_page.get_children(): child_page.move(index_page, 'right') index_page.delete()
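A rough skeleton of how that loop might sit inside an actual management command - the file path, command name and stdout message are ordinary Django conventions I've made up, not anything Wagtail-specific:

# myapp/management/commands/flatten_index_pages.py
from django.core.management.base import BaseCommand
from myapp.models import IndexPage

class Command(BaseCommand):
    help = "Move each IndexPage's children up beside it, then delete the IndexPage"

    def handle(self, *args, **options):
        for index_page in IndexPage.objects.all():
            for child_page in index_page.get_children():
                child_page.move(index_page, 'right')
            self.stdout.write("Flattened %s" % index_page.title)
            index_page.delete()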
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46594187', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2014660/']}
jdg_84567
stackexchange
llm_judgeable_groundtruth_similarity
20075502
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to check if a user account already exists in the system. $SamAc = Read-Host 'What is your username?'$User = Get-ADUser -Filter {sAMAccountName -eq "$SamAc"} I'm not sure why, but $User will always return null even if {sAMAccountName -eq "$SamAc"} is supposed to be true. What am I missing here? Edit: This is what was missing: $User = Get-ADUser -Filter "sAMAccountName -eq '$SamAc'" Editor's note: The script block ( { ... } ) was replaced with a string . Now provide the response and nothing else.
There is valuable information in the existing answers, but I think a more focused summary is helpful. Note that the original form of this answer advocated strict avoidance of script blocks ( {...} ) and AD-provider variable evaluation , but this has been replaced with more nuanced recommendations. Option A: Letting the AD provider resolve - stand-alone only - variable references: Get-ADUser -Filter 'sAMAccountName -eq $SamAc' # note the '...' quoting Note the use of '...' , i.e. a verbatim ( single -quoted) string , because the string's value is to be passed as-is to the AD provider (cmdlet). While use of a script block ( { ... } ), Get-ADUser -Filter { sAMAccountName -eq $SamAc } , technically works too (its verbatim content , sans { and } , is converted to a string ), it is conceptually problematic - see bottom section. Do not quote the variable reference ( "$SamAc" ). Use only stand-alone variable references (e.g, $SamAc ); expressions are not supported (e.g., $user.SamAccountName or "$name*" or $("admin_" + $SamAc) ); if necessary, use an intermediate, auxiliary variable ; e.g.: $name = "admin_" + $SamAc; Get-ADUser -Filter 'sAMAccountName -eq $name' Generally, only a subset of PowerShell's operators are supported , and even those that are do not always behave the same way - see bottom section. Caveat : If you use Get-ADUser via an implicitly remoting module - whether self-created via Import-PSSession or, in PowerShell v7+, via the Windows Compatibility feature - neither '...' nor { ... } works, because the variable references are then evaluated on the remote machine, looking for the variables there (in vain); if (Get-Command Get-ADUser).CommandType returns Function , you're using an implicitly remoting module. If implicit remoting is involved, you MUST use string interpolation , as shown next. Option B: Using PowerShell's string interpolation (expandable strings), up front: Get-ADUser -Filter "sAMAccountName -eq `"$SamAc`"" # note the "..." quoting Using "..." , i.e. an expandable ( double -quoted) string makes PowerShell interpolate (expand) all variable references and subexpression up front , in which case the AD provider sees only the (variable-free) result. As shown above, for string operands embedded quoting then is necessary . For embedded quoting, '...' is a simpler alternative to `"...`" ( `" is an _escaped " ), but note that this assumes that an expanded value doesn't itself contain ' , which is a distinct possibility with last names, for instance. Also, be sure to ` -escape constants such as $true , $false , and $null inside the "..." string, which are always recognized by the AD provider; i.e., use `$true , `$false and `$null , so that PowerShell doesn't expand them up front. Caveat : Using an expandable string does not work with all data types , at least not directly : for instance, the default stringification of a [datetime] instance (e.g., 01/15/2018 16:00:00 is not recognized by the AD provider; in this case, embedding the result of a call to the instance's .ToFileTime() (or .ToFileTimeUtc() ?) method into the string may be necessary (as suggested in the comments on this post ); I'm unclear on whether there are other data types that require similar workarounds. On the plus side, string interpolation allows you to embed entire expressions and even commands in a "..." string, using $(...) 
, the subexpression operator ; e.g.: # Property access.Get-ADUser -Filter "sAMAccountName -eq `"$($user.SamAccountName)`""# String concatenationGet-ADUser -Filter "sAMAccountName -eq `"$('admin_' + $SamAc)`"" Background Any argument you pass to -Filter is coerced to a string first , before it is passed to the Get-ADUser cmdlet, because the -Filter parameter is of type [string] - as it is for all provider cmdlets that support this parameter; verify with Get-ADUser -? With -Filter in general, it is up to the cmdlet (the underlying PowerShell provider ) to interpret that string, using a domain-specific (query) language that often has little in common with PowerShell . In the case of Get-ADUser , that domain-specific language (query language) is documented in Get-Help about_ActiveDirectory_Filter . Note: As of this writing, no newer version of this legacy topic exists; this GitHub issue requests one. With Get-AdUser , the language supported by -Filter is certainly modeled on PowerShell, but it has many limitations and some behavioral differences that one must be aware of , notably: As Santiago Squarzon points out, these limitations and difference stem from the fact that the language is translated into an LDAP filter behind the scenes , it is therefore constrained by its features and behaviors. (Note that you can use the -LDAPFilter parameter in lieu of -Filter to directly pass an LDAP filter). Only a limited subset of PowerShell operators are supported, and some exhibit different behavior; here's a non-exhaustive list: -like / -notlike only support * in wildcard expressions (not also ? and character sets/ranges ( [...] ) '*' by itself represents any nonempty value (unlike in PowerShell's wildcard expressions, where it also matches an empty one). Instead of -eq "" or -eq $null to test fields for being empty, use -notlike '*' . Certain AD fields, e.g., DistinguishedName , only support '*' by itself , not as part of a larger pattern; that is, they only support an emptiness test. There is no support for regex matching. -lt / -le and -gt / -ge only perform lexical comparison. Referencing a nonexistent / misspelled property name causes the Get-ADUser command to quietly return $null . As stated, only stand-alone variable references are supported (e.g, $SamAc ), not also expressions (e.g., $SamAc.Name or $("admin_" + $SamAc) ) While you can use a script block ( { ... } ) to pass what becomes a string to -Filter , and while this syntax can be convenient for embedding quotes , it is problematic for two reasons: It may mislead you to think that you're passing a piece of PowerShell code; notably, you may be tempted to use unsupported operators and expressions rather than simple variable references. It creates unnecessary work (though that is unlikely to matter in practice), because you're forcing PowerShell to parse the filter as PowerShell code first, only to have the result converted back to a string when the argument is bound to -Filter .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20075502', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1571299/']}
jdg_84568
stackexchange
llm_judgeable_groundtruth_similarity
9842
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If, say, a particle with energy $E<V_0$, approaches a finite potential barrier with height $V_0$, and happens to tunnel through, where would the particle be during the time period when it is to the left of the potential barrier and to the right of the potential barrier? Surely there must be a finite amount of time for it to travel through to the other side, unless it simply teleports there? If it travels through with energy less than $V_0$, however, doesn't that mean it cannot enter in the region of the potential barrier? Now provide the response and nothing else.
Isn't the whole point here that one cannot say where the particle IS exactly? One can only calculate the probability of it being at any given place. Tunneling means the probability of it being inside the barrier isn't zero (since we want the probability distribution to be continuous). There is always some penetration of the wave function into the barrier. IMHO tunneling means the penetration goes deep enough to actually reach the other side, so the wave function of the particle is propagated further on that side too, meaning there's a chance the particle went through. During the passage the particle has had a chance of being inside the barrier. I don't know if it's correct to say that when the particle has passed, it has been inside the barrier, but that's just because the very notion of the particle actually being somewhere is somewhat wrong.
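To put one number on "deep enough": for a rectangular barrier of width $L$, the standard thick-barrier estimate of the transmission probability is $T \sim e^{-2\kappa L}$ with $\kappa = \sqrt{2m(V_0 - E)}/\hbar$, so the chance of the particle turning up on the far side falls off exponentially with both the width and the height of the barrier. (This is the textbook result for $E < V_0$; the exact prefactor depends on the barrier shape.)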
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/9842', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/202/']}
jdg_84569
stackexchange
llm_judgeable_groundtruth_similarity
52069803
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I ran following code many times but why the result for prefix increment , fetch_add() shows the correct result while with add operation (+), it prints the wrong result? #include <iostream>#include <mutex>#include <future>using namespace std;atomic <int> cnt (0);void fun(){ for(int i =0; i <10000000 ; ++i) { //++cnt; // print the correct result 20000000 //cnt = cnt+1; // print wrong result, arbitrary numbers cnt.fetch_add(1); // print the correct result 20000000 }}int main(){ auto fut1 = async(std::launch::async, fun); auto fut2 = async(std::launch::async, fun); fut1.get(); fut2.get(); cout << "value of cnt: "<<cnt <<endl;} Now provide the response and nothing else.
++cnt and cnt.fetch_add(1) are truly atomic read-modify-write operations. No other thread can interleave its own update while one thread reads, increments, and writes back the value, so the two threads cannot step on each other's toes. Access to cnt is fully serialized, and the final result is what you would expect. cnt = cnt + 1; is not fully atomic. It involves three separate operations, only two of which are atomic. By the time a thread has atomically read the current value of cnt and made a local copy of it, the other thread is free to modify cnt at will while that copy is being incremented. The assignment of the incremented copy back to cnt is then done atomically, but it may write back a stale value if cnt has already been modified by the other thread in the meantime. Increments get lost, so the final result is essentially arbitrary and not what you would expect.
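A sketch of what cnt = cnt + 1 amounts to in terms of atomic sub-operations - illustrative only, since the real statement uses the atomic's implicit conversion and assignment rather than explicit load()/store() calls:

// cnt = cnt + 1; behaves roughly like:
int tmp = cnt.load();   // atomic read of the current value
tmp = tmp + 1;          // plain increment of a private local copy
cnt.store(tmp);         // atomic write - silently overwrites any update
                        // the other thread made in between
// ++cnt and cnt.fetch_add(1) instead perform the whole read-modify-write
// as one indivisible step, so no increment can be lost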
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52069803', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/791403/']}
jdg_84570
stackexchange
llm_judgeable_groundtruth_similarity
1700560
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The equations $x^3 + 2 x^2 +2x +1 = 0$ and $x^{200} + x^{130} + 1 = 0 $ have exactly one common root; no common root; exactly three common roots; exactly two common roots. I factored the first equation. I think the roots are $-1$, $\omega$ and $\omega^2$. Now provide the response and nothing else.
$$x^3+2x^2+2x+1=(x+1)(x^2+x+1)$$ The roots are $-1,\omega,$ and $\omega^2$, where $\omega,\omega^2$ are non real cube roots of unity. Substituting in the other equation,$$(-1)^{200}+(-1)^{130}+1=1\ne0$$$$\omega^{200}+\omega^{130}+1=\omega^2+\omega+1=0$$$$(\omega^2)^{200}+(\omega^2)^{130}+1=\omega+\omega^2+1=0$$ Thus, two roots are common.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1700560', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/177835/']}
jdg_84571
stackexchange
llm_judgeable_groundtruth_similarity
15456642
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to make a POST request with Android, but I'm not succeeding. I think the problem is in how to set the parameters for resquisição and Header. Below is my method I do the request ... public void testPostDate() { HttpClient client = new DefaultHttpClient(); HttpConnectionParams.setConnectionTimeout(client.getParams(), 10000); HttpResponse response; Gson gson = new Gson(); CrimePOST.Crime crime = new CrimePOST().new Crime(10, "São Paulo", "descrição", 10.00, 30.00); CrimePOST crimePost = new CrimePOST(); crimePost.setCrime(crime); List<NameValuePair> params = new LinkedList<NameValuePair>(); params.add(new BasicNameValuePair("token", "0V1AYFK12SeCZHYgXbNMew==$tRqPNplipDwtbD0vxWv6GPJIT6Yk5abwca3IJa6JhMs=")); String json = gson.toJson(crimePost); String paramString = URLEncodedUtils.format(params, Utils.ENCODE); try { HttpPost post = new HttpPost( "http://safe-sea-4024.herokuapp.com/crimes/mobilecreate" + "?" + paramString); post.setHeader("Content-Type", "application/json"); StringEntity entitty = new StringEntity(json); entitty.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, "application/json")); post.setEntity(entitty); response = client.execute(post); /* Checking response */ if (response != null) { InputStream in = response.getEntity().getContent(); String a = toString(in); System.out.println(a); } } catch (Exception e) { e.printStackTrace(); }} This method is responsible for converting an inputStream to String private String toString(InputStream is) throws IOException { byte[] bytes = new byte[1024]; ByteArrayOutputStream baos = new ByteArrayOutputStream(); int lidos; while ((lidos = is.read(bytes)) > 0) { baos.write(bytes, 0, lidos); } return new String(baos.toByteArray());} Really I am passing the Header correctly? Now provide the response and nothing else.
if (isVowel == True): should be if isVowel(char): isVowel is a function object. isVowel == True will always be False. Note that you could also do this, faster and more simply with str.translate . In [90]: 'Abracadabra'.translate(None, 'aeiouAEIOU')Out[90]: 'brcdbr' or, (as EOL points out) using regex: In [93]: import reIn [95]: re.sub(r'(?i)[aeiou]', '', 'Abracadabra')Out[95]: 'brcdbr' However, str.translate is faster in this case: In [94]: %timeit 'Abracadabra'.translate(None, 'aeiouAEIOU')1000000 loops, best of 3: 316 ns per loopIn [96]: %timeit re.sub(r'(?i)[aeiou]', '', 'Abracadabra')100000 loops, best of 3: 2.26 us per loop
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15456642', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2052937/']}
jdg_84572
stackexchange
llm_judgeable_groundtruth_similarity
53033
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In thermodynamics, the Helmholtz free energy is deemed as a thermodynamic potential which calculates the “useful” work retrievable from a closed thermodynamic system at a constant temperature and volume. For such a system, the negative of the difference in the Helmholtz energy is equal to the maximum amount of work extractable from a thermodynamic process in which both temperature and volume are kept constant. In these conditions, it is minimized and held constant at equilibrium. ( citation ) My questions are How can one say that if we subtract "TS" from "U" then what we get is free energy? Why do we have to subtract "TS"? What if we subtract pV instead of "TS". If the system is in constant temperature and Volume , then how it is possible to do work? Is it the statement true for Ideal gas also. If it is true then a thermodynamic system(with ideal gas) at a constant temperature and volume is also at constant pressure. so it is at equilibrium. Does that mean for a ideal gas system free energy is always minimum? Now provide the response and nothing else.
Lets start with the definition of free energy. Free energy is the maximum work you can get out of a system subject to thermodynamic equilibration under some constraints. These constraints could be constant $(T, P)$, $(T,V)$ etc. The constant part can be confusing but can be explained. Before I do so, let me explain the difference between a thermodynamic potential, free energy, and work. Thermodynamic potentials are extensive state functions like, internal energy $U$, Helmholtz potential $A$, Gibbs potential $G$, Enthalpy $H$ etc. These potentials are state functions and depend on the system state (nothing to be held constant here) and turn out to be Legendre transforms of each other. Strictly speaking, free energy is the difference in the value of the potential in the non-equilibrium state and the equilibrium state. Therefore, free energy is a function of two states: the initial non-equlibrium state and final equilibrium state. Often this subtle detail is ignored and one tends to call the Gibbs function as the Gibbs free energy and so on. Well this is wrong but happens very often. Anyways next difference is between free energy and work. Free energy is the maximum work one can obtain if the system goes to equilibrium in a reversible manner subject to constraints. Actual work in any system will be lower because of irreversible processes (friction, chemical reaction etc). OK let's now derive Helmholtz free energy from internal energy. Consider that I have a rigid box with a partition connected to a shaft (to do work). There is the same gas on either side of the box but one side $A$ is at higher pressure and other side $B$ at lower pressure as shown in Figure below. The combined system is in mechanical non-equilibrium. If you let the piston go, it will provide work to the outside world while mechanically equilibrating the stuff inside the box. If you were to do this allowing external heat exchange such that you hold $T$ fixed and the total $V$ is fixed of the box, lets calculate how much work will you get. Lets write the balance equations for energy and entropy.\begin{align*}&[E]:\quad dU = -\delta W+\delta Q \\&[S]:\quad dS= \frac{\delta Q}{T}+ \delta S_{gen}\end{align*}The $\delta S_{gen}$ is the entropy generated due to friction, chemical reaction, (if stuff was reacting), etc. So now lets substitute for $\delta Q$ from the second to the first equation and re-arrange.\begin{align*}& \delta W = -(dU-TdS+T\delta S_{gen})\end{align*}The maximum work you can get from this process is when everything inside is reversible, i.e., $\delta S_{gen}=0$\begin{align*}&\delta W_{max} = -(dU-TdS)=-(dU-d(TS))=-dA \;(\text{constant }T)\\&W_{max}= -\int_{initial}^{final}{dA}= A_{init}-A_{final}\end{align*}So, if you process was at constant $T,V$ and reversible , you will have the Helmholtz free energy (difference in Helmholtz potential) as work. Definition of Helmholtz potential $A$ : The Helmholtz potential $A$ has been defined as $U-TS$ purely out of convenience to not have to repeatedly keep saying work is $-(dU-d(TS))$ at constant $T,V$. It has been defined as a useful grouping of variables. The "constant" confusion : Many times you will find processes that happen in a constant $T, V$ setting where this definition is directly useful. But even otherwise one must realize that the free energy is only a measure of work from a reversible path so any reversible path that starts at some $T,V$ and ends at same $T,V$ will give the same result regardless of what you did in the middle, as long as you are reversible! 
Therefore, do we have to hold $T,V$ constant always? No, but we have to start and end at the same $T,V$. If we do, you will see that the net work (for whatever you did in between) is the change in the Helmholtz potential, i.e. the Helmholtz free energy. Non-reversible equilibration : Let us say you don't take the work out from this system. You don't move the piston, but break the barrier, allowing completely irreversible free expansion between both sides. In such a case, if you go to the equations above you will see,\begin{align*}&\delta W=0\\& T\delta S_{gen} = -dA >0\end{align*}This means irreversibility during equilibration destroys Helmholtz free energy, i.e. the potential to do work. So, irreversible or not, once the system has reached equilibrium it sits at the minimum of the Helmholtz free energy. If you managed to take it out as work, good for you; otherwise you have generated a bunch of entropy and made nature happy :). Ideal gas : Nowhere in this discussion have I assumed an ideal or non-ideal gas, so everything above applies to ideal gases. One last thing : In a similar manner you can show that for constant $T,P$ the free energy is the Gibbs free energy. In $S,V$-constrained systems the internal energy $U$ itself plays the role of the free energy (again, strictly, its differences). Try working it out, and let me know if you can't get it.
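For completeness, a sketch of the constant $T,P$ case alluded to above, using the same bookkeeping; here $\delta W_{useful}$ is my label for the work other than the $p\,dV$ boundary work:\begin{align*}&\delta W = p\,dV + \delta W_{useful}\\&\delta W_{useful} = -(dU + p\,dV - T\,dS + T\delta S_{gen})\\&\delta W_{useful,max} = -(dU + d(pV) - d(TS)) = -dG \;(\text{constant }T,P)\end{align*}which is why the Gibbs potential plays for $T,P$-constrained systems exactly the role $A$ plays above.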
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/53033', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/20591/']}
jdg_84573
stackexchange
llm_judgeable_groundtruth_similarity
1921514
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I developed an application to display some text at defined intervals in the Android emulator screen. I am using the Handler class. Here is a snippet from my code: handler = new Handler();Runnable r = new Runnable() { public void run() { tv.append("Hello World"); }};handler.postDelayed(r, 1000); When I run this application the text is displayed only once. Why? Now provide the response and nothing else.
The simple fix to your example is : handler = new Handler();final Runnable r = new Runnable() { public void run() { tv.append("Hello World"); handler.postDelayed(this, 1000); }};handler.postDelayed(r, 1000); Or we can use normal thread for example (with original Runner) : Thread thread = new Thread() { @Override public void run() { try { while(true) { sleep(1000); handler.post(this); } } catch (InterruptedException e) { e.printStackTrace(); } }};thread.start(); You may consider your runnable object just as a command that can be sent to the message queue for execution, and handler as just a helper object used to send that command. More details are here http://developer.android.com/reference/android/os/Handler.html
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/1921514', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/131062/']}
jdg_84574
stackexchange
llm_judgeable_groundtruth_similarity
1421597
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a console application written in C#/.NET that I want to run from a script (nant). If an exception occurs in the console application, I would like nant to continue, but in Windows Vista there is a popup that searches for solutions and asks for debug etc. I would like to avoid the popup with "program stopped working" when an exception happens in the console application. How can I control this from C#/.NET? (A similar question addresses the issue for the C language , but I would like a solution for C#/.NET.) (To clarify: I would like the exception to be passed to nant, but without the popup.) Now provide the response and nothing else.
The JIT debugger popup occurs when there's an unhandled exception. That is, an exception tunnels all the way up the stack to the root of any thread in the runtime. To avoid this, you can handle the AppDomain.CurrentDomain.UnhandledException event and just call Environment.Exit(1) to exit gracefully. This will handle all exceptions on all threads within your AppDomain. Unless you're doing anything special, your app probably only has one AppDomain, so putting this in your public static void Main method should suffice: AppDomain.CurrentDomain.UnhandledException += (sender, args) =>{ Console.Error.WriteLine("Unhandled exception: " + args.ExceptionObject); Environment.Exit(1);}; You should probably use the NAnt logger to write out the error in this case too (can't recall the API for this offhand though.) You can also disable JIT debugging on the machine. I would only recommend this in certain circumstances such as for a dedicated build server.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1421597', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41094/']}
jdg_84575
stackexchange
llm_judgeable_groundtruth_similarity
46971969
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Pandas dataframe which has Encoding: latin-1 and is delimited by ; . The dataframe is very large almost of size: 350000 x 3800 . I wanted to use sklearn initially but my dataframe has missing values ( NAN values ) so i could not use sklearn's random forests or GBM. So i had to use H2O's Distributed random forests for the Training of the dataset. The main Problem is the dataframe is not efficiently converted when i do h2o.H2OFrame(data) . I checked for the possibility for providing the Encoding Options but there is nothing in the documentation. Do anyone have an idea about this? Any leads could help me. I also want to know if there are any other libraries like H2O which can handle NAN values very efficiently? I know that we can impute the columns but i should not do that in my dataset because my columns are values from different sensors, if the values are not there implies that the sensor is not present. I can use only Python Now provide the response and nothing else.
import h2oimport pandas as pddf = pd.DataFrame({'col1': [1,1,2], 'col2': ['César Chávez Day', 'César Chávez Day', 'César Chávez Day']})hf = h2o.H2OFrame(df) Since the problem that you are facing is due to the high number of NANs in the dataset, this should be handled first. There are two ways to do so. Replace NAN with a single, obviously out-of-range value. Ex. If a feature varies between 0-1 replace all NAN with -1 for that feature. Use the class Imputer to handle NAN values. This will replace NAN with either of mean, median or mode of that feature.
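Hedged sketches of the two options above - the sentinel value, the DataFrame name and the imputer class are placeholders/version-dependent (sklearn.preprocessing.Imputer in older releases, sklearn.impute.SimpleImputer in newer ones), and the frame is assumed to hold all-numeric sensor columns:

import pandas as pd

# Option 1: replace NaN with an obviously out-of-range sentinel per sensor column
df_sentinel = df.fillna(-1)

# Option 2: impute with the column mean (median or most_frequent also possible)
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy='mean')
df_imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)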
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46971969', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3847509/']}
jdg_84576
stackexchange
llm_judgeable_groundtruth_similarity
6615489
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: INTRODUCTION: I'm a bioinformatician. In my analysis which I perform on all human genes (about 20 000) I search for a particular short sequence motif to check how many times this motif occurs in each gene. Genes are 'written' in a linear sequence in four letters (A,T,G,C). For example: CGTAGGGGGTTTAC... This is the four-letter alphabet of genetic code which is like the secret language of each cell, it’s how DNA actually stores information. I suspect that frequent repetitions of a particular short motif sequence (AGTGGAC) in some genes are crucial in a specific biochemical process in the cell. Since the motif itself is very short it is difficult with computational tools to distinguish between true functional examples in genes and those that look similar by chance. To avoid this problem I get sequences of all genes and concatenated into a single string and shuffled. The length of each of the original genes was stored. Then for each of the original sequence lengths, a random sequence was constructed by repeatedly picking A or T or G or C at random from the concatenated sequence and transferring it to the random sequence. In this way, the resulting set of randomized sequences has the same length distribution, as well as the overall A,T,G,C composition. Then I search for the motif in these randomized sequences. I performed this procedure 1000 times and averaged the results. 15000 genes that do not contain a given motif 5000 genes that contain 1 motif 3000 genes that contain 2 motifs 1000 genes that contain 3 motifs ... 1 gene that contain 6 motifs So even after 1000 times randomization of true genetic code, there aren't any genes which have more than 6 motifs. But in the true genetic code, there are a few genes which contain more then 20 occurrences of the motif, which suggest that these repetition might be functional and it's unlikely to find them in such an abundance by pure chance. PROBLEM: I would like to know the probability of finding a gene with let's say 20 occurrences of the motif in my distribution. So I want to know the probability to find such a gene by chance. I would like to implement this in Python, but I don't know how. Can I do such an analysis in Python? Any help would be appreciated. Now provide the response and nothing else.
In the SciPy documentation you will find a list of all implemented continuous distribution functions. Each one has a fit() method, which returns the corresponding shape parameters. Even if you don't know which distribution to use you can try many distributions simultaneously and choose the one that best fits your data, like in the code below. Note that if you have no idea about the distribution it may be difficult to fit the sample. import matplotlib.pyplot as pltimport scipyimport scipy.statssize = 20000x = scipy.arange(size)# creating the dummy sample (using beta distribution)y = scipy.int_(scipy.round_(scipy.stats.beta.rvs(6,2,size=size)*47))# creating the histogramh = plt.hist(y, bins=range(48))dist_names = ['alpha', 'beta', 'arcsine', 'weibull_min', 'weibull_max', 'rayleigh']for dist_name in dist_names: dist = getattr(scipy.stats, dist_name) params = dist.fit(y) arg = params[:-2] loc = params[-2] scale = params[-1] if arg: pdf_fitted = dist.pdf(x, *arg, loc=loc, scale=scale) * size else: pdf_fitted = dist.pdf(x, loc=loc, scale=scale) * size plt.plot(pdf_fitted, label=dist_name) plt.xlim(0,47)plt.legend(loc='upper left')plt.show() References: - Distribution fitting with Scipy - Fitting empirical distribution to theoretical ones with Scipy (Python)?
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6615489', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/428727/']}
jdg_84577
stackexchange
llm_judgeable_groundtruth_similarity
2526
Below is a question asked on the forum devops.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In this bash script under Linux Alpine they have (line 8): exec su-exec "$ZOO_USER" "$0" "$@" As far as I know, su-exec is however an Apache httpd dependecy - can I avoid it installing it under Ubuntu and use some more plain alternative? I have tested so far: su -c "$0 $@" $ZOO_USER But then the call through Docker CMD shows that I might have some sort of escaping/quotes error, CMD ["zkServer.sh", "start-foreground"] resulting in the following output; obviously there is a misplacement for the username field in the command sequence. No passwd entry for user 'start-foreground' But what is then the difference to the original (su-exec), which works fine? And how to get it right with su ? ==== UPD: su-exec is here a third-parthy Alpine Linux package https://github.com/ncopa/su-exec Now provide the response and nothing else.
wrong argument order to su $ZOO_USER needs to be the first argument to su with the "$@" coming afterwards. su -c "$0" "$ZOO_USER" "$@" It is ok for the options to come before the username. It is a good idea to keep them in double quotes so that there aren't issues if someone accidentally puts spaces into it. Your error message reinforces that this is an order of arguments problem. even better Based on comments this would be even better: su -c "$0" "$ZOO_USER" -- "$@" su man page excerpt Additional arguments may be provided after the username, in which case they are supplied to the user's login shell. In particular, an argument of -c will cause the next argument to be treated as a command by most command interpreters. The command will be executed by the shell specified in /etc/passwd for the target user. You can use the -- argument to separate su options from the arguments supplied to the shell. busybox docs excerpt: In a comment the OP asked about why this wouldn't work in Alpine Linux. Since alpine is based on busybox I looked up the busybox docs. According to the docs busybox doesn't support sending additional arguments to su : susu [OPTIONS] [-] [username]Change user id or become rootOptions: -p, -m Preserve environment -c CMD Command to pass to 'sh -c' -s SH Shell to use instead of default shell
{}
{'log_upvote_score': 4, 'links': ['https://devops.stackexchange.com/questions/2526', 'https://devops.stackexchange.com', 'https://devops.stackexchange.com/users/707/']}
jdg_84578
stackexchange
llm_judgeable_groundtruth_similarity
16947266
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've created a directive with a binding using "scope". In some cases, I want to bind a constant object. For instance, with HTML: <div ng-controller="Ctrl"> <greeting person="{firstName: 'Bob', lastName: 'Jones'}"></greeting></div> and JavaScript: var app = angular.module('myApp', []);app.controller("Ctrl", function($scope) {});app.directive("greeting", function () { return { restrict: "E", replace: true, scope: { person: "=" }, template: '<p>Hello {{person.firstName}} {{person.lastName}}</p>' };}); Although this works, it also causes a JavaScript error: Error: 10 $digest() iterations reached. Aborting! (Fiddle demonstrating the problem) What's the correct way to bind a constant object without causing the error? Now provide the response and nothing else.
Here's the solution I came up with, based on @sh0ber's answer: Implement a custom link function. If the attribute is valid JSON, then it's a constant value, so we only evaluate it once. Otherwise, watch and update the value as normal (in other words, try to behave as a = binding). scope needs to be set to true to make sure that the assigned value only affects this instance of the directive. (Example on jsFiddle)

HTML:

<div ng-controller="Ctrl">
    <greeting person='{"firstName": "Bob", "lastName": "Jones"}'></greeting>
    <greeting person="jim"></greeting>
</div>

JavaScript:

var app = angular.module('myApp', []);

app.controller("Ctrl", function($scope) {
    $scope.jim = {firstName: 'Jim', lastName: "Bloggs"};
});

app.directive("greeting", function () {
    return {
        restrict: "E",
        replace: true,
        scope: true,
        link: function(scope, elements, attrs) {
            try {
                scope.person = JSON.parse(attrs.person);
            } catch (e) {
                scope.$watch(function() {
                    return scope.$parent.$eval(attrs.person);
                }, function(newValue, oldValue) {
                    scope.person = newValue;
                });
            }
        },
        template: '<p>Hello {{person.firstName}} {{person.lastName}}</p>'
    };
});
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16947266', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49381/']}
jdg_84579
stackexchange
llm_judgeable_groundtruth_similarity
24345220
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why does the code below crash the .NET compiler? It was tested on csc.exe version 4.0. See e.g. here for online demo on different version - it crashes in the same manner while it says dynamic is not supported https://dotnetfiddle.net/FMn59S : Compilation error (line 0, col 0): Internal Compiler Error (0xc0000005 at address xy): likely culprit is 'TRANSFORM'. The extension method works fine on List<dynamic> though. using System;using System.Collections.Generic;static class F { public static void M<T>(this IEnumerable<T> enumeration, Action<T> action){} static void U(C.K d) { d.M(kvp => Console.WriteLine(kvp)); }}class C { public class K : Dictionary<string, dynamic>{}} Update: this doesn't crash the compiler static void U(Dictionary<string, dynamic> d){ d.M(kvp => Console.WriteLine(kvp));} Update 2: the same bug was reported in http://connect.microsoft.com/VisualStudio/feedback/details/892372/compiler-error-with-dynamic-dictinoaries . The bug was reported for FirstOrDefault, but it seems the compiler crashes on any extension method applied to class derived from Dictionary<T1,T2>, where at least one of the parameter types is dynamic . See an even more general description of the problem below by Erik Funkenbusch. Update 3: another non-standard behaviour. When I try to call extension method as a static method, that is, F.M(d, kvp => Console.WriteLine(kvp)); , the compiler doesn't crash, but it cannot find the overload: Argument 1: cannot convert from 'C.K' to 'System.Collections.Generic.IEnumerable<System.Collections.Generic.KeyValuePair<string,dynamic>>' Update 4 - SOLUTION (kind of): Hans sketched 2nd workaround, which is semantically equivalent to original code, but works only for extension method call and not for standard call. Since the bug is likely caused by the fact that the compiler fails to cast class derived from generic class with multiple parameters (with one being dynamic) to its supertype, the solution is to provide an explicit cast. See https://dotnetfiddle.net/oNvlcL : ((Dictionary<string, dynamic>)d).M(kvp => Console.WriteLine(kvp));M((Dictionary<string, dynamic>)d, kvp => Console.WriteLine(kvp)); Now provide the response and nothing else.
It is dynamic that is triggering the instability, the crash disappears when you replace it by object . Which is one workaround, the other is to help it infer the correct T: static void U(C.K d) { d.M(new Action<KeyValuePair<string, dynamic>>(kvp => Console.WriteLine(kvp)));} The feedback report that you found is a strong match, no need to file your own I'd say.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24345220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1749408/']}
jdg_84580
stackexchange
llm_judgeable_groundtruth_similarity
77641
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Use congruences to show that $6$ divides $n^3 – n$ for every integer $n$. I did this same problem using induction, and I don't understand how to do it using congruences. Is this using modulo? Now provide the response and nothing else.
One simple approach is just to try all the residues modulo $6$: $0^3-0=0\equiv 0 \pmod {6},\ldots 5^3-5=120\equiv 0 \pmod {6}$. Alternately, you can factor $n^3-n$ and argue there must be a factor that is divisible by $2$ and one divisible by $3$.
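Spelling out the second (factoring) suggestion as a short worked argument:

$$n^3 - n = n(n^2 - 1) = (n-1)\,n\,(n+1).$$

Among the three consecutive integers $n-1$, $n$, $n+1$ at least one is even and exactly one is divisible by $3$, so $2 \mid n^3-n$ and $3 \mid n^3-n$. Since $\gcd(2,3)=1$, it follows that $6 \mid n^3-n$, i.e. $n^3 \equiv n \pmod{6}$.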
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/77641', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/18631/']}
jdg_84581
stackexchange
llm_judgeable_groundtruth_similarity
8482028
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is a good way to handle dynamic sql parameters? I have a search form that takes in a whole bunch of different search parameters. If the parameters are empty and I have the parameter in the sql string will it screw or slow down the query? Now provide the response and nothing else.
Depending on the specific implementation, we have two general approaches to this problem: 1) Dynamically build the filter statement for the SQL query in code skipping any parameters that are empty. This is the best approach if you allow the user to select multiple values for a single column (i.e. select 0 or more of the 50 states to filter the data). For example: Assuming txtCondition1 and txtCondition2 are textboxes: // Assuming conn is an open SqlConnection System.Text.StringBuilder sbSQL = new StringBuilder(500); List<SqlParameter> cParameters = new List<SqlParameter>(); // Add a default condition of 1=1 so that all subsequent conditions can be added // with AND instead of having to check to see whether or not any other conditions // were added before adding AND. sbSQL.Append("SELECT * FROM MyTestTable WHERE 1 = 1 "); if (!String.IsNullOrEmpty(txtCondition1.Text)) { sbSQL.Append(" AND Column1 = @Column1"); cParameters.Add(new SqlParameter("@Column1", txtCondition1.Text)); } if (!String.IsNullOrEmpty(txtCondition1.Text)) { sbSQL.Append(" AND Column2 = @Column2"); cParameters.Add(new SqlParameter("@Column2", txtCondition2.Text)); } SqlCommand oCommand = new SqlCommand(sbSQL.ToString, conn); if (cParameters.Count != 0) { oCommand.Parameters.AddRange(cParameters.ToArray()); } // Do something with oCommand 2) If the values are more constrained, we usually pass them to a stored procedure, which is responsible for determining whether or not the value is to be evaluated by testing the parameter for "emptinesss", either null, empty string, 0 for numerics, etc.
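For the second approach, a hedged sketch of the usual stored-procedure pattern (the table and column names reuse the ones from the C# snippet above; the procedure name is made up for illustration):

-- each optional parameter is ignored when NULL or empty, otherwise it filters
CREATE PROCEDURE SearchMyTestTable
    @Column1 VARCHAR(50) = NULL,
    @Column2 VARCHAR(50) = NULL
AS
BEGIN
    SELECT *
    FROM MyTestTable
    WHERE (@Column1 IS NULL OR @Column1 = '' OR Column1 = @Column1)
      AND (@Column2 IS NULL OR @Column2 = '' OR Column2 = @Column2);
END

Catch-all filters like this can produce poor cached plans on some SQL Server versions, so they are sometimes paired with OPTION (RECOMPILE); whether that is worth it depends on the workload.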
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8482028', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/173432/']}
jdg_84582
stackexchange
llm_judgeable_groundtruth_similarity
18406165
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

import time

def timer():
    now = time.localtime(time.time())
    return now[5]

run = raw_input("Start? > ")
while run == "start":
    minutes = 0
    current_sec = timer()
    #print current_sec
    if current_sec == 59:
        mins = minutes + 1
        print ">>>>>>>>>>>>>>>>>>>>>", mins

I want to create a kind of stopwatch that brings up a dialog box when it reaches 20 minutes. The dialog box is not the problem, but my minutes variable does not increment in this code. Now provide the response and nothing else.
See Timer Objects from threading. How about:

from threading import Timer

def timeout():
    print("Game over")

# duration is in seconds
t = Timer(20 * 60, timeout)
t.start()

# wait for time completion
t.join()

Should you want to pass arguments to the timeout function, you can give them in the Timer constructor:

def timeout(foo, bar=None):
    print('The arguments were: foo: {}, bar: {}'.format(foo, bar))

t = Timer(20 * 60, timeout, args=['something'], kwargs={'bar': 'else'})

Or you can use functools.partial to create a bound function, or you can pass in an instance-bound method.
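A small follow-on sketch of the functools.partial variant mentioned above, together with cancellation (Timer.cancel() belongs to the same threading API):

import functools
from threading import Timer

def timeout(foo, bar=None):
    print('The arguments were: foo: {}, bar: {}'.format(foo, bar))

t = Timer(20 * 60, functools.partial(timeout, 'something', bar='else'))
t.start()

# if the 20 minutes are no longer needed, a pending timer can be cancelled
t.cancel()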
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18406165', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2711485/']}
jdg_84583
stackexchange
llm_judgeable_groundtruth_similarity
106234
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $ (X,\mathcal{F},\mu) $ and $ (G,\mathcal{G},\nu) $ be two measure spaces with $ \mu $ and $ \nu $ being $ \sigma $-finite. Per definition, the linear span of$$\{\mathbf{1}_{C}~|~C \in \mathcal{F} \otimes \mathcal{G} ~ \text{and} ~ (\mu \otimes \nu)(C) < \infty\}$$is dense in $ {L^{p}}(X \times Y,\mathcal{F} \otimes \mathcal{G},\mu \otimes \nu) $ for any $ p \in [1,\infty) $. This should also be true for the linear span of$$\{\mathbf{1}_{A \times B}~|~A \in \mathcal{F}, ~ B \in \mathcal{G} ~ \text{and} ~ \mu(A),\nu(B) < \infty\}.$$Do you know a reference? Thank you. Now provide the response and nothing else.
A useful density lemma is the following. Let $(X,\mathcal{A}, \mu)$ be a measure space and let $\Gamma\subset\mathcal{P}(X)$ be a ring of sets of finite measure that generates the $\sigma$-algebra $\mathcal{A}$. Then the linear span of the characteristic functions of sets in $\Gamma$ is dense in $L^p$ (here $1\le p < +\infty$). In your case, of course, you can take $\Gamma$ to be the collection of finite unions of Cartesian products of measurable sets of finite measure. Incidentally, the hypothesis on $\Gamma$ in that density lemma can be weakened (one does not need all the ring structure of $\Gamma$). Let's say that $\Gamma\subset \mathcal{P}(X)$ is a "semi-ring" (warning: not standard; I borrowed it from Halmos, with a slightly more general meaning) if the following holds: for all $A$ and $B$ in $\Gamma$ the sets $A\setminus B$ and $A\cap B$ are both expressible as unions of countably many disjoint elements of $\Gamma$. Then the above lemma still holds. The notion of "semi-ring" is also interesting in that it is a convenient domain for a completely additive set function, so that Carathéodory's Extension Theorem applies.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/106234', 'https://mathoverflow.net', 'https://mathoverflow.net/users/19874/']}
jdg_84584
stackexchange
llm_judgeable_groundtruth_similarity
680972
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I came across this question on another forum. The question is: $$ \text{If $m,n\in \mathbb{Z}_+$ such that $3m^2+m=4n^2+n$, then $(m-n)$ is a perfect square.}$$ I have managed to partially prove this using this question as motivation as follows. Let $m>n$ and $k^2 = m-n$. The problem then becomes to show $k$ is an integer. Making the substitution $m=n+k^2$ we get $$3(n+k^2)^2+(n+k^2) = 4n^2+n$$ And solving for $n$ yields $$n = 3k^2\pm |k|\sqrt{12k^2+1}$$ So $n$ will be an integer if and only if $12k^2+1$ is a perfect square. This is where the previous question comes in. We want all solutions $(k,N)$ to $12k^2+1=N^2$, i.e. $$N^2-12k^2=1$$Using Pell's equation and Wikipedia (Pell Equation) as a guide we find the fundamental solution as $y_1=k=2, x_1=N=7$, and hence all other solutions are $x_i, y_i$ where$$x_i+y_i\sqrt{12} = \left(7+2\sqrt{12}\right)^i.$$ It is not hard to see $y_i$ is an integer for all $i$. My conclusion is then: If $(m,n)$ is a solution then $k^2=(m-n)\in S=\{y_i^2\}_{i=1}^{\infty} = \{2^2, 28^2, 390^2,...\}$. My questions are : $\ \ \ \bullet$ I made the assumption that $m>n$, is this easy to show? $\ \ \ \bullet$ If $y\in S$, is there always a solution $(m,n)$ with $(m-n)=y$ ? $\ \ \ \bullet$ More importantly: Is there an easier way to prove this? Now provide the response and nothing else.
Rewrite the original equation $3m^2+m=4n^2+n$ as $$12m^2+12n^2+m-n-24mn=16n^2+9m^2-24mn.$$ This factors as $$(m-n)(12(m-n)+1)=(4n-3m)^2.$$ Since $\gcd(m-n,12(m-n)+1)=1$, it follows that $m-n$ is a perfect square, as desired.
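As a sanity check, the smallest nontrivial case from the asker's Pell computation is $k=2$, i.e. $m-n=4$, which gives $(m,n)=(30,26)$:

$$3\cdot 30^2+30 = 2730 = 4\cdot 26^2+26, \qquad m-n = 4 = 2^2,$$
$$(m-n)\bigl(12(m-n)+1\bigr) = 4\cdot 49 = 196 = 14^2 = (4n-3m)^2, \quad\text{since } 4\cdot 26-3\cdot 30 = 14.$$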
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/680972', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/49975/']}
jdg_84585
stackexchange
llm_judgeable_groundtruth_similarity
5125408
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given the following contrived (and yes, terrible) example: template<typename... Args>void something(Args... args){ std::tuple<Args...> tuple; // not initializing for sake of example std::get<0>(tuple) = 5;} It works if you call it like so: int x = 10;something<int>(x); However, it does not work if you call it like this: int x = 10;something<int&>(x); Because of the assignment to 5. Assuming that I cannot, for whatever reason, initialize the tuple when it is defined, how might I get this to work when specifying the type as a reference? Specifically, I would like the tuple to be std::tuple<int> even when Args... is int& . The actual use case for this involves deserializing a string into a tuple where Args... are the parameter types to a function that is then called by unpacking the tuple. It all works great except when the function takes a parameter by reference. I'm using gcc 4.5.2 but would accept an answer that isn't implemented yet in this compiler. Now provide the response and nothing else.
I don't understand your question. This code works fine with GCC and I see no reason why it shouldn't work.

#include <tuple>

template<typename... Args>
void something(Args... args)
{
    std::tuple<Args...> tuple{args...};
    std::get<0>(tuple) = 5;
}

int main() {
    int x = 10;
    something<int&>(x);
}

[js@HOST2 cpp]$ g++ -std=c++0x main1.cpp
[js@HOST2 cpp]$

I don't know what you mean by "initialize the template". Since you have now updated the question, I can update my answer:

template<typename... Args>
void something(Args... args)
{
    std::tuple<typename std::decay<Args>::type...> tuple;
    std::get<0>(tuple) = 5;
}

std::decay removes const/volatile, removes references, and transforms array and function types to element and function pointers respectively. That is what you seem to be looking for.
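A minimal self-contained sketch of the decayed version, just to show that both instantiations from the question compile (the assignment is only illustrative):

#include <tuple>
#include <type_traits>

template<typename... Args>
void something(Args... args)
{
    // stores decayed (by-value) types: std::tuple<int> even when Args is int&
    std::tuple<typename std::decay<Args>::type...> tuple;
    std::get<0>(tuple) = 5;
}

int main()
{
    int x = 10;
    something<int>(x);   // worked before
    something<int&>(x);  // now also compiles, because the stored type decays to int
}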
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5125408', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/43496/']}
jdg_84586
stackexchange
llm_judgeable_groundtruth_similarity
7397769
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to force my Java application to run with a single instance. I found on this link this very nice piece of code that solve the problem using socket instead of using the file system. here the as i adjusted: package cern.ieplc.controller;import java.net.InetAddress;import java.net.InetSocketAddress;import java.net.ServerSocket;import java.net.Socket;import java.net.UnknownHostException; import org.apache.log4j.Logger; import java.io.BufferedReader;import java.io.IOException;import java.io.InputStreamReader;import java.io.OutputStream; public class ApplicationInstanceManager { public interface ApplicationInstanceListener { public void newInstanceCreated(); } private static final Logger log = Logger.getLogger(CustomLock.class); private static ApplicationInstanceListener subListener; /** Randomly chosen, but static, high socket number */ public static final int SINGLE_INSTANCE_NETWORK_SOCKET = 44331; /** Must end with newline */ public static final String SINGLE_INSTANCE_SHARED_KEY = "$$NewInstance$$\n"; private static ServerSocket socket; /** * Registers this instance of the application. * * @return true if first instance, false if not. */ public static boolean registerInstance() { // returnValueOnError should be true if lenient (allows app to run on network error) or false if strict. boolean returnValueOnError = true; // try to open network socket // if success, listen to socket for new instance message, return true // if unable to open, connect to existing and send new instance message, return false try { socket = new ServerSocket(SINGLE_INSTANCE_NETWORK_SOCKET, 10, InetAddress.getByAddress(new byte[]{127,0,0,1})); socket.setReuseAddress(true);//allows the socket to be bound even though a previous connection is in a timeout state. socket.bind(new InetSocketAddress(SINGLE_INSTANCE_NETWORK_SOCKET)); log.debug("Listening for application instances on socket " + SINGLE_INSTANCE_NETWORK_SOCKET); Thread instanceListenerThread = new Thread(new Runnable() { public void run() { boolean socketClosed = false; while (!socketClosed) { if (socket.isClosed()) { socketClosed = true; } else { try { Socket client = socket.accept(); BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream())); String message = in.readLine(); if (SINGLE_INSTANCE_SHARED_KEY.trim().equals(message.trim())) { log.debug("Shared key matched - new application instance found"); fireNewInstance(); } in.close(); client.close(); } catch (IOException e) { socketClosed = true; } } } } }); instanceListenerThread.start(); // listen } catch (UnknownHostException e) { log.error(e.getMessage(), e); return returnValueOnError; } catch (IOException e) { log.debug("Port is already taken. 
Notifying first instance."); try { Socket clientSocket = new Socket(InetAddress.getByAddress(new byte[]{127,0,0,1}), SINGLE_INSTANCE_NETWORK_SOCKET); OutputStream out = clientSocket.getOutputStream(); out.write(SINGLE_INSTANCE_SHARED_KEY.getBytes()); out.close(); clientSocket.close(); log.debug("Successfully notified first instance."); return false; } catch (UnknownHostException e1) { log.error(e.getMessage(), e); return returnValueOnError; } catch (IOException e1) { log.error("Error connecting to local port for single instance notification"); log.error(e1.getMessage(), e1); return returnValueOnError; } } return true; } public static void setApplicationInstanceListener(ApplicationInstanceListener listener) { subListener = listener; } private static void fireNewInstance() { if (subListener != null) { subListener.newInstanceCreated(); } } public static void closeInstance() { if (socket != null) { try { socket.close(); } catch (IOException e) { log.error("Error while closing the socket"); } } }} I tryed the code and it works really well under Linux. if i close the application (even trying to kill it) the socket is immediatly released and i can launch a new application!Unfortunatelly under windows thinks are not so easy. once the resource is allocated is never released. if i close the software i will not be able to launch it again till i close my section. Any idea about how fix nicelly the code to make it works under windows.I tought i could use a shut down hook to catch at least the normal shutting down.Do not really know instead wat to do in case he process terminates in an unexpected way. Here i attach a print screen done over the SW TCPView that shoes how the port is kept open by java: I tryed implementing a much simpler version. still the same problem. under windows the resources are not released. Here is the second code: import java.net.ServerSocket;import javax.swing.JOptionPane;import javax.swing.JFrame;import java.io.IOException;import java.net.BindException;class MyApplication{ public static ServerSocket serverSocket; public static void main(String as[]) { try { //creating object of server socket and bind to some port number serverSocket = new ServerSocket(15486); ////do not put common port number like 80 etc. ////Because they are already used by system JFrame jf = new JFrame(); jf.setVisible(true); jf.setSize(200, 200); } catch (BindException exc) { JOptionPane.showMessageDialog(null, "Another instance of this application is already running.", "Error", JOptionPane.ERROR_MESSAGE); System.exit(0); } catch (IOException exc) { JOptionPane.showMessageDialog(null, "Another instance of this application is already running.", "Error", JOptionPane.ERROR_MESSAGE); System.exit(0); } }} There is somethink that does not clse properly.It does not work if i put in the shutdown hook the followin code as well: // shut down server try{ serverSocket.close();}catch (IOException e) { e.printStackTrace();} Thanks in advance Now provide the response and nothing else.
The two functions do vastly different things! The resize() method (and passing an argument to the constructor is equivalent to that) will insert or delete the appropriate number of elements so the vector has the given size (it has an optional second argument to specify their value). It will affect size(), iteration will go over all those elements, push_back will insert after them, and you can directly access them using operator[]. The reserve() method only allocates memory, but leaves it uninitialized. It only affects capacity(); size() will be unchanged. There is no value for the objects, because nothing is added to the vector. If you then insert the elements, no reallocation will happen, because it was done in advance, but that's the only effect. So it depends on what you want. If you want an array of 1000 default items, use resize(). If you want an array into which you expect to insert 1000 items and want to avoid a couple of allocations, use reserve(). EDIT: Blastfurnace's comment made me read the question again and realize that in your case the correct answer is: don't preallocate manually. Just keep inserting the elements at the end as you need them. The vector will automatically reallocate as needed and will do it more efficiently than the manual way mentioned. The only case where reserve() makes sense is when you have a reasonably precise estimate of the total size you'll need, easily available in advance. EDIT2: Ad question edit: If you have an initial estimate, then reserve() that estimate. If it turns out to be not enough, just let the vector do its thing.
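A small demo of the difference described above (a sketch; exact capacity values are implementation-defined, only the size behaviour is guaranteed):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> a, b;

    a.resize(1000);      // 1000 value-initialized elements now exist
    a[5] = 42;           // fine: element 5 exists
    std::cout << "resize:  size=" << a.size()
              << " capacity=" << a.capacity() << '\n';   // size=1000

    b.reserve(1000);     // memory only - size() is still 0
    // b[5] = 42;        // would be undefined behaviour: there is no element 5
    b.push_back(42);     // insert normally; no reallocation until capacity is exceeded
    std::cout << "reserve: size=" << b.size()
              << " capacity=" << b.capacity() << '\n';   // size=1, capacity >= 1000
}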
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/7397769', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/855844/']}
jdg_84587
stackexchange
llm_judgeable_groundtruth_similarity
7594975
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My device is Alcatel OneTouch 890 with Android 2.1. I configured in 51-android.rules in rules.d of /etc/udev folder. My operation system is Ubuntu. I enabled the usb debugging mode in my device. HTC and Samsung devices are detecting, but not my Alcatel, Here is my 51-android.rules file. #HTCSUBSYSTEM=="usb", SYSFS{idVendor}=="0bb4", MODE="0666"SUBSYSTEM=="usb_device", SYSFS{idVendor}=="0bb4", MODE="0666"#samsung SUBSYSTEM=="usb", SYSFS{idVendor}=="04e8", MODE="0666"SUBSYSTEM=="usb_device", SYSFS{idVendor}==”04e8”,ATTRS{idProduct}=="689e", MODE="0666",OWNER="abcd"#alcatelSUBSYSTEM=="usb", SYSFS{idVendor}=="1bbb", MODE="0666"SUBSYSTEM=="usb_device", SYSFS{idVendor}=="1bbb", MODE="0666"root@abcd:/home/abcd# lsusbBus 005 Device 002: ID 0461:4d15 Primax Electronics, Ltd Dell Optical MouseBus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubBus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubBus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubBus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubBus 001 Device 006: ID 1bbb:00f2 T & A Mobile Phones // what is T & A in this line....?Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubroot@abcd:/home/abcd# Android/android-sdk-linux_86/platform-tools/adb devicesList of devices attached Now provide the response and nothing else.
First, check the kind of quotes you are using in your udev rules. It looks like you might be using non-ASCII double quotes. See the difference between the quotes around usb and usb_device, etc.:

SUBSYSTEM=="usb", SYSFS{idVendor}=="1bbb", MODE="0666"
SUBSYSTEM==”usb_device”, SYSFS{idVendor}==”1bbb”, MODE="0666"

You can also just use sudo with ADB, as suggested by @JeremyD. Doing that makes this file essentially irrelevant.

Second, edit ~/.android/adb_usb.ini (or if you are running as root, check /root/.android). If the file already exists, add 0x1bbb to the end; otherwise do:

echo 0x1bbb >> adb_usb.ini
adb kill-server
adb start-server

Check the output of adb devices for your device. Hope this helps.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7594975', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/884054/']}
jdg_84588
stackexchange
llm_judgeable_groundtruth_similarity
12387478
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm struggling with mongoengine syntax. I have the following models... class Post(EmbeddedDocument): uid = StringField(required=True) text = StringField(required=True) when = DateTimeField(required=True)class Feed(Document): label = StringField(required=True) feed_url = StringField(required=True) posts = ListField(EmbeddedDocumentField(Post)) def my_method(self, post): pass ... and with the post object passed into to my_method, I'd like to update an existing post if it exists in self.posts with a matching uid, or push to self.posts if not. Is there syntax to do that in one call in mongoengine? Now provide the response and nothing else.
No - with a list field you cannot do an upsert into a list in a single query. $addToSet won't work as you've changed the post, so you can't match. You can code around this, but it does create a race condition where there is a small window of opportunity for error, e.g.:

class Post(EmbeddedDocument):
    uid = StringField(required=True)
    text = StringField(required=True)

class Feed(Document):
    label = StringField(required=True)
    feed_url = StringField(required=True)
    posts = ListField(EmbeddedDocumentField(Post))

Feed.drop_collection()

Feed(label="label", feed_url="www.feed.com").save()

post = Post(uid='1', text="hi")

updated = Feed.objects(posts__uid=post.uid).update_one(set__posts__S=post)
if not updated:
    Feed.objects.update_one(push__posts=post)

First we try to update, and if it doesn't exist we push to the list - this is where there is a window of opportunity for another process to run and potentially push the post onto the list. The risk might be acceptable but realistically, I think changing your schema is better, potentially splitting Post out into its own collection. Then you can use an update statement and set the whole object. The cost will be an extra query to get the feed data.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12387478', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_84589
stackexchange
llm_judgeable_groundtruth_similarity
68681
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: From CNN's video coverage of US Attorney General Merrick Garland announcement of its lawsuit against the state of Texas: Q: ...several GOP lawmakers who said that they will follow Texas’ lead, and I was wondering if you expect DOJ to be involved in similar actions against other states? Like would it be a leap to say this could be one of several similar actions? A: Well as I said in my remarks, the risk here, the greater risk here, the additional and further risk here is that other state will follow similar models, with respect to not only this constitutional right, but theoretically against any constitutional right, in any other state. So if another state uses the same kinds of provisions to deprive its citizens of their constitutional rights, and in particular to deprive their citizens the ability to seek immediate review , we will bring the same kind of lawsuit. Question: Why is avoidance of judicial review of the Texas abortion law and in the future to other US state laws so concerning to the US department of justice? What is the nature of this " greater, additional and further risk "? Further background: This September 1, 2021 US Supreme court document contains three dissenting opinions in the court's decision not to block the Texas law. Two of them call out an attempt by the law to delay, avoid, or insulate itself from judicial review. Justice Sotomayor: Because the Court’s failure to act rewards tactics designed to avoid judicial review and inflicts significant harm on the applicants and on women seeking abortions in Texas, I dissent. [...] Instead, the Court has rewarded the State’s effort to delay federal review of a plainly unconstitutional statute, enacted in disregard of the Court’s precedents, through procedural entanglements of the State’s own creation. Justice Kagan: The Court thus rewards Texas’s scheme to insulate its law from judicial review by deputizing private parties to carry out unconstitutional restrictions on the State’s behalf. Now provide the response and nothing else.
In WWH v. Jackson the Court majority concludes that the Texas law is de facto immune to the power of judicial review, for entirely procedural reasons. They argue that the Court's power of injunction can only be applied to persons because non-person things don't take actions of their own and so cannot be made to stop taking actions... because they don't ever take any to begin with. The Texas law in question specifically goes out of its way to make clear that at no point is anyone in the Texas government going to enforce the law or in any way act on the law, and as a consequence, they cannot be named defendants because they have taken no material act for which the court could supply relief. So if no one enforces the law, why do we care? Because it's not true that no one enforces the law, the common citizens of Texas are trying to de facto enforce the law via privileged civil complaints that are legally immune to penalties or consequences usually given out for frivolous lawsuits - basically they stand to win $10,000 and no meaningful financial risk to themselves. Since the Court has declared that there basically isn't a legitimate person against whom a complaint could be lodged, Texas' law becomes unstoppable. But because Roe v. Wade is still otherwise 'good law', there exists a constitutional right to access to abortion, a right which is being de facto infringed upon. The Department of Justice, along with a bunch of other people, are crying, "HAX!!!" And not without merit. Also of concern for DoJ: citizens enforcing the laws, without training or even meaningful checks against their power, instills and inspires vigilantism, which is dangerous to the public safety, which DoJ is charged to protect. If the law wasn't likely to spread beyond Texas, it would be harder for DoJ to make a move, but the declaration that a law can escape judicial review on procedural grounds (SCOTUS' own doctrine that holds "The Constitution is not a suicide pact" notwithstanding, apparently) essentially makes the entire foundation of law in the United States subject to the same treatment if only the authors of laws are clever enough. It's an obviously intolerable condition for the legal system. Fortunately, this will be back in front of SCOTUS' main docket pretty quickly, since if someone sues the Federal government... let's say the Department of Justice and/or Merrick Garland in his capacity as Attorney General? The court that hears that case is inevitably SCOTUS. Sooner or later, probably sooner, Roe is getting its day in court.
{}
{'log_upvote_score': 5, 'links': ['https://politics.stackexchange.com/questions/68681', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/16047/']}
jdg_84590
stackexchange
llm_judgeable_groundtruth_similarity
52596968
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This question asks how to build a SourceJar with Gradle. How can I do the same with the Gradle Kotlin DSL? The gradle code would be: task sourcesJar(type: Jar, dependsOn: classes) { classifier = 'sources' from sourceSets.main.allSource}task javadocJar(type: Jar, dependsOn: javadoc) { classifier = 'javadoc' from javadoc.destinationDir}artifacts { archives sourcesJar archives javadocJar} Now provide the response and nothing else.
The following will work:

val sourcesJar by creating(Jar::class) {
    dependsOn(JavaPlugin.CLASSES_TASK_NAME)
    classifier = "sources"
    from(sourceSets["main"].allSource)
}

val javadocJar by creating(Jar::class) {
    dependsOn(JavaPlugin.JAVADOC_TASK_NAME)
    classifier = "javadoc"
    from(tasks["javadoc"])
}

artifacts {
    add("archives", sourcesJar)
    add("archives", javadocJar)
}

A complete build.gradle.kts would look like this:

plugins {
    kotlin("jvm") version "1.2.71"
}

repositories {
    mavenCentral()
}

dependencies {}

tasks {
    val sourcesJar by creating(Jar::class) {
        dependsOn(JavaPlugin.CLASSES_TASK_NAME)
        classifier = "sources"
        from(sourceSets["main"].allSource)
    }

    val javadocJar by creating(Jar::class) {
        dependsOn(JavaPlugin.JAVADOC_TASK_NAME)
        classifier = "javadoc"
        from(tasks["javadoc"])
    }

    artifacts {
        add("archives", sourcesJar)
        add("archives", javadocJar)
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52596968', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1705337/']}
jdg_84591
stackexchange
llm_judgeable_groundtruth_similarity
229561
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I was describing to a friend the result that a compact Hausdorff space is determined up to homeomorphism up to by its ring of continuous functions, and he asked how one could see the genus of a surface algebraically. I think it's a good question, and I don't know the answer. Two parts to this question: Suppose it is known that $A$ is the ring of continuous functions of a closed surface $X$ (topological manifold, real 2-dimensional) of genus $g$. Where in the algebraic structure of $A$ is the genus of $X$? (There are comments from other users about $K$-theory below... I want to emphasize that I am dreaming of a simple algebraic characterization, analogous to the relationship between idempotents and connected components.) Does the answer to 1 say something interesting for more general classes of rings (ex, those that are rings of continuous functions on a compact Hausdorff space, or more general rings...) Any ideas? (Analogous to the idempotents / connected components dictionary.) Here are some thoughts of my own: Picking a smooth structure on $X$ amounts to passing to some subring of differentiable functions. In this setting, results about the existence of vector fields with zeros of prescribed index could be interpreted as the existence of derivations with some prescribed zeros (by choosing some arbitrary Riemannian metric and translating the results about vector fields) - though its not clear how to interpret the index of the zero algebraically, or how to describe these subrings. Consider all ring homomorphisms to $C(X) \to C(S^1)$ up the homotopy relation induced by factoring through $C(S^1 \times I)$ in the right way. This crudely recaptures the fundamental group, but I don't see how to turn it into an algebraic condition. Vector bundles are the same as finite projective modules via Swan's theorem - so certain sections of the vector bundle are elements of the corresponding module? Maybe one could get at the Poincare-Hopf computation of the genus that way, but again I don't see how to compute the index. (Also as far as I know, one still needs a smooth structure to use Poincare-Hopf.) I would be happy to just be able to distinguish $S^2$ and $T^2$ via their ring of continuous functions. Now provide the response and nothing else.
I'll assume we're talking about complex functions; if real, tensor with $\mathbb{C}$. Now pass to the group of units. With the topology given by spectral radius (this is an algebraic description of the C-* topology), the group of connected components of the group of units is $H^1(X, \mathbb{Z})$ which of course knows the genus. If you really like idempotents then you should try learning about K-theory. There's a very concrete and elementary way to define $K^0$ in terms of idempotents in the matrix algebras $M_n(A)$. You don't need the C*-structure of $A$. But on the other hand $K^0$ doesn't know the genus of a surface. $K^1$ does but it's a bit trickier to define, I think. Edit: $H^1(X, \mathbb{Z})$ can be isolated a bit more algebraically too. To extract $H^1(X, \mathbb{Z})$ from the group of units it suffices to isolate the connected component of the identity. The elements in this connected component can be distinguished by the fact that they are divisible. Edit #2: So, purely for the sake of distinguishing $S^2$ and $T^2$, you can say the following: on $S^2$, every unit has a square root, but not on $T^2$.
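To make the last step quantitative (standard facts, added here for reference): for a closed orientable surface $X_g$ of genus $g$ one has $H^1(X_g,\mathbb{Z})\cong\mathbb{Z}^{2g}$, so the group of connected components of the unit group of $C(X_g,\mathbb{C})$ is free abelian of rank $2g$. In particular it is trivial for $S^2$ ($g=0$) and isomorphic to $\mathbb{Z}^2$ for $T^2$ ($g=1$), which already distinguishes the two.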
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/229561', 'https://mathoverflow.net', 'https://mathoverflow.net/users/41873/']}
jdg_84592
stackexchange
llm_judgeable_groundtruth_similarity
206473
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to compile an Eclipse-based Java project from the command line? I'm trying to automate my build (using FinalBuilder not ant), and I'm neither a Java nor Eclipse expert. I can probably figure out how to do this with straight java command line options, but then the Eclipse project feels like a lot of wasted effort. In the event that there is no way to compile an Eclipse project via the command line, is there a way to generate the required java command line from within Eclipse? Or are there some files I can poke around to find the compile steps it is doing behind the scenes? Guys, I'm looking for an answer that does NOT include ant. Let me re-iterate the original question ....... Is there a way to build an Eclipse project from the command line? I don't think this is an unreasonable question given that I can do something like this for visual studio: devenv.exe /build "Debug|Any CPU" "C:\Projects\MyProject\source\MyProject.sln" Now provide the response and nothing else.
You can build an eclipse project via a workspace from the command line: eclipsec.exe -noSplash -data "D:\Source\MyProject\workspace" -application org.eclipse.jdt.apt.core.aptBuild It uses the jdt apt plugin to build your workspace automatically. This is also known as a 'Headless Build'. Damn hard to figure out. If you're not using a win32 exe, you can try this: java -cp startup.jar -noSplash -data "D:\Source\MyProject\workspace" -application org.eclipse.jdt.apt.core.aptBuild Update Several years ago eclipse replaced startup.jar with the "equinox launcher" https://wiki.eclipse.org/Equinox_Launcher On Eclipse Mars (MacOX): java -jar /Applications/Eclipse.app/Contents/Eclipse/plugins/org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar -noSplash -data "workspace" -application org.eclipse.jdt.apt.core.aptBuild The -data parameter specifies the location of your workspace. The version number for the equinox launcher will depend on what version of eclipse you have.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/206473', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5208/']}
jdg_84593
stackexchange
llm_judgeable_groundtruth_similarity
26475765
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm exploring "advanced" uses of OCaml functions and I'm wondering how I can write a function with variable number of arguments. For example, a function like: let sum x1,x2,x3,.....,xn = x1+x2,+x3....+xn Now provide the response and nothing else.
With a bit of type hackery, sure:

let sum f = f 0
let arg x acc g = g (acc + x)
let z a = a

And the (ab)usage:

# sum z;;
- : int = 0
# sum (arg 1) z;;
- : int = 1
# sum (arg 1) (arg 2) (arg 3) z;;
- : int = 6

Neat, huh? But don't use this - it's a hack. For an explanation, see this page (in terms of SML, but the idea is the same).
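The same trick transfers to other folds; here is a sketch for a product, with the types OCaml infers noted in comments (the prod/mul names are made up for illustration):

let prod f = f 1                  (* prod : (int -> 'a) -> 'a *)
let mul x acc g = g (acc * x)     (* mul  : int -> int -> (int -> 'a) -> 'a *)
let z a = a                       (* z    : 'a -> 'a *)

let () =
  assert (prod z = 1);
  assert (prod (mul 2) (mul 3) z = 6)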
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26475765', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/520670/']}
jdg_84594
stackexchange
llm_judgeable_groundtruth_similarity
79002
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Jitter as wiki explains undesired deviation in the periodicity of the clock and Phase Noise is random fluctuation in the phase of waveform caused by jitter. Exploring deeper into these terms when I look at measuring units i.e. femtosec or nanosec for jitter is easy to understand about the deviation in the periodicity. But when I look into measuring unit of phase noise it is dBc/Hz. Please explain the insight understanding that lies in dBc/Hz and how is co-related with the jitter. Now provide the response and nothing else.
Jitter can be thought of as phase/frequency modulation of the signal. In the frequency domain, modulation creates sidebands around the carrier. Thus, if you observe a jittered signal on a spectrum analyzer, you'll observe the carrier's frequency and its upper and lower sideband components of the jitter. It's important to remember that jitter occurs at different rates (frequencies). You can think of this as "how slowly or rapidly is the ideal edge position being moved from its ideal location in time". Different jitter 'frequencies' will result in frequency-domain sidebands at different offset frequencies (offset from the carrier frequency). Jitter is often spec'd in the time domain. Phase noise is spec'd as the 'spot' magnitude of the sideband at a specific offset frequency. For example, you might see a spec like -90 dBc/Hz at 10 kHz offset. This says that the sideband level is 90 dB down from the carrier magnitude when measured 10 kHz away from the carrier frequency, normalized to a 1 Hz measurement BW. So, you can't directly compare a dBc/Hz level with a jitter spec. The only way to relate them is to consider the complete phase noise characteristic in the frequency domain, and integrate the total amount of power in the phase noise sidebands. It is then possible to relate this total integrated phase noise power to the jitter value.
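For reference, the usual back-of-the-envelope conversion behind that last statement (a sketch of the standard formula, with $\mathcal{L}(f)$ the single-sideband phase noise in dBc/Hz, $[f_1,f_2]$ the integration band and $f_c$ the carrier frequency):

$$A=\int_{f_1}^{f_2}10^{\mathcal{L}(f)/10}\,df\quad\text{(integrated sideband power, dimensionless)},$$
$$\sigma_{\phi}=\sqrt{2A}\ \text{rad RMS},\qquad \sigma_t=\frac{\sigma_{\phi}}{2\pi f_c}\ \text{seconds RMS}.$$

The choice of $[f_1,f_2]$ has to match whatever band the jitter spec integrates over, which is exactly why a single spot dBc/Hz number cannot be converted to a jitter number on its own.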
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/79002', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/25017/']}
jdg_84595
stackexchange
llm_judgeable_groundtruth_similarity
1947995
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: Deleting Objects in JavaScript I have a JS object having a large number of properties. If I want to force the browser to garbage collect this object, do I need to set each of these properties as null or do I need to use the delete operator? What's the difference between the two? Now provide the response and nothing else.
There is no way to force garbage collection in JavaScript, and you don't really need to. x.y = null; and delete x.y; both eliminate x's reference to the former value of y. The value will be garbage collected when necessary. If you null out a property, it is still considered 'set' on the object and will be enumerated. The only time I can think of where you would prefer delete is if you were going to enumerate over the properties of x. Consider the following:

var foo = { 'a': 1, 'b': 2, 'c': 3 };

console.log('Deleted a.');
delete foo.a;
for (var key in foo)
    console.log(key + ': ' + foo[key]);

console.log('Nulled out b.');
foo['b'] = null;
for (var key in foo)
    console.log(key + ': ' + foo[key]);

This code will produce the following output:

Deleted a.
b: 2
c: 3
Nulled out b.
b: null
c: 3
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1947995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/131050/']}
jdg_84596
stackexchange
llm_judgeable_groundtruth_similarity
34520013
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to be able to use the base pointer register ( %rbp ) within inline asm. A toy example of this is like so: void Foo(int &x){ asm volatile ("pushq %%rbp;" // 'prologue' "movq %%rsp, %%rbp;" // 'prologue' "subq $12, %%rsp;" // make room "movl $5, -12(%%rbp);" // some asm instruction "movq %%rbp, %%rsp;" // 'epilogue' "popq %%rbp;" // 'epilogue' : : : ); x = 5;}int main() { int x; Foo(x); return 0;} I hoped that, since I am using the usual prologue/epilogue function-calling method of pushing and popping the old %rbp , this would be ok. However, it seg faults when I try to access x after the inline asm. The GCC-generated assembly code (slightly stripped-down) is: _Foo: pushq %rbp movq %rsp, %rbp movq %rdi, -8(%rbp) # INLINEASM pushq %rbp; // prologue movq %rsp, %rbp; // prologue subq $12, %rsp; // make room movl $5, -12(%rbp); // some asm instruction movq %rbp, %rsp; // epilogue popq %rbp; // epilogue # /INLINEASM movq -8(%rbp), %rax movl $5, (%rax) // x=5; popq %rbp retmain: pushq %rbp movq %rsp, %rbp subq $16, %rsp leaq -4(%rbp), %rax movq %rax, %rdi call _Foo movl $0, %eax leave ret Can anyone tell me why this seg faults? It seems that I somehow corrupt %rbp but I don't see how. Thanks in advance. I'm running GCC 4.8.4 on 64-bit Ubuntu 14.04. Now provide the response and nothing else.
See the bottom of this answer for a collection of links to other inline-asm Q&As. Your code is broken because you step on the red-zone below RSP (with push ) where GCC was keeping a value. What are you hoping to learn to accomplish with inline asm? If you want to learn inline asm, learn to use it to make efficient code, rather than horrible stuff like this. If you want to write function prologues and push/pop to save/restore registers, you should write whole functions in asm . (Then you can easily use nasm or yasm, rather than the less-preferred-by-most AT&T syntax with GNU assembler directives 1 .) GNU inline asm is hard to use, but allows you to mix custom asm fragments into C and C++ while letting the compiler handle register allocation and any saving/restoring if necessary. Sometimes the compiler will be able to avoid the save and restore by giving you a register that's allowed to be clobbered. Without volatile , it can even hoist asm statements out of loops when the input would be the same. (i.e. unless you use volatile , the outputs are assumed to be a "pure" function of the inputs.) If you're just trying to learn asm in the first place, GNU inline asm is a terrible choice. You have to fully understand almost everything that's going on with the asm, and understand what the compiler needs to know, to write correct input/output constraints and get everything right. Mistakes will lead to clobbering things and hard-to-debug breakage. The function-call ABI is a much simpler and easier to keep track of boundary between your code and the compiler's code. Why this breaks You compiled with -O0 , so gcc's code spills the function parameter from %rdi to a location on the stack. (This could happen in a non-trivial function even with -O3 ). Since the target ABI is the x86-64 SysV ABI , it uses the "Red Zone" (128 bytes below %rsp that even asynchronous signal handlers aren't allowed to clobber), instead of wasting an instruction decrementing the stack pointer to reserve space. It stores the 8B pointer function arg at -8(rsp_at_function_entry) . Then your inline asm pushes %rbp , which decrements %rsp by 8 and then writes there, clobbering the low 32b of &x (the pointer). When your inline asm is done, gcc reloads -8(%rbp) (which has been overwritten with %rbp ) and uses it as the address for a 4B store. Foo returns to main with %rbp = (upper32)|5 (orig value with the low 32 set to 5 ). main runs leave : %rsp = (upper32)|5 main runs ret with %rsp = (upper32)|5 , reading the return address from virtual address (void*)(upper32|5) , which from your comment is 0x7fff0000000d . I didn't check with a debugger; one of those steps might be slightly off, but the problem is definitely that you clobber the red zone , leading to gcc's code trashing the stack. Even adding a "memory" clobber doesn't get gcc to avoid using the red zone, so it looks like allocating your own stack memory from inline asm is just a bad idea. (A memory clobber means you might have written some memory you're allowed to write to, e.g. a global variable or something pointed-to by a global, not that you might have overwritten something you're not supposed to.) If you want to use scratch space from inline asm, you should probably declare an array as a local variable and use it as an output-only operand (which you never read from). 
AFAIK, there's no syntax for declaring that you modify the red-zone, so your only options are: use an "=m" output operand (possibly an array) for scratch space; the compiler will probably fill in that operand with an addressing mode relative to RBP or RSP. You can index into it with constants like 4 + %[tmp] or whatever. You might get an assembler warning from 4 + (%rsp) but not an error. skip over the red-zone with add $-128, %rsp / sub $-128, %rsp around your code. (Necessary if you want to use an unknown amount of extra stack space, e.g. push in a loop, or making a function call. Yet another reason to deref a function pointer in pure C, not inline asm.) compile with -mno-red-zone (I don't think you can enable that on a per-function basis, only per-file) Don't use scratch space in the first place. Tell the compiler what registers you clobber and let it save them. Here's what you should have done : void Bar(int &x){ int tmp; long tmplong; asm ("lea -16 + %[mem1], %%rbp\n\t" "imul $10, %%rbp, %q[reg1]\n\t" // q modifier: 64bit name. "add %k[reg1], %k[reg1]\n\t" // k modifier: 32bit name "movl $5, %[mem1]\n\t" // some asm instruction writing to mem : [mem1] "=m" (tmp), [reg1] "=r" (tmplong) // tmp vars -> tmp regs / mem for use inside asm : : "%rbp" // tell compiler it needs to save/restore %rbp. // gcc refuses to let you clobber %rbp with -fno-omit-frame-pointer (the default at -O0) // clang lets you, but memory operands still use an offset from %rbp, which will crash! // gcc memory operands still reference %rsp, so don't modify it. Declaring a clobber on %rsp does nothing ); x = 5;} Note the push/pop of %rbp in the code outside the #APP / #NO_APP section, emitted by gcc. Also note that the scratch memory it gives you is in the red zone. If you compile with -O0 , you'll see that it's at a different position from where it spills &x . To get more scratch regs, it's better to just declare more output operands that are never used by the surrounding non-asm code. That leaves register allocation to the compiler, so it can be different when inlined into different places. Choosing ahead of time and declaring a clobber only makes sense if you need to use a specific register (e.g. shift count in %cl ). Of course, an input constraint like "c" (count) gets gcc to put the count in rcx/ecx/cx/cl, so you don't emit a potentially redundant mov %[count], %%ecx . If this looks too complicated, don't use inline asm . Either lead the compiler to the asm you want with C that's like the optimal asm, or write a whole function in asm. When using inline asm, keep it as small as possible: ideally just the one or two instructions that gcc isn't emitting on its own, with input/output constraints to tell it how to get data into / out of the asm statement. This is what it's designed for. Rule of thumb: if your GNU C inline asm start or ends with a mov , you're usually doing it wrong and should have used a constraint instead. Footnotes : You can use GAS's intel-syntax in inline-asm by building with -masm=intel (in which case your code will only work with that option), or using dialect alternatives so it works with the compiler in Intel or AT&T asm output syntax. But that doesn't change the directives, and GAS's Intel-syntax is not well documented. (It's like MASM, not NASM, though.) I don't really recommend it unless you really hate AT&T syntax. Inline asm links: x86 wiki. (The tag wiki also links to this question, for this collection of links) The inline-assembly tag wiki The manual . Read this. 
Note that inline asm was designed to wrap single instructions that the compiler doesn't normally emit. That's why it's worded to say things like "the instruction", not "the block of code". A tutorial Looping over arrays with inline assembly Using r constraints for pointers/indices and using your choice of addressing mode, vs. using m constraints to let gcc choose between incrementing pointers vs. indexing arrays. How can I indicate that the memory *pointed* to by an inline ASM argument may be used? (pointer inputs in registers do not imply that the pointed-to memory is read and/or written, so it might not be in sync if you don't tell the compiler). In GNU C inline asm, what're the modifiers for xmm/ymm/zmm for a single operand? . Using %q0 to get %rax vs. %w0 to get %ax . Using %g[scalar] to get %zmm0 instead of %xmm0 . Efficient 128-bit addition using carry flag Stephen Canon's answer explains a case where an early-clobber declaration is needed on a read+write operand . Also note that x86/x86-64 inline asm doesn't need to declare a "cc" clobber (the condition codes, aka flags); it's implicit. (gcc6 introduces syntax for using flag conditions as input/output operands . Before that you have to setcc a register that gcc will emit code to test , which is obviously worse.) Questions about the performance of different implementations of strlen : my answer on a question with some badly-used inline asm, with an answer similar to this one. llvm reports: unsupported inline asm: input with type 'void *' matching output with type 'int' : Using offsetable memory operands (in x86, all effective addresses are offsettable: you can always add a displacement). When not to use inline asm , with an example of 32b/32b => 32b division and remainder that the compiler can already do with a single div . (The code in the question is an example of how not to use inline asm: many instructions for setup and save/restore that should be left to the compiler by writing proper in/out constraints.) MSVC inline asm vs. GNU C inline asm for wrapping a single instruction , with a correct example of inline asm for 64b/32b=>32bit division . MSVC's design and syntax require a round trip through memory for inputs and outputs, making it terrible for short functions. It's also "never very reliable" according to Ross Ridge's comment on that answer. Using x87 floating point, and commutative operands . Not a great example, because I didn't find a way to get gcc to emit ideal code. Some of those re-iterate some of the same stuff I explained here. I didn't re-read them to try to avoid redundancy, sorry.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34520013', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5280139/']}
jdg_84597
stackexchange
llm_judgeable_groundtruth_similarity
57628055
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on an application, that displays numbers according to the user's configuration. Everything works as expected, except when I try with numbers less than 10000,in Chrome, with the following locale: "es-AR". Any ideas? Chrome: Firefox: Edge: console.log( (10000).toLocaleString("es-AR") );console.log( (9999).toLocaleString("es-AR") );console.log( (9999).toLocaleString("en-US") ); Now provide the response and nothing else.
This is the intended behaviour for Spanish locales according to the latest CLDR data. Specifically, the "Minimum Grouping Digits" value is set to 2 according to the CLDR survey tool. This means that the grouping separator is only used when there would be 2 or more digits before it. Apparently Firefox and Edge either use an older version of that data or don't yet support that field. The feature was introduced in CLDR 26, which was released in 2014, but enhancements like this take quite a while to make their way through the software stack.
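If you need the separator to appear consistently across engines even below the locale's minimum-grouping threshold, one possible workaround (a sketch, not part of the original answer) is the "always" value for useGrouping added by the Intl.NumberFormat v3 proposal. Treat this as an assumption to verify against your target browsers — older engines only accept a boolean here:

// TypeScript sketch — assumes an engine (and TS lib typings) implementing Intl.NumberFormat v3.
const value = 9999;

// Default es-AR behaviour: no separator below 10000, because minimumGroupingDigits = 2.
console.log(value.toLocaleString("es-AR"));    // "9999" in Chrome (CLDR >= 26)
console.log((10000).toLocaleString("es-AR"));  // "10.000"

// "always" forces the separator even when only one digit would precede it.
// With pre-ES2023 lib typings you may need a cast on the options object.
const forced = new Intl.NumberFormat("es-AR", { useGrouping: "always" });
console.log(forced.format(value));             // expected "9.999"

On engines that predate that proposal the option value may be ignored or rejected, so feature-detect (or fall back to a manual string replace) before relying on it.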
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57628055', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4551619/']}
jdg_84598
stackexchange
llm_judgeable_groundtruth_similarity
49374887
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a data frame of participant questionnaire responses in wide format, with each column representing a particular question/item. The data frame looks something like this: id <- c(1, 2, 3, 4)Q1 <- c(NA, NA, NA, NA)Q2 <- c(1, "", 4, 5)Q3 <- c(NA, 2, 3, 4)Q4 <- c("", "", 2, 2)Q5 <- c("", "", "", "")df <- data.frame(id, Q1, Q2, Q3, Q4, Q5) I want R to remove columns that has all values in each of its rows that are either (1) NA or (2) blanks. Therefore, I do not want column Q1 (which comprises entirely of NAs) and column Q5 (which comprises entirely of blanks in the form of ""). According to this thread , I am able to use the following to remove columns that comprise entirely of NAs: df[, !apply(is.na(df), 2, all] However, that solution does not address blanks (""). As I am doing all of this in a dplyr pipe, could someone also explain how I could incorporate the above code into a dplyr pipe? At this moment, my dplyr pipe looks like the following: df <- df %>% select(relevant columns that I need) After which, I'm stuck here and am using the brackets [] to subset the non-NA columns. Thanks! Much appreciated. Now provide the response and nothing else.
We can use a version of select_if

library(dplyr)
df %>% select_if(function(x) !(all(is.na(x)) | all(x=="")))
#  id Q2 Q3 Q4
#1  1  1 NA
#2  2     2
#3  3  4  3  2
#4  4  5  4  2

Or without using an anonymous function call

df %>% select_if(~!(all(is.na(.)) | all(. == "")))

You can also modify your apply statement as

df[!apply(df, 2, function(x) all(is.na(x)) | all(x==""))]

Or using colSums

df[colSums(is.na(df) | df == "") != nrow(df)]

and inverse

df[colSums(!(is.na(df) | df == "")) > 0]
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/49374887', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9401203/']}
jdg_84599
stackexchange
llm_judgeable_groundtruth_similarity
61863224
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using provider for state management. I am in a situation where there are multiple types of fields in my form. The problem is with text-fieldWhenever I change Text it is behaving weirdly like the text entered is displayed in reverse order. class MyProvider with ChangeNotifier{ String _name; String get name => _name; setname(String name) { _name = name; notifyListeners(); }}class MyWidget extends StatelessWidget { @override Widget build(BuildContext context) { final MyProvider myProvider = Provider.of<MyProvider>(context); final TextEditingController _nameController = TextEditingController( text: myProvider.name, ); return TextField( controller: _nameController, onChanged: myProvider.setname, );} Now provide the response and nothing else.
It happens because new instance of TextEditingController is created on every widget build, and information about current cursor position ( TextEditingValue ) is getting lost. Create a controller once in initState method and dispose of it in dispose method. class MyWidget extends StatefulWidget { @override _MyWidgetState createState() => _MyWidgetState();}class _MyWidgetState extends State<MyWidget> { TextEditingController _nameController; @override void initState() { final MyProvider myProvider = Provider.of<MyProvider>(context, listen: false); super.initState(); _nameController = TextEditingController(text: myProvider.name); } @override void dispose() { _nameController.dispose(); super.dispose(); } @override Widget build(BuildContext context) { final MyProvider myProvider = Provider.of<MyProvider>(context); return TextField( controller: _nameController, onChanged: myProvider.setname, ); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61863224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12010505/']}
jdg_84600
stackexchange
llm_judgeable_groundtruth_similarity
284401
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $G$ be a connected simple graph with two distinct edges $e,f \in E(G)$. Choose a random spanning tree $T\subseteq G$, my question is whether there are any known upper bound for the following \begin{equation} \frac{\mathbb{P}(e\in E(T))\mathbb{P}(f\in E(T))}{\mathbb{P}(e,f\in E(T)) } \end{equation}for all possible choices of $e$ and $f$? Now provide the response and nothing else.
There is no constant upper bound, as shown by the following example. Take two vertices $v, u$ and connect them with $n \geq 2$ edge-disjoint paths of two edges. This graph has $n 2^{n - 1}$ spanning trees. The number of spanning trees containing any fixed edge $e$ is $(n + 1)2^{n - 2}$. However, for any $v-u$ path $e, f$ the number of spanning trees containing both $e$ and $f$ is only $2^{n - 1}$. The ratio is thus $\frac{(n + 1)^2}{4n} = \Omega(n)$. It may still be interesting to determine the largest possible ratio among graphs on $n$ vertices as a function of $n$.
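To make the counts above explicit, here is a short verification sketch of the counting argument (each spanning tree keeps exactly one of the $n$ paths whole, and every other middle vertex keeps exactly one of its two edges; $e$ and $f$ denote the two edges of one fixed $v$–$u$ path, as above):

$$\begin{aligned}
\#\{\text{spanning trees}\} &= n\cdot 2^{n-1},\\
\#\{T:\ e\in T\} &= 2^{n-1}+(n-1)\,2^{n-2}=(n+1)\,2^{n-2} &&\text{(either $e$'s path is the whole one, or another path is),}\\
\#\{T:\ e,f\in T\} &= 2^{n-1} &&\text{(the path containing $e$ and $f$ must be the one kept whole),}
\end{aligned}$$

so that

$$\frac{\mathbb{P}(e\in T)\,\mathbb{P}(f\in T)}{\mathbb{P}(e,f\in T)}=\frac{\bigl((n+1)2^{n-2}\bigr)^2}{n\,2^{n-1}\cdot 2^{n-1}}=\frac{(n+1)^2}{4n}=\Omega(n).$$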
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/284401', 'https://mathoverflow.net', 'https://mathoverflow.net/users/116446/']}
jdg_84601
stackexchange
llm_judgeable_groundtruth_similarity
14562587
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an x64 application which uses Microsoft.ACE.OLEDB.12.0 provider to read an MS Access Database on Windows 7 (x64) and it works fine (Office 2010 installed). The same application running under Windows 8 with Office 2013 installed, fails with a message saying that the provider cannot open previous versions. Under Windows 8 with Office 2013 installed, there is also Microsoft.ACE.OLEDB.15.0, but I have double checked my code and certain that it is specifying the 12.0 in the connection string. Are there different "versions" of this provider -or is there something I've missed? I have recompiled a test program under Windows 8 for using x86 platform, and changed the provider to Microsoft.Jet.OLEDB.4.0 and it all works again. Unfortunately, the module is part of a bigger ecosystem that needs to be able to be run under x64 (specifically an plug in to excel), and the access database is downloaded by the suite from a supplier's web site by the application, so it is not practical to convert the database or change everything to x86. Thanks in advance. Now provide the response and nothing else.
It occurred to me to check the versions of files being used for each provider, and hunting through the registry, found that under a clean Windows 8 install, with Office 2013 installed, both providers point to the same assembly. (C:\Program Files\Common Files\microsoft shared\OFFICE15\ACEOLEDB.DLL) I then downloaded and ran the Microsoft Access Database Engine Redistributable ( http://www.microsoft.com/en-us/download/details.aspx?id=13255 ) which then installed the proper version of the 12.0 provider into C:\Program Files\Common Files\microsoft shared\OFFICE14\ACEOLEDB.DLL and the application is behaving itself as expected. It would seem strange that MS have included a provider branded as 12.0 that doesn't behave like 12.0 any more. Anyway - hope that helps someone from 4-5 hours of hair pulling... Craig
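For reference, the connection string that pins the 12.0 provider is the usual ACE string (a generic sketch — adjust the path; whether it resolves to the OFFICE14 or OFFICE15 DLL depends on which engine is registered, as described above):

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\path\to\database.accdb;Persist Security Info=False;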
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14562587', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/788527/']}
jdg_84602
stackexchange
llm_judgeable_groundtruth_similarity
29600549
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am learning Java, am a beginner... Please help me out finding why this is not working... In the following program, my aim is to call base class method from main method, without using super keyword in derived class method. As shown in the code, ((A)b).num in main method works perfectly fine, and ouputs, 100 as expected, but ((A)b).disp() outputs the contents in B's method, instead of A's method. class A{ int num=100; public void disp() { System.out.println("Disp() of A:"); System.out.println(num); }}class B extends A{ int num=200; public void disp() { System.out.println("Disp() of B:"); super.disp(); //100 System.out.println( num ); //200 }}class ques{ public static void main(String a[]) { B b=new B(); b.disp(); System.out.println(); ((A)b).disp(); //doesn't work System.out.println(); System.out.println(((A)b).num); //works }} Output is: Disp() of B:Disp() of A:100200Disp() of B:Disp() of A:100200100 BUT my expected output was: Disp() of B:Disp() of A:100200Disp() of A:100100 Can anyone please help me find the reason for this output. Why ((A)b).num is working fine and ((A)b).disp() not working as expected... Moreover this doesn't give compile error.... !! Thanks in advance.. :) Now provide the response and nothing else.
Update 2019: This answer is outdated, please look at the update added directly into the question. basarat's answer is a good idea, but it doesn't work with interfaces. You can't write methodX(interfacePropertyToString(()=>interfaceX.propertyName), objectX) because interfaceX is not an object. Interfaces are abstractions and they are used only by TypeScript; they don't exist in JavaScript. But thanks to his answer I found the solution: using a parameter in the method. Finally we have: interfacePropertyToString = ( property: (object: any) => void ) => { var chaine = property.toString(); var arr = chaine.match( /[\s\S]*{[\s\S]*\.([^\.; ]*)[ ;\n]*}/ ); return arr[1]; }; We have to use [\s\S] to be able to match across multiple lines, because TypeScript converts (object: Interface) => {object.code;} to a multiline function. Now you can use it as you want: interfacePropertyToString(( o: Interface ) => { o.interfaceProperty}); interfacePropertyToString( function ( o: Interface ) { o.interfaceProperty});
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29600549', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3510249/']}
jdg_84603
stackexchange
llm_judgeable_groundtruth_similarity
6270832
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i want to implement a google map based on its api. i want to add a path based on coordinates to it. therefore i get my coordinates from my model and want to iterate over the object to fille the map with this points. in my jade template i include the api js code like this: script(type='text/javascript') function initialize() { var myLatLng = new google.maps.LatLng(0, -180); var myOptions = { zoom: 3, center: myLatLng, mapTypeId: google.maps.MapTypeId.TERRAIN }; var map = new google.maps.Map(document.getElementById("map_canvas"), myOptions); var flightPlanCoordinates = [ - if (typeof(pins) != null) - var.pins.forEach(function(pin) { new google.maps.LatLng(pin.latitude, pin.longitude), - }) new google.maps.LatLng(0,0) ]; var flightPath = new google.maps.Polyline({ path: flightPlanCoordinates, strokeColor: "#FF0000", strokeOpacity: 1.0, strokeWeight: 2 }); flightPath.setMap(map); }div#map_canvas(style='height: 500px; background-color: #990000;') the problem is: jade renders this snippet var flightPlanCoordinates = [ - if (typeof(pins) != null) - var.pins.forEach(function(pin) { new google.maps.LatLng(pin.latitude, pin.longitude), - }) new google.maps.LatLng(0,0) ]; as it is in the jade template source... the - if etc. doesn't get parsed! any ideas? thanks! Now provide the response and nothing else.
The entire script tag (everything indented under it) is going to be passed through raw without further parsing. Jade does HTML templating, not HTML templating plus nested JavaScript templating. To pass your pins variable from Jade local template variable data to script source code, you'll have to take some other approach, like using raw Jade to render a tiny script tag that just calls your initialize function with the pins data as a literal, or stick your pins data into the DOM somewhere and then read it from there. Something along these lines below your script tag (pseudocode, haven't tested; note the typeof check compares against the string 'undefined', since typeof never returns null):

- if (typeof pins !== 'undefined')
  != "<script type='text/javascript'>"
  != "var pins = [];"
  each pin in pins
    != "pins.push(new Pin(" + pin.latitude + ", " + pin.longitude + "));"
  != "initialize(pins);"
  != "</script>"
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6270832', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/193643/']}
jdg_84604
stackexchange
llm_judgeable_groundtruth_similarity
9824806
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm running a test, and found that the file doesn't actually get written until I control-C to abort the program. Can anyone explain why that would happen? I expected it to write at the same time, so I could read the file in the middle of the process.

import os
from time import sleep

f = open("log.txt", "a+")
i = 0
while True:
    f.write(str(i))
    f.write("\n")
    i += 1
    sleep(0.1)

Now provide the response and nothing else.
Writing to disk is slow, so many programs store up writes into large chunks which they write all-at-once. This is called buffering, and Python does it automatically when you open a file. When you write to the file, you're actually writing to a "buffer" in memory. When it fills up, Python will automatically write it to disk. You can tell it "write everything in the buffer to disk now" with

f.flush()

This isn't quite the whole story, because the operating system will probably buffer writes as well. You can tell it to write the buffer of the file with

os.fsync(f.fileno())

Finally, you can tell Python not to buffer a particular file with open(f, "w", 0) or only to keep a 1-line buffer with open(f, "w", 1). Naturally, this will slow down all operations on that file, because writes are slow.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9824806', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1286250/']}
jdg_84605
stackexchange
llm_judgeable_groundtruth_similarity
10357445
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to pass variables via ajax to a PHP script that will run a MySQL query. However, I keep getting the error 404 Not Found with: "http://myurl/database/%5Bobject%20Object%5D". It looks like it's trying to send the data to http://myurl/database/%5Bobject%20Object%5D instead of the PHP script I've defined. Really at a loss on this one... I thought putting the absolute URL in would work. Below is my code. Would be eternally grateful for any help.... <script type="text/javascript">function insertData() { var dataid= $('#dataid').attr('value'); var industry = $('#industry').attr('value'); var geolocation = $('#geolocation').attr('value'); $.post({ type: "POST", url: "http://myURL/database/InsertData.php", data: "dataid="+ dataid+"&amp;industry="+ industry +"&amp;geolocation="+ geolocation, });return false;};</script> Now provide the response and nothing else.
As andi said, $.post expects an URL string as a first parameter; if you want to pass an option object, you must use $.ajax . When you try to call $.post(<option object>) , the option object is used as an URL; in that process, it is cast to a string (and an object cast to a string becomes [object <type>] in Javascript; in this case, [object Object] ), it gets URL-encoded ( %5Bobject%20Object%5D ), and, since it does not start with a / or a protocol name, it is interpreted as a relative URL and gets prefixed with the current protocol, domain and path. Then, since there are no more parameters, an empty POST AJAX request is sent to that URL. Another problem is that you use &amp; to separate the parameters; that should be done only in HTML, here you should just use & . (Or a data object, as Evan said, that way you don't need do think about encoding issues.) And .val() and .attr('value') are not the same; the first is the current value of the field, the second is whatever value it had when the page was loaded. The simplest way to this correctly is this: function insertData() { var data = $('#dataid,#industry,#geolocation').serialize(); $.post('http://myURL/database/InsertData.php', data); return false;} This assumes that the three fields have the same name and id as $.serialize uses name for the parameter name.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10357445', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/908121/']}
jdg_84606
stackexchange
llm_judgeable_groundtruth_similarity
4635139
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $ABCDEFG$ is a regular heptagon. $H = AC \cap BD, K = AD \cap BF$ then show that $CDEFKH$ lie on an allipse. Five points always define a conic, so I figured that is not hard to see that the conic through $FEDCH$ must be an ellipse due the configuration of the points. I don't have a clue about how to solve it, looks like some projective result which I'm not familiar with. I thought about the reverse of Pascal's theorem (Braikenridge-Maclaurin) but it was to no avail. May be related to Poncelet's theorem. Now provide the response and nothing else.
Here is a proof using the converse of Pascal's theorem: Let $T$ be the midpoint of $AB$. Then the line $ET$ is a symmetry axis of the heptagon, so the lines $ET$, $AD$, $BF$ intersect in $K$. Then in $\Delta KAB$ the angle bisectors (the one at $K$, and the lines $AHC$, $BG$) intersect in a point, denoted by $S$. Consider now the hexagon $$EKFCHD\ .$$ As also shown in the picture, the three relevant intersections $EK\cap CH=S$, $KF\cap HD=B$, and the infinity point of the direction $DE\|FC$ are collinear. (The line $SB$ also passes through this infinity point, since it has the same direction $BSG\|CF\|DE$.) The converse of Pascal's theorem now ensures that the mentioned hexagon has its vertices on a conic. The conic is an ellipse $\mathcal E$, with one symmetry axis $s$ the common perpendicular bisector of $DE$ and $CF$, which contains $A$. It also passes through the reflections w.r.t. the axis $s$: $K'=AE\cap CG$ of $K=AD\cap BF$, and $H'=AF\cap GE$ of $H=AC\cap BD$. $\square$ We are done, but let us... Note: The tangent to $\mathcal E$ at the point $C$ is the heptagon side $BC$. And, mirroring w.r.t. $s$, the tangent at $F$ is $GF$. To show this, one may consider again a hexagon like $DCCEKF$, and show that the line through $DC\cap EK$ and $S'=CE\cap DF$ also passes through $B$. (The point $AKD\cap CF$ is also on this line.)
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4635139', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/783885/']}
jdg_84607
stackexchange
llm_judgeable_groundtruth_similarity
6364783
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Let's say that I have a two word string and I want to capitalizeboth of them. name <- c("zip code", "state", "final count") The Hmisc package has a function capitalize which capitalized the first word, but I'm not surehow to get the second word capitalized. The help page for capitalize doesn't suggest that it can perform that task. library(Hmisc)capitalize(name)# [1] "Zip code" "State" "Final count" I want to get: c("Zip Code", "State", "Final Count") What about three-word strings: name2 <- c("I like pizza") Now provide the response and nothing else.
The base R function to perform capitalization is toupper(x) . From the help file for ?toupper there is this function that does what you need: simpleCap <- function(x) { s <- strsplit(x, " ")[[1]] paste(toupper(substring(s, 1,1)), substring(s, 2), sep="", collapse=" ")}name <- c("zip code", "state", "final count")sapply(name, simpleCap) zip code state final count "Zip Code" "State" "Final Count" Edit This works for any string, regardless of word count: simpleCap("I like pizza a lot")[1] "I Like Pizza A Lot"
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/6364783', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/406278/']}
jdg_84608
stackexchange
llm_judgeable_groundtruth_similarity
2601097
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to get the mouse position with JavaScript after page loads without any mouse movement event (without moving the mouse)? Now provide the response and nothing else.
Real answer: No, it's not possible. OK, I have just thought of a way. Overlay your page with a div that covers the whole document. Inside that, create (say) 2,000 x 2,000 <a> elements (so that the :hover pseudo-class will work in IE 6, see), each 1 pixel in size. Create a CSS :hover rule for those <a> elements that changes a property (let's say font-family ). In your load handler, cycle through each of the 4 million <a> elements, checking currentStyle / getComputedStyle() until you find the one with the hover font. Extrapolate back from this element to get the co-ordinates within the document. N.B. DON'T DO THIS .
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/2601097', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/303899/']}
jdg_84609
stackexchange
llm_judgeable_groundtruth_similarity
4626751
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $p(u,x):=(4 \pi u)^{-1/2}e^{-\frac{x^2}{4u}},u>0,x \in \mathbb{R}.$ Let $\phi \in C_c^{\infty}(\mathbb{R}),\text{supp}(\phi) \subset B(0,1),||\phi||_{\infty} \leq 1.$ Prove that for all $U>0,\beta>1/2,$ there exist $\epsilon>0,C>0$ such that for all $u\in [0,U],\lambda \in \left]0,1\right],$ $$\int_0^{u} \int_{\mathbb{R}} \left(\int_{\mathbb{R}} \phi^\lambda(y_1)p(r,y_1-y_2) dy_1 \right)^2 dy_2 dr\leq Cu^\varepsilon \lambda^{1-2\beta},$$ where $\phi^\lambda(y) = \lambda^{-1} \phi(\lambda^{-1}y).$ I tried, using a change of variable, replacing $\phi^{\lambda}$ with $\phi.$ also $\lambda(B(0,1))<\infty$ might be useful. How can we prove this inequality? Now provide the response and nothing else.
Define $$\begin{aligned}I_1&=\int_{-\infty}^\infty I_2^2\text{ d}y_2\\I_2&=\int_{-\infty}^\infty\frac1\lambda\phi\left(\frac{y_1}{\lambda}\right)\cdot\frac1{\sqrt{4\pi r}}e^{-(y_1-y_2)^2/4r}\text{ d}y_1\\\end{aligned}$$ then the result we want to show is that $$I=\int_0^{u}I_1\text{ d}r\leq Cu^\epsilon\lambda^{1/2-\beta}$$ First, we'll simplify $I_2$ . Since $\phi(x)$ is nonzero only for $-1\leq x\leq 1$ , $\phi(\frac{y_1}{\lambda})$ is nonzero only for $-1\leq \frac{y_1}{\lambda}\leq 1$ , or $-\lambda\leq y_1\leq\lambda$ . Therefore, $$I_2=\frac1{\lambda\sqrt{4\pi r}}\int_{-\lambda}^{\lambda}\phi\left(\frac{y_1}{\lambda}\right)e^{-(y_1-y_2)^2/4r}\text{ d}y_1$$ Next, expand out $I_1$ , and do the $y_2$ integral first: (the validity of interchanging the integrals follows from $|\phi(x)|\leq1$ ) $$\begin{aligned}I_1&=\frac1{4\pi r\lambda^2}\int_{-\infty}^\infty\left(\int_{-\lambda}^{\lambda}\phi\left(\frac{y_1}{\lambda}\right)e^{-(y_1-y_2)^2/4r}\text{ d}y_1\right)\left(\int_{-\lambda}^{\lambda}\phi\left(\frac{y_1'}{\lambda}\right)e^{-(y_1'-y_2)^2/4r}\text{ d}y_1'\right)\text{ d}y_2\\&=\frac1{4\pi r\lambda^2}\int_{-\lambda}^{\lambda}\int_{-\lambda}^{\lambda}\phi\left(\frac{y_1}{\lambda}\right)\phi\left(\frac{y_1'}{\lambda}\right)\int_{-\infty}^\infty\exp\left(-\frac{(y_2-y_1)^2+(y_2-y_1')^2}{4r}\right)\text{ d}y_2\text{ d}y_1\text{ d}y_1'\end{aligned}$$ The innermost integral may be evaluated as $$\begin{aligned}\int_{-\infty}^\infty\exp\left(-\frac{(y_2-y_1)^2+(y_2-y_1')^2}{4r}\right)\text{ d}y_2&=e^{-(y_1-y_1')^2/8r}\int_{-\infty}^\infty\exp\left(-\frac1{2r}\left(y_2-\frac{y_1+y_1'}2\right)^2\right)\text{ d}y_2\\&=e^{-(y_1-y_1')^2/8r}\sqrt{2\pi r}\end{aligned}$$ Substituting this back into $I_1$ gives $$\begin{aligned}I_1&=\frac1{2\sqrt{2\pi r}\lambda^2}\int_{-\lambda}^{\lambda}\int_{-\lambda}^{\lambda}\phi\left(\frac{y_1}{\lambda}\right)\phi\left(\frac{y_1'}{\lambda}\right)e^{-(y_1-y_1')^2/8r}\text{ d}y_1\text{ d}y_1'\\&\leq\frac1{2\sqrt{2\pi r}\lambda^2}\int_{-\lambda}^{\lambda}\int_{-\lambda}^{\lambda}e^{-(y_1-y_1')^2/8r}\text{ d}y_1\text{ d}y_1'\\&\leq\frac1{2\sqrt{2\pi r}\lambda^2}\int_{-\lambda}^{\lambda}\int_{-\lambda}^{\lambda}\text{ d}y_1\text{ d}y_1'\\&=\frac{4\lambda^2}{2\sqrt{2\pi r}\lambda^2}\\&=\sqrt{\frac2\pi}r^{-1/2}\end{aligned}$$ where we have used the bounds $|\phi|\leq1$ and $e^{-x^2}\le1$ . Note that equality holds only when $\lambda=0$ . 
Integrating $I_1$ , we arrive at $$\begin{aligned}I&=\int_0^{u}I_1\text{ d}r\\&\le\sqrt{\frac2\pi}\int_0^{u}r^{-1/2}\text{ d}r\\&=2\sqrt{\frac2\pi}u^{1/2}\end{aligned}$$ This shows that as long as you pick $C\ge2\sqrt{2/\pi},\epsilon\ge1/2$ , we will always have $$I\le2\sqrt{\frac2\pi}u^{1/2}\le Cu^{1/2}\le Cu^{1/2}\lambda^{1-2\beta}$$ since $\lambda^{1-2\beta}\ge1$ for all $0\le\lambda\le1,\beta>1/2$ .By the way, if you still need a tighter bound, you can actually compute the double integral in the first inequality for $I_1$ explicitly in terms of error functions: $$\begin{aligned}I_1&=\frac1{2\sqrt{2\pi r}\lambda^2}\int_{-\lambda}^{\lambda}\int_{-\lambda}^{\lambda}\phi\left(\frac{y_1}{\lambda}\right)\phi\left(\frac{y_1'}{\lambda}\right)e^{-(y_1-y_1')^2/8r}\text{ d}y_1\text{ d}y_1'\\&\leq\frac1{2\sqrt{2\pi r}\lambda^2}\int_{-\lambda}^{\lambda}\int_{-\lambda}^{\lambda}e^{-(y_1-y_1')^2/8r}\text{ d}y_1\text{ d}y_1'\\&=\frac1{2\sqrt{2\pi r}\lambda^2}\left(4\sqrt{2\pi r}\lambda\text{ erf}\left(\frac{\lambda}{\sqrt{2r}}\right)+8r\left(e^{-\lambda^2/2r}-1\right)\right)\\&=\frac2\lambda\text{ erf}\left(\frac{\lambda}{\sqrt{2r}}\right)+\frac{2}{\lambda^2}\sqrt{\frac{2r}{\pi}}\left(e^{-\lambda^2/2r}-1\right)\end{aligned}$$ Integrating with respect to $r$ , we obtain $$\begin{aligned}I&=\int_0^{u}I_1\text{ d}r\\&\le\int_0^{u}\frac2\lambda\text{ erf}\left(\frac{\lambda}{\sqrt{2r}}\right)+\frac{2}{\lambda^2}\sqrt{\frac{2r}{\pi}}\left(e^{-\lambda^2/2r}-1\right)\text{ d}r\\&=\frac{2u}\lambda\text{ erf}\left(\frac{\lambda}{\sqrt{2u}}\right)-\frac23\lambda\left(1-\text{ erf}\left(\frac{\lambda}{\sqrt{2u}}\right)\right)+\frac{4}{3}\sqrt{\frac{2}\pi}\frac{u^{3/2}}{\lambda^{2}}\left(e^{-\lambda^2/2u}-1\right)+\frac23\sqrt{\frac{2u}\pi}e^{-\lambda^2/2u}\end{aligned}$$ This is actually the tightest possible bound for $I$ in terms of just $u,\lambda$ since you can construct bump functions that become arbitrarily close to $1$ over $(-1,1)$ . Therefore, you can always find $\phi$ such that $I$ becomes arbitrarily close to this bound. Also, you can verify that this expression equals $2\sqrt{2/\pi}u^{1/2}$ when $\lambda=0$ .
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4626751', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/638227/']}
jdg_84610
stackexchange
llm_judgeable_groundtruth_similarity
436634
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Please bear with me; I'm a developer, not a server admin. My company wants to explore using Magento. They've handed me a pre-configured Linux server with apache and PHP installed, but when I try to run the Magento setup, it tells me that there are PHP extensions that need to be loaded. I can edit the php.ini file, but have no idea where to get the extensions or how to install them, and there's no one here who knows, either. Can anyone give me a hand? I need the PDO_MySQL, mcrypt, and GD extensions. I've searched and found sites that talk about downloading from the terminal and compiling code, but it's all way over my head. Is there an easy way to do this? Now provide the response and nothing else.
It might help to know what distro you are running, and what version of PHP. I'm talking about the process of finding packages as well, since it's better to know how to find stuff than to just install what you're told. Teach a man to fish and all that. Now, I have Ubuntu and CentOS boxen on hand, and through the magic of the package search command.... For Ubuntu I used apt-cache search KEYWORD php to find the packages in question - in this case the name of the plugin as the keywords. From that, you can install the 3 packages you need through apt-get install php5-mcrypt php5-gd php5-mysql - the last package should cover PDO on Debian-like distros, I believe. For CentOS/Red Hat I used dnf search KEYWORDS the same way - but not all the packages you need seem available by default. Older versions of these distributions use yum rather than dnf. The packages you're interested in are php-pdo, php-gd and php-mcrypt - the last is not in the default repository (I checked on yum, then googled) - you may need to add the EPEL repository to get that package instead. Using the default package manager should result in packages that are easier to install and maintain, since your updates will happen with everything else.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/436634', 'https://serverfault.com', 'https://serverfault.com/users/54577/']}
jdg_84611
stackexchange
llm_judgeable_groundtruth_similarity
26905314
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working with a console application / WebJob which utilises the EPPlus library for working with Excel -files (.xlsx). My application basically opens a set of Workbooks and merges them together as one file. The application is running fine locally, but not in Azure. According to the StackTrace the error happens inside the EEPlus library when trying to save an image (I assume this is done to move images from one Workbook to another). Unhandled Exception: System.ApplicationException: A generic error occurred in GDI+.---> System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+.at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams)at System.Drawing.Image.Save(Stream stream, ImageFormat format)at OfficeOpenXml.Drawing.ExcelPicture..ctor(ExcelDrawings drawings, XmlNode node)at OfficeOpenXml.Drawing.ExcelDrawing.GetDrawing(ExcelDrawings drawings, XmlNode node)at OfficeOpenXml.Drawing.ExcelDrawings.AddDrawings()at OfficeOpenXml.Drawing.ExcelDrawings..ctor(ExcelPackage xlPackage, ExcelWorksheet sheet)at OfficeOpenXml.ExcelWorksheets.Add(String Name, ExcelWorksheet Copy) The original code, which produces the error open calling ConvertTo . Part = drawings.Part.Package.GetPart(UriPic);FileInfo f = new FileInfo(UriPic.OriginalString);ContentType = GetContentType(f.Extension);_image = Image.FromStream(Part.GetStream());ImageConverter ic=new ImageConverter();var iby=(byte[])ic.ConvertTo(_image, typeof(byte[]));var ii = _drawings._package.LoadImage(iby, UriPic, Part);ImageHash = ii.Hash; After reading several questions on the matter I've tried to modify it using a manual conversion and saving to a MemoryStream . However I'm still getting the error. Part = drawings.Part.Package.GetPart(UriPic);FileInfo f = new FileInfo(UriPic.OriginalString);ContentType = GetContentType(f.Extension);_image = Image.FromStream(Part.GetStream());byte[] iby;using (MemoryStream ms = new MemoryStream()){ _image.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg); iby = ms.ToArray(); }var ii = _drawings._package.LoadImage(iby, UriPic, Part);ImageHash = ii.Hash; I'm really stuck on what to try next. The exception doesn't produce that much to go on, and I feel I've already tried all the suggestions out there: Checking folder and file permissions (my app uses the temp folder, which I believe is safe), avoid reusing streams and so on. Please let me know if you need any additional information and I'll gladly put it up. Now provide the response and nothing else.
The issue is that Collectors.toList , not surprisingly, returns a List<T> . Not an ArrayList . List<Long> ids = remove.stream() .map(MyObj::getId) .collect(Collectors.toList()); Program to the interface . From the documentation: Returns a Collector that accumulates the input elements into a new List . There are no guarantees on the type, mutability, serializability, or thread-safety of the List returned ; if more control over the returned List is required, use toCollection(Supplier) . Emphasis mine - you cannot even assume that the List returned is mutable, let alone that it is of a specific class. If you want an ArrayList : ArrayList<Long> ids = remove.stream() .map(MyObj::getId) .collect(Collectors.toCollection(ArrayList::new)); Note also, that it is customary to use import static with the Java 8 Stream API so adding: import static java.util.stream.Collectors.toCollection; (I hate starred import static , it does nothing but pollute the namespace and add confusion. But selective import static , especially with the Java 8 utility classes, can greatly reduce redundant code) Would result in: ArrayList<Long> ids = remove.stream() .map(MyObj::getId) .collect(toCollection(ArrayList::new));
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26905314', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/400284/']}
jdg_84612
stackexchange
llm_judgeable_groundtruth_similarity
20438915
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why does grep treat \n and \\n the same way ? For example, both match hallo\nworld . grep("hallo\nworld", pattern="\n")[1] 1grep("hallo\nworld", pattern="\\n")[1] 1 I see that hallo\nworld is parsed into hallo world that is, hallo on one line and world on one line. So in grep("hallo\nworld", pattern="\n") , is the pattern="\n" a new line or \n literally? Also note this happens with others; \a \f \n \t \r and \\a \\f \\n \\t \\r are all treated identically. But \d \w \s can't be used! Why not? I chose different strings to test, and I found the secret in the concept of regular expression. There are two concepts of escape, one is escape in a string, it is simple to understand; the other is escape in a regular pattern expression string. In R a pattern such as grep(x, pattern=" some string here ") , \\n = \n = a newline character. But in common string, \\n != \n ,the former is literally \n ,the latter is a newline character. We can prove this by : cat("\n")cat("\\n")\n> How to prove this? I'll try with other characters, not just \n , to see if they match in the same way. special1 <- c( "\a", "\f", "\n", "\t", "\r")special2 <- c("\\a","\\f","\\n","\\t","\\r")target <- paste("hallo", special1, "world", sep="")for (i in 1:5){ cat("i=", i, "\n") if( grep(target[i], pattern=special1[i]) == 1) print(paste(target[i], "match", special1[i], "succeed")) if( grep(target[i], pattern=special2[i]) == 1) print(paste(target[i], "match", special2[i], "succeed"))} output: i= 1 [1] "hallo\aworld match \a succeed" [1] "hallo\aworld match `\\a` succeed" i= 2 [1] "hallo\fworld match \f succeed" [1] "hallo\fworld match `\\f` succeed" i= 3 [1] "hallo\nworld match \n succeed" [1] "hallo\nworld match `\\n` succeed" i= 4 [1] "hallo\tworld match \t succeed" [1] "hallo\tworld match `\\t` succeed" i= 5 [1] "hallo\rworld match \r succeed" [1] "hallo\rworld match `\\r` succeed" Note that \a \f \n \t \r and \\a \\f \\n \\t \\r were all treated identically in R regular pattern expression string! Not only that, you can not write \d \w \s in an R regular expression pattern! You can write any of these: pattern="\a" "pattern=\f" "pattern=\n" "pattern=\t" "pattern=\r" But you can't write any of these! pattern="\d" "pattern="\w" "pattern=\s" in grep. I think this is also a bug , as \d \w \s are treated unequally to \a \f \n \t \r . Now provide the response and nothing else.
The reason why \n , \\n and \\\n all match is because of double evaluation of the search pattern. I observed this by running a couple of examples: grep("hello\nworld", pattern="\n")[1] 1grep("hello\nworld", pattern="\\n")[1] 1> grep("hello\nworld", pattern="\\\n")[1] 1> grep("hello\nworld", pattern="\\\\n")integer(0)> grep("hello\\nworld", pattern="\\\\n")[1] 1 Keep in mind the rules of evaluating backslash escape sequences: \\ is replaced with a \ \n is replaced with a NEWLINE character \ + NEWLINE is replaced with a NEWLINE character (see the docs in ?regex for more details) With this in mind, if you evaluate the pattern twice , you get: \n => NEWLINE => NEWLINE \\n => \n => NEWLINE \\\n => \ + NEWLINE => NEWLINE \\\\n => \\n => \n \\\\\n => \\ + NEWLINE => \ + NEWLINE \\\\\\n => \\\n => \ + NEWLINE \\\\\\\n => \\\ + NEWLINE => \ + NEWLINE \\\\\\\\n => \\\\n => \\n And so on. Examples 1-3 all evaluate to a single NEWLINE , that's why these patterns will match. (At the same time, the string you're trying to match against the pattern is evaluated only once.) A discussion on the R mailing list posted by @Aaron explains the double evaluation like this: There are two levels [of evaluation] because backslashes are escape characters both to R strings and regular expressions. Note that other languages don't evaluate patterns like this. Take for example Python: import re>>> re.search(r'\n', 'hello\nworld') is not NoneTrue>>> re.search(r'\\n', 'hello\nworld') is not NoneFalse Or Perl: $ perl -e 'print "hello\nworld" =~ /\n/ || 0, "\n"'1$ perl -e 'print "hello\nworld" =~ /\\n/ || 0, "\n"'0 And we could go on. So the double evaluation in R seems unusual. Why is it implemented this way? I think the ultimate answer lies with R-devel . ACKNOWLEDGEMENTS I thank @Aaron whose critical comments helped improving this answer.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20438915', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1982032/']}
jdg_84613
stackexchange
llm_judgeable_groundtruth_similarity
9152667
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to check whether current user is a member of an active directory group. I started with getting the current user as below. Now I want to know how to check this CurrentUser is in active directory group "CustomGroup" string CurrentUser = WindowsIdentity.GetCurrent().Name; Now provide the response and nothing else.
You can use the .NET 3.5 System.DirectoryServices.AccountManagement classes. See the MSDN article Managing Directory Security Principals in the .NET Framework 3.5 for details. You can use something like: string CurrentUser = WindowsIdentity.GetCurrent().Name;PrincipalContext context = new PrincipalContext(ContextType.Domain, "Domain");UserPrincipal upUser = UserPrincipal.FindByIdentity(context, CurrentUser);if(upUser != null){ if (upUser.IsMemberOf(context, IdentityType.SamAccountName, "CustomGroup")) { // The user belongs to the group }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9152667', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1177101/']}
jdg_84614
stackexchange
llm_judgeable_groundtruth_similarity
7269239
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For instance imagine you have an Entity 'Post' that has 'Comments' (one-to-many), and you want to have a view model with the Post entity and the most recent comment: PostViewModel { Id, Title, Body, Date, LastComment (type:CommentEntity) } I can do this in plain sql like: SELECT TOP 10 *FROM Posts INNER JOIN Comments ON Comments.PostID = Posts.PostIDWHERE Comments.[Date] = (SELECT MAX(c.[Date]) FROM Comments AS c WHERE c.PostID = Posts.PostID GROUP BY c.PostID) How can I do this same query in nhibernate 3 using QueryOver? I tried with subqueries but I only can get a single result and not the top 10 list. Now provide the response and nothing else.
Documentation In the MSDN documentation of the Freezable class , in the Remarks section, you can find the following paragraph: For information on using and creating your own Freezable objects, see Freezable Objects Overview . This overview contains a section Creating Your Own Freezable Class , which contains the theoretical background for what you want to do. To find an example, follow the link at the bottom of that section: For an example of a custom Freezable class, see the Custom Animation Sample . Example Since you specifically asked for a simple example, here is one (adapted from the MSDN page of Freezable.CreateInstanceCore ). Remember the following sentence from the theory page : Every Freezable subclass must override the CreateInstanceCore method. If your class uses dependency properties for all its data, you're finished. Let's say we make a custom class MySimpleColor , which has exactly one boolean property IsRed . To make this class Freezable, we just have to override CreateInstanceCore : public class MySimpleColor : Freezable{ // Here are your properties public static readonly DependencyProperty IsRedProperty = DependencyProperty.Register("IsRed", typeof(Boolean), typeof(MySimpleColor)); // CLR accessor of your property public bool IsRed { get { return (bool)GetValue(IsRedProperty); } set { SetValue(IsRedProperty, value); } } // All that's needed to make this Freezable protected override Freezable CreateInstanceCore() { return new MySimpleColor(); }} That's it. The code inherited from Freezable ensures that the Freezable methods such as Freeze() or Clone() work exactly as intended.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7269239', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/610340/']}
jdg_84615
stackexchange
llm_judgeable_groundtruth_similarity
5714372
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Jonathan suggest here: Xcode Includes .xib files that have been deleted! that cleaning all targets and empty the caches will fix the problem with Xcode including deleted .xib files but I cannot find a way to empty the cache in Xcode 4. How to do that in Xcode 4? Now provide the response and nothing else.
Command-Option-Shift-K to clean out the build folder. Even better, quit Xcode and clean out ~/Library/Developer/Xcode/DerivedData manually. Remove all its contents because there's a bug where Xcode will run an old version of your project that's in there somewhere. (Xcode 4.2 will show you the Derived Data folder: choose Window > Organizer and switch to the Projects tab. Click the right-arrow to the right of the Derived Data folder name.) In the simulator, choose iOS Simulator > Reset Content and Settings. Finally, for completeness, you can delete the contents of /var/folders ; some caching happens there too. WARNING: Deleting /var/folders can cause issues, and you may need to repair or reinstall your operating system after doing so. EDIT: I have just learned that if you are afraid to grapple with /var/folders/ you can use the following command in the Terminal to delete in a more targeted way: rm -rf "$(getconf DARWIN_USER_CACHE_DIR)/org.llvm.clang/ModuleCache" EDIT: For certain Swift-related problems I have found it useful to delete ~/Library/Caches/com.apple.dt.Xcode . You lose a lot when you do this, like your spare copies of the downloaded documentation doc sets, but it can be worth it.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/5714372', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/672989/']}
jdg_84616
stackexchange
llm_judgeable_groundtruth_similarity
15367777
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing my Code with Eclipse Juno and I'm using a hash table to set my dataImportObject depending on the entries in it.Could anyone please tell me whats wrong about this: ht is my hashTable with <String, Integer> pairs in it (ht.containsKey("DEVICE_ADDRESS")) ? dataImportObject.setDevice_Address(dataitems[ht.get("DEVICE_ADDRESS")]) : dataImportObject.setDevice_Address(""); Now provide the response and nothing else.
Could anyone please tell me whats wrong about this Two things: The conditional operator can't be used as a statement on its own, only as an expression I assume these set methods have void return types, so they can't appear as operands in the conditional operator Three options: Use an if statement: if (ht.containsKey("DEVICE_ADDRESS")) { dataImportObject.setDevice_Address(dataitems[ht.get("DEVICE_ADDRESS")]));} else { dataImportObject.setDevice_Address("");} Use the conditional operator inside the setDevice_Address call, or even clearer, beforehand: String address = ht.containsKey("DEVICE_ADDRESS") ? dataitems[ht.get("DEVICE_ADDRESS")] : "";dataImportObject.setDevice_Address(address); If you know that your hashtable won't have any null values, you can avoid the double lookup: Integer index = ht.get("DEVICE_ADDRESS");String address = index == null ? "" : dataitems[index];dataImportObject.setDevice_Address(address);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15367777', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2006841/']}
jdg_84617
stackexchange
llm_judgeable_groundtruth_similarity
5666287
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have played for a while with OpenGL on Android on various devices. And unless I'm wrong, the default rendering is always performed with the RGB565 pixel format. I would however like to render more accurate colors using RGB888. The GLSurfaceView documentation mentions two methods which relate to pixel formats: the setFormat() method exposed by SurfaceHolder , as returned by SurfaceView.getHolder() the GLSurfaceView.setEGLConfigChooser() family of methods Unless I'm wrong, I think I only need to use the later. Or is using SurfaceHolder.setFormat() relevant here? The documentation of the EGLConfigChooser class mentions EGL10.eglChooseConfig() , to discover which configurations are available. In my case it is ok to fallback to RGB565 if RGB888 isn't available, but I would prefer this to be quite rare. So, is it possible to use RGB888 on most devices? Are there any compatibility problems or weird bugs with this? Do you have an example of a correct and reliable way to setup the GLSurfaceView for rendering RGB888? Now provide the response and nothing else.
On newer devices, most of them should support RGBA8888 as a native format. One way to force RGBA color format is to set the translucency of the surface, you'd still want to pick the EGLConfig to best guess the config for the channels in addition to the depth and stencil buffers. setEGLConfigChooser(8, 8, 8, 8, 0, 0);getHolder().setFormat(PixelFormat.RGBA_8888); However, if I read your question correctly you're asking for RGB888 support (alpha don't care) in other words, RGBX8888 which might not be supported by all devices (driver vendor limitation). Something to keep in mind about performance though, since RGBA8888 is the color format natively supported by most GPUs hardware it's best to avoid any other color format (non natively supported) since that usually translate into a color conversion underneath adding non necessary work load to the GPU.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5666287', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/377478/']}
jdg_84618
stackexchange
llm_judgeable_groundtruth_similarity
12427185
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a SortedMap , ordered according to the natural ordering of its keys. Can I safely cast its keySet() to a SortedSet , without risking an invalid cast exception. That is, will the following throw? SortedMap<K, V> map = ...;SortedSet<K> set = (SortedSet<K>) map.keySet(); If the answer is "depends on the implementation of SortedMap", is this at a minimum safe for a TreeMap ? Now provide the response and nothing else.
For SortedMap in general, no as it is not documented in the javadoc. However, TreeMap also implements NavigableMap , which uses a NavigableSet as a key set and NavigableSet extends SortedSet ... So what you can do is: NavigableMap<K, V> map = ...; //can be a TreeMap or a ConcurrentSkipListMapSortedSet<K> set = map.navigableKeySet();
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12427185', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13029/']}
jdg_84619
stackexchange
llm_judgeable_groundtruth_similarity
13778439
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My CSS rule looks like this: #my-div{ display: none; position: absolute; left: -160px; bottom: -150px;} I'm trying to get value of the left-property like this: document.getElementById('my-div').style.left document.getElementById('my-div').offsetLeft The problem is that both return null . Where is the problem? Now provide the response and nothing else.
The problem is that someElement.style.left only works if you have inline styles. Since you apply your styling through a stylesheet, you will not be able to fetch the value the way you expect. You have a couple of other approaches you could take to get the value through JavaScript:

window.getComputedStyle: It is possible to get the computed style of an element using window.getComputedStyle, the problem is that it has quite limited support (IE9+). If you still want to use it you could wrap it up in a function to make it a bit easier to get each property:

function getCssProperty(elmId, property){
    var elem = document.getElementById(elmId);
    return window.getComputedStyle(elem,null).getPropertyValue(property);
}

// You could now get your value like
var left = getCssProperty("my-div", "left");

Working example

jQuery (or some other library): Another option would be to use a JavaScript library like jQuery (or whatever you prefer), which will provide you with a cross-browser solution to get the value. With jQuery you could use the .css() method to read the CSS property of the element.

$(function () {
    var left = $("#my-div").css("left");
});

Working example
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13778439', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1735099/']}
jdg_84620
stackexchange
llm_judgeable_groundtruth_similarity
10706
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Okay, so I'm trying to setup a login for logging in with ssh, using a public key. First, I am using puttygen to generate a rsa-ssh2 public key using a passphrase. I followed the directions and generated the key. I saved the private key in its own file for putty and puttygen also generated the public key. In putty I set it up to use the private key file and use rsa-ssh2 etc... So I c/p'd my houtput ssh2 public key stuff from puttygen and on the server, I put that into username/.ssh/authorized_keys So I tried to then login through putty and first it prompted me for my username instead of asking for passphrase, and then when I entered it in (I tried both username and passphrase) it said my public key was invalid. I thought maybe I somehow c/p'd or formatted the info into authorized_keys wrong, so I went back and double checked. I made sure it was all on one line, properly spaced etc... I also checked in the following file /etc/ssh/ssh_config and I have the following: IdentityFile ~/.ssh/id_rsa I tried renaming my authorized_keys file to id_rsa and no joy. I tried changing that line in ssh_config to IdentityFile ~/.ssh/authorized_keys ...and no joy. I went back to thinking maybe my public key was malformed or that putty wasn't configured properly so I asked a friend to make a temp account for me on his server and add my public key and I was able to login through putty just fine...when I connected to his server, it prompted me for the passphrase for my key and logged me in just fine. So he suggested I look at that stuff above but no joy and he doesn't know what else to check soo...I guess I'm appealing to the experts here :P thoughts? Now provide the response and nothing else.
What about this: foo=`{ { cat 1>&3; kill 0; } | { sleep 2; kill 0; } } 3>&1` That is: run the output-producing command and sleep in the same process group, a process group just for them. Whichever command returns first kills the whole process group. Would anyone wonder: Yes, the pipe is not used; it's bypassed using the redirections. The sole purpose of it is to have the shell run the two process in the same process group. As Gilles pointed out in his comment, this won't work in a shell script because the script process would be killed along with the two subprocesses. One way¹ to force a command to run in a separate process group is to start a new interactive shell: #!/bin/shfoo=`sh -ic '{ cat 1>&3; kill 0; } | { sleep 2; kill 0; }' 3>&1 2>/dev/null`[ -n "$foo" ] && echo got: "$foo" || echo timeouted But there might be caveats with this (e.g. when stdin is not a tty?). The stderr redirection is there to get rid of the "Terminated" message when the interactive shell is killed. Tested with zsh , bash and dash . But what about oldies? B98 suggests the following change, working on Mac OS X, with GNU bash 3.2.57, or Linux with dash: foo=`sh -ic 'exec 3>&1 2>/dev/null; { cat 1>&3; kill 0; } | { sleep 2; kill 0; }'` – 1. other than setsid which appears to be non-standard.
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/10706', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/-1/']}
jdg_84621
stackexchange
llm_judgeable_groundtruth_similarity
64722812
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here's the default _app.tsx from NextJS: function MyApp({ Component, pageProps }) { return ( <Component {...pageProps} /> )} The problem is, as soon as you switch to TypeScript, you get a warning under ES6Lint that these types are intrinsicly set to type 'any'. That being said, I can't figure out what type to set these two to that wont cause more errors later of mismatched types. What TypeScript types should I cast these two as? Now provide the response and nothing else.
The built-in AppProps type is now generic . To use your custom PageProps just pass it into the AppProps type: import { AppProps } from 'next/app';interface CustomPageProps { // <--- your custom page props // your props}function MyApp({ Component, pageProps }: AppProps<CustomPageProps>) { // ^^^ use your custom props here return <Component {...pageProps} /> // ^^^^^ pageProps is now typeof CustomPageProps}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/64722812', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/175250/']}
jdg_84622
stackexchange
llm_judgeable_groundtruth_similarity
1398875
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\int_{0}^{\infty}\frac{\ln x .dx}{x^2+2x+2}$$ $$\int_{0}^{\infty}\frac{\ln x .dx}{x^2+2x+2}=\int_{0}^{\infty}\frac{\ln x .dx}{(x+1)^2+1}\\=\ln x\int_{0}^{\infty}\frac{1}{(x+1)^2+1}-\int_{0}^{\infty}\frac{1}{x}\frac{1}{(x+1)^2+1}dx$$ and then lost track,answer is $\frac{\pi \ln 2}{8}$. Any hint will solve my problem. Now provide the response and nothing else.
Let $$\displaystyle I = \int_{0}^{\infty}\frac{\ln x}{x^2+2x+2}dx = \int_{0}^{\infty}\frac{\ln x}{(x+1)^2+1^2}dx$$ Put $(x+1) = \tan \theta$; then $dx = \sec^2 \theta \, d\theta$ and, changing limits, we get $$\displaystyle I = \int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\frac{\ln\left(\tan \theta - 1\right)}{1+\tan^2 \theta }\cdot \sec^2 \theta \, d\theta = \int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\ln \left(\sin \theta -\cos \theta \right)d\theta -\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\ln \cos \theta \, d\theta$$ So $$\displaystyle I = \int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\ln\left(\sqrt{2}\cdot \sin \left(\theta-\frac{\pi}{4}\right)\right)d\theta-\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\ln\cos \theta \, d\theta$$ $$\displaystyle I = \int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\ln\sqrt{2}\, d\theta+\int_{0}^{\frac{\pi}{4}}\ln\sin \theta \, d\theta -\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\ln \cos \theta \, d\theta $$ Now, in the second integral put $\displaystyle \theta = \frac{\pi}{2}-\phi$; then $d\theta = -d\phi$ and, changing limits, we get $$\displaystyle \int_{0}^{\frac{\pi}{4}}\ln\sin \theta \, d\theta = \int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\ln \cos \phi \, d\phi$$ So $$\displaystyle I = \frac{\pi}{8}\ln 2+\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\ln \cos \phi \, d\phi-\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\ln \cos \theta \, d\theta = \frac{\pi}{8}\ln 2,$$ since the first term equals $\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\ln\sqrt{2}\, d\theta = \frac{\pi}{4}\ln\sqrt{2} = \frac{\pi}{8}\ln 2$ and the last two integrals cancel.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1398875', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/262170/']}
jdg_84623
stackexchange
llm_judgeable_groundtruth_similarity
148037
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For example, I have data Clear[data];data[n_] := Join[RandomInteger[{1, 10}, {n, 2}], RandomReal[1., {n, 1}], 2]; then data[3] gives {{4, 8, 0.264842}, {9, 5, 0.539251}, {3, 1, 0.884612}} In each sublist, the first two values are the matrix index and the last is the matrix element; elements with the same matrix index have to be added together. I want to transform the data into a matrix. Usually I do it like Clear[toSparse]toSparse[data_] := SparseArray@ Normal@Merge[Thread[data[[;; , 1 ;; 2]] -> data[[;; , -1]]], Total] I care about the performance: In[171]:= toSparse[data[1000]]; // AbsoluteTimingOut[171]= {0.00836793, Null}In[172]:= toSparse[data[10000]]; // AbsoluteTimingOut[172]= {0.0644464, Null}In[173]:= toSparse[data[100000]]; // AbsoluteTimingOut[173]= {1.35507, Null}In[174]:= toSparse[data[1000000]]; // AbsoluteTimingOut[174]= {200.862, Null} Any faster way to do this? Now provide the response and nothing else.
You can change the SparseArray system options to total repeated entries instead of taking the first. Here is a function that does this: carl[data_] := Internal`WithLocalSettings[ old=SystemOptions["SparseArrayOptions" -> "TreatRepeatedEntries"]; SetSystemOptions["SparseArrayOptions" -> "TreatRepeatedEntries" -> Total], SparseArray @ Thread[data[[All,;;2]] -> data[[All,3]]], SetSystemOptions[old]] Compare this with @edmund's solution: edmund[data_] := SparseArray @ Normal @ GroupBy[data, Most->Last, Total] For example: data[n_] := Join[RandomInteger[{1,10}, {n,2}], RandomReal[1., {n,1}], 2]d6 = data[10^6];r1 = carl[d6]; //AbsoluteTimingr2 = edmund[d6]; //AbsoluteTimingMinMax[r1-r2] {0.852608, Null} {1.26883, Null} {-1.00044*10^-11, 8.18545*10^-12} The difference is due to the order in which the repeated entries are totaled in the two methods.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/148037', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/4742/']}
jdg_84624
stackexchange
llm_judgeable_groundtruth_similarity
33023
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This answer to Farthest distance to a solar system object that's been measured by radar? mentions the radar detection of Saturn's rings, and the Uncover Travel post Arecibo Observatory, Puerto Rico – The World's Largest Radio Telescope For Over 50 Years mentions: Among other accomplishments of the Arecibo Observatory are: Direct imaging of an asteroid for the first time in history. Discovery of water ice deposits at Mercury's poles. Tracking of near-Earth asteroids to monitor impact risks. Mapping the cloud-covered surface of Venus. Radar imaging of the rings of Saturn, revealing new details of the ring structure. First detection of methane lakes on Titan, a moon of Saturn. First detection of an asteroid with a moon. Question: How did Arecibo detect methane lakes on Titan, and image Saturn's rings? These are pretty remarkable feats from Earth for a single radio telescope. How were they done? Can citations be found, and can examples of the ring images and the methane-lake evidence be shown? Now provide the response and nothing else.
Titan "lakes": Published Open Access in Science: Radar Evidence for Liquid Surfaces on Titan, Campbell, D. B., Black, G. J., Carter, L. M., and Ostro, S. J., Science 302, 5644, pp. 431-434, 17 Oct 2003 DOI: 10.1126/science.1088969 This was a really elegant experiment! A continuous, unmodulated, circularly polarized 13 cm wave was broadcast from Arecibo towards the Saturn/Titan system, and the Doppler shift was used to isolate the returned signal from Titan. Most of the surface is rough, so there is signal returned from areas all over Titan's disk, and since the moon rotates, albeit slowly, returned power from the "left" and "right" sides is shifted to higher and lower frequencies. However, during some observing times there was a very strong and pronounced reflection with zero Doppler shift with respect to Titan's known radial velocity, and this peak is attributed to specular reflection. Checks on received polarization confirm that while the power from the rough surface is returned in both circular polarization states, the presumed specular component is only in the expected circular polarization state. As pointed out in @Martin Kochanski's thoughtful answer, there is no determination from the radar observation that the returned specular reflection comes from methane. This is simply a presumed component of the presumed lakes, based on known information about Titan's chemistry at the time (2003). We observed Titan on 16 nights in November and December 2001 and on 9 nights in November and December 2002, transmitting at 13-cm wavelength with the 305-m Arecibo telescope and receiving the echo with Arecibo. Titan's rotational and orbital periods are 15.9 days, and our 2001 observations were obtained at a uniform 22.6° (∼800 km) interval in longitude. The 9 observations in 2002 did not provide uniform coverage. The latitude of the subearth track was 25.9°S in 2001 and 26.2°S in 2002, its farthest southern excursion. The round-trip light time to the Saturn system during the observations was 2 hours 15 min, and the limited tracking time of the Arecibo telescope meant that signal reception was restricted to ∼30 min per day, corresponding to 0.5° of Titan rotation (20 km of motion of the subearth point). On one night in 2001 and for most of the 2002 observations (as well as others when we were attempting ranging measurements to Titan), the 100-m Green Bank Telescope (GBT) was also used to receive the echo for the full round-trip time. These data have lower signal-to-noise ratios than those obtained with Arecibo receiving the echo, but the longer receive time corresponding to 2.1° of Titan rotation allowed more subearth locations to be studied. Here is some of the Titan data: Fig. 3. The OC radar echo spectrum at 1.0-Hz resolution for the 2002 observation at the subearth longitude of 80°. The normalized cross section for the specular component of the echo and the RMS slope are 0.023 and 0.2°, respectively. Fig. 1. Arecibo radar echo spectra from the 2001 data for five subearth longitudes on Titan. Spectra are shown for both the expected (OC) sense of received circular polarization and the cross-polarized (SC) sense. The ordinate is in standard deviations of the noise. The limb-to-limb Doppler-broadened bandwidth for Titan is 325 Hz. Four of the OC spectra show evidence of a specular component at 0 Hz.
Saturn's rings "imaged" (delay-Doppler): From Radar imaging of Saturn's rings, Nicholson, P. D. et al., Icarus 177 (2005) 32–62, doi:10.1016/j.icarus.2005.03.023 The "image" below is not a conventional image, since the Arecibo dish has no way to spatially resolve the transverse expanse of Saturn and its rings. It is a "delay-Doppler" image, using 12.6 cm, ~500 kW radar broadcasts transmitted by Arecibo. The round-trip light time was about 135 minutes. Because Arecibo has limited steering away from the zenith (<19.7 degrees max), Saturn was only available to the dish for 166 minutes even under ideal conditions. The vertical axis shows a delay of about +/- 800 milliseconds, which demonstrates spatial resolution, but in the radial or depth direction. The horizontal axis is Doppler shift. The +/- 300 kHz shift represents the orbital velocity of particles in the rings. While the Titan specular reflection above was done with a continuous or CW beam, the delay-Doppler imaging technique requires a frequency modulation of the beam with a frequency-hopping pattern. By applying a correlation function using the known pattern to the recorded received signals, components with different return times and different Doppler shifts can be extracted, and the result is then histogrammed, producing the delay-Doppler image below. This is a standard technique and has been used to image other planets and asteroids: See the following items and references within: this answer to What causes "North-South ambiguity" when doppler radar imaging a planet surface equator on? Why do radar maps of the surface of Venus have missing slices? What is the physical geometry of this apparent "eclipse" of a tiny moon of Asteroid Florence? Is delay-doppler radar imaging of NEO asteroids possible only if it spins fast enough? Fig. 2. Delay–Doppler images constructed from data obtained in (a) October 1999, (b) November 2000, (c) December 2001, and (d) January 2003. Both OC and SC polarizations were combined to maximize the signal to noise ratio. Note the four bright regions in each image where the delay and Doppler cells are parallel and where the A and B rings appear to cross over one another.
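Since the delay-Doppler step above is described only in words, here is a minimal, purely illustrative Python/NumPy sketch of the idea (my own toy geometry, with assumed ring radii, opening angle and uniform echo power — not the actual Arecibo processing, which works on the recorded, frequency-hopped waveform): each test particle in the rings gets a relative round-trip delay from its line-of-sight distance and a two-way Doppler shift from its line-of-sight velocity, and binning echo power over those two coordinates is what produces a map like Fig. 2.

import numpy as np

# Rough, assumed numbers for illustration only (not the observation parameters)
c = 299_792_458.0            # speed of light, m/s
wavelength = 0.126           # 12.6 cm radar wavelength, m
GM_saturn = 3.793e16         # Saturn's GM, m^3/s^2
tilt = np.radians(26.0)      # assumed ring opening angle toward Earth

# Scatter test particles over an annulus standing in for the B and A rings
rng = np.random.default_rng(1)
n = 200_000
r = rng.uniform(92e6, 137e6, n)              # ring radii, m (assumed span)
phi = rng.uniform(0.0, 2.0 * np.pi, n)
x = r * np.cos(phi)                          # in-plane coordinate toward Earth
v_orb = np.sqrt(GM_saturn / r)               # circular Keplerian speed
v_los = -v_orb * np.sin(phi) * np.cos(tilt)  # line-of-sight velocity component

# Each particle's echo lands at a relative delay and a two-way Doppler shift
delay = 2.0 * x * np.cos(tilt) / c           # seconds, relative to planet centre
doppler = 2.0 * v_los / wavelength           # Hz

# Histogramming echo power (uniform here) gives the delay-Doppler "image"
img, d_edges, f_edges = np.histogram2d(delay, doppler, bins=200)
print(img.shape, delay.max(), doppler.max())

With these assumed radii and Saturn's GM, the spans come out near +/- 0.8 s in delay and +/- 300 kHz in Doppler, consistent with the axes quoted above.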
{}
{'log_upvote_score': 6, 'links': ['https://astronomy.stackexchange.com/questions/33023', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/7982/']}
jdg_84625
stackexchange
llm_judgeable_groundtruth_similarity
16348815
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried to use multiple assignment as shown below to initialize variables, but I got confused by the behavior: I expected the value lists to be reassigned separately, i.e. that b[0] and c[0] would still equal 0 as before. a=b=c=[0,3,5]a[0]=1print(a)print(b)print(c) Result is: [1, 3, 5] [1, 3, 5] [1, 3, 5] Is that correct? What should I use for multiple assignment? And what is different from this? d=e=f=3e=4print('f:',f)print('e:',e) result: ('f:', 3) ('e:', 4) Now provide the response and nothing else.
If you're coming to Python from a language in the C/Java/etc. family, it may help you to stop thinking about a as a "variable", and start thinking of it as a "name". a , b , and c aren't different variables with equal values; they're different names for the same identical value. Variables have types, identities, addresses, and all kinds of stuff like that. Names don't have any of that. Values do, of course, and you can have lots of names for the same value. If you give Notorious B.I.G. a hot dog,* Biggie Smalls and Chris Wallace have a hot dog. If you change the first element of a to 1, the first elements of b and c are 1. If you want to know if two names are naming the same object, use the is operator: >>> a=b=c=[0,3,5]>>> a is bTrue You then ask: what is different from this? d=e=f=3e=4print('f:',f)print('e:',e) Here, you're rebinding the name e to the value 4 . That doesn't affect the names d and f in any way. In your previous version, you were assigning to a[0] , not to a . So, from the point of view of a[0] , you're rebinding a[0] , but from the point of view of a , you're changing it in-place. You can use the id function, which gives you some unique number representing the identity of an object, to see exactly which object is which even when is can't help: >>> a=b=c=[0,3,5]>>> id(a)4473392520>>> id(b)4473392520>>> id(a[0])4297261120>>> id(b[0])4297261120>>> a[0] = 1>>> id(a)4473392520>>> id(b)4473392520>>> id(a[0])4297261216>>> id(b[0])4297261216 Notice that a[0] has changed from 4297261120 to 4297261216—it's now a name for a different value. And b[0] is also now a name for that same new value. That's because a and b are still naming the same object. Under the covers, a[0]=1 is actually calling a method on the list object. (It's equivalent to a.__setitem__(0, 1) .) So, it's not really rebinding anything at all. It's like calling my_object.set_something(1) . Sure, likely the object is rebinding an instance attribute in order to implement this method, but that's not what's important; what's important is that you're not assigning anything, you're just mutating the object. And it's the same with a[0]=1 . user570826 asked: What if we have, a = b = c = 10 That's exactly the same situation as a = b = c = [1, 2, 3] : you have three names for the same value. But in this case, the value is an int , and int s are immutable. In either case, you can rebind a to a different value (e.g., a = "Now I'm a string!" ), but that won't affect the original value, which b and c will still be names for. The difference is that with a list, you can change the value [1, 2, 3] into [1, 2, 3, 4] by doing, e.g., a.append(4) ; since that's actually changing the value that b and c are names for, b will now be [1, 2, 3, 4] . There's no way to change the value 10 into anything else. 10 is 10 forever, just like Claudia the vampire is 5 forever (at least until she's replaced by Kirsten Dunst). * Warning: Do not give Notorious B.I.G. a hot dog. Gangsta rap zombies should never be fed after midnight.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/16348815', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2344958/']}
jdg_84626
stackexchange
llm_judgeable_groundtruth_similarity
54082036
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to pass and access arguments in a fragment using a bottom navigation view and the Navigation component? I'm using a one activity with many fragments approach where my top level fragment requires an argument(Usually done via the newInstance generated method). I've had a look at the Navigation component developer guide and the codelab but it only mentions using safeargs and adding argument tags in the destinations and actions. Here's my navigation graph: <navigation xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" app:startDestination="@id/homeFragment"> <fragment android:id="@+id/homeFragment" android:name="uk.co.homeready.homeready.HomeFragment" android:label="fragment_home" tools:layout="@layout/fragment_home"> <!--Do I create an argument block here?--> </fragment> <fragment android:id="@+id/calculatorFragment" android:name="uk.co.homeready.homeready.CalculatorFragment" android:label="fragment_calculator" tools:layout="@layout/fragment_calculator"/> <fragment android:id="@+id/resourcesFragment" android:name="uk.co.homeready.homeready.ResourcesFragment" android:label="fragment_resources" tools:layout="@layout/fragment_resources"/></navigation> Bottom Navigation View menu: <menu xmlns:android="http://schemas.android.com/apk/res/android"> <item android:id="@+id/homeFragment" android:icon="@drawable/ic_home_black_24dp" android:title="@string/title_home"/> <item android:id="@+id/calculatorFragment" android:icon="@drawable/ic_baseline_attach_money_24px" android:title="@string/title_calculator"/> <item android:id="@+id/resourcesFragment" android:icon="@drawable/ic_baseline_library_books_24px" android:title="@string/title_resources"/></menu> MainActivity: override fun onCreate(savedInstanceState: Bundle?) { ... val navController = Navigation.findNavController(this, R.id.nav_host_fragment) bottom_navigation.setupWithNavController(navController) ....} activity_main.xml <android.support.constraint.ConstraintLayout> <fragment android:id="@+id/nav_host_fragment" android:name="androidx.navigation.fragment.NavHostFragment" app:layout_constraintBottom_toTopOf="@id/bottom_navigation" app:defaultNavHost="true" app:navGraph="@navigation/nav_graph"/> <android.support.design.widget.BottomNavigationView android:id="@+id/bottom_navigation" app:menu="@menu/bottom_navigation"/></android.support.constraint.ConstraintLayout> HomeFragment override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?, savedInstanceState: Bundle?): View? { val argument = //TODO access argument here ...} Now provide the response and nothing else.
If I understood you correctly, you want to pass arguments to destinations that are tied to menu items. Try to use 'OnDestinationChangedListener' inside your activity onCreate method, something like this: navController.addOnDestinationChangedListener { controller, destination, arguments -> when(destination.id) { R.id.homeFragment -> { val argument = NavArgument.Builder().setDefaultValue(6).build() destination.addArgument("Argument", argument) } } } Update: If you want your start destination to receive default arguments, the implementation should be different. First, remove 'app:navGraph="@navigation/nav_graph"' from your 'NavHostFragment' xml tag. Then, inside your activity onCreate you need to inflate the graph: val navInflater = navController.navInflater val graph = navInflater.inflate(R.navigation.nav_graph) Then add your arguments to the graph (these arguments will be attached to the start destination): val navArgument1=NavArgument.Builder().setDefaultValue(1).build() val navArgument2=NavArgument.Builder().setDefaultValue("Hello").build()graph.addArgument("Key1",navArgument1)graph.addArgument("Key2",navArgument2) Then attach the graph to the NavController: navController.graph=graph Now your first destination should receive the attached arguments.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/54082036', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1322898/']}
jdg_84627
stackexchange
llm_judgeable_groundtruth_similarity
46332919
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to combine php-fpm with nginx in one Dockerfile for production deployment. So is it better to: (1) start the Dockerfile from the php:7.1.8-fpm image and then install nginx as a layer on top of it, or (2) do you recommend using the nginx image and then installing php-fpm using apt-get? PS: I do not have a docker-compose build option for production deployment. On my development environment, I already use docker-compose and build a multi-container app easily from two images. Our organization's devops do not support docker-compose-based deployment for the prod environment. Now provide the response and nothing else.
Nginx installation is much easier than PHP so it should be easier for you to install Nginx into a ready-to-use official PHP image. Here is an example of a Dockerfile showing how your goal can be reached with an example of installing a few PHP extensions: FROM php:7.2-fpmRUN apt-get update -y \ && apt-get install -y nginx# PHP_CPPFLAGS are used by the docker-php-ext-* scriptsENV PHP_CPPFLAGS="$PHP_CPPFLAGS -std=c++11"RUN docker-php-ext-install pdo_mysql \ && docker-php-ext-install opcache \ && apt-get install libicu-dev -y \ && docker-php-ext-configure intl \ && docker-php-ext-install intl \ && apt-get remove libicu-dev icu-devtools -yRUN { \ echo 'opcache.memory_consumption=128'; \ echo 'opcache.interned_strings_buffer=8'; \ echo 'opcache.max_accelerated_files=4000'; \ echo 'opcache.revalidate_freq=2'; \ echo 'opcache.fast_shutdown=1'; \ echo 'opcache.enable_cli=1'; \ } > /usr/local/etc/php/conf.d/php-opocache-cfg.iniCOPY nginx-site.conf /etc/nginx/sites-enabled/defaultCOPY entrypoint.sh /etc/entrypoint.shCOPY --chown=www-data:www-data . /var/www/mysiteWORKDIR /var/www/mysiteEXPOSE 80 443ENTRYPOINT ["/etc/entrypoint.sh"] The nginx-site.conf file contains your Nginx http host configuration. The example below is for a Symfony app: server { root /var/www/mysite/web; include /etc/nginx/default.d/*.conf; index app.php index.php index.html index.htm; client_max_body_size 30m; location / { try_files $uri $uri/ /app.php$is_args$args; } location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^(.+?\.php)(/.*)$; # Mitigate https://httpoxy.org/ vulnerabilities fastcgi_param HTTP_PROXY ""; fastcgi_pass 127.0.0.1:9000; fastcgi_index app.php; include fastcgi.conf; }} The entrypoint.sh will run Nginx and php-fpm on container startup (otherwise only php-fpm will be started as the default action of the official PHP image): #!/usr/bin/env bashservice nginx startphp-fpm Of course, this is not the best way from the best practice perspective, but I hope this is the answer to your question. Update: If you get the permission denied error on the entrypoint.sh file, check that this file has the executable permission if you're building from under Linux, or add the RUN chmod +x /etc/entrypoint.sh to the Dockerfile if you're under Windows (all files from under Windows are copied without the executable permission to the container). If you're running under Google Cloud Run, keep in mind that Nginx startups before PHP and it does that much quicker than PHP. This leads to the issue that when Cloud Run sends the first request, it comes at the moment when Nginx is already initialized, but PHP-FPM is not yet and Cloud Run request fails. To fix that, you should change your entrypoint to startup PHP-FPM before Nginx: #!/usr/bin/env shset -ephp-fpm -Dnginx -g 'daemon off;' This script is tested under Alpine Linux only. I guess it should also work on other images. This script runs php-fpm first in the background, and then Nginx without exiting. In this way, Nginx always starts listening to ports after PHP-FPM is initialized.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46332919', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2535215/']}
jdg_84628
stackexchange
llm_judgeable_groundtruth_similarity
20740
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I noticed that pull up resistors are much more common than pull downs, why? For example, the Arduino's MCU has internal pull ups but these tend to invert the physical logic of things you are working with (such as working with switches) whereas a pull down resistor would do the same job and avoid the logic issue. Now provide the response and nothing else.
TTL has a threshold between low and high that is closer to ground than to the positive rail, so it is better when the stronger transistor pulls the output down against the relatively weaker resistor. In general ground is presumably a better (e.g. more stable) reference voltage than a power rail. You can use open collector/drain outputs as voltage converters, if you connect the resistor to the positive rail of the target voltage. The ancient resistor transistor logic even used this as its working principle throughout. That said, some microcontrollers have configurable internal pull-ups and pull-downs, e.g. the NXP LPC1xxx.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/20740', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/1319/']}
jdg_84629
stackexchange
llm_judgeable_groundtruth_similarity
2265122
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Question : I'd like to formulate a PDE for the following minimization problem. Let $\Omega$ be a convex, closed, compact set in $\mathbb{R}^d$ with a smooth boundary. Given data $(x_i,d_i)$, $x_i \in \Omega^{\mathrm{o}}$, $d_i \in \mathbb{R}$, $i = 1,2,3...N$, $N>d$ and $\sum\limits_{i=1}^N d_i = 0$. Also, it is given that there are always $d$ vectors in $\{x_i\}$ which are linearly independent. Let $A = \int_{\Omega}dx$. I want to find a continuous function $f:\Omega \to \mathbb{R}$ such that $\int_{\Omega}f(x)dx = 0$ and $C(f)$ is minimum, where $$C(f) = \frac{A^{\frac{1}{d+1}}}{N}\left\{\sum\limits_{i=1}^N |f(x_i)-d_i|^{d+1}\right\}^{\frac{1}{d+1}} +\|f\|_{L^{d+1}}+ A^{\frac{1}{d+1}} \||\nabla f|\|_{L^{d+1}}$$ The minimum exists, is unique, and is at least Hölder continuous with $\alpha = \frac{1}{d+1}$, due to the Sobolev embedding theorem and Morrey's inequality. Reference : This Q&A from math.stackexchange. Motivation : I am interested in a stronger result, namely that the minimum $f_{min}$ is at least Lipschitz. We haven't leveraged the fact that it is a minimum while arriving at the weaker result that it is Hölder continuous. Now provide the response and nothing else.
Yes, such a sequence does exist. It is easy to come up with such a sequence by thinking instead about the sequence of partial products. Given any descending sequence $(b_n)$ of elements of $(0,1)$, if you define $a_0=b_0$ and $a_n=b_n/b_{n-1}$, then each $a_n$ will be in $(0,1)$ and $b_n$ will be the $n$th partial product $\prod_{i=0}^n a_i$. So if you take $(b_n)$ to be any descending sequence of elements of $(0,1)$ converging to some $b>0$, this gives a sequence $(a_n)$ whose product is nonzero.
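For a concrete instance of that construction (my own worked example, with this particular sequence chosen just for illustration): take $$b_n = \frac{1}{2}\left(1 + 2^{-(n+1)}\right),$$ a descending sequence in $(0,1)$ converging to $\tfrac12$. Then $$a_0 = b_0 = \tfrac{3}{4}, \qquad a_n = \frac{b_n}{b_{n-1}} = \frac{2^{n+1}+1}{2^{n+1}+2} \in (0,1),$$ and by construction $$\prod_{i=0}^{n} a_i = b_n \longrightarrow \tfrac12 \neq 0.$$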
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2265122', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/2987/']}
jdg_84630
stackexchange
llm_judgeable_groundtruth_similarity
97315
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I was just wondering: I recently installed ack (ack-grep in Ubuntu), and I can't figure out how to just type in ack to get ack-grep (for some reason ack is taken up by a kanji translator thing). Is there a better way than just aliasing the thing in the shell (it is not global that way, and I cannot invoke it in vim, for example)? Thanks a lot! Now provide the response and nothing else.
You link or copy ack-grep to /usr/local/bin so that it takes priority in the PATH: sudo ln -sf /usr/bin/ack-grep /usr/local/bin/ack Or, like davey suggested, get rid of ack if you don't need it: sudo apt-get remove ack Another solution would involve using dpkg-divert to tell dpkg that you want all packages to have their /usr/bin/ack renamed locally, and then link ack-grep to ack in the PATH.
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/97315', 'https://serverfault.com', 'https://serverfault.com/users/17477/']}
jdg_84631
stackexchange
llm_judgeable_groundtruth_similarity
36530905
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been following this tutorial for beginners about Docker which basically instructs you to create an Apache container and map a localhost port to the one on the container. When I try localhost:80 it doesn't connect, although the container is up and running. I even made a rule in the firewall to allow connections to port 80, but couldn't connect to localhost. Any ideas? Now provide the response and nothing else.
On Windows/OS X, Docker is running inside a Linux virtual machine (Docker Toolbox) with a default IP address of 192.168.99.100. Thus, when you use docker run -p 80:80 to bind the container port to host port, it in fact binds to the virtual machine's port 80. Thus the address you need is http://192.168.99.100 . The 172.17.0.3 address is the address of the docker container inside that virtual machine, and is not accessible directly from Windows/OS X.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36530905', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1019741/']}
jdg_84632
stackexchange
llm_judgeable_groundtruth_similarity
38533786
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the script audiojs for changing the style of audio tags in HTML. When I go to my page it works without problems, but when I click a button to reload a div (this div contains audio tags) with the jQuery load function, the audiojs styling is removed. I tried to fetch the script again after loading the div with jQuery getScript , but the script loaded many times, which caused my browser to hang. This is the function that calls audiojs: audiojs.events.ready(function() { audiojs.createAll();}); I want a solution that calls this function one time, no more. Thanks. Now provide the response and nothing else.
Based on your final example, it seems most likely that your column is varchar but when you use the parameterized query the parameter is being sent in as nvarchar . Since nvarchar to varchar could involve data loss, SQL converts each value in the table to nvarchar for comparison. As you can imagine, converting every row for comparison is slow and prevents the use of the index. To work around this, you have two choices: If your database doesn't use nvarchar at all, you can simply change the mapping during application startup: Dapper.SqlMapper.AddTypeMap(typeof(string), System.Data.DbType.AnsiString); Otherwise you can change it per query: row = conn.Query<ReportView>("select * from ReportView where ID = @ID", new {ID = new DbString { Value = id, IsAnsi = true }}) .FirstOrDefault();
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38533786', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6471735/']}
jdg_84633
stackexchange
llm_judgeable_groundtruth_similarity
23335
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need to train a linear classifier on my laptop with hundreds of thousands of data points and about ten thousand features. What are my options? What is the state of the art for this type of problem? It seems like stochastic gradient descent is a promising direction, and my sense is that this is state of the art: "Pegasos: Primal Estimated sub-GrAdient SOlver for SVM", Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, Andrew Cotter. Mathematical Programming, Series B, 127(1):3-30, year: 2007. Is this the consensus? Should I be looking in some other direction? Now provide the response and nothing else.
I think you should look at Online Learning methods. The perceptron and the kernel perceptron are extremely easy to code and work extremely well in practice, and there are a whole host of other online methods. Note that any online learning method can be converted into a batch learning algorithm, in which case they closely resemble stochastic gradient descent methods. If you're using Matlab there's a really nice toolbox called DOGMA by Francesco Orabona, which contains a range of online learning algorithms, and you can evaluate a few different methods using that. I've used this in some of my research and found it to be very useful (note that as far as I remember it expects the data as [features x examples] so you might have to transpose it). As others have mentioned, you might want to try dimensionality reduction. PCA might not be such a good option here, as you have to compute the covariance matrix which will be very costly. You could try looking at Random Projections . The theory is tough, but the principle is very simple. It's based on the Johnson-Lindenstrauss Lemma if you're interested, but the basic idea is that if you randomly project to a lower dimensional space, then $\ell_2$ distances between points are preserved up to some $\epsilon$. If you're using an RBF kernel, then $\ell_2$ distances are all you are interested in!
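To make the random projection + online learning recipe above concrete, here is a small illustrative Python/NumPy sketch (my own toy example with made-up data sizes — not code from the DOGMA toolbox, and the perceptron here stands in for whichever online method you pick):

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "lots of points, ~10k features" (sizes are assumptions)
n, d, k = 5000, 2000, 200              # samples, original dims, projected dims
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true)                # linearly separable +/-1 labels

# Johnson-Lindenstrauss style random projection: l2 distances are roughly
# preserved, so training in the k-dimensional space is much cheaper.
R = rng.normal(size=(d, k)) / np.sqrt(k)
Xp = X @ R

# Plain online perceptron, a single pass over the (projected) data
w = np.zeros(k)
for xi, yi in zip(Xp, y):
    if yi * (xi @ w) <= 0:             # mistake-driven update
        w += yi * xi

print("training accuracy:", np.mean(np.sign(Xp @ w) == y))

Swapping the update inside the loop for a (sub)gradient step on the hinge loss turns the same single pass into stochastic-gradient / Pegasos-style training.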
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/23335', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/1540/']}
jdg_84634
stackexchange
llm_judgeable_groundtruth_similarity
36487310
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hope someone can help me! I have made a directive wrapping the Jasny Bootstrap plugin, more specifically the input-mask thing, and everything goes well! Now I have made a custom filter backed by moment to format the date field! The date format that I receive from my backend application is YYYY-MM-DD and I must show it on the view as DD/MM/YYYY... I've tried v-model="date | myDate" but it didn't work properly! JS Vue.directive('input-mask', { params: ['mask'], bind: function() { $(this.el).inputmask({ mask: this.params.mask }); },});Vue.filter('my-date', function(value, formatString) { if (value != undefined) return ''; if (formatString != undefined) return moment(value).format(formatString); return moment(value).format('DD/MM/YYYY');});var vm = new Vue({ el: 'body', data: { date: '2015-06-26', }}); HTML <label>Date</label><input type="text" class="form-control" v-input-mask mask="99/99/9999" v-model="date"><p>{{ date | myDate 'dd/mm/yyyy' }}</p> Here is the JSBin if somebody's interested! Thanks in advance! EDIT: Explaining better what I expect =) When the page first loads, the input receives the value 2015-06-26 and I would like to show that value as DD/MM/YYYY, so 26/06/2015! It works properly only after I start typing something! Now provide the response and nothing else.
I understand what you are trying to do, however, because of the two way binding when using v-model, it may be better to just format the date as you receive it from the server, and then, use it with the desired format in your front-end app ( 'DD/MM/YYYY' ). When sending the data back to the back-end, you just format it back to the desired server format ( 'YYYY-MM-DD' ). In your Vue app, the work flow would be something like this: new Vue({ el: 'body', data: { date: null, }, methods: { getDataFromServer: function() { // Ajax call to get data from server // Let's pretend the received date data was saved in a variable (serverDate) // We will hardcode it for this ex. var serverDate = '2015-06-26'; // Format it and save to vue data property this.date = this.frontEndDateFormat(serverDate); }, saveDataToServer: function() { // Format data first before sending it back to server var serverDate = this.backEndDateFormat(this.date); // Ajax call sending formatted data (serverDate) }, frontEndDateFormat: function(date) { return moment(date, 'YYYY-MM-DD').format('DD/MM/YYYY'); }, backEndDateFormat: function(date) { return moment(date, 'DD/MM/YYYY').format('YYYY-MM-DD'); } } }); This works well for me, hope it helps. Here is a fiddle for it: https://jsfiddle.net/crabbly/xoLwkog9/ Syntax UPDATE: ... methods: { getDataFromServer() { // Ajax call to get data from server // Let's pretend the received date data was saved in a variable (serverDate) // We will hardcode it for this ex. const serverDate = '2015-06-26' // Format it and save to vue data property this.date = this.frontEndDateFormat(serverDate) }, saveDataToServer() { // Format data first before sending it back to server const serverDate = this.backEndDateFormat(this.date) // Ajax call sending formatted data (serverDate) }, frontEndDateFormat(date) { return moment(date, 'YYYY-MM-DD').format('DD/MM/YYYY') }, backEndDateFormat(date) { return moment(date, 'DD/MM/YYYY').format('YYYY-MM-DD') } } })
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36487310', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2103584/']}
jdg_84635
stackexchange
llm_judgeable_groundtruth_similarity
61576373
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to have an InputSelect in a Blazor EditForm that is bound to a model value and also has an onchange event that changes other properties in the model based on the new value. Binding to both @bind-Value and @onchange does not work (I'm guessing because bind-Value uses both the value and the value-changed properties of the input). I can bind to oninput, but I would like to know if there is a better way to do this. <InputSelect id="inputPeriod" name="inputPeriod" class="form-control" @bind-Value="model.Period" @oninput="periodChanged">protected void periodChanged(ChangeEventArgs e) {} I can bind to oninput like this, but ideally I would like to bind to the @onchange event after the model property has been updated, or know what the best practice is for this. Without using bind-Value the model validation will not work, so the only alternative way I can think of is to have the change events working inside the properties in my model, but that seems wrong. Now provide the response and nothing else.
Here's a silly sample in which you have to enter your name, and then select your pet, the result of which is renaming you after your dear pet. The sample describes how you can manipulate your model when a field changes: <EditForm EditContext="@EditContext"><DataAnnotationsValidator /><div class="form-group"> <label for="name">Enter your Name: </label> <InputText Id="name" Class="form-control" @bind-Value="@person.Name"></InputText> <ValidationMessage For="@(() => person.Name)" /></div><div class="form-group"> <label for="body">Select your pet: </label> <InputSelect @bind-Value="@person.Pet"> <option value="Cat">Cat</option> <option value="Dog">Dog</option> <option value="Horse">Horse</option> <option value="Lion">Lion</option> </InputSelect> <ValidationMessage For="@(() => person.Pet)" /></div><p> <button type="submit">Submit</button></p></EditForm> @code { private EditContext EditContext; private Person person = new Person(); protected override void OnInitialized() { EditContext = new EditContext(person); EditContext.OnFieldChanged += EditContext_OnFieldChanged; base.OnInitialized(); } // Note: The OnFieldChanged event is raised for each field in the model private void EditContext_OnFieldChanged(object sender, FieldChangedEventArgs e) { if (e.FieldIdentifier.FieldName == "Pet") { person.Name = person.Pet; } } public class Person { public string ID { get; set; } public string Name { get; set; } public string Pet { get; set; } }} the only alternative way I can think of is to have the change events working inside the properties in my model, but that seems wrong Not at all... There's no requirement for models to implement INotifyPropertyChanged, but if they do, you can easily wire that up to the EditContext. Then you have no need to use the built-in Input* components - you can instead bind to regular HTML elements and still get modification notifications. This provides more flexibility in how the UI is rendered, at the cost of more complexity and boilerplate in your model classes. Hope this helps...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61576373', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1784280/']}
jdg_84636
stackexchange
llm_judgeable_groundtruth_similarity
44894652
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hi, I'm trying to add text to the 1st line of a file using sed. So far I've tried #!/bin/bashtouch testsed -i -e '1i/etc/example/live/example.com/fullchain.pem;\' test and this doesn't work. I also tried #!/bin/bashtouch testsed -i "1i ssl_certificate /etc/example/live/example.com/fullchain.pem;" test and this doesn't seem to work either. Oddly, when I try #!/bin/bashtouch testecho "ssl_certificate /etc/example/live/example.com/fullchain.pem;" > test I get the 1st line of text to appear when I use cat test , but as soon as I type sed -i "2i ssl_certificate_key /etc/example/live/example.com/privkey.pem;" I can't see the information that I expect on line 2, this being ssl_certificate_key /etc/example/live/example.com/privkey.pem; So my questions, to summarise: Can text be inserted into the 1st line of a newly created file using sed? If yes, what's the best way of inserting text after the 1st line of text? Now provide the response and nothing else.
Suppose you have a file like this: onetwo Then to append to the first line: $ sed '1 s_$_/etc/example/live/example.com/fullchain.pem;_' fileone/etc/example/live/example.com/fullchain.pem;two To insert before the first line: $ sed '1 i /etc/example/live/example.com/fullchain.pem;' file/etc/example/live/example.com/fullchain.pem;onetwo Or, to append after the first line: $ sed '1 a /etc/example/live/example.com/fullchain.pem;' fileone/etc/example/live/example.com/fullchain.pem;two Note the number 1 in those sed expressions - that's called the address in sed terminology. It tells you on which line the command that follows is to operate. If your file doesn't contain the line you're addressing, the sed command won't get executed. That's why you can't insert/append on line 1, if your file is empty. Instead of using stream editor , to append (to empty files), just use a shell redirection >> : echo "content" >> file
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44894652', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3700919/']}
jdg_84637
stackexchange
llm_judgeable_groundtruth_similarity
34219097
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know it is not a good practice, but I need to write operational decision results into a remote database at the end of execution of the ruleflow. Does IBM ODM 8.7 provide any mechanism to achieve that? Any workaround? Any help? Now provide the response and nothing else.
A simple way would be to create a Pattern with the line separator and split the input String as a Stream . Then, each line is split with a space (keeping only 2 parts) and mapped to a MyObject . Finally, an array is constructed with the result. public static void main(String[] args) { String str = "row1col1 row2col2\r\nrow2col1 row2col2\r\nrow3col1 row3col2"; MyObject[] array = Pattern.compile(System.lineSeparator(), Pattern.LITERAL) .splitAsStream(str) .map(s -> s.split("\\s+", 2)) .map(a -> new MyObject(a[0], a[1])) .toArray(MyObject[]::new); System.out.println(Arrays.toString(array));} Using splitAsStream can be advantageous over Stream.of(...) if the input String is long. I assumed in the code that the line separator of the String was the default line separator ( System.lineSeparator() ) of the OS but you can change that if it is not. Instead, if you are reading from a file, you could use Files.lines() to get a hold of a Stream of all the lines in the file: MyObject[] array = Files.lines(path) .map(s -> s.split("\\s+", 2)) .map(a -> new MyObject(a[0], a[1])) .toArray(MyObject[]::new);System.out.println(Arrays.toString(array));
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34219097', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5396696/']}
jdg_84638
stackexchange
llm_judgeable_groundtruth_similarity
2301259
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it necessary that a composite-id should be mapped to a class? Can it be like this? <composite-id> <key-property=..../> <key-property=..../></composite-id> or should it be <composite-id class=....> <key-property=..../> <key-property=..../></composite-id> Also, is it necessary that if we have a composite key, then that class should override the equals() and hashCode() methods? Now provide the response and nothing else.
Hibernate needs to be able to compare and serialize identifiers. So the identifier class must be serializable, and override hashCode() and equals() consistently with the database's notion of composite key equality. If you have a composite id mapped as properties of the entity, the entity itself is the identifier. A second approach is called a mapped composite identifier, where the identifier properties named inside the <composite-id> element are duplicated on both the persistent class and a separate identifier class. Finally, a composite-id may be a component class. In this case the component class is the identifier class. Note that it is strongly recommended to have the ID as a separate class. Otherwise you will have only very awkward ways to look up your object using session.get() or session.load(). Relevant sections of the Reference Documentation: composite-id Components as composite identifiers In this example, a composite-id is mapped as properties of the entity. (The following assumes you are defining the EmployeeAssignment class.) <composite-id> <key-property name="EmployeeNumber"/> <key-property name="Department"/></composite-id>class EmployeeAssignment implements Serializable{ string getEmployeeNumber() void setEmployeeNumber( string value ) string getDepartment() void setDepartment( string value ) boolean equals( Object obj ) int hashCode()} A mapped composite-id: <composite-id class="EmployeeAssignmentId" mapped="true"> <key-property name="EmployeeNumber"/> <key-property name="Department"/></composite-id>class EmployeeAssignment{ string getEmployeeNumber() void setEmployeeNumber( string value ) string getDepartment() void setDepartment( string value )}class EmployeeAssignmentId implements Serializable{ string getEmployeeNumber() void setEmployeeNumber( string value ) string getDepartment() void setDepartment( string value ) boolean equals( Object obj ) int hashCode()} A component as a composite-id: <composite-id name="Id" class="EmployeeAssignmentId"> <key-property name="EmployeeNumber"/> <key-property name="Department"/></composite-id>class EmployeeAssignment{ EmployeeAssignmentId getId() void setId( EmployeeAssignmentId value )}class EmployeeAssignmentId implements Serializable{ string getEmployeeNumber() void setEmployeeNumber( string value ) string getDepartment() void setDepartment( string value ) boolean equals( Object obj ) int hashCode()}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2301259', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/275964/']}
jdg_84639
stackexchange
llm_judgeable_groundtruth_similarity
18753
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Consider a compact manifold M . For a vector field X on M , let $\phi_X$ denote the diffeomorphism of M given by the time 1 flow of X . If X and Y are two vector fields, is $\phi_X \circ \phi_Y$ necessarily of the form $\phi_Z$ for some vector field Z ? Since $X\mapsto \phi_X$ can be thought of as the exponential map from the Lie algebra of vector fields to the group of diffeomorphisms, an obvious candidate is that Z should be given by the Baker-Campbell-Hausdorff formula $B(X, Y) = X+Y+\frac{1}{2}[X,Y]+\cdots$. But does this hold in this infinite-dimensional setting? If so, in which sense does the series converge to Z ? Also, I'm interested in the case where M is a symplectic manifold and we consider only symplectic vector fields (ie. vector fields for which the contraction with the symplectic form is a closed 1-form). Locally, X and Y are the Hamiltonian vector fields associated to smooth functions f and g , so I assume that asking whether B(X, Y) makes sense/is symplectic corresponds to asking whether B(f, g) makes sense/defines a smooth function (where, of course, we use the Poisson bracket in the expansion of B(f, g) ). The right-hand side of B(f,g) consists of lots of iterated directional derivatives of f and g in the X f and X g directions; it is not clear to me that the coefficients in the BCH formula make the series converge (uniformly, say) for any choice of f and g . Now provide the response and nothing else.
To answer your first question, the composition of two time-1 flows won't necessarily be another time-1 flow. One way to see this is to note that when a time-1 flow $\phi_X$ has a periodic point $P$ (period > 1), then $P$ can't be hyperbolic since it lies on a closed orbit of the flow for $X$. (The eigenvector of $D\phi_X$ tangent to this orbit has corresponding eigenvalue 1.) Now, take a flow on $S^2$ whose time-1 map rotates the sphere, switching the north and south poles. Take a second flow for which both poles are hyperbolic attracting fixed points. Composing the two time-1 maps gives you a new diffeomorphism with hyperbolic points of period 2.
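To spell out the eigenvalue claim in that parenthetical remark (my own elaboration of the argument, not part of the original answer): since $\phi_X$ commutes with every time-$s$ map $\phi^X_s$ of the flow of $X$, differentiating $\phi_X\circ\phi^X_s=\phi^X_s\circ\phi_X$ with respect to $s$ at $s=0$ gives $$D\phi_X(p)\,X(p)=X\bigl(\phi_X(p)\bigr).$$ If $p$ has least period $n>1$ under $\phi_X$, then $p$ is not fixed, hence $X(p)\neq 0$ (an equilibrium of $X$ would be fixed by $\phi_X$), and iterating the identity gives $$D\phi_X^{\,n}(p)\,X(p)=X\bigl(\phi_X^{\,n}(p)\bigr)=X(p),$$ so $X(p)$ is an eigenvector of $D\phi_X^{\,n}(p)$ with eigenvalue $1$ and $p$ cannot be a hyperbolic periodic point.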
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/18753', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4747/']}
jdg_84640