source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 904 |
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to generate periodic (daily, monthly) web analytics dashboard reports. They will be static and don't require interaction, so imagine a PDF file as the target output. The reports will mix tables and charts (mainly sparkline and bullet graphs created with ggplot2). Think Stephen Few/Perceptual Edge style dashboards, such as: but applied to web analytics. Any suggestions on what packages to use creating these dashboard reports? My first intuition is to use R markdown and knitr, but perhaps you've found a better solution. I can't seem to find rich examples of dashboards generated from R.
Now provide the response and nothing else.
| I think that Shiny is an overkill in this situation and doesn't match your requirement of dashboard reports to be static . I guess, that your use of the term "dashboard" is a bit confusing, as some people might consider that it has more emphasis of interactivity ( real-time dashboards ), rather than information layout , as is my understanding (confirmed by the "static" requirement). My recommendation to you is to use R Markdown and knitr , especially since these packages have much lower learning curve than Shiny . Moreover, I have recently run across an R package, which, in my view, ideally suits your requirement of embedding small charts/plots in a report, as presented on your picture above. This package generates static or dynamic graphical tables and is called sparkTable ( http://cran.r-project.org/web/packages/sparkTable ). Its vignette is available here (there is no link to it on the package's home page): http://publik.tuwien.ac.at/files/PubDat_228663.pdf . Should you ever need some interactivity , sparkTable provides some via its simple interface to Shiny . | {} | {'log_upvote_score': 4, 'links': ['https://datascience.stackexchange.com/questions/904', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/-1/']} | jdg_75941 |
stackexchange | llm_judgeable_groundtruth_similarity | 1555833 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I had a question about the uniqueness of group elements. Let the Klein Four group be defined as the group generated by the elements ${1,a,b,c}$ such that $a^2=b^2=c^2=1$ and $ab=c$, $bc=a$, $ca=b$, and $1$ is the identity element. Let the "Klein Five" group be defined as the group generated by the elements ${1,a,b,c,d}$ such that $a^2=b^2=c^2=d^2=1$ and $ab=c$, $bc=d$, $cd=a$, $da=b$, and $1$ is the identity element. If I manipulate the symbols of the "Klein Five" group I defined above, I can show every element is equivalent to the identity. From $ab=c=ad$ I can see $a$ is the identity, from $bc=d=ba$ I can see $b$ is the identity, and so on. This gives that $a=b=c=d=1$. In some sense, this group doesn't seem to exist. I can't make a group such that $a \neq b \neq c \neq d \neq 1$ with the constraints above. How do I know that the same isn't true of the Klein Four group? How do I know there isn't some set of constraints that makes the group "non-existent" for unique elements in the same way as the "Klein Five" group? Any help would be appreciated!
Now provide the response and nothing else.
| You can simply write out the full list of elements and multiplication table of the Klein Four group, and check that it really satisfies the group axioms (and that $1$, $a$, $b$, $c$, are distinct elements and in fact are all the elements of the group). If you tried to do this with the "Klein Five" group, you would run into trouble: for any multiplication table you tried to write down with $1$, $a$, $b$, $c$, and $d$ as distinct elements which satisfied your equations, it would fail to satisfy the group axioms (by the argument you give). In general, though, it is a very hard problem to identify whether a group presentation (i.e., a list of generators and relations between them) describes only the trivial group. In fact, it is so hard that you can prove it is unsolvable in general, in the sense that there exists no algorithm that takes a finite presentation of a group and decides whether the group is trivial (this is a variant of the so-called "word problem" ). | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1555833', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/51091/']} | jdg_75942 |
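To make the first suggestion above concrete (this table is added here for illustration and is not part of the original answer): with elements $1, a, b, c$ satisfying $a^2=b^2=c^2=1$, $ab=c$, $bc=a$, $ca=b$, the full multiplication table of the Klein four-group is

$$\begin{array}{c|cccc} \cdot & 1 & a & b & c \\ \hline 1 & 1 & a & b & c \\ a & a & 1 & c & b \\ b & b & c & 1 & a \\ c & c & b & a & 1 \end{array}$$

Every row and column contains each element exactly once, the products stay inside $\{1,a,b,c\}$, and every element is its own inverse, so the relations never force a collapse to the trivial group, unlike in the "Klein Five" presentation from the question.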
stackexchange | llm_judgeable_groundtruth_similarity | 120971 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given two permutations, I'm asked to determine whether they are conjugate permutations. The two permutations are: $\alpha=(12)(345)(78)$, $\beta=(162)(35)(89)$. Definition: Two permutations $\sigma,\sigma'\in S_n$ are conjugate if there exists $\tau \in S_n$ such that $\sigma'=\tau\sigma\tau^{-1} = (\tau(a_0),\tau(a_1)\ldots \tau(a_k))$, where $\alpha=(a_0a_1\ldots a_k)$. It took me a long time to find the correct $\tau$ that would produce the exact $\beta$, which is: $$\tau=(13)(25)(46)(789)$$ So if we want to produce our $\beta$ we can do the following: $$(\tau(1),\tau(2))(\tau(3)\tau(4)\tau(5))(\tau(7)\tau(8))=(35)(162)(89)$$ and indeed, they ($\alpha$ and $\beta$) are conjugate. My question, after this "long" post, is rather simple: is there a simple way to compute the $\tau$? Regards
Now provide the response and nothing else.
| Write the two permutations in full cycle notation, writing cycles from longest to shortest (cycles of the same length can be ordered arbitrarily, the starting number of cycle can be chosen arbitrarily from within the cycle). In your example:$$\begin{align*}\alpha &= (3,4,5)(1,2)(7,8)(6)(9) \\\beta &= (1,6,2)(3,5)(8,9)(4)(7)\end{align*} \text{ so }\tau = \begin{bmatrix} 3 & 4 & 5 & 1 & 2 & 7 & 8 & 6 & 9 \\ 1 & 6 & 2 & 3 & 5 & 8 & 9 & 4 & 7\end{bmatrix}$$ In other words, $\tau(3)=1, \tau(4)=6$, etc. You can convert $\tau$ to cycle notation in the obvious way by "tracing", $\tau=(3,1)(4,6)(5,2)(7,8,9) = (1,3)(2,5)(4,6)(7,8,9)$. Another $\tau$ is found by: $$\begin{align*}\alpha &= (3,4,5)(7,8)(2,1)(9)(6) \\\beta &= (1,6,2)(5,3)(8,9)(4)(7)\end{align*} \text{ so }\tau = \begin{bmatrix} 3 & 4 & 5 & 7 & 8 & 2 & 1 & 9 & 6 \\ 1 & 6 & 2 & 5 & 3 & 8 & 9 & 4 & 7\end{bmatrix}$$ and $\tau= (3,1,9,4,6,7,5,2,8)$. Since you can reorder the cycles of the same length, and since you can "cycle" a cycle as much as you want, you actually get many different $\tau$, in fact an entire coset of a centralizer. One way to calculate the centralizer of $\alpha$ is to apply this procedure when $\alpha = \beta$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/120971', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23842/']} | jdg_75943 |
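As a programmatic cross-check of the procedure above (an illustration added here, not part of the original answer), a few lines of Python confirm that the $\tau$ from the question really conjugates $\alpha$ into $\beta$; the helper functions `perm`, `compose` and `inverse` are ad-hoc names for this sketch.

```python
# Verify that tau o alpha o tau^(-1) == beta, with (f o g)(x) = f(g(x)).
def perm(cycles, n=9):
    p = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b          # each cycle sends a -> b
    return p

def compose(f, g):
    return {x: f[g[x]] for x in g}

def inverse(f):
    return {v: k for k, v in f.items()}

alpha = perm([(1, 2), (3, 4, 5), (7, 8)])
beta  = perm([(1, 6, 2), (3, 5), (8, 9)])
tau   = perm([(1, 3), (2, 5), (4, 6), (7, 8, 9)])

assert compose(compose(tau, alpha), inverse(tau)) == beta
print("tau conjugates alpha to beta")
```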
stackexchange | llm_judgeable_groundtruth_similarity | 8233416 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to pass an object from my main view class to other notification receiver in another class. I want to pass an object named country, that loads all the cities from an SOAP Request in the Main Controller and i want to send it to my next view. country = [[Country alloc] init]; Country header: @interface Country : NSObject{ NSString *name; NSMutableArray *cities;}@property (nonatomic,retain) NSString *name;- (void)addCity:(Cities *)city;- (NSArray *)getCities;- (int)citiesCount; @end I found a way to pass data with NSNotificatios is using a NSDictionary in UserInfo. But its not possible to send the whole object instead of converting to an NSDictionary? Or what's the best way to transfer it? Im stuck trying to figure out how to pass the objects. Actually i got working this simple NSNotification on my App. NSNotification in the Main View Controller implementation: //---Call the next View---DetailViewController *detail = [self.storyboardinstantiateViewControllerWithIdentifier:@"Detail"];[self.navigationController pushViewController:detail animated:YES]; //--Transfer Data to2View [[NSNotificationCenter defaultCenter] postNotificationName:@"citiesListComplete" object:nil]; NSNotification in 2View Controller implementation: // Check if MSG is RECEIVE- (void)checkMSG:(NSNotification *)note { NSLog(@"Received Notification");}- (void)viewDidLoad{ [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(checkMSG:) name:@"citiesListComplete" object:nil];
Now provide the response and nothing else.
| Oooooo, so close. I have a feeling you do not understand what an NSDictionary is though. Post your notification with this: Country *country = [[[Country alloc] init] autorelease];//Populate the country object however you wantNSDictionary *dictionary = [NSDictionary dictionaryWithObject:country forKey:@"Country"];[[NSNotificationCenter defaultCenter] postNotificationName:@"citiesListComplete" object:nil userInfo:dictionary]; then get the country object like this: - (void)checkMSG:(NSNotification *)note { Country *country = [[note userInfo] valueForKey:@"Country"]; NSLog(@"Received Notification - Country = %@", country);} You don't need to convert your object into a NSDictionary . Instead, you need to send a NSDictionary with your object. This allows you to send lots of information, all based on keys in the NSDictionary , with your NSNotification . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8233416', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1060541/']} | jdg_75944 |
stackexchange | llm_judgeable_groundtruth_similarity | 304287 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the correct way to initialize biases in convolutional neural networks ( tf.zeros , tf.truncated_normal , tf.random_normal ), and why? Should biases be constant? Do we even need biases in a deep neural network (like VGG-16)? In a siamese neural network, do we also share the biases along with the weights?
Now provide the response and nothing else.
| Just noting that the answer to this question suggests setting CNN biases to 0, quoting CS231n Stanford course: Initializing the biases. It is possible and common to initialize the biases to be zero, since the asymmetry breaking is provided by the small random numbers in the weights. For ReLU non-linearities, some people like to use small constant value such as 0.01 for all biases because this ensures that all ReLU units fire in the beginning and therefore obtain and propagate some gradient. However, it is not clear if this provides a consistent improvement (in fact some results seem to indicate that this performs worse) and it is more common to simply use 0 bias initialization. source: http://cs231n.github.io/neural-networks-2/ | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/304287', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/178037/']} | jdg_75945 |
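As a concrete illustration of the quoted convention (a minimal sketch, not taken from the course notes), one way to initialize a convolutional layer's parameters in plain NumPy is to draw small random weights and set the biases to zero; the He-style scaling factor used here is an assumption of this example, not something the quote prescribes.

```python
import numpy as np

def init_conv_params(in_channels, out_channels, k, seed=0):
    """Small random weights break symmetry; biases start at zero."""
    rng = np.random.default_rng(seed)
    fan_in = in_channels * k * k
    w = rng.standard_normal((out_channels, in_channels, k, k)) * np.sqrt(2.0 / fan_in)
    b = np.zeros(out_channels)   # zero bias, per the quoted recommendation
    return w, b

w, b = init_conv_params(in_channels=3, out_channels=64, k=3)
print(w.shape, b.shape)  # (64, 3, 3, 3) (64,)
```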
stackexchange | llm_judgeable_groundtruth_similarity | 15410110 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My case is the following. My team uses TFS 2012 for source control. My teammate has created a branch from a particular changeset (not the latest one at that moment) of another branch. What I need is to figure out from which exact changeset the branch was created, and which branch was used. I tried to find it in "View History" of Source Control Explorer in VS. To compare with svn: there, the revision that creates a branch has a property that stores the initial revision and initial location from which the branch was created. In TFS, changeset details look like they do not contain such information.
Now provide the response and nothing else.
| Yes, there is. discens is right, you might use Track Changeset functionality. By the way, a similar question has already been answered on SO, here it is . The answer contains a link to a blogpost which contains many details, including API and a custom command-line tool. However, the way is not so obvious. To clarify this, here is a brief step-by-step instruction of how to achieve the result in Source Control Explorer : Locate your child branch in Source Control Explorer Right-click on it to get context menu, select View History . Locate the first changeset of the branch in the History viewer, right-click on it to get context menu, select Track Changeset . You will see the scheme of branches and the structure of their inheritance. On this step, you are finding out the parent . There is a checkbox near every detected branch, but the parent one of your child is normally unchecked. Check it first. The child branch has to be checked unconditionally, so the two checkboxes to be selected are the parent and the child. Don't forget to locate your child branch in Path filter input field using Browse... button. Press Visualize button at the bottom. You will see the picture with rectangles repesenting these two branches. The parent branch rectangle should include sorted list of changesets in this branch. The list can be huge, but the source changeset (from which your child has been originally created) should be the last. ???? PROFIT!!!! You can see a sample screenshot of the final steps in the blogpost . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15410110', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2169892/']} | jdg_75946 |
stackexchange | llm_judgeable_groundtruth_similarity | 26370 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have just read an article about it here( http://www.greatdreams.com/2012.htm )(Scroll to bottom). I'm just a website developer, and I really want to know if this is really true. I'm curious. I'm pasting the content from it: It turns out that our solar system appears to belong to another galaxy that is colliding with the Milky Way. This was recently discovered when scientists were trying to figure sources for "dark matter" that would account for forces we can measure but not see visibly. Using near-infrared (wavelengths of light outside human eye and optical telescopes) a huge sister galaxy circling the Milky Way was discovered. It's called the Sagittarius dwarf galaxy (SGR for short). For those keen on the 2012 data, this is the reason our entry point to the Rift, center, heart (HunabKu) of the Milky Way is thru Sagittarius. The two collide at that point. This explains why our solar system is at an angle to the plane of the galaxy and also why we dip above and below that center line every 12,000 yrs or so.
Now provide the response and nothing else.
| Well, first of all, the entire site dedicated to the 2012 nonsense is a total hoax... I suggest that you check out this site for more information regarding the weakness and outright lies of that hoax . To address the copy/pasted nonsense... The charlatans at the site you reference have taken real terms, and mixed them up in a word salad as to make any lies or fantastic tales they tell seem plausible. For instance, the Sagittarius Dwarf Galaxy is indeed a real thing (although its discovery wasn't specifically tied to dark matter ). The dwarf galaxies that are around the milky way are not going to cause any particularly disturbing collisions in the near future. Most of them just pass through the milky way on their regular orbits. The most significant collision will take place in about 3 billion years when the Andromeda galaxy and our galaxy collide. However, when galaxies collide, it's really just a gravitational interaction. Very few (if any) actual stars hit each other). Also, the solar system is part of the Milky Way, and from everything we know about it, it has always been part of the milky way. It may get ejected in 3 billion years, but until then it shall remain part of the milky way. The second paragraph you quoted is total nonsense (above and beyond the regular nonsense of that entire site). | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/26370', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']} | jdg_75947 |
stackexchange | llm_judgeable_groundtruth_similarity | 48478869 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
For the last 3 months or so I'm having random errors where I can't bind a specific port where our Identity server is running on my local development workstation. At first I thought it's my broken machine, so I reset everything, which kinda fixed the issue for 2 months and now it is back. In the meanwhile other developers saw the same issue. All of us who experience the issue are running Windows 10, but not everyone with Windows 10 has that issue. Restarting windows after it was shut down with shutdown /s /f /t 0 is the only reliable solution to open up the port again. After I restart normally (due to windows updates or similar) the issue comes up again an I need to shut down windows with that command. I wrote a small f# script to test which ports are affected. The exact error message I get is An attempt was made to access a socket in a way forbidden by its access permissions When I looked for it, the suggested solutions where to restart the machine, but that seems more like a work around, not a real solution. I've also found https://stackoverflow.com/a/10461978/621366 - but netstat -o doesn't list the port, neither does the tool TCPView. All of us are pretty sure that the port isn't occupied by anything. I also tried running netstat in an administrator console and similar commands inside bash on windows, but couldn't find anything. I can't even connect to the port via telnet, it says nothing is listening on the port. those ports in question are for instance: 49670 - 49689 49710 - 49749 49760 - 49779 49811 - 49830 49843 - 49882 50197 - 50216 None of us modified anything on the windows firewall or has any additional anti virus tools installed except the windows 10 default ones. So everything should be on default values. And it also worked normally for ~10 months before it broke the first time and afterwards for 2 months. In both cases after some windows updates where installed. The last time it was a bios update (probably due to the meltdown / spectre issues?). Also trying to open up the ports explicitly on the firewall didn't help. According to this answer https://stackoverflow.com/a/23982901/621366 TCPView and netstat shouldn't miss any occupied ports, but even when I enable showing unconnected endpoints in TCPView, I don't see any of the ports where I get permission denied when trying to bind them. Here a screen from the occupied ports (I marked the bordering occupied ports which are right before or after the group of permission-denied-ports) UPDATE: I've noticed that it always seems to be 160 or 180 (exact numbers) of ports which have permission denied in the ranges of 40,000+ This seems oddly coincidental to me, so obviously something is occupying the ports on purpose, but what? I can't seem to find anything in the windows event logs (although I wouldn't know what to look for exactly) and none of those ports shows up any any of my firewall rules. Also shutting down docker for windows doesn't make any difference and when a colleague mentioned that for them it's enough to restart docker for windows (in the UI go to Reset->Restart) and right now for me, even restarting with the shutdown command doesn't work anymore. 
UPDATE 2: The output of netstat -ano run from an administrator powershell: Proto Local Address Foreign Address State PIDTCP 0.0.0.0:135 0.0.0.0:0 LISTENING 1152TCP 0.0.0.0:445 0.0.0.0:0 LISTENING 4TCP 0.0.0.0:2179 0.0.0.0:0 LISTENING 4696TCP 0.0.0.0:5040 0.0.0.0:0 LISTENING 6616TCP 0.0.0.0:5357 0.0.0.0:0 LISTENING 4TCP 0.0.0.0:5432 0.0.0.0:0 LISTENING 11100TCP 0.0.0.0:7680 0.0.0.0:0 LISTENING 7056TCP 0.0.0.0:17500 0.0.0.0:0 LISTENING 9668TCP 0.0.0.0:49664 0.0.0.0:0 LISTENING 784TCP 0.0.0.0:49665 0.0.0.0:0 LISTENING 1628TCP 0.0.0.0:49666 0.0.0.0:0 LISTENING 2028TCP 0.0.0.0:49667 0.0.0.0:0 LISTENING 3560TCP 0.0.0.0:49800 0.0.0.0:0 LISTENING 856TCP 0.0.0.0:49821 0.0.0.0:0 LISTENING 892TCP 0.0.0.0:50000 0.0.0.0:0 LISTENING 11100TCP 0.0.0.0:50001 0.0.0.0:0 LISTENING 11100TCP 0.0.0.0:51000 0.0.0.0:0 LISTENING 11100TCP 10.0.75.1:139 0.0.0.0:0 LISTENING 4TCP 10.0.75.1:445 10.0.75.2:44848 ESTABLISHED 4TCP 127.0.0.1:843 0.0.0.0:0 LISTENING 9668TCP 127.0.0.1:944 0.0.0.0:0 LISTENING 688TCP 127.0.0.1:944 127.0.0.1:50968 TIME_WAIT 0TCP 127.0.0.1:944 127.0.0.1:50970 TIME_WAIT 0TCP 127.0.0.1:944 127.0.0.1:50973 TIME_WAIT 0TCP 127.0.0.1:944 127.0.0.1:50977 TIME_WAIT 0TCP 127.0.0.1:944 127.0.0.1:50981 TIME_WAIT 0TCP 127.0.0.1:944 127.0.0.1:50990 TIME_WAIT 0TCP 127.0.0.1:944 127.0.0.1:50992 TIME_WAIT 0TCP 127.0.0.1:944 127.0.0.1:50996 TIME_WAIT 0TCP 127.0.0.1:944 127.0.0.1:51005 TIME_WAIT 0TCP 127.0.0.1:944 127.0.0.1:51007 TIME_WAIT 0TCP 127.0.0.1:944 127.0.0.1:51009 TIME_WAIT 0TCP 127.0.0.1:944 127.0.0.1:51015 TIME_WAIT 0TCP 127.0.0.1:944 127.0.0.1:51017 ESTABLISHED 688TCP 127.0.0.1:4380 0.0.0.0:0 LISTENING 11024TCP 127.0.0.1:6942 0.0.0.0:0 LISTENING 9296TCP 127.0.0.1:17600 0.0.0.0:0 LISTENING 9668TCP 127.0.0.1:49668 127.0.0.1:49669 ESTABLISHED 688TCP 127.0.0.1:49669 127.0.0.1:49668 ESTABLISHED 688TCP 127.0.0.1:50076 127.0.0.1:50077 ESTABLISHED 8828TCP 127.0.0.1:50077 127.0.0.1:50076 ESTABLISHED 8828TCP 127.0.0.1:50173 127.0.0.1:50174 ESTABLISHED 9668TCP 127.0.0.1:50174 127.0.0.1:50173 ESTABLISHED 9668TCP 127.0.0.1:50175 127.0.0.1:50176 ESTABLISHED 9668TCP 127.0.0.1:50176 127.0.0.1:50175 ESTABLISHED 9668TCP 127.0.0.1:50197 127.0.0.1:50198 ESTABLISHED 9668TCP 127.0.0.1:50198 127.0.0.1:50197 ESTABLISHED 9668TCP 127.0.0.1:50335 127.0.0.1:50336 ESTABLISHED 6424TCP 127.0.0.1:50336 127.0.0.1:50335 ESTABLISHED 6424TCP 127.0.0.1:50346 127.0.0.1:50347 ESTABLISHED 11100TCP 127.0.0.1:50347 127.0.0.1:50346 ESTABLISHED 11100TCP 127.0.0.1:51011 127.0.0.1:51012 ESTABLISHED 9296TCP 127.0.0.1:51012 127.0.0.1:51011 ESTABLISHED 9296TCP 127.0.0.1:51013 127.0.0.1:51014 ESTABLISHED 9296TCP 127.0.0.1:51014 127.0.0.1:51013 ESTABLISHED 9296TCP 127.0.0.1:51016 0.0.0.0:0 LISTENING 9296TCP 127.0.0.1:51017 127.0.0.1:944 ESTABLISHED 8828TCP 127.0.0.1:63342 0.0.0.0:0 LISTENING 9296TCP 127.94.0.1:946 0.0.0.0:0 LISTENING 688TCP 127.94.0.2:946 0.0.0.0:0 LISTENING 688TCP 127.94.0.3:946 0.0.0.0:0 LISTENING 688TCP 127.94.0.4:946 0.0.0.0:0 LISTENING 688TCP 169.254.105.83:139 0.0.0.0:0 LISTENING 4TCP 192.168.0.107:139 0.0.0.0:0 LISTENING 4TCP 192.168.0.107:49415 111.221.29.134:443 ESTABLISHED 4316TCP 192.168.0.107:49417 111.221.29.127:443 ESTABLISHED 4316TCP 192.168.0.107:50185 162.125.66.3:443 CLOSE_WAIT 9668TCP 192.168.0.107:50246 52.70.31.26:443 CLOSE_WAIT 9668TCP 192.168.0.107:50253 35.177.204.73:443 ESTABLISHED 2804TCP 192.168.0.107:50254 35.177.204.73:443 ESTABLISHED 2804TCP 192.168.0.107:50256 35.177.204.73:443 ESTABLISHED 2804TCP 192.168.0.107:50257 158.85.224.175:443 ESTABLISHED 10836TCP 192.168.0.107:50258 13.69.14.160:443 ESTABLISHED 
8620TCP 192.168.0.107:50310 66.102.1.188:443 ESTABLISHED 11184TCP 192.168.0.107:50329 157.240.20.15:443 ESTABLISHED 10836TCP 192.168.0.107:50331 111.221.29.74:443 ESTABLISHED 10072TCP 192.168.0.107:50332 162.125.18.133:443 ESTABLISHED 9668TCP 192.168.0.107:50351 40.77.226.194:443 ESTABLISHED 8620TCP 192.168.0.107:50460 66.102.1.189:443 ESTABLISHED 10836TCP 192.168.0.107:50470 66.102.1.189:443 ESTABLISHED 10836TCP 192.168.0.107:50501 192.30.253.125:443 ESTABLISHED 11184TCP 192.168.0.107:50513 40.77.226.194:443 ESTABLISHED 8620TCP 192.168.0.107:50529 87.98.218.198:443 ESTABLISHED 12540TCP 192.168.0.107:50530 172.217.21.46:443 ESTABLISHED 10836TCP 192.168.0.107:50616 172.217.21.46:443 ESTABLISHED 10836TCP 192.168.0.107:50630 162.125.18.133:443 ESTABLISHED 9668TCP 192.168.0.107:50641 172.217.21.37:443 ESTABLISHED 10836TCP 192.168.0.107:50645 162.125.66.4:443 CLOSE_WAIT 9668TCP 192.168.0.107:50668 87.98.218.198:443 ESTABLISHED 12540TCP 192.168.0.107:50703 151.101.0.133:443 ESTABLISHED 11184TCP 192.168.0.107:50709 192.30.253.125:443 ESTABLISHED 11184TCP 192.168.0.107:50744 87.98.218.198:443 ESTABLISHED 12540TCP 192.168.0.107:50828 162.125.66.3:443 CLOSE_WAIT 9668TCP 192.168.0.107:50830 151.101.0.133:443 ESTABLISHED 11184TCP 192.168.0.107:50831 151.101.0.133:443 ESTABLISHED 11184TCP 192.168.0.107:50832 151.101.0.133:443 ESTABLISHED 11184TCP 192.168.0.107:50834 151.101.0.133:443 ESTABLISHED 11184TCP 192.168.0.107:50835 151.101.0.133:443 ESTABLISHED 11184TCP 192.168.0.107:50836 151.101.0.133:443 ESTABLISHED 11184TCP 192.168.0.107:50837 151.101.0.133:443 ESTABLISHED 11184TCP 192.168.0.107:50839 192.30.253.125:443 ESTABLISHED 11184TCP 192.168.0.107:50844 151.101.0.133:443 ESTABLISHED 11184TCP 192.168.0.107:50847 192.30.253.124:443 ESTABLISHED 11184TCP 192.168.0.107:50857 192.30.253.124:443 ESTABLISHED 11184TCP 192.168.0.107:50863 162.125.34.137:443 CLOSE_WAIT 9668TCP 192.168.0.107:50865 172.217.21.46:443 TIME_WAIT 0TCP 192.168.0.107:50866 172.217.21.46:443 ESTABLISHED 10836TCP 192.168.0.107:50910 35.186.213.138:443 TIME_WAIT 0TCP 192.168.0.107:50923 172.217.21.46:443 ESTABLISHED 10836TCP 192.168.0.107:50925 40.117.190.72:443 ESTABLISHED 4040TCP 192.168.0.107:50927 172.217.21.42:443 ESTABLISHED 11184TCP 192.168.0.107:50949 151.101.0.133:443 ESTABLISHED 11184TCP 192.168.0.107:50950 151.101.0.133:443 ESTABLISHED 11184TCP 192.168.0.107:50951 151.101.0.133:443 ESTABLISHED 11184TCP 192.168.0.107:50954 192.30.253.124:443 ESTABLISHED 11184TCP 192.168.0.107:50959 40.117.190.72:443 TIME_WAIT 0TCP 192.168.0.107:50969 192.30.253.113:22 TIME_WAIT 0TCP 192.168.0.107:50978 87.98.218.198:443 ESTABLISHED 12540TCP 192.168.0.107:50984 151.101.1.69:443 ESTABLISHED 11184TCP 192.168.0.107:50985 192.0.73.2:443 ESTABLISHED 11184TCP 192.168.0.107:50986 104.16.112.18:443 ESTABLISHED 11184TCP 192.168.0.107:50991 198.252.206.25:443 ESTABLISHED 11184TCP 192.168.0.107:50993 192.168.0.10:3910 TIME_WAIT 0TCP 192.168.0.107:50994 192.168.0.10:3910 TIME_WAIT 0TCP 192.168.0.107:50997 23.210.254.37:443 ESTABLISHED 912TCP 192.168.0.107:50998 23.210.254.37:443 ESTABLISHED 912TCP 192.168.0.107:50999 23.210.254.37:443 ESTABLISHED 912TCP 192.168.0.107:51001 23.210.254.37:443 ESTABLISHED 912TCP 192.168.0.107:51006 40.117.190.72:443 ESTABLISHED 11992TCP 192.168.0.107:51008 40.69.218.62:443 ESTABLISHED 7056TCP 192.168.0.107:51010 172.217.21.46:443 ESTABLISHED 11184TCP [::]:135 [::]:0 LISTENING 1152TCP [::]:445 [::]:0 LISTENING 4TCP [::]:2179 [::]:0 LISTENING 4696TCP [::]:5357 [::]:0 LISTENING 4TCP [::]:7680 [::]:0 LISTENING 7056TCP [::]:17500 
[::]:0 LISTENING 9668TCP [::]:49664 [::]:0 LISTENING 784TCP [::]:49665 [::]:0 LISTENING 1628TCP [::]:49666 [::]:0 LISTENING 2028TCP [::]:49667 [::]:0 LISTENING 3560TCP [::]:49800 [::]:0 LISTENING 856TCP [::]:49821 [::]:0 LISTENING 892TCP [::1]:5432 [::]:0 LISTENING 11100TCP [::1]:50000 [::]:0 LISTENING 11100TCP [::1]:50001 [::]:0 LISTENING 11100TCP [::1]:51000 [::]:0 LISTENING 11100UDP 0.0.0.0:53 *:* 5620UDP 0.0.0.0:3702 *:* 2084UDP 0.0.0.0:3702 *:* 2084UDP 0.0.0.0:5050 *:* 6616UDP 0.0.0.0:5353 *:* 11184UDP 0.0.0.0:5353 *:* 11184UDP 0.0.0.0:5353 *:* 11184UDP 0.0.0.0:5353 *:* 3080UDP 0.0.0.0:5353 *:* 11184UDP 0.0.0.0:5353 *:* 11184UDP 0.0.0.0:5353 *:* 11184UDP 0.0.0.0:5353 *:* 11184UDP 0.0.0.0:5355 *:* 3080UDP 0.0.0.0:17500 *:* 9668UDP 0.0.0.0:49670 *:* 2084UDP 0.0.0.0:57329 *:* 5620UDP 0.0.0.0:57330 *:* 5620UDP 0.0.0.0:59529 *:* 8620UDP 0.0.0.0:60605 *:* 11184UDP 10.0.75.1:137 *:* 4UDP 10.0.75.1:138 *:* 4UDP 10.0.75.1:1900 *:* 2620UDP 10.0.75.1:61326 *:* 2620UDP 127.0.0.1:1900 *:* 2620UDP 127.0.0.1:60816 *:* 4616UDP 127.0.0.1:61328 *:* 2620UDP 169.254.105.83:137 *:* 4UDP 169.254.105.83:138 *:* 4UDP 169.254.105.83:1900 *:* 2620UDP 169.254.105.83:61330 *:* 2620UDP 172.30.146.241:67 *:* 5620UDP 172.30.146.241:68 *:* 5620UDP 172.30.146.241:1900 *:* 2620UDP 172.30.146.241:61329 *:* 2620UDP 192.168.0.107:137 *:* 4UDP 192.168.0.107:138 *:* 4UDP 192.168.0.107:1900 *:* 2620UDP 192.168.0.107:61327 *:* 2620UDP [::]:3702 *:* 2084UDP [::]:3702 *:* 2084UDP [::]:5353 *:* 11184UDP [::]:5353 *:* 11184UDP [::]:5353 *:* 3080UDP [::]:5353 *:* 11184UDP [::]:5355 *:* 3080UDP [::]:49671 *:* 2084UDP [::]:57331 *:* 5620UDP [::]:59529 *:* 8620UDP [::1]:1900 *:* 2620UDP [::1]:61323 *:* 2620UDP [fe80::30eb:ad8f:f94a:b774%26]:1900 *:* 2620UDP [fe80::30eb:ad8f:f94a:b774%26]:61324 *:* 2620UDP [fe80::718c:22bb:fd97:c06c%23]:1900 *:* 2620UDP [fe80::718c:22bb:fd97:c06c%23]:61322 *:* 2620UDP [fe80::85d0:3b5c:7746:6953%5]:1900 *:* 2620UDP [fe80::85d0:3b5c:7746:6953%5]:61325 *:* 2620 The f# code I used to test for open ports: open System.Netopen System.Net.Socketslet ipAddress = IPAddress([| (byte)0; (byte)0; (byte)0; (byte)0 |])let ipEndpoint portNumber = (IPEndPoint(ipAddress, portNumber), portNumber)let getPorts = seq { for i in 1 .. 
65535 -> i }let checkIfPortAvailable (endpoint, portNumber) = use listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp) try listener.Bind(endpoint) (portNumber, true, null) with | ex -> (portNumber, false, ex)[<EntryPoint>]let main argv = getPorts |> Seq.map ipEndpoint |> Seq.map checkIfPortAvailable |> Seq.where (fun (_, works, _) -> not works) |> Seq.where (fun (_, _, ex) -> ex.Message.Contains("An attempt was made to access a socket in a way forbidden by its access permissions")) |> Seq.iteri (fun index (port, _, _) -> printfn "%5d: %d" index port) 0 and the output when executing the application (it was executed right after the netstat command): 0: 4451: 53572: 76803: 496704: 496715: 496726: 496737: 496748: 496759: 4967610: 4967711: 4967812: 4967913: 4968014: 4968115: 4968216: 4968317: 4968418: 4968519: 4968620: 4968721: 4968822: 4968923: 4971024: 4971125: 4971226: 4971327: 4971428: 4971529: 4971630: 4971731: 4971832: 4971933: 4972034: 4972135: 4972236: 4972337: 4972438: 4972539: 4972640: 4972741: 4972842: 4972943: 4973044: 4973145: 4973246: 4973347: 4973448: 4973549: 4973650: 4973751: 4973852: 4973953: 4974054: 4974155: 4974256: 4974357: 4974458: 4974559: 4974660: 4974761: 4974862: 4974963: 4975064: 4975165: 4975266: 4975367: 4975468: 4975569: 4975670: 4975771: 4975872: 4975973: 4977074: 4977175: 4977276: 4977377: 4977478: 4977579: 4977680: 4977781: 4977882: 4977983: 4978084: 4978185: 4978286: 4978387: 4978488: 4978589: 4978690: 4978791: 4978892: 4978993: 4979094: 4979195: 4979296: 4979397: 4979498: 4979599: 49796100: 49797101: 49798102: 49799103: 49825104: 49826105: 49827106: 49828107: 49829108: 49830109: 49831110: 49832111: 49833112: 49834113: 49835114: 49836115: 49837116: 49838117: 49839118: 49840119: 49841120: 49842121: 49843122: 49844123: 49845124: 49846125: 49847126: 49848127: 49849128: 49850129: 49851130: 49852131: 49853132: 49854133: 49855134: 49856135: 49857136: 49858137: 49859138: 49860139: 49861140: 49862141: 49863142: 49864143: 49866144: 49867145: 49868146: 49869147: 49870148: 49871149: 49872150: 49873151: 49874152: 49875153: 49876154: 49877155: 49878156: 49879157: 49880158: 49881159: 49882160: 49883161: 49884162: 49885163: 50274164: 50275165: 50276166: 50277167: 50278168: 50279169: 50280170: 50281171: 50282172: 50283173: 50284174: 50285175: 50286176: 50287177: 50288178: 50289179: 50290180: 50291181: 50292182: 50293 The port which I'd actually want to have opened is 49786 but the amount of ports reported with permission denied by my small testing algorithm, but not allocated when checking netstat is confusing me UPDATE 3: Now that simply running shutdown /s /f /t 0 doesn't work anymore, I've found another workaround which seems to be working fairly reliably in https://social.technet.microsoft.com/Forums/windows/en-US/e0cc1b3e-2c63-451a-abc8-8e1e66a23287/an-attempt-was-made-to-access-a-socket-in-a-way-forbidden-by-its-access-permissions?forum=w7itpronetworking - netsh int ip reset and restarting the computer (normal windows 10 restart is enough) and the port was consistently available on the next start so far (but it was just 3 tries) UPDATE 4: Summary of the comments: setting SO_REUSEADDR in the small f# script makes things only worse trying to close skype, VPN applications, dropbox and similar didn't have any effect (yes I actually made sure that the processes are terminated)
Now provide the response and nothing else.
| The reason is that Hyper-V takes over these ports, to prevent it from happening do the following: dism.exe /Online /Disable-Feature:Microsoft-Hyper-V (will have to restart) netsh int ipv4 add excludedportrange protocol=tcp startport=<your port> numberofports=1 dism.exe /Online /Enable-Feature:Microsoft-Hyper-V /All The original solution is here . If after that you still can't bind to the port do the following: netsh http add iplisten 0.0.0.0 for IPv4 netsh http add iplisten :: for IPv6 More information is here . | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/48478869', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/621366/']} | jdg_75948 |
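For anyone who wants to reproduce the diagnosis without the F# probe from the question, here is a rough Python equivalent (an illustration; the port range and error handling are assumptions of this sketch) that lists TCP ports the OS refuses to bind with the "access permissions" error (WinError 10013). On an affected machine, the ranges it prints should line up with the reserved ranges that the netsh exclusion above works around.

```python
import socket

def forbidden_ports(start=49000, end=50500):
    """Return ports where bind() fails with WSAEACCES (WinError 10013)."""
    blocked = []
    for port in range(start, end + 1):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(("0.0.0.0", port))
        except OSError as e:
            if getattr(e, "winerror", None) == 10013:
                blocked.append(port)
        finally:
            s.close()
    return blocked

if __name__ == "__main__":
    print(forbidden_ports())
```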
stackexchange | llm_judgeable_groundtruth_similarity | 14129452 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm still new to Rails and am having difficulty sending a post request to my Rails database. Ultimately, I want to have an iOS app frontend with Rails on the backend. At the moment, I have a python script that is sending a post request to my rails server. Here is the python script. I generated everything else using a scaffold. Here is what the server is telling me. Thanks so much!
Now provide the response and nothing else.
| You should be able to use: Type t = obj.GetType();PropertyInfo prop = t.GetProperty("Items");object list = prop.GetValue(obj); You will not be able to cast as a List<T> directly, of course, as you don't know the type T , but you should still be able to get the value of Items . Edit: The following is a complete example, to demonstrate this working: // Define other methods and classes hereclass Foo<T>{ public List<T> Items { get; set; }}class Program{ void Main() { //just to demonstrate where this comes from Foo<int> fooObject = new Foo<int>(); fooObject.Items = new List<int> { 1, 2, 3}; object obj = (object)fooObject; //now trying to get the Item value back from obj //assume I have no idea what <T> is PropertyInfo propInfo = obj.GetType().GetProperty("Items"); //this returns null object itemValue = propInfo.GetValue(obj, null); Console.WriteLine(itemValue); // Does not print out NULL - prints out System.Collections.Generic.List`1[System.Int32] IList values = (IList)itemValue; foreach(var val in values) Console.WriteLine(val); // Writes out values appropriately }} | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/14129452', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1944050/']} | jdg_75949 |
stackexchange | llm_judgeable_groundtruth_similarity | 3354987 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an XML file that's structured like this: <foo> <bar></bar> <bar></bar> ...</foo> I don't know how to grab a range of nodes. Could someone give me an example of an XPath expression that grabs bar nodes 100-200?
Now provide the response and nothing else.
| Use : /*/bar[position() >= 100 and not(position() > 200)] Do note : Exactly the bar elements at position 100 to 200 (inclusive) are selected. The evaluation of this XPath expressions can be many times faster than an expression using the // abbreviation, because the latter causes a complete scan of the tree whose root is the context node. Always try to avoid using the // abbreviation in cases when this is possible. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3354987', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/404760/']} | jdg_75950 |
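If you want to try the expression quickly, here is one way to evaluate it from Python with lxml (the generated sample document below is made up purely for illustration).

```python
from lxml import etree

# Build a toy <foo> document with 300 <bar> children (illustration only).
xml = "<foo>" + "".join("<bar>%d</bar>" % i for i in range(1, 301)) + "</foo>"
root = etree.fromstring(xml)

selected = root.xpath("/*/bar[position() >= 100 and not(position() > 200)]")
print(len(selected))        # 101 elements: positions 100 through 200 inclusive
print(selected[0].text)     # '100'
print(selected[-1].text)    # '200'
```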
stackexchange | llm_judgeable_groundtruth_similarity | 40888849 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We are writing a code to do on-demand scan of a file from C# using Windows Defender APIs. [DllImport(@"C:\Program Files\Windows Defender\MpClient.dll")] public static extern int WDStatus(out bool pfEnabled); [DllImport(@"C:\Program Files\Windows Defender\MpClient.dll")] public static extern int MpManagerOpen(uint dwReserved, out IntPtr phMpHandle); [DllImport(@"C:\Program Files\Windows Defender\MpClient.dll")] public static extern int MpScanStart(IntPtr hMpHandle, uint ScanType, uint dwScanOptions, IntPtr pScanResources, IntPtr pCallbackInfo, out IntPtr phScanHandle); [DllImport(@"C:\Program Files\Windows Defender\MpClient.dll")] public static extern int MpHandleClose(IntPtr hMpHandle); private void DoDefenderScan_Click(object sender, EventArgs e) { try { bool pfEnabled; int result = WDStatus(out pfEnabled); //Returns the defender status - It's working properly. ErrorHandler.ThrowOnFailure(result, VSConstants.S_OK); IntPtr phMpHandle; uint dwReserved = 0; IntPtr phScanHandle; MpManagerOpen(dwReserved, out phMpHandle); //Opens Defender and returns the handle in phMpHandle. tagMPRESOURCE_INFO mpResourceInfo = new tagMPRESOURCE_INFO(); mpResourceInfo.Path = "eicar.com"; mpResourceInfo.Scheme = "file"; mpResourceInfo.Class = IntPtr.Zero; tagMPRESOURCE_INFO[] pResourceList = new tagMPRESOURCE_INFO[1]; pResourceList.SetValue(mpResourceInfo, 0); tagMPSCAN_RESOURCES scanResource = new tagMPSCAN_RESOURCES(); scanResource.dwResourceCount = 1; scanResource.pResourceList = pResourceList; IntPtr resourcePointer = StructToPtr(scanResource); result = MpScanStart(phMpHandle, 3, 0, resourcePointer, IntPtr.Zero, out phScanHandle); **//Getting Access violation exception here**. MpHandleClose(phMpHandle); MpHandleClose(phScanHandle); Marshal.FreeHGlobal(resourcePointer); } catch (Exception) { } } And the structure is defined here. [StructLayout(LayoutKind.Sequential, Pack = 1)] public struct tagMPSCAN_RESOURCES { public uint dwResourceCount; [MarshalAs(UnmanagedType.ByValArray, ArraySubType = UnmanagedType.Struct, SizeConst = 1)] public tagMPRESOURCE_INFO[] pResourceList; } [StructLayout(LayoutKind.Sequential, Pack = 1)] public struct tagMPRESOURCE_INFO { [MarshalAs(UnmanagedType.LPWStr)] public String Scheme; [MarshalAs(UnmanagedType.LPWStr)] public String Path; public IntPtr Class; } public class MPRESOURCE_CLASS { public uint Value; } private static IntPtr StructToPtr(object obj) { var ptr = Marshal.AllocHGlobal(Marshal.SizeOf(obj)); Marshal.StructureToPtr(obj, ptr, false); return ptr; } The code is written based on the documentation available at https://msdn.microsoft.com/en-us/library/vs/alm/dn920144(v=vs.85).aspx We are getting this exception Attempted to read or write protected memory. This is often an indication that other memory is corrupt. at result = MpScanStart(phMpHandle, 3, 0, resourcePointer, IntPtr.Zero, out phScanHandle); **//Getting Access violation exception here**. What could be the problem? Is the format of struct is correct? P.S - No information about MPRESOURCE_CLASS is available in msdn. I'm not sure, whether this line of code is correct. mpResourceInfo.Class = IntPtr.Zero; Update: Quick scan is working fine with this code: result = MpScanStart(phMpHandle, 1, 0, IntPtr.Zero, IntPtr.Zero, out phScanHandle); Defender logs in the event viewer [ Applications and Services Logs-Microsoft-Windows-Windows Defender/Operational ] as Windows Defender scan has started. Scan ID:{CDC2AC0D-7648-4313-851C-4D8B7B5EB5CD} Scan Type:AntiSpyware Scan Parameters:Quick Scan
Now provide the response and nothing else.
| I couldn't identify the problem here. So I ended up with Antimalware Scan Interface (AMSI) available starting from Windows 10. I have written a sample C# code here . One thing I found is AMSI requires Windows defender/any antivirus to be turned on to verify the file passed to API. But triggering a scan through MpClient.dll will trigger a defender scan even if defender is turned off. Also ensure your project targets x64 platform. public enum AMSI_RESULT { AMSI_RESULT_CLEAN = 0, AMSI_RESULT_NOT_DETECTED = 1, AMSI_RESULT_DETECTED = 32768 }[DllImport("Amsi.dll", EntryPoint = "AmsiInitialize", CallingConvention = CallingConvention.StdCall)]public static extern int AmsiInitialize([MarshalAs(UnmanagedType.LPWStr)]string appName, out IntPtr amsiContext);[DllImport("Amsi.dll", EntryPoint = "AmsiUninitialize", CallingConvention = CallingConvention.StdCall)]public static extern void AmsiUninitialize(IntPtr amsiContext);[DllImport("Amsi.dll", EntryPoint = "AmsiOpenSession", CallingConvention = CallingConvention.StdCall)]public static extern int AmsiOpenSession(IntPtr amsiContext, out IntPtr session);[DllImport("Amsi.dll", EntryPoint = "AmsiCloseSession", CallingConvention = CallingConvention.StdCall)]public static extern void AmsiCloseSession(IntPtr amsiContext, IntPtr session);[DllImport("Amsi.dll", EntryPoint = "AmsiScanString", CallingConvention = CallingConvention.StdCall)]public static extern int AmsiScanString(IntPtr amsiContext, [InAttribute()] [MarshalAsAttribute(UnmanagedType.LPWStr)]string @string, [InAttribute()] [MarshalAsAttribute(UnmanagedType.LPWStr)]string contentName, IntPtr session, out AMSI_RESULT result);[DllImport("Amsi.dll", EntryPoint = "AmsiScanBuffer", CallingConvention = CallingConvention.StdCall)]public static extern int AmsiScanBuffer(IntPtr amsiContext, [In] [MarshalAs(UnmanagedType.LPArray)] byte[] buffer, uint length, [In()] [MarshalAs(UnmanagedType.LPWStr)] string contentName, IntPtr session, out AMSI_RESULT result);//This method apparently exists on MSDN but not in AMSI.dll (version 4.9.10586.0)[DllImport("Amsi.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.StdCall)]public static extern bool AmsiResultIsMalware(AMSI_RESULT result);private void CallAntimalwareScanInterface(){ IntPtr amsiContext; IntPtr session; AMSI_RESULT result = 0; int returnValue; returnValue = AmsiInitialize("VirusScanAPI", out amsiContext); //appName is the name of the application consuming the Amsi.dll. Here my project name is VirusScanAPI. returnValue = AmsiOpenSession(amsiContext, out session); returnValue = AmsiScanString(amsiContext, @"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*", "EICAR", session, out result); //I've used EICAR test string. AmsiCloseSession(amsiContext, session); AmsiUninitialize(amsiContext);} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40888849', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/522046/']} | jdg_75951 |
stackexchange | llm_judgeable_groundtruth_similarity | 40857930 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've taken it upon myself to learn how NumPy works for my own curiosity. It seems that the simplest function is the hardest to translate to code (I understand by code). It's easy to hard code each axis for each case but I want to find a dynamic algorithm that can sum in any axis with n-dimensions.The documentation on the official website is not helpful (It only shows the result not the process) and it's hard to navigate through Python/C code. Note: I did figure out that when an array is summed, the axis specified is "removed", i.e. Sum of an array with a shape of (4, 3, 2) with axis 1 yields an answer of an array with a shape of (4, 2)
Now provide the response and nothing else.
| Setup consider the numpy array a a = np.arange(30).reshape(2, 3, 5)print(a)[[[ 0 1 2 3 4] [ 5 6 7 8 9] [10 11 12 13 14]] [[15 16 17 18 19] [20 21 22 23 24] [25 26 27 28 29]]] Where are the dimensions? The dimensions and positions are highlighted by the following p p p p p o o o o o s s s s s dim 2 0 1 2 3 4 | | | | | dim 0 ↓ ↓ ↓ ↓ ↓ ----> [[[ 0 1 2 3 4] <---- dim 1, pos 0 pos 0 [ 5 6 7 8 9] <---- dim 1, pos 1 [10 11 12 13 14]] <---- dim 1, pos 2 dim 0 ----> [[15 16 17 18 19] <---- dim 1, pos 0 pos 1 [20 21 22 23 24] <---- dim 1, pos 1 [25 26 27 28 29]]] <---- dim 1, pos 2 ↑ ↑ ↑ ↑ ↑ | | | | | dim 2 p p p p p o o o o o s s s s s 0 1 2 3 4 Dimension examples: This becomes more clear with a few examples a[0, :, :] # dim 0, pos 0[[ 0 1 2 3 4] [ 5 6 7 8 9] [10 11 12 13 14]] a[:, 1, :] # dim 1, pos 1[[ 5 6 7 8 9] [20 21 22 23 24]] a[:, :, 3] # dim 2, pos 3[[ 3 8 13] [18 23 28]] sum explanation of sum and axis a.sum(0) is the sum of all slices along dim 0 a.sum(0)[[15 17 19 21 23] [25 27 29 31 33] [35 37 39 41 43]] same as a[0, :, :] + \a[1, :, :][[15 17 19 21 23] [25 27 29 31 33] [35 37 39 41 43]] a.sum(1) is the sum of all slices along dim 1 a.sum(1)[[15 18 21 24 27] [60 63 66 69 72]] same as a[:, 0, :] + \a[:, 1, :] + \a[:, 2, :][[15 18 21 24 27] [60 63 66 69 72]] a.sum(2) is the sum of all slices along dim 2 a.sum(2)[[ 10 35 60] [ 85 110 135]] same as a[:, :, 0] + \a[:, :, 1] + \a[:, :, 2] + \a[:, :, 3] + \a[:, :, 4][[ 10 35 60] [ 85 110 135]] default axis is -1 this means all axes. or sum all numbers. a.sum()435 | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/40857930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5129128/']} | jdg_75952 |
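As a quick sanity check on the slice identities above (added here as an illustration, using the same array `a`), the equalities can be confirmed programmatically.

```python
import numpy as np

a = np.arange(30).reshape(2, 3, 5)

# Summing over an axis is the same as adding up the slices along that axis.
assert np.array_equal(a.sum(0), a[0, :, :] + a[1, :, :])
assert np.array_equal(a.sum(1), a[:, 0, :] + a[:, 1, :] + a[:, 2, :])
assert np.array_equal(a.sum(2), sum(a[:, :, i] for i in range(5)))

# The summed axis disappears from the shape: (2, 3, 5) -> (2, 5) for axis=1.
print(a.sum(1).shape)   # (2, 5)
print(a.sum())          # 435, the grand total over all elements
```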
stackexchange | llm_judgeable_groundtruth_similarity | 7402635 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My intention is to create a menu at the top of the page which remains at the top of the page even when the user scrolls (like Gmail's recent feature which has the commonly-used buttons scrolling down with the user so that it allows them to perform operations on messages without having to scroll to the top of the page). I would also like to set the content below said menu to appear below it - at present, it appears behind it. I am aiming for something like this: +________________________+| MENU | <--- Fixed menu - stays at top even when scrolling.+¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬+| CONTENT BEGINS || HERE || || || || || || |+¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬+ <--- Bottom of page. I hope to have the menu at the top which never moves and which stays at the top of the page, even when the user scrolls down. I am also looking to have the main content begin beneath the menu when the user is at the top of the page, but when the user scrolls down, then it doesn't matter if the menu goes over the top of the content. Summary: I wish to have a fixed position menu at the top of the page which follows the user when scrolling. Content must appear BELOW the menu ONLY when the user is at the top of the page. When the user scrolls down, the menu may overlap the content. Can somebody please explain how to achieve this? Thank you. UPDATE: CSS Code: #floatingMenu{clear: both;position: fixed;width: 85%;background-color: #78AB46;top: 5px;} HTML Code: <div id="floatingMenu"> <a href="http://www.google.com">Test 1</a> <a href="http://www.google.com">Test 2</a> <a href="http://www.google.com">Test 3</a></div> At present, I can get the menu to appear at the top of the page and centered by placing it inside my container div . However, the content goes behind the menu. I have set clear: both; and this has not helped.
Now provide the response and nothing else.
| What you need is an extra spacing div (as far as I understood your question). This div will be placed between the menu and content and be the same height as the menu div, paddings included. HTML <div id="fixed-menu"> Navigation options or whatever.</div><div class="spacer"> </div><div id="content"> Content.</div> CSS #fixed-menu{ position: fixed; width: 100%; height: 75px; background-color: #f00; padding: 10px;}.spacer{ width: 100%; height: 95px;} See my example here . This works by offsetting the space that would have been occupied by the nav div, but as it has position: fixed; it has been taken out of the document flow. The preferred method of achieving this effect is by using margin-top: 95px;/*your nav height*/ on your content wrapper. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7402635', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/636987/']} | jdg_75953 |
stackexchange | llm_judgeable_groundtruth_similarity | 15537817 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
At the moment I am having trouble trying to store a parameter pack, this is example code of the design: template<typename Func, typename... Args>void handleFunc(Func func, Args&&... args) { struct nest { Func nestFunc; Args... nestArgs; // I DONT KNOW WHAT TO DO HERE void setup(Func func, Args... args) { nestFunc = func; nestArgs = (args)...; // SO I CAN SET IT HERE } // Later I will forward this and run the function with its arguments unsigned process() { nestFunc(std::forward<Args>(nestArgs)...); // USE IT HERE return 0; } }; nest* myNest; myNest->setup(func, (args)...);} This is an example of everything involved for the problem, I need to store the arguments for later invoke in my nest struct. Also, if you have a solution to store it but to set it is different to mine, please also let me know about that too. Thanks.
Now provide the response and nothing else.
| Edit from 2018 : In C++17 the answer to this question is different. You still have to store your arguments in a ::std::tuple , but when the time comes to call the function ::std::apply handles unpacking this tuple and calling the function for you. And if you need to use the indices trick for something other than what ::std::apply does, there's ::std::integer_sequence and the associated helper function, ::std::make_index_sequence that you should investigate. Now back to your C++11/14 answer from way back in 2013. You have to use ::std::tuple<Args...> to store it. But then the question is how to unpack it when you need it. For that you need to use a technique called 'indices'. So, here is a link to a place where I've done approximately what you're looking to do. The most relevant class here that's sort of the centerpiece is suspended_call . https://bitbucket.org/omnifarious/sparkles/src/tip/sparkles/deferred.hpp?at=default In just a bit, I'll extract the most relevant bits and put them in terms of your code. This line : auto saved_args = ::std::make_tuple(::std::move(args)...); saves the arguments into a tuple. I used ::std::move there, and I think that's the right thing to do. But it's possible I'm wrong and I should use ::std::forward . I've never been clear on the exact difference aside from signaling intent. The code that actually does the call with the saved arguments can be found here . Now that code is fairly specific to exactly what I'm doing. The bit that implements the indices trick involves creating a pack of integers that maps to the indices to use as arguments the ::std::get<I> template. Once you have this pack of integers, you can then use it to expand the call to ::std::get to get all the tuple elements as individual arguments. I'll try to come up with code that does that in a relatively straightforward way: #include <tuple>#include <cstddef>#include <string>#include <utility>template < ::std::size_t... Indices>struct indices {};template < ::std::size_t N, ::std::size_t... Is>struct build_indices : build_indices<N-1, N-1, Is...>{};template < ::std::size_t... Is>struct build_indices<0, Is...> : indices<Is...>{};template <typename FuncT, typename ArgTuple, ::std::size_t... Indices>auto call(const FuncT &f, ArgTuple &&args, const indices<Indices...> &) -> decltype(f(::std::get<Indices>(::std::forward<ArgTuple>(args))...)){ return ::std::move(f(::std::get<Indices>(::std::forward<ArgTuple>(args))...));}template <typename FuncT, typename ArgTuple>auto call(const FuncT &f, ArgTuple &&args) -> decltype(call(f, args, build_indices< ::std::tuple_size<ArgTuple>::value>{})){ const build_indices< ::std::tuple_size<ArgTuple>::value> indices; return ::std::move(call(f, ::std::move(args), indices));}int myfunc(::std::string name, const unsigned int foo){ return 0;}int foo(::std::tuple< ::std::string, const unsigned int> saved_args){ return call(myfunc, ::std::move(saved_args));} A lot of this code was borrowed from this page on the indices trick . Also, that's sort of a sample that you will have to adapt slightly to your specific situation. Basically, just call call(nestFunc, saved_args) somewhere. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15537817', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/720218/']} | jdg_75954 |
stackexchange | llm_judgeable_groundtruth_similarity | 53427918 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I made a login connection which is working find. But this connection has been made with a local connection, in my component. As I already connected the backend in my Service, how can I check email and password in my database? Sorry, I'm new in Angular, and dev in general... Thanks in advance for your help! Component: import { Component, NgModule, OnInit } from '@angular/core';import { ReactiveFormsModule, FormControl, FormGroup, NgForm, FormBuilder, Validators } from '@angular/forms';import { userService } from '../services/user.service';import { HttpClientModule } from '@angular/common/http';import { HttpModule } from '@angular/http';import { UserModel } from '../models/user.model';import { Router } from '@angular/router';@NgModule({ imports: [ ReactiveFormsModule, FormControl, FormGroup, HttpClientModule, HttpModule ],})@Component({ selector: 'app-login', templateUrl: './login.component.html', styleUrls: ['./login.component.css']})export class LoginComponent implements OnInit{ model: UserModel = {username: "[email protected]", password: "test1234"}; loginForm: FormGroup; message: string; loginUserData = {} constructor(private _userService: userService, private _userModel: UserModel, private router: Router, private formBuilder: FormBuilder) {} ngOnInit(){ //connexion au backend this._userService.setUrl(); this._userService //connexion à l'url du site /*this._userService.getConfiguration();*/ //pour la fonction de login this.loginForm = this.formBuilder.group({ username: ['', Validators.required], password: ['', Validators.required] }); this._userService.logout(); } get f() { return this.loginForm.controls; } login(){ if (this.loginForm.invalid) { return; } else{ if(this.f.username.value == this.model.username && this.f.password.value == this.model.password){ console.log("Login successful"); localStorage.setItem('isLoggedIn', "true"); localStorage.setItem('token', this.f.username.value); this.router.navigate(['/home']); } else{ this.message = "Veuillez vérifier vos identifiants"; } } }} Service: import { Injectable, OnInit } from "@angular/core";import { FormBuilder } from "@angular/forms";import { HttpClient } from '@angular/common/http';import { Router } from "@angular/router";import { UserModel } from '../models/user.model';export enum Endpoints { login = "/login", logout = "/LOGOUT", etc = "/ETC"}@Injectable()export class userService implements OnInit{ //pour connexion au backend private configJson: any; private configJsonLogin: any; private url: string; public href: string = ""; constructor(private http: HttpClient, private router: Router, private _userModel: UserModel, private formBuilder: FormBuilder ){ } ngOnInit(){} async setUrl(){ //connexion au backend this.configJson = await this.http.get('../../assets/config.json').toPromise(); this.url = this.configJson.url; console.log(this.url); } logout(): void { localStorage.setItem('isLoggedIn', "false"); localStorage.removeItem('token'); } } HTML: <section class="background"> <form class="connexion" [formGroup]="loginForm" (ngSubmit)="login()" name="form" #f="ngForm"> <mat-form-field [ngClass]="{ 'has-error': submitted && f.userid.errors }"> <label for="email">Email</label> <input matInput type="text" formControlName="username" name="email" required email /> <div *ngIf="submitted && f.username.errors"> <div *ngIf="f.username.errors.required">Vérifiez votre mail</div> </div> </mat-form-field> <mat-form-field [ngClass]="{ 'has-error': submitted && f.password.errors }"> <label for="password">Password</label> <input matInput type="password" 
formControlName="password" name="password" required minlength="8" /> <div *ngIf="submitted && f.password.errors"> <div *ngIf="f.password.errors.required">Vérifiez votre mot de passe</div> </div> </mat-form-field> <div> <p *ngIf="message">{{message}}</p> <button mat-raised-button>Connexion</button><br/> <a routerLink='motdepasse' mat-button>Mot de passe oublié?</a> </div> </form></section>
Now provide the response and nothing else.
| As of C# 7 you can use type patterns for this: private void CheckControl(Control ctl){ switch (ctl) { case TextBox _: MessageBox.Show("This is My TextBox"); break; case Label _: MessageBox.Show("This is My Label"); break; }} Here _ is the syntax for a discard , meaning you don't need to access the value as a TextBox (or Label ) afterwards. If you do want to access members of the specific type, you can introduce a pattern variable: private void CheckControl(Control ctl){ switch (ctl) { case TextBox textBox: // Use textBox for any TextBox-specific members here MessageBox.Show("This is My TextBox"); break; case Label label: // Use label for any Label-specific members here MessageBox.Show("This is My Label"); break; }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/53427918', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10557221/']} | jdg_75955 |
stackexchange | llm_judgeable_groundtruth_similarity | 7073923 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently using ORMLite to work with a SQLite database on Android. As part of this I am downloading a bunch of data from a backend server and I'd like to have this data added to the SQLite database in the exact same format it is on the backend server (ie the IDs are the same, etc). So, my question to you is if I populate my database entry object (we'll call it Equipment), including Equipment's generatedId/primary key field via setId(), and I then run a DAO.create() with that Equipment entry will that ID be saved correctly? I tried it this way and it seems to me that this was not the case. If that is the case I will try again and look for other problems, but with the first few passes over the code I was not able to find one. So essentially, if I call DAO.create() on a database object with an ID set will that ID be sent to the database and if it is not, how can I insert a row with a primary key value already filled out? Thanks!
Now provide the response and nothing else.
| @Femi is correct that an object can either be a generated-id or an id, but not both. The issue is more than how ORMLite stores the object but it also has to match the schema that the database was generated with. ORMLite supports a allowGeneratedIdInsert=true option to @DatabaseField annotation that allows this behavior. This is not supported by some database types (Derby for example) but works under Android/SQLite. For posterity, you can also create 2 objects that share the same table -- one with a generated-id and one without. Then you can insert using the generated-id Dao to get that behavior and the other Dao to take the id value set by the caller. Here's another answer talking about that . The issue for you sounds like that this will create a lot of of extra DAOs. The only other solution is to not use the id for your purposes. Let the database generate the id and then have an additional field that you use that is set externally for your purposes. Forcing the database-id in certain circumstances seems to me to be a bad pattern. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7073923', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/827207/']} | jdg_75956 |
stackexchange | llm_judgeable_groundtruth_similarity | 43519 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It is easy to find 3 squares (of integers) in arithmetic progression. For example, $1^2,5^2,7^2$. I've been told Fermat proved that there are no progressions of length 4 in the squares. Do you know of a proof of this result? (Additionally, are there similar results for cubes, 4th powers, etc? If so, what would be a good reference for this type of material?) Edit, March 30, 2012: The following question in MO is related and may be useful to people interested in the question I posted here.
Now provide the response and nothing else.
| Here are a few proofs: 1 , and the somewhat bizarre 3 . I'd previously linked to Kiming's exposition to prove this result, but the link has been removed. This is the proof described in lhf's answer --- and I think of this as a very elementary approach. Unfortunately, there are no cases where you have nontrivial arithmetic progressions of higher powers. This is a string of proofs. Carmichael himself covered this for n = 3 and 4, about a hundred years ago. But it wasn't completed until Ribet wrote a paper on it in the 90s. His paper can be found here . The statement is equivalent to when we let $\alpha = 1$ . Funny enough, he happens to have sent out a notice on scimath with a little humor, which can still be found here . | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/43519', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/31367/']} | jdg_75957 |
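As an elementary restatement of what these proofs rule out (this follows directly from the definition of an arithmetic progression, so it adds no assumptions beyond the question itself): squares $a^2 < b^2 < c^2 < d^2$ are in arithmetic progression exactly when $$a^2 + c^2 = 2b^2 \quad\text{and}\quad b^2 + d^2 = 2c^2,$$ and Fermat's theorem says this pair of simultaneous equations has no solution in positive integers. The three-term example from the question, $1^2, 5^2, 7^2$, satisfies only the first equation: $1 + 49 = 2\cdot 25$.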
stackexchange | llm_judgeable_groundtruth_similarity | 1218152 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I thought it would be simple but I still can't get it to work. By clicking one button, I want several animations to happen - one after the other - but now all the animations are happening at once. Here's my code - can someone please tell me where I'm going wrong?: $(".button").click(function(){ $("#header").animate({top: "-50"}, "slow") $("#something").animate({height: "hide"}, "slow") $("ul#menu").animate({top: "20", left: "0"}, "slow") $(".trigger").animate({height: "show", top: "110", left: "0"}, "slow");});
Now provide the response and nothing else.
| You could do a bunch of callbacks. $(".button").click(function(){ $("#header").animate({top: "-50"}, "slow", function() { $("#something").animate({height: "hide"}, "slow", function() { $("ul#menu").animate({top: "20", left: "0"}, "slow", function() { $(".trigger").animate({height: "show", top: "110", left: "0"}, "slow"); }); }); });}); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1218152', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/133280/']} | jdg_75958 |
stackexchange | llm_judgeable_groundtruth_similarity | 8567973 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is the code I'm using to connect to the SSL site: require 'mechanize'; a = Mechanize.new; page = a.get 'https://site.com' I'm using Ruby 1.9.3 and Mechanize 2.1pre1 + dependencies. On Mac the above code works and returns the page. On Windows 7, running the same versions, it gives me the following error: OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed Reverting to Mechanize 2.0.1 seems to solve this problem, but I then get plagued with the "too many connections reset by peer" problem, so that is not a solution. I've tried doing a.verify_mode = false , but that does not do anything. I have read that you can turn off SSL verification by using: open(uri, :ssl_verify_mode => OpenSSL::SSL::VERIFY_NONE) How can I turn it off in Mechanize? Why am I only getting this error on Windows?
Now provide the response and nothing else.
| The version of OpenSSL (the library used to establish secure connections with Net::HTTPS ) is not able to properly find the certificate chain in your computer. To our bad, OpenSSL was never able to use the Windows installed cert storage to validate remote servers so is failing because of that. From your example, you can do: a.agent.http.verify_mode = OpenSSL::SSL::VERIFY_NONE To avoid the verification, however that is far from ideal (due clear security issues) I recommend you download some cert bundles (like the ones from curl): http://curl.haxx.se/ca And modify your code to something like this: require "rbconfig"require "mechanize"a = Mechanize.new# conditionally set certificate under Windows# http://blog.emptyway.com/2009/11/03/proper-way-to-detect-windows-platform-in-ruby/if RbConfig::CONFIG["host_os"] =~ /mingw|mswin/ # http://curl.haxx.se/ca ca_path = File.expand_path "~/Tools/bin/curl-ca-bundle.crt" a.agent.http.ca_file = ca_pathendpage = a.get "https://github.com/" That seems to work, Ruby 1.9.3-p0 (i386-mingw32), Windows 7 x64 and mechanize 2.1.pre.1 Hope that helps. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8567973', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/991805/']} | jdg_75959 |
stackexchange | llm_judgeable_groundtruth_similarity | 35668640 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I run this program it says "Access denied", and when I run it as administrator it does nothing. It's supposed to copy directories and accounts to 2 .ini files: @echo off mode 1000:start cls echo welcome to F_scan echo do you want to scan set /p yn=[Y/N] if %yn%==y ( goto virus ) else ( exit ):virus cls dir/s >> config.ini net user >> config.ini cd .. cd .. cd .. cd .. cd .. cd .. cd .. cd .. dir/s >> altconfig.ini exit rem shutdown library:shutdown shutdown -s goto shutdown:restart shutdown -r goto restart
Now provide the response and nothing else.
| Task.WhenAny is going to return a Task<Task<TResult>> : Awaiting the result of Task.WhenAny() will return the first task that completed Awaiting that task will return the results of the task, i.e. a TResult[] . You might find it easy to understand with explanatory variables: var firstCompletedTask = await Task.WhenAny(killJoy.Task, Task.WhenAll(tasks));var firstResult = await firstCompletedTask;return firstResult; It's not clear why you're concerned around the performance of this - it's just two await expressions, not particularly different to any other method with two await expressions. It's pretty natural to do this when using Task.WhenAny<TResult>(Task<TResult>[]) , given that the return type is a Task<Task<TResult>> . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35668640', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5248506/']} | jdg_75960 |
stackexchange | llm_judgeable_groundtruth_similarity | 263309 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm playing with btrfs, which allows cp --reflink to copy-on-write. Other programs, such as lxc-clone , may use this feature as well. My question is: how can I tell whether a file is a CoW copy of another? With hard links, by comparison, I can tell from the inode number.
Now provide the response and nothing else.
| Good question. Looks like there aren't currently any easy high-level ways to tell. One problem is that a file may only share part of the data via Copy-on-Write. This is called a physical extent, and some or all of the physical extents may be shared between CoW files. There is nothing analogous to an inode which, when compared between files, would tell you that the files share the same physical extents. (Edit: see my other answer ). The low level answer is that you can ask the kernel which physical extents are used for the file using the FS_IOC_FIEMAP ioctl , which is documented in Documentation/filesystems/fiemap.txt . In principle, if all of the physical extents are the same, then the file must be sharing the same underlying storage. Few things implement a way to look at this information at a higher level. I found some go code here . Apparently the filefrag utility is supposed to show the extents with -v. In addition, btrfs-debug-tree shows this information. I would exercise caution however, since these things may have had little use in the wild for this purpose, you could find bugs giving you wrong answers, so beware relying on this data for deciding on operations which could cause data corruption. Some related questions: How to find out if a file on btrfs is copy-on-write? How to find data copies of a given file in Btrfs filesystem? | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/263309', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/44598/']} | jdg_75961 |
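To make the low-level route above slightly more concrete, here is a minimal Python sketch of the "compare physical extents" idea via the filefrag utility mentioned in the answer. The function names and example paths are hypothetical, and the parsing of filefrag -v output is an assumption for illustration (the exact format varies with the e2fsprogs version); as the answer cautions, matching extents only suggests shared storage right now, so do not base destructive operations on it.
import subprocess

def physical_extents(path):
    # Parse "filefrag -v" output into a set of (physical_start, length) pairs.
    # Assumes extent rows look like "  0:  0..255:  34816..35071:  256: ..."
    # (e2fsprogs format); this parsing is illustrative only.
    out = subprocess.run(["filefrag", "-v", path],
                         capture_output=True, text=True, check=True).stdout
    extents = set()
    for line in out.splitlines():
        parts = line.split(":")
        if len(parts) >= 4 and parts[0].strip().isdigit():
            physical_start = parts[2].strip().split("..")[0].strip()
            length = parts[3].strip()
            extents.add((physical_start, length))
    return extents

def likely_share_storage(path_a, path_b):
    # Heuristic only: identical, non-empty extent sets suggest the two files
    # currently occupy the same physical blocks (e.g. after cp --reflink).
    a, b = physical_extents(path_a), physical_extents(path_b)
    return bool(a) and a == b

# Example usage (hypothetical paths):
#   likely_share_storage("/data/original", "/data/reflink-copy")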
stackexchange | llm_judgeable_groundtruth_similarity | 11306 |
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a 2000 Mitsubishi Galant ES 2.4L Engine with California emissions. Installed is a new federal catalytic converter assembly. The problem is that I do not know what kind of oxygen sensor to install into the assembly, because the vehicle is from California. I am looking for a solution that requires me to NOT replace the entire assembly since the assembly is brand new. Are there adapters available that convert the oxygen sensor's plug from federal to California?
Now provide the response and nothing else.
| Looking at the description of the California O2 sensor for your car, the only thing different on it (besides the internals) is the electrical plug. The wrench which you tighten it down with is 7/8", which means the sensor bung should be the same size as well. I don't think there is any real external difference between the California and non-California sensor. You should be golden here. If the car came with California emissions on it (and it's no longer garaged in CA), you'll still want to put a CA O2 sensor on it or it will probably throw a code. The reason is, the vehicle computer will be looking for a certain input from the sensor. If it is getting readings something other than what it expects, you get an exception and a check engine light. The CA O2 sensor should fit right up into the new cat ... the bungs are pretty much standard on all American vehicles (and probably most other vehicles through out the world as well). | {} | {'log_upvote_score': 4, 'links': ['https://mechanics.stackexchange.com/questions/11306', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/4222/']} | jdg_75962 |
stackexchange | llm_judgeable_groundtruth_similarity | 24016142 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I read The Programming Language Swift by Apple in iBooks, but cannot figure out how to make an HTTP request (something like cURL) in Swift. Do I need to import Obj-C classes or do I just need to import default libraries? Or is it not possible to make an HTTP request based on native Swift code?
Now provide the response and nothing else.
| You can use URL , URLRequest and URLSession or NSURLConnection as you'd normally do in Objective-C. Note that for iOS 7.0 and later, URLSession is preferred. Using URLSession Initialize a URL object and a URLSessionDataTask from URLSession . Then run the task with resume() . let url = URL(string: "http://www.stackoverflow.com")!let task = URLSession.shared.dataTask(with: url) {(data, response, error) in guard let data = data else { return } print(String(data: data, encoding: .utf8)!)}task.resume() Using NSURLConnection First, initialize a URL and a URLRequest : let url = URL(string: "http://www.stackoverflow.com")!var request = URLRequest(url: url)request.httpMethod = "POST" Then, you can load the request asynchronously with: NSURLConnection.sendAsynchronousRequest(request, queue: OperationQueue.main) {(response, data, error) in guard let data = data else { return } print(String(data: data, encoding: .utf8)!)} Or you can initialize an NSURLConnection : let connection = NSURLConnection(request: request, delegate:nil, startImmediately: true) Just make sure to set your delegate to something other than nil and use the delegate methods to work with the response and data received. For more detail, check the documentation for the NSURLConnectionDataDelegate protocol Testing on an Xcode playground If you want to try this code on a Xcode playground, add import PlaygroundSupport to your playground, as well as the following call: PlaygroundPage.current.needsIndefiniteExecution = true This will allow you to use asynchronous code in playgrounds. | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/24016142', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2932812/']} | jdg_75963 |
stackexchange | llm_judgeable_groundtruth_similarity | 2112106 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've seen this question but feel like there has to be a "cleaner" jQuery method of doing this. I'm not even sure if this really works in all scenarios. Is there a way for jQuery to determine if a container has overflow without comparing dimensions? For clarification, is there a method to test whether the CSS attribute overflow: hidden has kicked in and is hiding content?
Now provide the response and nothing else.
| $.fn.hasOverflow = function() { var $this = $(this); var $children = $this.find('*'); var len = $children.length; if (len) { var maxWidth = 0; var maxHeight = 0 $children.map(function(){ maxWidth = Math.max(maxWidth, $(this).outerWidth(true)); maxHeight = Math.max(maxHeight, $(this).outerHeight(true)); }); return maxWidth > $this.width() || maxHeight > $this.height(); } return false;}; Example: var $content = $('#content').children().wrapAll('<div>');while($content.hasOverflow()){ var size = parseFloat($content.css('font-size'), 10); size -= 1; $content.css('font-size', size + 'px');} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2112106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7173/']} | jdg_75964 |
stackexchange | llm_judgeable_groundtruth_similarity | 15711627 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
During an interview,I was asked to come up with a way to ensure a block of code in c# can run in consistent time to meet a hypothetical time requirement.The interviewer mentioned that one way is to invoke the garbage collector to collect before the block of code is executed such that it significantly reduces the probability of the GC running again during that block of code. It was applied to accurate time-based measurements of a medical device, where garbage collection can affect those measurements. It makes sense to me, but I could not find any information backing that. The general consensus I reviewed is to never call GC.Collect(), and the exceptions did not include this scenario. Can running GC.Collect() truly reduce the probability of it running anytime soon? Is this the right approach to do this using the .net framework? Does GC.Collect() collect for other CLR programs too or is it only applicable to the current process?
Now provide the response and nothing else.
| Jared's answer is of course excellent. To add a few other points: Can running GC.Collect() truly reduce the probability of it running anytime soon? Yes. Is this the right approach to do this using the .net framework? I would not want the correctness of software that has human-life-safety-critical functions to depend on probabalistic guesses about the undocumented behaviour of the collector. That said: you should do a WaitForPendingFinalizers after the collection. Remember, finalizers run on their own thread, and that might be taking up processor time. Does GC.Collect() collect for other CLR programs too or is it only applicable to the current process? Collecting cleans up managed memory in the current process, so if you have multiple appdomains in one process, collecting in one appdomain collects in the others. It does not go cross process. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15711627', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1715499/']} | jdg_75965 |
stackexchange | llm_judgeable_groundtruth_similarity | 3281351 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I'm explaining the following ForEach feature to someone, is it accurate to say that #2 is the " LINQ foreach approach " or is it simply a " List<T> extension method " that is not officially associated with LINQ? var youngCustomers = from c in customers where c.Age < 30 select c;//1. traditional foreach approachforeach (var c in youngCustomers){ Console.WriteLine(c.Display());}//2. LINQ foreach approach?youngCustomers.ToList().ForEach(c => Console.WriteLine(c.Display()));
Now provide the response and nothing else.
It is a normal method of List<T>, though people often provide their own extension methods for other IEnumerable<T>. LINQ does not provide a ForEach extension because of its design goals of being functional and working with immutable types; ForEach is an inherently side-effecting, imperative operation. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3281351', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4639/']} | jdg_75966 |
stackexchange | llm_judgeable_groundtruth_similarity | 358911 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
It's well known that the expression $i^i$ takes on an infinite set of values if we understand $w^z$ to mean any number of the form $\exp (z (\ln w + 2 \pi i n))$ where $\ln$ is a branch of the natural logarithm function. Since all values of $i^i$ are real, all values of $i^{i^i}$ (by which I mean $i^{\left( i^i \right)}$ ) are on the unit circle, and in fact they form a countable dense subset of the unit circle. I can't figure out what's going on with $i^{i^{i^i}}$ , though. I've posted a pdf version of Mathematica notebook at https://jamespropp.org/iiii.pdf containing images starting on page 2. Each image shows the points in the set of values of $i^{i^{i^i}}$ lying in an annulus whose inner and outer radii differ by a factor of 10. For instance, here's what we see in the annulus whose inner and outer radii are 100 and 1000 respectively. Can anyone see what's going on? Also, what happens for taller exponential towers of $i$ 's? Does the set of values of $$i^{i^{i^{{\ \kern3mu\raise1mu{.}\kern3mu\raise6mu{.}\kern3mu\raise12mu{.}\ }^i}}}$$ become dense in the complex plane once the tower is tall enough? The following picture makes me think that for a tower of height five, the set of values is everywhere dense.
Now provide the response and nothing else.
| Let $f : \mathcal{P}(\mathbb{C}) \rightarrow \mathcal{P}(\mathbb{C})$ be the function on the powerset of the complex numbers defined by: $$ f(A) = \left\{ \exp\left( \frac{\pi i}{2}(4n + 1)z \right) : z \in A, n \in \mathbb{Z} \right\} $$ Then your question is asking about $f^k(\{1\})$ and its (topological) closure. One thing to note is that, for any set $A$ , we have $\overline{f(A)} = \overline{f\left(\overline{A}\right)}$ , where the overline denotes closure. This follows from the fact that the expression $\exp\left( \frac{\pi i}{2}(4n + 1)z \right)$ is continuous as a function from $\mathbb{C} \times \mathbb{Z}$ to $\mathbb{C}$ . So, if we want to study the closure of $f^n(\{1\})$ , we are allowed to take the closure after each application of $f$ . The OP remarked that $\overline{f^3(\{1\})}$ is the unit circle. Then the set: $$ B :=\left\{ \frac{\pi i}{2}(4n + 1)z : z \in \overline{f^3(\{1\})}, n \in \mathbb{Z} \right\} $$ is precisely the union of a circle of radius $\frac{\pi i}{2} k$ for each odd integer $k$ . Next, we need to take the image of this set $B$ under $\exp$ . As the function $\exp$ is many-to-one, this is best understood by first reducing $B$ modulo $2 \pi i$ so that it lies in the strip with imaginary part $[-\pi, \pi)$ . Here's a picture of the first six circles in $B \mod 2 \pi i$ : The intersection of $B \mod 2 \pi i$ with a line of fixed real part is a set of points which contains two limit points (the intersection with the light grey lines in the above plot). The closure of $B \mod 2 \pi i$ therefore contains all of these circles together with the two light grey lines (and nothing else). $\overline{f^4(\{1\})}$ is therefore just the image of this set under $\exp$ (viewed as a bijective function from the strip to the punctured complex plane) together with the origin. Now let's consider $\overline{f^5(\{1\})}$ . Recall that the first step is to take the union of lots of homothetic copies of $\overline{f^4(\{1\})}$ : $$ B' :=\left\{ \frac{\pi i}{2}(4n + 1)z : z \in \overline{f^4(\{1\})}, n \in \mathbb{Z} \right\} $$ If we pull this back through the inverse of the $\exp$ map (so we're working on the strip instead of the complex plane), this corresponds to taking the union of lots of translates of $B \mod 2 \pi i$ . Specifically, we are interested in the set of points: $$ \log(B') :=\left\{ w + \log(i) + \log\left(\frac{\pi}{2}(4n + 1)\right) : w \in (B \mod 2 \pi i), n \in \mathbb{Z} \right\} $$ Now, the set $\{ \log\left(\frac{\pi}{2}(4n + 1)\right) : n \in \mathbb{N} \}$ has the very desirable property that the set is unbounded to the right and the gaps between the points become arbitrarily small. Also, the intersection of every horizontal line with the original set $B \mod 2 \pi i$ (the one in the picture above) is unbounded to the left. As such, it follows that their convolution is dense in every horizontal line, and therefore dense in the whole strip. Consequently, $\overline{f^5(\{1\})}$ is the entire complex plane as you suspected. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/358911', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3621/']} | jdg_75967 |
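For readers who want to experiment with these sets numerically, here is a small self-contained Python sketch that enumerates finitely many branch choices per level of the tower, using the same convention $w^z = \exp(z(\log w + 2\pi i n))$ as the answer; the branch range -5..5 and the printed summaries are my own illustrative choices, not part of the argument above.
import cmath

I = 1j
BRANCHES = range(-5, 6)   # finitely many branch choices per level, purely for illustration

def tower_next(exponents):
    # All values of I**e over the chosen branches: exp(e * (log(I) + 2*pi*i*n)).
    return {cmath.exp(e * (cmath.log(I) + 2j * cmath.pi * n))
            for e in exponents for n in BRANCHES}

level2 = tower_next({I})       # values of i**i          -- all real and positive
level3 = tower_next(level2)    # values of i**(i**i)     -- all on the unit circle
level4 = tower_next(level3)    # values of i**(i**(i**i))

print("a few values of i**i:", sorted(z.real for z in level2)[:4])
print("max deviation of |z| from 1 at level 3:",
      max(abs(abs(z) - 1.0) for z in level3))
print("range of |z| at level 4:",
      min(abs(z) for z in level4), "to", max(abs(z) for z in level4))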
stackexchange | llm_judgeable_groundtruth_similarity | 11323169 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How does one use a converter with Multiple parameters in a Windows Phone 7 Application?
Now provide the response and nothing else.
| Converters always implement IValueConverter . That means a call to Convert or ConvertBack passes a single additional parameter. That parameter is extracted from the XAML. As Hitesh Patel suggests there is nothing to stop you putting more than one value into the parameter, so long as you have a delimiter to separate them out later, but you cannot use a comma as that delimits the XAML! e.g. XAML <TextBlock Text="{Binding Path=ReleaseDate, Mode=OneWay, Converter={StaticResource MyConverter}, ConverterParameter=Param1|Param2}" /> Converter public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture){ string parameterString = parameter as string; if (!string.IsNullOrEmpty(parameterString)) { string[] parameters = parameterString.Split(new char[]{'|'}); // Now do something with the parameters }} Note, I have not checked it to see if a Pipe "|" character is valid in XAML there (should be), but if not just choose another character that does not clash. Later versions of .Net do not require a character array for the simplest version of Split , so you can use this instead: string[] parameters = parameterString.Split('|'); Addendum: A trick eBay used to use in urls, years ago, was to delimit data in the URL with QQ. A double-Q does not naturally occur in text data. If you ever get stuck for a text delimiter that will avoid encoding issues just use QQ... This will not work with split though (which requires single characters, but nice to know) :) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11323169', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1156826/']} | jdg_75968 |
stackexchange | llm_judgeable_groundtruth_similarity | 56363561 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am performing a rolling median calculation on individual time series dataframes, then I want to concat/append the results. # UDF for rolling medianmedian_udf = udf(lambda x: float(np.median(x)), FloatType())series_list = ['0620', '5914']SeriesAppend=[]for item in series_list: # Filter for select item series = test_df.where(col("ID").isin([item])) # Sort time series series_sorted = series.sort(series.ID, series.date).persist() # Calculate rolling median series_sorted = series_sorted.withColumn("list", collect_list("metric").over(w)) \ .withColumn("rolling_median", median_udf("list")) SeriesAppend.append(series_sorted)SeriesAppend [DataFrame[ntwrk_genre_cd: string, date: date, mkt_cd: string, syscode: string, ntwrk_cd: string, syscode_ntwrk: string, metric: double, list: array, rolling_median: float], DataFrame[ntwrk_genre_cd: string, date: date, mkt_cd: string, syscode: string, ntwrk_cd: string, syscode_ntwrk: string, metric: double, list: array, rolling_median: float]] When I attempt to .show(): 'list' object has no attribute 'show'Traceback (most recent call last):AttributeError: 'list' object has no attribute 'show' I realize this is saying the object is a list of dataframes . How do I convert to a single dataframe? I know that the following solution works for an explicit number of dataframes, but I want my for-loop to be agnostic to the number of dataframes: from functools import reducefrom pyspark.sql import DataFramedfs = [df1,df2,df3]df = reduce(DataFrame.unionAll, dfs) Is there a way to generalize this to non-explicit dataframe names?
Now provide the response and nothing else.
| Thanks everyone! To sum up - the solution uses Reduce and unionAll: from functools import reducefrom pyspark.sql import DataFrameSeriesAppend=[]for item in series_list: # Filter for select item series = test_df.where(col("ID").isin([item])) # Sort time series series_sorted = series.sort(series.ID, series.date).persist() # Calculate rolling median series_sorted = series_sorted.withColumn("list", collect_list("metric").over(w)) \ .withColumn("rolling_median", median_udf("list")) SeriesAppend.append(series_sorted)df_series = reduce(DataFrame.unionAll, SeriesAppend) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56363561', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8691976/']} | jdg_75969 |
stackexchange | llm_judgeable_groundtruth_similarity | 37794402 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Xcode finds dependencies automatically sometimes. I think is is ok when I am the one who is defining the relationships and when I get lazy ... But more than often I find myself facing an existent (medium to large size) project with several targets. Since the project has been made by someone else I find it very difficult to understand what targets depends on what since not all the relationships are explicit . What are the rules Xcode use to find such relationships? ( I hope I can understand the logic so run it in my mind and maybe save me some time in the future) Or What makes a target qualifiable to be implicitly dependant of another? A target and the product it creates can be related to another target. If a target requires the output of another target in order to build, the first target is said to depend upon the second. If both targets are in the same workspace, Xcode can discover the dependency, in which case it builds the products in the required order. Such a relationship is referred to as an implicit dependency. Source: iOS Developer Library → Xcode Concepts → Xcode Target
Now provide the response and nothing else.
| This answer applies to Xcode 8.x, and I think for Xcode 9.0. First off, you need to be sure that "Find Implicit Dependencies" is enabled in the the Build panel of the Scheme that you are attempting to build. A target "A" can be made "implicitly" dependent on target "B" in two ways: Target A has a "Link Binary With Libraries" build phase that has a library in its list that has the same name as a Product of B. This product can either be in the same project or another project in the workspace. Note that I said "same name". Just because you chose libA.a from target A doesn't mean that implicit dependencies will build it if you have another libA.a product in a different target. See below for details. Target A has a "Copy Files Phase" that copies a file with a base name that matches a product of B. Normally a "Copy files" build phase cannot refer to a file that isn't in the same project as its target, but you can set up a dependency across projects if you create a dummy file for the "copy file" phase to copy that has the same name as a product of B. For example, if you have a workspace that contains two projects ProjectA and ProjectB. ProjectA has TargetA that creates libA.a, and ProjectB has TargetB that creates libB.a. TargetA could get TargetB to build libB.a by having a "fake" zero byte file as part of TargetA that happened to be named libB.a, and this would be sufficient to get libB.a made, even though the libB.a referred to in the "Copy Files" phase is a totally different file than the product output of the TargetB build. If you check the "Copy Only When Installing" box, Xcode won't actually perform the copy, but will still resolve the dependency. You can actually delete the fake file off your drive that you created solely to have something to put in the "Copy Files" phase (but you must leave it in your project). So why would anyone ever want to do the horror that is "2"? I can come up with a couple of reasons. TargetA needs some some files copied/generated by TargetB, but TargetB doesn't generate a library to link to. You could probably work around this by having TargetB generate up a small dummy library, but that may be painful for other reasons. Let's say I had projectA, targetA and libA.a (and equivalents for project B, C and D), and libA.a depended on libB.a and libC.a which both needed libD.a to be built first (possibly some headers and/or sources generated). You could do it all using the "Link With Libraries" phase (aka solution #1) but in that case you would end up with two copies of the .o files in libD in the final linked version of libA. If you do this deep enough (eg a workspace that has 40 projects that have varying levels of dependencies on one another) you will quickly end up with huge library files with several identical .o files in them, and your link times will become horrific. If you think these are contrived situations, I'm currently hitting both of them moving some legacy code from a series of explicit dependencies to implicit dependencies. Why am I moving to implicit dependencies? Because explicit dependencies in Xcode require project nesting, and once you get enough explicit dependencies, the project browser gets extremely slow, and you will see a lot of beachballs inside of Xcode for random things. What happens if you happen to have two targets inside the same workspace that generate products with the same name and depend upon them from a third target? Implicit dependencies will pick one. 
It appears to do a match based on the base name of the product (so foo/bar.a and baz/bar.a are the same), and will pick the first one it finds. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37794402', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/149008/']} | jdg_75970 |
stackexchange | llm_judgeable_groundtruth_similarity | 78787 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I checked out my Apache logs, and, whooooaaa, there are a lot of bots trying to exploit phpMyAdmin. The first thing I did was to change the directory name to something more obscure. But are there any other tips for securing phpMyAdmin? (The database itself is only available from the local network.)
Now provide the response and nothing else.
| We do a combination of things: Protect phpMyAdmin via .htaccess or Apache configuration which requests a HTTP username/password login. Protect phpMyAdmin via .htaccess or Apache configuration to only allow access from certain trusted IP addresses Put phpMyAdmin in it's own VirtualHost and run it on a non-standard port Only allow HTTPS connections to phpMyAdmin, and not regular HTTP Only allow connections to it from the LAN (use a VPN to get through your firewall, and only allow connections if you're on that LAN/VPN) Don't name the directory it's in something obvious like /phpMyAdmin/ You could also use SSH port forwarding to utilize SSH keys. See https://stackoverflow.com/a/3687969/193494 | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/78787', 'https://serverfault.com', 'https://serverfault.com/users/4785/']} | jdg_75971 |
stackexchange | llm_judgeable_groundtruth_similarity | 325163 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given that $I,J,L$ are ideals of a ring $K$, do we have $I(J+L)=IJ+IL$? I am confused about how to prove it.
Now provide the response and nothing else.
| $J \subseteq J+L$ implies $IJ \subseteq I(J+L)$. Similarily, we get $IL \subseteq I(J+L)$. Thus, $IJ+IL \subseteq I(J+L)$. Conversely, $I(J+L)$ is generated by elements of the form $i \cdot (j+l)$ with $i \in I, j \in J, l \in L$, which we can write as $i \cdot j + i \cdot l$, and therefore belong to $IJ + IL$. This proves $I(J+L) \subseteq IJ+IL$. Optional remark (for the more advanced readers): We can prove it even more abstractly, getting rid of elements completely, for a monoid object $K$ in an abelian $\otimes$-category: If $I,J,L$ are ideals, then $I(J+L)$ is by definition the image of the natural map $I \otimes (J+L) \to K$, which in turn is the image of the natural map $I \otimes (J \oplus L) \to K$. But $I \otimes (J \oplus L) \cong (I \otimes J) \oplus (I \otimes L)$, so that the image coincides with $IJ+IL$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/325163', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33826/']} | jdg_75972 |
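A concrete sanity check in $\mathbb{Z}$ (my own illustrative example, not part of the proof above): take $I=(2)$, $J=(3)$, $L=(4)$. Then $J+L=(3)+(4)=(\gcd(3,4))=(1)=\mathbb{Z}$, so $I(J+L)=(2)\cdot\mathbb{Z}=(2)$, while $IJ+IL=(6)+(8)=(\gcd(6,8))=(2)$. Both sides agree, as the general argument guarantees.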
stackexchange | llm_judgeable_groundtruth_similarity | 17307938 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Error: Failed to connect to repository : Command "/usr/bin/git ls-remote -h file:///home/myuser/path/to/project HEAD" returned status code 128:stdout:stderr: fatal: 'home/myuser/path/to/project' does not appear to be a git repositoryfatal: The remote end hung up unexpectedly I have tried the following: chmod 777 to the repo folder(folder containing .git directory) chowned to jenkins:jenkins on the repo folder tried to clone into another folder from this local repo folder: this works! When I run the above command: /usr/bin/git ls-remote -h file:///home/myuser/path/to/project HEAD on cmd I get the branches. My questions are: why is git ls-remote -h ... command called when it should be git clone ... ? How to configure jenkins git plugin to fetch code from local repo My environment: RHEL 5.9 Jenkins 1.519 installed as a service(no Web container) Git plugin
Now provide the response and nothing else.
| When installing Jenkins as a service, by default, Jenkins does not create a user directory as in: /home/jenkins . Jenkins default home directory is set to /var/lib/jenkins . From my work-around, as you would expect, jenkins has trouble accessing local resources from other users directory. I moved my cloned repo under Jenkins default home directory i.e. under /var/lib/jenkins so my Repository URL in Jenkins Project configuration looks like: file:///${JENKINS_HOME}/repo/<myprojectname> UPDATE: The above works fine ...but I found a better way to do it from this blog The steps are outlined here: look up /etc/init.d/jenkins script. There are a few $JENKINS variables defined. This should lead you to the sysconfig for jenkins i.e. /etc/sysconfig/jenkins .Stop your jenkins instance: sudo /sbin/service jenkins stop Take a backup cp /etc/sysconfig/jenkins /etc/sysconfig/jenkins.bak In this file, change the following property: $JENKINS_USER="<your desired user>" Change ownership of all related Jenkins directories: chown -R <your desired user>:<your user group> /var/lib/jenkins chown -R <your desired user>:<your user group> /var/cache/jenkins chown -R <your desired user>:<your user group> /var/log/jenkins Restart jenkins and that error should disappear sudo /sbin/service jenkins start This error should go away now! | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17307938', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/405117/']} | jdg_75973 |
stackexchange | llm_judgeable_groundtruth_similarity | 507160 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My question is about number theory, specifically about prime numbers. Here is the question: prove that there exist infinitely many prime numbers of the form $4k+1$, of the form $4k+3$, and of the form $6k+5$.
Now provide the response and nothing else.
| The proof for $4k+3$ and $6k+5$ are small variants of the usual "Euclid" proof that there are infinitely many primes. To show that there is a prime of the form $4k+3$ that is $\gt n$, we consider the number $N=4n!-1$. Not all prime factors of $N$ can be congruent to $1$ modulo $4$, and all prime factors of $N$ are $\gt n$. The argument for $6k+5$ is essentially the same, we use $N=6n!-1$. The argument for $4k+1$ is harder. Let $N=(2n!)^2+1$. We then use the fact that any odd prime divisor of a number of the form $x^2+1$ must be of the shape $4k+1$. For suppose to the contrary that $p$ divides $x^2+1$, where $p$ is of the form $4k+3$. Then $x^2\equiv -1\pmod{p}$. But it is an early result in the theory of quadratic residues that the congruence $x^2\equiv -1\pmod{p}$ has no solutions if $p$ is a prime of the form $4k+3$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/507160', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/74480/']} | jdg_75974 |
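A small worked instance of the $4k+3$ argument above (the choice $n=5$ is only for illustration): with $n=5$ we get $N = 4\cdot 5! - 1 = 479$. No prime $p \le 5$ can divide $N$, since such a $p$ divides $4\cdot 5!$ and would then divide $1$. Moreover $N \equiv 3 \pmod 4$, so its prime factors cannot all be $\equiv 1 \pmod 4$. Here $N = 479$ happens to be prime itself, and $479 = 4\cdot 119 + 3$, so we have produced a prime of the form $4k+3$ exceeding $5$; repeating the argument with ever larger $n$ yields infinitely many such primes.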
stackexchange | llm_judgeable_groundtruth_similarity | 2713787 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If we randomly arrange 5 math books, 6 biology books, 8 history books, and 3 literature books on a shelf, what is the probability that the math books end up together? My work: Let $S$ be the set of all arrangements, so $|S|=22!$. Let $E$ be the event "the math books are together". Let $M$ be the block of math books, counted as a single item, so $|M|=1$, and let $B$ be the set of the other books, so $|B|=17$. We need to know in how many ways we can order $M$ within the set of 18 items ($|B|+|M|$); this is $18!$. So we have $18!$ ways to order the books with the math books together, and consequently the probability that the math books are together is $\frac{18!}{22!}$. Is this reasoning correct?
Now provide the response and nothing else.
| Assume that the books are individually distinguishable: i.e. that one math book (say) is different from another. That seems to be what you are assuming in your approach. As you say, there are $22!$ ways in total. There are $5!$ ways to arrange math books as a block. Once you have chosen one of those, there are $18!$ ways to arrange the $18$ objects consisting of the $17$ non-math books and the block of math books. Hence the answer is $$\boxed {\frac {5!\times 18!}{22!}}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2713787', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/334143/']} | jdg_75975 |
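For completeness, the numerical value of the boxed answer is $$\frac{5!\times 18!}{22!} = \frac{5!}{22\cdot 21\cdot 20\cdot 19} = \frac{120}{175560} = \frac{1}{1463} \approx 0.00068,$$ i.e. roughly a 0.07% chance that the math books end up together.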
stackexchange | llm_judgeable_groundtruth_similarity | 8238691 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a UITextField where the user can enter a name and save it. But the user should not be allowed to enter blank spaces in the text field. 1 - How can I find out if the user has entered two blank spaces, or nothing but blank spaces, in the text field? 2 - How can I know if the text field is filled only with blank spaces? Edit - It is invalid to enter only white spaces (blank spaces).
Now provide the response and nothing else.
| You can "trim" the text, that is remove all the whitespace at the start and end. If all that's left is an empty string, then only whitespace (or nothing) was entered. NSString *rawString = [textField text];NSCharacterSet *whitespace = [NSCharacterSet whitespaceAndNewlineCharacterSet];NSString *trimmed = [rawString stringByTrimmingCharactersInSet:whitespace];if ([trimmed length] == 0) { // Text was empty or only whitespace.} If you want to check whether there is any whitespace (anywhere in the text), you can do it like this: NSRange range = [rawString rangeOfCharacterFromSet:whitespace];if (range.location != NSNotFound) { // There is whitespace.} If you want to prevent the user from entering whitespace at all, see @Hanon's solution. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8238691', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/541582/']} | jdg_75976 |
stackexchange | llm_judgeable_groundtruth_similarity | 37180731 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my project I am including this library like this: <script src="http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/src/markerclusterer.js"></script> However, the src path seems to be broken since yesterday, so the JS is not loading and I am unable to show clusters on the map. I do have a local copy of this JS, and I can set the src path to refer to my local copy instead of the ../svn/trunk/.. path above. What I am looking for: is there any alternative path for this library? Maybe it is hosted on a CDN. So far, I think, the above is the only place where it is hosted: Ref: https://code.google.com/p/google-maps-utility-library-v3/source/checkout
Now provide the response and nothing else.
| As Google moved the source over to GitHub a while back, the new GitHub version can be accessed from RawGit by using the following script url: https://cdn.rawgit.com/googlemaps/js-marker-clusterer/gh-pages/src/markerclusterer.js You'll also need to specify the imagePath option when instantiating your MarkerClusterer to access the images from GitHub: var mc = new MarkerClusterer(map, markers, { imagePath: 'https://cdn.rawgit.com/googlemaps/js-marker-clusterer/gh-pages/images/m' }); The following earlier SO post contains more detail regarding the imagePath reference to the cluster images: Google maps not working in https:// Whilst the above urls (with the cdn prefixes) have no traffic limits or throttling and the files are served via a super fast global CDN , please bear in mind that RawGit is a free hosting service and offers no uptime or support guarantees . This is covered in more detail in the following SO answer: Link and execute external JavaScript file hosted on GitHub This post also covers that, if you're linking to files on GitHub, in production you should consider targeting a specific release tag to ensure you're getting a specific release version of the script. However, as the custodians of the js-marker-clusterer repository have yet to create any releases, this isn't currently possible. As a result, you should seriously consider downloading and including the library and its resources directly in your project for production purposes. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37180731', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/974616/']} | jdg_75977 |
stackexchange | llm_judgeable_groundtruth_similarity | 1739799 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We've been talking about threads in my operating systems class a lot lately, and one question has come to my mind. Since Go (and Java) use user-space threads instead of kernel threads, doesn't that mean that you can't effectively take advantage of multiple cores, since the OS only allocates CPU time to the process and not to the individual threads? This seems to confirm the fact that you can't, and Wikipedia also seems to think so.
Now provide the response and nothing else.
| What makes you think Go uses User-space threads? It doesn't. It uses OS-threads and can take advantage of multiple cores. You might be puzzled by the fact that by default Go only uses 1 thread to run your program. If you start two goroutines they run in one thread. But if one goroutine blocks for I/O Go creates a second thread and continues to run the other goroutine on the new thread. If you really want to unlock the full multi-core power just use the GOMAXPROCS() function. runtime.GOMAXPROCS(4); //somewhere in main Now your program would use 4 OS-threads (instead of 1) and would be able to fully use a e.g. 4 core system. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1739799', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/126912/']} | jdg_75978 |
stackexchange | llm_judgeable_groundtruth_similarity | 29472797 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We are working on a latency sensitive application and have been microbenchmarking all kinds of methods (using jmh ). After microbenchmarking a lookup method and being satisfied with the results, I implemented the final version, only to find that the final version was 3 times slower than what I had just benchmarked. The culprit was that the implemented method was returning an enum object instead of an int . Here is a simplified version of the benchmark code: @OutputTimeUnit(TimeUnit.MICROSECONDS)@State(Scope.Thread)public class ReturnEnumObjectVersusPrimitiveBenchmark { enum Category { CATEGORY1, CATEGORY2, } @Param( {"3", "2", "1" }) String value; int param; @Setup public void setUp() { param = Integer.parseInt(value); } @Benchmark public int benchmarkReturnOrdinal() { if (param < 2) { return Category.CATEGORY1.ordinal(); } return Category.CATEGORY2.ordinal(); } @Benchmark public Category benchmarkReturnReference() { if (param < 2) { return Category.CATEGORY1; } return Category.CATEGORY2; } public static void main(String[] args) throws RunnerException { Options opt = new OptionsBuilder().include(ReturnEnumObjectVersusPrimitiveBenchmark.class.getName()).warmupIterations(5) .measurementIterations(4).forks(1).build(); new Runner(opt).run(); }} The benchmark results for above: # VM invoker: C:\Program Files\Java\jdk1.7.0_40\jre\bin\java.exe# VM options: -Dfile.encoding=UTF-8Benchmark (value) Mode Samples Score Error UnitsbenchmarkReturnOrdinal 3 thrpt 4 1059.898 ± 71.749 ops/usbenchmarkReturnOrdinal 2 thrpt 4 1051.122 ± 61.238 ops/usbenchmarkReturnOrdinal 1 thrpt 4 1064.067 ± 90.057 ops/usbenchmarkReturnReference 3 thrpt 4 353.197 ± 25.946 ops/usbenchmarkReturnReference 2 thrpt 4 350.902 ± 19.487 ops/usbenchmarkReturnReference 1 thrpt 4 339.578 ± 144.093 ops/us Just changing the return type of the function changed the performance by a factor of almost 3. I thought that the sole difference between returning an enum object versus an integer is that one returns a 64 bit value (reference) and the other returns a 32 bit value. One of my colleagues was guessing that returning the enum added additional overhead because of the need to track the reference for potential GC. (But given that enum objects are static final references, it seems strange that it would need to do that). What is the explanation for the performance difference? UPDATE I shared the maven project here so that anyone can clone it and run the benchmark. If anyone has the time/interest, it would be helpful to see if others can replicate the same results. (I've replicated on 2 different machines, Windows 64 and Linux 64, both using flavors of Oracle Java 1.7 JVMs). @ZhekaKozlov says he did not see any difference between the methods. To run: (after cloning repository) mvn clean installjava -jar .\target\microbenchmarks.jar function.ReturnEnumObjectVersusPrimitiveBenchmark -i 5 -wi 5 -f 1
Now provide the response and nothing else.
| TL;DR: You should not put BLIND trust into anything. First things first: it is important to verify the experimental data before jumping to the conclusions from them. Just claiming something is 3x faster/slower is odd, because you really need to follow up on the reason for the performance difference, not just trust the numbers. This is especially important for nano-benchmarks like you have. Second, the experimenters should clearly understand what they control and what they don't. In your particular example, you are returning the value from @Benchmark methods, but can you be reasonably sure the callers outside will do the same thing for primitive and the reference? If you ask yourself this question, then you'll realize you are basically measuring the test infrastructure. Down to the point. On my machine (i5-4210U, Linux x86_64, JDK 8u40), the test yields: Benchmark (value) Mode Samples Score Error Units...benchmarkReturnOrdinal 3 thrpt 5 0.876 ± 0.023 ops/ns...benchmarkReturnOrdinal 2 thrpt 5 0.876 ± 0.009 ops/ns...benchmarkReturnOrdinal 1 thrpt 5 0.832 ± 0.048 ops/ns...benchmarkReturnReference 3 thrpt 5 0.292 ± 0.006 ops/ns...benchmarkReturnReference 2 thrpt 5 0.286 ± 0.024 ops/ns...benchmarkReturnReference 1 thrpt 5 0.293 ± 0.008 ops/ns Okay, so reference tests appear 3x slower. But wait, it uses an old JMH (1.1.1), let's update to current latest (1.7.1): Benchmark (value) Mode Cnt Score Error Units...benchmarkReturnOrdinal 3 thrpt 5 0.326 ± 0.010 ops/ns...benchmarkReturnOrdinal 2 thrpt 5 0.329 ± 0.004 ops/ns...benchmarkReturnOrdinal 1 thrpt 5 0.329 ± 0.004 ops/ns...benchmarkReturnReference 3 thrpt 5 0.288 ± 0.005 ops/ns...benchmarkReturnReference 2 thrpt 5 0.288 ± 0.005 ops/ns...benchmarkReturnReference 1 thrpt 5 0.288 ± 0.002 ops/ns Oops, now they are only barely slower. BTW, this also tells us the test is infrastructure-bound. Okay, can we see what really happens? If you build the benchmarks, and look around what exactly calls your @Benchmark methods, then you'll see something like: public void benchmarkReturnOrdinal_thrpt_jmhStub(InfraControl control, RawResults result, ReturnEnumObjectVersusPrimitiveBenchmark_jmh l_returnenumobjectversusprimitivebenchmark0_0, Blackhole_jmh l_blackhole1_1) throws Throwable { long operations = 0; long realTime = 0; result.startTime = System.nanoTime(); do { l_blackhole1_1.consume(l_longname.benchmarkReturnOrdinal()); operations++; } while(!control.isDone); result.stopTime = System.nanoTime(); result.realTime = realTime; result.measuredOps = operations;} That l_blackhole1_1 has a consume method, which "consumes" the values (see Blackhole for rationale). Blackhole.consume has overloads for references and primitives , and that alone is enough to justify the performance difference. There is a rationale why these methods look different: they are trying to be as fast as possible for their types of argument. They do not necessarily exhibit the same performance characteristics, even though we try to match them, hence the more symmetric result with newer JMH. Now, you can even go to -prof perfasm to see the generated code for your tests and see why the performance is different, but that's beyond the point here. If you really want to understand how returning the primitive and/or reference differs performance-wise, you would need to enter a big scary grey zone of nuanced performance benchmarking. E.g. 
something like this test: @BenchmarkMode(Mode.AverageTime)@OutputTimeUnit(TimeUnit.NANOSECONDS)@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)@Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)@Fork(5)public class PrimVsRef { @Benchmark public void prim() { doPrim(); } @Benchmark public void ref() { doRef(); } @CompilerControl(CompilerControl.Mode.DONT_INLINE) private int doPrim() { return 42; } @CompilerControl(CompilerControl.Mode.DONT_INLINE) private Object doRef() { return this; }} ...which yields the same result for primitives and references: Benchmark Mode Cnt Score Error UnitsPrimVsRef.prim avgt 25 2.637 ± 0.017 ns/opPrimVsRef.ref avgt 25 2.634 ± 0.005 ns/op As I said above, these tests require following up on the reasons for the results. In this case, the generated code for both is almost the same, and that explains the result. prim: [Verified Entry Point] 12.69% 1.81% 0x00007f5724aec100: mov %eax,-0x14000(%rsp) 0.90% 0.74% 0x00007f5724aec107: push %rbp 0.01% 0.01% 0x00007f5724aec108: sub $0x30,%rsp 12.23% 16.00% 0x00007f5724aec10c: mov $0x2a,%eax ; load "42" 0.95% 0.97% 0x00007f5724aec111: add $0x30,%rsp 0.02% 0x00007f5724aec115: pop %rbp 37.94% 54.70% 0x00007f5724aec116: test %eax,0x10d1aee4(%rip) 0.04% 0.02% 0x00007f5724aec11c: retq ref: [Verified Entry Point] 13.52% 1.45% 0x00007f1887e66700: mov %eax,-0x14000(%rsp) 0.60% 0.37% 0x00007f1887e66707: push %rbp 0.02% 0x00007f1887e66708: sub $0x30,%rsp 13.63% 16.91% 0x00007f1887e6670c: mov %rsi,%rax ; load "this" 0.50% 0.49% 0x00007f1887e6670f: add $0x30,%rsp 0.01% 0x00007f1887e66713: pop %rbp 39.18% 57.65% 0x00007f1887e66714: test %eax,0xe3e78e6(%rip) 0.02% 0x00007f1887e6671a: retq [sarcasm] See how easy it is! [/sarcasm] The pattern is: the simpler the question, the more you have to work out to make a plausible and reliable answer. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/29472797', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/376527/']} | jdg_75979 |
stackexchange | llm_judgeable_groundtruth_similarity | 594709 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
RFC 1034 requires us to assign at least two IP addresses for DNS servers. However, redundancy can already be achieved by a single IP address if we use anycast addressing. BGP anycast seems to scale well into hundreds or even thousands of servers. If so, why do we still need multiple IP addresses for DNS servers? Does it actually enhance redundancy (contribute to availability) if we already have anycast in place, or is it just a myth? What problems and errors can we expect to face if we only use a single IP address? By that, I mean totally omitting secondary DNS addresses, or using a bogus IP (e.g. 1.2.3.4 ) for the second address when some setups require at least two.
Now provide the response and nothing else.
| A single anycast IP address does not give you the same redundancy as two unicast IP addresses in distinct IP prefixes would. Often the hardest problem for redundancy is not when something fails completely, but rather when it is misbehaving just enough to still pass the health checks, but not actually be functional. I have seen an anycast DNS setup where a DNS server went down, but packets would still get routed to that DNS server. Whatever was taking care of advertising the prefix might simply not be aware, that the DNS server had gone down. It becomes even more tricky if the DNS server in question is not an authoritative DNS server, but rather a recursive resolver. Such a recursive resolver would need to have both the anycast address for receiving queries from clients and unicast addresses for querying authoritative DNS servers. But if the unicast addresses went down, it could easily look healthy enough that it would still be routed queries. Anycast is a great tool for scalability and reducing latency. But for redundancy it should not stand alone. Multiple redundant anycast pools is however a good solution for availability. A well known example is 8.8.8.8 and 8.8.4.4. Both are anycast addresses, but they should never be routed to the same physical DNS server (assuming Google did their job well). If you have 10 physical DNS servers, you could configure them as 2 pools with 5 servers in each pool or 5 pools with 2 in each pool. You want to avoid having one physical DNS server be in multiple pools simultaneously. So how many IPs should you allocate? You need to have IPs that can be configured as anycast independently of each other. That usually means you'll need to allocate an entire /24 of IPv4 address space or /48 of IPv6 address space for each pool. This may very well limit the number of pools you can have. Additionally if we are talking authoritative servers the DNS reply with all your NS records and A and AAAA glue should fit in a single 512 byte packet. For the root servers this worked out to 13 addresses. But that did not include glue and IPv6, so the number you'd reach would be lower. Each pool should be as geographically distributed as possible. If you have 5 servers in Europe and 5 in Noth America and 2 anycast IPs, you do not create one pool spanning each continent. You put 2 from Europe in a pool with 3 from North America, and the other 5 in the other pool. If you have more than 2 anycast pools, you can let a physical server temporarily be in more than one pool. But you should never allow a physical server to be in all pools at the same time. Combining anycast and unicast is possible, but care must be taken. If you have IPs for two pools, I wouldn't combine. But if you only have a single anycast IP to work with, it may make sense to also include unicast IPs. The problem is that including unicast IPs will not give you as good latency and load balancing. If a physical server is made available by both unicast and anycast, you may risk users reaching the same server as primary and secondary and lose access if it goes down. This can be avoided by only using unicast addresses of servers not in the anycast pool or by always providing users with two unicast addresses. The more unicast addresses you put into the mix, the less queries will be sent to the anycast address, and the less benefit you will get from anycast in terms of latency and scalability. 
| {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/594709', 'https://serverfault.com', 'https://serverfault.com/users/87017/']} | jdg_75980 |
stackexchange | llm_judgeable_groundtruth_similarity | 1683680 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for a macro that will throw an exception if an expression takes longer than X seconds to complete.
Now provide the response and nothing else.
| This question has better answers here: Executing a function with a timeout Futures to the rescue! user=> (let [f (future (reduce * (range 1 1001)))] (.get f 1 java.util.concurrent.TimeUnit/MILLISECONDS))java.util.concurrent.TimeoutException (NO_SOURCE_FILE:0) And to make a macro of it: (defmacro time-limited [ms & body] `(let [f# (future ~@body)] (.get f# ~ms java.util.concurrent.TimeUnit/MILLISECONDS))) So you can do this: user=> (time-limited 1 (reduce * (range 1 1001)))java.util.concurrent.TimeoutException (NO_SOURCE_FILE:0)user=> (time-limited 1 (reduce * (range 1 101)))93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1683680', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/90801/']} | jdg_75981 |
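The Clojure macro above leans directly on java.util.concurrent, so for readers coming from the Java side the same Future.get-with-timeout pattern looks like the sketch below (the helper name and sample tasks are illustrative, not taken from the answer):

import java.util.concurrent.*;

public class TimeLimitedSketch {
    // Run a task, but give up with a TimeoutException after the given number of milliseconds.
    static <T> T timeLimited(long millis, Callable<T> task) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<T> f = pool.submit(task);
            return f.get(millis, TimeUnit.MILLISECONDS);   // throws TimeoutException when too slow
        } finally {
            pool.shutdownNow();                            // best-effort interrupt of a task that overran
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(timeLimited(1000, () -> 6 * 7));        // prints 42
        timeLimited(1, () -> { Thread.sleep(5000); return 0; });   // throws TimeoutException
    }
}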
stackexchange | llm_judgeable_groundtruth_similarity | 5680659 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code: private void button1_Click(object sender, EventArgs e){ var answer = MessageBox.Show( "Do you wish to submit checked items to the ACH bank? \r\n\r\nOnly the items that are checked and have the status 'Entered' will be submitted.", "Submit", MessageBoxButtons.YesNoCancel, MessageBoxIcon.Question, MessageBoxDefaultButton.Button1); if (answer != DialogResult.Yes) return; button1.Enabled = false; progressBar1.Maximum = dataGridView1.Rows.Count; progressBar1.Minimum = 0; progressBar1.Value = 0; progressBar1.Step = 1; foreach (DataGridViewRow row in dataGridView1.Rows) { if ((string) row.Cells["Status"].Value == "Entered") { progressBar1.PerformStep(); label_Message.Text = @"Sending " + row.Cells["Name"].Value + @" for $" + row.Cells["CheckAmount"].Value + @" to the bank."; Thread.Sleep(2000); } } label_Message.Text = @"Complete."; button1.Enabled = true;} This is a test I am creating to port over to my application. Everything works fine but the label_Message.text being set. It never shows up on the screen. It is being set, I did a console.write on it to verify. It's just not refreshing the screen. I get the "Complete" at the end also. Anyone have any ideas?
Now provide the response and nothing else.
| You're performing a lengthy operation on the UI thread. You should move it to a background thread (via BackgroundWorker for instance) so the UI thread can do things like repaint the screen when needed. You can cheat and execute Application.DoEvents , but I'd really recommend against it. This question and answer are basically what you're asking: Form Not Responding when any other operation performed in C# | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5680659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/302613/']} | jdg_75982 |
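The answer names BackgroundWorker but shows no code. Purely to illustrate the same keep-the-UI-thread-free pattern, here is a sketch using Java Swing's SwingWorker, an analogous API (this is not WinForms' BackgroundWorker, and the label/progress-bar names are made up to mirror the question):

import javax.swing.*;
import java.util.List;

public class SendChecksWorker extends SwingWorker<Void, String> {
    private final JLabel statusLabel;
    private final JProgressBar progressBar;

    SendChecksWorker(JLabel statusLabel, JProgressBar progressBar) {
        this.statusLabel = statusLabel;
        this.progressBar = progressBar;
    }

    @Override
    protected Void doInBackground() throws Exception {      // runs OFF the UI thread
        for (int i = 1; i <= 10; i++) {
            Thread.sleep(2000);                             // the slow work
            publish("Sent item " + i + " to the bank.");    // hand a progress message to the UI thread
            setProgress(i * 10);
        }
        return null;
    }

    @Override
    protected void process(List<String> chunks) {           // runs ON the UI thread
        statusLabel.setText(chunks.get(chunks.size() - 1));
        progressBar.setValue(getProgress());
    }

    @Override
    protected void done() {                                 // also on the UI thread
        statusLabel.setText("Complete.");
    }
}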
stackexchange | llm_judgeable_groundtruth_similarity | 6085568 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I make a one to one mapping. public class Setting{ public virtual Guid StudentId { get; set; } public virtual DateFilters TaskFilterOption { get; set; } public virtual string TimeZoneId { get; set; } public virtual string TimeZoneName { get; set; } public virtual DateTime EndOfTerm { get; set; } public virtual Student Student { get; set; }} Setting Class map: public SettingMap() { // Id(Reveal.Member<Setting>("StudentId")).GeneratedBy.Foreign("StudentId"); //Id(x => x.StudentId); Map(x => x.TaskFilterOption) .Default(DateFilters.All.ToString()) .NvarcharWithMaxSize() .Not.Nullable(); Map(x => x.TimeZoneId) .NvarcharWithMaxSize() .Not.Nullable(); Map(x => x.TimeZoneName) .NvarcharWithMaxSize() .Not.Nullable(); Map(x => x.EndOfTerm) .Default("5/21/2011") .Not.Nullable(); HasOne(x => x.Student);} Student Class map public class StudentMap: ClassMap<Student> { public StudentMap() { Id(x => x.StudentId); HasOne(x => x.Setting) .Cascade.All(); }}public class Student{ public virtual Guid StudentId { get; private set; } public virtual Setting Setting { get; set; }} Now every time I try to create a settings object and save it to the database it crashes. Setting setting = new Setting { TimeZoneId = viewModel.SelectedTimeZone, TimeZoneName = info.DisplayName, EndOfTerm = DateTime.UtcNow.AddDays(-1), Student = student}; The INSERT statement conflicted with the FOREIGN KEY constraint "FK_Settings_Students". The conflict occurred in database "Database", table "dbo.Students", column 'StudentId'.The statement has been terminated.Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: The INSERT statement conflicted with the FOREIGN KEY constraint "FK_Settings_Students". The conflict occurred in database "Database", table "dbo.Students", column 'StudentId'.The statement has been terminated. What am I missing? Edit public class StudentMap: ClassMap<Student> { public StudentMap() { Id(x => x.StudentId) .GeneratedBy.Guid(); HasOne(x => x.Setting) .PropertyRef("Student") .Cascade.All(); }}public class SettingMap: ClassMap<Setting> { public SettingMap() { Id(x => x.StudentId) .GeneratedBy.Guid(); Map(x => x.TaskFilterOption) .Default(DateFilters.All.ToString()) .NvarcharWithMaxSize().Not.Nullable(); Map(x => x.TimeZoneId) .NvarcharWithMaxSize().Not.Nullable(); Map(x => x.TimeZoneName) .NvarcharWithMaxSize().Not.Nullable(); Map(x => x.EndOfTerm) .Default("5/21/2011").Not.Nullable(); References(x => x.Student).Unique(); }} Setting setting = new Setting { TimeZoneId = viewModel.SelectedTimeZone, TimeZoneName = info.DisplayName, EndOfTerm = DateTime.UtcNow.AddDays(-1), Student = student};studentRepo.SaveSettings(setting);studentRepo.Commit(); I get these error for both ways Invalid index 5 for this SqlParameterCollection with Count=5. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.IndexOutOfRangeException: Invalid index 5 for this SqlParameterCollection with Count=5. Source Error: Line 76: using (ITransaction transaction = session.BeginTransaction()) Line 77: { Line 78: transaction.Commit(); Line 79: } Line 80: }
Now provide the response and nothing else.
| There are two basic ways how to map bidirectional one-to-one association in NH. Let's say the classes look like this: public class Setting{ public virtual Guid Id { get; set; } public virtual Student Student { get; set; }}public class Student{ public virtual Guid Id { get; set; } public virtual Setting Setting { get; set; }} Setting class is a master in the association ("aggregate root"). It is quite unusual but it depends on problem domain... Primary key association public SettingMap(){ Id(x => x.Id).GeneratedBy.Guid(); HasOne(x => x.Student).Cascade.All();}public StudentMap(){ Id(x => x.Id).GeneratedBy.Foreign("Setting"); HasOne(x => x.Setting).Constrained();} and a new setting instance should be stored: var setting = new Setting(); setting.Student = new Student(); setting.Student.Name = "student1"; setting.Student.Setting = setting; setting.Name = "setting1"; session.Save(setting); Foreign key association public SettingMap(){ Id(x => x.Id).GeneratedBy.Guid(); References(x => x.Student).Unique().Cascade.All();}public StudentMap(){ Id(x => x.Id).GeneratedBy.Guid(); HasOne(x => x.Setting).Cascade.All().PropertyRef("Student");} Primary key association is close to your solution. Primary key association should be used only when you are absolutely sure that the association will be always one-to-one. Note that AllDeleteOrphan cascade is not supported for one-to-one in NH. EDIT: For more details see: http://fabiomaulo.blogspot.com/2010/03/conform-mapping-one-to-one.html http://ayende.com/blog/3960/nhibernate-mapping-one-to-one | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6085568', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/130015/']} | jdg_75983 |
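For comparison only: the same two flavours described above — primary-key association versus foreign-key association — also exist in JPA/Hibernate on the Java side. The sketch below is not Fluent NHibernate syntax and the entity shapes are assumed; it only shows the primary-key (@MapsId) variant of a bidirectional one-to-one:

import jakarta.persistence.*;

@Entity
class Student {
    @Id @GeneratedValue
    Long id;

    // Inverse side: Setting owns the association and carries the shared key.
    @OneToOne(mappedBy = "student", cascade = CascadeType.ALL)
    Setting setting;
}

@Entity
class Setting {
    @Id
    Long id;                              // primary-key association: same value as Student.id

    @MapsId                               // derive this id from the association below;
    @OneToOne(fetch = FetchType.LAZY)     // the foreign key doubles as this table's primary key
    Student student;

    String timeZoneId;
}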
stackexchange | llm_judgeable_groundtruth_similarity | 27032517 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
word2vec is an open source tool by Google: for each word it provides a vector of float values. What exactly do these values represent? There is also a paper on paragraph vectors; can anyone explain how they are using word2vec to obtain a fixed-length vector for a paragraph?
Now provide the response and nothing else.
| TLDR : Word2Vec is building word projections ( embeddings ) in a latent space of N dimensions, (N being the size of the word vectors obtained). The float values represents the coordinates of the words in this N dimensional space. The major idea behind latent space projections, putting objects in a different and continuous dimensional space, is that your objects will have a representation (a vector) that has more interesting calculus characteristics than basic objects. For words, what's useful is that you have a dense vector space which encodes similarity (i.e tree has a vector which is more similar to wood than from dancing). This opposes to classical sparse one-hot or "bag-of-word" encoding which treat each word as one dimension making them orthogonal by design (i.e tree,wood and dancing all have the same distance between them) Word2Vec algorithms do this: Imagine that you have a sentence: The dog has to go ___ for a walk in the park. You obviously want to fill the blank with the word "outside" but you could also have "out". The w2v algorithms are inspired by this idea. You'd like all words that fill in the blanks near, because they belong together - This is called the Distributional Hypothesis - Therefore the words "out" and "outside" will be closer together whereas a word like "carrot" would be farther away. This is sort of the "intuition" behind word2vec. For a more theorical explanation of what's going on i'd suggest reading: GloVe: Global Vectors for Word Representation Linguistic Regularities in Sparse and Explicit Word Representations Neural Word Embedding as Implicit Matrix Factorization For paragraph vectors, the idea is the same as in w2v. Each paragraph can be represented by its words. Two models are presented in the paper. In a "Bag of Word" way (the pv-dbow model) where one fixed length paragraph vector is used to predict its words. By adding a fixed length paragraph token in word contexts (the pv-dm model). By retropropagating the gradient they get "a sense" of what's missing, bringing paragraph with the same words/topic "missing" close together. Bits from the article : The paragraph vector and word vectors are averaged or concatenated to predict the next word in a context. [...] The paragraph token can be thought of as another word. It acts as a memory that remembers what is missing from the current context – or the topic of the paragraph For full understanding on how these vectors are built you'll need to learn how neural nets are built and how the backpropagation algorithm works. (i'd suggest starting by this video and Andrew NG's Coursera class) NB: Softmax is just a fancy way of saying classification, each word in w2v algorithms is considered as a class. Hierarchical softmax/negative sampling are tricks to speed up softmax and handle a lot of classes. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27032517', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3992452/']} | jdg_75984 |
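A tiny, self-contained illustration of the claim above that a dense vector space encodes similarity; the three-dimensional vectors are invented for the example (real word2vec embeddings have hundreds of learned dimensions):

public class CosineToy {
    // Cosine similarity between two dense vectors.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        double[] tree    = {0.9, 0.1, 0.0};   // made-up embeddings
        double[] wood    = {0.8, 0.2, 0.1};
        double[] dancing = {0.0, 0.9, 0.4};
        System.out.println(cosine(tree, wood));      // high, ~0.98: "tree" sits near "wood"
        System.out.println(cosine(tree, dancing));   // low,  ~0.10: "tree" sits far from "dancing"
        // With one-hot encodings every pair of distinct words would score exactly 0.
    }
}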
stackexchange | llm_judgeable_groundtruth_similarity | 23730159 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Following the advice I have been given in this thread [ Ninject UOW pattern, new ConnectionString after user is authenticated I now understand that I should not use the following line... var applicationConfiguration = (IApplicationConfiguration) DependencyResolver.Current.GetService(typeof(IApplicationConfiguration)); ...as a Service Locator is an anti-pattern. But in the case of the following procedure how can I instantiate my concrete object that implements " IApplicationConfiguration " so that I can use that object to get the unknown user role name, or use it to assign to the " ApplicationConfiguration " property of my principle? Global.asax public class MvcApplication : NinjectHttpApplication{ /// <summary> /// Handles the PostAuthenticateRequest event of the Application control. /// </summary> /// <param name="sender">The source of the event.</param> /// <param name="e">The <see cref="EventArgs"/> instance containing the event data.</param> protected void Application_PostAuthenticateRequest(Object sender, EventArgs e) { String[] roles; var applicationConfiguration = (IApplicationConfiguration) DependencyResolver.Current.GetService(typeof(IApplicationConfiguration)); var identity = HttpContext.Current.User.Identity; if (Request.IsAuthenticated) { roles = Roles.GetRolesForUser(identity.Name); } else { roles = new[] { applicationConfiguration.UnknownUserRoleName }; } var webIdentity = new WebIdentity(identity, roles); var principal = new WebsitePrincipal(webIdentity) { ApplicationConfiguration = applicationConfiguration }; HttpContext.Current.User = principal; } . . .} Resolution Mapping Code public class ApplicationConfigurationContractMapping : NinjectModule{ public override void Load() { Bind<IApplicationConfiguration>() .To<ApplicationConfiguration>(); }} ApplicationConfiguration public class ApplicationConfiguration : IApplicationConfiguration{ . . . .} I am using Ninject as my Dependency Injection framework. Any suggestions appreciated. EDIT: Full code can be seen here: https://github.com/dibley1973/Dibware.Template.Presentation.Web
Now provide the response and nothing else.
| You can't prevent having to call either the DI Container or an abstraction over it in your Application_PostAuthenticateRequest , but that shouldn't be a problem, since this Application_PostAuthenticateRequest can be considered to be part of your Composition Root . Or in other words: you have to resolve it somewhere. The problem in your case, however, is that this method contains an awful lot of code, and the real problem is that you are missing an abstraction. To resolve this, extract all logic of this method into a new class and hide it behind an abstraction. What will be left is the following code: protected void Application_PostAuthenticateRequest(Object sender, EventArgs e){ var provider = (IPostAuthenticateRequestProvider) DependencyResolver.Current.GetService(typeof(IPostAuthenticateRequestProvider)); provider.ApplyPrincipleToCurrentRequest();} The code can be built up by your DI Container, and will have the following signature: public class MvcPostAuthenticateRequestProvider : IPostAuthenticateRequestProvider{ private readonly IApplicationConfiguration configuration; public MvcPostAuthenticateRequestProvider(IApplicationConfiguration configuration) { this.configuration = configuration; } public void ApplyPrincipleToCurrentRequest() { // ... }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23730159', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/254215/']} | jdg_75985 |
stackexchange | llm_judgeable_groundtruth_similarity | 7911627 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was wondering if my below implementation is the most efficient way to dispose the SQLconnection in this case. I know normally if i'm using the SqlConnection directly I can just wrap the connection inside a using block to dispose it off automatically, but in this case i wanted to keep the connection open and available to All methods in the SQLRespository class. public class SqlRepository : IRepository{ private readonly string connectionString; private SqlConnection connection; public SqlRepository(string connectionString) { this.connectionString = connectionString; connection = new SqlConnection(connectionString); connection.Open(); } public void Method_A() { // uses the SqlConnection to fetch data } public void Method_B() { // uses the SqlConnection to fetch data } public void Dispose() { connection.Dispose(); }} Usage: using (IRepository repository = new SqlRepository(connectionString)){ var item = repository.items; } Update IRepository does implement IDisposable
Now provide the response and nothing else.
| Don't keep the connection open spanning calls. You're defeating connection pooling. If you're working with a connection that's pooled (like sqlserver), it will pool and reuse. Just open and close within method a & b. You could argue that if the caller does what you did with using with one method call it's fine. But if you do using {} with sqlconnection inside each worker method (1) the code will be simpler and (2) you're ensured the pooling wont be defeated (meaning your holding items out of the pooling when other requests could use it). EDIT: Adding pseudo based on comments. The pattern is problematic because a caller can do. //pseudo codeusing (SqlRepository r){ r.MethodA(); // other code here that takes some time. your holding a connection // out of the pool and being selfish. other threads could have // used your connection before you get a chance to use it again. r.MethodB();} // freed for others here. That will kill the scalability of the server - trust me. I've seen very large systems get choked by this - usually because they're spanning AT side transactions. A better pattern: class Repository{ void MethodA() { using (Sqlconnection) { // db call } } void MethodB() { using (Sqlconnection) { // you can even have multiple calls here (roundtrips) // and start transactions. although that can be problematic // for other reasons. } } Now, the pool is most effective. I realize the question was on the disposable pattern - and yes you can do it. But ... I would not let the connection span the lifetime of the repository. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7911627', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1002509/']} | jdg_75986 |
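The open-the-connection-late, close-it-early advice above is not .NET-specific; on the JVM the same shape uses a pooling DataSource in the role of ADO.NET's connection pool. A hedged sketch — the DataSource wiring, table, and column names are assumed:

import javax.sql.DataSource;
import java.sql.*;

public class SqlRepository {
    private final DataSource dataSource;   // pooling DataSource injected from outside

    public SqlRepository(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int methodA() throws SQLException {
        // Borrow a connection, use it, and return it to the pool immediately.
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM items");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }

    public void methodB() throws SQLException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("UPDATE items SET done = 1 WHERE id = ?")) {
            ps.setInt(1, 42);
            ps.executeUpdate();
        }
    }
}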
stackexchange | llm_judgeable_groundtruth_similarity | 4998470 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having trouble with figuring out the best way to store some data in my database. I've got to store DD/MM dates in a database, but I'm not sure of the best way to store this so that it can be easily sorted and searched. Basically a user will be able to save important dates in the format DD/MM, which they will be reminded of closer to the day. The DATE data type doesn't seem completely appropriate as it includes year, but I can't think of another way of storing this data. It would be possible to include a specific year to the end of all occasions, but this almost doesn't seem right.
Now provide the response and nothing else.
| I've got to store DD/MM dates in a database, but I'm not sure of the best way to store this so that it can be easily sorted and searched. The best way to store date data, even if the year component is not required, is to use date. When you need to use it, you can remove the year, or replace it with the year being compared against (or current year). Having it in date column facilitates sorting correctly, integrity, validation etc. To cater for leap years, use a year like '0004' which allows '0004-02-29'. Using year 4 makes it slightly more complicated than year 0, but as an example, this turns the date '29-Feb' (year agnostic) into a date in this year for comparison with some other field select adddate( subdate(cast('0004-02-29' as date), interval 4 year), interval year(curdate()) year)result: 2011-02-28 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4998470', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/56872/']} | jdg_75987 |
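If the application layer happens to be on the JVM, java.time.MonthDay models exactly the "date without a year" in the answer, and MonthDay.atYear applies the same February-29-to-28 adjustment that the SQL above produces for 2011. A small sketch with an assumed value:

import java.time.LocalDate;
import java.time.MonthDay;

public class MonthDayDemo {
    public static void main(String[] args) {
        MonthDay birthday = MonthDay.of(2, 29);          // the stored day-and-month, no year

        // Materialise it in a concrete year for sorting and "is it coming up?" comparisons.
        LocalDate in2011 = birthday.atYear(2011);        // 2011-02-28 (2011 is not a leap year)
        LocalDate in2012 = birthday.atYear(2012);        // 2012-02-29
        System.out.println(in2011 + " / " + in2012);

        LocalDate today = LocalDate.now();
        LocalDate next = birthday.atYear(today.getYear());
        if (next.isBefore(today)) {
            next = birthday.atYear(today.getYear() + 1); // already passed this year
        }
        System.out.println("Next occurrence: " + next);
    }
}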
stackexchange | llm_judgeable_groundtruth_similarity | 213259 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I see a lot of source code that uses PImpl idiom in C++. I assume Its purpose is to hide the private data/type/implementation, so it can remove dependence, and then reduce compile time and header include issue. But interface/pure-abstract classes in C++ also have this capability, they can also be used to hide data/type/implementation. And to let the caller just see the interface when creating an object, we can declare a factory method in the interface's header. The comparison is: Cost : The interface way cost is lower, because you don't even need to repeat the public wrapper function implementation void Bar::doWork() { return m_impl->doWork(); } , you just need to define the signature in the interface. Well understood : The interface technology is better understood by every C++ developer. Performance : Interface way performance is not worse than PImpl idiom, both requires an extra memory access. I assume the performance is same. Following is the pseudocode to illustrate my question: // Forward declaration can help you avoid include BarImpl header, and those included in BarImpl header.class BarImpl;class Bar{public: // public functions void doWork();private: // You don't need to compile Bar.cpp after changing the implementation in BarImpl.cpp BarImpl* m_impl;}; The same purpose can be implemented using interface: // Bar.hclass IBar{public: virtual ~IBar(){} // public functions virtual void doWork() = 0;};// to only expose the interface instead of class name to callerIBar* createObject(); So what's the point of PImpl?
Now provide the response and nothing else.
| First, PImpl is usually used for non-polymorphic classes. And when a polymorphic class has PImpl, it usually remains polymorphic, that is still implements interfaces and overrides virtual methods from base class and so on. So simpler implementation of PImpl is not interface, it is a simple class directly containing the members! There are three reasons to use PImpl: Making the binary interface (ABI) independent of the private members. It is possible to update a shared library without recompiling the dependent code, but only as long as the binary interface remains the same. Now almost any change in header, except for adding a non-member function and adding a non-virtual member function, changes the ABI. The PImpl idiom moves definition of the private members into the source and thus decouples the ABI from their definition. See Fragile Binary Interface Problem When a header changes, all sources including it have to be recompiled. And C++ compilation is rather slow. So by moving definitions of the private members into the source, the PImpl idiom reduces the compilation time, as fewer dependencies need to be pulled in the header, and reduces the compilation time after modifications even more as the dependents don't need to be recompiled (ok, this applies to interface+factory function with hidden concrete class too). For many classes in C++ exception safety is an important property. Often you need to compose several classes in one so that if during operation on more than one member throws, none of the members is modified or you have operation that will leave the member in inconsistent state if it throws and you need the containing object to remain consistent. In such case you implement the operation by creating new instance of the PImpl and swap them when the operation succeeds. Actually interface can also be used for implementation hiding only, but has following disadvantages: Adding non-virtual method does not break ABI, but adding a virtual one does. Interfaces therefore don't allow adding methods at all, PImpl does. Inteface can only be used via pointer/reference, so the user has to take care of proper resource management. On the other hand classes using PImpl are still value types and handle the resources internally. Hidden implementation can't be inherited, class with PImpl can. And of course interface won't help with exception safety. You need the indirection inside the class for that. | {} | {'log_upvote_score': 7, 'links': ['https://softwareengineering.stackexchange.com/questions/213259', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/100543/']} | jdg_75988 |
stackexchange | llm_judgeable_groundtruth_similarity | 9245887 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Twitter's Bootstrap 2 finally added native responsive design. However, by default, when the browser width is below a minimum width it places the sidebar on top. I can see how this would work for many sites, but I actually want the sidebar at the bottom on small-width layouts. I'm brand new to Twitter Bootstrap and tried to figure out what part of the CSS in bootstrap-responsive.css handles this, but I didn't see anything in the section for @media (max-width: 480px) . I looked at the Bootstrap documentation for responsive design and there isn't much detail on it. Would love some pointers...
Now provide the response and nothing else.
| You can achieve this effect by flipping the containers for the sidebar and content area and just floating them the way you want to. This can be done cleanly without messing around too much with the bootstrap stylesheet by assigning your own ids to the sidebar and content area and doing something like this: CSS .sidebar-nav { padding: 9px 0; width:100%;}#sidebar { float:left; margin-left:0;}#content { float:right !important; margin-left:auto;}@media (max-width: 767px) { #sidebar { display: inline-block; margin-top: 20px; width: 100%; } #content { float:none !important; margin-left:0; }} Then all you have to do is flip the container divs like so: <div id="content" class="span9"> ... </div> /* item you want up top first */<div id="sidebar" class="span3"> ... </div> /* last item to be rendered below */ Demo: http://jsfiddle.net/andresilich/YEUwN/1/show/ Edit here: http://jsfiddle.net/andresilich/YEUwN/1/ | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9245887', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/286791/']} | jdg_75989 |
stackexchange | llm_judgeable_groundtruth_similarity | 8779845 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to run a JavaScript function every 10 seconds. I understand the syntax should work as follows, but I am not having any success: function funcName() { alert("test");}var func = funcName();var run = setInterval("func",10000) But this isn't working. Any help?
Now provide the response and nothing else.
| A lot of other answers are focusing on a pattern that does work, but their explanations aren't really very thorough as to why your current code doesn't work. Your code, for reference: function funcName() { alert("test");}var func = funcName();var run = setInterval("func",10000) Let's break this up into chunks. Your function funcName is fine. Note that when you call funcName (in other words, you run it) you will be alerting "test" . But notice that funcName() -- the parentheses mean to "call" or "run" the function -- doesn't actually return a value. When a function doesn't have a return value, it defaults to a value known as undefined . When you call a function, you append its argument list to the end in parentheses. When you don't have any arguments to pass the function, you just add empty parentheses, like funcName() . But when you want to refer to the function itself, and not call it, you don't need the parentheses because the parentheses indicate to run it. So, when you say: var func = funcName(); You are actually declaring a variable func that has a value of funcName() . But notice the parentheses. funcName() is actually the return value of funcName . As I said above, since funcName doesn't actually return any value, it defaults to undefined . So, in other words, your variable func actually will have the value undefined . Then you have this line: var run = setInterval("func",10000) The function setInterval takes two arguments. The first is the function to be ran every so often, and the second is the number of milliseconds between each time the function is ran. However, the first argument really should be a function, not a string. If it is a string, then the JavaScript engine will use eval on that string instead. So, in other words, your setInterval is running the following JavaScript code: func// 10 seconds later....func// and so on However, func is just a variable (with the value undefined , but that's sort of irrelevant). So every ten seconds, the JS engine evaluates the variable func and returns undefined . But this doesn't really do anything. I mean, it technically is being evaluated every 10 seconds, but you're not going to see any effects from that. The solution is to give setInterval a function to run instead of a string. So, in this case: var run = setInterval(funcName, 10000); Notice that I didn't give it func . This is because func is not a function in your code; it's the value undefined , because you assigned it funcName() . Like I said above, funcName() will call the function funcName and return the return value of the function. Since funcName doesn't return anything, this defaults to undefined . I know I've said that several times now, but it really is a very important concept: when you see funcName() , you should think "the return value of funcName ". When you want to refer to a function itself , like a separate entity, you should leave off the parentheses so you don't call it: funcName . So, another solution for your code would be: var func = funcName;var run = setInterval(func, 10000); However, that's a bit redundant: why use func instead of funcName ? Or you can stay as true as possible to the original code by modifying two bits: var func = funcName;var run = setInterval("func()", 10000); In this case, the JS engine will evaluate func() every ten seconds. In other words, it will alert "test" every ten seconds. However, as the famous phrase goes, eval is evil , so you should try to avoid it whenever possible. Another twist on this code is to use an anonymous function. 
In other words, a function that doesn't have a name -- you just drop it in the code because you don't care what it's called. setInterval(function () { alert("test");}, 10000); In this case, since I don't care what the function is called, I just leave a generic, unnamed (anonymous) function there. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8779845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1025234/']} | jdg_75990 |
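The reference-versus-call distinction that the answer hammers on is not unique to JavaScript. As an illustration only, the same mistake and fix can be sketched on the JVM with a scheduled executor and a method reference (names invented for the sketch):

import java.util.concurrent.*;

public class EveryTenSeconds {
    static void funcName() {
        System.out.println("test");
    }

    public static void main(String[] args) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

        // Writing funcName() would mean "call it now and hand over its result" -- the same
        // mistake as `var func = funcName();` above. The scheduler needs a reference to the
        // function itself, just like `setInterval(funcName, 10000)`:
        timer.scheduleAtFixedRate(EveryTenSeconds::funcName, 0, 10, TimeUnit.SECONDS);
        // (The scheduler thread keeps the JVM alive; call timer.shutdown() to stop it.)
    }
}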
stackexchange | llm_judgeable_groundtruth_similarity | 241 |
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
After the resurrection in John's gospel, Jesus appears to the disciples and in John 20:22 he breathes on them and says, "Receive the Holy Spirit." What is happening here? Is Jesus giving them the Holy Spirit by breathing on them? Edit: Maybe this goes a ways towards answering the question, but one of the reasons I'm questioning what's happening here is that connecting John 7:39 , John 17:5 , and John 20:17 (and others) it seems like in John that the Spirit's coming is connected with Jesus' ascending to the Father.
Now provide the response and nothing else.
| Short Answer: There is strong evidence from Scripture that they actually received the Spirit at Pentecost, and that what we see in John 20:22 was Jesus giving them a visual illustration and command in preparation for that event. The Controversy For reference, here is the statement in question: He breathed on them and said to them, “Receive the Holy Spirit." -John 20:22 The question is, why did Jesus say this? There are two main views: A) Jesus said, "Receive the Holy Spirit" because He was actually imparting the Holy Spirit at this time, and wanted them to understand what was happening. The "breathing" was to illustrate the giving of Spiritual life, and would be familiar imagery from their knowledge of Genesis 2:7 . B) Jesus blew on them to illustrate 1 to His disciples that He and the Spirit were one , and that the Spirit would come forth from Him . He then commanded them in advance to "Receive the Holy Spirit" because it was almost time for Him to go to the Father and for the Spirit to come , and He wouldn't be with them to explain it when it happened. 1: Note that the Greek word for Spirit (πνεῦμα) is the same Greek word for wind , and for breath . The Debate The A group would say to the B group: Jesus said, "Receive the Holy Spirit." Clearly that means they received the Holy Spirit. The Spirit was to be given after Jesus was glorified, and He was glorified through His crucifiction and resurrection The B group would say to the A group: Jesus did not say they received the Holy Spirit. There is no record in Scripture of them receiving the Holy Spirit at the time of this command - only a record of Jesus giving the command . It is not "safe" to apply meaning beyond what was actually said , to reason from an absence of Scripture , or to interpret Jesus' commands as statements of historical record . Also, this would not be the first time Jesus did something purely for illustration. ( examples ) It is not clear in Scripture that Jesus was glorified at the time of the resurrection, as opposed to the time of His ascension. Regardless, both events (the statement in John 20:22 as well as Pentecost) were after His resurrection, so this is a moot point. Obviously it is not clear from this passage alone when the Holy Spirit was given. We need to interpret the unclear passages of Scripture in light of what we know from the more clear passages of Scripture. The Context of Scripture The event in Acts 2 is referred to in Scripture as the promise of the Father , the baptism in the Spirit , the Holy Spirit coming upon them , receiving the power to be His witnesses , and being filled with the Holy Spirit . 1) Jesus told His disciples: I am going to Him who sent Me ... it is to your advantage that I go away; for if I do not go away, the Helper will not come to you; but if I go, I will send Him to you. -John 16:5-7 This passage indicates (A) that Jesus had to go to the Father in order for the Spirit to come, and (B) that Jesus would "send" the Spirit after He went. Jesus went to the Father in Acts 1:9-11 . The A group might argue that Jesus could have gone to the Father between His crucifiction and John 20, but John 20:17 seems to say otherwise. I suppose the A group could try to explain the passage by arguing that the Spirit came according to the Father's foreknowledge that Jesus would go to the Father, but that would throw a wrench in the entire discussion by invalidating the Biblical chronology altogether! 
The most obvious solution is that Jesus went to the Father in Acts 1 , and the Spirit was sent in Acts 2 . 2) John 15:26-27 indicates that the coming of the Spirit would result in them being witnesses. This happened at Pentecost. (See Acts 1:4 and Acts 1:8 .) 3) John 16:7-11 indicates that the coming of the Spirit would result in the conviction of the world. This happened at Pentecost. (See Acts 1:4 and Acts 2:14-41 .) See here for a long list of further evidence in support of this interpretation. | {} | {'log_upvote_score': 5, 'links': ['https://hermeneutics.stackexchange.com/questions/241', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/33/']} | jdg_75991 |
stackexchange | llm_judgeable_groundtruth_similarity | 41941106 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to write a function for simplifying terms in a polynomial: Suppose that we can represent each terms by a two-member int array which the first member shows variable and the second member is the coefficient. for example 4a and 3b could be represented by (1, 4) and (2, 3), respectively. in these examples we have shown the variables (a, b, ..) by an integer based on alphabetic order (a:1, b:2, c:3, ...). Therefore, a polynomial could be represented as a vector<-int*> of these terms. for example 4a+2b+10c could be shown like {(1,4), (2,2), (3,10)}.Now, the goal is to simplify a polynomial. for example: 2a+3c-a+5t-3c = a+5t I have written a code in C++ to do this job but it's super slow for large polynomials. void simplify (vector<int*> &p){ vector<int*>:: iterator it1 = p.begin(); vector<int*>:: iterator it2; while (it1!=p.end()) { if ((*it1)[1]!=0) // if the coefficient is not equal to zero { it2 = it1+1; while (it2!=p.end()) { if ((*it1)[0]==(*it2)[0]) //if the variables are similar { (*it1)[1] += (*it2)[1]; (*it2)[1]=0; //set the coefficient equal to zero } it2++; } } it1++; } it1 = p.begin(); while (it1!=p.end()) //removing the terms with zero coefficient { if ((*it1)[1]==0) it1 = p.erase (it1); else it1++; }} I appreciate everyone who can show me what are the code problems and how can I increase my code speed.
Now provide the response and nothing else.
| JVM arrays, on which Kotlin arrays are mapped to, require the element type to be known at compile time to create an instance of array. So you can instantiate Array<String> or Array<Any> , but not Array<T> where T is a type parameter, representing the type that is erased at compile time and hence is unknown. To specify that a type parameter must be known at compile time it is marked with reified modifier. There are several options, what you can do in this situation: Use MutableList<T> for storing elements, which doesn't require reified T: // MutableList function, available in Kotlin 1.1val data = MutableList(rows * cols, { i -> val r = i / cols init(r, i % cols)})// or in Kotlin 1.0val data = mutableListOf<T>().apply { repeat(rows * cols) { i -> val r = i / cols add(init(r, i % cols)) }} Create an array from an inline function with reified type parameter: inline fun <reified T> Matrix2d(val rows: Int, val cols: Int, init: (Int, Int) -> T) = Matrix2d(rows, cols, Array(rows * cols, { .... })class Matrix2d<T> @PublishedApi internal constructor( val rows: Int, val cols: Int, private val data: Array<T> ) Use Array<Any?> as the storage, and cast its values to T in get function: val data = Array<Any?>(rows * cols, { .... })operator fun get(row: Int, col: Int): T = data[row * cols + col] as T Pass a parameter of type Class<T> or KClass<T> to constructor and use java reflection to create an instance of array. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41941106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3140062/']} | jdg_75992 |
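Option 4 above (passing a Class<T> and using reflection) is really the plain-Java answer to the same erasure problem. A hedged sketch of what it looks like without Kotlin's reified sugar; the constructor takes a flat-index initializer just to keep the sketch short:

import java.lang.reflect.Array;
import java.util.function.IntFunction;

public class Matrix2d<T> {
    private final int cols;
    private final T[] data;

    @SuppressWarnings("unchecked")
    public Matrix2d(Class<T> elementType, int rows, int cols, IntFunction<T> init) {
        this.cols = cols;
        // `new T[rows * cols]` is illegal because T is erased; the Class token lets reflection do it.
        this.data = (T[]) Array.newInstance(elementType, rows * cols);
        for (int i = 0; i < data.length; i++) {
            data[i] = init.apply(i);
        }
    }

    public T get(int row, int col) {
        return data[row * cols + col];
    }

    public static void main(String[] args) {
        Matrix2d<String> m = new Matrix2d<>(String.class, 2, 3, i -> "cell" + i);
        System.out.println(m.get(1, 2));   // prints cell5
    }
}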
stackexchange | llm_judgeable_groundtruth_similarity | 309628 |
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use "Other" flags on posts quite a bit, mostly to ask the mods to purge all of the post's comments when they're obsolete or non-constructive. Many of these get handled within a couple of hours. I've had 6 handled within the last 2 days. But others - including very similar flags, just asking for comment purges! - remain in review for ages. My oldest unresolved flag has been in the queue for 20 days now. I'm curious - what's going on behind the scenes? How come I've had flags from the last 24 hours handled within hours of creating them despite having nearly month-old flags still in the queue?
Now provide the response and nothing else.
| I can provide my own (detailed) perspective on the variable handling time for "other" flags. As bluefeet states, this largely has to do with the incredibly diverse nature of these flags and the fact that all of these flags are mixed together in an unordered nature. When we visit the flag queue, we see a dump of all flags of all types, sorted first by posts with multiple flags on them and then by chronological order. Posts that rack up multiple flags on them usually indicate something we need to look into sooner rather than later, so those tend to get acted on first. For sorting, we can refine this to specific flag types: "not an answer", "very low quality", and spam / offensive. The spam / offensive flag sorting appears in bright red at the top of that list, and we tend to handle those before anything else. "very low quality" and "not an answer" flags are now provided for community review in the Low Quality Posts review queue. They are delayed for an hour before they appear in the moderator queues, but it doesn't appear that extending that time would do much to improve the rate at which those are being handled. The timing for how those flags are handled largely depends on the ebb and flow of moderator participation. We've had a few active moderators lately called away simultaneously for personal or work reasons, leading to an extension in the number of unhandled flags here . There also have been a higher number of bad flags here lately (people flagging any answer they see below -2 as "not an answer", flagging competing answers to theirs, flagging paragraph-long answers because they happen to have a link in them, etc.). I imagine the number of these will come back down as people return. The "other" flags are where things often get jammed up. When processing flags, I'm most effective when I can get into a specific mindset and work through all flags of one type, then all flags of another. "Other" flags are all over the map in terms of quality, actions, urgency, and time to process. Aside from chronological order, they are completely unsorted. I try to approach the flag queue by triaging flags based on urgency and how confident I am in what to do with them. Most of the "other" flags that take longer to handle are ones where either moderators weren't sure of what to do with them or they came in when moderators weren't around and were surrounded by these difficult-to-handle flags. The "other" flag queue also unfortunately has a large number of absolutely terrible flags. For example, these are six actual flags I handled in a row on Thursday morning: Sir, i got deadline tonight. Please help. Thank you.. Please I need answers to my question. I'm yet to resolve this challenge. please Help for me.......... Need urgent help!! Expecting a solution need urgently We also see a large number of people still using "other" flags instead of close votes, to indicate answers they think are wrong, to demand that we accept a certain answer, etc. All of these flags waste our time and bury legitimate issues. Moderators have been working with Stack Exchange employees to improve this process, and it's clearly something they're putting effort into. One area that has shown promise is keyword or regex filtering for "other" flags. As an experiment, several of us are using an SE-provided userscript to filter flags and I've found it to be very helpful in grouping types of "other" flags. 
For example, I can pull out many plagiarism flags by filtering for "plagia" or "copied" and apply a 1-2-3 workflow for handling those rapidly. Some might be from a week ago, some a few minutes old, but I can process these all at the same time. Likewise, I can filter for all the college students asking for their posts to be deleted (and decline almost all) by sorting on posts flagged by the original poster who have "delete" in them. Somehow incorporating these capabilities into the system will be a huge help, and I know people at SE have been working on this for a while. Beyond that, I've been thinking about ways of implementing time-of-flag warnings for people attempting to leave terrible flags like the above and catching other cases where flags are being used improperly (migration requests on old questions, any flags about accept votes, etc.). Sorry for the length of this, but I've seen some mistaken impressions about how this process works and I wanted to provide my thoughts from what I've seen over the years. | {} | {'log_upvote_score': 6, 'links': ['https://meta.stackoverflow.com/questions/309628', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/1709587/']} | jdg_75993 |
stackexchange | llm_judgeable_groundtruth_similarity | 12755997 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using a library, ya-csv , that expects either a file or a stream as input, but I have a string. How do I convert that string into a stream in Node?
Now provide the response and nothing else.
| As @substack corrected me in #node , the new streams API in Node v10 makes this easier:
const Readable = require('stream').Readable;
const s = new Readable();
s._read = () => {}; // redundant? see update below
s.push('your text here');
s.push(null);
… after which you can freely pipe it or otherwise pass it to your intended consumer. It's not as clean as the resumer one-liner, but it does avoid the extra dependency. ( Update: in v0.10.26 through v9.2.1 so far, a call to push directly from the REPL prompt will crash with a not implemented exception if you didn't set _read . It won't crash inside a function or a script. If inconsistency makes you nervous, include the noop .) | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/12755997', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/471136/']} | jdg_75994
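Only as a cross-platform aside: the JVM version of "I have a string but the API wants a stream" is a one-liner around ByteArrayInputStream. Everything below is standard JDK; the sample text is made up:

import java.io.*;
import java.nio.charset.StandardCharsets;

public class StringAsStream {
    public static void main(String[] args) throws IOException {
        String csv = "name,age\nalice,30\n";   // stands in for "your text here"
        InputStream in = new ByteArrayInputStream(csv.getBytes(StandardCharsets.UTF_8));

        // Any consumer that expects a stream can now read it.
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            reader.lines().forEach(System.out::println);
        }
    }
}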
stackexchange | llm_judgeable_groundtruth_similarity | 5005 |
Below is a question asked on the forum quantumcomputing.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was recently watching a talk by Urmila Mahadev on "Classical Verification of Quantum Computations" ( see this ). I am not new to quantum computation; I just have a familiarity with the qubit and some parts of quantum mechanics. I did not even get the meaning of "simulation". I am guessing it means that, given an encoding of a quantum device together with its input, the classical device will take this encoding and produce the results. The only thing I have understood is that there may be many states that a quantum device can go through on a particular input. Question: Why is it hard to simulate a quantum device with a classical device? Please note that I am not sure what is meant by "hard". Is it time-wise or space-wise? The meaning of "simulation" is also not clear to me.
Now provide the response and nothing else.
| The first thing to understand is how quantum operations (i.e. quantum gates) and quantum states are mathematically represented: Quantum operations on $n$ qubits are unitary matrices of size $2^n \times 2^n$ . Quantum states on $n$ qubits are complex vectors of size $2^n$ . If you are not 100% sure of theses numbers, you can read more about it in: (Almost?) every book on quantum computing. For example, Nielsen & Chuang wrote about that in the very beginning of their book. These exponents in base $2$ are due to the way states are composed with tensor product. You can read a little bit more about it here . Once you are convinced that the numbers I wrote above are valid, you have your answer: Simulating a quantum device by a classical device is limited by the available RAM memory on the classical computer. To elaborate a little more, think about how a classical computer would simulate a quantum one. One thing that the classical computer will definitely have to store is the current quantum state of the quantum machine it is simulating. As I wrote in the beginning of my answer, a quantum state is a vector of $2^n$ complex numbers. Now let's compute (in the following, byte == octet): The size of a floating-point number is 4 or 8 octets (depending on the precision, i.e. float or double , and assuming a non-exotic classical computer). A complex number is represented by 2 floating-point numbers: one for the real-part and the second for the imaginary-part. So it needs 8 or 16 octets. The quantum state needs $2^n$ complex numbers, i.e. $f_{\text{single}}(n) = 2^{3+n}$ octets if you use single precision or $f_{\text{double}}(n) =2^{4+n}$ octets if you use double precision. Say you want to simulate a n-qubit quantum computer with your classical computer: For $n = 10$ you will need at least $f_{\text{simple}}(10) = 2^{13} = 8192\, \text{o} = 8\, \text{kio}$ . Every classical computer should be able to do this. For $n = 20$ you will need at least $f_{\text{simple}}(20) = 2^{23} = 8388608\, \text{o} = 8\, \text{Mio}$ . Every classical computer should be able to do this. For $n = 30$ you will need at least $f_{\text{simple}}(30) = 2^{33} = 8589934592\, \text{o} = 8\, \text{Gio}$ . A publicly-accessible laptop is capable of doing it, but old computer may not have a sufficient amount of RAM. For $n = 40$ you will need at least $f_{\text{simple}}(40) = 2^{43} = 8796093022208\, \text{o} = 8\, \text{Tio}$ . This is definitely out of reach for publicly-accessible things, you will need access to a computing server. For $n = 50$ you will need at least $f_{\text{simple}}(50) = 2^{53} = 9007199254740992\, \text{o} = 8\, \text{Pio}$ . Even Summit , the TOP 1 computer (in terms of FLOPS), cannot simulate this as it "only" has $2.8\, \text{Pio}$ of RAM. Of course some clever simulation algorithms are capable of using the specific structure of some quantum programs in order to reduce the needed amount of memory. But for a generic quantum program, this is the quantity of RAM you will need. Note that I did not speak about computing power. The cost in terms of floating-point operations is generally not a limitation because most of the quantum circuits are a succession of sparse quantum operations (i.e. they are represented by a sparse matrix) and matrix-vector multiplication with a sparse matrix are quite cheap (depending on the sparseness of the matrix). Nevertheless, note that you may have a $1$ -qubit quantum program that contains $10^{30}$ quantum gates. 
In this case, the simulation algorithm will be time-wise limited, not memory-wise. | {} | {'log_upvote_score': 4, 'links': ['https://quantumcomputing.stackexchange.com/questions/5005', 'https://quantumcomputing.stackexchange.com', 'https://quantumcomputing.stackexchange.com/users/5361/']} | jdg_75995 |
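The arithmetic above is easy to reproduce; a few lines that print the same f_single(n) = 2^(n+3) byte counts quoted in the answer:

public class StateVectorMemory {
    public static void main(String[] args) {
        // 2^n complex amplitudes, each stored as two 4-byte floats => 2^(n+3) bytes.
        for (int n = 10; n <= 50; n += 10) {
            long bytes = 1L << (n + 3);      // 8 KiB, 8 MiB, 8 GiB, 8 TiB, 8 PiB
            System.out.println(n + " qubits -> " + bytes + " bytes");
        }
    }
}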
stackexchange | llm_judgeable_groundtruth_similarity | 15661731 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
For me, the Bracket Highlighter plugin is not coloring and highlighting brackets but just underlining them in white. Here is a snapshot: Does anyone know of a solution?
Now provide the response and nothing else.
| I have modified my example based upon the comments of AGS - it now includes a couple of highlight options and the rest are outline. Thank you AGS and thank you to the original poster for creating this useful thread. bh_core.sublime-settings { "bracket_styles": { // This particular style is used to highlight // unmatched bracket pairs. It is a special // style. "unmatched": { "icon": "question", "color": "brackethighlighter.unmatched", "style": "highlight" }, // User defined region styles "curly": { "icon": "curly_bracket", "color": "brackethighlighter.curly", "style": "highlight" }, "round": { "icon": "round_bracket", "color": "brackethighlighter.round", "style": "outline" }, "square": { "icon": "square_bracket", "color": "brackethighlighter.square", "style": "outline" }, "angle": { "icon": "angle_bracket", "color": "brackethighlighter.angle", "style": "outline" }, "tag": { "icon": "tag", "color": "brackethighlighter.tag", "style": "outline" }, "single_quote": { "icon": "single_quote", "color": "brackethighlighter.quote", "style": "outline" }, "double_quote": { "icon": "double_quote", "color": "brackethighlighter.quote", "style": "outline" }, "regex": { "icon": "regex", "color": "brackethighlighter.quote", "style": "outline" } }} whatever_theme_file_you_use.tmTheme <!-- BEGIN Bracket Highlighter plugin color modifications --><dict> <key>name</key> <string>Unmatched</string> <key>scope</key> <string>brackethighlighter.unmatched</string> <key>settings</key> <dict> <key>foreground</key> <string>#FD971F</string> </dict></dict><dict> <key>name</key> <string>Bracket Curly</string> <key>scope</key> <string>brackethighlighter.curly</string> <key>settings</key> <dict> <key>foreground</key> <string>#FF0000</string> </dict></dict><dict> <key>name</key> <string>Bracket Round</string> <key>scope</key> <string>brackethighlighter.round</string> <key>settings</key> <dict> <key>foreground</key> <string>#0000FF</string> </dict></dict><dict> <key>name</key> <string>Bracket Square</string> <key>scope</key> <string>brackethighlighter.square</string> <key>settings</key> <dict> <key>foreground</key> <string>#800080</string> </dict></dict><dict> <key>name</key> <string>Bracket Angle</string> <key>scope</key> <string>brackethighlighter.angle</string> <key>settings</key> <dict> <key>foreground</key> <string>#AE81FF</string> </dict></dict><dict> <key>name</key> <string>Bracket Tag</string> <key>scope</key> <string>brackethighlighter.tag</string> <key>settings</key> <dict> <key>foreground</key> <string>#FD971F</string> </dict></dict><dict> <key>name</key> <string>Single Quote | Double Quote | Regex</string> <key>scope</key> <string>brackethighlighter.quote</string> <key>settings</key> <dict> <key>foreground</key> <string>#AE81FF</string> </dict></dict><!-- END Bracket Highlighter plugin color modifications --> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15661731', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/812383/']} | jdg_75996 |
stackexchange | llm_judgeable_groundtruth_similarity | 72283 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a list of strings like this: test1 = {"psi-mi:dvl1_mouse(display_long)|uniprotkb:Dvl1(gene name)|psi-mi:Dvl(display_short)|uniprotkb:Dvl(gene name synonym)|uniprotkb:DSH homolog 1(gene name synonym)","psi-mi:nxn_mouse(display_long)|uniprotkb:Nxn(gene name)|psi-mi:Nxn(display_short)|uniprotkb:Gn25(gene name synonym)|uniprotkb:Protein Red-1(gene name synonym)","psi-mi:lrp8_mouse(display_long)|uniprotkb:Apolipoprotein Ereceptor 2(gene name synonym)|uniprotkb:Lrp8(gene name)|psi-mi:Lrp8(display_short)|uniprotkb:Apoer2(gene name synonym)","psi-mi:reln_mouse(display_long)|uniprotkb:Reln(gene name)|psi-mi:Reln(display_short)|uniprotkb:Rl(gene name synonym)|uniprotkb:Reeler protein(gene name synonym)"} and I want to get the gene name , which is between the "uniprotkb:" and "(gene name)|" . So I'm using StringCases and Shortest : Map[StringCases[#, Shortest["|uniprotkb:" ~~ aa__ ~~ "(gene name)"] -> aa] &, test1] and the result is {{"Dvl1"}, {"Nxn"}, {"Apolipoprotein E receptor 2(gene name synonym)|uniprotkb:Lrp8"}, {"Reln"}} where the third element is not correct, because I just want to get "Lrp8" . Can somebody tell me why this happens, and what should I do? Any suggestions will be appreciated. Thank you very much.
Now provide the response and nothing else.
| The behaviour we see here is due to the semantics of the regular expression pattern-matching process. Notionally, the string is scanned from left to right. At each character position, an attempt is made to match the string by looking ahead from that position. If there is no match, scanning moves onto the next character. If there is a match, then (by default) scanning resumes from the next character after the end of the match . In the case at hand, consider what happens when the scanning position is the vertical bar at the beginning of the substring "|uniprotkb:Apolipoprotein" . The scanner looks ahead from that position for a substring that ends "(gene name)" . Since there are no constraints upon what the intervening characters ( aa ) will match, it happily scans right past a vertical bar and into what we consider to be the next entry. Having found match, it returns it and resumes scanning from the vertical bar introducing "|psi-mi:Lrp8" . There is some subtlety to meaning of the term Shortest . It does not mean find the shortest possible match relative to the lengths of other possible matches anywhere in the string. What it means is that the scanner, when looking ahead from any given position, is to stop as soon as it finds a match. The default (non-shortest) behaviour is to continue scanning, and to keep extending the match until there is no other possibility. The default behaviour is normally labelled by the jargon term "greedy". Mathematica's Shortest behaviour is usually labelled "lazy", or sometimes "reluctant". A simple solution, as noted by @Kuba and @belisarius, is to constrain what can be matched by aa : StringCases[test1, "|uniprotkb:" ~~ aa:Except["|"].. ~~ "(gene name)" :> aa](* {{"Dvl1"}, {"Nxn"}, {"Lrp8"}, {"Reln"}} *) By excluding vertical bars from the intervening characters, we are essentially telling the scanner that it is not permitted to extend a match across entry boundaries. | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/72283', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/19458/']} | jdg_75997 |
stackexchange | llm_judgeable_groundtruth_similarity | 54053633 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Some articles suggest that now build() is included in perform() itself, while others suggest that build().perform() is used when multiple actions are to be chained together.
Now provide the response and nothing else.
| build() is included in perform() , you can see it in the source code public void perform() { build().perform();} The perform() method inside Actions calls the perform() method in the inner class BuiltAction . Calling build().perform() in your code is actually calling build() twice, build().build().perform() . build Generates a composite action containing all actions so far, ready to be performed (and resets the internal builder state, so subsequent calls to build() will contain fresh sequences). perform A convenience method for performing the actions without calling build() first. This is a little misleading (IMO), because build() is being called, even if implicitly. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54053633', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2373469/']} | jdg_75998 |
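To illustrate the chaining above, here is a minimal sketch; the element IDs, locators and driver setup are assumptions for illustration, not taken from the original question:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.interactions.Actions;

class DragAndDropSketch {
    // Assumes a WebDriver instance has already been created and navigated to a page.
    static void dragAndDrop(WebDriver driver) {
        WebElement source = driver.findElement(By.id("source")); // hypothetical IDs
        WebElement target = driver.findElement(By.id("target"));

        // Chain several actions and call perform() once; build() runs implicitly.
        new Actions(driver)
                .clickAndHold(source)
                .moveToElement(target)
                .release()
                .perform();
    }
}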
stackexchange | llm_judgeable_groundtruth_similarity | 499401 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got an app in development where you press a button to play a sound using System beep. Instead of recording the output, I was thinking of recording the timing of when the buttons were pressed, so when the user presses a 'Play' button, the buttons are pressed in the same timing the user has pressed them, so it acts like the iPhone has recorded the sounds, but it's just playing them in sequence. I'd like this to occur once the user has pressed a Record button. Example: User presses record Timing starts records timing of each button pressed and when Timing stops after a button has been pressed PLay button plays back the sequence of events Can anybody give any ideas on coding, documents to read etc. Thanks a lot!
Now provide the response and nothing else.
| I would personally format it in two parts: the non-am/pm part, and the am/pm part with ToLower: string formatted = item.PostedOn.ToString("dddd, MMMM d, yyyy a\\t h:mm") + item.PostedOn.ToString("tt").ToLower(); Another option (which I'll investigate in a sec) is to grab the current DateTimeFormatInfo, create a copy, and set the am/pm designators to the lower case version. Then use that format info for the normal formatting. You'd want to cache the DateTimeFormatInfo, obviously... EDIT: Despite my comment, I've written the caching bit anyway. It probably won't be faster than the code above (as it involves a lock and a dictionary lookup) but it does make the calling code simpler: string formatted = item.PostedOn.ToString("dddd, MMMM d, yyyy a\\t h:mmtt", GetLowerCaseInfo()); Here's a complete program to demonstrate: using System;using System.Collections.Generic;using System.Globalization;public class Test{ static void Main() { Console.WriteLine(DateTime.Now.ToString("dddd, MMMM d, yyyy a\\t h:mmtt", GetLowerCaseInfo())); } private static readonly Dictionary<DateTimeFormatInfo,DateTimeFormatInfo> cache = new Dictionary<DateTimeFormatInfo,DateTimeFormatInfo>(); private static object cacheLock = new object(); public static DateTimeFormatInfo GetLowerCaseInfo() { DateTimeFormatInfo current = CultureInfo.CurrentCulture.DateTimeFormat; lock (cacheLock) { DateTimeFormatInfo ret; if (!cache.TryGetValue(current, out ret)) { ret = (DateTimeFormatInfo) current.Clone(); ret.AMDesignator = ret.AMDesignator.ToLower(); ret.PMDesignator = ret.PMDesignator.ToLower(); cache[current] = ret; } return ret; } }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/499401', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/53677/']} | jdg_75999 |
stackexchange | llm_judgeable_groundtruth_similarity | 18311503 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I'm trying to pass a parameter to a javascript function using the razor '@' notation (in an MVC-4 View) but I'm getting Syntax error: "Unterminated string constant" Is there another way I should be writing this? @foreach (var item in Model){ ... <input type="button" value="Assign" onclick="AssignButtonClicked('@item.ID')" />} I've used the same syntax as this answer https://stackoverflow.com/a/5179316/1662619 Edit: It's just a syntax error, the functionality still works
Now provide the response and nothing else.
| If you can use JQuery .data() <input type="button" value="Assign" onclick="AssignButtonClicked(this)" data-assigned-id="@item.ID" /> Further it can be fetched using function AssignButtonClicked(elem){ var id= $(elem).data('assigned-id');} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18311503', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1662619/']} | jdg_76000 |
stackexchange | llm_judgeable_groundtruth_similarity | 30201616 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In C++, I'm trying to print the address of a C-string but there seems to be some problem with my cast. I copied the code from a book but it just doesn't compile on my mac. const char *const word = "hello";cout << word << endl; // Prints "hello"cout << static_cast< void * >(word) << endl; // Prints address of word
Now provide the response and nothing else.
| You are trying to cast away "constness": word points to constant data, but the result of static_cast<void*> is not a pointer to constant data. static_cast will not let you do that. You should use static_cast<const void*> instead. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30201616', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4893250/']} | jdg_76001 |
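A compilable sketch of the fix described above (only the cast is changed, to const void *):

#include <iostream>

int main() {
    const char *const word = "hello";
    std::cout << word << std::endl;                            // prints "hello"
    std::cout << static_cast<const void *>(word) << std::endl; // prints the address
    return 0;
}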
stackexchange | llm_judgeable_groundtruth_similarity | 10513167 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I have a class with a @PostConstruct method, how can I test its constructor and thus its @PostConstruct method using JUnit and Spring? I can't simply use new ClassName(param, param) because then it's not using Spring -- the @PostConstruct method is not getting fired. Am I missing something obvious here? public class Connection { private String x1; private String x2; public Connection(String x1, String x2) { this.x1 = x1; this.x2 = x2; } @PostConstruct public void init() { x1 = "arf arf arf"; }}@Testpublic void test() { Connection c = new Connection("dog", "ruff"); assertEquals("arf arf arf", c.getX1());} I have something similar (though slightly more complex) than this and the @PostConstruct method does not get hit.
Now provide the response and nothing else.
| Have a look at Spring JUnit Runner . You need to inject your class in your test class so that Spring will construct your class and will also call the post-construct method. Refer to the pet clinic example. eg: @RunWith(SpringJUnit4ClassRunner.class)@ContextConfiguration(locations = "classpath:your-test-context-xml.xml")public class SpringJunitTests { @Autowired private Connection c; @Test public void tests() { assertEquals("arf arf arf", c.getX1()); } // ... | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10513167', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/366447/']} | jdg_76002 |
stackexchange | llm_judgeable_groundtruth_similarity | 14077842 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If a class contains a bunch of static methods, in order to make sure no one by mistake initializes an instance of this class, I made a private constructor: private Utils() {} Now .. how could this be tested, given that constructor can't be seen? Can this be test covered at all?
Now provide the response and nothing else.
| Using reflection, you can invoke a private constructor: Constructor<Util> c = Utils.class.getDeclaredConstructor();c.setAccessible(true);Utils u = c.newInstance(); // Hello sailor However, you can make even that not possible: private Utils() { throw new UnsupportedOperationException();} By throwing an exception in the constructor, you prevent all attempts. I would make the class itself final too, just "because": public final class Utils { private Utils() { throw new UnsupportedOperationException(); }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14077842', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/359862/']} | jdg_76003 |
stackexchange | llm_judgeable_groundtruth_similarity | 63552052 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm just trying out WSL 2 with Docker for Windows and I'm having an issue with mounted volumes : version: "3.7"services: node: build: . container_name: node hostname: node volumes: - ./app:/app stdin_open: true The container builds and starts well, I access it with docker exec nicely, but the /app folder inside the container isn't bound to my laptop's app folder. However the right path is actually correctly mounted on the running container : (here I do pwd on the host to see if it matches perfectly with what is mounted on the container) ➜ app pwd /mnt/c/Users/willi/devspace/these/app And this is a screenshot of Portainer telling me what paths are mounted where in the container, and everything matches. The files I create in the app folder on the host are not visible in the app folder of the container and vice-versa. This is weird and I don't know how to debug it. Complementary info: Windows 10 Pro 10.0.19041 Docker for Windows version : 2.3.0.4 docker version output in WSL : 19.03.12 docker-compose version : 1.26.2 Thanks
Now provide the response and nothing else.
| As @Pablo mentioned, the Best-Practice seems to be using WSL File system for mapping Volumes. Take a look at the Docker Documentation concerning WSL2 : Best practices To get the best out of the file system performance when bind-mounting files: Store source code and other data that is bind-mounted into Linux containers (i.e., with docker run -v <host-path>:<container-path> ) in the Linux filesystem, rather than the Windows filesystem. Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem. Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows). Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image> where ~ is expanded by the Linux shell to $HOME . If you have concerns about the size of the docker-desktop-data VHDX, or need to change it, take a look at the WSL tooling built into Windows. If you have concerns about CPU or memory usage, you can configure limits on the memory, CPU, Swap size allocated to the WSL 2 utility VM. To avoid any potential conflicts with using WSL 2 on Docker Desktop, you must uninstall any previous versions of Docker Engine and CLI installed directly through Linux distributions before installing Docker Desktop. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/63552052', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9243790/']} | jdg_76004 |
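As a hedged sketch of what those best practices mean for the compose file in the question (the ~/plural_docker_webdev path is an assumption; any location inside the WSL 2 Linux filesystem works):

# Move the project out of /mnt/c and into the Linux filesystem, then run compose from there:
#   cp -r /mnt/c/Users/willi/devspace/plural_docker_webdev ~/plural_docker_webdev
#   cd ~/plural_docker_webdev && docker-compose up --build
version: "3.7"
services:
  node:
    build: .
    container_name: node
    hostname: node
    volumes:
      - ./app:/app      # ./app now resolves inside the Linux filesystem, so the bind mount and inotify events work
    stdin_open: true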
stackexchange | llm_judgeable_groundtruth_similarity | 11498366 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can I create an UI or a widget in Android that will sit on top of all applications? There are applications that have widgets like this. One example has a camera icon on top of all the applications that, when clicked, will capture the screen.
Now provide the response and nothing else.
| If you want to just display something, you can display it on top of everything even the lockscreen. If you want something to be clickable, you can display it on top of anything except the lockscreen. Here's a sample, modify to your needs: Create a service and do the following: //These three are our main components.WindowManager wm;LinearLayout ll;WindowManager.LayoutParams ll_lp;//Just a sample layout parameters.ll_lp = new WindowManager.LayoutParams();ll_lp.format = PixelFormat.TRANSLUCENT;ll_lp.height = WindowManager.LayoutParams.FILL_PARENT;ll_lp.width = WindowManager.LayoutParams.FILL_PARENT;ll_lp.gravity = Gravity.CLIP_HORIZONTAL | Gravity.TOP;//This one is necessary.ll_lp.type = WindowManager.LayoutParams.TYPE_SYSTEM_ALERT;//Play around with these two.ll_lp.flags = WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE;ll_lp.flags = ll_lp.flags | WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE;//This is our main layout.ll = new LinearLayout(this);ll.setBackgroundColor(android.graphics.Color.argb(0, 0, 0, 0));ll.setHapticFeedbackEnabled(true);//And finally we add what we created to the screen.wm.addView(ll, ll_lp); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11498366', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/450431/']} | jdg_76005 |
stackexchange | llm_judgeable_groundtruth_similarity | 2680162 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
A have a view controller, and it creates a "downloader" object, which has a reference to the view controller (as a delegate). The downloader calls back the view controller if it successfully downloads the item. This works fine as long as you stay on the view, but if you navigate away before the download is complete I get EXC_BAD_ACCESS . I understand why this is happening, but is there any way to check if an object is still allocated?I tried to test using delegate != nil , and [delegate respondsToSelector:] , but it chokes. if (!self.delegate || ![self.delegate respondsToSelector:@selector(downloadComplete:)]) { // delegate is gone, go away quietly [self autorelease]; return; }else { // delegate is still around [self.delegate downloadComplete:result];} I know I could, a) have the downloader objects retain the view controller b) keep an array of downloaders in the view controller, and set their delegate values to nil when I deallocate the view controller. But I wonder if there is an easier way, where I just test if the delegate address contains a valid object?
Now provide the response and nothing else.
| I just ran into this problem and solved it. For ARC, the solution is to use the weak attribute instead of assign . The crash comes because the delegate has an assign attribute AND has been deallocated. The solution is to use the weak attribute, because when the object deallocates, the pointer WILL be set to nil . So when your code calls respondsToSelector on nil , Objective-C will ignore the call and not crash. In your code, when you attempt to call the respondsToSelector method on delegate , you get an EXC_BAD_ACCESS. This is because objects that use the assign property will not be set to nil when they are deallocated. (Hence why doing a !self.delegate check before the respondsToSelector does not prevent respondsToSelector from being called on a deallocated object, and still crashes your code.) As already mentioned, using a strong attribute on a delegate in ARC will result in a retain cycle, and assign brings back the dangling-pointer problem. So don't do it, you don't need to. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2680162', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/196358/']} | jdg_76006 |
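A minimal sketch of the declaration described above; the protocol and class names are assumptions modelled on the question's downloadComplete: callback:

@protocol DownloaderDelegate <NSObject>
- (void)downloadComplete:(id)result;
@end

@interface Downloader : NSObject
// weak (not assign): the pointer is automatically set to nil when the delegate deallocates.
@property (nonatomic, weak) id<DownloaderDelegate> delegate;
@end

// Later, inside the downloader: messaging nil is a no-op, so this cannot crash.
// if ([self.delegate respondsToSelector:@selector(downloadComplete:)]) {
//     [self.delegate downloadComplete:result];
// }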
stackexchange | llm_judgeable_groundtruth_similarity | 22432243 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to insert an object into an existing vector of objects. I know that I need to use an iterator to do it, but I don't know how it exactly works. I have an alphabetically sorted vector and I need to insert a new object by its name at the exact index that I got after some search. So I have this. vector<Person>people;int index =54;Person temp;people.push_back(temp);//insert at end of vectorpeople.insert(index, temp);//doesnt work for int Can anyone help me how to use an iterator properly to insert my object at the 54th index of the vector and move all following objects by one index? Thank you for any help.
Now provide the response and nothing else.
| The straightforward answer is that you need an iterator. The iterator for std::vector supports random access, which means you can add or subtract an integer value to or from an iterator. people.insert(people.begin() + index, temp); The better answer is: don't use an index, use an iterator. What is your loop? You should be able to refactor the loop to use an iterator instead of an index. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22432243', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2973509/']} | jdg_76007 |
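A sketch of the iterator-based refactor suggested above, keeping the vector alphabetically sorted as in the question (the Person layout and comparison are assumptions):

#include <algorithm>
#include <string>
#include <vector>

struct Person {
    std::string name;   // assumed field used for the alphabetical ordering
};

int main() {
    std::vector<Person> people{{"Adams"}, {"Baker"}, {"Clark"}};
    Person temp{"Brown"};

    // Find the first position whose name is not less than temp's, then insert there;
    // all following elements are shifted by one automatically.
    auto pos = std::lower_bound(people.begin(), people.end(), temp,
                                [](const Person &a, const Person &b) { return a.name < b.name; });
    people.insert(pos, temp);
    return 0;
}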
stackexchange | llm_judgeable_groundtruth_similarity | 17291932 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to get the basics of F# clear before moving on to complex examples. The material I'm learning has introduced both Discriminated Unions and Record types. I have reviewed the material for both, but it is still unclear to me why we would use one over the other. Most of the toy examples I have created seem to be implementable in both. Records seem to be very close to what I think of as an object in C#, but I am trying to avoid relying on mapping to C# as a way to understand F#. So... Are there clear reasons to use one over the other? Are there certain canonical cases where one applies? Are there certain functionalities available in one, but not the other?
Now provide the response and nothing else.
| Think of it as a Record is 'and', while a discriminated union is 'or'. This is a string and an int: type MyRecord = { myString: string myInt: int } while this is a value that is either a string or an int, but not both: type MyUnion = | Int of int | Str of string This fictitious game can be in the Title screen, In-game, or displaying the final score, but only one of those options. type Game = | Title | Ingame of Player * Score * Turn | Endgame of Score | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17291932', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/818645/']} | jdg_76008 |
stackexchange | llm_judgeable_groundtruth_similarity | 73459254 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new at programming, and can someone explain to me how this code works? #include <iostream>using namespace std;int main () { int a = 3, b = 4; decltype(a) c = a; decltype((b)) d = a; ++c; ++d; cout << c << " " << d << endl;} I'm quite confused how this code runs, as it gives me a result of 4 4 . Shouldn't it be 5 5 , because it was incremented two times by c and d? I'm getting the hang of decltype, but this assignment has me confused about how the code works again.
Now provide the response and nothing else.
| decltype(a) c = a; becomes int c = a; so c is a copy of a with a value of 3 . decltype((b)) d = a; becomes int& d = a; because (expr) in a decltype will deduce a reference to the expression type. So we have c as a stand alone variable with a value of 3 and d which refers to a which also has a value of 3 . when you increment both c and d both of those 3 s becomes 4 s and that is why you get 4 4 as the output | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/73459254', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18335656/']} | jdg_76009 |
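A small compile-time check of the deductions described above:

#include <type_traits>

int main() {
    int a = 3, b = 4;
    static_assert(std::is_same<decltype(a), int>::value,    "decltype(a) is plain int");
    static_assert(std::is_same<decltype((b)), int&>::value, "decltype((b)) is int& (parenthesised lvalue)");
    (void)a;
    (void)b;
    return 0;
}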
stackexchange | llm_judgeable_groundtruth_similarity | 47816702 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to run a simple node application with express but I get this error: Here's my working directory: I ran the following command to mount my current source code directory to /var/www inside the node container and run node npm start to start up the app; but I get the error above and not sure what to do: docker run -p 8085:3000 -v /home/joel/workspace/plural_docker_webdev:/var/www node -w "/var/www" node npm start And I get this error: docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "exec: \"-w\": executable file not found in $PATH": unknown.ERRO[0000] error waiting for container: context canceled
Now provide the response and nothing else.
| Docker is telling you that the command hit an error.It is trying to run the node image with the command -w .Since -w is not a command, it throws this error. This is because you have written node in a place you probably didn't mean to. Your command is being interpreted like this: docker run -p [port_info] -v [volume_info] node [command] You can rewrite your command like so and it should work fine: docker run -p 8085:3000 -v /home/joel/workspace/plural_docker_webdev:/var/www -w "/var/www" node npm start | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47816702', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2663059/']} | jdg_76010 |
stackexchange | llm_judgeable_groundtruth_similarity | 47212 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My algebra book introduces sum, intersection and product of ideals (in non-commutative rings), and then says that all three operations are commutative and associative, without proof. I see no reason why the product of ideals should be commutative, but I wasn't able to find a counterexample either.
Now provide the response and nothing else.
| It is not true that ideal product is commutative in non-commutative (associative, unital) rings. An easy example is given by triangular rings (see for instance pp 17-22, especially p. 18 of Lam's First Course in Non-commutative Rings). You can easily construct a finite counterexample with 8 elements: Let a be the ring of upper-triangular matrices; let i be the (two-sided) ideal of a consisting of strictly upper triangular matrices, and let j be the (two-sided) ideal consisting of matrices whose second row is zero. Then ij = 0, but ji = i . This works over any field, in particular the field with 2 elements. You can verify this in GAP : gap> a:=AlgebraWithOne(GF(2),[[[0,1],[0,0]],[[1,0],[0,0]]]*One(GF(2)));<algebra-with-one over GF(2), with 2 generators>gap> i:=Ideal(a,[[[0,1],[0,0]]]*One(GF(2)));<two-sided ideal in <algebra-with-one of dimension 3 over GF(2)>, (1 generators)>gap> j:=Ideal(a,[[[1,0],[0,0]],[[0,1],[0,0]]]*One(GF(2)));;gap> ij:=Ideal(a,Concatenation(List(i,x ->List(j,y->x*y))));;gap> ji:=Ideal(a,Concatenation(List(i,x ->List(j,y->y*x))));;gap> Dimension(ij);Dimension(ji);01gap> ji = i;true | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/47212', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/12490/']} | jdg_76011 |
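Spelled out for the 2x2 upper-triangular matrices over GF(2) used in the GAP code above, with generic elements of the two ideals:

\[
\begin{pmatrix}0&x\\0&0\end{pmatrix}\begin{pmatrix}y&z\\0&0\end{pmatrix}=\begin{pmatrix}0&0\\0&0\end{pmatrix},
\qquad
\begin{pmatrix}y&z\\0&0\end{pmatrix}\begin{pmatrix}0&x\\0&0\end{pmatrix}=\begin{pmatrix}0&yx\\0&0\end{pmatrix},
\]

so every product from $ij$ vanishes, while the products from $ji$ (take $y=1$) already give all of $i$; hence $ij=0$ and $ji=i$, matching the GAP output.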
stackexchange | llm_judgeable_groundtruth_similarity | 54903158 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
These two ways to move a button to the right of its parent seem completely equivalent to me. Is there a reason to choose one over the other? Are there circumstances in which flex works to align content to the right where text-align might not suffice? .parent { text-align: right;} <div class="parent"><button>Awesome button!</button></div> .parent { display: flex; justify-content: flex-end;} <div class="parent"><button>Awesome button!</button></div> I'm curious because I noticed that Bootstrap changed from text-align: right to flex between versions 3 and 4 for aligning buttons in a modal's footer section. To illustrate further:
Now provide the response and nothing else.
| Yes there is a big difference. Flexbox is about boxes and block level element whearas text-align is about text and inline level element. When having one element we won't notice the difference but when it comes to multiple element we can see a clear difference. Here is a basic example where we have text and button inside a container: .parent-flex { display: flex; justify-content: flex-end; margin-bottom:10px;}.parent-normal { text-align:right;} <div class="parent-flex">some text here <button>Awesome button!</button></div><div class="parent-normal">some text here <button>Awesome button!</button></div> Note how in the flex container we no more have white space between the text and the button because the text will become a block element 1 and the button too which is not the case in the second example where both are inline element. Until now, it's ok because we can rectify this with margin. Let's put more text and see the difference again: .parent-flex { display: flex; justify-content: flex-end; margin-bottom:10px;}.parent-normal { text-align:right;} <div class="parent-flex">some text here some text here some text here some text here some text here some text here some text here some text here some text here some text here<button>Awesome button!</button></div><div class="parent-normal">some text here some text here some text here some text here some text here some text here some text here some text here some text here some text here<button>Awesome button!</button></div> Now we have a clear difference and we can see that the flex container consider all the text as a block element and the button will not follow the text like in the second container. In some case it can be an intended result but not in all the cases. Let's add a link inside our text: .parent-flex { display: flex; justify-content: flex-end; margin-bottom:10px;}.parent-normal { text-align:right;} <div class="parent-flex">some text here some text here some text here some text here some text here some text here some text <a href="">link</a> here some text here some text here some text here<button>Awesome button!</button></div><div class="parent-normal">some text here some text here some text here some text here some text here some text here some text <a href="">link</a> here some text here some text here some text here<button>Awesome button!</button></div> The flexbox container is getting worse! because the link is also blockified 1 and now we have 4 block elements. The text before the link, the link, the text after and the button. We can clearly see that this behavior is not intended at all. Basically flexbox is useful when it comes to align element that we can consider as block element or container or boxes, etc but not when it comes to text container. text-align is more useful to align text inside the previous block/box/container element. In other words, text-align should be used at text level to align text, images, etc and flexbox should be considered at an upper level to align block element and create layouts. In your case, there is no big difference since we can consider button as boxes or inline-element. the only difference will be the whitespace between the button that you will face if you consider them as inline element when using text-align . 1 Loosely speaking, the flex items of a flex container are boxes representing its in-flow contents. Each in-flow child of a flex container becomes a flex item , and each contiguous sequence of child text runs is wrapped in an anonymous block container flex item . 
However, if the entire sequence of child text runs contains only white space (i.e. characters that can be affected by the white-space property) it is instead not rendered. The display value of a flex item is blockified . A related article I wrote around the same subject: https://dev.to/afif/never-make-your-text-container-a-flexbox-container-m9p Spec reference: https://www.w3.org/TR/css-flexbox-1/#flex-items | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54903158', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1030527/']} | jdg_76012 |
stackexchange | llm_judgeable_groundtruth_similarity | 15444982 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to fetch a table from a database using Java code. The sample code which I tried gets only two columns. I want the fetched data to be presented exactly like it is in the table.How do I do that ? This code only gives me two rows, side by side - while (rs.next()) {System.out.println(rs.getString(4) + " " + rs.getString(6));} Full example at - http://msdn.microsoft.com/en-us/library/aa342339.aspx This is what I tried - int size = 0; if(rs != null){ rs.beforeFirst(); rs.last(); size = rs.getRow(); } System.out.println("cols = " + size); And got an error - The requested operation is not supported on forward only result sets.
Now provide the response and nothing else.
| Use this code Statement st = conn.createStatement();ResultSet rs = st.executeQuery(query);ResultSetMetaData rsmd = rs.getMetaData();int columnsNumber = rsmd.getColumnCount(); Source - How to get the number of columns from a JDBC ResultSet? After using that code, one can display the results like they are displayed by the DBMS as follows - ResultSetMetaData rsmd = rs.getMetaData();int columnsNumber = rsmd.getColumnCount(); // Iterate through the data in the result set and display it. while (rs.next()) {//Print one row for(int i = 1 ; i <= columnsNumber; i++){ System.out.print(rs.getString(i) + " "); //Print one element of a row} System.out.println();//Move to the next line to print the next row. } Column names are not displayed in this example. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15444982', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1643065/']} | jdg_76013 |
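Since the answer notes that column names are not displayed, here is a small, hedged addition that prints a header row from the same ResultSetMetaData; it is meant to go just before the while (rs.next()) loop, and the variable names follow the answer above:

// Print the column labels once, then let the existing loop print the data rows.
for (int i = 1; i <= columnsNumber; i++) {
    System.out.print(rsmd.getColumnLabel(i) + " ");
}
System.out.println();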
stackexchange | llm_judgeable_groundtruth_similarity | 28710346 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried to make one-to-one relationship. But I get error: AnnotationException Referenced property not a (One|Many)ToOneoncom.student.information.service.Department.departmentId in mappedBy of com.student.information.service.DepartmentHead.department Both the entities are almost identical. Department can exists with out department head. Department.Java @Entity@Table(name="department", catalog="student")public class Department { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Integer departmentId; @Column(name="dept_name") private String departmentName; @OneToMany(mappedBy="department",cascade = CascadeType.ALL) private List<Student> student; @OneToOne(targetEntity=Department.class) private DepartmentHead departmenthead;} DepartmentHead.java @Entity@Table(name="departmenthead", catalog = "student")public class DepartmentHead { //This is mapped with the department id @Id @GeneratedValue(strategy=GenerationType.AUTO) private int id; @Column(name="headname") private String headName; @OneToOne(mappedBy = "departmentId",fetch = FetchType.LAZY,cascade=CascadeType.ALL) @JoinColumn(name="dept_id") private Department department; } Caused by: org.hibernate.AnnotationException: Referenced property not a (One|Many)ToOne: can someone please point me in the right direction about what mistake am I making. I am struggling from past 2 days and not able to figure out the solution for the issue. Thanks in advance for help.
Now provide the response and nothing else.
| You have incorrectly set up your mapping. Hibernate is complaining that no field called departmentId is available to set up a one to one or many relationship, and it is correct. You want to map your values like this. Department.Java @Entity@Table(name="department", catalog="student")public class Department { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Integer departmentId; @OneToOne @JoinColumn(name = "id") private DepartmentHead departmenthead;} DepartmentHead.java @Entity@Table(name="departmenthead", catalog = "student")public class DepartmentHead { @Id @GeneratedValue(strategy=GenerationType.AUTO) private int id; @OneToOne(mappedBy = "departmenthead") private Department department; } You point the Department field in DepartmentHead at the DepartmentHead field inside the Department . Hibernate sorts out what ID's to use, you don't have to specify that in the actual link. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28710346', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1816819/']} | jdg_76014 |
stackexchange | llm_judgeable_groundtruth_similarity | 367397 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a convex cost function, using SGD for optimization, we will have a gradient (vector) at a certain point during the optimization process. My question is, given the point on the convex, does the gradient only point at the direction at which the function increase/decrease fastest, or the gradient always points at the optimal/extreme point of the cost function ? The former is a local concept, the latter is a global concept. SGD can eventually converge to the extreme value of the cost function. I'm wondering about the difference between the direction of the gradient given an arbitrary point on the convex and the direction pointing at the global extreme value. The gradient's direction should be the direction at which the function increase/decrease fastest on that point, right?
Now provide the response and nothing else.
| Gradient descent methods use the slope of the surface. This will not necessarily (or even most likely not) point directly towards the extreme point. An intuitive view is to imagine a path of descent that is a curved path. See for instance the examples below. As an analogy: Imagine I blindfold you and put you somewhere on a mountain with the task to walk back to the extreme (low) point. On the hill, if you only have local information, then you do not know in which direction the bottom of the lake will be. If you can assume convexity Then you know that there is only one extreme point. Then you know that you are certainly gonna reach the extreme point as long as you move downwards. And then you also know that the angle between the steepest descent direction and the optimum direction is always at most $\pi/2$ , as Solomonoff's Secret mentioned in the comments. Without convexity The angle may exceed $\pi/2$ . In the image below this is emphasized by drawing an arrow of direction of descent for a particular point where the final solution is behind the line perpendicular to the direction of descent. In the convex problem this is not possible. You could relate this to the isolines for the cost function having a curvature all in the same direction when the problem is convex. In Stochastic Gradient Descent You follow the steepest direction for a single point (and you repeatedly take a step for a different point). In the example the problem is convex, but there may be more than one solution. In the example the extreme values are on a line (instead of a single point), and from this particular viewpoint you could say that The steepest descent direction, may point directly to the "optimum" (although it is only the optimum for the function of that particular training sample point) Below is another view for four data points . Each of the four images shows the surface for a different single point. Each step a different point is chosen along which the gradient is computed. This means that there are only four directions along which a step is made, but the stepsizes decrease when we get closer to the solution. 
The above images are for 4 datapoints generated by the function: $$y_i = e^{-0.4x_i}-e^{-0.8 x_i} + \epsilon_i$$ x = 0 2 4 6 y = 0.006 0.249 0.153 0.098 which results in: a non-convex optimization problem when we minimize the (non-linear) cost function $$ S(a,b) = \sum_{i=1} \left( y_i - (e^{-ax_i}-e^{-b x_i}) \right)^2$$ $$\nabla S(a,b) = \begin{bmatrix} \sum_{i=1} 2 x_i e^{-a x_i}\left( y_i - e^{-ax_i}-e^{-b x_i} \right) \\\sum_{i=1} -2 x_i e^{-b x_i}\left( y_i - e^{-ax_i}-e^{-b x_i} \right) \end{bmatrix}$$ a convex optimization problem (like any linear least squares) when we minimize $$ S(a,b) = \sum_{i=1} \left( y_i - (a e^{-0.4 x_i}- b e^{-0.8 x_i} )\right)^2$$ $$\nabla S(a,b) = \begin{bmatrix} \sum_{i=1} -2 e^{-0.4x_i}\left( y_i - a e^{-0.4x_i}- b e^{-0.8 x_i} \right) \\\sum_{i=1} 2 e^{-0.8x_i}\left( y_i - a e^{-0.4x_i}- b e^{-0.8 x_i} \right) \end{bmatrix}$$ a convex optimization problem (but not with a single minimum) when we minimize for some specific $i$ $$ S(a,b) = \left( y_i - (a e^{-0.4 b x_i}- b e^{-0.8 x_i}) \right)^2$$ which has gradient $$\nabla S(a,b) = \begin{bmatrix} -2 e^{-0.4x_i}\left( y_i - a e^{-0.4x_i}- b e^{-0.8 x_i} \right) \\ 2 e^{-0.8x_i}\left( y_i - a e^{-0.4x_i}- b e^{-0.8 x_i} \right) \end{bmatrix}$$ this has multiple minima (there are multiple $a$ and $b$ for which $S = 0$ ) | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/367397', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/14144/']} | jdg_76015 |
stackexchange | llm_judgeable_groundtruth_similarity | 12269528 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have several CSV files with the format: Year,Day,Hour,Min,Sec.,P1'S1 2003, 1, 0, 0,12.22, 0.541 2003, 1, 1, 0,20.69, 0.708 2003, 1, 2, 0, 4.95, 0.520 2003, 1, 3, 0,13.42, 0.539... (where day , is the day of the year ) and I'm trying to read them using the pandas library (seems a fantastic lib so far). There is a built-in function to read CSV in pandas, and even better, that function supposedly checks the columns for a date type. and automatically uses that as an index (which would be exactly perfect for what I'm doing). The thing is, I cannot get it to work with date data in this format. I tried: data = pd.read_csv("csvFile.csv", index_col=[0, 1], , index_col=[0, 1, 2, 3, 4] parse_dates=True) but it only gets the year correctly : In [36]: data.indexOut[36]: MultiIndex[(<Timestamp: 2003-09-04 00:00:00>, 1, 0, 0, 12.22) (<Timestamp: 2003-09-04 00:00:00>, 1, 1, 0, 20.69) (<Timestamp: 2003-09-04 00:00:00>, 1, 2, 0, 4.95) ..., (<Timestamp: 2003-09-04 00:00:00>, 365, 21, 0, 3.77) (<Timestamp: 2003-09-04 00:00:00>, 365, 22, 0, 14.6) (<Timestamp: 2003-09-04 00:00:00>, 365, 23, 0, 13.36)] From the documentation, I see that you can specify the "date_parser" attribute in the read_csv function of pandas . But the documentation doesn't show how and I'm not being able to figure it out.Anyone with experience in the subject that can give a hand. Cheers,Bruno
Now provide the response and nothing else.
| In order to parse a multi-column date, you need to tell pandas which columns should be combined into a single date, so you need to say parse_dates=['Year','Day','Hour','Min','Sec'] You also need to define your own parser that takes a element from each column you specified in parse_dates : In [1]: import pandas as pdIn [2]: from datetime import datetime, timedeltaIn [3]: from cStringIO import StringIOIn [4]: data = """\Year,Day,Hour,Min,Sec.,P1'S1 2003, 1, 0, 0,12.22, 0.541 2003, 1, 1, 0,20.69, 0.708 2003, 1, 2, 0, 4.95, 0.520 2003, 1, 3, 0,13.42, 0.539"""In [5]: def parse(yr, doy, hr, min, sec): yr, doy, hr, min = [int(x) for x in [yr, doy, hr, min]] sec = float(sec) mu_sec = int((sec - int(sec)) * 1e6) sec = int(sec) dt = datetime(yr - 1, 12, 31) delta = timedelta(days=doy, hours=hr, minutes=min, seconds=sec, microseconds=mu_sec) return dt + delta ...: In [6]: pd.read_csv(StringIO(data), parse_dates={'datetime': ['Year','Day','Hour','Min','Sec.']}, date_parser=parse, index_col='datetime')Out[6]: P1'S1datetime 2003-01-01 00:00:12.220000 0.5412003-01-01 01:00:20.690000 0.7082003-01-01 02:00:04.950000 0.5202003-01-01 03:00:13.419999 0.539 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12269528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/865662/']} | jdg_76016 |
stackexchange | llm_judgeable_groundtruth_similarity | 13698733 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"android:id="@+id/RelativeLayout1"android:layout_width="fill_parent"android:layout_height="fill_parent"android:background="@drawable/bg2x" ><LinearLayout android:id="@+id/linear" android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="vertical" android:gravity="center" android:layout_alignParentTop="true" android:background="#FF00FF00" > <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textColor="#FF000000" android:textSize="70dp" android:text="test linearlayout" /></LinearLayout> Now I wanna Rotate the " linear " layout by 90 degrees. * And I dont wanna use Animation* , is there any other way to achive this?Please help me.Thanks in advance!!
Now provide the response and nothing else.
| Since you don't need animation, from API 11 or above, you can use android:rotation = "90" to rotate in XML itself. If you want to do it in code, say after a button click, then you can use its Java equivalent too: yourLayout = (LinearLayout) findViewById(R.id.your_id); yourLayout.setRotation(90.0f); But not before API 11. See documentation . android:rotation rotation of the view, in degrees. Must be a floating point value, such as "1.2". This may also be a reference to a resource (in the form "@[package:]type:name") or theme attribute (in the form "?[package:][type:]name") containing a value of this type. This corresponds to the global attribute resource symbol rotation. EDIT: After seeing the comment "yeah, I know this method is in API level 11. But what about lower API levels?": I think you will have to rotate the view yourself. And I believe lchorus's and Pete's answers in this thread do work. In Pete's answer he is using animation, but you can set an animation duration of 0 to do it without any visible animation. There is no other straightforward way as far as I know. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13698733', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/819581/']} | jdg_76017 |
stackexchange | llm_judgeable_groundtruth_similarity | 58164583 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a webview in a modal view controller on iOS13.When the user tries to upload an image to the webview, it crashes. This is the exception I'm getting: 2019-09-30 17:50:10.676940+0900 Engage[988:157733] * Terminating app due to uncaught exception 'NSGenericException', reason: 'Your application has presented a UIDocumentMenuViewController (). In its current trait environment, the modalPresentationStyle of a UIDocumentMenuViewController with this style is UIModalPresentationPopover. You must provide location information for this popover through the view controller's popoverPresentationController. You must provide either a sourceView and sourceRect or a barButtonItem. If this information is not known when you present the view controller, you may provide it in the UIPopoverPresentationControllerDelegate method -prepareForPopoverPresentation.' * First throw call stack: (0x18926c98c 0x188f950a4 0x18cb898a8 0x18cb939b4 0x18cb914f8 0x18d283b98 0x18d2737c0 0x18d2a3594 0x1891e9c48 0x1891e4b34 0x1891e5100 0x1891e48bc 0x193050328 0x18d27a6d4 0x1002e6de4 0x18906f460) libc++abi.dylib: terminating with uncaught exception of type NSException I'm not sure where could I set this delegate... I made a sample project: https://github.com/ntnmrndn/WKUploadFormCrash And filled a bug report to Apple
Now provide the response and nothing else.
| As @jshapy8 correctly stated, you need to override the present() method and setting the .sourceView / .sourceFrame / .barButtonItem manually.But you need to keep in mind that in case the UIViewController that holds the WkWebView is presented by a UINavigationController , the UINavigationController is responsible for presenting other UIViewController . Unless you are on an iPad. So in fact you need to override the present() method in your UINavigationController as well as in the UIViewController which holds the WkWebView . In the example below, the UIViewController which holds the WkWebView is called WebVC . In your UINavigationController you need to add: override func present(_ viewControllerToPresent: UIViewController, animated flag: Bool, completion: (() -> Void)? = nil) { if let webVC = viewControllers.filter({ $0 is WebVC }).first as? WebVC { webVC.setUIDocumentMenuViewControllerSoureViewsIfNeeded(viewControllerToPresent) } super.present(viewControllerToPresent, animated: flag, completion: completion) } And in your WebVC you need to add: override func present(_ viewControllerToPresent: UIViewController, animated flag: Bool, completion: (() -> Void)? = nil) { setUIDocumentMenuViewControllerSoureViewsIfNeeded(viewControllerToPresent) super.present(viewControllerToPresent, animated: flag, completion: completion) } func setUIDocumentMenuViewControllerSoureViewsIfNeeded(_ viewControllerToPresent: UIViewController) { if #available(iOS 13, *), viewControllerToPresent is UIDocumentMenuViewController && UIDevice.current.userInterfaceIdiom == .phone { // Prevent the app from crashing if the WKWebView decides to present a UIDocumentMenuViewController while it self is presented modally. viewControllerToPresent.popoverPresentationController?.sourceView = webView viewControllerToPresent.popoverPresentationController?.sourceRect = CGRect(x: webView.center.x, y: webView.center.y, width: 1, height: 1) } } So you can use the new iOS 13 modal presentation style and upload files without crashing Edit:This crashing behavior seems to be (another) iOS 13 bug, because this is only a problem on iPhones not on iPads (just tested it on iPads with iOS 12 & 13.It kinda looks like the apple engineers simply forgot that in case the WKWebView is presented with their new modal presentation style, the UIDocumentMenuViewController is presented with UIModalPresentationPopover style, even on phones, which was until iOS 13 simply not the case. I updated my code so now it sets the .sourceView / .sourceFrame / .barButtonItem only for phone types, because tablet types will be handled by iOS it self correctly. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58164583', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1485230/']} | jdg_76018 |
stackexchange | llm_judgeable_groundtruth_similarity | 12222607 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to make a SOAP webservice call to https://sandbox.mediamind.com/Eyeblaster.MediaMind.API/V2/AuthenticationService.svc?wsdl and to use the operation ClientLogin while passing through the parameters: ApplicationKey, Password, and UserName. The response is UserSecurityToken. They are all strings. Here is the link fully explaining what I am trying to do: https://sandbox.mediamind.com/Eyeblaster.MediaMind.API.Doc/?v=3 How can I do this on the command line? (Windows and/or Linux would be helpful)
Now provide the response and nothing else.
| It's a standard, ordinary SOAP web service. SSH has nothing to do here. I just called it with curl (one-liner): $ curl -X POST -H "Content-Type: text/xml" \ -H 'SOAPAction: "http://api.eyeblaster.com/IAuthenticationService/ClientLogin"' \ --data-binary @request.xml \ https://sandbox.mediamind.com/Eyeblaster.MediaMind.API/V2/AuthenticationService.svc Where request.xml file has the following contents: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:api="http://api.eyeblaster.com/"> <soapenv:Header/> <soapenv:Body> <api:ClientLogin> <api:username>user</api:username> <api:password>password</api:password> <api:applicationKey>key</api:applicationKey> </api:ClientLogin> </soapenv:Body></soapenv:Envelope> I get this beautiful 500: <?xml version="1.0"?><s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"> <s:Body> <s:Fault> <faultcode>s:Security.Authentication.UserPassIncorrect</faultcode> <faultstring xml:lang="en-US">The username, password or application key is incorrect.</faultstring> </s:Fault> </s:Body></s:Envelope> Have you tried soapui ? Read more | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/12222607', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/858418/']} | jdg_76019 |
stackexchange | llm_judgeable_groundtruth_similarity | 1728435 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two functions whose execution time I would like to compare: function subject_one() { $str = preg_match_all( ...[SNIP]... ); // ...[SNIP]... return $str;}function subject_two() { $reader = new XMLReader; $writer = new XMLWriter; // ...[SNIP]... echo $str;} Is it possible to write a function to do this? For example: function benchmark_two_functions( $first_function, $second_function ) { // do stuff return $length_of_time_to_complete_each_function} Most examples I've seen add code to the top and bottom of the script, which I'd like to avoid, if possible.
Now provide the response and nothing else.
| Try this function subject_one(){ sleep(1);}function subject_two(){ sleep(5);}/* Result should be ~ -4 (subject_one takes ~1s, subject_two ~5s) */print benchmark_two_functions('subject_one','subject_two');function getmicrotime() { list($usec, $sec) = explode(" ",microtime()); return ((float)$usec + (float)$sec);}function benchmark_two_functions($first_function, $second_function){ $start = getmicrotime(); $first_function(); $exec_time_first = getmicrotime() - $start; $start = getmicrotime(); $second_function(); $exec_time_second = getmicrotime() - $start; return $exec_time_first - $exec_time_second;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1728435', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/142233/']} | jdg_76020 |
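A shorter variant of the same idea, using microtime(true), which returns the time as a float directly, and returning both timings as the question asked for; the function names mirror the answer above:

<?php
function benchmark_two_functions($first_function, $second_function)
{
    $start = microtime(true);
    $first_function();
    $exec_time_first = microtime(true) - $start;

    $start = microtime(true);
    $second_function();
    $exec_time_second = microtime(true) - $start;

    // Return both durations (in seconds) so the caller can compare them as needed.
    return array($exec_time_first, $exec_time_second);
}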
stackexchange | llm_judgeable_groundtruth_similarity | 3282107 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am studying an Amazonian language, Palikur, whose speakers think of objects in terms of their outside and inside. The language describes these two attributes according to their openness or compactedness, i.e. being open, or partially open, or closed. Palikur grammar classifies all inanimate entities according to their interior, boundary, and extendedness. The language uses suffixes on numbers, verbs, adjectives, and prepositions to indicate the topology of the noun they refer to. One of the most common Palikur suffixes is -iku (if focus is on holes or the hollow interior of an item), for example: One- iku well (or corral or room or ring) Clean- iku the well! " The well is clean- iku . " Water is in- iku the well. " The numerical terms also have suffixes that cover nine arithmetic operators (such as multiplication) and many other mathematical symbols which topologists use, for example, the concept of a set of all points in a or b (a ᑌ b) is –mat . the concept of a set of all points in a and b (a ᑎ b) is –nam ; the concept that a is not a subset of b (a\ᑕ b) is –put ; the n° of elements in a set (n(a)) is -at . Other suffixes indicate four types of symmetry. Palikur also has a word for lattice, ka-yakni-bet-ape (HAVING-CENTRAL.VERTEX-AGG-ALL) ‘having a set of central vertices, all intersected.’ In all, this extremely unusual language has almost a hundred topologically related suffixes. Unfortunately, I am a linguist, not a topologist, so I really need help!
Now provide the response and nothing else.
| Well, "empty interior" is a precise term in topology: a set has empty interior iff it doesn't have any nonempty open subsets. For example, in the usual topology on $\mathbb{R}$ , the set $\mathbb{Q}$ of rationals has empty interior; note that this doesn't contradict the fact that $\mathbb{Q}$ is dense in $\mathbb{R}$ (again, with respect to the usual topology). Meanwhile, "hole" isn't a precise term - depending on the context, by "hole" one might mean (in my experience) a non-contractible loop , or a (representative of a) nontrivial element of (co)homology , or - in a metric space - a witness to non-completeness . As to the linguistic element, topology is a very unintuitive subject - the various terms often have origins in "tame" intuitions, but very quickly those same intuitions become unhelpful. So I wouldn't look to topology for an interpretation of natural language constructs without a good reason. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3282107', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/686484/']} | jdg_76021 |
stackexchange | llm_judgeable_groundtruth_similarity | 3276981 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can you search for a partial string when typing (without using MySQL), like the LIKE function in MySQL but using PHP when searching a string? e.g. <?php $string = "Stackoverflow";$find = "overfl";if($find == $string){ return true;}else{ return false;}?> That obviously won't work, but is there a function that lets you search for part of a string? That would be great :) EDIT: What if it was in an array? If I use strpos, it does the echo, and the output comes out like truetruetruetruetrue.
Now provide the response and nothing else.
| I tend to use strpos $needle='appy';$haystack='I\'m feeling flappy, and you?';if(strpos($haystack,$needle)!==false){ //then it was found } If you want it to ignore case, use stripos . Note that a subtlety about this is that if the needle is at the very start of the haystack, in position 0, integer 0 is returned. This means you must compare to false , using strict comparison, or it can produce a false negative. As noted in the manual, linked above Warning This function may return Boolean FALSE, but may also return a non-Boolean value which evaluates to FALSE, such as 0 or "". Please read the section on Booleans for more information. Use the === operator for testing the return value of this function. As far as using arrays, strpos is meant to take two strings. Using an array will produce Warning: strpos() expects parameter 1 to be string, array given or 1Warning: strpos(): needle is not a string or an integer`. Okay, let's say you have an array of strings for which to search. You can $needles=array('hose','fribb','pancake');$haystack='Where are those pancakes??';foreach($needles as $ndl){ if(strpos($haystack,$ndl)!==false){ echo "'$ndl': found<br>\n"; } else{ echo "'$ndl' : not found<br>\n"; }} Another way of searching for multiple strings in one string, without using an array... This only tells you whether at least one match was found. $haystack='Where are those pancakes??';$match=preg_match('#(hose|fribb|pancake)#',$haystack);//$match is now int(1) Or, use preg_match_all to see how many matches there are, total. $all_matches=preg_match_all('#(hose|fribb|pancake)#',$haystack,$results);//all_matches is int(2). Note you also have $results, which stores which needles matched. In that, the search term is a regular expression. () groups the terms together, and | means 'or'. # denotes the beginning and end of the pattern. Regexes can get pretty complicated quickly, but of course, they work! They are often avoided for performance reasons, but if you're testing multiple strings, this might be more efficient than they array looping method described above. I'm sure there are also other ways to do this. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3276981', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/264795/']} | jdg_76022 |
stackexchange | llm_judgeable_groundtruth_similarity | 9623 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I found a version of the Mathematica spikey in 3D printable format (STL) at the Shapeways site that was hollow. Here it is when viewed in MeshLab: You can see there's a bit cut out of the end, and the whole thing is obviously hollow, with a thick shell. When I try to make a spikey myself, with code like this: spikey = PolyhedronData["MathematicaPolyhedron"]Export["spikey.stl", spikey] and look at the file in MeshLab, it's solid, but if I look inside, it's obviously just a surface, rather than a thick shell. You can see through to the inside of the opposite faces. The file with a bite out of it was created by the "Wolfram Team", so I'm pretty sure they used Mathematica. But can this hollowing-out treatment be done using some Mathematica code, or does it require the assembled brain-power of the entire Wolfram organization, or some third-party software, to make polyhedra hollow? The reason for hollowing out polyhedra is that 3D printers charge per cubic centimeter of material used, so hollow shapes are much cheaper than solid shapes. (This one starts at 45 Euros, so it's quite expensive even when hollow.) So: How can I hollow out 3D shapes created in Mathematica? Or is it a case of expanding the surfaces to make flat cuboids? Summary I liked both answers, and in general the solutions work. However, this business of printing 3D polyhedra made in Mathematica proves to be more complicated, and takes me beyond the scope of my innocent initial question. JM.'s excellent thickened polygons look great, but some of the checking programs used by 3D printers consider the adjacent faces to be holes. Here's a picture comparing a cross-section slice of the Wolfram spikey (on the right) with a similar slice through the thickened polygon (on the left): The Wolfram team's version (right) has no internal walls. The adjacent surfaces in the version built by the code on this page (left) are considered to be 'holes' by analysis programs such as NetFabb, even though it makes no visible difference to us. The construction of an interior wall (jVincent's approach) also looks good, and passes some checks (but not others); for printing purposes these shapes should ideally have a hole joining outside and inside surfaces. I think a future version of Mathematica could benefit from something like Boolean operations for solid shapes, whereby simpler shapes could be merged together to form more complex shapes that are less problematic for subsequent use. Perhaps in version 9.0?! :)
Now provide the response and nothing else.
| Here's a procedure I use: (* barycenter of a polygon *)averagepoints[points_?MatrixQ] := Mean[If[TrueQ[First[points] == Last[points]], Most, Identity][points]](* Newell's algorithm for face normals *)newellNormals[pts_?MatrixQ] := Module[{tp = Transpose[pts]}, Normalize[MapThread[Dot, {RotateLeft[ListConvolve[{{-1, 1}}, tp, {-1, -1}]], RotateRight[ListConvolve[{{1, 1}}, tp, {-1, -1}]]}]]]thickenaux[points_, outer_, thick_] := Module[{center = averagepoints[points], nrm = newellNormals[points], outerpoints, radialpoints}, outerpoints = Map[(center - thick nrm + outer (# - center)) &, points]; radialpoints = MapThread[Join[Reverse[#1], #2] &, Map[Partition[#, 2, 1, 1] &, {points, outerpoints}]]; Flatten[{Polygon[points], Polygon /@ radialpoints, Polygon[Reverse[outerpoints]]}]]ThickenPolygons[shape_, outer_: 0.8, thick_: 0.04] := shape /. Polygon[p_?MatrixQ] :> thickenaux[p, outer, thick] The procedure is more or less a modification of the old (undocumented) package function OutlinePolygons[] in Graphics`Shapes` . Let's try it out on a simpler case: Graphics3D[{FaceForm[Cyan, Red], ThickenPolygons[ Delete[Cases[Normal[PolyhedronData["Tetrahedron"]], _Polygon, Infinity], 3]]}, Boxed -> False, Lighting -> "Neutral"] Note that both the interior and exterior are colored with Cyan ; this is due to the code ensuring that all the polygons generated are oriented properly. I don't know how to cut a hole in the manner shown in the Shapeways model, so I'll go for a spikey with a simpler cutaway: Needs["PolyhedronOperations`"];spikeyCut = Cases[Normal[Stellate[ MapAt[Delete[#, {1, 4}] &, PolyhedronData["Icosahedron"], {1, 2}], 1 + 2 Sqrt[7 - 3 Sqrt[5]]]], _Polygon, Infinity];Graphics3D[{Directive[EdgeForm[], ColorData["Legacy", "SpringGreen"]], ThickenPolygons[spikeyCut]}, Boxed -> False] I set the defaults in ThickenPolygons[] to work for tetrahedra; for other polyhedra or for parametrically-defined surfaces, you might need to play around with the values for the parameters outer and thick . Here's an alternate version, which might be more suitable for polyhedra than the previous version: thickenaux[points_, outer_, fac_] := Module[{center = averagepoints[points], nrm = newellNormals[points], n, outerpoints, radialpoints, thick}, n = Length[points] - Boole[TrueQ[First[points] == Last[points]]]; thick = fac (1 - outer) Mean[Norm[# - center] & /@ points]; outerpoints = Map[(center - thick nrm + outer (# - center)) &, points]; radialpoints = MapThread[Join[Reverse[#1], #2] &, Map[Partition[#, 2, 1, {1, 1}] &, {points, outerpoints}]]; Flatten[{Polygon[points], Polygon /@ radialpoints, Polygon[Reverse[outerpoints]]}]]ThickenPolygons[shape_, outer_: 0.8, fac_: Sqrt[2]/4] := shape /. Polygon[p_?MatrixQ] :> thickenaux[p, outer, fac] If you run this version of ThickenPolygons[] on spikeyCut and carefully inspect the coordinates, you'll see that the points in the inner walls match up perfectly; there are neither gaps nor unintentional polygon intersections. The proper value of fac to use will depend on the polyhedron being considered. For the Platonic solids ($\phi$ denotes the golden ratio), \begin{array}{c|c}\text{polyhedron}&\class{code}{\text{fac}}\\\hline\text{cube}&\frac1{\sqrt 2}\\\text{dodecahedron}&\frac{1+\phi}{2}\\\text{icosahedron}&\frac{1+\phi}{2}\\\text{octahedron}&\frac1{\sqrt 2}\\\text{tetrahedron}&\frac1{2\sqrt 2}\end{array} The default value used for fac works for the tetrahedron and the spikey. 
In version 11, one has ShellRegion[] that is supposedly useful for 3D printers who want to save on material. Its performance on the spikey leaves something to be desired, tho: Show[ShellRegion[PolyhedronData["Spikey", "BoundaryMeshRegion"], 1/20], BaseStyle -> Opacity[0.5]] | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/9623', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/61/']} | jdg_76023 |
stackexchange | llm_judgeable_groundtruth_similarity | 371513 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We have a database that persists our metadata and data. Our metadata is produced by a dedicated team, using a Web application on the development server, and is a critical part of our application. Then the customer generates data according to this metadata. We already version the database schema, and all schema changes. The next step is to put our metadata under version control. Naive solution A naive solution would be to dump all the metadata, and commit it under version control before generating the corresponding packages. Since it's a dump, it can easily be restored. But there is probably a better way, like an incremental solution (only version diffs). Text dumps Another solution is to export all metadata tables in text format (like XML), and then version those text files. But then you have to find a way to reimport them. So, is your metadata under version control? Why? How?
Now provide the response and nothing else.
| Well as another answerer pointed out already the reason why ++i is an lvalue is to pass it to a reference. int v = 0;int const & rcv = ++v; // would work if ++v is an rvalue tooint & rv = ++v; // would not work if ++v is an rvalue The reason for the second rule is to allow to initialize a reference using a literal, when the reference is a reference to const: void taking_refc(int const& v);taking_refc(10); // valid, 10 is an rvalue though! Why do we introduce an rvalue at all you may ask. Well, these terms come up when building the language rules for these two situations: We want to have a locator value. That will represent a location which contains a value that can be read. We want to represent the value of an expression. The above two points are taken from the C99 Standard which includes this nice footnote quite helpful: [ The name ‘‘lvalue’’ comes originally from the assignment expression E1 = E2, in which the left operand E1 is required to be a (modifiable) lvalue. It is perhaps better considered as representing an object ‘‘locator value’’. What is sometimes called ‘‘rvalue’’ is in this International Standard described as the ‘‘value of an expression’’. ] The locator value is called lvalue , while the value resulting from evaluating that location is called rvalue . That's right according also to the C++ Standard (talking about the lvalue-to-rvalue conversion): 4.1/2: The value contained in the object indicated by the lvalue is the rvalue result. Conclusion Using the above semantics, it is clear now why i++ is no lvalue but an rvalue. Because the expression returned is not located in i anymore (it's incremented!), it is just the value that can be of interest. Modifying that value returned by i++ would make not sense, because we don't have a location from which we could read that value again. And so the Standard says it is an rvalue, and it thus can only bind to a reference-to-const. However, in constrast, the expression returned by ++i is the location (lvalue) of i . Provoking an lvalue-to-rvalue conversion, like in int a = ++i; will read the value out of it. Alternatively, we can make a reference point to it, and read out the value later: int &a = ++i; . Note also the other occasions where rvalues are generated. For example, all temporaries are rvalues, the result of binary/unary + and minus and all return value expressions that are not references. All those expressions are not located in an named object, but carry rather values only. Those values can of course be backed up by objects that are not constant. The next C++ Version will include so-called rvalue references that, even though they point to nonconst, can bind to an rvalue. The rationale is to be able to "steal" away resources from those anonymous objects, and avoid copies doing that. Assuming a class-type that has overloaded prefix ++ (returning Object& ) and postfix ++ (returning Object ), the following would cause a copy first, and for the second case it will steal the resources from the rvalue: Object o1(++a); // lvalue => can't steal. It will deep copy.Object o2(a++); // rvalue => steal resources (like just swapping pointers) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/371513', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2797/']} | jdg_76024 |
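A small compilable illustration of the binding rules described in the answer above (a sketch; the variable names are mine, not from the original answer):

    #include <iostream>

    int main() {
        int i = 0;

        int&       a = ++i;   // OK: ++i is an lvalue and refers to i itself
        const int& b = i++;   // OK: i++ is an rvalue, but it can bind to a reference-to-const
        // int&    c = i++;   // error: a non-const lvalue reference cannot bind to an rvalue

        std::cout << i << ' ' << a << ' ' << b << '\n';   // prints "2 2 1"
        return 0;
    }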
stackexchange | llm_judgeable_groundtruth_similarity | 29937378 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am not sure how to fix this issue I have no idea why I am getting this error when I try to runserver : Performing system checks...System check identified no issues (0 silenced).Unhandled exception in thread started by <function wrapper at 0x1085589b0>Traceback (most recent call last): File "/Library/Python/2.7/site-packages/django/utils/autoreload.py", line 222, in wrapper fn(*args, **kwargs) File "/Library/Python/2.7/site-packages/django/core/management/commands/runserver.py", line 107, in inner_run self.check_migrations() File "/Library/Python/2.7/site-packages/django/core/management/commands/runserver.py", line 159, in check_migrations executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS]) File "/Library/Python/2.7/site-packages/django/db/migrations/executor.py", line 17, in __init__ self.loader = MigrationLoader(self.connection) File "/Library/Python/2.7/site-packages/django/db/migrations/loader.py", line 49, in __init__ self.build_graph() File "/Library/Python/2.7/site-packages/django/db/migrations/loader.py", line 184, in build_graph self.applied_migrations = recorder.applied_migrations() File "/Library/Python/2.7/site-packages/django/db/migrations/recorder.py", line 59, in applied_migrations self.ensure_schema() File "/Library/Python/2.7/site-packages/django/db/migrations/recorder.py", line 49, in ensure_schema if self.Migration._meta.db_table in self.connection.introspection.get_table_list(self.connection.cursor()): File "/Library/Python/2.7/site-packages/django/db/backends/__init__.py", line 165, in cursor cursor = self.make_debug_cursor(self._cursor()) File "/Library/Python/2.7/site-packages/django/db/backends/__init__.py", line 138, in _cursor self.ensure_connection() File "/Library/Python/2.7/site-packages/django/db/backends/__init__.py", line 133, in ensure_connection self.connect() File "/Library/Python/2.7/site-packages/django/db/utils.py", line 94, in __exit__ six.reraise(dj_exc_type, dj_exc_value, traceback) File "/Library/Python/2.7/site-packages/django/db/backends/__init__.py", line 133, in ensure_connection self.connect() File "/Library/Python/2.7/site-packages/django/db/backends/__init__.py", line 122, in connect self.connection = self.get_new_connection(conn_params) File "/Library/Python/2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 134, in get_new_connection return Database.connect(**conn_params) File "/Library/Python/2.7/site-packages/psycopg2/__init__.py", line 164, in connect conn = _connect(dsn, connection_factory=connection_factory, async=async)django.db.utils.OperationalError: could not connect to server: Connection refused Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432? When I try to connect to postgres: psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"? settings.py: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'beerad', 'USER': 'bli1', 'PASSWORD': '', 'HOST': '127.0.0.1', 'PORT': '5432', }}
Now provide the response and nothing else.
| It can be one of several issues: PostgreSQL is not running. Check it with sudo service postgresql status Your PostgreSQL is not running on port 5432. You can check it by typing sudo netstat -nl | grep postgres Something is wrong in how you try to connect to your db, like the username, the password or the database name. Check that they are what Postgres expects for the connection, and that the database name is the one you want to access. Problems with postmaster.pid in Postgres. This can happen after an improper shutdown, which leaves behind a stale pid file that prevents the server from starting. To fix it you have to: * rm /usr/local/var/postgres/postmaster.pid * pg_resetxlog -f /usr/local/var/postgres After this it should run properly once you start the Postgres server again. Help on Mac OS X: How to start PostgreSQL server on Mac OS X? | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/29937378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1815710/']} | jdg_76025
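A quick way to confirm which of these applies, independently of Django, is to try the same connection parameters directly with psycopg2 (a hedged sketch; the credentials are copied from the settings.py in the question, and the script itself is not part of the original answer):

    # check_connection.py -- minimal psycopg2 sanity check
    import psycopg2

    try:
        conn = psycopg2.connect(
            dbname="beerad",     # values taken from the settings.py shown above
            user="bli1",
            password="",
            host="127.0.0.1",
            port=5432,
        )
        print("Connected OK")
        conn.close()
    except psycopg2.OperationalError as exc:
        # This is the same error Django surfaces; the message usually tells you
        # whether the server is down, listening elsewhere, or rejecting credentials.
        print("Connection failed:", exc)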
stackexchange | llm_judgeable_groundtruth_similarity | 10688340 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a forum in PHP which takes a date in the format dd/mm/yyyy hh:mm:ss . However, I need to insert it into SQL as a DATETIME in the format yyyy-mm-dd hh:mm:ss . How can I convert this data?
Now provide the response and nothing else.
| Your date time format is wrong: dd/mm/yyyy hh:mm:ss . Probably you mean d/m/Y H:i:s If you have 5.3+ version there is safe way to convert the date time into another format. Here's an example: $timestamp = '31/05/2001 12:22:56';$timestamp = DateTime::createFromFormat('d/m/Y H:i:s', $timestamp);echo $timestamp->format('Y-m-d H:i:s'); or if you like more procedural way: $timestamp = '31/05/2001 12:22:56';$timestamp = date_create_from_format('d/m/Y H:i:s', $timestamp);echo date_format($timestamp, 'Y-m-d H:i:s'); Be careful with previous suggestions. Some are completely wrong and others could lead to errors. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10688340', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/69785/']} | jdg_76026 |
stackexchange | llm_judgeable_groundtruth_similarity | 3736678 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Say that I have this simple text in (g)Vim: a b ca b ca b ca b ca b c (five lines, each reading a b c ). After changing to visual block selection mode, how can I select the whole 3rd column? Ctrl + V G selects the whole text. I am looking for a keyboard shortcut for selecting a whole column, if any exists. Thanks.
Now provide the response and nothing else.
| G goes to the last line, but moves the cursor to the first non-blank position if the startofline or compatible (which enables startofline ) options are set. If you want to keep the same column, set nosol before going into visual block mode, and then hit G . From the manual entry for startofline : When "on" the commands listed below move the cursor to the first non-blank of the line. When off the cursor is kept in the same column (if possible). This applies to the commands: CTRL-D, CTRL-U, CTRL-B, CTRL-F, "G", "H", "M", "L", gg, and to the commands "d", "<<" and ">>" with a linewise operator, with "%" with a count and to buffer changing commands (CTRL-^, :bnext, :bNext, etc.). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3736678', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/171173/']} | jdg_76027 |
stackexchange | llm_judgeable_groundtruth_similarity | 10803685 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The error is as above. I have what should be all the necessary include directories in the Eclipse project: /usr/include/c++/4.6, /usr/include, /usr/include/linux, /usr/local/include etc. I tried std::cout and using namespace std; with cout, but it still says unresolved. I have included iostream and cstdlib . Also, I'm on Ubuntu 12.04 with Eclipse 3.7.2. Code snippet: #include <cstdio>#include <cstdlib>#include <cstring>#include <iostream>#include "XPLMDisplay.h"#include "XPLMGraphics.h"int XPluginStart(char * outName, char * outSig, char * outDesc) { /* ... */ std::cout << "test" << std::endl; /* ... */} using namespace std; UPDATE: I had created the Eclipse project from existing code. Creating a new C++ project fixes it. I'll accept an answer that explains what setting in the existing project could cause this (so I don't have to cut & paste all my projects).
Now provide the response and nothing else.
| Most likely you have some system-specific include directories missing in your settings which makes it impossible for indexer to correctly parse iostream, thus the errors. Selecting Index -> Search For Unresolved Includes in the context menu of the project will give you the list of unresolved includes which you can search in /usr/include and add containing directories to C++ Include Paths and Symbols in Project Properties. On my system I had to add /usr/include/c++/4.6/x86_64-linux-gnu for bits/c++config.h to be resolved and a few more directories. Don't forget to rebuild the index (Index -> Rebuild) after adding include directories. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/10803685', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/616827/']} | jdg_76028 |
stackexchange | llm_judgeable_groundtruth_similarity | 9123676 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using gdb to debug a C++ program. I have this code: int x = floor(sqrt(3)); and I want to view the value of x. However, gdb claims that x is "<optimized out>". How do I view the value of x? Should I change my compiler flags?
Now provide the response and nothing else.
| On high optimization levels, the compiler can eliminate intermediate values, as you have seen here. There are a number of options: You can reduce the optimization level to make it easier for the debugger to keep track of things. -O0 is certain to work (but will be quite a lot slower), -O1 might work okay as well. You can add some explicit print statements to log the output value. You can also usually force the compiler to retain this specific value by making it volatile (but remember to un-make it volatile when you're done!). Note, however, that since control flow is also subject to alteration in optimized code, even if you can see the value of the variable, it may not be entirely clear what point in the code you're at when you're looking at the variable in question. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9123676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/795319/']} | jdg_76029 |
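To illustrate the volatile workaround mentioned in the answer above (a sketch, not from the original answer), marking the local as volatile keeps it observable in gdb even when optimization stays on:

    #include <cmath>
    #include <cstdio>

    int main() {
        // volatile forces the compiler to keep x in memory, so gdb can print it
        // even at -O2; remember to remove the qualifier once you finish debugging.
        volatile int x = static_cast<int>(std::floor(std::sqrt(3.0)));
        std::printf("%d\n", x);   // or break here and run `print x` in gdb
        return 0;
    }
    // Build with: g++ -g -O2 example.cpp   (or simply use -O0 instead of volatile)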
stackexchange | llm_judgeable_groundtruth_similarity | 72934 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to start a process with a nice value of -20. This requires me to use a command like sudo nice -n -20 matlab However, this starts matlab as root too. Is there a way to have matlab as non-root? My current approach is sudo nice -n -20 sudo -u myusername matlab which to me looks like a hack. Is there a direct approach to do this?
Now provide the response and nothing else.
| I would start it normally and use "renice" afterwards... However I was able to make a quick hack together with "su" which works: sudo nice -n -20 su -c command_to_run user_to_run_as If you don't have to give sudo a password - perhaps because you've already just given it - you may add an & to put the whole thing in the background. Since you already become root with the sudo-command, su won't ask you for a password. I was able to start a X-program from a terminal-emulator under X. If you want to run the X-program as another user than the user owning the X-session, you'll probably need to explicitly tell X to allow it (open for X-clients from that user). | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/72934', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/8496/']} | jdg_76030 |
stackexchange | llm_judgeable_groundtruth_similarity | 17540971 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I set up Selenium to work with Python? I just want to write/export scripts in Python, and then run them. Are there any resources for that? I tried googling, but the stuff I found was either referring to an outdated version of Selenium (RC), or an outdated version of Python.
Now provide the response and nothing else.
| You mean Selenium WebDriver? Huh.... Prerequisite : Install Python based on your OS Install with following command pip install -U selenium And use this module in your code from selenium import webdriver You can also use many of the following as required from selenium.webdriver.common.by import Byfrom selenium.webdriver.support.ui import Selectfrom selenium.common.exceptions import NoSuchElementException Here is an updated answer I would recommend you to run script without IDE... Here is my approach USE IDE to find xpath of object / element And use find_element_by_xpath().click() An example below shows login page automation #ScriptName : Login.py#---------------------from selenium import webdriver#Following are optional requiredfrom selenium.webdriver.common.by import Byfrom selenium.webdriver.support.ui import Selectfrom selenium.common.exceptions import NoSuchElementExceptionbaseurl = "http://www.mywebsite.com/login.php"username = "admin"password = "admin"xpaths = { 'usernameTxtBox' : "//input[@name='username']", 'passwordTxtBox' : "//input[@name='password']", 'submitButton' : "//input[@name='login']" }mydriver = webdriver.Firefox()mydriver.get(baseurl)mydriver.maximize_window()#Clear Username TextBox if already allowed "Remember Me" mydriver.find_element_by_xpath(xpaths['usernameTxtBox']).clear()#Write Username in Username TextBoxmydriver.find_element_by_xpath(xpaths['usernameTxtBox']).send_keys(username)#Clear Password TextBox if already allowed "Remember Me" mydriver.find_element_by_xpath(xpaths['passwordTxtBox']).clear()#Write Password in password TextBoxmydriver.find_element_by_xpath(xpaths['passwordTxtBox']).send_keys(password)#Click Login buttonmydriver.find_element_by_xpath(xpaths['submitButton']).click() There is an another way that you can find xpath of any object - Install Firebug and Firepath addons in firefox Open URL in Firefox Press F12 to open Firepath developer instance Select Firepath in below browser pane and chose select by "xpath" Move cursor of the mouse to element on webpage in the xpath textbox you will get xpath of an object/element. Copy Paste xpath to the script. Run script - python Login.py You can also use a CSS selector instead of xpath. CSS selectors are slightly faster than xpath in most cases, and are usually preferred over xpath (if there isn't an ID attribute on the elements you're interacting with). Firepath can also capture the object's locator as a CSS selector if you move your cursor to the object. You'll have to update your code to use the equivalent find by CSS selector method instead - find_element_by_css_selector(css_selector) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17540971', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/896112/']} | jdg_76031 |
stackexchange | llm_judgeable_groundtruth_similarity | 3415298 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a series of bits, what's the best way to overwrite a particular range of them? For example, given: 0100 1010 Say I want to overwrite the middle 2 bits with 10 to make the result: 0101 0010 What would be the best way of doing this? At first, I thought I would just shift the overwriting bits I want to the correct position (10000), and then use a bitwise OR. But I realized that while it preserves the other bits, there's no way of specifying which bits I want to actually overwrite. I was looking into Python's bitarray module, but I just want to double-check that I'm not overlooking an extremely simple bitwise operation to do this for me. Thanks.
Now provide the response and nothing else.
| This is done by first masking the bits you want to erase (forcing them to zero while preserving the other bits) before applying the bitwise OR. Use a bitwise AND with the pattern (in this case) 11100111 . If you already have a "positive" version of the pattern (here this would be 00011000 ), which is easier to generate, you can obtain the "negative" version 11100111 using what is called 1's complement, available as ~ in Python and most languages with a C-like syntax. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3415298', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/359850/']} | jdg_76032 |
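A short Python sketch of the clear-then-OR sequence for the example in the question (the variable names are mine, not from the original answer):

    value = 0b01001010           # 0100 1010
    new_bits = 0b10              # the two bits to write into positions 3-4
    keep_mask = ~0b00011000      # 1's complement of the "positive" pattern 00011000

    value &= keep_mask           # force the two middle bits to zero -> 0100 0010
    value |= new_bits << 3       # OR the new bits into place        -> 0101 0010

    print(format(value & 0xFF, "08b"))   # prints 01010010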
stackexchange | llm_judgeable_groundtruth_similarity | 157209 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I came across this identity when working with energy partitions of Einstein solids. I have a combinatorial proof, but I'm wondering if there exists an algebraic proof.$$\sum_{q=0}^N\binom{m + q - 1}{q}\binom{n + N - q - 1}{N - q} = \binom{m + n + N - 1}{N}$$I've tried induction, but Pascal's Identity cannot simultaneously reduce the top and bottom argument for an inductive proof. For those interested , a combinatorial proof of the identity can be given as follows: Consider the ways of distributing $N$ quanta of energy to a system of $n + m$ oscillators (where each oscillator can have any number of quanta). This is equivalent to the question of asking how many ways of putting $N$ objects into $n + m$ boxes. From the traditional stars and bars method, the total is given by$$\binom{m + n + N - 1}{N}$$which is the right-hand side. Alternatively, consider partitioning the units of energy between the first $m$ and the last $n$ oscillators. Give $q$ units of energy to the first $m$ oscillators. Then there remains $N - q$ units of energy for the latter $n$. Together, the number of states for this particular partition is$$\binom{m + q - 1}{q}\binom{n + N - q - 1}{N - q}$$Summing over all partitions gives the left-hand side. Thanks for your time.
Now provide the response and nothing else.
| Let $$f_n(z)=\sum_{q=0}^\infty \binom {n+q-1}q z^q$$ Claim: This is $(1-z)^{-n}$. Then $f_n(z)f_m(z) = f_{n+m}(z)$. But the left hand side of your formula is the coefficient of $z^N$ in $f_n(z)f_m(z)$, and the right hand side is the coefficient of $z^N$ in $f_{n+m}(z)$. The proof of the claim is the generalized binomial expansion, and uses the fact that $$\binom {-n} q = (-1)^q \binom {n+q-1}{q}$$ Alternatively, you can rewrite your statement as: $$\sum_{q=0}^N \binom {-m} q \binom {-n}{N-q} = \binom {-(m+n)}{N}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/157209', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9246/']} | jdg_76033
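For readers who want a quick numerical sanity check of the identity (not part of the proof above), a few lines of Python confirm it for small parameters:

    from math import comb

    def lhs(m, n, N):
        return sum(comb(m + q - 1, q) * comb(n + N - q - 1, N - q) for q in range(N + 1))

    def rhs(m, n, N):
        return comb(m + n + N - 1, N)

    # Check a small grid of parameters; comb() requires Python 3.8+.
    assert all(lhs(m, n, N) == rhs(m, n, N)
               for m in range(1, 6) for n in range(1, 6) for N in range(0, 8))
    print("identity verified on the tested range")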
stackexchange | llm_judgeable_groundtruth_similarity | 389231 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Background While writing a new component, I am in the middle of deciding between a SQL and a NoSQL database (Mongo vs. MySQL) for my storage layer. As of today, MySQL seems to be a perfect fit for my use case (6-7 domain entities, closely related to each other). Still, I want to keep my integration with the data layer abstract enough to switch over to a NoSQL store (Mongo) in the future. While trying to build this abstract data access layer, I feel I am compromising on the offerings of the RDBMS big-time (since NoSQL doesn't support joins as a first-class construct, I cannot afford to expose joins and other prominent RDBMS features as part of this abstraction). Question : Is it overkill to build such a level of abstraction in the first place? Is it even possible to build such a level of abstraction without compromising on the RDBMS offerings? If possible, what are the recommended patterns?
Now provide the response and nothing else.
| The best way to guarantee that you stay reasonably decoupled from the database, but at the same time remain free to use any feature of it, is to not create an abstraction layer for the database . (Well, unless you have the explicit requirement now , that you need to support multiple databases. Otherwise YAGNI.) The worst thing one can do, is to try to stay "database agnostic". This will almost automatically result in some "common denominator" type interfaces, usually trivial CRUD operations. Then you either can't use any specific feature of your storage backend (which is stupid considering what awesome features dbs have nowadays, not even mentioning completely different paradigms), or you have to constantly introduce new methods for specific features or queries. Even worse, because you don't want this abstraction to "explode" you will be sort-of forced to re-use methods for new requirements, which will be ill-fitting and painful. The alternative is to model your domain , and provide database specific implementations where it makes sense. One example I came across: We had the requirement to freeze all credit cards of a customer (bank domain). This was initially implemented with an ORM, which had multiple connected entities (data objects with the usual 1-1/1-n relations). We had to issue a query for accounts, then cards, set flags on cards and let the ORM deal with persisting. Instead of all that, I introduced a method Customer.freezeCreditCards() , which fired an "update" statement directly into a database. While that's not a particularly exciting operation, it shows that if you have the business method somewhere where it makes sense (where the data for it is), that it is trivial to use any optimization or extra feature you require. And you don't have to abstract/generalize features. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/389231', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/306685/']} | jdg_76034 |
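To make the credit-card example in the answer above concrete, here is a hedged sketch in Python (the class, table, and column names are invented for illustration; the original answer only describes the idea in prose):

    class Customer:
        """Domain object that owns its own persistence for business operations."""

        def __init__(self, customer_id, connection):
            self.customer_id = customer_id
            self._connection = connection   # e.g. any DB-API connection

        def freeze_credit_cards(self):
            # One business-level method, one targeted UPDATE -- no need to load
            # accounts and cards through an ORM just to flip a flag on each card.
            with self._connection.cursor() as cur:
                cur.execute(
                    "UPDATE credit_cards SET frozen = TRUE "
                    "WHERE account_id IN (SELECT id FROM accounts WHERE customer_id = %s)",
                    (self.customer_id,),
                )
            self._connection.commit()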
stackexchange | llm_judgeable_groundtruth_similarity | 17023 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Coming from the Windows world, I have found the majority of the folder directory names to be quite intuitive: \Program Files contains files used by programs (surprise!) \Program Files (x86) contains files used by 32-bit programs on 64-bit OSes \Users (formerly Documents and Settings ) contains users' files, i.e. documents and settings \Users\USER\Application Data contains application-specific data \Users\USER\Documents contains documents belonging to the user \Windows contains files that belong to the operation of Windows itself \Windows\Fonts stores font files (surprise!) \Windows\Temp is a global temporary directory et cetera. Even if I had no idea what these folders did, I could guess with good accuracy from their names. Now I'm taking a good look at Linux, and getting quite confused about how to find my way around the file system. For example: /bin contains binaries. But so do /sbin , /usr/bin , /usr/sbin , and probably more that I don't know about. Which is which?? What is the difference between them? If I want to make a binary and put it somewhere system-wide, where do I put it? /media contains external media file systems. But so does /mnt . And neither of them contain anything on my system at the moment; everything seems to be in /dev . What's the difference? Where are the other partitions on my hard disk, like the C: and D: that were in Windows? /home contains the user files and settings. That much is intuitive, but then, what is supposed to go into /usr ? And how come /root is still separate, even though it's a user with files and settings? /lib contains shared libraries, like DLLs. But so does /usr/lib . What's the difference? What is /etc ? Does it really stand for "et cetera", or something else? What kinds of files should go in there -- global or local? Is it a catch-all for things no one knew where to put, or is there a particular use case for it? What are /opt , /proc , and /var ? What do they stand for and what are they used for? I haven't seen anything like them in Windows*, and I just can't figure out what they might be for. If anyone can think of other standard places that might be good to know about, feel free to add it to the question; hopefully this can be a good reference for people like me, who are starting to get familiar with *nix systems. *OK, that's a lie. I've seen similar things in WinObj, but obviously not on a regular basis. I still don't know what these do on Linux, though.
Now provide the response and nothing else.
| Linux distributions use the FHS: http://www.pathname.com/fhs/pub/fhs-2.3.html You can also try man hier . I'll try to sum up answers to your questions off the top of my head, but I strongly suggest that you read through the FHS: /bin is for non-superuser system binaries /sbin is for superuser (root) system binaries /usr/bin & /usr/sbin are for non-critical shared non-superuser or superuser binaries, respectively /mnt is for temporarily mounting a partition /media is for mounting many removable media at once /dev contains your system device files; it's a long story :) The /usr folder, and its subfolders, can be shared with other systems, so that they will have access to the same programs/files installed in one place. Since /usr is typically on a separate filesystem, it doesn't contain binaries that are necessary to bring the system online. /root is separate because it may be necessary to bring the system online without mounting other directories which may be on separate partitions/hard drives/servers. Yes, /etc stands for "et cetera". Configuration files for the local system are stored there. /opt is a place where you can install programs that you download/compile. That way you can keep them separate from the rest of the system, with all of the files in one place. /proc contains information about the kernel and running processes /var contains variable size files like logs, mail, webpages, etc. To access a system, you generally don't need /var, /opt, /usr, /home; some of the potentially largest directories on a system. One of my favorites, which some people don't use, is /srv. It's for data that is being hosted via services like http/ftp/samba. I've seen /var used for this a lot, which isn't really its purpose. | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/17023', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/6252/']} | jdg_76035
stackexchange | llm_judgeable_groundtruth_similarity | 23696749 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I detect in C++ whether Windows is 32- or 64-bit? I see a lot of examples in .NET but I need C++. Also, IsWow64Process() doesn't work for me, because "If the process is running under 32-bit Windows, the value is set to FALSE. If the process is a 64-bit application running under 64-bit Windows, the value is also set to FALSE". If I have a 32-bit process under a 32-bit OS I get FALSE, and if I have a 64-bit process under a 64-bit OS I also get FALSE. But I don't care about the process bitness; I need the OS bitness.
Now provide the response and nothing else.
| The Win32 API function to detect information about the underlying system is GetNativeSystemInfo . Call the function and read the wProcessorArchitecture member of the SYSTEM_INFO struct that the function populates. Although it is actually possible to use IsWow64Process to detect this. If you call IsWow64Process and TRUE is returned, then you know that you are running on a 64 bit system. Otherwise, FALSE is returned. And then you just need to test the size of a pointer, for instance. A 32 bit pointer indicates a 32 bit system, and a 64 bit pointer indicates a 64 bit system. In fact, you can probably get the information from a conditional supplied by the compiler, depending on which compiler you use, since the size of the pointer is known at compile time. Raymond Chen described this approach in a blog article . He helpfully included code which I reproduce here: BOOL Is64BitWindows(){#if defined(_WIN64) return TRUE; // 64-bit programs run only on Win64#elif defined(_WIN32) // 32-bit programs run on both 32-bit and 64-bit Windows // so must sniff BOOL f64 = FALSE; return IsWow64Process(GetCurrentProcess(), &f64) && f64;#else return FALSE; // Win64 does not support Win16#endif} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23696749', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1879838/']} | jdg_76036 |
stackexchange | llm_judgeable_groundtruth_similarity | 1811041 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
These lines are both in the implementation file above the @implementation declaration. NSString * const aVar = @"aVarStringValue";static NSString *aVar = @"aVarStringValue"; As far as I understand, the second static is allocated once only within the lifetime of the application and this fact contributes to performance. But does this mean it is essentially a memory leak seeing as that block of memory will never be released? And does the first const declaration get allocated every time it is accessed in contrast?
Now provide the response and nothing else.
| The static keyword in Objective-C (and C/C++) indicates the visibility of the variable. A static variable (not in a method) may only be accessed within that particular .m file. A static local variable, on the other hand, gets allocated only once. const, in contrast, indicates that the reference may not be modified and/or reassigned, and is orthogonal to how the variable is created (compilers may optimize consts, though). It's worth mentioning that NSString literals get initialized and never get destroyed during the life of the application. They are allocated in a read-only part of memory. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1811041', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26088/']} | jdg_76037
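Since the answer notes these rules are shared with C/C++, a plain-C sketch of the two declarations may help (identifiers invented for illustration, not from the original answer):

    /* visibility.c -- illustration only */
    static const char *logTag = "aVarStringValue";    /* static: visible only inside this file;
                                                         the pointer itself could still be reassigned here */
    const char *const sharedTag = "aVarStringValue";  /* const pointer: visible to other files
                                                         (external linkage in C), but can never be reassigned */
    /* The string literal itself lives in read-only memory for the life of the program,
       much like the NSString literal described above. */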
stackexchange | llm_judgeable_groundtruth_similarity | 7767049 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my view controller I have a button; when the button is pressed, I want to push a new screen onto the navigation controller. My code looks like: -(IBAction)ShangHaiButtonPressed:(id)sender{ marketviewcontroller = [[MarketViewController alloc]initWithNibName:@"MarketViewController" bundle:nil]; NSLog(@"%@",self.navigationController); [self.navigationController pushViewController:marketviewcontroller animated:YES]; [marketviewcontroller release]; } but I can see that self.navigationController is null. How can I solve this problem? Thank you. UPDATE: - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions{ _switchviewcontroller = [[SwitchViewController alloc]initWithNibName:@"SwitchViewController" bundle:nil]; [self.window addSubview:_switchviewcontroller.view]; [self.window makeKeyAndVisible]; return YES;}
Now provide the response and nothing else.
| The navigationController property of a view controller will return a valid navigation controller object only if the view controller is in a navigation controller's navigation stack. A view controller can be added to a navigation stack in the following ways. By making the view controller the rootViewController of a navigation controller using initWithRootViewController: method of UINavigationController By pushing the view controller using pushViewController: method of UINavigationController . Make sure your view controller is added to the navigation stack in any of the above ways. EDIT: (After the didFinishLaunchingWithOptions: code added to the question): Change the didFinishLaunchingWithOptions: method to this, - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions{ _switchviewcontroller = [[SwitchViewController alloc]initWithNibName:@"SwitchViewController" bundle:nil]; UINavigationController *navController = [[UINavigationController alloc] initWithRootViewController:_switchviewcontroller]; [self.window addSubview:navController.view]; [navController release]; [self.window makeKeyAndVisible]; return YES;} Swift 4 (version): func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey : Any]? = nil) -> Bool { switchviewcontroller = SwitchViewController(nibName: "SwitchViewController", bundle: nil) let navController = UINavigationController(rootViewController: switchviewcontroller) window.addSubview(navController.view) window.makeKeyAndVisible() return true} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7767049', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/918859/']} | jdg_76038 |
stackexchange | llm_judgeable_groundtruth_similarity | 2629612 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to write a String to a Stream (a MemoryStream in this case) and read the bytes one by one. stringAsStream = new MemoryStream();UnicodeEncoding uniEncoding = new UnicodeEncoding();String message = "Message";stringAsStream.Write(uniEncoding.GetBytes(message), 0, message.Length);Console.WriteLine("This:\t\t" + (char)uniEncoding.GetBytes(message)[0]);Console.WriteLine("Differs from:\t" + (char)stringAsStream.ReadByte()); The (undesired) result I get is: This: MDiffers from: ? It looks like it's not being read correctly, as the first char of "Message" is 'M', which works when getting the bytes from the UnicodeEncoding instance but not when reading them back from the stream. What am I doing wrong? The bigger picture: I have an algorithm which will work on the bytes of a Stream, I'd like to be as general as possible and work with any Stream. I'd like to convert an ASCII-String into a MemoryStream, or maybe use another method to be able to work on the String as a Stream. The algorithm in question will work on the bytes of the Stream.
Now provide the response and nothing else.
| After you write to the MemoryStream and before you read it back, you need to Seek back to the beginning of the MemoryStream so you're not reading from the end. UPDATE After seeing your update, I think there's a more reliable way to build the stream: UnicodeEncoding uniEncoding = new UnicodeEncoding();String message = "Message";// You might not want to use the outer using statement that I have// I wasn't sure how long you would need the MemoryStream object using(MemoryStream ms = new MemoryStream()){ var sw = new StreamWriter(ms, uniEncoding); try { sw.Write(message); sw.Flush();//otherwise you are risking empty stream ms.Seek(0, SeekOrigin.Begin); // Test and work with the stream here. // If you need to start back at the beginning, be sure to Seek again. } finally { sw.Dispose(); }} As you can see, this code uses a StreamWriter to write the entire string (with proper encoding) out to the MemoryStream . This takes the hassle out of ensuring the entire byte array for the string is written. Update: I stepped into issue with empty stream several time. It's enough to call Flush right after you've finished writing. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2629612', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/246390/']} | jdg_76039 |
stackexchange | llm_judgeable_groundtruth_similarity | 11022675 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to rotate a histogram in R, plotted by hist(). The question is not new, and in several forums I have found that it is not possible. However, all these answers date back to 2010 or even later. Has anyone found a solution meanwhile? One way to get around the problem is to plot the histogram via barplot() that offers the option "horiz=TRUE". The plot works fine but I fail to overlay a density in the barplots. The problem probably lies in the x-axis since in the vertical plot, the density is centered in the first bin, while in the horizontal plot the density curve is messed up. Any help is very much appreciated! Thanks, Niels Code: require(MASS)Sigma <- matrix(c(2.25, 0.8, 0.8, 1), 2, 2)mvnorm <- mvrnorm(1000, c(0,0), Sigma)scatterHist.Norm <- function(x,y) { zones <- matrix(c(2,0,1,3), ncol=2, byrow=TRUE) layout(zones, widths=c(2/3,1/3), heights=c(1/3,2/3)) xrange <- range(x) ; yrange <- range(y) par(mar=c(3,3,1,1)) plot(x, y, xlim=xrange, ylim=yrange, xlab="", ylab="", cex=0.5) xhist <- hist(x, plot=FALSE, breaks=seq(from=min(x), to=max(x), length.out=20)) yhist <- hist(y, plot=FALSE, breaks=seq(from=min(y), to=max(y), length.out=20)) top <- max(c(xhist$counts, yhist$counts)) par(mar=c(0,3,1,1)) plot(xhist, axes=FALSE, ylim=c(0,top), main="", col="grey") x.xfit <- seq(min(x),max(x),length.out=40) x.yfit <- dnorm(x.xfit,mean=mean(x),sd=sd(x)) x.yfit <- x.yfit*diff(xhist$mids[1:2])*length(x) lines(x.xfit, x.yfit, col="red") par(mar=c(0,3,1,1)) plot(yhist, axes=FALSE, ylim=c(0,top), main="", col="grey", horiz=TRUE) y.xfit <- seq(min(x),max(x),length.out=40) y.yfit <- dnorm(y.xfit,mean=mean(x),sd=sd(x)) y.yfit <- y.yfit*diff(yhist$mids[1:2])*length(x) lines(y.xfit, y.yfit, col="red")}scatterHist.Norm(mvnorm[,1], mvnorm[,2])scatterBar.Norm <- function(x,y) { zones <- matrix(c(2,0,1,3), ncol=2, byrow=TRUE) layout(zones, widths=c(2/3,1/3), heights=c(1/3,2/3)) xrange <- range(x) ; yrange <- range(y) par(mar=c(3,3,1,1)) plot(x, y, xlim=xrange, ylim=yrange, xlab="", ylab="", cex=0.5) xhist <- hist(x, plot=FALSE, breaks=seq(from=min(x), to=max(x), length.out=20)) yhist <- hist(y, plot=FALSE, breaks=seq(from=min(y), to=max(y), length.out=20)) top <- max(c(xhist$counts, yhist$counts)) par(mar=c(0,3,1,1)) barplot(xhist$counts, axes=FALSE, ylim=c(0, top), space=0) x.xfit <- seq(min(x),max(x),length.out=40) x.yfit <- dnorm(x.xfit,mean=mean(x),sd=sd(x)) x.yfit <- x.yfit*diff(xhist$mids[1:2])*length(x) lines(x.xfit, x.yfit, col="red") par(mar=c(3,0,1,1)) barplot(yhist$counts, axes=FALSE, xlim=c(0, top), space=0, horiz=TRUE) y.xfit <- seq(min(x),max(x),length.out=40) y.yfit <- dnorm(y.xfit,mean=mean(x),sd=sd(x)) y.yfit <- y.yfit*diff(yhist$mids[1:2])*length(x) lines(y.xfit, y.yfit, col="red")}scatterBar.Norm(mvnorm[,1], mvnorm[,2]) # Source of scatter plot with marginal histograms (click first link after "adapted from..."): http://r.789695.n4.nabble.com/newbie-scatterplot-with-marginal-histograms-done-and-axes-labels-td872589.html Source of density in a scatter plot: http://www.statmethods.net/graphs/density.html
Now provide the response and nothing else.
| scatterBarNorm <- function(x, dcol="blue", lhist=20, num.dnorm=5*lhist, ...){ ## check input stopifnot(ncol(x)==2) ## set up layout and graphical parameters layMat <- matrix(c(2,0,1,3), ncol=2, byrow=TRUE) layout(layMat, widths=c(5/7, 2/7), heights=c(2/7, 5/7)) ospc <- 0.5 # outer space pext <- 4 # par extension down and to the left bspc <- 1 # space between scatter plot and bar plots par. <- par(mar=c(pext, pext, bspc, bspc), oma=rep(ospc, 4)) # plot parameters ## scatter plot plot(x, xlim=range(x[,1]), ylim=range(x[,2]), ...) ## 3) determine barplot and height parameter ## histogram (for barplot-ting the density) xhist <- hist(x[,1], plot=FALSE, breaks=seq(from=min(x[,1]), to=max(x[,1]), length.out=lhist)) yhist <- hist(x[,2], plot=FALSE, breaks=seq(from=min(x[,2]), to=max(x[,2]), length.out=lhist)) # note: this uses probability=TRUE ## determine the plot range and all the things needed for the barplots and lines xx <- seq(min(x[,1]), max(x[,1]), length.out=num.dnorm) # evaluation points for the overlaid density xy <- dnorm(xx, mean=mean(x[,1]), sd=sd(x[,1])) # density points yx <- seq(min(x[,2]), max(x[,2]), length.out=num.dnorm) yy <- dnorm(yx, mean=mean(x[,2]), sd=sd(x[,2])) ## barplot and line for x (top) par(mar=c(0, pext, 0, 0)) barplot(xhist$density, axes=FALSE, ylim=c(0, max(xhist$density, xy)), space=0) # barplot lines(seq(from=0, to=lhist-1, length.out=num.dnorm), xy, col=dcol) # line ## barplot and line for y (right) par(mar=c(pext, 0, 0, 0)) barplot(yhist$density, axes=FALSE, xlim=c(0, max(yhist$density, yy)), space=0, horiz=TRUE) # barplot lines(yy, seq(from=0, to=lhist-1, length.out=num.dnorm), col=dcol) # line ## restore parameters par(par.)}require(mvtnorm)X <- rmvnorm(1000, c(0,0), matrix(c(1, 0.8, 0.8, 1), 2, 2))scatterBarNorm(X, xlab=expression(italic(X[1])), ylab=expression(italic(X[2]))) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11022675', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1454600/']} | jdg_76040 |