source: stringclasses (1 value)
task_type: stringclasses (1 value)
in_source_id: stringlengths (1-8)
prompt: stringlengths (209-40.4k)
gold_standard_solution: stringlengths (0-56.7k)
verification_info: stringclasses (1 value)
metadata: stringlengths (138-225)
problem_id: stringlengths (9-10)
stackexchange
llm_judgeable_groundtruth_similarity
8426693
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is my Java code:

```java
List<Object> objects = new ArrayList();
// Assign values to objects...
for (int i = 0; i < objects.size(); i++) {
    Object object = objects.get(i);
    ...
}
```

I have two questions: Is objects.size() calculated only once before starting the loop, or is it calculated on each iteration? If objects.size() is calculated on each iteration, then if another thread changes it at the same time without multi-threading protection, the code may crash. Am I correct? Now provide the response and nothing else.
Answers:

1. objects.size() is called every loop (whether it is calculated depends on the ArrayList implementation, which you shouldn't care about).
2. Yes, another thread may change the list and this will affect your loop.

Real answer: You shouldn't have to care, and here's how you don't have to:

- Use a CopyOnWriteArrayList, which is thread-safe. If you iterate over it using an Iterator (as the foreach syntax uses internally), you'll iterate over the list as it was when the iteration started.
- Use the foreach syntax, which means you don't have to use an index etc. - it's done for you:

```java
for (Object object : objects) {
    // do something with each object
}
```
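For illustration, here is a minimal, self-contained sketch (my own, not from the linked docs; it assumes Java 9+ for List.of) showing the snapshot behaviour described above: an element added mid-loop is not seen by the ongoing iteration.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SnapshotDemo {
    public static void main(String[] args) {
        List<Integer> numbers = new CopyOnWriteArrayList<>(List.of(1, 2, 3));

        for (Integer n : numbers) {      // the iterator sees the list as it was here
            if (n == 1) {
                numbers.add(99);         // no ConcurrentModificationException
            }
            System.out.println(n);       // prints 1, 2, 3 - the 99 is not visited
        }

        System.out.println(numbers);     // [1, 2, 3, 99]
    }
}
```

The same loop over a plain ArrayList would throw a ConcurrentModificationException from the iterator, or, with an index-based loop, silently pick up the new element.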
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8426693', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/650640/']}
jdg_82841
stackexchange
llm_judgeable_groundtruth_similarity
56811537
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a C++ class like this: class A {public: A() : m_a(someValue1), m_b(someValue2), m_c(someValue3) { } // .... other class membersprivate: A m_a; B m_b; C m_c;}; After formatting this code with clang-format I am getting: class A {public: A() : m_a(someValue1), m_b(someValue2), m_c(someValue3) { } // .... other class membersprivate: A m_a; B m_b; C m_c;}; I.e. initializer list in constructor got formatted onto single line, until they permitted maximum line length is reached. My question is how to tell clang-format to leave each element on its own line, like it was in my original code before formatting? I could not find any suitable parameter. I've tried to set parameter AllowShortBlocksOnASingleLine, which seemed to me the most appropriate, to the both true and false, but it had no effect on this. So can anyone suggest how to implement such formatting? Here is my .clang-format: BasedOnStyle: GoogleAccessModifierOffset: '-4'AlignAfterOpenBracket: DontAlignAlignConsecutiveAssignments: 'false'AlignConsecutiveDeclarations: 'false'AlignEscapedNewlines: LeftAlignOperands: 'true'AlignTrailingComments: 'false'AllowAllParametersOfDeclarationOnNextLine: 'true'AllowShortBlocksOnASingleLine: 'true'AllowShortCaseLabelsOnASingleLine: 'true'AllowShortFunctionsOnASingleLine: NoneAllowShortIfStatementsOnASingleLine: 'true'AllowShortLoopsOnASingleLine: 'false'AlwaysBreakAfterReturnType: NoneAlwaysBreakBeforeMultilineStrings: 'false'AlwaysBreakTemplateDeclarations: 'true'BinPackArguments: 'true'BinPackParameters: 'true'BreakAfterJavaFieldAnnotations: 'true'BreakBeforeBinaryOperators: NonAssignmentBreakBeforeBraces: CustomBraceWrapping:AfterFunction: trueSplitEmptyFunction: trueBreakBeforeInheritanceComma: 'false'BreakBeforeTernaryOperators: 'true'BreakConstructorInitializers: AfterColonBreakStringLiterals: 'true'ColumnLimit: '100'CompactNamespaces: 'false'ConstructorInitializerAllOnOneLineOrOnePerLine: 'false'ConstructorInitializerIndentWidth: '4'ContinuationIndentWidth: '8'Cpp11BracedListStyle: 'true'DerivePointerAlignment: 'false'DisableFormat: 'false'ExperimentalAutoDetectBinPacking: 'false'FixNamespaceComments: 'true'IncludeBlocks: PreserveIndentCaseLabels: 'true'IndentPPDirectives: NoneIndentWidth: '4'IndentWrappedFunctionNames: 'false'JavaScriptQuotes: DoubleJavaScriptWrapImports: 'true'KeepEmptyLinesAtTheStartOfBlocks: 'false'NamespaceIndentation: NonePointerAlignment: LeftReflowComments: 'false'SortIncludes: 'true'SortUsingDeclarations: 'true'SpaceAfterCStyleCast: 'true'SpaceAfterTemplateKeyword: 'false'SpaceBeforeAssignmentOperators: 'true'SpaceBeforeParens: ControlStatementsSpaceInEmptyParentheses: 'false'SpacesBeforeTrailingComments: '2'SpacesInAngles: 'false'SpacesInCStyleCastParentheses: 'false'SpacesInContainerLiterals: 'false'SpacesInParentheses: 'false'SpacesInSquareBrackets: 'false'Standard: Cpp11TabWidth: '4'UseTab: Never UPDATE : There is option in clang-format called "ConstructorInitializerAllOnOneLineOrOnePerLine" descrined as follows: "If the constructor initializers don’t fit on a line, put each initializer on its own line".However, it still doesn't do what I want, because it only moves initializer to a new line if it doesn't fit to column limit. 
So looks like there is no way to force clang-format to put subsequent initializer on next line even if column limit is not reached.It would be nice if abovementioned option turned into enum with additional value that forces to put initializer to a next line even if column limit is not reached. Now provide the response and nothing else.
Update 2021: In clang 14 there is a new option called PackConstructorInitializers which will do what you want. The other options, ConstructorInitializerAllOnOneLineOrOnePerLine and AllowAllConstructorInitializersOnNextLine, which were confusing anyway, will be deprecated. End Update

I am pretty sure that this is a bug / shortcoming of clang-format. The issue was already addressed in 2015, but rejected by the clang-format developers: https://reviews.llvm.org/D14484 For what it is worth, I made a simple change to clang-format that should give you your intended behaviour: https://github.com/Nikolai-Hlubek/clang/tree/ConstructorInitializer_AlwaysBreakAfterColon I made a pull request upstream, but I doubt that it will be accepted.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56811537', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1540501/']}
jdg_82842
stackexchange
llm_judgeable_groundtruth_similarity
313134
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A neutron is a neutral particle which is merely some times more massive than an electron.What makes it so unstable outside the nucleus that it has a half life only of about 12 min? Now provide the response and nothing else.
How long is long? So "half life only of about 12 min" is actually really a strange idea to most of your readers. 12 minutes is a very long time, atomically speaking! Like, the charged pions have a half-life of 18 nanoseconds, the uncharged one is 58 nano-nanoseconds (attoseconds). You might say "well those are mesons, not baryons like the proton and neutron," but actually the first new baryon ever discovered, the $\Lambda^0$, had a half-life of 0.18 ns and this was considered so strange (in the sense of being so much longer than expected!) that the newly discovered particle was thought to have a quality called strangeness and this eventually became the name of the relevant quark; it is still today called the "strange quark." The mass difference The neutron decays to the proton for a simple reason: a proton is made of two ups and a down, a neutron is made of two downs and an up, and the down quark is intrinsically more massive than the up quark. Now there is a subtlety: the vast majority of the proton's and neutron's masses comes from their strong-force binding energy via $E=mc^2,$ which is why they have basically the exact same mass when fully assembled, a little over 930 MeV. (An electron volt, or eV, is the amount of energy that an electron gains when it goes through one volt of potential difference; it corresponds to a certain mass after dividing by $c^2.$) But the up quarks in these particles are about 2 MeV lighter than down quarks are (we actually don't know the real masses 100%, but the story seems to be about right), and the point is that this ~2 MeV gap is big enough that even after creating an electron (0.5 MeV) and neutrino and accounting for the greater electromagnetic self-repulsion, the proton is still 1.3 MeV lighter overall. Lighter means lower-energy, which means the total energy is spread out more across the universe, and in some sense we're talking about entropy and statistics again. You might wonder why this argument doesn't go one step further, to a particle with three ups. This particle exists and is called the $\Delta^{++}.$ However, this fact that "most of the mass is binding energy" comes back to bite us, because some of that binding energy, it turns out, lives in the spin configuration of the quarks that make up the nucleon. This comes down to the "Pauli exclusion principle": a down and an up-quark, being different particles, can be in "the same state" but two up-quarks must be "in different states". In the details, this exclusion principle takes the form that the up/down "flavor" configuration and spin configuration must either both be symmetric or antisymmetric, since the color-charge state is antisymmetric and the overall state must be antisymmetric. Well the up-up-up state of the $\Delta^{++}$ and down-down-down state of the $\Delta^-$ can't help but be symmetric; so the spin-state must be symmetric too, and the spin-symmetric state has a higher energy than the spin-antisymmetric state by 200-300 MeV. By contrast there are two (1u,2d) and (2u, 1d) configurations, the ones that are flavor-antisymmetric and spin-antisymmetric have total spin 1/2 and are the proton and neutron; the ones that are flavor-symmetric and spin-symmetric have total spin 3/2 and are the $\Delta^+$ and $\Delta^0.$ Anyway the point is that the extra energy which needs to be bound in this state to keep the extra spin in the system is very high, so that's why you don't see these particles in nature. 
Quantum tunneling So neutrons are a higher energy-state than protons, and quantum mechanics says that if there ever is a lower-energy state, and there is any process which can transfer energy out, then eventually the system will come to be in that lower-energy state. But, this could take a while if the transfer-process requires more energy than the system has, in which case quantum mechanics has to "tunnel" through the higher-energy state which takes some time due to time-energy uncertainty. That's what makes this process take so long for neutrons; the only pathway involves creating a $W^-$ boson which eventually decays into an electron and an antineutrino, but the boson in the middle has a very large mass -- 80,000 MeV or so -- and there is therefore nowhere near enough mass to create one of these. QM has to tunnel through this $W$-boson state. How does the presence of other nucleons stabilize neutrons? On the flip side, when these baryons are within a nucleus, the attraction of the different baryons can create a force which "holds together" neutrons, in the sense that the decay of a neutron would increase the energy of the whole, formed nucleus. This actually occurs by the exact same mechanism that makes that $\Delta^{++}$ baryon cost energy, that Pauli exclusion. So if you have dealt with atoms you know that two uncharged atoms will still "stick" to each other by the van der Waals forces, which just have to do with "even though the total charge is 0, there is still some charge-distribution structure here, which matters a lot at short distances." The nucleons within atoms actually have a very similar property even though the color charge is more complicated than the electric charge. Basically, these protons and neutrons are being held internally together with these gluons into color-charge-neutral particles; but they can still "stick" to each other through the strong force, generally by exchanging virtual pions. The pions are mesons: combinations of a quark and an antiquark with opposite color charges, so they end up being color-neutral as well. In this case the up-antidown meson is called $\pi^+$ while the down-antiup meson is called $\pi^-$ and there are two very short-lived $\pi^0$ mesons between them, up-antiup and down-antidown. These were predicted by Yukawa a long time before we knew anything about quarks: they were, in fact, our first jump down the rabbit hole! But anyway, there are these short-lived pions that "stick" protons and neutrons together at short ranges. Now Pauli exclusion comes in and says "hey, these protons and neutrons are also identical spin-1/2 particles, so I demand that they be in different states." This picture is much more like the electron-shell model of the atom, there are some energy "shells" for the protons and an almost-identical set of shell levels for the neutrons: the proton levels are a little higher in energy because the electromagnetic force says that like charges repel. Imagine these are laid out side-by side, left column is protons, right column is neutrons. If a neutron wants to become a proton by emitting an electron and antineutrino, it may need to pay an extra "cost" if there is no corresponding proton state to the left: and those levels also see a non-negligible splitting based on spin due to a strong spin-orbit interaction. In fact these effects are already enough to keep a neutron together in the case of deuterium, one proton bound to one neutron by these pions. 
Add one more neutron, and this becomes weakly unstable tritium with a half-life of 12 years, add one more neutron and the result is severely unstable. Actually there is a balance here where the energy gain from being able to "drop" down several energy shells can drive a nucleus with too few neutrons and too many protons to emit a positron (an anti-electron) in reverse-beta decay, turning into a neutron in order to "drop" a few shells down in energy. Those nuclei are very useful in medicine, because the positron then usually annihilates with an electron to produce two gamma rays going in opposite directions, and detecting these gamma rays is how the PET scanner works. So you say "drink this positron-emitting fluid!" and then you can map out with the PET scanner where all of these atoms have gone in the body.
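For readers who want the numbers behind the mass-difference argument, here is a quick energy-bookkeeping check of the decay n -> p + e- + antineutrino, using standard particle-data rest energies (these values are added here for illustration; they are not quoted in the answer above):

```latex
\begin{align*}
m_n c^2 &\approx 939.565\ \text{MeV} \\
m_p c^2 &\approx 938.272\ \text{MeV} \\
m_e c^2 &\approx 0.511\ \text{MeV} \\
Q &= m_n c^2 - \left(m_p c^2 + m_e c^2\right) \approx 0.782\ \text{MeV}
\end{align*}
```

That roughly 0.8 MeV is shared between the electron and the antineutrino (the proton recoil is negligible), which matches the point above that the decay is allowed, but only by the slim margin set by the small up/down quark mass gap.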
{}
{'log_upvote_score': 7, 'links': ['https://physics.stackexchange.com/questions/313134', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/145764/']}
jdg_82843
stackexchange
llm_judgeable_groundtruth_similarity
28445552
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm running Android 4.4.2 and I enabled the "Bluetooth HCI snoop log" as described here Sniffing/logging your own Android Bluetooth traffic After turning bluetooth on and off I rebooted the phone. I could not find the log file in the expected location: $ adb pull /sdcard/btsnoop_hci.logremote object '/sdcard/btsnoop_hci.log' does not exist How can I get to the btsnoop_hci.log? Now provide the response and nothing else.
UPDATE: The btsnoop hci log seems to be getting phased out of the user-accessible areas on a lot of phones. Assuming you have hci logging enabled, you can get a bugreport adb bugreport anewbugreportfolder Then decompress the folder. If you're lucky there is an 'FS' folder that contains the btsnoop_hci.log log several layers down (not sure why some phones have this and some don't.) If you don`t have it, grab the bug report text file that looks like this bugreport-2018-08-01-15-08-01.txt Run btsnooz.py against it. Per Google`s instructions, To extract snoop logs from the bug report, use the btsnooz script.Get btsnooz.py.Extract the text version of the bug report.Run btsnooz.py on the text version of the bug report:btsnooz.py BUG_REPORT.txt > BTSNOOP.log As of 1/12/21 the link to btsnooz is here: https://android.googlesource.com/platform/packages/modules/Bluetooth/+/refs/heads/master/system/tools/scripts/btsnooz.py LEGACY ANSWER: You can see where your phone is storing the hci log by reading the bt_stack.conf file. Try adb shell cat /etc/bluetooth/bt_stack.conf You will see a line that looks like # BtSnoop log output fileBtSnoopFileName =/sdcard/btsnoop_hci.log <--your file location It is usually, but not always (depends on the phone) on the root of the sdcard. There is also a line in this configuration file which may reflect if hci logging is actually on or not # EnableBtSnoop logging function# valid value : true , falseBtSnoopLogOutput=false Toggling the 'Enable Bluetooth HCI snoop log' option in the developer options should change it to # EnableBtSnoop logging function# valid value : true , falseBtSnoopLogOutput=true I say "should" because for some phones this doesn't update this file. You should: Read the bt_stack.conf file. See where the HCI log should be and if bt snoop logging is actually enabled or not If developer options say btsnoop_hci logging is on but the bt_stack.conf file says it is off, try power cycling bluetooth and/or your phone. If your phone is rooted, manually set BtSnoopLogOutput=true If none of the 3 options work, you're out of luck. BT Snoop hci logging is a bit inconsistent across different phones. I've seen a few phones where I just couldn't get it to work not matter what but for the most cases you should be able to get it going. A rooted phone is not a requirement.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28445552', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3116989/']}
jdg_82844
stackexchange
llm_judgeable_groundtruth_similarity
27743531
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am watching a tutorial on parsers in haskell https://www.youtube.com/watch?v=9FGThag0Fqs . The lecture starts with defining some really basic parsers. These are to be used together to create more complicated parsers later. One of the basic parsers is item . This is used to extract a character from the string we are parsing. All parsers have the following type: type Parser a = String -> [(a, String)] The parser item is defined like this: item :: Parser Char item = \inp -> case inp of [] -> [] (x:xs) -> [(x,xs)] I am not so used to this syntax, so it looks strange to me. I would have written it: item' :: Parser Char item' [] = [] item' (x:xs) = [(x,xs)] Testing it in ghci indicates that they are equal: *Main> item ""[]*Main> item "abc"[('a',"bc")]*Main> item' ""[]*Main> item' "abc"[('a',"bc")] The lecturer makes a short comment about thinking it looks clearer, but I disagree. So my questions are: Are they indeed completely identical?Why is the lambda version clearer? Now provide the response and nothing else.
I believe this comes from the common practice of writing

```haskell
f :: Type1 -> ... -> Typen -> Result
f x1 ... xn = someResult
```

where we have exactly n function arrows in the type, and exactly n arguments in the left hand side of the equation. This makes it easy to relate types and formal parameters. If Result is a type alias for a function, then we may write

```haskell
f :: Type1 -> ... -> Typen -> Result
f x1 ... xn y = something
```

or

```haskell
f :: Type1 -> ... -> Typen -> Result
f x1 ... xn = \y -> something
```

The latter follows the convention above: n arrows, n variables in the left hand side. Also, on the right hand side we have something of type Result, making it easier to spot. The former instead does not, and one might miss the extra argument when reading the code quickly. Further, this style makes it easy to convert Result to a newtype instead of a type alias:

```haskell
newtype Result = R (... -> ...)

f :: Type1 -> ... -> Typen -> Result
f x1 ... xn = R $ \y -> something
```

The posted item :: Parser Char code is an instance of this style when n = 0.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27743531', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/867377/']}
jdg_82845
stackexchange
llm_judgeable_groundtruth_similarity
220780
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I have a list such as list={1,2,3,4,5}, how can a prepend multiple values to the list?. For example what I want is to prepend 6 zeros prior to {1,2,3,4,5} to get {0,0,0,0,0,0,1,2,3,4,5} . I tried using Prepend[list, Table[0, 6]] but this give me {{0,0,0,0,0,0},1,2,3,4,5} which would be all the six zeros as the first element rather than each zero being an element. Thank you. Now provide the response and nothing else.
You have lots of options, such as:

```mathematica
list = {1, 2, 3, 4, 5};
zeros = ConstantArray[0, 3];

Catenate[{zeros, list}]
(* or *)
Join[zeros, list]
(* or *)
Flatten[{zeros, list}, 1]
```

{0, 0, 0, 1, 2, 3, 4, 5}

As of Mathematica 12.1, you also have this option:

```mathematica
{Splice[zeros], Splice[list]}
```

{0, 0, 0, 1, 2, 3, 4, 5}

Which I guess is equivalent to doing this, which also works in earlier versions:

```mathematica
{Sequence @@ zeros, Sequence @@ list}
```

{0, 0, 0, 1, 2, 3, 4, 5}

But Splice has the additional advantage that it is not evaluated before it occurs in the context of a list, which means that we can use it with Prepend, and salvage your initial attempt:

```mathematica
Prepend[list, Splice[zeros]]
```

{0, 0, 0, 1, 2, 3, 4, 5}

(Prepend[list, Sequence @@ zeros] would not have worked, since Sequence would have been applied immediately, which nicely highlights the utility of Splice.)
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/220780', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/71956/']}
jdg_82846
stackexchange
llm_judgeable_groundtruth_similarity
27872723
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to create a histogram that plots a line and not a step or bar chart. I am using Python 2.7. The plt.hist function below plots a stepped line, and the bins don't line up in the plt.plot function.

```python
import matplotlib.pyplot as plt
import numpy as np

noise = np.random.normal(0, 1, (1000, 1))
(n, x, _) = plt.hist(noise, bins=np.linspace(-3, 3, 7), histtype=u'step')
plt.plot(x[:-1], n)
```

I need the line to correlate with each bin's count at the bin centers, as if there were a histtype=u'line' flag to go with the align=u'mid' flag. Now provide the response and nothing else.
Using scipy, you could use stats.gaussian_kde to estimate the probability density function:

```python
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats

noise = np.random.normal(0, 1, (1000, ))
density = stats.gaussian_kde(noise)

n, x, _ = plt.hist(noise, bins=np.linspace(-3, 3, 50), histtype=u'step', density=True)
plt.plot(x, density(x))
plt.show()
```
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27872723', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3107858/']}
jdg_82847
stackexchange
llm_judgeable_groundtruth_similarity
667812
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Giving that Windows 10 would most likely wipe my Linux EFI boot entry, See the comment after the answer here : Windows 10 will usually "self-heal" its firmware boot entry if you manage to get Windows booting even once. In the process, if there is no existing Windows boot entry in the firmware (i.e. in the efibootmgr list), it will usually usurp Boot0000 for itself, regardless of whether or not it is already in use. I'd like to backup my EFI boot entry before so that I can then easily restore it even Windows 10 wipes it. Seems there is no existing tools that can do it, though https://github.com/rhboot/efibootmgr/issues/10 mentioned the efivar utility, with somewhat manual process. However, I cannot find any further info into that direction. Hence the question. Or, if I have a EFI boot entry like this: Boot0000* debian HD(13,GPT,007a058a-8e5e-45df-8d97-6575b66b5355,0x1afa9000,0x113000)/File(\EFI\debian\grubx64.efi) How to recreate it next time? Now provide the response and nothing else.
It's easy enough to recreate a boot entry from scratch once you know how... and have the efibootmgr tool at hand, of course. Boot0000* debian HD(13,GPT,007a058a-8e5e-45df-8d97-6575b66b5355,0x1afa9000,0x113000)/File(\EFI\debian\grubx64.efi) The 007a058a-8e5e-45df-8d97-6575b66b5355 is the PARTUUID of the ESP partition the \EFI\debian\grubx64.efi is located in. (The 13 may be a partition number, but according to the specification, the PARTUUID is the primary identifier.) The efibootmgr command just needs to know the disk: it will find the ESP partition on that disk, and its PARTUUID, automatically on its own, assuming there is only one ESP per disk. So, let's assume that this PARTUUID belongs to your /dev/sda13 partition (use blkid or lsblk -o +partuuid to check). To recreate the boot entry (or to make an extra copy of it right now): efibootmgr -c -d /dev/sda -L debian -l \\EFI\\debian\\grubx64.efi Backslashes are doubled because backslash is a special escape character for the shell. This command will automatically find the ESP partition on /dev/sda and its PARTUUID, and will build the boot entry for you. efibootmgr will automatically pick the first free BootNNNN number for the boot entry, and will also automatically add it as the first entry in the BootOrder . So if Boot0000 already exists, this would create Boot0001 and set BootOrder to 0001,0000 if it previously was just 0000 . This would be an effective backup of your current boot entries: (lsblk -o +partuuid; efibootmgr -v) > boot-entry-repair-kit.txt
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/667812', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/374303/']}
jdg_82848
stackexchange
llm_judgeable_groundtruth_similarity
45282608
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a compose file with v3 where there are 3 services sharing/using the same volume. While using swarm mode we need to create extra containers & volumes to manage our services across the cluster. I am planning to use NFS server so that single NFS share will get mounted directly on all the hosts within the cluster. I have found below two ways of doing it but it needs extra steps to be performed on the docker host - Mount the NFS share using "fstab" or "mount" command on the host & then use it as a host volume for docker services. Use Netshare plugin - https://github.com/ContainX/docker-volume-netshare Is there a standard way where i can directly use/mount NFS share using docker compose v3 by performing only few/no steps(I understand that "nfs-common" package is required anyhow) on the docker host? Now provide the response and nothing else.
After discovering that this is massively undocumented,here's the correct way to mount a NFS volume using stack and docker compose. The most important thing is that you need to be using version: "3.2" or higher. You will have strange and un-obvious errors if you don't. The second issue is that volumes are not automatically updated when their definition changes. This can lead you down a rabbit hole of thinking that your changes aren't correct, when they just haven't been applied. Make sure you docker rm VOLUMENAME everywhere it could possibly be, as if the volume exists, it won't be validated. The third issue is more of a NFS issue - The NFS folder will not be created on the server if it doesn't exist. This is just the way NFS works. You need to make sure it exists before you do anything. (Don't remove 'soft' and 'nolock' unless you're sure you know what you're doing - this stops docker from freezing if your NFS server goes away) Here's a complete example: [root@docker docker-mirror]# cat nfs-compose.ymlversion: "3.2"services: rsyslog: image: jumanjiman/rsyslog ports: - "514:514" - "514:514/udp" volumes: - type: volume source: example target: /nfs volume: nocopy: truevolumes: example: driver_opts: type: "nfs" o: "addr=10.40.0.199,nolock,soft,rw" device: ":/docker/example"[root@docker docker-mirror]# docker stack deploy --with-registry-auth -c nfs-compose.yml rsyslogCreating network rsyslog_defaultCreating service rsyslog_rsyslog[root@docker docker-mirror]# docker stack ps rsyslogID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTStb1dod43fe4c rsyslog_rsyslog.1 jumanjiman/rsyslog:latest swarm-4 Running Starting less than a second ago[root@docker docker-mirror]# Now, on swarm-4: root@swarm-4:~# docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMESd883e0f14d3f jumanjiman/rsyslog:latest "rsyslogd -n -f /e..." 6 seconds ago Up 5 seconds 514/tcp, 514/udp rsyslog_rsyslog.1.tb1dod43fe4cy3j5vzsy7pgv5root@swarm-4:~# docker exec -it d883e0f14d3f df -h /nfsFilesystem Size Used Available Use% Mounted on:/docker/example 7.2T 5.5T 1.7T 77% /nfsroot@swarm-4:~# This volume will be created ( but not destroyed ) on any swarm node that the stack is running on. root@swarm-4:~# docker volume inspect rsyslog_example[ { "CreatedAt": "2017-09-29T13:53:59+10:00", "Driver": "local", "Labels": { "com.docker.stack.namespace": "rsyslog" }, "Mountpoint": "/var/lib/docker/volumes/rsyslog_example/_data", "Name": "rsyslog_example", "Options": { "device": ":/docker/example", "o": "addr=10.40.0.199,nolock,soft,rw", "type": "nfs" }, "Scope": "local" }]root@swarm-4:~#
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/45282608', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3802507/']}
jdg_82849
stackexchange
llm_judgeable_groundtruth_similarity
15757695
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm aware that R markdown can produce superscripts: text^superscript But is it possible to produce proper subscripts? Or is the only way to do so to cheat and use LaTeX math mode: $\sf{text_{subscript}}$ The intended final output is HTML. Now provide the response and nothing else.
Since you mention Pandoc in your comments, maybe it's not cheating to depend on Pandoc's extensions for subscript and superscript . From here , we can create a minimal example Rmd file: Testing Subscript and Superscript========================================================This is an R Markdown document. Pandoc includes numerous extensions to markdown, and one of them is *subscript* and *superscript*.Here's the example from the Pandoc help page (http://johnmacfarlane.net/pandoc/README.html#superscripts-and-subscripts): H~2~O is a liquid. 2^10^ is 1024.For fun, here's an R code block with some code from @Spacedman:```{r}list.depth <- function(this, thisdepth = 0) {# http://stackoverflow.com/a/13433689/1270695 if(!is.list(this)) { return(thisdepth) } else { return(max(unlist(lapply(this, list.depth, thisdepth = thisdepth+1)))) }}``` Using Knitr results in an HTML file that renders like this: That clearly doesn't work. But you can run pandoc on the resulting markdown file (which I've named "Subscripts.md"): pandoc -o Subscripts.html Subscripts.md -s -S and you'll get this: The CSS is different, but perhaps you can call pandoc with a custom CSS argument to use the same CSS used by Knitr. Subscripts in PDF files also work as expected with that markdown file: pandoc -o Subscripts.pdf Subscripts.md Edit If you want the pandoc output to match the visual appearance of the output when you knit with RStudio, download the CSS file that RStudio uses here and make a reference to that file when you create your HTML file from pandoc. (The following assumes you have kept the name as markdown.css an it is in the same directory as your other files.) pandoc -o Subscripts.html Subscripts.md -s -S --css=markdown.css
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15757695', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1465387/']}
jdg_82850
stackexchange
llm_judgeable_groundtruth_similarity
56117261
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: docker image inspect <name> gives me 16GB and about 20 layers When I am logged as root, this du -hs / show me just 2GB FYI, there are already very multi-lines RUN commands in Dockerfile. can I squash all layers into one layer without touching Dockerfile, rebuilding etc? or possibly by adding extra action to Dockerfile which clear/improve caching Dockerfile is FROM heroku/heroku:18ENV PYENV_ROOT="/pyenv"ENV PATH="/pyenv/shims:/pyenv/bin:$PATH"ENV PYTHON_VERSION 3.5.6ENV GPG_KEY <value>ENV PYTHONUNBUFFERED 1ENV TERM xtermENV EDITOR vimRUN apt-get update && apt-get install -y \ build-essential \ gdal-bin \ binutils \ iputils-ping \ libjpeg8 \ libproj-dev \ libjpeg8-dev \ libtiff-dev \ zlib1g-dev \ libfreetype6-dev \ liblcms2-dev \ libxml2-dev \ libxslt1-dev \ libssl-dev \ libncurses5-dev \ virtualenv \ python-pip \ python3-pip \ python-dev \ libmysqlclient-dev \ mysql-client-5.7 \ libpq-dev \ libcurl4-gnutls-dev \ libgnutls28-dev \ libbz2-dev \ tig \ git \ vim \ nano \ tmux \ tmuxinator \ fish \ sudo \ libnet-ifconfig-wrapper-perl \ ruby \ libssl-dev \ nodejs \ strace \ tcpdump \ # npm & grunt && curl -L https://npmjs.com/install.sh | sh \ && npm install -g grunt-cli grunt \ # ruby & foreman && gem install foreman \ # installing pyenv && curl https://raw.githubusercontent.com/yyuu/pyenv-installer/master/bin/pyenv-installer | bashCOPY . /appCOPY ./requirements /requirementsCOPY ./requirements.txt /requirements.txtCOPY ./docker/docker_compose/django/foreman.sh /foreman.shCOPY ./docker/docker_compose/django/Procfile /ProcfileCOPY ./docker/docker_compose/django/entrypoint.sh /entrypoint.sh# ADD sudoer user django with password djangoRUN groupadd -r django -g 1000 && \ useradd -ms /usr/bin/fish -p $(openssl passwd -1 django) --uid 1000 --gid 1000 -r -g django django && \ usermod -a -G sudo django && \ chown -R django:django /app COPY --chown=django:django ./docker/docker_compose/django/fish /home/django/.config/fishCOPY --chown=django:django ./docker/docker_compose/django/tmuxinator /home/django/.tmuxinatorCOPY ./docker/docker_compose/django/fish /root/.config/fishWORKDIR /appRUN sed -i 's/\r//' /entrypoint.sh \ && sed -i 's/\r//' /foreman.sh \ && chmod +x /entrypoint.sh \ && chown django /entrypoint.sh \ && chmod +x /foreman.sh \ && chown django /foreman.sh \ && chown -R django:django /home/django/ \ && pyenv install ${PYTHON_VERSION%%} \ && mkdir -p /app/log \ && pyenv global ${PYTHON_VERSION%%} \ && pyenv rehash \ && ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -U pip \ && ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -r /requirements.txt \ && chown -R django:django /pyenv/ \ && ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -r /requirements/dev_requirements.txt# this user receives ENVs from the topUSER djangoENTRYPOINT ["/entrypoint.sh"] What I've tried so far: The --squash option from experimental mode of docker build is rather not for me. That Dockerfile is one of more Dockerfiles inside docker-compose. I've also checked this: https://github.com/jwilder/docker-squash but seems docker load cannot load a squashed image.also, that squash gives me 8GB (still far away from expected ~2GB) docker save <image_id> | docker-squash -t latest_tiny | docker load update after answers: when I've added this: && apt-get autoremove \ # ? to consider&& apt-get clean \ # ? 
to consider&& rm -rf /var/lib/apt/lists/* to apt-get and --no-cache-dir to each pip, the result was 72GB (yes, even much more - docker images shows 36GB before pip command, and 72GB as final size). my working directory is clear (regarding COPY ). du -hs / (as a root) still has 2GB. And all images were removed before rebuilding. Following the @Mihai approach, I was able to slim down the image from 16GB to 9GB. Now provide the response and nothing else.
There is a simple trick to get rid of the intermediate layers. It will bring down the size as well, but by how much depends on how the image was built. Create a Dockerfile like this:

```dockerfile
FROM your_image as initial

FROM your_image_base
COPY --from=initial / /
```

your_image_base should be something like 'alpine' - so the smallest image from which your image and its parents descend. Now build the image and check the history and size:

```
docker build -t your-image:2.0 .
docker image history your-image:2.0
docker image ls
```

This way you do create a new Dockerfile (if that is acceptable for your process) without touching the initial Dockerfile. Let me know if this solves your issue.

UPDATE AFTER SEEING THE Dockerfile: maybe I missed it, but I don't see you cleaning up the apt-get cache after you perform the installations. Your big RUN command should end with "&& rm -rf /var/lib/apt/lists/*" on the same line so that it doesn't store the whole cache in the layer.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56117261', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1416144/']}
jdg_82851
stackexchange
llm_judgeable_groundtruth_similarity
886633
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I calculate the 1-norm of the difference of two vectors, ||a - b||_1 = sum(|a_i - b_i|), in Python?

```python
a = [1, 2, 3, 4]
b = [2, 3, 4, 5]
# ||a - b||_1 = 4
```

Now provide the response and nothing else.
Python has powerful built-in types, but Python lists are not mathematical vectors or matrices. You could do this with lists, but it will likely be cumbersome for anything more than trivial operations. If you find yourself needing vector or matrix arithmetic often, the standard in the field is NumPy, which probably already comes packaged for your operating system the way Python also was. I share the confusion of others about exactly what it is you're trying to do, but perhaps the numpy.linalg.norm function will help:

```python
>>> import numpy
>>> a = numpy.array([1, 2, 3, 4])
>>> b = numpy.array([2, 3, 4, 5])
>>> numpy.linalg.norm((a - b), ord=1)
4
```

To show how that's working under the covers:

```python
>>> a
array([1, 2, 3, 4])
>>> b
array([2, 3, 4, 5])
>>> (a - b)
array([-1, -1, -1, -1])
>>> numpy.linalg.norm((a - b))
2.0
>>> numpy.linalg.norm((a - b), ord=1)
4
```
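Spelling the arithmetic out by hand for the example vectors (just a restatement of what the session above computes):

```latex
\lVert a - b \rVert_1 = |1-2| + |2-3| + |3-4| + |4-5| = 1 + 1 + 1 + 1 = 4
```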
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/886633', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_82852
stackexchange
llm_judgeable_groundtruth_similarity
16082238
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the two following tables:

table 1)
ID | HOTEL ID | NAME
1    100        xyz
2    101        pqr
3    102        abc

table 2)
ID | BOOKING ID | DEPARTURE DATE | AMOUNT
1    1            2013-04-12       100
2    1            2013-04-14       120
3    1            2013-04-9        90
4    2            2013-04-14       100
5    2            2013-04-18       150
6    3            2013-04-12       100

I want to get a result in MySQL such that it takes the row from table two with the MAX DEPARTURE DATE:

ID | BOOKING ID | DEPARTURE DATE | AMOUNT
2    1            2013-04-14       120
5    2            2013-04-18       150
6    3            2013-04-12       100

Now provide the response and nothing else.
```sql
SELECT  b.ID, b.BookingID, a.Name, b.departureDate, b.Amount
FROM    Table1 a
        INNER JOIN Table2 b
            ON a.ID = b.BookingID
        INNER JOIN
        (
            SELECT  BookingID, MAX(DepartureDate) Max_Date
            FROM    Table2
            GROUP BY BookingID
        ) c
            ON b.BookingID = c.BookingID
            AND b.DepartureDate = c.Max_date
```

SQLFiddle Demo
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16082238', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1504606/']}
jdg_82853
stackexchange
llm_judgeable_groundtruth_similarity
12330341
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to construct a class with alot of variables directly from the Database, For simplicity we'll name them 'userX', I've looked into ORM just a little, but its way over my head. Essentially I thought I could use my procedural code for ($i=0; $i<100; $i++) {public ${'user'.$i};} But, in a class class test() { private $var1; for ($i=0; $i<10000; $i++) { public ${'user'.$i}; } function __constructor .....} Obviously not.. but it leaves me with the same problem, how can I add $user0, $user1, $user2, etc etc, without having to type all 10k of them in.. Obviously, it would be 1000x easier to just grab the names from the Database, but again, that looks even harder to code. Should I buckle down and grab them all ORM style? Now provide the response and nothing else.
You could simply use the magic accessors to have as many instance attributes as you wish:

```php
class test
{
    private $data;

    public function __get($varName)
    {
        if (!array_key_exists($varName, $this->data)) {
            // this attribute is not defined!
            throw new Exception('.....');
        } else {
            return $this->data[$varName];
        }
    }

    public function __set($varName, $value)
    {
        $this->data[$varName] = $value;
    }
}
```

Then you could use your instance like this:

```php
$t = new test();
$t->var1 = 'value';
$t->foo = 1;
$t->bar = 555;

// this should throw an exception as "someVarname" is not defined
$t->someVarname;
```

And to add a lot of attributes:

```php
for ($i = 0; $i < 100; $i++)
    $t->{'var'.$i} = 'somevalue';
```

You could also initialize a newly created instance with a given set of attributes:

```php
// $values is an associative array
public function __construct($values)
{
    $this->data = $values;
}
```
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12330341', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1529149/']}
jdg_82854
stackexchange
llm_judgeable_groundtruth_similarity
1240077
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why can't I do: Enumeration e = ...for (Object o : e) ... Now provide the response and nothing else.
Because Enumeration<T> doesn't extend Iterable<T> . Here is an example of making Iterable Enumerations . As to why that's an interesting question. This isn't exactly your question but it sheds some light on it. From the Java Collections API Design FAQ : Why doesn't Iterator extend Enumeration? We view the method names for Enumeration as unfortunate. They're very long, and very frequently used. Given that we were adding a method and creating a whole new framework, we felt that it would be foolish not to take advantage of the opportunity to improve the names. Of course we could support the new and old names in Iterator, but it doesn't seem worthwhile. That basically suggests to me that Sun wants to distance themselves from Enumeration, which is very early Java with quite a verbose syntax.
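To make that concrete, here is a small self-contained sketch (my own, not the code behind the link above) showing two common bridges from Enumeration to for-each: copying into a list with Collections.list, or wrapping it in a one-pass Iterable adapter.

```java
import java.util.Collections;
import java.util.Enumeration;
import java.util.Iterator;
import java.util.Vector;

public class EnumerationBridge {
    /** Wraps an Enumeration so it can be used once in a for-each loop. */
    static <T> Iterable<T> asIterable(final Enumeration<T> e) {
        return new Iterable<T>() {
            public Iterator<T> iterator() {
                return new Iterator<T>() {
                    public boolean hasNext() { return e.hasMoreElements(); }
                    public T next()          { return e.nextElement(); }
                    public void remove()     { throw new UnsupportedOperationException(); }
                };
            }
        };
    }

    public static void main(String[] args) {
        Vector<String> v = new Vector<String>();
        v.add("a"); v.add("b"); v.add("c");

        // Option 1: copy the Enumeration into an ArrayList (fine for small inputs)
        for (String s : Collections.list(v.elements())) {
            System.out.println(s);
        }

        // Option 2: adapt it lazily; valid for a single pass only
        for (String s : asIterable(v.elements())) {
            System.out.println(s);
        }
    }
}
```

For what it's worth, newer Java versions (9+) also ship Enumeration.asIterator(), which removes most of the need for a hand-rolled adapter.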
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1240077', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11236/']}
jdg_82855
stackexchange
llm_judgeable_groundtruth_similarity
12930272
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: A friend of mine and I are currently discussing what is a closure in JS and what isn't. We just want to make sure we really understand it correctly. Let's take this example. We have a counting loop and want to print the counter variable on the console delayed. Therefore we use setTimeout and closures to capture the value of the counter variable to make sure that it will not print N times the value N. The wrong solution without closures or anything near to closures would be: for(var i = 0; i < 10; i++) { setTimeout(function() { console.log(i); }, 1000);} which will of course print 10 times the value of i after the loop, namely 10. So his attempt was: for(var i = 0; i < 10; i++) { (function(){ var i2 = i; setTimeout(function(){ console.log(i2); }, 1000) })();} printing 0 to 9 as expected. I told him that he isn't using a closure to capture i , but he insists that he is. I proved that he doesn't use closures by putting the for loop body within another setTimeout (passing his anonymous function to setTimeout ), printing 10 times 10 again. The same applies if I store his function in a var and execute it after the loop, also printing 10 times 10. So my argument is that he doesn't really capture the value of i , making his version not a closure. My attempt was: for(var i = 0; i < 10; i++) { setTimeout((function(i2){ return function() { console.log(i2); } })(i), 1000);} So I capture i (named i2 within the closure), but now I return another function and pass this around. In my case, the function passed to setTimeout really captures i . Now who is using closures and who isn't? Note that both solutions print 0 to 9 on the console delayed, so they solve the original problem, but we want to understand which of those two solutions uses closures to accomplish this. Now provide the response and nothing else.
Editor's Note: All functions in JavaScript are closures as explained in this post . However we are only interested in identifying a subset of these functions which are interesting from a theoretical point of view. Henceforth any reference to the word closure will refer to this subset of functions unless otherwise stated. A simple explanation for closures: Take a function. Let's call it F. List all the variables of F. The variables may be of two types: Local variables (bound variables) Non-local variables (free variables) If F has no free variables then it cannot be a closure. If F has any free variables (which are defined in a parent scope of F) then: There must be only one parent scope of F to which a free variable is bound. If F is referenced from outside that parent scope, then it becomes a closure for that free variable. That free variable is called an upvalue of the closure F. Now let's use this to figure out who uses closures and who doesn't (for the sake of explanation I have named the functions): Case 1: Your Friend's Program for (var i = 0; i < 10; i++) { (function f() { var i2 = i; setTimeout(function g() { console.log(i2); }, 1000); })();} In the above program there are two functions: f and g . Let's see if they are closures: For f : List the variables: i2 is a local variable. i is a free variable. setTimeout is a free variable. g is a local variable. console is a free variable. Find the parent scope to which each free variable is bound: i is bound to the global scope. setTimeout is bound to the global scope. console is bound to the global scope. In which scope is the function referenced ? The global scope . Hence i is not closed over by f . Hence setTimeout is not closed over by f . Hence console is not closed over by f . Thus the function f is not a closure. For g : List the variables: console is a free variable. i2 is a free variable. Find the parent scope to which each free variable is bound: console is bound to the global scope. i2 is bound to the scope of f . In which scope is the function referenced ? The scope of setTimeout . Hence console is not closed over by g . Hence i2 is closed over by g . Thus the function g is a closure for the free variable i2 (which is an upvalue for g ) when it's referenced from within setTimeout . Bad for you: Your friend is using a closure. The inner function is a closure. Case 2: Your Program for (var i = 0; i < 10; i++) { setTimeout((function f(i2) { return function g() { console.log(i2); }; })(i), 1000);} In the above program there are two functions: f and g . Let's see if they are closures: For f : List the variables: i2 is a local variable. g is a local variable. console is a free variable. Find the parent scope to which each free variable is bound: console is bound to the global scope. In which scope is the function referenced ? The global scope . Hence console is not closed over by f . Thus the function f is not a closure. For g : List the variables: console is a free variable. i2 is a free variable. Find the parent scope to which each free variable is bound: console is bound to the global scope. i2 is bound to the scope of f . In which scope is the function referenced ? The scope of setTimeout . Hence console is not closed over by g . Hence i2 is closed over by g . Thus the function g is a closure for the free variable i2 (which is an upvalue for g ) when it's referenced from within setTimeout . Good for you: You are using a closure. The inner function is a closure. So both you and your friend are using closures. Stop arguing. 
I hope I cleared the concept of closures and how to identify them for the both of you. Edit: A simple explanation as to why are all functions closures (credits @Peter): First let's consider the following program (it's the control ): lexicalScope();function lexicalScope() { var message = "This is the control. You should be able to see this message being alerted."; regularFunction(); function regularFunction() { alert(eval("message")); }} We know that both lexicalScope and regularFunction aren't closures from the above definition . When we execute the program we expect message to be alerted because regularFunction is not a closure (i.e. it has access to all the variables in its parent scope - including message ). When we execute the program we observe that message is indeed alerted. Next let's consider the following program (it's the alternative ): var closureFunction = lexicalScope();closureFunction();function lexicalScope() { var message = "This is the alternative. If you see this message being alerted then in means that every function in JavaScript is a closure."; return function closureFunction() { alert(eval("message")); };} We know that only closureFunction is a closure from the above definition . When we execute the program we expect message not to be alerted because closureFunction is a closure (i.e. it only has access to all its non-local variables at the time the function is created ( see this answer ) - this does not include message ). When we execute the program we observe that message is actually being alerted. What do we infer from this? JavaScript interpreters do not treat closures differently from the way they treat other functions. Every function carries its scope chain along with it. Closures don't have a separate referencing environment. A closure is just like every other function. We just call them closures when they are referenced in a scope outside the scope to which they belong because this is an interesting case.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/12930272', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/592323/']}
jdg_82856
stackexchange
llm_judgeable_groundtruth_similarity
7423082
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What's the correct way of overriding onMeasure()? I've seen various approaches. For example, Professional Android Development uses MeasureSpec to calculate the dimensions, then ends with a call to setMeasuredDimension(). For example: @Override protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec){int parentWidth = MeasureSpec.getSize(widthMeasureSpec);int parentHeight = MeasureSpec.getSize(heightMeasureSpec);this.setMeasuredDimension(parentWidth/2, parentHeight);} On the other hand, as per this post , the "correct" way is to use MeasureSpec, call setMeasuredDimensions(), followed by a call to setLayoutParams(), and ending with a call to super.onMeasure(). For example: @Override protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec){int parentWidth = MeasureSpec.getSize(widthMeasureSpec);int parentHeight = MeasureSpec.getSize(heightMeasureSpec);this.setMeasuredDimension(parentWidth/2, parentHeight);this.setLayoutParams(new *ParentLayoutType*.LayoutParams(parentWidth/2,parentHeight));super.onMeasure(widthMeasureSpec, heightMeasureSpec);} So which is the right way? Neither approach has worked 100% for me. I guess really what I'm asking is does anyone know of a tutorial that explains onMeasure(), layout, dimensions of child views etc.? Now provide the response and nothing else.
The other solutions are not comprehensive. They may work in some cases, and are a good place to start, but they may are not guaranteed to work. When onMeasure gets called you may or may not have the rights to change the size. The values that are passed to your onMeasure ( widthMeasureSpec , heightMeasureSpec ) contain information about what your child view is allowed to do. Currently there are three values: MeasureSpec.UNSPECIFIED - You can be as big as you'd like MeasureSpec.AT_MOST - As big as you want (up to the spec size), This is parentWidth in your example. MeasureSpec.EXACTLY - No choice. Parent has chosen. This is done so that Android can make multiple passes to find the right size for each item, see here for more details. If you do not follow these rules, your approach is not guaranteed to work. For example if you want to check if you're allowed to change the size at all you can do the following: final int widthSpecMode = MeasureSpec.getMode(widthMeasureSpec);final int heightSpecMode = MeasureSpec.getMode(heightMeasureSpec);boolean resizeWidth = widthSpecMode != MeasureSpec.EXACTLY;boolean resizeHeight = heightSpecMode != MeasureSpec.EXACTLY; Using this information you will know whether you can modify the values as in your code. Or if you are required to do something different. A quick and easy way to resolve your desired size is to use one of the following methods: int resolveSizeAndState (int size, int measureSpec, int childMeasuredState) int resolveSize (int size, int measureSpec) While the first is only available on Honeycomb, the second is available on all versions. Note: You may find that resizeWidth or resizeHeight are always false. I found this to be the case if I was requesting MATCH_PARENT . I was able to fix this by requesting WRAP_CONTENT on my parent layout and then during the UNSPECIFIED phase requesting a size of Integer.MAX_VALUE . Doing so gives you the max size your parent allows on the next pass through onMeasure.
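To illustrate those rules, here is a minimal custom-view sketch of my own (the 200-pixel desired size is an arbitrary placeholder): it proposes a size and lets resolveSize reconcile it with the parent's MeasureSpec.

```java
import android.content.Context;
import android.util.AttributeSet;
import android.view.View;

public class SquareishView extends View {

    public SquareishView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        // What we would like to be if nobody constrains us (placeholder values).
        int desiredWidth = 200;
        int desiredHeight = 200;

        // resolveSize applies the MeasureSpec rules for us:
        //   EXACTLY     -> the parent's size wins
        //   AT_MOST     -> min(desired, parent's limit)
        //   UNSPECIFIED -> our desired size
        int measuredWidth = resolveSize(desiredWidth, widthMeasureSpec);
        int measuredHeight = resolveSize(desiredHeight, heightMeasureSpec);

        // onMeasure must always end by reporting a size.
        setMeasuredDimension(measuredWidth, measuredHeight);
    }
}
```

The same structure works if you compute desiredWidth/desiredHeight from your content instead of hard-coding them; the important part is that the MeasureSpec mode, not your preference, has the final say.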
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7423082', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/555493/']}
jdg_82857
stackexchange
llm_judgeable_groundtruth_similarity
10907325
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm going through the latest rails tutorial on railstutorial.org and I am stuck on a certain exercise (#8 on http://ruby.railstutorial.org/chapters/updating-showing-and-deleting-users#sec:updating_deleting_exercises ). You have to write a rspec/capybara test to make sure the admin can't delete themself. I have the implementation working, but can't get the test to work right. Here is my code. I found similar questions here: Ruby on Rails syntax and https://getsatisfaction.com/railstutorial/topics/how_to_prevent_admin_user_from_deleting_themselves . But I think it is an older tutorial and not the same question. Here is the relevant code in spec/requests/user_pages_spec.rb: describe "User pages" do subject { page } describe "delete links" do describe "as an admin user" do let(:admin) { FactoryGirl.create(:admin) } before do sign_in admin visit users_path end it "should not be able to delete themself" do expect { admin.delete }.should_not change(User, :count) end end end endend The error message says that the user count is getting reduced by 1. For completeness, here is my (working) implementation: class UsersController < ApplicationController before_filter :current_admin, only: :destroy def current_admin @user = User.find(params[:id]) redirect_to users_path, notice: "Cannot delete current admin" if current_user?(@user) endend Where am I going wrong, thanks? (I left out some methods, but hopefully there is enough to figure out what I'm trying to do) Edit: Using Ruby v1.9.3, Rails v3.2.3. By default, there is no delete link for admins. Edit2: Here is what I got working: spec/controllers/users_controller_spec.rb require 'spec_helper'describe UsersController do describe "admins" do let(:admin) { FactoryGirl.create(:admin) } it "should not be able to delete themself" do sign_in admin expect { delete :destroy, :id => admin.id }.should_not change(User, :count) end endend users_controller.rb def destroy @user = User.find(params[:id]) if current_user?(@user) flash[:error] = "Cannot delete current admin" else @user.destroy flash[:success] = "User destroyed." end redirect_to users_pathend Now provide the response and nothing else.
In Eclipse the .java file name must be the same as the class name. Is it true in any case or only for Eclipse? The Java file must have the same name as the public class that it contains. For the main method, it looks like the main function in C, but the question is, what if I have two classes both with their own main method and is it possible to link them together? Yes, you can have two classes in the same project that have a static void main(String[] args) {...} method, but only one will be the main class for your project, and you should decide which one will be by declaring it in the manifest file. Read more about this: Multiple classes in a single Java file, each with a main method - unexpected behavior? Specifying the Main class to run in a jar file from command line How to setup Main class in manifest file in jar produced by NetBeans project In addition, can I write two classes in a single .java file? Yes, you can, as long as there is just one public class in the file and the name of that public class is the same as the name of the file: TheClass.java file package edu.home.bean;public class TheClass { class SomeClass { }}class AnotherClass {} EDIT: When you do not define public access for a class, it gets default (package-private) access, regardless of where it is declared. In the example above, SomeClass is a member class of TheClass with default access, so outside of TheClass it can only be reached from the same package as TheClass.SomeClass, while AnotherClass can be accessed from any class in the same package. This means classes in a package different from edu.home.bean can't access AnotherClass . package edu.home.control;public class ControlClass { //this line compiles fine, provided edu.home.bean.TheClass is imported private TheClass x; //compilation error, it can't be accessed from a different package private AnotherClass y; //compilation error, SomeClass is not visible outside the edu.home.bean package private SomeClass z;} I'm using Eclipse in Windows, is it the same as the Linux version? A good thing about Java is that the code is platform independent (not like C or C++), so you can port your code from Windows to Linux and vice versa, as long as the Java Virtual Machine (JVM) is installed on the operating system where you want to run/develop Java projects. For packages, my book tells me a package is sorta a directory. So to specify a package a file belongs to, do I just need to say "package my_pack"? Yes, the book is right. When you have a Java project, like MyFirstJavaProject, you set a source folder and the packages inside that folder. The packages will be subfolders of your source folder. An example: Project structure in Eclipse: + MyFirstJavaProject + src + edu.home.bean + edu.home.control + edu.home.gui + another.package + MyFirstJavaProject + src + edu + home + bean ... class files + control ... class files + gui ... class files + another + package Do I need to create a directory like /src/my_pack and place my .java file in it? You can do it manually, or let your IDE (in this case, Eclipse) do it for you; it is as easy as creating packages and classes inside your packages. Third, what is a namespace? What's its relationship with packages and classes? Java doesn't work with namespaces; instead it uses packages to organize and relate classes. You can import classes from another package (1st form), or use the class with the full package name (2nd form).
Example of 1st form: package package1;public class Package1Class {}package package2;import package1.Package1Class;public class Package2Class { private Package1Class x; public Package2Class (Package1Class x) { this.x = x; }} Example of 2nd form: package package1;public class Package1Class {}package package2;public class Package2Class { //note that there is no use of the import statement; however, every time you need //to refer to Package1Class, you must provide the full package name private package1.Package1Class x; public Package2Class (package1.Package1Class x) { this.x = x; }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10907325', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1319954/']}
jdg_82858
stackexchange
llm_judgeable_groundtruth_similarity
14593367
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table on which i need to apply sorting. I'm using knockout and jquery.tablesorter.js. I have tried custom binding also but is not helping. Without knockout my code works fine. Below is my table. <table class="tbl" id="dash" data-bind="sortTable: true"> <thead> <tr class="tag close"> <th>Type</th> <th>Title</th> </tr> </thead> <tbody class="scrollContent" data-bind="foreach: Course"> <tr> <td><i class="icon"></i></td> <td><a href="#" id="qtipselector_01" data-bind="text: Title"></a></td> <div id="TooltipContent_01" class="hidden"> <a> Test Tool Tip</a> </div> </div> </tr> </tbody> </table> Now provide the response and nothing else.
Here is an example: http://jsfiddle.net/jearles/RGsEH/ NOTE: The JS and CSS file dependencies are brought in under Managed Resources. HTML <table data-bind="sortTable: true"> <thead> <tr> <th>Type</th> <th>Title</th> </tr> </thead> <tbody data-bind="foreach: course"> <tr> <td data-bind="text: type"></td> <td data-bind="text: title"></td> </tr> </tbody> </table> JS function Course(type, title) { this.type = type; this.title = title;}var ViewModel = function() { this.course = ko.observableArray([ new Course("type", "course1"), new Course("another_type", "course2"), new Course("second_type", "course5"), new Course("third_type", "course4"), new Course("fourth_type", "course3") ]);}ko.bindingHandlers.sortTable = { init: function(element, valueAccessor) { setTimeout( function() { $(element).addClass('tablesorter'); $(element).tablesorter({widgets: ['zebra']}); }, 0); }};ko.applyBindings(new ViewModel());
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14593367', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1324180/']}
jdg_82859
stackexchange
llm_judgeable_groundtruth_similarity
48795701
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an Account class that contain a readonly variable balance.Constructor will assign default value as 0. But i want to update that value when I call CashIn method. I tried this: class Account{ private string name; private readonly double balance; public Account(string n, double b) { name = n; balance = b; } public void CashIn(double amount) { balance = balance+amount; }} But as I know, readonly variable could be assigned through constructor or field. Is it possible to update balance when i call CashIn method?If yes then Please describe me how? Now provide the response and nothing else.
There is a big difference between the readonly modifier and the concept of a "read only" property. The readonly modifier means your field can only be set in the constructor (or at its declaration). You will not be able to set that variable to anything else outside of the constructor. Given the context of your code (a bank account simulator) it seems weird that you would have a balance that never, ever changes, which leads me to believe you want a read-only property. A property is the preferred way of storing simple data that has get/set methods. For a bank account simulator it would be really bad to just have the balance exposed with a public setter, right? So I think in that context it makes more sense to have a property with a public read and a private set: public decimal Balance { get; private set; } This allows it to be read publicly, but it's only allowed to be modified inside the class (i.e. from a custom method you write like CashIn() ): public void CashIn(decimal amount){ Balance += amount;} Please note how I use decimal instead of double . Monetary values should always be represented as decimal; that is why the type exists.
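Putting the two snippets above together, a minimal sketch of the whole class could look like this (the constructor and the argument guard are illustrative additions, not part of the original snippets):

using System;

public class Account
{
    public string Name { get; }                    // assumed from the question's fields
    public decimal Balance { get; private set; }   // readable everywhere, settable only inside the class

    public Account(string name, decimal openingBalance)
    {
        Name = name;
        Balance = openingBalance;
    }

    public void CashIn(decimal amount)
    {
        if (amount <= 0) throw new ArgumentOutOfRangeException(nameof(amount)); // illustrative guard
        Balance += amount;
    }
}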
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48795701', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9361902/']}
jdg_82860
stackexchange
llm_judgeable_groundtruth_similarity
36858634
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hello angular friends, I'm working on an angular2 app (multiple actually). And I live in the Netherlands. Currently I'm formatting my currency with the following: {{someIntegerWithCentsToBeDivided / 100 | currency:'EUR':true:'1.0-2'}} This displays something like 500 to be Eurosign 5 and 501 to be Eurosign 5.01. Now we dutchies really like comma's the other way around so does anyone know how to change the . to a ,? Bonus points if someone knows how to show 5,- optionally when there is no cents.My thoughts now would be to extend the CurrencyPipe Now provide the response and nothing else.
Your problem has probably been solved some time ago, but just for reference for other Dutch developers (like myself): Create a custom Pipe: import {Pipe} from '@angular/core'; @Pipe({ name: 'currencyFormat'})export class CurrencyFormat { transform(value: number, currencySign: string = '€ ', decimalLength: number = 2, chunkDelimiter: string = '.', decimalDelimiter:string = ',', chunkLength: number = 3): string { value /= 100; let result = '\\d(?=(\\d{' + chunkLength + '})+' + (decimalLength > 0 ? '\\D' : '$') + ')'; let num = value.toFixed(Math.max(0, ~~decimalLength)); return currencySign+(decimalDelimiter ? num.replace('.', decimalDelimiter) : num).replace(new RegExp(result, 'g'), '$&' + chunkDelimiter); }} Now you can use: {{someIntegerWithCentsToBeDivided | currencyFormat}} The Pipe has already all the Dutch defaults included, but you can easily change them or use them as arguments in the template. For example: {{1234567 | currencyFormat:'$':2:' ':'.':3}} will give $12 345.67 as output.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36858634', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/905422/']}
jdg_82861
stackexchange
llm_judgeable_groundtruth_similarity
33259384
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have to do this layout: I was trying to align the views, using RelativeLayout and layout_toRightOf, layout_below, etc, but the best that I achieved was this: Here are the xml: <?xml version="1.0" encoding="utf-8"?><RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"xmlns:tools="http://schemas.android.com/tools"android:layout_width="match_parent"android:layout_height="match_parent"android:paddingBottom="@dimen/activity_vertical_margin"android:paddingLeft="@dimen/activity_horizontal_margin"android:paddingRight="@dimen/activity_horizontal_margin"android:paddingTop="@dimen/activity_vertical_margin"tools:context=".MainActivity"><TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Hello World!" /><RelativeLayout android:id="@+id/big" android:layout_width="150dp" android:layout_height="150dp" android:background="@drawable/circular" android:layout_margin="10dp" android:layout_centerInParent="true"/><RelativeLayout android:id="@+id/right" android:layout_width="50dp" android:layout_height="50dp" android:background="@drawable/circular" android:layout_toRightOf="@+id/big" android:layout_centerVertical="true"/><RelativeLayout android:id="@+id/left" android:layout_width="50dp" android:layout_height="50dp" android:background="@drawable/circular" android:layout_toLeftOf="@+id/big" android:layout_centerVertical="true"/><RelativeLayout android:id="@+id/top" android:layout_width="50dp" android:layout_height="50dp" android:background="@drawable/circular" android:layout_above="@+id/big" android:layout_centerHorizontal="true"/><RelativeLayout android:id="@+id/bottom" android:layout_width="50dp" android:layout_height="50dp" android:background="@drawable/circular" android:layout_below="@+id/big" android:layout_centerHorizontal="true"/><RelativeLayout android:id="@+id/northeast" android:layout_width="50dp" android:layout_height="50dp" android:background="@drawable/circular" android:layout_toRightOf="@+id/big" android:layout_alignTop="@+id/top"/><RelativeLayout android:id="@+id/northwest" android:layout_width="50dp" android:layout_height="50dp" android:background="@drawable/circular" android:layout_toLeftOf="@+id/big" android:layout_alignTop="@+id/top"/><RelativeLayout android:id="@+id/southeast" android:layout_width="50dp" android:layout_height="50dp" android:background="@drawable/circular" android:layout_toRightOf="@+id/big" android:layout_below="@+id/big"/> <RelativeLayout android:id="@+id/southwest" android:layout_width="50dp" android:layout_height="50dp" android:background="@drawable/circular" android:layout_toLeftOf="@+id/big" android:layout_below="@+id/big"/></RelativeLayout> I'm trying to avoid using margin on the little circles, because the diagonal circles have to be aligned exactly to the center, in comparison with the top/bottom/right/left circles. How can I do that? Now provide the response and nothing else.
I show you another approach. import android.content.Context;import android.graphics.Canvas;import android.graphics.Color;import android.graphics.Paint;import android.util.AttributeSet;import android.view.View;public class CircleMenu extends View {private Paint mainPaint;private Paint secondPaint;private Paint textPaint;private int radius_main =130;private int menuRadialButtonsCount =7;private int menuInnerPadding = 40;private int radialCircleRadius = 60;private int textPadding = 25;private double startAngle = - Math.PI/2f;;public CircleMenu(Context context) { super(context);}public CircleMenu(Context context, AttributeSet attrs) { super(context, attrs);}public CircleMenu(Context context, AttributeSet attrs, int defStyleAttr) { super(context, attrs, defStyleAttr);}@Overrideprotected void onFinishInflate() { super.onFinishInflate(); mainPaint = new Paint(); mainPaint.setColor(Color.BLUE); secondPaint = new Paint(); secondPaint.setColor(Color.DKGRAY); textPaint = new Paint(); textPaint.setColor(Color.BLACK);}@Overrideprotected void onDraw(Canvas canvas) { super.onDraw(canvas); int centerX = canvas.getWidth()/2 ; int centerY= canvas.getHeight()/2; canvas.drawCircle(centerX,centerY,radius_main,mainPaint); for(int i=0;i<menuRadialButtonsCount;i++){ double angle =0; if(i==0){ angle = startAngle; }else{ angle = startAngle+(i * ((2 * Math.PI) / menuRadialButtonsCount)); } int x = (int) (centerX + Math.cos(angle)*(radius_main+menuInnerPadding+radialCircleRadius)); int y = (int) (centerY + Math.sin(angle)*(radius_main+menuInnerPadding+radialCircleRadius)); canvas.drawCircle(x,y,radialCircleRadius,secondPaint); float tW = textPaint.measureText("Text "+i); canvas.drawText("Text "+i,x-tW/2,y+radialCircleRadius+textPadding,textPaint); } }} You can extend this class, add methods to set dimmensions from resources, controlling numer of circles, their size, paddings, onTouch, shadows, colors .... 
<your.package.CircleMenu android:layout_width="match_parent" android:layout_height="match_parent"/> Updated version: import android.content.Context;import android.graphics.Canvas;import android.graphics.Color;import android.graphics.Paint;import android.util.AttributeSet;import android.view.MotionEvent;import android.view.View;import java.util.ArrayList;public class CircleMenu extends View {public static interface IMenuListener{ public void onMenuClick(MenuCircle item);}public static class MenuCircle{ private int x,y,radius; public int id; public String text;}private Paint mainPaint;private Paint secondPaint;private Paint textPaint;private int radius_main =130;private int menuInnerPadding = 40;private int radialCircleRadius = 60;private int textPadding = 25;private double startAngle = - Math.PI/2f;private ArrayList<MenuCircle> elements;private IMenuListener listener;public void setListener(IMenuListener listener){ this.listener = listener;}public void clear(){ elements.clear(); listener=null;}public CircleMenu(Context context) { super(context); init();}public CircleMenu(Context context, AttributeSet attrs) { super(context, attrs); init();}public CircleMenu(Context context, AttributeSet attrs, int defStyleAttr) { super(context, attrs, defStyleAttr); init();}private void init(){ elements = new ArrayList<>();}public void addMenuItem(String text,int id){ MenuCircle item = new MenuCircle(); item.id = id; item.text=text; elements.add(item);}@Overrideprotected void onFinishInflate() { super.onFinishInflate(); mainPaint = new Paint(); mainPaint.setColor(Color.BLUE); secondPaint = new Paint(); secondPaint.setColor(Color.DKGRAY); textPaint = new Paint(); textPaint.setColor(Color.BLACK);}@Overrideprotected void onDraw(Canvas canvas) { super.onDraw(canvas); int centerX = canvas.getWidth()/2 ; int centerY= canvas.getHeight()/2; canvas.drawCircle(centerX,centerY,radius_main,mainPaint); for(int i=0;i<elements.size();i++){ double angle =0; if(i==0){ angle = startAngle; }else{ angle = startAngle+(i * ((2 * Math.PI) / elements.size())); } elements.get(i).x = (int) (centerX + Math.cos(angle)*(radius_main+menuInnerPadding+radialCircleRadius)); elements.get(i).y = (int) (centerY + Math.sin(angle)*(radius_main+menuInnerPadding+radialCircleRadius)); canvas.drawCircle( elements.get(i).x,elements.get(i).y,radialCircleRadius,secondPaint); float tW = textPaint.measureText(elements.get(i).text); canvas.drawText(elements.get(i).text,elements.get(i).x-tW/2,elements.get(i).y+radialCircleRadius+textPadding,textPaint); }}@Overridepublic boolean onTouchEvent(MotionEvent event) { if(event.getAction()==MotionEvent.ACTION_DOWN){ for(MenuCircle mc : elements){ double distance = Math.hypot(event.getX()-mc.x,event.getY()-mc.y); if(distance<= radialCircleRadius){ //touched if(listener!=null) listener.onMenuClick(mc); return true; } } } return super.onTouchEvent(event);}@Overrideprotected void onDetachedFromWindow() { super.onDetachedFromWindow();}} In Fragment: CircleMenu cm = (CircleMenu) view.findViewById(R.id.c_menu); cm.addMenuItem("one",1); cm.addMenuItem("two",2); cm.addMenuItem("three",3); cm.addMenuItem("ten",10); cm.addMenuItem("oh oh",156); cm.addMenuItem("exit",134); cm.setListener(new CircleMenu.IMenuListener() { @Override public void onMenuClick(CircleMenu.MenuCircle item) { Toast.makeText(getActivity(),item.text+" "+item.id,Toast.LENGTH_LONG).show(); } });
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33259384', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2370826/']}
jdg_82862
stackexchange
llm_judgeable_groundtruth_similarity
280762
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Bug introduced in 13.1, and fixed in 13.2. I'm trying the following locally (13.1.0 for Mac) and getting incorrect result Quit[];Integrate[1/(x + 1)^2 Exp[-t 1/(x + 1)^2], {x, 0, \[Infinity]}] /. t -> 3. (* 0.0575796 *) This answer is wrong.I get the correct answer 0.504344 when running same expression on Wolfram Cloud (13.2.0). Is this a bug in 13.1, corrupted local state, or something else? Because of this, I can't rerun the notebook from this post anymore, any workarounds appreciated. Now provide the response and nothing else.
$Version(* "13.1.0 for Mac OS X x86 (64-bit) (June 16, 2022)" *)Clear["Global`*"]g[x_] = 1/(x + 1)^2; The incorrect result is Integrate[1/(x + 1)^2 Exp[-t 1/(x + 1)^2], {x, 0, ∞}](* E^(2 - t)/(-1 + E^2) *) To get the correct result, do a change of variables IntegrateChangeVariables[ Inactive[Integrate][1/(x + 1)^2 Exp[-t 1/(x + 1)^2], {x, 0, ∞}], u, u == x + 1] % // Activate(* (Sqrt[π] Erf[Sqrt[t]])/(2 Sqrt[t]) *) Compare with, WolframAlpha["Integrate[1/(x+1)^2 Exp[-t 1/(x+1)^2], {x, 0, ∞}]", \{{"Input", 1}, "Content"}]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/280762', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/217/']}
jdg_82863
stackexchange
llm_judgeable_groundtruth_similarity
1284977
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to prove that if $X$ is compactly generated and $Y$ is T2 (Hausdorff) and locally compact then $X\times Y$ is compactly generated. First it is clear that since both $X$ and $Y$ are T2 then $X\times Y$ is also T2 and so the first condition for "compactly generated" is fulfilled. For the next condition, we need to show that: $$\forall B\subseteq X\times Y,\,\left[B\in Open(X\times Y)\Longleftrightarrow B\cap K\in Open(K)\forall K\in Compact(X\times Y) \right]$$ So let $B\subseteq X\times Y$ be given. The direction $\Longrightarrow$ follows immediately due to the definition of the subspace topology, so there is nothing to prove. For the $\Longleftarrow$ direction, assume $B\cap K\in Open(K)\forall K\in Compact(X\times Y)$, and the goal is to show $B\in Open(X\times Y)$. My idea was to take an arbitrary point $(x,y)\in B$ and try to find $U\times V \in Open(X)\times Open(Y)$ such that $(x,y)\in U\times V \subseteq B$. Because $Y$ is locally compact, $\exists K_y \in Compact(Y)$ such that $\exists V_y \in Open(Y)$ such that $y\in V_y \subseteq K_y$. However, now I get stuck, because I don't know which compact set $K_x$ of $X$ to find so that $x\in K_x$ and $(x,y)\in K_x \times K_y$. Because there is no obvious choice for a compact set, I'm not sure how to employ the input data. Now provide the response and nothing else.
A space $X$ is a $k$-space if it has the final topology with respect to all maps from compact Hausdorff spaces to it; in other words, a set $A\subseteq X$ is closed precisely when $t^{-1}(A)$ is closed in $K$ for every map $t:K\to X$ where $K$ is compact Hausdorff. Usually, a compactly generated space has the property that a subset is closed if it intersects every compact subspace in a closed set, so every $k$-space is compactly generated. For Hausdorff spaces, though, both definitions are the same. It is useful to know that a space is a $k$-space (without $T_2$) precisely if it's a quotient of a disjoint union of compact Hausdorff spaces. This also implies that a space in which every point has a compact Hausdorff neighborhood, and this includes locally compact Hausdorff spaces, is a $k$-space. So in your problem you have a Hausdorff $k$-space $X$ with a quotient map $p:\bigsqcup_i K_i\to X$, and you have a locally compact Hausdorff space $Y$ with a quotient map $q:\bigsqcup_j L_j\to Y$, where all $K_i$ and $L_j$ are compact Hausdorff. This gives you a surjective map $p\times q:\bigsqcup_{i,j}K_i\times L_j\to X\times Y$, and the goal is to show that $p\times q$ is a quotient map. But we can factor this map as $(p\times 1_Y)(1_{\bigsqcup K_i}\times q)$, and each of these maps is a quotient map, being the product of a quotient map with the identity map on a locally compact space.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1284977', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/61151/']}
jdg_82864
stackexchange
llm_judgeable_groundtruth_similarity
7148419
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What's the difference between UserDict , dict and ABC and which one is recommended? The docs seem to deprecate UserDict ? Also it seems UserDict's update() would use my setitem method whereas dict doesn't? Which methods are really essential to override given I want custom setitem and getitem function? With ABC s I'd have to implement absolutely all methods since it provides no default implementation? I want to make a dict that does two things: intern() all keys and values store some of the values in an SQLite database So which of UserDict , dict and ABC would best allow me to do this? Now provide the response and nothing else.
If you want a custom collection that actually holds the data, subclass dict . This is especially useful if you want to extend the interface (e.g., add methods). None of the built-in methods will call your custom __getitem__ / __setitem__ , though. If you need total control over these, create a custom class that implements the collections.MutableMapping abstract base class instead. The ABC does not provide a means to store the actual data, only an interface with default implementations for some methods. These default implementations will, however, call your custom __getitem__ and __setitem__ . You will have to use an internal dict to hold the data, and implement all abstract methods: __len__ , __iter__ , __getitem__ , __setitem__ , and __delitem__ . The class UserDict from the collections module (in Python 2, the module is called UserDict as well) is a wrapper around an internal dict , implementing the MutableMapping ABC. If you want to customize the behavior of a dict , this implementation could be a starting point. In summary: MutableMapping defines the interface. Subclass this to create something that acts like a dict . It's totally up to you if and how you store the data. UserDict is an implementation of MutableMapping using an internal "real" dict as storage. If you want a dict-like storage collection but override some methods exposed by dict , this might be a good starting point for you. But make sure to read the code to know how the basic methods are implemented, so that you are consistent when overriding a method. dict is "the real thing". Subclass this if you want to extend the interface. Overriding methods to do custom things might be dangerous, as there are usually multiple ways of accessing the data, and you could end up with an inconsistent API.
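As a rough sketch of the MutableMapping route described above, tailored to the question's goal of interning keys (Python 3 names; the SQLite part is omitted and the class name is made up):

import sys
from collections.abc import MutableMapping

class InterningDict(MutableMapping):
    """Dict-like mapping that interns string keys and values on insertion."""
    def __init__(self):
        self._data = {}                  # the internal dict that actually holds the data

    def __setitem__(self, key, value):
        key = sys.intern(key)            # MutableMapping's default update()/setdefault() will call this
        if isinstance(value, str):
            value = sys.intern(value)
        self._data[key] = value

    def __getitem__(self, key):
        return self._data[key]

    def __delitem__(self, key):
        del self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

d = InterningDict()
d.update(abc="xyz")                      # goes through the custom __setitem__
print(dict(d))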
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7148419', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/815443/']}
jdg_82865
stackexchange
llm_judgeable_groundtruth_similarity
200623
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to build my own network attached storage based on FreeBSD+ZFS+standard components, but there are strange performance issues. The hardware specs are: AMD Athlon II X2 240e processor ASUS M4A78LT-M LE mainboard 2GiB Kingston ECC DDR3 (two sticks) Intel Pro/1000 CT PCIe network adapter 5x Western Digital Caviar Green 1.5TB I created a RAID-Z2 zpool from all disks. I installed FreeBSD 8.1 on that zpool following the tutorial . The SATA controllers are running in AHCI mode. Output of zpool status: pool: zrootstate: ONLINEscrub: none requestedconfig: NAME STATE READ WRITE CKSUM zroot ONLINE 0 0 0 raidz2 ONLINE 0 0 0 gptid/7ef815fc-eab6-11df-8ea4-001b2163266d ONLINE 0 0 0 gptid/80344432-eab6-11df-8ea4-001b2163266d ONLINE 0 0 0 gptid/81741ad9-eab6-11df-8ea4-001b2163266d ONLINE 0 0 0 gptid/824af5cb-eab6-11df-8ea4-001b2163266d ONLINE 0 0 0 gptid/82f98a65-eab6-11df-8ea4-001b2163266d ONLINE 0 0 0 The problem is that write performance on the pool is very very bad (<10 MB/s) and every application that is accessing the disk is unresponsive every few seconds when writing. It seems like writing is fine until the ZFS ark cache is full and then ZFS stalls the entire system I/O till it's finished writing that data. Also I'm getting kmem_malloc to small kernel panics. I've already tried to put vm.kmem_size="1500M"vm.kmem_size_max="1500M" into /boot/loader.conf, but it doesn't help. Does anyone know what's going on here? Am I really not having enough memory for ZFS to handle this RAID-Z2? Now provide the response and nothing else.
I suspect you are probably running into the issue that I'm going to call the "512 byte sectors are not 4K sectors" issue. Anyway, google up on gnop, 4K sector, WD Green and I suspect you'll find the fix. I first learned of it on this site , and it was very informative about that issue, as well as a variety of other tunings for FreeBSD and ZFS. Good luck! Edit: to quote from the linked site: ... Finally, I came across references to problems with Western Digital’s 1.5 TB (WE15EADS) Green drives that I am using. The drives have a 4KB physical sector but report 512 Bytes to the BIOS. So performance drops off on really big writes because zfs on FreeBSD sends 4KB of data to the drive as 8 separate writes of 512 bytes, which requires the firmware in the drive to increase its work load by an estimated factor of 60 (1st 512 Bytes - write 4KB, 2nd 512 Bytes, read 4K, write 4K, ..., 8th 512 Bytes, read 4K, write 4K -- so 4KB of writes become 4KB write + (4KB read + 4KB write)X(4KB/512Bytes - 1) = 60. The drives built in 32 MB cache helps until it fills and the zfs arc kicks in and then the arc begins to fill. So all in all, no big deal right? Actually it is a very big deal if you are writing files to zfs that are larger than your arc plus the size of the buffer on the drive. ...
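For reference, the usual gnop workaround looks roughly like the following; the device names and pool name are assumptions for illustration, and the exact options should be double-checked against the gnop(8) man page:

# create a 4K-sector "nop" provider on top of one raw disk (repeat per disk)
gnop create -S 4096 /dev/ada0
# build the pool on the .nop devices so ZFS records the 4K alignment (ashift=12)
zpool create tank raidz2 ada0.nop ada1.nop ada2.nop ada3.nop ada4.nop
# the alignment is fixed at pool creation, so it survives destroying the .nop
# providers and re-importing the pool on the raw disks later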
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/200623', 'https://serverfault.com', 'https://serverfault.com/users/34296/']}
jdg_82866
stackexchange
llm_judgeable_groundtruth_similarity
177513
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a grep-like utility that will enable me to do grep searches with logic operators. I want to be able to nest and combine the logical constructs freely. For example, stuff like this should be possible: grep (term1 && term2) || (term1 && (term3 xor term4)) * I realize this can be done with vanilla grep and additional bash scripting, but my goal here is to avoid having to do that. Now provide the response and nothing else.
There are a lot of ways to use grep with logical operators. Using multiple -e options matches anything that matches any of the patterns, giving the OR operation. Example: grep -e pattern1 -e pattern2 filename In extended regular expressions ( grep -E ), you can use | to combine multiple patterns with the OR operation. Example: grep -E 'pattern1|pattern2' filename grep -v can simulate the NOT operation. There is no AND operator in grep , but you can brute-force simulate AND by using multiple patterns with | . Example: grep -E 'pattern1.*pattern2|pattern2.*pattern1' filename The above example will match all the lines that contain both pattern1 and pattern2 in either order. This gets very ugly if there are more patterns to combine.
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/177513', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/-1/']}
jdg_82867
stackexchange
llm_judgeable_groundtruth_similarity
277070
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Ryan says in his book "Introduction to Tensor Products of Banach Spaces"(pg. 17) that for Banach spaces $X$ and $Y$, $X\otimes Y$ equipped with projective norm is not complete unless $X$ and $Y$ are finite dimensional. First I want the example of this. Second, is there any sources about the proof of this statement? Thanks. Now provide the response and nothing else.
The projective tensor product $\ell_1\widehat{\otimes}X$ is naturally isometrically isomorphic to the $\ell_1$-sum of countably many copies of $X$. The uncompleted tensor product $\ell_1 \odot X$ is then the linear span of elements of the form $(\xi_n x)$, where $(\xi_n)$ is in $\ell_1$ under this identification, which is hardly complete as there exist infinite convergent series. For example, take a linearly independent sequence $(x_n)_{n=1}^\infty$ of unit vectors in $X$ and consider $(n^{-2}x_n)_{n=1}^\infty$; it does not belong to (the image of) $\ell_1\odot X$. In general the proof goes along the same lines -- it uses the possibility of (non-unique) representation of elements of the projective tensor product as infinite series of simple tensors. You then have to show that if $X$ and $Y$ are infinite-dimensional then there is an infinite series that cannot be truncated to a finite one.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/277070', 'https://mathoverflow.net', 'https://mathoverflow.net/users/112538/']}
jdg_82868
stackexchange
llm_judgeable_groundtruth_similarity
3677228
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wanted to traverse node by node on a web page by maintaining sequence. e.g. Below is basic DOM : <BODY> <DIV id ='1'> Test 1 </DIV> <DIV id='2'> Details about <SPAN id='3'> Execution </SPAN> </DIV> </BODY> As per above example I want traverse each node by node i.e. 1st Traversal : <BODY>2nd Traversal : <DIV id ='1'>3rd Traversal : <DIV id='2'>4rd Traversal : <SPAN id='3'> My motive is to loop over all the nodes available on current page and visit each node one by one saying simply nextnode(), while traversing not looking in to parent and child relations. Exepcted is, it should visit each node by following sequence. So my statement will be like this : startnode //consider this is start nodeWhile ( startnode!=null ) { // will process on startnode startnode= startnode->nextnode(); // something like that to visit each node} Is any one knows about this, how to achieve this using jquery(preferably) or javascript,please share their references. Thanks -Pravin Now provide the response and nothing else.
There's always the standard Crockford walk the dom method. Example: http://jsfiddle.net/FJeaY/ var walk_the_DOM = function walk(node, func) { func(node); node = node.firstChild; while (node) { walk(node, func); node = node.nextSibling; }};walk_the_DOM(document.body, function(node) { if(node.nodeType == 1) alert(node.id); // alert if we have a type 1 node}); Specific walk_the_DOM code example copied from here: http://snipplr.com/view/19815/walking-the-dom/ EDIT: Text nodes have nodeType = 3 , so you can add that to your if() statement if those are desired as well. walk_the_DOM(document.body, function(node) { if(node.nodeType == 1 || node.nodeType == 3) alert(node.id); // ID will be undefined if it is a text node});
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3677228', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/373142/']}
jdg_82869
stackexchange
llm_judgeable_groundtruth_similarity
1538617
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working at a web application in Python/Twisted. I want the user to be able to download a very big file (> 100 Mb). I don't want to load all the file in memory (of the server), of course. server side I have this idea: ...request.setHeader('Content-Type', 'text/plain')fp = open(fileName, 'rb')try: r = None while r != '': r = fp.read(1024) request.write(r)finally: fp.close() request.finish() I expected this to work, but I have problems:I'm testing with FF... It seems the browser make me wait until the file is completed downloaded, and then I have the open/save dialog box. I expected the dialog box immediately, and then the progress bar in action... Maybe I have to add something in the Http header... Something like the size of the file? Now provide the response and nothing else.
Two big problems with the sample code you posted are that it is non-cooperative and it loads the entire file into memory before sending it. while r != '': r = fp.read(1024) request.write(r) Remember that Twisted uses cooperative multitasking to achieve any sort of concurrency. So the first problem with this snippet is that it is a while loop over the contents of an entire file (which you say is large). This means the entire file will be read into memory and written to the response before anything else can happen in the process. In this case, it happens that " anything " also includes pushing the bytes from the in-memory buffer onto the network, so your code will also hold the entire file in memory at once and only start to get rid of it when this loop completes. So, as a general rule, you shouldn't write code for use in a Twisted-based application that uses a loop like this to do a big job. Instead, you need to do each small piece of the big job in a way that will cooperate with the event loop. For sending a file over the network, the best way to approach this is with producers and consumers . These are two related APIs for moving large amounts of data around using buffer-empty events to do it efficiently and without wasting unreasonable amounts of memory. You can find some documentation of these APIs here: http://twistedmatrix.com/projects/core/documentation/howto/producers.html Fortunately, for this very common case, there is also a producer written already that you can use, rather than implementing your own: http://twistedmatrix.com/documents/current/api/twisted.protocols.basic.FileSender.html You probably want to use it sort of like this: from twisted.protocols.basic import FileSenderfrom twisted.python.log import errfrom twisted.web.server import NOT_DONE_YETclass Something(Resource): ... def render_GET(self, request): request.setHeader('Content-Type', 'text/plain') fp = open(fileName, 'rb') d = FileSender().beginFileTransfer(fp, request) def cbFinished(ignored): fp.close() request.finish() d.addErrback(err).addCallback(cbFinished) return NOT_DONE_YET You can read more about NOT_DONE_YET and other related ideas the "Twisted Web in 60 Seconds" series on my blog, http://jcalderone.livejournal.com/50562.html (see the "asynchronous responses" entries in particular).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1538617', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_82870
stackexchange
llm_judgeable_groundtruth_similarity
15367265
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two numeric fields to collect data from users. Need to validate it using codeigniter form validation class. Conditions: First field can be zero Second field cannot be zero First field should not be equal to second field Second field should be greater than first field Currently I use $this->form_validation->set_rules('first_field', 'First Field','trim|required|is_natural'); $this->form_validation->set_rules('second_field', 'Second Field','trim|required|is_natural_no_zero'); But, how to validate for 3rd and 4th condition mentioned above? Thanks in advance. Now provide the response and nothing else.
Thanks dm03514. I got it working by the below callback function. $this->form_validation->set_rules('first_field', 'First Field', 'trim|required|is_natural');$this->form_validation->set_rules('second_field', 'Second Field', 'trim|required|is_natural_no_zero|callback_check_equal_less['.$this->input->post('first_field').']'); and the callback function is: function check_equal_less($second_field,$first_field) { if ($second_field <= $first_field) { $this->form_validation->set_message('check_equal_less', 'The First &amp;/or Second fields have errors.'); return false; } return true; } Everything seems to be working fine now :)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15367265', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1413260/']}
jdg_82871
stackexchange
llm_judgeable_groundtruth_similarity
79685
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The question is contained in the title; I mean the standard axioms ZFC. The wiki link: Riemann hypothesis . There are finite algorithms allowing one to decide if there are non-trivial zeroes of the $\zeta$ -function in the domains whose union exhausts the whole strip $0<\Re z<1$ , but this does not seem to be the obstacle for undecidability. Are there other arguments? Now provide the response and nothing else.
I do not know anything about zero-finding algorithms for $\zeta$, so I will make only one small remark which doesn't require such knowledge: If the Riemann Hypothesis is false, then it is provably false (in ZFC, or any similar system). This is because Robin's theorem tells us that the Riemann hypothesis is equivalent to the assertion that, for every natural $n \geq 5041$, the sum of the divisors of $n$ is less than $e^{\gamma} n \log{\log{n}}$; since there are programs which calculate this latter quantity to arbitrary precision, and thus can verify whether this inequality holds for any given $n$, we find that the Riemann hypothesis is a $\Pi_1$ statement: it is equivalent to the assertion that some computer program never outputs "NO" on any input. (Although not familiar with the proofs of Robin's theorem, etc., I assume they can be carried out in ZFC, and thus establish the relevant equivalence within ZFC.). There may be more direct ways to establish that the Riemann hypothesis is a $\Pi_1$ statement, such as by knowledge of algorithms which enumerate to arbitrary precision the zeros of $\zeta$, but at any rate, there is this one. Accordingly, if the Riemann hypothesis is false, then the relevant computer program does output "NO" on some input, from which it would follow that ZFC proves that that computer program outputs "NO" on that input, and thus ZFC would prove the Riemann hypothesis to be false. The possibility still remains, however, as far as I know, that the Riemann hypothesis may be true but unprovable in ZFC.
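For reference, the criterion used above can be stated compactly as
\[
  \text{RH} \iff \sigma(n) < e^{\gamma}\, n \log\log n \quad \text{for all } n \ge 5041,
\]
where $\sigma(n) = \sum_{d \mid n} d$ is the sum-of-divisors function and $\gamma$ is the Euler-Mascheroni constant; checking the inequality for any single $n$ only requires computing both sides to sufficient precision.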
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/79685', 'https://mathoverflow.net', 'https://mathoverflow.net/users/18934/']}
jdg_82872
stackexchange
llm_judgeable_groundtruth_similarity
11879217
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have defined a name for each of the constraint for the multiple tables that I have created in Oracle SQL. The problem is that to drop a constraint for the column of a particular table I need to know the name that I have supplied for each constraints, which I have forgotten. How do I list out all the names of constraints that I have specified for each column of a table? Is there any SQL statement for doing so? Now provide the response and nothing else.
You need to query the data dictionary , specifically the USER_CONS_COLUMNS view to see the table columns and corresponding constraints: SELECT * FROM user_cons_columns WHERE table_name = '<your table name>'; FYI, unless you specifically created your table with a lower case name (using double quotes) then the table name will be defaulted to upper case so ensure it is so in your query. If you then wish to see more information about the constraint itself query the USER_CONSTRAINTS view: SELECT * FROM user_constraints WHERE table_name = '<your table name>' AND constraint_name = '<your constraint name>'; If the table is held in a schema that is not your default schema then you might need to replace the views with: all_cons_columns and all_constraints adding to the where clause: AND owner = '<schema owner of the table>'
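A convenient variant is to join the two views so each column appears next to its constraint's name and type; replace the table name with yours (and add an owner filter if you switch to the ALL_ views):

SELECT cols.column_name,
       cons.constraint_name,
       cons.constraint_type
  FROM user_constraints  cons
  JOIN user_cons_columns cols
    ON cons.constraint_name = cols.constraint_name
 WHERE cons.table_name = 'YOUR_TABLE'
 ORDER BY cons.constraint_name, cols.position;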
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/11879217', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/935108/']}
jdg_82873
stackexchange
llm_judgeable_groundtruth_similarity
22832
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the following R code: example <- function(n) { X <- 1:n Y <- rep(1,n) return(lm(Y~X))}#(2.13.0, i386-pc-mingw32)summary(example(7)) #R^2 = .1963summary(example(62)) #R^2 = .4529summary(example(4540)) #R^2 = .7832summary(example(104))) #R^2 = 0#I did a search for n 6:10000, the result for R^2 is NaN for#n = 2, 4, 16, 64, 256, 1024, 2085 (not a typo), 4096, 6175 (not a typo), and 8340 (not a typo) Looking at http://svn.r-project.org/R/trunk/src/appl/dqrls.f ) did not help me understand what is going on, because I do not know Fortran. In another question it was answered that floating point machine tolerance errors were are to blame for coefficients for X that are close to, but not quite 0. $R^2$ is greater when the value for coef(example(n))["X"] is closer to 0. But... Why is there an $R^2$ value at all? What (specifically) is determining it? Why the seemingly orderly progression of NaN results? Why the violations of that progression? What of this is 'expected' behavior? Now provide the response and nothing else.
As Ben Bolker says, the answer to this question can be found in the code for summary.lm() . Here's the header: function (object, correlation = FALSE, symbolic.cor = FALSE, ...) { So, let x <- 1:1000; y <- rep(1,1000); z <- lm(y ~ x) and then take a look at this slightly modified extract: p <- z$rank rdf <- z$df.residual Qr <- stats:::qr.lm(z) n <- NROW(Qr$qr) r <- z$residuals f <- z$fitted.values w <- z$weights if (is.null(w)) { mss <- sum((f - mean(f))^2) rss <- sum(r^2) } ans <- z[c("call", "terms")] if (p != attr(z$terms, "intercept")) { df.int <- 1L ans$r.squared <- mss/(mss + rss) ans$adj.r.squared <- 1 - (1 - ans$r.squared) * ((n - df.int)/rdf) } Notice that ans\$r.squared is $0.4998923$... To answer a question with a question: what do we draw from this? :) I believe the answer lies in how R handles floating point numbers. I think that mss and rss are the sums of very small (squared) rounding errors, hence the reason $R^2$ is about 0.5. As for the progression, I suspect this has to do with the number of values that it takes for the +/- approximations to cancel out to 0 (for both mss and rss , as 0/0 is likely the source of these NaN values). I don't know why the values differ from a 2^(1:k) progression, though. Update 1: Here is a nice thread from R-help addressing some of the reasons that underflow warnings are not addressed in R. In addition, this SO Q&A has a number of interesting posts and useful links regarding underflow, higher precision arithmetic, etc.
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/22832', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/196/']}
jdg_82874
stackexchange
llm_judgeable_groundtruth_similarity
31029320
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a very large JSON-like file, but it is not using proper JSON syntax: the object keys are not quoted. I'd like to write a script to fix the file, so that I can load it with json.loads . I need to match all words followed by a colon and replace them with the quoted word. I think the regex is \w+\s*: and that I should use re.sub , but I'm not exactly sure how to do it. How can I take the following input and get the given output? # In{abc : "xyz", cde : {}, fgh : ["hfz"]}# Out{"abc" : "xyz", "cde" : {}, "fgh" : ["hfz"]}# In{ a: "b", b: { c: "d", d: [] }, e: "f"}# Out{ "a": "b", "b": { "c": "d", "d": [] }, "e": "f"} Now provide the response and nothing else.
Rather than a potentially fragile regex solution, you can take advantage of the fact that while your log file isn't valid JSON, it is valid YAML . Using the PyYAML library, you can load it into a Python data structure and then write it back out as valid JSON: import jsonimport yamlwith open("original.log") as f: data = yaml.load(f)with open("jsonified.log", "w") as f: json.dump(data, f)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31029320', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3248481/']}
jdg_82875
stackexchange
llm_judgeable_groundtruth_similarity
25784727
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to get the list of all sub-directories of a folder, including the nested ones. Therefore, I would like to have a list<Folder> myList that will be filled by a recursive method. void myMethod(?) { list<Folder> listFolders = ... ; foreach (Folder curFolder in listFolders) { myList.add(curFolder); } } Is it possible? Now provide the response and nothing else.
Declare a function like this List<Folder> ListFolders(Folder initialFolder){ var folders = new List<Folder>(); folders.Add(initialFolder); foreach (var f in initialFolder.GetSubFolders()) { folders.AddRange(ListFolders(f)); } return folders;} The Folder class is left to you as an exercise; it will have to contain the name of the current folder, and a method to get its subfolders (hint: look into the System.IO.Directory class)
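Following the hint about System.IO.Directory, one possible shape for that exercise, working directly with path strings instead of a Folder class, is sketched below (method and class names are made up):

using System.Collections.Generic;
using System.IO;

public static class FolderWalker
{
    // Returns the root folder plus every nested sub-directory underneath it.
    public static List<string> ListFolders(string root)
    {
        var folders = new List<string> { root };
        foreach (var sub in Directory.GetDirectories(root))
        {
            folders.AddRange(ListFolders(sub));
        }
        return folders;
    }
}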
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25784727', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3165438/']}
jdg_82876
stackexchange
llm_judgeable_groundtruth_similarity
46866478
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to copy a file from a container to my host windows machine with this command docker cp my_container:/folder c:\anotherfolder the docker console returns copying between containers is not supported Why? Client: Version: 17.05.0-ce API version: 1.29 Go version: go1.7.5 Git commit: 89658be Built: Fri May 5 15:36:11 2017 OS/Arch: windows/amd64Server: Version: 17.05.0-ce API version: 1.29 (minimum version 1.12) Go version: go1.7.5 Git commit: 89658be Built: Thu May 4 21:43:09 2017 OS/Arch: linux/amd64 Experimental: false Now provide the response and nothing else.
I think it's because docker thinks c is a container name. You can use a relative path rather than an absolute path. So if you are in C:/ you can just: docker cp my_container:/folder anotherfolder Also notice that there are a few differences between running commands in Windows cmd and in Git Bash. 1. You are using Git Bash: you can escape the backslash in the Git Bash console on Windows: WlaDo@DESKTOP-RBBRJOD MINGW64 ~$ docker cp eb19fc21889c:/data c:\test copying between containers is not supported <--- we got this error, which points to the implementation for which I added a link below WlaDo@DESKTOP-RBBRJOD MINGW64 ~$ docker cp eb19fc21889c:/data c:\\test <--- no errors here --> WlaDo@DESKTOP-RBBRJOD MINGW64 ~$ ls c:\\test <--- data from container --> 2. You are using Windows cmd: From Windows cmd the approach you have should work: C:\Users\WlaDo>docker cp eb19fc21889c:/data c:\test <--- no errors here --> C:\Users\WlaDo> dir c:\test <--- data from container --> Copying between containers is not implemented and throws the error. For more info check the documentation about docker cp A colon (:) is used as a delimiter between CONTAINER and its path. You can also use : when specifying paths to a SRC_PATH or DEST_PATH on a local machine, for example file:name.txt. If you use a : in a local machine path, you must be explicit with a relative or absolute path, for example: `/path/to/file:name.txt` or `./file:name.txt`
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46866478', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7531001/']}
jdg_82877
stackexchange
llm_judgeable_groundtruth_similarity
5856709
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the difference between the functions liftM and mapM? Now provide the response and nothing else.
First, the types differ: liftM :: (Monad m) => (a -> b) -> m a -> m b mapM :: (Monad m) => (a -> m b) -> [a] -> m [b] liftM lifts a function of type a -> b to a monadic counterpart. mapM applies a function which yields a monadic value to a list of values, yielding a list of results embedded in the monad. Examples: > liftM (map toUpper) getLine Hallo "HALLO" > :t mapM return "monad" mapM return "monad" :: (Monad m) => m [Char] ... note that map and mapM differ! E.g. > map (\x -> [x+1]) [1,2,3] [[2],[3],[4]] > mapM (\x -> [x+1]) [1,2,3] [[2,3,4]]
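A small runnable sketch contrasting the two (the definitions in the comments are the usual textbook ones, not copied from the library source):

import Control.Monad (liftM)

-- liftM f m   is essentially   fmap f m   (or: m >>= return . f)
-- mapM f      behaves like     sequence . map f

main :: IO ()
main = do
  shout <- liftM (map succ) getLine                    -- transform the result of ONE action
  putStrLn shout
  xs <- mapM (\n -> return (n + 1)) [1, 2, 3 :: Int]   -- run an action PER element, collect results
  print xs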
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5856709', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/457261/']}
jdg_82878
stackexchange
llm_judgeable_groundtruth_similarity
78826
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do you use gen_udp in Erlang to do multicasting ? I know its in the code, there is just no documentation behind it. Sending out data is obvious and simple. I was wondering on how to add memberships. Not only adding memberships at start-up, but adding memberships while running would be useful too. Now provide the response and nothing else.
Here is example code on how to listen in on Bonjour / Zeroconf traffic. -module(zcclient).-export([open/2,start/0]).-export([stop/1,receiver/0]).open(Addr,Port) -> {ok,S} = gen_udp:open(Port,[{reuseaddr,true}, {ip,Addr}, {multicast_ttl,4}, {multicast_loop,false}, binary]), inet:setopts(S,[{add_membership,{Addr,{0,0,0,0}}}]), S.close(S) -> gen_udp:close(S).start() -> S=open({224,0,0,251},5353), Pid=spawn(?MODULE,receiver,[]), gen_udp:controlling_process(S,Pid), {S,Pid}.stop({S,Pid}) -> close(S), Pid ! stop.receiver() -> receive {udp, _Socket, IP, InPortNo, Packet} -> io:format("~n~nFrom: ~p~nPort: ~p~nData: ~p~n",[IP,InPortNo,inet_dns:decode(Packet)]), receiver(); stop -> true; AnythingElse -> io:format("RECEIVED: ~p~n",[AnythingElse]), receiver() end.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/78826', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10432/']}
jdg_82879
stackexchange
llm_judgeable_groundtruth_similarity
207167
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose $V_\mathbb{F_V}$ and $W_\mathbb{F_W}$ are two vector spaces over fields $\mathbb{F}_V$ and $\mathbb{F}_W$. Then a homomorphism of these vector spaces consists of maps $f:V\rightarrow W$ and $f_\mathbb{F}:\mathbb{F}_V\rightarrow \mathbb{F}_W$ satisfying: $$f\left(a.v+b.u\right)= f_{\mathbb{F}}\left(a\right)f\left(v\right)+f_{\mathbb{F}}\left(b\right)f\left(u\right)$$for all $a,b \in \mathbb{F}_V$ and $v,u \in V$.With such morphisms we can talk about the category of all vector spaces over arbitrary fields. But, I have never seen such examples. Why is that? Is it because the category of fields is not very welcoming of a place. Now provide the response and nothing else.
There is indeed a category of all vector spaces with morphisms as you describe. It has many interesting properties – first of all, notice that it comes equipped with a projection to the category of all fields, $p : \textbf{Vect} \to \textbf{Fld}$. Let $\textbf{Vect}(K)$ be the non-full subcategory of $\textbf{Vect}$ of objects $V$ such that $p V = K$ and morphisms $f$ such that $p f = \textrm{id}_K$. This is easily seen to be isomorphic to the usual category of $K$-vector spaces. Given any field homomorphism $\phi : K \to L$, we get a functor $\phi^\sharp : \textbf{Vect}(L) \to \textbf{Vect}(K)$, and it is not hard to check that the operation $(-)^\sharp$ is strictly functorial. The category $\textbf{Vect}$ is then seen to be the Grothendieck construction applied to $(-)^\sharp$, and therefore $p : \textbf{Vect} \to \textbf{Fld}$ is a Grothendieck fibration. Why is this interesting? Well, it gives us a way to consider vector spaces over all fields on equal grounds, and the universal property of some familiar constructions is best expressed in terms of this Grothendieck fibration. For example, if $W$ is a $L$-vector space, then $\phi^\sharp W$ is a $K$-vector space $V$ and a morphism $f : V \to W$ lying over $\phi : K \to L$ such that for all morphisms $h : U \to W$ in $\textbf{Vect}$ lying over $\chi : F \to L$ in $\textbf{Fld}$ and all factorisations $\chi = \phi \circ \psi$, there is a unique morphism $g : U \to W$ lying over $\psi : F \to K$ such that $h = g \circ f$. If you draw the diagram you will see this is basically the universal property of a pullback, but two different categories are involved here. On the other hand, $\phi^\sharp : \textbf{Vect}(L) \to \textbf{Vect}(K)$ has a well-known left adjoint $\phi_\sharp : \textbf{Vect}(K) \to \textbf{Vect}(L)$, namely the tensor product $\phi_\sharp V = L \otimes_K V$. This makes $p : \textbf{Vect} \to \textbf{Fld}$ into a Grothendieck bifibration, and again this means $\phi_\sharp V$ can be described in terms of a universal property. You are quite right that $\textbf{Fld}$ isn't a category with particularly good properties – and unfortunately that means $\textbf{Vect}$ also lacks the same properties. For example, there is no terminal object in either $\textbf{Fld}$ or $\textbf{Vect}$. In this respect, the category $\textbf{Mod}$ of all modules over all commutative rings is more well-behaved. $\textbf{Mod}$ has some remarkable properties – in addition to being a Grothendieck bifibration, it (or rather $\textbf{Mod}^\textrm{op}$) is what is known as a stack for the faithfully flat topology on $\textbf{CRing}^\textrm{op}$. This is studied at length in SGA1 and is the motivating example behind the whole theory of fibred categories in general.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/207167', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33845/']}
jdg_82880
stackexchange
llm_judgeable_groundtruth_similarity
64415
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We had a web server that had the wrong SSL certs installed yet the browser was able to find another trusted path to a root and supply the page properly over HTTPS. We also have proprietary software clients that use SSL outside of browsers that connect to the same server yet these failed SSL validation. My question is what is the mechanism in browser logic that allows it to find alternate paths to a trusted root? Now provide the response and nothing else.
In SSL/TLS the server is supposed to show its certificate as part of a chain. Theoretically, the server should make sure that the sent chain is correct, and the client is "morally entitled" to reject the connection if the exact chain sent by the server fails to validate. However, clients are allowed to make extra efforts; if they can validate the certificate with another chain, then it is OK to continue. Therefore, one cannot formally blame some clients for failing to validate the server's certificate if the server sends a flawed chain. When a client tries to build an alternate chain, it will use some or all of the following methods: The client may have locally installed intermediate CA certificates (in Windows system, the "intermediate CA" store is meant for that). The certificates sent by the server may be reused (but maybe not in the same order). The client may have access to some LDAP server or equivalent in which some certificates may be looked up by subject name (this may happen in Windows / Active Directory setups). The initial plan for X.509 was that there should be a worldwide Directory, like some sort of generalized DNS, but it never happened. The client may try to download other intermediate CA certificates by following the URL found in the Authority Information Access extensions of the certificates. This last method is what will usually work. A well-issued certificate will contain an AIA extension with an URL pointing to the certificate for the CA which issued it. That certificate may itself contain an AIA extension pointing to the upper-level CA, and so on, up to the root. As long as all URL are publicly accessible, network is up and running, and no sysadmin got into his pathetic excuse for a mind to block that mechanism (I have seen it done, unfortunately), then the chain will be successfully rebuilt. Modern Windows systems do that automatically. But remember that SSL clients are allowed not to behave that way. An important point to notice is that URL-following relies on HTTP. A Web browser knows HTTP; that's kind of a core feature of a browser. However, a stand-alone application that uses some SSL library may not be as able to issue random HTTP request, or even simply willing to do so. Some SSL libraries provide the protocol support but rely on the caller to actually provide network connectivity (the caller opens and operates the TCP connection, the library being purely on the computational side of things). Depending on how the application is designed and its SSL implementation, you may or may not succeed at pushing the extra certificates where necessary. It is much better if the server is properly installed in the first place.
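To make the AIA-following mechanism concrete, here is a rough Python sketch (not what any particular browser actually does) of downloading the issuing CA certificate named in a certificate's Authority Information Access extension. It assumes the third-party `cryptography` and `requests` packages; the function name `fetch_issuer` and the DER-then-PEM fallback are just illustrative choices.

```python
import requests
from cryptography import x509
from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID

def fetch_issuer(cert):
    """Try to download the issuing CA certificate listed in the AIA extension."""
    try:
        aia = cert.extensions.get_extension_for_oid(
            ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
    except x509.ExtensionNotFound:
        return None  # no AIA extension, nothing to follow
    for desc in aia:
        if desc.access_method == AuthorityInformationAccessOID.CA_ISSUERS:
            data = requests.get(desc.access_location.value, timeout=10).content
            try:
                # most CA-issuers URLs serve DER; fall back to PEM just in case
                return x509.load_der_x509_certificate(data)
            except ValueError:
                return x509.load_pem_x509_certificate(data)
    return None
```

A real client would repeat this until it reaches a certificate it already trusts, and would still have to verify signatures, validity periods and revocation on the rebuilt chain.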
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/64415', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/53029/']}
jdg_82881
stackexchange
llm_judgeable_groundtruth_similarity
14114411
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am facing an issue with URLs, I want to be able to convert titles that could contain anything and have them stripped of all special characters so they only have letters and numbers and of course I would like to replace spaces with hyphens. How would this be done? I've heard a lot about regular expressions (regex) being used... Now provide the response and nothing else.
This should do what you're looking for: function clean($string) { $string = str_replace(' ', '-', $string); // Replaces all spaces with hyphens. return preg_replace('/[^A-Za-z0-9\-]/', '', $string); // Removes special chars.} Usage: echo clean('a|"bc!@£de^&$f g'); Will output: abcdef-g Edit: Hey, just a quick question, how can I prevent multiple hyphens from being next to each other? and have them replaced with just 1? function clean($string) { $string = str_replace(' ', '-', $string); // Replaces all spaces with hyphens. $string = preg_replace('/[^A-Za-z0-9\-]/', '', $string); // Removes special chars. return preg_replace('/-+/', '-', $string); // Replaces multiple hyphens with single one.}
{}
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/14114411', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2321939/']}
jdg_82882
stackexchange
llm_judgeable_groundtruth_similarity
2675655
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can't figure out what the "| m -> w"-part means in a class definition like this: class (Monoid w, Monad m) => MonadWriter w m | m -> w What additional information does this add to the class definition? Now provide the response and nothing else.
The vertical bar is a syntactic separator with no meaning itself, used to introduce functional dependencies on a multi-parameter type class, so technically | means nothing whatsoever. Presumably | was chosen as a visual analogy to the same symbol's use for pattern guards on functions. As far as the functional dependencies themselves go, just read x -> y as "type parameter x uniquely determines type parameter y ", meaning that knowing x alone is sufficient to find the correct instance. There are further complications, especially with other GHC extensions enabled, but most of the time you don't need to worry about them. In contrast, without functional dependencies, selecting an instance of a multi-parameter type class requires that all type parameters be known. A typical example has one parameter being a container type and a second being the type of the contained elements; functions such as "concatenate" or "isEmpty" need not mention the element type, only the container type, and thus would be ambiguous. Also, if you're writing your own type classes, consider instead using type families , which are a new approach that may eventually replace functional dependencies. Roughly speaking, type families let you write overloaded type definitions inside an instance, not just overloaded functions.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2675655', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/317984/']}
jdg_82883
stackexchange
llm_judgeable_groundtruth_similarity
57938
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Mathematica 10 documentation was updated for FindInstance adding support for regions. In my use case, I'm trying to sample points in a set of disks: region = DiscretizeRegion@RegionUnion@Table[Disk[RandomReal[4, {2}], RandomReal[1]], {10}]FindInstance[{x, y} ∈ region, {x, y}, Reals, 2] // N However the above code fails and generates the following error: FindInstance::elemc: "Unable to resolve the domain or region membership condition {x,y}∈" What's going wrong here? Now provide the response and nothing else.
There are already good answers, but I'm going to improve the performance, generalize to any region in any dimensions and make the function more convenient. The main idea is to use DirichletDistribution (the uniform distribution on a simplex, e.g. triangle or tetrahedron). This idea was implemented by PlatoManiac and me in the related question obtaining random element of a set given by multiple inequalities (there is also Metropolis algorithm, but it is not suitable here). The code is relatively short: RegionDistribution /: Random`DistributionVector[RegionDistribution[reg_MeshRegion], n_Integer, prec_?Positive] := Module[{d = RegionDimension@reg, cells, measures, s, m}, cells = Developer`ToPackedArray@MeshPrimitives[reg, d][[All, 1]]; s = RandomVariate[DirichletDistribution@ConstantArray[1, d + 1], n]; measures = PropertyValue[{reg, d}, MeshCellMeasure]; m = RandomVariate[#, n] &@EmpiricalDistribution[measures -> Range@Length@cells]; #[[All, 1]] (1 - Total[s, {2}]) + Total[#[[All, 2 ;;]] s, {2}] &@ cells[[m]]] Examples Random disks (2D in 2D) SeedRandom[0];region = DiscretizeRegion@RegionUnion@Table[Disk[RandomReal[4, {2}], RandomReal[1]], {10}];pts = RandomVariate[RegionDistribution[region], 10000]; // AbsoluteTimingListPlot[pts, AspectRatio -> Automatic] {0.004473, Null} Precise test pts = RandomVariate[RegionDistribution[region], 200000000]; // AbsoluteTiming {85.835022, Null} Histogram3D[pts, 50, "PDF", BoxRatios -> {Automatic, Automatic, 1.5}] It is fast for $2\cdot10^8$ points and the distribution is really flat! Intervals (1D in 1D) region = DiscretizeRegion[Interval[{0, 1}, {2, 4}]];pts = RandomVariate[RegionDistribution[region], 100000]; // AbsoluteTimingHistogram[Flatten@pts] {0.062430, Null} Random circles (1D in 2D) region = DiscretizeRegion@RegionUnion[Circle /@ RandomReal[10, {100, 2}]];pts = RandomVariate[RegionDistribution[region], 10000]; // AbsoluteTimingListPlot[pts, AspectRatio -> Automatic] {0.006216, Null} Balls (3D in 3D) region = DiscretizeRegion@RegionUnion[Ball[{0, 0, 0}], Ball[{1.5, 0, 0}], Ball[{3, 0, 0}]];pts = RandomVariate[RegionDistribution[region], 10000]; // AbsoluteTimingListPointPlot3D[pts, BoxRatios -> Automatic] {0.082202, Null} Surface cow disctribution (2D in 3D) region = DiscretizeGraphics@ExampleData[{"Geometry3D", "Cow"}];pts = RandomVariate[RegionDistribution[region], 2000]; // AbsoluteTimingListPointPlot3D[pts, BoxRatios -> Automatic] {0.026357, Null} Line in space (1D in 3D) region = DiscretizeGraphics@ParametricPlot3D[{Sin[2 t], Cos[3 t], Cos[5 t]}, {t, 0, 2 π}];pts = RandomVariate[RegionDistribution[region], 1000]; // AbsoluteTimingListPointPlot3D[pts, BoxRatios -> Automatic] {0.005056, Null}
{}
{'log_upvote_score': 7, 'links': ['https://mathematica.stackexchange.com/questions/57938', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/5601/']}
jdg_82884
stackexchange
llm_judgeable_groundtruth_similarity
1849
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The datasheet of the 24LC256 EEPROM states that: The SDA bus requires a pull-up resistor to VCC (typical 10 kΩ for 100 kHz, 2 kΩ for 400 kHz and 1 MHz). I thought that any resistor with a kΩ value would do the job (and it seems that my EEPROM works fine at different frequencies with a 10 kΩ resistor). My questions are: is there a correct value for pull-up resistors ? is there a law/rule to determine this value ? how do different resistance values affect the I²C data bus ? Now provide the response and nothing else.
The correct pullup resistance for the I²C bus depends on the total capacitance on the bus and the frequency you want to operate the bus at. The formula from the ATmega168 datasheet (which I believe comes from the official I²C spec) is -- $$\text{Freq}<100\text{kHz} \implies R_{\text{min}}=\frac{V_{cc}-0.4\text{V}}{3\text{mA}}, R_{\text{max}}=\frac{1000\text{ns}}{C_{\text{bus}}}$$ $$\text{Freq}>100\text{kHz} \implies R_{\text{min}}=\frac{V_{cc}-0.4\text{V}}{3\text{mA}}, R_{\text{max}}=\frac{300\text{ns}}{C_{\text{bus}}}$$ The Microchip 24LC256 specifies a maximum pin capacitance of 10pF (which is fairly typical). Count up the number of devices you have in parallel on the bus and use the formula above to calculate a range of values that will work. If you are powering off of batteries I would use values that are at the high end of the range. If there are no power limits on the power source or power dissipation issues in the ICs I would use values on the lower end of the range. I sell some kits with an I²C RTC (DS1337). I include 4K7 resistors in the kit which seems like a reasonable compromise for most users.
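As a quick worked example of the formulas above, here is a small Python sketch; the supply voltage, device count and stray-capacitance figures are made-up illustrative numbers, not recommendations.

```python
def i2c_pullup_range(vcc, bus_capacitance_pf, fast_mode=True):
    """Return (R_min, R_max) in ohms for an I2C pull-up."""
    r_min = (vcc - 0.4) / 3e-3                 # limited by the 3 mA sink-current spec
    rise_time_ns = 300 if fast_mode else 1000  # rise-time budget from the spec
    r_max = rise_time_ns * 1e-9 / (bus_capacitance_pf * 1e-12)
    return r_min, r_max

# e.g. a 5 V bus at 100 kHz with 4 devices at ~10 pF each plus ~20 pF of wiring capacitance
print(i2c_pullup_range(5.0, bus_capacitance_pf=4 * 10 + 20, fast_mode=False))
```

For these example numbers the usable window comes out to roughly 1.5 kΩ to 17 kΩ, consistent with the ballpark values quoted in the datasheet.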
{}
{'log_upvote_score': 7, 'links': ['https://electronics.stackexchange.com/questions/1849', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/595/']}
jdg_82885
stackexchange
llm_judgeable_groundtruth_similarity
41191365
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that there have been similar questions asked, but they seemed to have to do with the way datetime deals (or doesn't deal) with timezones. The setup is a little complicated, and probably not relevant to the problem, but I thought it was important to include the code as is, so a little background: I've got a dictionary of arrays. Each of these arrays represents an "attempt" by the same person, but taking place at different times. Ultimately I'm going to be looking for the earliest of these dates. This may be a bit of a roundabout solution, but I'm converting all of the dates to datetime objects, finding the earliest and then just using that index to pull out the first attempt: Here's what the code looks like to setup that array of attempt datetimes: for key in duplicates_set.keys(): attempt_dates = [datetime.strptime(attempt['Attempt Date'], "%-m-%-d-%y %-H:%M:%S") for attempt in duplicates_set[key]] Here's the format of what one of the original date strings looks like: 12-5-2016 3:27:58 PM What I'm getting back is: ValueError: '-' is a bad directive in format '%-m-%d-%y %-H:%M:%S' I assume that's referring to the dashes placed before the 'm', 'd' and 'H' because they're non-zero-padded decimals. Why is it telling me that? Now provide the response and nothing else.
%-* -- to skip padding -- is a GNU libc extension. It's not part of POSIX strftime, and thus not guaranteed to be portable to systems where your time-formatting calls aren't eventually backed by GNU's strftime C library function. The Python datetime module documentation explicitly specifies the format strings it supports, and this extension is not given. Thus, while this is supported in GNU date and GNU strftime() , it isn't available in Python datetime .
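For illustration, note that the padding flag usually isn't needed for parsing at all: the ordinary zero-padded directives accept un-padded input. A minimal sketch using the timestamp string from the question (the variable names are just for the example):

```python
from datetime import datetime

stamp = "12-5-2016 3:27:58 PM"
# %m/%d/%I happily parse "5" and "3" without leading zeros; %p handles AM/PM
parsed = datetime.strptime(stamp, "%m-%d-%Y %I:%M:%S %p")
print(parsed)  # 2016-12-05 15:27:58
```

The `-` flag only ever mattered for output formatting; for parsing, plain %m, %d, %I and %p are enough.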
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41191365', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6735589/']}
jdg_82886
stackexchange
llm_judgeable_groundtruth_similarity
17676767
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I substitute the value of a variable inside the backtick operator? script_dir = File.expand_path File.dirname(__FILE__)p `ruby -e p "$script_dir"` # this does not work Now provide the response and nothing else.
In Ruby, unlike Perl, the dollar sign indicates a global variable, not a plain regular variable to expand in a string. In a string, you need to use the #{} construct: p `ruby -e "p #{script_dir}"` An example: irb(main):011:0> str = '\'howdy\''=> "'howdy'"irb(main):012:0> `ruby -e "p #{str}"`=> "\"howdy\"\n"
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17676767', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/579689/']}
jdg_82887
stackexchange
llm_judgeable_groundtruth_similarity
33894778
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: By an accidental typo I've encountered a behavior which I cannot explain. When I have var text = "abc";text += + ""; the text variable suddenly becomes abc0 . Can someone provide any insight on this, please? Now provide the response and nothing else.
+ "" evaluates to the number 0 . This is because in order to apply the unary plus operator, "" is coerced to a number--the same as Number("") , which results in 0 if the string is empty or blank. When you then "add" ( + or in this case += ) the number 0 to the string "abc" , it is coerced to the string "0" , resulting in "abc0" . From the spec : A StringNumericLiteral that is empty or contains only white space is converted to +0.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/33894778', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1964272/']}
jdg_82888
stackexchange
llm_judgeable_groundtruth_similarity
116476
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I wonder, why we consider the notion of pseudoholomorphic curve: By definition a pseudoholomorphic curve in an almost complex manifold $X$ is a smooth map $f: C \rightarrow X$from a Riemann surface $C$ into $X$ such that $df \circ J=J \circ df$ for the respective almost complex structure $J$. Why does it make sense to look at this definition? Is it just because the Cauchy-Riemann differential equations $\frac{\partial u}{\partial x}= \frac{\partial v}{\partial y}$, $\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$are invariant under the symmetry $x \rightarrow y$, $y \rightarrow -x$ and $u \rightarrow v$, $v \rightarrow -u$ and therefore we have that this relation holds for a holomorphic map $f$ or is there any other reason? Now provide the response and nothing else.
As many have pointed out: Gromov introduced it * wrote a seminal paper utilizing it, and we continue to use it today because it's an incredibly useful tool. I've never spoken to Gromov about why he introduced it (who knows how great mathematicians come up with great ideas) but I can try to give some (probably historically false) motivations as to why someone might have come up with the notion. For instance, if Gromov hadn't discovered it, you might have come up with it as follows: (1) First, complex geometry--if you like, you can think of algebraic geometry--has a lot of rigidity. The fact that we can even give a discrete count to sub-objects (like how many curves pass through n fixed points) is special -- the question takes on a totally different nature in more flimsy geometries. Now, is there a way to relax the background of complex geometry, and still come up with a useful, fun theory? For instance, how necessary is the integrability condition on J (the complex structure) to still make sense of curve-counting? What Gromov showed is that if the complex structure is `tame' in the sense that one has a compatible symplectic form, questions about curve-counting can still have nice answers. Really, the difference between a pseudoholomorphic curve and a holomorphic curve isn't in their definitions, it's in the nature of J in the target. Relaxing the J from "integrable complex structure" to "complex structure tamed by a symplectic form" is the generalization that's happening. (1') Put another way, we already had a famous 2-out-of-3 principle recognizing the relationship between Riemannian, complex, and symplectic structures on a vector space. Studying curves on complex projective varieties take on rigidity, in some sense, because we study maps between manifolds with Kahler structure: manifolds both symplectic and complex, and further, each structure is integrable--in that the Nijenhaus tensor vanishes, and omega is closed. It's natural to ask whether we can still find interesting structure in the 2-out-of-3 world by studying manifolds whose tangential structures are compatibly Riemannian, complex, and symplectic, but which do not satisfy a global condition like integrability of J or closedness of $\omega$. And when you get rid of the integrability of J, it turns out that you can find such a structure on any symplectic manifold. (In fact, once you fix $\omega$, there's a contractible space of compatible $J$. That's why pseudoholomorphic curves can be applied widely in the symplectic world.) (2) There might be another motivation from physics. In mirror symmetry, one predicts the existence of mirror Calabi-Yau manifolds. A field theory that relies on the symplectic structure of one manifold should correspond to a field theory that relies on the complex structure of the mirror. And the correlation functions count J-holomorphic curves in the symplectic manifold. Historically though, I'm not sure if physics alone would be able to motivate the study of these field theories on just symplectic manifolds with almost-complex structure, as opposed to Calabi-Yaus. Somebody with more background could probably comment on this. *As I learned from Antoine and Dmitri, there were previous works utilizing pseudo-holomorphic curves. For instance: A.Nijenhuis, W.Wolf. Some integration problems in almost-complex and complex manifolds, Ann. Math. 77 (1963), J. Eells and S. Salamon. Twistorial construction of harmonic maps of surfaces into four-manifolds. (1985).
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/116476', 'https://mathoverflow.net', 'https://mathoverflow.net/users/29973/']}
jdg_82889
stackexchange
llm_judgeable_groundtruth_similarity
16173469
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Some javascript frameworks like Dust.js claim that they ALSO support server-side rendering (in addition to client side rendering) as well. Can someone explain how does this work? My understanding is that the JS is always executed in the browser runtime. Now provide the response and nothing else.
JavaScript can be run on servers using systems like Node.js. With regard to Dust.js, a templating engine, it can generate HTML on the server and send that content directly to the client's browser. This is typically used to avoid a flash of temporarily empty templates, caused by the browser requiring a split second to populate a view's templates via a framework like Dust.js. The downside is that the page will take slightly longer to load because more work has to be done on the server before sending data to the client. Check out this question for the pros and cons of server-side rendering. One must choose between slow post-processing (requiring the user's browser to do the work) and slow pre-processing (making the server do the work before the user sees anything).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16173469', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/127320/']}
jdg_82890
stackexchange
llm_judgeable_groundtruth_similarity
15139117
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is this autoboxing? Object ob = 8; Will the above code first wrap the int literal 8 in an Integer and then assign its reference to variable ob? Because the java language specification has nothing on this case. Now provide the response and nothing else.
Will the above code first wrap the int literal 8 in an Integer and then assign its reference to variable ob? Yes. (Or rather, it will box the int value into an Integer object, and then assign the reference to the variable ob . The fact that the integer value is a literal is irrelevant here, really. It could be a method call returning int , for example.) Because the java language specification has nothing on this case. That's not true. I mean, it doesn't explicitly deal with assigning to Object , but it works the same way as normal conversions. Section 5.1.7 of the specification deals with boxing, which would convert int to Integer ... and then assigning an Integer reference to an Object variable is a normal reference conversion.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15139117', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1800515/']}
jdg_82891
stackexchange
llm_judgeable_groundtruth_similarity
29230467
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is copy from Apple Swift documentation: As soon as all properties of the superclass have an initial value, its memory is considered fully initialized, and Phase 1 is complete. The superclass’s designated initializer now has an opportunity to customize the instance further (although it does not have to). Once the superclass’s designated initializer is finished, the subclass’s designated initializer can perform additional customization (although again, it does not have to). So basically the Phase 1 makes sure that all properties have a value and assigns that value to them. In Phase 2 these properties are further customized. And that further customization really frustrates me because I can't think of a single example in which further customazation is used. Can you give me a simple example of this initialization behaviour or provide additional explanation of Phase 1 and 2? Thanks Now provide the response and nothing else.
Given 2 classes Foo and Bar where Bar is a subclass of Foo: class Foo { var a: Int? var b: Int? init() { a = 1 }}class Bar: Foo { var c: Int? override init() { super.init() // Phase 1 // Phase 2: Additional customizations b = 2 c = 3 }} When you call Bar() it calls super.init() which the first line is to initialize the superclass which is Foo. So once Foo's properties are initialized completely, they can be set in Foo's initializer. This is represented by the a = 1 in the Foo initializer. Once that is complete, phase 2 begins which is continuing the initialization of Bar following the super.init() line. This is where you can "perform additional customizations" either on the instance of bar or on its superclass. This is represented by b = 2 and c = 3 . let x = Bar()x.a // 1x.b // 2x.c // 3
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29230467', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4071819/']}
jdg_82892
stackexchange
llm_judgeable_groundtruth_similarity
37493117
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to format Instant to String with a specific format. Based on the question here Format Instant to String , I'm doing this - DateTimeFormatter formatter = DateTimeFormatter .ofPattern("YYYY-MM-DD'T'hh:mm'Z'") .withZone(ZoneOffset.UTC);// Fails for current time with error 'Field DayOfYear cannot be printed as the // value 148 exceeds the maximum print width of 2'LocalDateTime .ofInstant(Instant.now(), ZoneOffset.UTC) .format(DATE_TIME_FORMATTER);// But works for smaller values of Instant LocalDateTime .ofInstant(Instant.ofEpochMilli(604800000), ZoneOffset.UTC) .format(DATE_TIME_FORMATTER)); Any suggestions on why is this happening? Thanks Now provide the response and nothing else.
Pattern YYYY-MM-DD'T'hh:mm'Z' is wrong: YYYY - week-based-year wrong: use uuuu year MM - month-of-year DD - day-of-year wrong: use dd day-of-month hh - clock-hour-of-am-pm (1-12)       without AM/PM you probably want HH hour-of-day (0-23) mm - minute-of-hour It's weird, because you even referenced a link that had the right pattern characters. Unless of course you thought upper- vs lower-case didn't matter, but if so, how did you think MM (month) vs mm (minute) worked? You might want to actually read the documentation .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37493117', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/372887/']}
jdg_82893
stackexchange
llm_judgeable_groundtruth_similarity
1235970
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I have two dice with $6$ sides each, what is the probability of me rolling atleast $9$ total? I think I'm correct when thinking that the probability of rolling a $9$ is $\frac{4}{36}$, that is $11.1...\%$, but how do I go from here to calculate the "at least" part? Now provide the response and nothing else.
$\begin{array}{|c|c|c|c|}\hline &\overrightarrow{ D2} & \color{blue}{1} & \color{blue}{2} & \color{blue}{3} & \color{blue}{4} & \color{blue}{5} & \color{blue}{6} \\ \hline D1\downarrow\\\hline\color{blue}{1} &&2 &3 &4 & 5 & 6 & 7\\ \hline \color{blue}{2}&& 3 & 4&5 & 6 & 7 & 8\\ \hline\color{blue}{3} &&4 &5 &6 & 7 & 8 & \color{red}{9}\\ \hline\color{blue}{4} &&5 & 6 & 7 & 8 &\color{red}{9} &\color{red}{10}\\ \hline\color{blue}{5} &&6 &7 &8 &\color{red}{9} &\color{red}{10} &\color{red}{11} \\ \hline\color{blue}{6} && 7 & 8 & \color{red}{9} &\color{red}{10} &\color{red}{11}&\color{red}{12}\\\hline\end{array}$ Every cell containing a number in red, satisfies: $(D1+D2) \ge 9$ Assuming that each die is a fair die, the probability of obtaining any number from 1-6 on each of the two dice is $\frac{1}{6}$. For example, the probability of obtaining $(D1,D2)=(1,1)$ is $\left(\frac{1}{6}\right)^2 = \left(\frac{1}{36}\right)$ Every individual outcome in the table is obtained with probability $\frac{1}{36}$ as each result is equally likely. Since there are 36 (6 $\times$ 6 table) total outcomes, the probability will be $\frac{x}{36}$ or an equivalent fraction. Try to find $x$ via a simple counting method (count how many numbers in red there are).
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1235970', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/231655/']}
jdg_82894
stackexchange
llm_judgeable_groundtruth_similarity
2435757
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In C# you can do this: foo = string.Format("{0} {1} {2} {3} ...", "aa", "bb", "cc" ...); This method Format() accepts infinite parameters, being the first one how the string should be formatted and the rest are values to be put in the string. Today I've come to a situation where I had to get a set of strings and test them, then I remembered this language functionality, but I had no clue. After a few unsuccessful web searches, I've realised it would be more prudent to just get an array, which didn't make me quite satisfied. Q: How do I make a function that accepts infinite parameters? And how do I use it ? Now provide the response and nothing else.
With the params keyword. Here is an example: public int SumThemAll(params int[] numbers) { return numbers.Sum(); } public void SumThemAllAndPrintInString(string s, params int[] numbers) { Console.WriteLine(string.Format(s, SumThemAll(numbers))); } public void MyFunction() { int result = SumThemAll(2, 3, 4, 42); SumThemAllAndPrintInString("The result is: {0}", 1, 2, 3); } The code shows various things. First of all the argument with the params keyword must always be last (and there can be only one per function). Furthermore, you can call a function that takes a params argument in two ways. The first way is illustrated in the first line of MyFunction where each number is added as a single argument. However, it can also be called with an array as is illustrated in SumThemAllAndPrintInString which calls SumThemAll with the int[] called numbers .
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2435757', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/208670/']}
jdg_82895
stackexchange
llm_judgeable_groundtruth_similarity
3827581
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Before I write my question, I want to write some thoughts. Let $M$ be a connected topological manifold such that $\pi_1(M)=\Bbb Z/3\Bbb Z$ . Then, considering its orientation $2$ -fold cover, which is connected, I can say $M$ is orientable. Now, an example of such a closed $3$ -manifold is $L(3,1)$ . Now, this type of argument can not be given if I consider $\pi_1(M)=\Bbb Z/4\Bbb Z$ to conclude $M$ is orientable. But Euler characteristic of an odd-dimensional closed manifold is always zero, so we cannot say $\Bbb Z/4\Bbb Z$ is the fundamental group of any closed connected non-orientable $3$ -manifold, as $H_1(M,\Bbb Z)$ is infinite when $M$ is closed non-orientable connected $3$ -manifold. Again this logic can not be given for $4$ -dimensional closed connected manifold. So, I am wondering if the following fact. I assume closed means compact without boundary. Does there exist closed connected $4$ -manifolds both orientable andnon-orientable type having fundamental group $\Bbb Z/4\Bbb Z$ ? Any help will be appreciated. Now provide the response and nothing else.
A non-orientable example: consider the automorphism $f : S^2 \times S^2 \to S^2 \times S^2$ given by $(x, y) \mapsto (y, -x)$, where $-$ denotes the antipodal map. This map has order $4$ and gives a free action of $\mathbb{Z}/4$ on $S^2 \times S^2$, so its quotient is a closed $4$-manifold $X$ with $\pi_1(X) \cong \mathbb{Z}/4$. Since $\chi(S^2 \times S^2) = 4$ we have $\chi(X) = 1$, so $X$ is non-orientable (for an orientable closed $4$-manifold with finite fundamental group, $b_1 = b_3 = 0$ by Poincaré duality, so $\chi = 2 + b_2 \geq 2$); alternatively, we can check that $f$ acts by $-1$ on $H^4(S^2 \times S^2)$.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3827581', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/591889/']}
jdg_82896
stackexchange
llm_judgeable_groundtruth_similarity
4754815
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way in C# to add implicit conversions to types already defined in other assemblies? For example, if I am using two different assemblies which each provide their own Vector3 struct , and use it in their APIs, it would be nice to be able to define a conversion once, and then just pass a Foo.Vector3 to a method that expects a Bar.Vector3 , and have it automatically converted. If I had the source for the libraries then I could add implicit conversion operators. Without that source, I am currently doing the conversion explicitly myself with a helper function every time. I'm hoping for a nicer way. I realize I could create my own Vector3 struct with implicit conversion operators to and from the other two structs, but this wouldn't solve all the cases where I want to directly pass an object of one (externally defined) type to a method expecting the other. Bonus question : is there anything authors of libraries that expose things like a Vector3 struct in their API should do to facilitate ease of use in this regard? Now provide the response and nothing else.
You can provide implicit conversion to-and-from a third-party type for any type that you author yourself, but you cannot add support for implicit conversion between two third-party types. You could improve the elegance of converting between the two types somewhat by adding extension methods to both Vector3 types; public static Bar.Vector3 ToBarVector3(this Foo.Vector3 foo) { return /* instance of Bar.Vector3 */}public static Foo.Vector3 ToFooVector3(this Bar.Vector3 bar) { return /* instance of Foo.Vector3 */} That's about the best you can expect to achieve.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4754815', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/177018/']}
jdg_82897
stackexchange
llm_judgeable_groundtruth_similarity
164659
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is the system $$\begin{align*}\frac{dA}{dt}&=A\left(2-\frac{A}{5000}-\frac{L}{100}\right)\\\frac{dL}{dt}&=L\left(-\frac{1}{2}+\frac{A}{10000}\right)\end{align*}$$ And this is happens when I use Mathematica : StreamPlot[{2 A (1 - .0001 A) - .01 A L, -.5 L + .0001 A L}, {A, -500, 12000}, {L, -500, 5000}] Note that I need the range of $A$ and $L$ as written above, because I need to see the equilibrium points in there, which are $x_1=(0,0),x_2=(10000,0),x_3=(5000,100)$ Why don't $x_2$ and $x_3$ have the solutions around them in saddle form? Now provide the response and nothing else.
The rescaling process was automated in a function called myStreamPlot by Rahul in this answer , which I tweaked here . To make this answer self-contained, I repeat it here. Options[myStreamPlot] = Options[StreamPlot];myStreamPlot[f_, {x_, x0_, x1_}, {y_, y0_, y1_}, opts : OptionsPattern[]] := Module[{u, v, a = OptionValue[AspectRatio]}, Show[StreamPlot[{1/(x1 - x0), a/(y1 - y0)} (f /. {x -> x0 + u (x1 - x0), y -> y0 + v/a (y1 - y0)}), {u, 0, 1}, {v, 0, a}, opts] /. Arrow[pts_] :> Arrow[({x0, y0} + {x1 - x0, (y1 - y0)/a} #) & /@ pts], PlotRange -> {{x0, x1}, {y0, y1}}]] Then it's easy to get a decent stream plot. myStreamPlot[{2 A (1 - .0001 A) - .01 A L, -.5 L + .0001 A L}, {A, -1200, 12000}, {L, -20, 200}, AxesOrigin -> {0, 0}, Axes -> True] Note that I changed the range of A and L to focus on the equilibria. I also added axes at A=0 and L=0 since they represent isoclines of the system. This is the Lotka-Volterra predator-prey model with prey density-dependence. If you're interested in it as a ecological model, then it's only meaningful in the first quadrant, and solutions that start there remain there.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/164659', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/54828/']}
jdg_82898
stackexchange
llm_judgeable_groundtruth_similarity
14763608
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to write my shell script thing.sh so that upon making it an executable and running it with the single letter ``A" like so: $ ./thing.sh A I get the output A If argument 1 is not A, I want the output Not A Here is my code so far : #!/bin/bashif [ "$1" -eq "A"]then echo "A"else echo "Not A"fi which returns, no matter what I enter, ./thing.sh: line 3: [:missing `]'Not A I am trying what I hoped would check something with one or several letters and compare it against the letter A; could someone tell me what I am missing to get this to work? Thank you Now provide the response and nothing else.
What about the shorter: #!/bin/bash[[ $1 == A ]] && echo "A" || echo "not A" ? And a beginner version (identical logic): #!/bin/bashif [[ $1 == A ]]; then echo "A"else echo "not A"fi Like Scott said, you have a syntax error (missing space). Explanations: I use boolean logic here. [[ $1 == A ]] is executed, and then if it's true, echo "A" is executed, and if it's false, echo "not A" is executed. See http://mywiki.wooledge.org/BashGuide/TestsAndConditionals [[ is a bash keyword similar to (but more powerful than) the [ command. See http://mywiki.wooledge.org/BashFAQ/031 and http://mywiki.wooledge.org/BashGuide/TestsAndConditionals Unless you're writing for POSIX sh, I recommend [[.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14763608', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1535776/']}
jdg_82899
stackexchange
llm_judgeable_groundtruth_similarity
3923089
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to change the character which has been entered on keypress, without doing it manually? For example, if I want to force uppercase letters based on some condition , it'd be nice to do the following: function onKeypressHandler(e){ if ( condition ) { e.which -= 32; }} But of course that doesn't work. NOTE: This is not an across the board uppercasing, but only specific characters. Maybe I want to say if ( e.which >= 97 && e.which <= 102 ) or if ( Wind.Direction == 'South' ) or whatever - the condition itself is not important, but the uppercasing must only apply to the current character not the entire input. I can do it by manually appending the changed character, but this is an ugly and messy way of doing it, and probably slower than it could be. function onKeypressHandler(e){ if ( condition ) { $j(this).val( $j(this).val() + String.fromCharCode( e.which - 32 ) ); return false; }} A specific flaw with this method - if selecting all input text and entering a key, if it drops into this then it doesn't remove existing content, but simply appends to the content the user wanted removed. (Would need to investigating detecting any selected text to solve that, which makes this one even uglier.) Can anyone provide a better solution? Now provide the response and nothing else.
The following will do the job. It's based on an answer I wrote to another question . Customize the transformTypedChar function to suit your needs; my example capitalizes only the letters a-g. If you need this on a textarea rather than an <input type="text"> then be aware that there are issues in IE <= 8 with line breaks that the following code doesn't handle for the sake of brevity. You can find the cross browser function for obtaining the selection within a textarea here: Is there an Internet Explorer approved substitute for selectionStart and selectionEnd? function transformTypedChar(charStr) { return /[a-g]/.test(charStr) ? charStr.toUpperCase() : charStr;}document.getElementById("your_input_id").onkeypress = function(evt) { var val = this.value; evt = evt || window.event; // Ensure we only handle printable keys, excluding enter and space var charCode = typeof evt.which == "number" ? evt.which : evt.keyCode; if (charCode && charCode > 32) { var keyChar = String.fromCharCode(charCode); // Transform typed character var mappedChar = transformTypedChar(keyChar); var start, end; if (typeof this.selectionStart == "number" && typeof this.selectionEnd == "number") { // Non-IE browsers and IE 9 start = this.selectionStart; end = this.selectionEnd; this.value = val.slice(0, start) + mappedChar + val.slice(end); // Move the caret this.selectionStart = this.selectionEnd = start + 1; } else if (document.selection && document.selection.createRange) { // For IE up to version 8 var selectionRange = document.selection.createRange(); var textInputRange = this.createTextRange(); var precedingRange = this.createTextRange(); var bookmark = selectionRange.getBookmark(); textInputRange.moveToBookmark(bookmark); precedingRange.setEndPoint("EndToStart", textInputRange); start = precedingRange.text.length; end = start + selectionRange.text.length; this.value = val.slice(0, start) + mappedChar + val.slice(end); start++; // Move the caret textInputRange = this.createTextRange(); textInputRange.collapse(true); textInputRange.move("character", start - (this.value.slice(0, start).split("\r\n").length - 1)); textInputRange.select(); } return false; }};
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3923089', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9360/']}
jdg_82900
stackexchange
llm_judgeable_groundtruth_similarity
18484939
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have constructed a single-node Hadoop environment on CentOS using the Cloudera CDH repository. When I want to copy a local file to HDFS, I used the command: sudo -u hdfs hadoop fs -put /root/MyHadoop/file1.txt / But,the result depressed me: put: '/root/MyHadoop/file1.txt': No such file or directory I'm sure this file does exist. Please help me,Thanks! Now provide the response and nothing else.
As user hdfs, do you have access rights to /root/ (on your local HDD)? Usually you don't. You must copy file1.txt to a place where the local hdfs user has read rights before trying to copy it to HDFS. Try: cp /root/MyHadoop/file1.txt /tmpchown hdfs:hdfs /tmp/file1.txt# older versions of Hadoopsudo -u hdfs hadoop fs -put /tmp/file1.txt /# newer versions of Hadoopsudo -u hdfs hdfs dfs -put /tmp/file1.txt / --- edit: Take a look at the cleaner roman-nikitchenko's answer below.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18484939', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1814631/']}
jdg_82901
stackexchange
llm_judgeable_groundtruth_similarity
3658721
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to access one site that first requires an (tomcat server) authentication and then log in with a POST request and keep that user to see the site's pages. I use Httpclient 4.0.1 The first authentication works fine but not the logon that always complains about this error: "302 Moved Temporarily" I keep cookies & I keep a context and yet nothing. Actually, it seems that the logon works, because if I write incorrect parameters or user||password, I see the login page. So I guess what doesn't work is the automatic redirection. Following my code, which always throws the IOException, 302: DefaultHttpClient httpclient = new DefaultHttpClient(); CookieStore cookieStore = new BasicCookieStore(); httpclient.getParams().setParameter( ClientPNames.COOKIE_POLICY, CookiePolicy.BROWSER_COMPATIBILITY); HttpContext context = new BasicHttpContext(); context.setAttribute(ClientContext.COOKIE_STORE, cookieStore); //ResponseHandler<String> responseHandler = new BasicResponseHandler(); Credentials testsystemCreds = new UsernamePasswordCredentials(TESTSYSTEM_USER, TESTSYSTEM_PASS); httpclient.getCredentialsProvider().setCredentials( new AuthScope(AuthScope.ANY_HOST, AuthScope.ANY_PORT), testsystemCreds); HttpPost postRequest = new HttpPost(cms + "/login"); List<NameValuePair> formparams = new ArrayList<NameValuePair>(); formparams.add(new BasicNameValuePair("pUserId", user)); formparams.add(new BasicNameValuePair("pPassword", pass)); postRequest.setEntity(new UrlEncodedFormEntity(formparams, "UTF-8")); HttpResponse response = httpclient.execute(postRequest, context); System.out.println(response); if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) throw new IOException(response.getStatusLine().toString()); HttpUriRequest currentReq = (HttpUriRequest) context.getAttribute( ExecutionContext.HTTP_REQUEST); HttpHost currentHost = (HttpHost) context.getAttribute( ExecutionContext.HTTP_TARGET_HOST); String currentUrl = currentHost.toURI() + currentReq.getURI(); System.out.println(currentUrl); HttpEntity entity = response.getEntity(); if (entity != null) { long len = entity.getContentLength(); if (len != -1 && len < 2048) { System.out.println(EntityUtils.toString(entity)); } else { // Stream content out } } Now provide the response and nothing else.
For 4.1 version: DefaultHttpClient httpclient = new DefaultHttpClient(); httpclient.setRedirectStrategy(new DefaultRedirectStrategy() { public boolean isRedirected(HttpRequest request, HttpResponse response, HttpContext context) { boolean isRedirect=false; try { isRedirect = super.isRedirected(request, response, context); } catch (ProtocolException e) { // TODO Auto-generated catch block e.printStackTrace(); } if (!isRedirect) { int responseCode = response.getStatusLine().getStatusCode(); if (responseCode == 301 || responseCode == 302) { return true; } } return isRedirect; } });
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3658721', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/341320/']}
jdg_82902
stackexchange
llm_judgeable_groundtruth_similarity
7499767
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a model like this: class FooBar(models.Model): createtime = models.DateTimeField(auto_now_add=True) lastupdatetime = models.DateTimeField(auto_now=True) I want to overwrite the two date fields for some model instances (used when migrating data). The current solution looks like this: for field in new_entry._meta.local_fields: if field.name == "lastupdatetime": field.auto_now = False elif field.name == "createtime": field.auto_now_add = Falsenew_entry.createtime = datenew_entry.lastupdatetime = datenew_entry.save()for field in new_entry._meta.local_fields: if field.name == "lastupdatetime": field.auto_now = True elif field.name == "createtime": field.auto_now_add = True Is there a better solution? Now provide the response and nothing else.
I've recently faced this situation while testing my application. I needed to "force" an expired timestamp. In my case, I did the trick by using a queryset update. Like this: # my modelclass FooBar(models.Model): title = models.CharField(max_length=255) updated_at = models.DateTimeField(auto_now=True, auto_now_add=True)# my testsfoo = FooBar.objects.get(pk=1) # force a timestamplastweek = datetime.datetime.now() - datetime.timedelta(days=7)FooBar.objects.filter(pk=foo.pk).update(updated_at=lastweek)# do the testing.
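A small reusable variant of the same trick (the helper name and fields here are illustrative, not from the question): it works because QuerySet.update() issues a plain SQL UPDATE, so the auto_now/auto_now_add handling that runs in Model.save() is skipped entirely.

```python
def backdate(instance, **timestamps):
    """Force timestamp fields on a saved model instance, bypassing auto_now."""
    type(instance).objects.filter(pk=instance.pk).update(**timestamps)
    instance.refresh_from_db()  # reload so the in-memory object matches the DB
    return instance

# usage in a test:
# backdate(foo, updated_at=datetime.datetime.now() - datetime.timedelta(days=7))
```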
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7499767', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/746522/']}
jdg_82903
stackexchange
llm_judgeable_groundtruth_similarity
48352984
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I started using TFS since VS2010. By that time I already created my own TFS server (myname.visualstudio.com). My problem is that I created a new project on the my TFS website (the one with the dashboards). Then when I go to VS2017, and connect to that new project, it asks me to map and get it as expected. Instead of just clicking "Map & Get" button, I instead clicked advanced so that I can configure everything. On the workspace configuration dialog, I noticed that VS names it as "MYPCBLABLA_1" . If I try to remove the "_1", VS says that " the workspace blablabla already exists on computer blablabla ", and does not let me use my existing workspace name. Why does it do so? Can I not use only 1 workspace? From what I understand of workspaces, it is the container of my projects, so different workspace, different set of projects. But what are they really? Additional info:I don't know if this helps but on the past, I used to format my PC many times, I'm not sure if that affects the mappings or workspace names when I use VS after reformatting. Now provide the response and nothing else.
Workspaces are maybe the least well-understood feature in TFVC. And you are right in saying they're a way to isolate different sets of files from a TFVC repository. A lot of people configure a new workspace for a specific project or set of solutions, but let's look at some of the ways workspaces can be used in detail: Hotfixes: you may need to create a hotfix for something happening now, but you have pending changes in your existing workspace. Instead of shelving these changes, performing a "Get Specific version" on the bugged version, you can also create a new workspace in which to solve this particular problem. After completing the fix you can then continue working with the other workspace without needing to do anything. Experiments: you may want to do some major refactoring, restructure source control or some other highly impactful operation. Doing this in a new (temporary) workspace helps you prevent messing up your normal work area. Reviewing other peoples changes: When performing a review on another person's changes, you may want to have a local copy so you can run, annotate and play with the other person's code. Instead of taking these changes into your own workspace, you can easily bring these into a temporary workspace, which you can safely delete afterwards. Performing a merge, while you are working on other changes: It may be the case that you're working on a new feature an already have some changes merged back to another branch when a release needs to be shipped. In order to prepare this release, without picking up changes or overwriting work in progress in your current workspace, it's often easier to perform these kinds of release activities in a temporary workspace, that way you know that the work is always done on the exact version in source control. Preventing accidental changes to important branches: By putting your production branch in a separate workspace, you can't accidentally combine changes from say Development and Main into a single check-in. Since Visual Studio often auto-selects all pending changes in the workspace, this may cause unintended changes to your master/main branch. I've written a Check-in policy to prevent these issues, but having separate workspaces is a much safer solution. Working with multiple developers on the same workstation/server: in some organisations, developers use a remote desktop to a central beefy server to do changes. To ensure each developer has his own set of files, each developer gets his/her own workspace. An alternative is to make the workspace public, which allows multiple developers to use the same workspace folder. But this often leads to all kinds of unexpected issues. Browsing an old version of the code: if you need to review/compare an older version to a new one, you can often get away with the folder diff view in Visual Studio, but if you need to do more thorough comparisons, you may want to have 2 copies of the same folder in your TFVC repo. Creating two workspaces will allow you to have two different versions of the same folder on your local disk. Prepare a special version for merges or labels: You can merge and label the workspace version of a set of files. You can create a workspace and then use Get Specific Version to fetch specific versions of specific files, these can all come from different changeset versions. Once you're satisfied, you can perform the label or merge or branch action to store this specific workspace version configuration on the server. 
As you can see, Workspaces allow you to do parallel development on one machine, isolate changes etc. Be creative As you can see, workspaces are a very powerful concept. Usable for a lot of operations. But you need to understand the concept thoroughly. Many developers don't understand exactly what workspaces are and how they work, they're missing out of some of the most powerful concepts of TFVC. Consolidating and cleaning up In your case you now have two workspaces. In order to consolidate these (if you want to), you can unmap the folders from your _1 folder and then map these same folders in your original workspace. You can also delete the _1 workspace from the TFS Server and then update the mappings of the original workspace. Remember that workspaces are stored on your local machine, but that the TFS server also has a registry of who mapped which TFVC folders to which workstations. So simply deleting files from your local disk is not sufficient. You need to save these changes to the TFS server (this happens automatically after performing a get operation after changing the mappings). To check which workspaces are registered to your workstation on the TFS server, use: tf vc workspaces /computer:YOURWORKSTATIONNAME Then delete old workspaces with // DELETE the local workspacetf vc workspace /delete:WORKSPACENAME// DELETE the workspace registration on the TFS servertf vc workspaces /remove:WORKSPACENAME
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48352984', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9243183/']}
jdg_82904
stackexchange
llm_judgeable_groundtruth_similarity
2582477
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was working through a problem and was wondering if there was an easier way of finding the basis of the left null space of a given matrix. (For a simple example) Suppose we have a matrix $A = \begin{bmatrix} 1 & 2 & 4 \\ 2 & 4 & 8 \end{bmatrix}$ when reduced we can write it as $\text{rref}(A) = \begin{bmatrix} 1 & 2 & 4 \\ 0 & 0 & 0 \end{bmatrix} $ from rref(A) it is clear that: Basis for $C(A) = \left\{ \begin{pmatrix} 1 \\ 2\end{pmatrix} \right\}$ Basis for $C(A^T) = \left\{ \begin{pmatrix} 1,&2, & 4 \end{pmatrix} \right\}$ Basis for $N(A) = \left\{ \begin{pmatrix} -2 \\ 1 \\ 0\end{pmatrix} , \begin{pmatrix} -4 \\ 0 \\ 1 \end{pmatrix}\right\}$ Now my question is am I able to deduce the left null space just from rref(A)? Else, I would take the transpose of A, row reduce it and then find the left null space that way but I was wondering if there is an easier way? Now provide the response and nothing else.
You can’t really get the left null space directly from just the rref, but if you first augment the matrix with the appropriately-sized identity and then row-reduce it, the row vectors to the right of the zero rows of the rref constitute a basis for the left null space. Using your example, row-reduce $$\left[\begin{array}{ccc|cc}1&2&4 & 1&0 \\ 2&4&8 & 0&1 \end{array}\right] \to \left[\begin{array}{ccc|rc} 1&2&4 & 1 &0 \\ 0&0&0 & -2&1 \end{array}\right].$$ The left null space is thus $\operatorname{span}\{(-2,1)\}$. As for why this works, see this question . I’ll repeat a caveat from there: this method doesn’t often give you a “nice” basis, in that the vectors are often rather large multiples of what you would’ve computed by the more usual method of applying Gaussian elimination to the transpose.
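If you want to verify this mechanically, here is a minimal sketch using SymPy (my choice of tool, not something from the question); it reproduces both the augmented-identity trick and the usual transpose route:

from sympy import Matrix, eye

A = Matrix([[1, 2, 4],
            [2, 4, 8]])

# The "augment with I and row-reduce" trick from above:
R, _ = A.row_join(eye(2)).rref()
print(R)   # the zero row of the A-block carries a left-null-space vector
           # in its last two columns (a scalar multiple of (-2, 1))

# Sanity check via the usual route (null space of the transpose):
print(A.T.nullspace())   # [Matrix([[-2], [1]])]

Note that SymPy's rref normalizes the augmented columns as well, which is why the vector shows up as a multiple of (-2, 1) rather than literally (-2, 1).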
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2582477', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/298824/']}
jdg_82905
stackexchange
llm_judgeable_groundtruth_similarity
35509
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a list of (x,y) values in Mathematica for various discrete x values, as in intensities={{26, 10}, {27, 289}, {28, 90}, {29, 1079}, {30, 30}, {33, 10}, {39, 179}, {40, 40}, {41, 2269}} I would now like to insert into this list explicit zeros for each discrete value of x that has a non-existing y value, as in intensities2={{0,0},{1,0},{2,0},....,{26,10},{27,289},{28,90},...,{41,2269}} Out of these I would then like to extract just the y values to be able to do a discrete Fourier transform on them. Any thoughts on how I could do this most efficiently? Cheers & many thanks for any advice! Tom Now provide the response and nothing else.
What about this! Normal@SparseArray[{#1} -> #2 & @@@ intensities] Be careful that it works if the list intensities2 starts with {1,x} not {0,x} and input list intensities has no entries like {0,x} . If you persist on starting intensities2 with {0,x} and given that input list intensities will have increasing x values then try the following With[{zero = First@#}, If[zero[[1]] === 0, ({zero[[2]]}~Join~SparseArray[{#1} -> #2 & @@@ Rest[#]]), {0}~Join~SparseArray[{#1} -> #2 & @@@ #] ] ] &@intensities; // AbsoluteTiming In order to check efficiency you will need to create bigger example data. You can do so using the following. samplesize = 10^6; intensities =Sort@Transpose@(RandomSample[#,samplesize] &/@(Range[0, #] & /@ {10 samplesize,10 samplesize}));
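If the end goal is just the zero-padded y values for the Fourier transform, here is a rough alternative sketch (the name assoc is mine, and I am assuming the x range 0..41 from the question):

assoc = Association[Rule @@@ intensities];
intensities2 = Table[{x, Lookup[assoc, x, 0]}, {x, 0, 41}];
ys = intensities2[[All, 2]];
Fourier[N[ys]]

Lookup with a third argument supplies the 0 default for every x that has no entry, so the list starts at {0, 0} as requested.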
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/35509', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/4680/']}
jdg_82906
stackexchange
llm_judgeable_groundtruth_similarity
4640613
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I thought that this would be an easy problem, and it seems really obvious to me that the answer would be: $$ \frac{\binom{13}{1}^4 \binom{48}{3}}{\binom{52}{7}} $$ but for some reason this comes out to be around $3.69$ which is not a valid probability. So then I thought of a different approach where instead of $\binom{48}{3}$ I used $48 * 47 * 46$ but this is obviously even larger. I'm so confused why this approach doesn't work because I've done similar problems before in this manner. I'm assuming there has to be something that I am missing. I then did some searching on here and came to an answer I'm confident in of $0.5696$ using Inclusion-Exclusion and counting the bad hands so my solution looked like: $$ 1 - \frac{4\binom{39}{7} - 3!\binom{26}{7} + 4\binom{13}{7}}{\binom{52}{7}} = 0.5696...$$ I understand how the Inclusion-Exclusion Principle works but I'm wondering what events we're unioning here that it's taken into account. I'm also wondering if there is a more straightforward way of calculating the number of good hands where there is at least 1 card from each suit out of the 7 dealt. Now provide the response and nothing else.
We're trying to find a strictly increasing set of integers $a_i$ , with $a_1 \ge 2$ , so for all $k \ge 1$ $$\sum_{i=1}^{k}a_i \, \mid \, \sum_{i=1}^{k}a_i^2 \tag{1}\label{eq1A}$$ Unfortunately, I don't know of any way to finish what you've tried. Instead, as you surmised, there's an inductive solution. For $k = 1$ , \eqref{eq1A} is true since $a_1 \mid a_1^2$ . Assume that, for some $m \ge 1$ , \eqref{eq1A} is true is for $k = m$ . Set $$j = \sum_{i=1}^{m}a_i \tag{2}\label{eq2A}$$ Thus, by \eqref{eq1A}, since $a_1 \ge 2$ , we have $$\sum_{i=1}^{m}a_i^2 = jn, \; \; n \ge 2 \tag{3}\label{eq3A}$$ Let $$a_{m+1} = j(n + j - 1) \tag{4}\label{eq4A}$$ Since $n \ge 2$ and $j \ge 2$ , then \eqref{eq4A} and \eqref{eq2A} give that $a_{m+1} \gt j \; \; \to \; \; a_{m+1} \gt a_{m}$ . Using $k = m + 1$ in \eqref{eq1A}, the LHS becomes $$\sum_{i=1}^{m+1}a_i = j + j(n + j - 1) = j(n + j) \tag{5}\label{eq5A}$$ The RHS of \eqref{eq1A} is then $$\begin{equation}\begin{aligned}\sum_{i=1}^{m+1}a_i^2 & = jn + [j(n + j - 1)]^2 \\& = j(n + j\,[(n+j) - 1]^2) \\& = j(n + j\,[(n+j)^2 - 2(n+j) + 1]) \\& = j(n + j\,[n+j][n+j-2] + j) \\& = j(n+j)[1 + j(n+j-2)]\end{aligned}\end{equation}\tag{6}\label{eq6A}$$ From \eqref{eq5A}, the LHS divides the RHS of \eqref{eq1A}, so it's true also for $k = m + 1$ . Thus, by induction, we have \eqref{eq1A} is true for all $k \ge 1$ .
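As a quick sanity check of the construction (not part of the proof), here is a small Python sketch that builds the sequence from the recurrence $a_{m+1} = j(n + j - 1)$ starting at $a_1 = 2$ and verifies the divisibility for every prefix:

terms = [2]
for _ in range(10):
    j = sum(terms)                          # j = a_1 + ... + a_m
    n = sum(t * t for t in terms) // j      # exact by the induction hypothesis
    terms.append(j * (n + j - 1))           # a_{m+1} as in (4)

for k in range(1, len(terms) + 1):
    assert sum(t * t for t in terms[:k]) % sum(terms[:k]) == 0

print(terms[:4])   # [2, 6, 96, 19968]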
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4640613', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1136890/']}
jdg_82907
stackexchange
llm_judgeable_groundtruth_similarity
12543076
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am developing on a board based on an STM32 and working with USART interrupts. I have configured USART1 and enabled the receive interrupt. The problem is that the receive interrupt is never detected. Why? Now provide the response and nothing else.
Such a question is difficult to answer without knowing which specific processor you are using, which board you are using, and/or which compiler you are using. But in an attempt to be helpful, here's my code.

Here's my GPIO and NVIC initialization code using Sourcery CodeBench Lite with an STM32F4 processor mounted on a custom board.

GPIO_InitTypeDef GPIO_InitStructure;
NVIC_InitTypeDef NVIC_InitStructure;

GPIO_InitStructure.GPIO_OType = GPIO_OType_PP;
GPIO_InitStructure.GPIO_PuPd = GPIO_PuPd_UP;
GPIO_InitStructure.GPIO_Mode = GPIO_Mode_AF;
GPIO_InitStructure.GPIO_Speed = GPIO_Speed_100MHz;

RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOB, ENABLE);
RCC_APB1PeriphClockCmd(RCC_APB1Periph_USART3, ENABLE);

GPIO_PinAFConfig(GPIOB, GPIO_PinSource10, GPIO_AF_USART3);
GPIO_PinAFConfig(GPIOB, GPIO_PinSource11, GPIO_AF_USART3);

GPIO_InitStructure.GPIO_Pin = GPIO_Pin_10 | GPIO_Pin_11;
GPIO_Init(GPIOB, &GPIO_InitStructure);

// Enable the USART RX Interrupt
USART_ITConfig(USART3, USART_IT_RXNE, ENABLE);

NVIC_InitStructure.NVIC_IRQChannel = USART3_IRQn;
NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 0;
NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0;
NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
NVIC_Init(&NVIC_InitStructure);

Of course your settings will vary depending on your processor, board and interrupt priority.

Here's my interrupt handler code. In my development environment, this handler is declared in my startup assembly file as a weak reference to Default_Handler ...

Default_Handler:
b .

/* ... */
.word USART3_IRQHandler
/* ... */

.weak USART3_IRQHandler
.thumb_set USART3_IRQHandler,Default_Handler

... so as long as I provide a new declaration and implementation of this interrupt handler, the weak reference will be replaced. Here's what my code looks like.

// Interrupt handler declaration
void USART3_IRQHandler();

If you are using C++ you will need to declare it as follows:

// Interrupt handler declaration in C/C++
#ifdef __cplusplus
extern "C" {
#endif
void USART3_IRQHandler();
#ifdef __cplusplus
}
#endif

And here's the interrupt handler implementation.

// Interrupt handler implementation
void USART3_IRQHandler()
{
    // handle interrupt
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12543076', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1680340/']}
jdg_82908
stackexchange
llm_judgeable_groundtruth_similarity
6487918
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tested right shift with Visual Studio, Ubuntu's GCC, Intel compiler, MinGW. All shift in the sign bit. I guess Xcode's GCC does the same. I know that the behavior is implementation specific, but it looks like that all major desktop/server compilers implement arithmetic shift. Are there any widely used compiler that doesn't shift in the sign bit? Thank you. Now provide the response and nothing else.
C runs on a lot of different architectures. I mean a lot of different architectures. You can get C code running on an embedded DSP and on a Cray supercomputer.

Most of the "implementation-defined" parts of the C standard that people take for granted really only break on obscure architectures. For example, there are DSPs and Cray supercomputers where CHAR_BIT is something huge like 32 or 64. So if you try out your code on an x86, and maybe if you're generous a PowerPC, ARM, or SPARC, you're not likely to run into any of the really weird cases. And that's okay. Most code these days will always run on a byte-oriented architecture with two's-complement integers and arithmetic shifts. I have no doubt that any new CPU architectures in the foreseeable future will be the same.

But let's look at the two most common representations for integers, two's complement and ones' complement:

switch ((-1) >> 1) {
case 0:   // also covers -0; possibly sign-magnitude
    puts("Hello, ones' complement world!");
    break;
case -1:
    puts("Hello, two's complement world!");
    break;
default:
    puts("Hello, computer without arithmetic shift");
    break;
}

Don't sweat it. Just stick to / when you want to divide, and >> when you need to shift. Even bad compilers are good at optimizing these operations. (And remember that x/2 != x>>1 if x is negative, unless you're on a ones' complement machine, which is almost certainly not the case.)

The standard does guarantee that if (int) x is not negative, then (int) x >> n == (unsigned) x >> n, so there is not a lot of room for a compiler to do something completely unexpected.
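To see the one practical difference the parenthetical mentions, here is a tiny example; it assumes a typical two's-complement machine whose compiler implements >> on negative values as an arithmetic shift:

#include <stdio.h>

int main(void)
{
    int x = -7;
    /* Division truncates toward zero, the arithmetic shift floors. */
    printf("%d %d\n", x / 2, x >> 1);   /* typically prints: -3 -4 */
    return 0;
}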
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6487918', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/638231/']}
jdg_82909
stackexchange
llm_judgeable_groundtruth_similarity
45258216
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following interfaces:

export interface Meta {
    counter: number;
    limit: number;
    offset: number;
    total: number;
}

export interface Api<T> {
    [key: string]: T[];
    meta: Meta; // error
}

Currently, I'm receiving the following error: Property 'meta' of type 'Meta' is not assignable to string index type 'T[]'. After searching a bit, I found this statement in the TS docs: While string index signatures are a powerful way to describe the "dictionary" pattern, they also enforce that all properties match their return type. This is because a string index declares that obj.property is also available as obj["property"]. Does it mean that when I have a string index signature, I can't have any other property that doesn't match this type? Actually, I can get rid of this error by declaring the interface like this:

export interface Api<T> {
    [key: string]: any; // used any here
    meta: Meta;
}

Doing this, I completely lose the ability of type inference. Is there a way to do this without this ugly workaround? Now provide the response and nothing else.
You can use an intersection of two interfaces:

interface Api<T> {
    [key: string]: T[];
}

type ApiType<T> = Api<T> & {
    meta: Meta;
}

declare let x: ApiType<string>;

let a = x.meta;               // type of `a` is `Meta`
let b = x["meta"];            // type of `b` is `Meta`
let p = x["someotherindex"];  // type of `p` is `string[]`
let q = x.someotherindex;     // type of `q` is `string[]`
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/45258216', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6847234/']}
jdg_82910
stackexchange
llm_judgeable_groundtruth_similarity
1217752
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Compute the limit $\lim\limits_{x\to+\infty}\sqrt{x+\sqrt{x}}-\sqrt{x-1}$ my attempt: I tried to multiply top and bottom by the conjugate $$\begin{align}\lim_{x\to+\infty}\sqrt{x+\sqrt{x}}-\sqrt{x-1}&=\lim_{x\to+\infty}\left(\sqrt{x+\sqrt{x}}-\sqrt{x-1}\right)\frac{\sqrt{x+\sqrt{x}}+\sqrt{x-1}}{\sqrt{x+\sqrt{x}}+\sqrt{x-1}}\\&=\lim_{x\to+\infty}\frac{\left(\sqrt{x+\sqrt{x}}\right)^2-\left(\sqrt{x-1}\right)^2}{\sqrt{x+\sqrt{x}}+\sqrt{x-1}}\\&=\lim_{x\to+\infty}\frac{(x+\sqrt{x})-(x-1)}{\sqrt{x+\sqrt{x}}+\sqrt{x-1}}\\&=\lim_{x\to+\infty}\frac{1+\sqrt{x}}{\sqrt{x+\sqrt{x}}+\sqrt{x-1}}\end{align}$$ But I don't know what I can do after this. Now provide the response and nothing else.
Let's start from your last line:$$\begin{align}\lim_{x\to+\infty}\frac{1+\sqrt{x}}{\sqrt{x+\sqrt{x}}+\sqrt{x-1}} &= \lim \frac{\sqrt x}{\sqrt x} \frac{\frac{1}{\sqrt x} + 1}{\sqrt{1 + \frac{1}{\sqrt x}} + \sqrt{1 - \frac{1}{ x}}} \\&= \frac{1}{1 + 1} = \frac{1}{2}\end{align}$$where we note that everywhere we have $\frac{1}{\sqrt x}$, those terms go to $0$ as $x \to \infty$. The method of factoring out the largest element in the numerator and denominator very often works. $\diamondsuit$
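As a purely numerical sanity check (my own addition, not part of the algebra), evaluating the expression for large x should approach 1/2:

import math

for x in (1e4, 1e8, 1e12):
    print(x, math.sqrt(x + math.sqrt(x)) - math.sqrt(x - 1))
# the printed values tend to 0.5 as x grows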
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1217752', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/161705/']}
jdg_82911
stackexchange
llm_judgeable_groundtruth_similarity
11399805
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Supposing I have a windows app developed based on C#. I want to ensure that it works on 32 bit and 64 bit both. But I don't want to change the config settings or application settings time and again. Is there a way to test both variants? Now provide the response and nothing else.
There are a couple of options.

1. Target AnyCPU. If your program is 100% C#/managed code, with no native dependencies, this will cause it to run 64-bit on 64-bit operating systems and 32-bit on 32-bit operating systems.
2. Target x86. This will cause it to run 32-bit everywhere, which works properly on 64-bit Windows (via WOW64). This also works properly if you're using native (32-bit) libraries.
3. Make two builds, and two separate deployments. This allows you to use native code and still run 64-bit on 64-bit operating systems, but is far more work.

Since 32-bit applications run well on 64-bit operating systems, there is rarely a reason to run the program natively in 64-bit. This is typically only really beneficial if you're processing large amounts of data and truly need access to a larger memory space than you can get in 32-bit processes. In .NET, this typically means you'll want to build 64-bit if you're going to use more than 1.2-1.6 GB of RAM for your program. Otherwise, 32-bit will work fine everywhere.
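If you go with AnyCPU and want to confirm at runtime which mode you actually ended up in, a small check using standard .NET 4+ APIs (a sketch, not required for the deployment itself) is:

using System;

class BitnessCheck
{
    static void Main()
    {
        Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
        Console.WriteLine("IntPtr.Size:    " + IntPtr.Size);   // 4 in 32-bit, 8 in 64-bit
    }
}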
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11399805', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1183979/']}
jdg_82912
stackexchange
llm_judgeable_groundtruth_similarity
10048978
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to send messages from php to node.js? I have a linux server running php and node.js. When a user completes a transaction (via php), I'd like send a message from php to node.js. Node will then update the client via a socket connection. What's a good way to send a small amount of data from php to node.js without defeating the performance of node.js? Now provide the response and nothing else.
The suggestion seems to be to talk to node through the HTTP interface, just as any other client does. You can talk to node via HTTP using cURL in PHP. See: http://groups.google.com/group/socket_io/browse_thread/thread/74a76896d2b72ccc/216933a076ac2595?pli=1

In particular, see this post from Matt Pardee:

I faced a similar problem with wanting to keep users informed of a new note added on to a bug, and similar notifications that could really only be effectively sent from PHP to my Node server. What I did follows (apologies if this gets all garbled and unformatted in sending, if it does, I'd be happy to paste the code somewhere else): First, you'll need to use cURL from PHP. I wrote a function for my class like this:

function notifyNode($type, $project_id, $from_user, $data) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, 'http://127.0.0.1');
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Expect:'));
    curl_setopt($ch, CURLOPT_PORT, 8001);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);
    curl_setopt($ch, CURLOPT_POST, true);

    $pf = array('f' => $type, 'pid' => $project_id, 'user_from' => $from_user, 'data' => array());
    foreach($data as $k => $v) {
        $pf['data'][$k] = $v;
    }

    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($pf));
    curl_exec($ch);
    curl_close($ch);
}

You'll notice that I send the cURL request on the same server since both PHP and NodeJS are running there, your mileage may vary. The port I set this code to connect to is 8001 (this is the port my Node server is running on, and the port the socket.io server connects to). This sends an HTTP POST request with the post fields encoded. This is all pretty standard cURL stuff. In your Node app you probably have something like:

var server = http.createServer(function(req, res) {});
server.listen(8001);
var io = io.listen(server, { transports: ['websocket', 'flashsocket', 'xhr-polling'] });

... well what we'll do here is expand on the http.createServer part, to listen for connections coming from our local host ("127.0.0.1"). The createServer code then becomes:

var server = http.createServer(function(req, res) {
    // Check for notices from PHP
    if(res.socket.remoteAddress == '127.0.0.1') {
        if(req.method == 'POST') {
            // The server is trying to send us an activity message
            var form = new formidable.IncomingForm();
            form.parse(req, function(err, fields, files) {
                res.writeHead(200, [["Content-Type", "text/plain"], ["Content-Length", 0]]);
                res.write('');
                res.end();
                //sys.puts(sys.inspect({fields: fields}, true, 4));
                handleServerNotice(fields);
            });
        }
    }
});

From there you can implement your handleServerNotice function:

function handleServerNotice(data) {
    ...
}

etc etc. I haven't tested this in a while, and in fact that code block was commented out on my node server, so I hope what I've pasted here works - in general this concept is proven and I think it'll work for you. Anyway just wanted to be sure you knew it's been a few months so I'm not sure exactly why I commented it out. The code I wrote took a little research -- like setting the 'Expect:' header in cURL -- and I was pretty excited when it finally worked. Let me know if you need any additional help. Best, Matt Pardee
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10048978', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/979310/']}
jdg_82913
stackexchange
llm_judgeable_groundtruth_similarity
21869223
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a problem letting the user create a folder in Laravel 4 through an AJAX request > route > controller@method. I tested that a successful AJAX request to the URL calls the right method. When I use mkdir or File::mkdir($path); (does this method exist?), I get the response Failed to load resource: the server responded with a status of 500 (Internal Server Error) and the new folder is not created. How can I solve this?

route.php

Route::post('admin/article/addimagegallery', 'AdminDashboardController@addImagegallery');

AdminDashboardController

public function addImagegallery()
{
    if (Request::ajax()) {
        …
        $galleryId = 1; // for test
        $path = public_path().'/images/article/imagegallery/'.$galleryId;
        File::mkdir($path);
    }
}

js

$.ajax({
    url: 'addimagegallery',
    type: 'POST',
    data: {addimagegallery: 'addimagegallery'},
}).done(function(response) {
    console.log(response);
});

Now provide the response and nothing else.
No, actually it's:

use File;
File::makeDirectory($path);

Also, you may try this:

$path = public_path().'/images/article/imagegallery/' . $galleryId;
File::makeDirectory($path, $mode = 0777, true, true);

Update: Actually it does work; mkdir is being used behind the scenes. This is the source:

/**
 * Create a directory.
 *
 * @param  string  $path
 * @param  int     $mode
 * @param  bool    $recursive
 * @param  bool    $force
 * @return bool
 */
public function makeDirectory($path, $mode = 0777, $recursive = false, $force = false)
{
    if ($force)
    {
        return @mkdir($path, $mode, $recursive);
    }
    else
    {
        return mkdir($path, $mode, $recursive);
    }
}

For deleting:

public function deleteDirectory($directory, $preserve = false);

Check the source at the following path (in your local installation): root/vendor/laravel/framework/src/Illuminate/Filesystem/Filesystem.php
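For the controller in the question, a sketch tying this together could look like the following; the exists check and the JSON response are my additions, assuming Laravel 4's File and Response facades:

public function addImagegallery()
{
    if (Request::ajax()) {
        $galleryId = 1; // for test
        $path = public_path().'/images/article/imagegallery/'.$galleryId;

        if (! File::exists($path)) {
            // mode 0777, recursive, force
            File::makeDirectory($path, 0777, true, true);
        }

        return Response::json(array('created' => true, 'path' => $path));
    }
}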
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21869223', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1775888/']}
jdg_82914
stackexchange
llm_judgeable_groundtruth_similarity
51657105
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a cronjob that sends out emails to customers. It occasionally fails for various reasons. I do not want it to restart, but it still does. I am running Kubernetes on GKE. To get it to stop, I have to delete the CronJob and then kill all the pods it creates manually. This is bad, for obvious reasons. apiVersion: batch/v1beta1kind: CronJobmetadata: creationTimestamp: 2018-06-21T14:48:46Z name: dailytasks namespace: default resourceVersion: "20390223" selfLink: [redacted] uid: [redacted]spec: concurrencyPolicy: Forbid failedJobsHistoryLimit: 1 jobTemplate: metadata: creationTimestamp: null spec: template: metadata: creationTimestamp: null spec: containers: - command: - kubernetes/daily_tasks.sh env: - name: DB_HOST valueFrom: fieldRef: apiVersion: v1 fieldPath: status.hostIP envFrom: - secretRef: name: my-secrets image: [redacted] imagePullPolicy: IfNotPresent name: dailytasks resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Never schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 schedule: 0 14 * * * successfulJobsHistoryLimit: 3 suspend: truestatus: active: - apiVersion: batch kind: Job name: dailytasks-1533218400 namespace: default resourceVersion: "20383182" uid: [redacted] lastScheduleTime: 2018-08-02T14:00:00Z Now provide the response and nothing else.
It turns out that you have to set backoffLimit: 0 in combination with restartPolicy: Never and concurrencyPolicy: Forbid.

- backoffLimit means the number of times it will retry before it is considered failed. The default is 6.
- concurrencyPolicy set to Forbid means it will run 0 or 1 times, but not more.
- restartPolicy set to Never means it won't restart on failure.

You need to do all 3 of these things, or your cronjob may run more than once.

spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      backoffLimit: 0      # [ADD THIS]
      template:
        ... MORE STUFF ...
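Putting the three settings in one place, a minimal manifest sketch (the image name and command are placeholders, and it keeps the batch/v1beta1 apiVersion from the question) would be:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: dailytasks
spec:
  schedule: "0 14 * * *"
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      backoffLimit: 0
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: dailytasks
            image: my-image            # placeholder
            command: ["kubernetes/daily_tasks.sh"]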
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/51657105', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/142098/']}
jdg_82915
stackexchange
llm_judgeable_groundtruth_similarity
71901632
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I found an error log from Apache 2 that is fatal: unsafe repository ('/home/repon' is owned by someone else) It happens because I have git rev-parse --symbolic-full-name --abbrev-ref HEAD' in PHP code, and it looks like the new Git safety change no longer allows www-data to run this Git command. Running the following command does not work: git config --global --add safe.directory /homerepon Is there a workaround to solve this issue? Git version: 2.35.3 PHP version: 7.4 Apache2 version: 2.4.41 Now provide the response and nothing else.
This started appearing with the release of the Git 2.35.2 security update which fixes the vulnerabilities described here. Credits @Juan-Kabbali.

Here are four possible solutions:

1. Trust the Git directory (do it if you know the directory contents are safe):

   git config --global --add safe.directory /home/repon

   This adds the safe group to the file ~/.gitconfig as shown in this example:

   [safe]
       directory = /home/repon

2. Run the command as the correct user, for example:

   sudo -u ubuntu -- git status

   Note: This requires user www-data to have permission to execute the Git command as user ubuntu (assuming ubuntu is the repository owner). For this to work, you will need to add a new file inside /etc/sudoers.d/ with the following contents:

   www-data ALL=(ubuntu) NOPASSWD: /usr/bin/git

   This may have security implications, so refer to your security person first.

3. Change the Git repository owner to www-data:

   sudo chown -R www-data:www-data /home/repon

4. Downgrade Git as a temporary solution. For example, in Ubuntu:

   apt install git-man=1:2.17.0-1ubuntu1 git=1:2.17.0-1ubuntu1

Note: At least on Windows, it appears that all Git repositories on removable drives are considered unsafe and changing the ownership does not seem to work.
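One detail worth calling out for the Apache/PHP case in the question: git config --global writes to the config of whoever runs the command, so running it as your own user does nothing for www-data. A sketch of the two usual fixes (paths and user names taken from the question, the rest is an assumption about your setup):

# System-wide (writes /etc/gitconfig, so it applies to www-data as well); run as root:
git config --system --add safe.directory /home/repon

# Or add it to www-data's own global config (assuming www-data has a writable home directory):
sudo -u www-data -H git config --global --add safe.directory /home/repon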
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/71901632', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18836161/']}
jdg_82916
stackexchange
llm_judgeable_groundtruth_similarity
19599252
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm populating a DropDownList in MVC 4 from an enum and I want to order the enum values from largest to smallest. However, there doesn't seem to be a direct way of approaching this. Currently, I'm using this code to add to a dictionary with the key being the ID and the value being the display text: var priorities = Enum.GetValues(typeof(Models.Priority)).OfType<Models.Priority>().ToList();for (int i = priorities.Count - 1; i >= 0; i--){ Models.Priority priority = priorities[i]; prioritiesDictionary.Add((int)priority, "Priority " + ((int)priority).ToString());} I don't believe that putting enum values into a list and looping backwards is the most efficient method. There are only four values in the enum, but is there a better way to perform an OrderBy operation on what is returned from Enum.GetValues ? I know it might be minimal performance impact doing it the way I am, but I want to know for larger enums. Now provide the response and nothing else.
Sounds like you just want:

var priorities = ((Models.Priority[]) Enum.GetValues(typeof(Models.Priority)))
    .OrderByDescending(x => x);

Or to avoid quite as many brackets:

var priorities = (Models.Priority[]) Enum.GetValues(typeof(Models.Priority));
var ordered = priorities.OrderByDescending(x => x);

It's not clear how your current code is helping you, by giving you a dictionary - but the above will definitely give you a sequence of enum values, ordered from highest to lowest. You don't need to cast to int, because enum values of the same type are already comparable to each other. If you need a list, just call ToList() after the OrderByDescending call.
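For the DropDownList use case in the question, a sketch of feeding the ordered values straight into MVC (this assumes System.Web.Mvc's SelectListItem and is only one of several ways to wire it up) might be:

var items = ((Models.Priority[]) Enum.GetValues(typeof(Models.Priority)))
    .OrderByDescending(p => p)
    .Select(p => new SelectListItem
    {
        Value = ((int) p).ToString(),
        Text  = "Priority " + (int) p
    })
    .ToList();

// then e.g. Html.DropDownListFor(model => model.Priority, items) in the view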
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19599252', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/541404/']}
jdg_82917
stackexchange
llm_judgeable_groundtruth_similarity
43156989
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm overriding Doorkeepers AuthorizationsController . As the docs suggested I inherit from the AuthorizationsController . Now the code below shows my latest attempt for the override. What I currently have Basically, add an extra if statement around the new Authorization Doorkeeper process . I've added line 3-7, currently that works fine. It returns me :error if line 6 is equal to true. My question I can still see the AccessToken via the Browser URL and the server log. So as a user, I could still use this AccessToken to retrieve some data with Postman for example. Even tho it gave me an error when signing in. Why is this? And how could I prevent this from happening? class AuthorizationsController < Doorkeeper::AuthorizationsController def new application = Application.find(authorization.authorize.pre_auth.client.id) resource_owner = User.find(current_resource_owner) if application.users.exclude?(resource_owner) && application.owner != resource_owner render :error elsif pre_auth.authorizable? if skip_authorization? || matching_token? auth = authorization.authorize redirect_to auth.redirect_uri else render :new end else render :error end endend If you check the introduction to OAuth2 written by DigitalOcean, my if statement still succeeds on step 3 'User-agent Receives Access Token with Redirect URI', because I can see the AccessToken with the redirect URI in my browser URL. And after step 3 it gives me the :error . UPDATE The whole process of generating an AccessToken is already finished before my override on the AuthorizationsController starts. I added a simple before_action to print to the server log, but before that Doorkeeper::AccessToken Load (0.9ms) SELECT 'oauth_access_tokens'.* FROM 'oauth_access_tokens' WHERE 'oauth_access_tokens'.'token' = 'x' LIMIT 1 happens. Now provide the response and nothing else.
The concept of trigram similarity relies on having any sentence divided into "trigrams" (sequences of three consecutive letters), and treating the result as a SET (i.e.: the order doesn't matter, and you don't have repeated values). Before the sentence is considered, two blank spaces are added at the beginning, and one at the end, and single spaces are replaced by double ones. Trigrams are a special case of N-grams . The trigram set corresponding to "Chateau blanc" is found by finding all sequences of three letters that appear on it: chateau blanc--- => ' c' --- => ' ch' --- => 'cha' --- => 'hat' --- => 'ate' --- => 'tea' --- => 'eau' --- => 'au ' --- => 'u ' --- => ' b' --- => ' bl' --- => 'bla' --- => 'lan' --- => 'anc' --- => 'nc ' Sorting them, and taking out repetitions gets you: ' b'' c'' bl'' ch''anc''ate''au ''bla''cha''eau''hat''lan''nc ''tea' This can be computed by PostgreSQL by means of the function show_trgm : SELECT show_trgm('Chateau blanc') AS AA = [ b, c, bl, ch,anc,ate,au ,bla,cha,eau,hat,lan,nc ,tea] ... which has 14 trigrams. (Check pg_trgm ). And the trigram set corresponding to "Chateau Cheval Blanc" is: SELECT show_trgm('Chateau Cheval Blanc') AS B B = [ b, c, bl, ch,anc,ate,au ,bla,cha,che,eau,evl,hat,hev,la ,lan,nc ,tea,vla] ... which has 19 trigrams If you count how many trigrams have both sets in common, you find that they have the following ones: A intersect B = [ b, c, bl, ch,anc,ate,au ,bla,cha,eau,hat,lan,nc ,tea] and the ones they have in total are: A union B = [ b, c, bl, ch,anc,ate,au ,bla,cha,che,eau,evl,hat,hev,la ,lan,nc ,tea,vla] That is, both sentences have 14 trigrams in common, and 19 in total. The similarity is computed as: similarity = 14 / 19 You can check it with: SELECT cast(14.0/19.0 as real) AS computed_result, similarity('Chateau blanc', 'chateau cheval blanc') AS function_in_pg and you'll see that you get: 0.736842 ... which explains how similarity is computed, and why you get the values you get. NOTE: You can compute the intersection and union by means of: SELECT array_agg(t) AS in_commonFROM( SELECT unnest(show_trgm('Chateau blanc')) AS t INTERSECT SELECT unnest(show_trgm('chateau chevla blanc')) AS t ORDER BY t) AS trigrams_in_common ;SELECT array_agg(t) AS in_totalFROM( SELECT unnest(show_trgm('Chateau blanc')) AS t UNION SELECT unnest(show_trgm('chateau chevla blanc')) AS t) AS trigrams_in_total ; And this is a way to explore the similarity of different pair of sentences: WITH p AS( SELECT 'This is just a sentence I''ve invented'::text AS f1, 'This is just a sentence I''ve also invented'::text AS f2),t1 AS( SELECT unnest(show_trgm(f1)) FROM p),t2 AS( SELECT unnest(show_trgm(f2)) FROM p),x AS( SELECT (SELECT count(*) FROM (SELECT * FROM t1 INTERSECT SELECT * FROM t2) AS s0)::integer AS same, (SELECT count(*) FROM (SELECT * FROM t1 UNION SELECT * FROM t2) AS s0)::integer AS total, similarity(f1, f2) AS sim_2FROM p )SELECT same, total, same::real/total::real AS sim_1, sim_2FROM x ; You can check it at Rextester
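For completeness, a sketch of using this in practice; the wines table and its name column are made-up examples, and it assumes the pg_trgm extension can be installed on your PostgreSQL instance:

CREATE EXTENSION IF NOT EXISTS pg_trgm;

SELECT similarity('Chateau blanc', 'chateau cheval blanc');   -- ≈ 0.736842, as computed above

-- Trigram index for fast fuzzy search on a hypothetical table:
CREATE INDEX wines_name_trgm_idx ON wines USING gin (name gin_trgm_ops);

SELECT name, similarity(name, 'chateau blanc') AS sim
FROM wines
WHERE name % 'chateau blanc'          -- % is the trigram "similar to" operator
ORDER BY sim DESC;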
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/43156989', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6018313/']}
jdg_82918
stackexchange
llm_judgeable_groundtruth_similarity
6711870
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What does irrefutable pattern failed for pattern mean?What cases will cause this runtime error? Now provide the response and nothing else.
Well, I assume it means what it says - that a pattern doesn't match but there is no alternative. This example, for the program:

g x = let Just y = f x in h y

GHC reports:

Main: M1.hs:9:11-22: Irrefutable pattern failed for pattern Data.Maybe.Just y

indicating the source of the failure. It comes from http://www.haskell.org/haskellwiki/Debugging. The point of the example is that if f x returns Nothing then there is no way GHC can assign a value to y.
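A sketch of how to avoid the error (using the same hypothetical f and h from the example) is to handle the Nothing case explicitly instead of relying on the irrefutable let pattern:

g x = case f x of
        Just y  -> h y
        Nothing -> error "f returned Nothing"   -- or return a sensible default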
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6711870', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_82919
stackexchange
llm_judgeable_groundtruth_similarity
14721062
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have 2 lists of instances: list1list2 each instance contains variables such as id, name, etc... I am iterating through list2, and I want to find entries that don't exist in list1. eg.. for entry in list2: if entry.id in list1: <do something> I'm hoping to find a way to do this without a douple for loop. Is there an easy way? Now provide the response and nothing else.
I might do something like:

set1 = set((x.id, x.name, ...) for x in list1)
difference = [x for x in list2 if (x.id, x.name, ...) not in set1]

where ... is additional (hashable) attributes of the instance -- you need to include enough of them to make it unique. This takes your O(N*M) algorithm and turns it into an O(max(N,M)) algorithm.
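A slightly more explicit variant of the same idea (the key function below is my own naming; adjust the attribute tuple to whatever identifies an instance):

def key(x):
    return (x.id, x.name)          # add more attributes if needed for uniqueness

seen = {key(x) for x in list1}
missing_from_list1 = [x for x in list2 if key(x) not in seen]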
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14721062', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2005821/']}
jdg_82920
stackexchange
llm_judgeable_groundtruth_similarity
16106757
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have seen people use a couple of different way of initializing arrays: string[] Meal = new string[]{"Roast beef", "Salami", "Turkey", "Ham", "Pastrami"}; or another way, also called initializing is: string[] Meats = {"Roast beef", "Salami", "Turkey", "Ham", "Pastrami" }; What is the best way, and what is the major difference between both ways (including memory allocation)? Now provide the response and nothing else.
There is no difference in both cases. Compiler generates the same bytecode ( newarr OpCode): public static void Main(){ string[] Meal = new string[] { "Roast beef", "Salami", "Turkey", "Ham", "Pastrami"}; string[] Meats = { "Roast beef", "Salami", "Turkey", "Ham", "Pastrami" };} MSIL: .entrypoint // Code size 100 (0x64) .maxstack 3 .locals init ([0] string[] Meal, [1] string[] Meats, [2] string[] CS$0$0000) IL_0000: nop IL_0001: ldc.i4.5 IL_0002: newarr [mscorlib]System.String IL_0007: stloc.2 IL_0008: ldloc.2 IL_0009: ldc.i4.0 IL_000a: ldstr "Roast beef" IL_000f: stelem.ref IL_0010: ldloc.2 IL_0011: ldc.i4.1 IL_0012: ldstr "Salami" IL_0017: stelem.ref IL_0018: ldloc.2 IL_0019: ldc.i4.2 IL_001a: ldstr "Turkey" IL_001f: stelem.ref IL_0020: ldloc.2 IL_0021: ldc.i4.3 IL_0022: ldstr "Ham" IL_0027: stelem.ref IL_0028: ldloc.2 IL_0029: ldc.i4.4 IL_002a: ldstr "Pastrami" IL_002f: stelem.ref IL_0030: ldloc.2 IL_0031: stloc.0 IL_0032: ldc.i4.5 IL_0033: newarr [mscorlib]System.String IL_0038: stloc.2 IL_0039: ldloc.2 IL_003a: ldc.i4.0 IL_003b: ldstr "Roast beef" IL_0040: stelem.ref IL_0041: ldloc.2 IL_0042: ldc.i4.1 IL_0043: ldstr "Salami" IL_0048: stelem.ref IL_0049: ldloc.2 IL_004a: ldc.i4.2 IL_004b: ldstr "Turkey" IL_0050: stelem.ref IL_0051: ldloc.2 IL_0052: ldc.i4.3 IL_0053: ldstr "Ham" IL_0058: stelem.ref IL_0059: ldloc.2 IL_005a: ldc.i4.4 IL_005b: ldstr "Pastrami" IL_0060: stelem.ref IL_0061: ldloc.2 IL_0062: stloc.1 IL_0063: ret
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16106757', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1032248/']}
jdg_82921
stackexchange
llm_judgeable_groundtruth_similarity
29892082
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My requirement is to create a map which contains key as tableName and value as elements inside the schema field in the json file. { "status":"success", "tables":[ { "dbname":"idn", "tableName":"my_monthly_hits_b", "schema":"(cm11:chararray)", "location":"/user/mydb/" }, { "dbname":"idn", "tableName": "my_monthly_match", "schema":"(city:chararray,match:chararray,cm11:chararray)", "location":"/user/mydb1" } ]} Now provide the response and nothing else.
The error messages tells you that you can't index into a vector with values of type u32 . Vec indices have to be of type usize , so you have to cast your indices to that type like this: vec[start as usize..(start + len) as usize] or just change the type of the start and len arguments to usize . You also need to take a reference to the result : &vec[start as usize..(start + len) as usize]
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29892082', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4798203/']}
jdg_82922
stackexchange
llm_judgeable_groundtruth_similarity
25889637
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We are writing a small ANN which is supposed to categorize 7000 products into 7 classes based on 10 input variables. In order to do this we have to use k-fold cross validation but we are kind of confused. We have this excerpt from the presentation slide: What are exactly the validation and test sets? From what we understand is that we run through the 3 training sets and adjust the weights (single epoch). Then what do we do with the validation? Because from what I understand is that the test set is used to get the error of the network. What happens next is also confusing to me. When does the crossover take place? If it's not too much to ask, a bullet list of step would be appreciated Now provide the response and nothing else.
You seem to be a bit confused (I remember I was too) so I am going to simplify things for you. ;) Sample Neural Network Scenario Whenever you are given a task such as devising a neural network you are often also given a sample dataset to use for training purposes. Let us assume you are training a simple neural network system Y = W · X where Y is the output computed from calculating the scalar product (·) of the weight vector W with a given sample vector X . Now, the naive way to go about this would be using the entire dataset of, say, 1000 samples to train the neural network. Assuming that the training converges and your weights stabilise you can then safely say that you network will correctly classify the training data. But what happens to the network if presented with previously unseen data? Clearly the purpose of such systems is to be able to generalise and correctly classify data other than the one used for training. Overfitting Explained In any real-world situation, however, previously-unseen/new data is only available once your neural network is deployed in a, let's call it, production environment. But since you have not tested it adequately you are probably going to have a bad time. :) The phenomenon by which any learning system matches its training set almost perfectly but constantly fails with unseen data is called overfitting . The Three Sets Here come in the validation and testing parts of the algorithm. Let's go back to the original dataset of 1000 samples. What you do is you split it into three sets -- training , validation and testing ( Tr , Va and Te ) -- using carefully selected proportions. (80-10-10)% is usually a good proportion, where: Tr = 80% Va = 10% Te = 10% Training and Validation Now what happens is that the neural network is trained on the Tr set and its weights are correctly updated. The validation set Va is then used to compute the classification error E = M - Y using the weights resulting from the training, where M is the expected output vector taken from the validation set and Y is the computed output resulting from the classification ( Y = W * X ). If the error is higher than a user-defined threshold then the whole training-validation epoch is repeated. This training phase ends when the error computed using the validation set is deemed low enough. Smart Training Now, a smart ruse here is to randomly select which samples to use for training and validation from the total set Tr + Va at each epoch iteration. This ensures that the network will not over-fit the training set. Testing The testing set Te is then used to measure the performance of the network. This data is perfect for this purpose as it was never used throughout the training and validation phase. It is effectively a small set of previously unseen data, which is supposed to mimic what would happen once the network is deployed in the production environment. The performance is again measured in term of classification error as explained above. The performance can also (or maybe even should) be measured in terms of precision and recall so as to know where and how the error occurs, but that's the topic for another Q&A. Cross-Validation Having understood this training-validation-testing mechanism, one can further strengthen the network against over-fitting by performing K-fold cross-validation . This is somewhat an evolution of the smart ruse I explained above. 
This technique involves performing K rounds of training-validation-testing on different, non-overlapping, equally-proportioned Tr, Va and Te sets. Given k = 10, for each value of K you will split your dataset into Tr+Va = 90% and Te = 10% and you will run the algorithm, recording the testing performance.

k = 10
for i in 1:k
    # Select unique training and testing datasets
    KFoldTraining <-- subset(Data)
    KFoldTesting  <-- subset(Data)

    # Train and record performance
    KFoldPerformance[i] <-- SmartTrain(KFoldTraining, KFoldTesting)

# Compute overall performance
TotalPerformance <-- ComputePerformance(KFoldPerformance)

Overfitting Shown

I am taking the world-famous plot below from Wikipedia to show how the validation set helps prevent overfitting. The training error, in blue, tends to decrease as the number of epochs increases: the network is therefore attempting to match the training set exactly. The validation error, in red, on the other hand follows a different, u-shaped profile. The minimum of that curve is ideally where the training should be stopped, as this is the point at which the training and validation error are lowest.

References

For more references, this excellent book will give you both a sound knowledge of machine learning as well as several migraines. Up to you to decide if it's worth it. :)
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/25889637', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3857691/']}
jdg_82923
stackexchange
llm_judgeable_groundtruth_similarity
1876606
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can do SELECT TOP (200) ... but why not BOTTOM (200)? Well not to get into philosophy what I mean is, how can I do the equivalent of TOP (200) but in reverse (from the bottom, like you'd expect BOTTOM to do...)? Now provide the response and nothing else.
SELECT columns
FROM
(
    SELECT TOP 200 columns
    FROM My_Table
    ORDER BY a_column DESC
) SQ
ORDER BY a_column ASC
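On SQL Server 2012 or newer you can get the same "bottom N" without nesting; a sketch (it assumes My_Table has at least 200 rows, otherwise the offset would go negative):

DECLARE @skip int = (SELECT COUNT(*) - 200 FROM My_Table);

SELECT columns
FROM My_Table
ORDER BY a_column ASC
OFFSET @skip ROWS FETCH NEXT 200 ROWS ONLY;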
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1876606', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18309/']}
jdg_82924
stackexchange
llm_judgeable_groundtruth_similarity
50328143
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm a beginner in async await and promises. I read few articles and watch few tutorial videos but I am still not able to understand it completely.So I have a code that I'm working on right now }).then(function() { var responseArray = [] [url1,url2,url3,url4].forEach((url)=>{ makeRequest(url) }).then((response)=>{ responseArray.push(response) }) return responseArray}) So as expected the responseArray is returned empty. I need to make it wait until all the responses from each makerequest(url) is pushed to the responseArray. This is my attempt }).then(function() { var responseArray = [] [url1,url2,url3,url4].forEach((url)=>{ async makeRequest(url) }).then((response)=>{ await responseArray.push(response) }) return responseArray }) Can anyone help me fix this one? Now provide the response and nothing else.
You need to map the requests to an array of promises, then use Promise.all:

.then(async () => {
  const responseArray = await Promise.all(
    [url1, url2, url3, url4].map(makeRequest)
  );
})

This will execute all the requests in parallel (which is generally what you want unless you need to limit bandwidth, etc.). If you want to execute them sequentially, there's a huge discussion on the best approach.
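If the requests must run one at a time instead of in parallel (rate limits, ordering requirements), a sequential sketch using the same hypothetical makeRequest is:

.then(async () => {
  const responseArray = [];
  for (const url of [url1, url2, url3, url4]) {
    responseArray.push(await makeRequest(url));
  }
  return responseArray;
})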
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50328143', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4531869/']}
jdg_82925
stackexchange
llm_judgeable_groundtruth_similarity
2586596
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let's say $\boldsymbol{\vec F}$ is a field vector with a liner relationship $$\boldsymbol{\vec F}(\boldsymbol{\vec x})=\boldsymbol A \boldsymbol{\vec x}$$ where $\boldsymbol{\vec x}$ is a vector of size $n$ and $\boldsymbol A$ is a constant $n\times n$ square matrix. For a given point $\boldsymbol{\vec x}_0$, how can I find the hyper-surface $S(\boldsymbol{\vec x})=0$ which crosses $\boldsymbol{\vec x}_0$ and it is perpendicular to the vector field $\boldsymbol{\vec F}$? I believe this there is an explicit form of the surface which depends on $\boldsymbol{\vec x}_0$ and $\boldsymbol A$. On hyper-surface $S$, there will be $n-1$ degree of freedom. I am looking for a solution for an $n$ dimension vector field. This image is just for illustration. PS. This question is a special case (linear form) of my previous question . Now provide the response and nothing else.
Such orthogonal hypersurfaces need not exist if $n\geq3$. Relevant here is Frobenius' theorem that gives the necessary and sufficient local integrability condition. The following source http://staff.ustc.edu.cn/~wangzuoq/Courses/16F-Manifolds/Notes/Lec11.pdf contains a readable introduction. Note that your vector field ${\bf F}$ defines at each point ${\bf x}$ an $(n-1)$-dimensional orthogonal hyperplane, called a distribution in the quoted notes. Frobenius' theorem expands to a condition on the matrix $A$ in your question. The case $n=3$ should be intuitively comprehensible. About this case I can say the following: If the vector field ${\bf F}$ has orthogonal surfaces these can be regarded as level surfaces of some scalar function $g$. This $g$ has an associated gradient field $\nabla g$ which then is orthogonal to these same surfaces, hence parallel to ${\bf F}$. It follows that there is a scalar function $\lambda$ such that$${\bf F}=\lambda\>\nabla g\ .$$We now compute $${\rm curl}({\bf F})=\nabla\lambda\times\nabla g\ .$$It follows that ${\rm curl}({\bf F})$ is orthogonal to $\nabla g$, hence to the given ${\bf F}$. In this way we have derived the integrability condition $${\rm curl}({\bf F})\cdot{\bf F}\equiv0\ .\tag{1}$$This is a necessary condition for the existence of such a family of orthogonal surfaces. Frobenius' theorem says that this condition is also (locally) sufficient . (Unfortunately I could not find a proof of this simple case of Frobenius' theorem on the web.) In the case of your ${\bf F}:=A{\bf x}$ this means the following: Being a linear function of ${\bf x}$ the field ${\bf F}$ has constant curl ${\bf c}=(A_{32}-A_{23},A_{13}-A_{31}, A_{21}-A_{12})$. The condition $(1)$ then expands to$$A^{\top}{\bf c}={\bf 0}\ .\tag{2}$$If $A$ is nonsingular then $(2)$ enforces ${\bf c}={\bf 0}$, which means that $A$ has to be a symmetric matrix. Rodrigo de Azevedo's solution shows that in this case we indeed can find orthogonal surfaces.
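For the symmetric case mentioned in the last sentence, the orthogonal hypersurfaces can be written down explicitly, since then ${\bf F}$ is a gradient field (a sketch of that special case only):
$${\bf F}({\bf x})=A{\bf x}=\nabla\left(\tfrac12\,{\bf x}^\top A\,{\bf x}\right),\qquad S({\bf x})=\tfrac12\,{\bf x}^\top A\,{\bf x}-\tfrac12\,{\bf x}_0^\top A\,{\bf x}_0=0,$$
so the level set of the quadratic form through ${\bf x}_0$ is a hypersurface orthogonal to ${\bf F}$ wherever ${\bf F}\neq{\bf 0}$; for non-symmetric $A$ the integrability condition above decides whether such surfaces exist at all.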
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2586596', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/219655/']}
jdg_82926
stackexchange
llm_judgeable_groundtruth_similarity
263399
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Scenario : I'm looking to protect my software that is written in PHP. The nature of PHP is that it is delivered as plain text and therefore cannot be protected by itself. I don't want to install libs to the server like ZendGuard, IonCube or SourceGuardian. I want that individuals should use the software but should not interfere with the protected part of the application (licensing, sensitive parts). The software is distributed to clients as trial software . Solution Concept : Encrypt the payload with a blockcipher (AES) and store the key on a remote system. To software would require the key to decrypt the payload. An obfuscated part of the software should contact the key server via cURL over SSL and request the decryption key. The decryption key should only be sent back to the Server IP that holds a trial license. A checksum of the files could also be posted with the key request and if it even fails once, the key server could reject to provide the decryption key forever, which should hopefully prevent tampering. Notes : The client requesting the trial will be known in advance so unauthorized copies of the software should never receive a (correct) decryption key in first place. The decryption keys are unique to every client and the payload parts are too. The code will never be stored decrypted but only decrypted at runtime and eval()'ed. Questions : Is this a feasable or doomed to failure? Wouldn't this protect even against the tiniest tampering by anyone? Thank you! Now provide the response and nothing else.
Let's see. Once your key server sends the decryption key, the hacker can monitor network activity to retrieve the key and use it to decrypt the source code. From this moment, he can do whatever he wants with it, especially: Modify it in a way that it will either send the checksums you are expecting, and not the actual checksums of files, Replace the part which contacts the key server, or simply remove it to avoid the check, Post on P2P networks the decrypted source code with key checking part removed. On the other hand, some part of the legitimate users of your product will probably prefer using the product of your competitors , because: They'll find that your app is slow to start. Contacting the remote key server takes time, which may or may not be accepted by the end users. Decrypting takes time too, which can waste too much resources on servers. Your competitors' products may achieve much better CPU footprint if they don't add this sort of complexity. Banning by IP will cause additional problems. Most users have dynamic IPs, which means that a ban will affect the concerned person only for a very limited amount of time (such as one day), and then prevent other persons to access your product. In the same way, banning the whole company or a wifi spot is rather unfortunate in terms of marketing. They won't accept to rely on a fragile product (unless you work in a very large company which guarantees that the product will be maintained for the next 5, 10 or 15 years). Not having the source code means that: If your company stops maintaining it, your customers will be using an obsolete product which may contain known but unpatched bugs . If a customer wants to modify it, he has no other choice than to pay you (often a lot, because of the monopoly) to do the change. They won't accept to rely on a product which can contain malware . If you're a known company with excellent reputation, this is not an issue. If you're a startup or don't have a well-established reputation, some customers would not install your app because of the risk of malicious code. This is especially important on servers which are often more protected than desktop PCs in companies. If you actually invented something that should be protected ( and very probably, you haven't ), the only way you can protect it is to move the sensitive code to the servers you own , and then provide an API to access the functionality from the outside. As soon as you give the code, no matter how well is it protected, it can be decrypted; otherwise, it wouldn't be possible to run it. If the only goal is to have a trial version, simply host it on your own servers. Potential customers will be able to try it, and if they are interested, purchase the product. Of course, in order to convince sysadmins as well, you should also provide very detailed information about the way your app should be deployed and hosted. Later, when your product becomes successful, you may consider evolving your offer, by providing: The limited in time demo hosted on your server, The pay-per-month subscription where the product is still hosted on your server. Multiple variants of subscriptions may target separately individuals, small companies and medium-sized companies. For small entities, the product may be out of charge for a year or some other long period of time (or forever). The more expensive solution where the customer deploys your app on his own servers. This solution may target large companies. 
With this model (used by many startups), encryption of source code becomes mostly irrelevant. Large companies won't try to download your product from a P2P network, because of all the compliance policies which prevent them from using unlicensed products.
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/263399', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/7239/']}
jdg_82927
stackexchange
llm_judgeable_groundtruth_similarity
47312116
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My Generic class export class BaseService<T> { public subUrl; constructor(public repo:RepogitoryService) { }} How can I store the class name of T on a local variable? Now provide the response and nothing else.
You must understand that TypeScript is just a transpiler (a compiler to JavaScript). Some of the syntactic sugar (such as generics) works only in the type-checking phase (and is also helpful for IntelliSense in your IDE/text editor). However, assignment to a variable happens at runtime, and at runtime it's just plain JavaScript: there are no types and no generics at runtime. But here's the easiest way I would do it:

class Some<T> {
    private TName: string;
    constructor(x: T & Function) {
        this.TName = x.name;
    }
}

class Another {}

const some = new Some<Another>(Another);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47312116', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1526675/']}
jdg_82928
stackexchange
llm_judgeable_groundtruth_similarity
2317868
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tend to use the .mm extension by default when creating new classes so that I can use ObjC++ later on if I require it. Is there any disadvantage to doing this? When would you prefer .m ? Does .m compile to a faster executable (since C is generally faster than C++)? Now provide the response and nothing else.
Before Xcode 4.0 (released in 2011), which can use the Clang front-end tool chain for both languages, the major disadvantage of using .mm over .m for "normal" Objective-C was that compile times were significantly higher for Objective-C++. This is because the C++ compiler takes longer than the C compiler. A better strategy is to use .m by default. If you need to use Objective-C++ later in development, there is no harm in renaming the file to use a .mm extension. If you do so from within Xcode, the project will be automatically updated to use the newly named file. Of course, all of the standard caveats apply once you try to compare Objective-C++ vs. Objective-C performance at run time. Since Objective-C++ is a C++ superset while Objective-C is a C superset, you are dealing with two different languages, each with its own performance tradeoffs at runtime. Given that you're using Objective-X at all, you are likely writing a user-level application (not a systems-level app), and the difference in performance between C and C++ will likely be completely determined by your ability to code efficient algorithms in each language. If you're a C++ developer, you'll likely write better code there than in C, and vice versa. So, as always, use the appropriate tool for the job. For reference, you may also be interested in this answer: C vs C++ (Objective-C vs Objective-C++) for iPhone UPDATE Feb 17, 2012 As of Xcode 4.0 (with LLVM 3.0), Clang has supported Objective-C++. Even C++11 support is quite strong now.
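For illustration, a minimal sketch (hypothetical class, not from the answer) of the kind of file that actually needs the .mm extension: an Objective-C class holding a C++ container. A class with no C++ in it can stay in a .m file.

```objectivec
// Widget.mm - must be Objective-C++ because it mixes C++ into an Objective-C class.
#import <Foundation/Foundation.h>
#include <vector>

@interface Widget : NSObject
- (void)addValue:(int)value;
- (NSUInteger)count;
@end

@implementation Widget {
    std::vector<int> _values;   // a C++ member is only legal in a .mm translation unit
}

- (void)addValue:(int)value {
    _values.push_back(value);
}

- (NSUInteger)count {
    return _values.size();
}
@end
```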
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2317868', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/135700/']}
jdg_82929
stackexchange
llm_judgeable_groundtruth_similarity
27160
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We have a shell/tty based application that has ~1,000 users. It is running in an environment where X Server is not available. If a user accidently turns on the Scroll Lock , it makes it seem as if the application is frozen because it quits responding to keyboard input. Is there a way to disable the Scroll Lock or remap it to something less intrusive for this use-case? Now provide the response and nothing else.
For a GUI environment: Type xev on the CLI, then press the Scroll Lock key to see what its keycode is. Then use xmodmap -e 'keycode <value>=<action>' where <value> is the keycode number you get from the xev command. If you want to disable Scroll Lock, leave <action> blank, or you can map the key to another action. For example, on my Asus EEE 1005P, the Scroll Lock key is mapped to keycode 78, so I would issue xmodmap -e 'keycode 78=' to disable it. For a non-GUI environment: setkeycodes scancode keycode. You get the scancode with the showkey command, and then you need to use loadkeys. And here you can find a guide. I don't have any experience with a non-GUI environment, hence I gave you the links and a general outline of how to do it.
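A hedged sketch of both paths (the keycodes shown are examples from particular machines; verify yours with xev or showkey first):

```sh
# X11 session: identify the key, then clear its mapping.
xev                        # press Scroll Lock and note the reported keycode, e.g. 78
xmodmap -e 'keycode 78='   # an empty right-hand side disables the key for this session

# Make it permanent by loading the same mapping at login:
echo 'keycode 78=' >> ~/.Xmodmap
xmodmap ~/.Xmodmap

# Console (no X): Scroll Lock is usually keycode 70; verify with showkey, then bind it to nothing.
showkey -k                                    # press Scroll Lock, note the keycode
echo "keycode 70 = VoidSymbol" | loadkeys -   # assumes 70; VoidSymbol makes the key a no-op
```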
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/27160', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/13468/']}
jdg_82930
stackexchange
llm_judgeable_groundtruth_similarity
115937
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $F_n$ be the free group generated by $x_i$ , for $1\leq i\leq n$ . Let $a_i$ be some elements of $F_n$ , also for $1\leq i\leq n$ . Is there a nice way to tell when the list $\{a_i^{-1}x_ia_i\}$ does not generate $F_n$ ? For insufficient reasons partially related to a talk Balloons and Hoops and their Universal Finite Type Invariant, BF Theory, and an Ultimate Alexander Invariant I gave once in Hamburg (video and handout there), and to a paper I'm not done writing with the same title (PDF there), I expect that there might be a way to construct out of the $a_i$ 's a conjugacy class in $F_n$ (or perhaps in some completion of $F_n$ ), whose non-triviality implies that $\{a_i^{-1}x_ia_i\}$ do not generate $F_n$ . Does this to anyone make sense? Now provide the response and nothing else.
So you're giving an endomorphism $\varphi:F_n\to F_n$ , such that $\varphi(x_i)=a_i^{-1}x_ia_i$ ,and you want to know if this is an isomorphism? There is a nice algorithm due to Stallings , called Stallings foldings, which will quickly tell you the answer. One may assume that $a_i$ is a reduced word and does not begin in $x_i^{-1}$ , and that the $a_i$ 's share no common postfix. If you take a wedge of loops as a $K(F_n,1)$ , with a loop for each generator $x_i$ , then the map $\varphi$ may be realized by a map between wedges of loops, where the loop $x_i$ goes to a loop represented by $a_i^{-1}x_ia_i$ . The domain graph gets an induced cell decomposition, where the loop $x_i$ gets subdivided into $|a_i^{-1}x_ia_i|$ edges. Then you can "fold" the domain graph, making it shorter by identifying edges stemming from a vertex which map to the same edge in the target. If you iterate until no folds are available, then the map will be an isomorphism if and only if the map is a homeomorphism at the end. From this, one gets a simple sufficient criterion for the endomorphism to not be an isomorphism, namely there are no folds available. This happens if the elements $a_i$ end in different generators and their inverses as reduced words. In general, though, I think one would have to carry out Stallings algorithm to get a necessary criterion.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/115937', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8899/']}
jdg_82931
stackexchange
llm_judgeable_groundtruth_similarity
37264396
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am counting the number of times a word appears in Macbeth: use std::io::{BufRead, BufReader};use std::fs::File;use std::collections::HashMap;fn main() { let f = File::open("macbeth.txt").unwrap(); let reader = BufReader::new(f); let mut counts = HashMap::new(); for l in reader.lines() { for w in l.unwrap().split_whitespace() { let count = counts.entry(w).or_insert(0); *count += 1; } } println!("{:?}", counts);} Rust barfs on this, saying: error[E0597]: borrowed value does not live long enough --> src/main.rs:14:9 |11 | for w in l.unwrap().split_whitespace() { | ---------- temporary value created here...14 | } | ^ temporary value dropped here while still borrowed...18 | } | - temporary value needs to live until here | = note: consider using a `let` binding to increase its lifetime The actual problem is that w is a reference, and so changing it to w.to_string() solves it. I don't get why the Rust compiler is pointing the blame at l , when the issue is w . How am I supposed to infer that w is the problem here? Now provide the response and nothing else.
is pointing the blame at l It's not, really. Review the error message again: for w in l.unwrap().split_whitespace() { ---------- temporary value created here The error marker is pointing at the call to unwrap on l . when the issue is w It's not, really. l is of type Result<String> . When you call unwrap , you get a String , and then split_whitespace returns references to that string. These references live only as long as the string, but your code tries to put them into a hashmap that will live longer than the string. The problem is that the result of l.unwrap() doesn't live long enough, and w is just a reference to the thing that doesn't live long enough. Conceptually, it's the same problem as this code: use std::collections::HashMap; fn main() { let mut counts = HashMap::new(); { let s = String::from("hello world"); counts.insert(&s, 0); } println!("{:?}", counts); } which also points at s and says it doesn't live long enough (because it doesn't). The correct solution is to convert each word into an owned String which the HashMap can then hold: for l in reader.lines() { for w in l.unwrap().split_whitespace() { *counts.entry(w.to_string()).or_insert(0) += 1; } }
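For completeness, a sketch of the full corrected program (same file name as in the question; the explicit HashMap<String, u32> annotation is only for clarity):

```rust
use std::collections::HashMap;
use std::fs::File;
use std::io::{BufRead, BufReader};

fn main() {
    let f = File::open("macbeth.txt").unwrap();
    let reader = BufReader::new(f);

    let mut counts: HashMap<String, u32> = HashMap::new();
    for line in reader.lines() {
        // to_string() gives the map its own copy of each word,
        // so nothing in the map borrows from the temporary String returned by unwrap().
        for word in line.unwrap().split_whitespace() {
            *counts.entry(word.to_string()).or_insert(0) += 1;
        }
    }

    println!("{:?}", counts);
}
```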
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37264396', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/79849/']}
jdg_82932
stackexchange
llm_judgeable_groundtruth_similarity
14702783
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a set of files in a directory. I need to find files for a set of specific dates (For example if i need files from 16th Jan to 20th Jan). I tried using ls -ltr | grep <date> but it is taking too many steps to accomplish selecting the files. Is there any easier way to get this done. Thanks! Now provide the response and nothing else.
I'm not sure how many files you have in your directory but something like this should be blindingly fast: ls -al | awk '$6 == "Jan" && $7 >= 16 && $7 <= 20 {print $9}' On my system, I see the following with dates slightly modified: pax> ls -al | awk '$6 == "Jan" && $7 >= 16 && $7 <= 29 {print $9}' kids_shares.ods our_savings.gnumeric photos pax> ls -ald kids_shares.ods our_savings.gnumeric photos -rw-r--r-- 1 pax pax 51005 Jan 29 19:39 kids_shares.ods -rw-r--r-- 1 pax pax 2275 Jan 28 14:48 our_savings.gnumeric drwxrwxrwx 130 pax pax 4096 Jan 29 21:47 photos You can see that the dates match for the given files. One thing to watch out for: if the file is recent, it will have a time in column 8. Beyond some age, ls starts putting the year in there. I have the vague recollection that it's somewhere around the six-month-old border but I'm not absolutely certain. You'll need to cater for that as well.
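A hedged alternative that avoids parsing ls output altogether: GNU/BSD find can filter on modification time directly (the dates and year here are examples; adjust to your range).

```sh
# Files in the current directory modified from Jan 16 00:00 up to (but not including) Jan 21 00:00
find . -maxdepth 1 -type f -newermt "2013-01-16" ! -newermt "2013-01-21" -print
```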
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14702783', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1145138/']}
jdg_82933
stackexchange
llm_judgeable_groundtruth_similarity
23145
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've found in my solution book multiple references to this "dimension theorem" but have been unlucky looking for a more extensive description of this. When does this hold? Are there any other hypothesis needed? Thanks. Now provide the response and nothing else.
The Dimension Theorem is often called the "Rank Nullity Theorem", so you may have more luck finding it under that name. The result holds for any linear transformation and vector spaces; it's not even restricted to finite-dimensional spaces, so long as your sum is taken to be a sum of cardinals and you assume the Axiom of Choice (so that all vector spaces have bases). The Dimension Theorem, aka Rank Nullity Theorem, states: Dimension Theorem. If $\mathbf{V}$ and $\mathbf{W}$ are any vector spaces (over the same field), and $T$ is any linear transformation from $\mathbf{V}$ to $\mathbf{W}$, then $\dim(\mathrm{ker} T) + \dim(\mathrm{Im} T) = \dim(\mathbf{V})$ where the sum is a sum of cardinals. The idea is to start with a basis $\beta$ for $\mathrm{ker} T$, and then extend it to a basis $\beta\cup\gamma$ for all of $\mathbf{V}$. Then you show that the image of $\gamma$ under $T$ is linearly independent: if $\mathbf{v}_1,\ldots,\mathbf{v}_n\in\gamma$ are such that $$\alpha_1T(\mathbf{v}_1)+\cdots+\alpha_nT(\mathbf{v}_n) = \mathbf{0}$$ in $\mathbf{W}$, then you can rewrite this as $$T(\alpha_1\mathbf{v}_1+\cdots+\alpha_n\mathbf{v}_n) = \mathbf{0},$$ hence $\alpha_1\mathbf{v}_1+\cdots+\alpha_n\mathbf{v}_n\in\mathrm{ker}(T)$. That means that this linear combination of vectors of $\gamma$ lies in the span of $\beta$, but since $\beta\cup\gamma$ is linearly independent, the only way this can happen is if $\alpha_1\mathbf{v}_1+\cdots+\alpha_n\mathbf{v}_n=\mathbf{0}$; this is a linear combination of vectors in $\gamma$, which is linearly independent, so $\alpha_1=\cdots=\alpha_n=0$. This proves that $$T(\gamma) = \{T(\mathbf{v})\mid \mathbf{v}\in\gamma\}$$ is a linearly independent subset of $\mathbf{W}$. It is now easy to show that $\mathrm{Im}(T) = \mathrm{span}T(\beta\cup\gamma) = \mathrm{span}(T(\gamma))$, so that $\dim(\mathrm{Im}(T)) = |\gamma|$. So you have that $$\dim\mathbf{V} = |\beta\cup\gamma| = |\beta|+|\gamma| = \dim(\mathrm{ker}\; T) + \dim(\mathrm{Im}\; T),$$ as desired.
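As a concrete finite-dimensional illustration (not part of the original answer, just a sanity check of the statement), take the projection of $\mathbb{R}^3$ onto its first two coordinates:

```latex
% T : R^3 -> R^2, projection onto the first two coordinates.
\[
  T(x,y,z) = (x,y), \qquad
  \ker T = \{(0,0,z) : z \in \mathbb{R}\}, \qquad
  \mathrm{Im}\, T = \mathbb{R}^2,
\]
% so the dimensions add up exactly as the theorem predicts:
\[
  \dim(\ker T) + \dim(\mathrm{Im}\, T) = 1 + 2 = 3 = \dim(\mathbb{R}^3).
\]
```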
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/23145', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/6427/']}
jdg_82934
stackexchange
llm_judgeable_groundtruth_similarity
192703
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've found that image: and cannot find a reason why there is page directory ? I think that it could save time, if the process (or processes), especially the CR3 register, points straight to the page table. Now provide the response and nothing else.
The MMU makes an arbitrary mapping of 32-bit virtual addresses to 32-bit physical addresses, with 4 kB granularity. The 4 GB address range is divided into 1 million 4 kB pages. You could use a single flat array with 1 million page table entries, occupying 4 MB (with 32 bits per PTE). You can also use "nested" page tables, which reduce the memory footprint of the mapping tables by marking some first-level entries as unallocated. Imagine when the first 386 computers were sold with 4 MB of RAM: where would you put a flat page table? Now, 64-bit computers would need impossibly large page tables if these tables were not multilevel (4 levels on x86_64, IIRC). An additional refinement is that page tables are per-application. Each application can freely use the whole virtual address range, and the OS can decide which pages are exclusive and which are shared (for example, a dynamic library's code could be accessible from several programs). So parts of the second-level page tables can be shared between applications, while the first level is separate for each task. Traversing several tables takes time, so the CPU keeps the most recently used page translations in "TLBs" (Translation Lookaside Buffers), and sometimes some intermediate page table entries are also cached, so that if software uses contiguous memory, fewer accesses to RAM are usually needed.
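A small sketch (C, purely illustrative) of how a 32-bit virtual address is split in the classic two-level x86 scheme the figure shows: 10 bits of page-directory index, 10 bits of page-table index, and 12 bits of page offset.

```c
#include <stdint.h>
#include <stdio.h>

/* Classic 32-bit x86 paging: 1024-entry page directory, 1024-entry page tables, 4 KiB pages. */
static void split_virtual_address(uint32_t vaddr)
{
    uint32_t dir_index   = (vaddr >> 22) & 0x3FF;  /* which page-directory entry (CR3 points at the directory) */
    uint32_t table_index = (vaddr >> 12) & 0x3FF;  /* which entry inside the selected page table */
    uint32_t offset      =  vaddr        & 0xFFF;  /* byte offset within the 4 KiB page */

    printf("vaddr 0x%08x -> PDE %u, PTE %u, offset 0x%03x\n",
           (unsigned)vaddr, (unsigned)dir_index, (unsigned)table_index, (unsigned)offset);
}

int main(void)
{
    split_virtual_address(0x00403025u);  /* arbitrary example address */
    return 0;
}
```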
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/192703', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/69723/']}
jdg_82935
stackexchange
llm_judgeable_groundtruth_similarity
43515442
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I really need to have something like this: interface IReadableVar{ object Value { get; }}interface IWritableVar{ object Value { set; }}interface IReadableWritableVar : IReadableVar, IWritableVar{} However when I try to use IReadableWritableVar.Value I get compile errors unless I explicitly cast to base interface, like here: static void Main(){ IReadableWritableVar var = null; var t = var.Value; // <-- CS0229: Ambiguity between 'IReadableVar.Value' and 'IWritableVar.Value' var.Value = null; // <-- CS0229: Ambiguity between 'IReadableVar.Value' and 'IWritableVar.Value' var v = ((IReadableVar)var).Value; // compiles fine ((IWritableVar)var).Value = null; // compiles fine} Why do I get these errors although everything should be clear to the compiler? Is there any way to fix this problem other than casting (hundreds of places in the application)? Update: it was suggested this is a dupe of Implementing 2 Interfaces with 'Same Name' Properties but this is slightly different as in the other case there's no inheritance in interfaces. Anyway, the problem is solved now - see accepted answer. Now provide the response and nothing else.
A possible workaround is to modify your interface IReadableWritableVar like this: interface IReadableWritableVar : IReadableVar, IWritableVar { new object Value { get; set; } } But keep in mind that a valid implementation should be: class ReadableWritableVar : IReadableWritableVar { public object Value { get { throw new NotImplementedException(); } set { throw new NotImplementedException(); } } object IWritableVar.Value { set { throw new NotImplementedException(); } } object IReadableVar.Value { get { throw new NotImplementedException(); } } } A more concrete example: class ReadableWritableVar : IReadableWritableVar { public object Value { get { return ((IReadableVar)this).Value; } set { ((IWritableVar)this).Value = value; } } object _val; object IWritableVar.Value { set { _val = value; } } object IReadableVar.Value => _val; } Or even better: class ReadableWritableVar : IReadableWritableVar { public object Value { get; set; } object IWritableVar.Value { set { Value = value; } } object IReadableVar.Value => Value; }
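A short usage sketch (variable names are illustrative) showing that the re-declared Value member removes the CS0229 ambiguity from the question, while the base interfaces stay reachable through casts:

```csharp
IReadableWritableVar v = new ReadableWritableVar();

v.Value = 42;                   // binds to IReadableWritableVar.Value, no ambiguity error
object current = v.Value;       // likewise unambiguous

((IWritableVar)v).Value = 43;   // explicit interface members remain reachable via casts
object viaRead = ((IReadableVar)v).Value;
```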
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43515442', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1256553/']}
jdg_82936
stackexchange
llm_judgeable_groundtruth_similarity
6462439
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What's the difference between long long and long? And they both don't work with 12 digit numbers (600851475143), am I forgetting something? #include <iostream>using namespace std;int main(){ long long a = 600851475143;} Now provide the response and nothing else.
Going by the standard, all that's guaranteed is: int must be at least 16 bits long must be at least 32 bits long long must be at least 64 bits On major 32-bit platforms: int is 32 bits long is 32 bits as well long long is 64 bits On major 64-bit platforms: int is 32 bits long is either 32 or 64 bits long long is 64 bits as well If you need a specific integer size for a particular application, rather than trusting the compiler to pick the size you want, #include <stdint.h> (or <cstdint> ) so you can use these types: int8_t and uint8_t int16_t and uint16_t int32_t and uint32_t int64_t and uint64_t You may also be interested in #include <stddef.h> (or <cstddef> ): size_t ptrdiff_t
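Regarding the literal in the question: older compilers (pre-C++11 rules) may reject or warn about an unsuffixed 600851475143 because they try to fit it into long first, so spelling the suffix explicitly sidesteps the issue. A minimal sketch:

```cpp
#include <iostream>

int main()
{
    long long a = 600851475143LL;   // the LL suffix makes the literal's type explicitly long long
    std::cout << a << '\n';
    return 0;
}
```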
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6462439', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/807547/']}
jdg_82937
stackexchange
llm_judgeable_groundtruth_similarity
1031015
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am forced to use VS2008 for the bulk of my projects at work, but whenever the odd text file needs editing I use Vim. Now I know that there are plugins and whatnot that can make VIM work like an IDE, so I am wondering if anyone actually uses it as an IDE? EDIT: For those of you who think you speak for the masses in saying that Vim should not be used as an IDE, please consider that IDE features are the number one feature request on the official feature request list on vim.org . Now provide the response and nothing else.
Vim is an amazing piece of software, but pretty messy too due to its age. Vi was released in 1976 and Vim was released in 1991. For example, Vim offers quite a few different ways to do text-completion, from simple keyword completion to its "omni" completion system. On top of that, some plugins choose to work with the inbuilt functionality, while others just replace it wholesale. Hopefully the following suggestions get you started though. IDE with Vim You may want to try this new patch for Vim which allows Vim to be used inside Visual Studio as the editor: ViVim Vundle Firstly, install the Vundle plugin manager for Vim! It works very well and makes managing Vim plugins easy: it handles installation, updates and removal. For example, your .vimrc now just contains: " === Plugins! === Plugin 'junegunn/fzf' Plugin 'scrooloose/nerdtree' Plugin 'w0rp/ale' ... and the :PluginInstall / :PluginUpdate commands will install or update them. Plugins for a Vim IDE The following Vim scripts give Vim more of an IDE feel. Keep in mind that Vim has a huge number of features built in, so take time to learn those (which is an ongoing journey) before loading up 20 plugins. The highest-impact plugins for me are fzf and ALE. You'll want to install fzf and ripgrep. Navigation: FZF - Favorite plugin, awesome filesystem navigation and text search Nerd Tree - Filesystem navigation Command-T - Search a project by filename to open; I would recommend FZF instead CtrlP - An alternative to Command-T, fuzzy file and buffer searching. Generally slower, but doesn't require compilation Tag Bar - Code navigation by functions Bookmarking - Bookmarks for Vim (my own plugin :)) Text Completion: delimitMate - Automatic closing of parentheses, braces, etc. tcomment - Easy comment/uncomment source code commands Ultisnips - Great Vim snippets system YouCompleteMe - Code completion, lots of features neocomplete - Slightly simpler code completion than YCM I personally find code-completion plugins too much and just rely on Vim's built-in CTRL-N text-completion feature; it's up to you, but remember CTRL-N! Vim's built-in completion system extends beyond that, with different completion modes such as filename completion with CTRL-X CTRL-F or "omni-completion", which tries to offer file-type-specific, context-dependent completion through CTRL-X CTRL-O. Omni-completion requires file-type-specific plugins; the vim-go package for Golang supports it. Formatting: tabular - Align text easily vim-surround - Quickly surround some text (i.e., brackets, tags...) Just awesome: ALE - Live syntax checking for many languages; supports Vim 8's new features such as asynchronous jobs to ensure it doesn't freeze up Vim while running. fugitive - Git within Vim: diffs, blame, etc. gitgutter - Live diff against the git-committed version of the file YankRing - Easy access to previously copied or deleted text objects Better GUI: Airline - Easier-to-read status line with more useful information Gundo - Visualize Vim undo history as a tree (my favorite; make sure you turn on persistent undo in Vim) Color schemes: solarized - Great color scheme Vim Distributions Rather than go through the setup and configuration yourself, you can use the following projects to get going quickly with a more IDE-like Vim. The two projects below include many of the plugins I mention above: Janus spf13 I recommend you don't use them though. You'll learn much more if you take the time to configure Vim yourself and install plugins in a staggered process to learn each one well.
Vim Plugin Guides VimAwesome can be a good place to browse for Vim plugins and find useful and popular ones. Vim Patches In addition to those scripts, you may want to look at some of the following patches for Vim. I haven't used them, so I'm not sure of the quality, but most of them look quite promising. You can view all the patches here; the ones that make Vim more of an IDE are: Code Check - On-the-fly code checking ( note : Syntastic is a better choice these days). Clewn - Allows debugging and stepping through the code in Vim using GDB. With those scripts and patches installed, you should have something in Vim pretty close in features to Visual Studio or Eclipse.
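A minimal, hedged .vimrc sketch showing the standard Vundle boilerplate that the plugin list above slots into (it assumes Vundle has been cloned to ~/.vim/bundle/Vundle.vim):

```vim
" Minimal Vundle setup
set nocompatible
filetype off

set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()

Plugin 'VundleVim/Vundle.vim'   " let Vundle manage itself
Plugin 'junegunn/fzf'
Plugin 'scrooloose/nerdtree'
Plugin 'w0rp/ale'

call vundle#end()
filetype plugin indent on

" Afterwards, run :PluginInstall inside Vim to fetch the listed plugins.
```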
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1031015', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/91414/']}
jdg_82938
stackexchange
llm_judgeable_groundtruth_similarity
26137325
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In C++11 we can do in-class initialization using a "brace-or-equal-initializer" (words from the standard) like this: struct Foo{ /*explicit*/ Foo(int) {}};struct Bar{ Foo foo = { 42 };}; But if we un-comment explicit , it no longer compiles. GCC 4.7 and 4.9 say this: error: converting to ‘Foo’ from initializer list would use explicit constructor ‘Foo::Foo(int)’ I found this surprising. Is it really the intention of the C++11 standard that this code doesn't compile? Removing the = fixes it: Foo foo { 42 }; but I personally find this harder to explain to people who have been used to the form with = for decades, and since the standard refers to a "brace-or-equal-initializer" it's not obvious why the good old way doesn't work in this scenario. Now provide the response and nothing else.
I can't explain the rationale behind this, but I can repeat the obvious. I found this surprising. Is it really the intention of the C++11 standard that this code doesn't compile? §13.3.1.7 In copy-list-initialization, if an explicit constructor is chosen, the initialization is ill-formed. Removing the = fixes it: Foo foo { 42 }; but I personally find this harder to explain to people who have been used to the form with = for decades, and since the standard refers to a "brace-or-equal-initializer" it's not obvious why the good old way doesn't work in this scenario. Foo foo { 42 } is direct initialization , whereas the equal sign (with braces) makes it copy-list-initialization . Another answer reasons that because compilation fails for copy-initialization (equal sign without braces), then it shouldn't be surprising that it also fails for copy-list-initialization, but the two fail for different reasons. cppreference: Direct-initialization is more permissive than copy-initialization: copy-initialization only considers non-explicit constructors and user-defined conversion functions, while direct-initialization considers all constructors and implicit conversion sequences. And their page on the explicit specifier : Specifies constructors and (since C++11) conversion operators that don't allow implicit conversions or copy-initialization. On the other hand, for copy-list-initialization: T object = { arg1 , arg2 , ... }; (10) 10) on the right-hand-side of the equals sign (similar to copy-initialization) Otherwise, the constructors of T are considered, in two phases: If the previous stage does not produce a match, all constructors of T participate in overload resolution against the set of arguments that consists of the elements of the braced-init-list, with the restriction that only non-narrowing conversions are allowed. If this stage produces an explicit constructor as the best match for a copy-list-initialization, compilation fails (note, in simple copy-initialization, explicit constructors are not considered at all) As discussed in What could go wrong if copy-list-initialization allowed explicit constructors? , the compilation fails because the explicit constructor is selected but is not allowed to be used.
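A compact sketch of the two workarounds when the constructor must stay explicit: drop the = so the member initializer uses direct-list-initialization, or initialize the member in a constructor instead.

```cpp
struct Foo
{
    explicit Foo(int) {}
};

struct Bar
{
    Foo foo { 42 };       // direct-list-initialization: explicit constructors are considered
};

struct Baz
{
    Baz() : foo(42) {}    // or do it in the member initializer list
    Foo foo;
};
```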
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26137325', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4323/']}
jdg_82939
stackexchange
llm_judgeable_groundtruth_similarity
25889268
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to run a function in parallel for multiple files and want all of them to terminate before a point. For Example:There is a loop def main(): for item in list: function_x(item) function_y(list) Now what I want is that this function_x should run in parallel for all items. But this function should be executed for all items before my function_y is called. I am planning to use celery for this. but can not understand how to do this. Now provide the response and nothing else.
Here is my final test code. All I needed to do was use the multiprocessing library (note that main() still has to be invoked, e.g. under an if __name__ == '__main__' guard): from multiprocessing import Process from time import sleep Pros = [] def function_x(i): for j in range(0,5): sleep(3) print i def function_y(): print "done" def main(): for i in range(0,3): print "Thread Started" p = Process(target=function_x, args=(i,)) Pros.append(p) p.start() # block until all the threads finish (i.e. block until all function_x calls finish) for t in Pros: t.join() function_y()
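For readers on Python 3, a hedged sketch of the same pattern with the missing entry point added (function names kept from the answer; the sleeps just simulate work):

```python
from multiprocessing import Process
from time import sleep


def function_x(i):
    # Simulate per-item work.
    for _ in range(5):
        sleep(3)
        print(i)


def function_y():
    print("done")


def main():
    processes = []
    for i in range(3):
        print("Process started")
        p = Process(target=function_x, args=(i,))
        processes.append(p)
        p.start()

    # Block until every function_x call has finished...
    for p in processes:
        p.join()

    # ...and only then run function_y.
    function_y()


if __name__ == "__main__":
    main()
```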
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25889268', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3688148/']}
jdg_82940