stackexchange
llm_judgeable_groundtruth_similarity
167280
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a command to list system processes by memory usage:

ps -A --sort -rss -o comm,pmem

which lists a table like

COMMAND         %MEM
firefox         28.2
chrome           5.4
compiz           4.8
atom             2.5
chrome           2.3
Xorg             2.3
skype            2.2
chrome           2.0
chrome           1.9
atom             1.9
nautilus         1.8
hud-service      1.5
evince           1.3

I would like to get the total memory share per program instead of per process of the same program, so I would get output like

COMMAND         %MEM
firefox         28.2
chrome          11.6
compiz           4.8
atom             4.4
Xorg             2.3
skype            2.2
nautilus         1.8
hud-service      1.5
evince           1.3

I thought about using awk, which I don't know much about. I ended up with something like:

ps -A --sort -rss -o comm,pmem | awk -F "\t" '{processes[$0] += $1;}{Endfor(i in processes) { print i,"\t",processes[i];}}'

But it didn't work. How can I correct this? Now provide the response and nothing else.
processes[$0] += $1 uses the whole line as the key of your associative array, and the whole line is not unique. You must use $1, the command name, as the key. Try:

$ ps -A --sort -rss -o comm,pmem | awk '
    NR == 1 { print; next }
    { a[$1] += $2 }
    END {
        for (i in a) {
            printf "%-15s\t%s\n", i, a[i];
        }
    }'

If you want to sort the output by the second field, try:

$ ps -A --sort -rss -o comm,pmem | awk '
    NR == 1 { print; next }
    { a[$1] += $2 }
    END {
        for (i in a) {
            printf "%-15s\t%s\n", i, a[i] | "sort -rnk2";
        }
    }'
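You can check the summing logic without running ps at all by feeding awk a fixed sample; the rows below are invented for illustration:

```shell
# Invented "comm pmem" rows standing in for the ps output.
printf 'firefox 28.2\nchrome 5.4\nchrome 2.0\natom 2.5\n' |
awk '{ a[$1] += $2 }                               # sum %MEM per command name
     END { for (i in a) printf "%s %s\n", i, a[i] }' |
sort -rnk2                                         # sort here instead of inside awk
```

Sorting outside awk, as in this sketch, keeps the awk program simpler than piping from inside the END block.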
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/167280', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/15519/']}
jdg_85841
stackexchange
llm_judgeable_groundtruth_similarity
2303
Below is a question asked on the forum scicomp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In PETSc, there are many xxxSetyyy methods, e.g. MatSetLocalToGlobalMapping(A,rmap,cmap). I wonder whether the contents that rmap and cmap (or generally 'yyy', the thing set on 'xxx') point to are copied somewhere into A (or 'xxx'). In other words, may I destroy 'yyy' right after setting it on 'xxx', or must I keep it alive until I am finished using it through 'xxx'? Other examples include KSPSetPC, PCCompositeAddPC(PC pc, PCType type), ISCreateGeneral, and PCASMSetLocalSubdomains. I found that for PCASMSetLocalSubdomains the IS objects defining the subdomains are copied, so we can destroy them afterwards. On the other hand, ISCreateGeneral takes a parameter that lets the user specify a PetscCopyMode (Own_Pointer or Copy_Values). Is there any general rule for these things? Thanks! Now provide the response and nothing else.
When there is a PetscCopyMode parameter, the behavior is explicit. We would like to be explicit in every instance, but that would become very cumbersome in the interface. Create() gives back a new reference, whereas Get() returns a borrowed reference that should be given back using Restore(). The exceptions come up in Set() methods. They should all reference their objects, and most do, such as KSPSetPC(); however, since this change was made relatively recently (and sometimes does not make as much sense), there are a few exceptions left. They will all eventually be documented/fixed. Please report any inconsistencies. The Get() methods generally return a borrowed reference since it would be so cumbersome to destroy the object each time.
{}
{'log_upvote_score': 4, 'links': ['https://scicomp.stackexchange.com/questions/2303', 'https://scicomp.stackexchange.com', 'https://scicomp.stackexchange.com/users/491/']}
jdg_85842
stackexchange
llm_judgeable_groundtruth_similarity
477463
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm pretty new to system administration, and I've been trying to get more familiar with tools like netstat. If I run netstat -n on my server, I see this line:

tcp        0      0 xxx.xxx.xxx.xxx:44573   xxx.xxx.xxx.xxx:443     ESTABLISHED

However, I'm using iptables with a default policy of DROP, and 44573 isn't one of the ports I allow traffic to. My rules look like this in iptables:

Chain INPUT (policy DROP)
target     prot opt source     destination
ACCEPT     tcp  --  anywhere   anywhere     tcp dpt:pcsync-https state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp dpt:21022 state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp dpt:http state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp dpt:https state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp dpt:ftp state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp dpts:65500:65534
ACCEPT     tcp  --  anywhere   anywhere     tcp dpt:webcache state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp spt:http state ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp spt:https state ESTABLISHED
ACCEPT     icmp --  anywhere   anywhere     icmp echo-reply
ACCEPT     icmp --  anywhere   anywhere     icmp echo-request
ACCEPT     all  --  anywhere   anywhere
ACCEPT     udp  --  anywhere   anywhere     udp spt:domain
ACCEPT     tcp  --  anywhere   anywhere     tcp dpt:smtp state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp dpt:http limit: avg 25/min burst 100

Chain FORWARD (policy DROP)
target     prot opt source     destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source     destination
ACCEPT     tcp  --  anywhere   anywhere     tcp spt:pcsync-https state ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp spt:21022 state ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp spt:http state ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp spt:https state ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp spt:ftp state ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp spt:webcache state ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp dpt:http state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere     tcp dpt:https state NEW,ESTABLISHED
ACCEPT     icmp --  anywhere   anywhere     icmp echo-request
ACCEPT     icmp --  anywhere   anywhere     icmp echo-reply
ACCEPT     all  --  anywhere   anywhere
ACCEPT     udp  --  anywhere   anywhere     udp dpt:domain
ACCEPT     tcp  --  anywhere   anywhere     tcp spt:smtp state ESTABLISHED

Is anyone able to shed some light on this? Am I just interpreting this wrong? Now provide the response and nothing else.
The most likely explanation is that you're seeing an outbound connection from your machine to an HTTPS port. Port 44573 is an ephemeral port that a program on your machine was given when it made an outbound HTTPS request. netstat doesn't specifically show outbound versus inbound, but since you don't have any program listening on port 44573, port 44573 is in the ephemeral port range, and the remote machine's port (443) is not in the ephemeral port range (and is a port used for a common server protocol), you can be reasonably sure that it's an outbound connection.
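That reading can be sketched as a small script. The netstat row below is invented for illustration, and the 32768 cutoff assumes the common Linux ephemeral range (see /proc/sys/net/ipv4/ip_local_port_range on your system):

```shell
# One captured "netstat -n" row (addresses invented for illustration).
line="tcp 0 0 10.0.0.5:44573 93.184.216.34:443 ESTABLISHED"

# Field 4 is the local socket, field 5 the remote one; the port follows ':'.
# High local port + well-known remote port suggests an outbound connection.
echo "$line" | awk '{
    split($4, l, ":"); split($5, r, ":")
    if (l[2] >= 32768 && r[2] < 1024)
        print "likely outbound to service port " r[2]
    else
        print "likely inbound"
}'
```

On a real system, `ss -tnp` (or `netstat -ntp` as root) also shows which local process owns the socket, which settles the question directly.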
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/477463', 'https://serverfault.com', 'https://serverfault.com/users/57232/']}
jdg_85843
stackexchange
llm_judgeable_groundtruth_similarity
17797191
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been troubleshooting this complex query today, thinking it was a performance issue. It appears that the query is getting stuck in the 'Statistics' state; it has now been in that state for 1300 seconds. I've checked the indexes for the tables involved and I've optimized the table -- what could be causing this hang?

SELECT
    Import_Values.id,
    Import_Values.part_id,
    Import_Values.qty,
    Import_Values.note,
    Parts.partterminologyname,
    GROUP_CONCAT(BaseVehicle.YearID, ' ', Make.MakeName, ' ', Model.modelname, ' ', SubModel.SubModelName SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(EngineDesignation.EngineDesignationName) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(EngineVIN.EngineVINName) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(EngineBase.Liter) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(EngineBase.CC) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(EngineBase.CID) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(EngineBase.Cylinders) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(EngineBase.BlockType) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(EngineBase.EngBoreIn) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(EngineBase.EngBoreMetric) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(EngineBase.EngStrokeIn) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(EngineBase.EngStrokeMetric) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(FuelDeliveryType.FuelDeliveryTypeName) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(FuelDeliverySubType.FuelDeliverySubTypeName) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(FuelSystemControlType.FuelSystemControlTypeName) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(FuelSystemDesign.FuelSystemDesignName) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(Aspiration.AspirationName) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(CylinderHeadType.CylinderHeadTypeName) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(FuelType.FuelTypeName) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(IgnitionSystemType.IgnitionSystemTypeName) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(Mfr.MfrName) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(EngineVersion.EngineVersion) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(Valves.ValvesPerEngine) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(BedLength.BedLength) SEPARATOR ', '),
    GROUP_CONCAT(DISTINCT(BedLength.BedLengthMetric) SEPARATOR ', ')
FROM Import_Values
INNER JOIN BaseVehicle ON Import_Values.base_vehicle_id = BaseVehicle.BaseVehicleID
INNER JOIN Parts ON Import_Values.part_type_id = Parts.PartTerminologyID
INNER JOIN Make ON BaseVehicle.MakeID = Make.MakeID
INNER JOIN Model ON BaseVehicle.ModelID = Model.ModelID
INNER JOIN Vehicle ON Import_Values.base_vehicle_id = Vehicle.BaseVehicleID
INNER JOIN SubModel ON Vehicle.SubModelID = SubModel.SubModelID
INNER JOIN VehicleToEngineConfig ON Vehicle.VehicleID = VehicleToEngineConfig.VehicleID
INNER JOIN EngineConfig ON VehicleToEngineConfig.EngineConfigID = EngineConfig.EngineConfigID
INNER JOIN EngineDesignation ON EngineConfig.EngineDesignationID = EngineDesignation.EngineDesignationID
INNER JOIN EngineVIN ON EngineConfig.EngineVINID = EngineVIN.EngineVINID
INNER JOIN EngineBase ON EngineConfig.EngineBaseID = EngineBase.EngineBaseID
INNER JOIN FuelDeliveryConfig ON EngineConfig.FuelDeliveryConfigID = FuelDeliveryConfig.FuelDeliveryConfigID
INNER JOIN FuelDeliveryType ON FuelDeliveryConfig.FuelDeliveryTypeID = FuelDeliveryType.FuelDeliveryTypeID
INNER JOIN FuelDeliverySubType ON FuelDeliveryConfig.FuelDeliverySubTypeID = FuelDeliverySubType.FuelDeliverySubTypeID
INNER JOIN FuelSystemControlType ON FuelDeliveryConfig.FuelSystemControlTypeID = FuelSystemControlType.FuelSystemControlTypeID
INNER JOIN FuelSystemDesign ON FuelDeliveryConfig.FuelSystemDesignID = FuelSystemDesign.FuelSystemDesignID
INNER JOIN Aspiration ON EngineConfig.AspirationID = Aspiration.AspirationID
INNER JOIN CylinderHeadType ON EngineConfig.CylinderHeadTypeID = CylinderHeadType.CylinderHeadTypeID
INNER JOIN FuelType ON EngineConfig.FuelTypeID = FuelType.FuelTypeID
INNER JOIN IgnitionSystemType ON EngineConfig.IgnitionSystemTypeID = IgnitionSystemType.IgnitionSystemTypeID
INNER JOIN Mfr ON EngineConfig.EngineMfrID = Mfr.MfrID
INNER JOIN EngineVersion ON EngineConfig.EngineVersionID = EngineVersion.EngineVersionID
INNER JOIN Valves ON EngineConfig.ValvesID = Valves.Valvesid
INNER JOIN VehicleToBedConfig ON Vehicle.VehicleID = VehicleToBedConfig.VehicleID
INNER JOIN BedConfig ON VehicleToBedConfig.BedConfigID = BedConfig.BedConfigID
INNER JOIN BedLength ON BedConfig.BedLengthID = BedLength.BedLengthID
GROUP BY part_id

Now provide the response and nothing else.
I ran into the same problem recently: MySQL started to choke (stuck in the 'statistics' state) on queries with a lot of tables joined together. I found a good blog post explaining why this happens and how to solve it. Basically, at least in MySQL 5.5, the default value for the configuration parameter optimizer_search_depth is 62, which causes the query optimizer to take exponentially more time as the number of tables in the query increases. After a certain point it will start to take days or even longer just to plan the query. However, if you set optimizer_search_depth to 0 in your my.cnf, MySQL automatically chooses a suitable depth, but limits it to 7, which is not yet too complex. Apparently this issue has been fixed in MySQL 5.6, but I haven't tested that myself yet.
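A minimal way to try this on a running server, before touching my.cnf (changing the global value requires the SUPER privilege; the variable name is as given above):

```sql
-- Inspect the current search depth (62 by default on MySQL 5.5).
SHOW VARIABLES LIKE 'optimizer_search_depth';

-- 0 = let the optimizer pick a bounded depth automatically.
SET SESSION optimizer_search_depth = 0;

-- To persist it for all connections, put it in my.cnf instead:
-- [mysqld]
-- optimizer_search_depth = 0
```

Setting it per-session first lets you confirm the query actually starts executing before committing the change server-wide.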
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17797191', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2599946/']}
jdg_85844
stackexchange
llm_judgeable_groundtruth_similarity
12384704
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In a model there is a field validation:

validates :image_file_name, :format => { :with => %r{\.(gif|jpg|jpeg|png)$}i }

It looks pretty odd to me. I am aware that this is a regular expression, but I would like:

to know what exactly it means. Is %r{value} equal to /value/ ?
to be able to replace it with the normal Ruby regex operators /some regex/ or =~ . Is this possible?

Now provide the response and nothing else.
%r{} is equivalent to the /.../ notation, but allows you to have '/' in your regexp without having to escape it:

%r{/home/user}

is equivalent to:

/\/home\/user/

This is only a syntactic convenience, for legibility. Edit: Note that you can use almost any non-alphanumeric character pair instead of '{}'. These variants work just as well:

%r!/home/user!
%r'/home/user'
%r(/home/user)

Edit 2: Note that the %r{}x variant ignores whitespace, making complex regexps more readable. Example from GitHub's Ruby style guide:

regexp = %r{
  start         # some text
  \s            # white space char
  (group)       # first group
  (?:alt1|alt2) # some alternation
  end
}x
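So, to the second part of the question: %r{...} builds an ordinary Regexp object, and every regexp operator works on it unchanged. A quick sketch using the validation's pattern:

```ruby
# %r{...} and /.../ produce the same Regexp, so =~ and match? behave identically.
image_re = %r{\.(gif|jpg|jpeg|png)$}i

puts(image_re == /\.(gif|jpg|jpeg|png)$/i)  # prints true: same source, same options
puts(image_re =~ "photo.JPG")               # prints 5: index where ".JPG" matches
puts(image_re.match?("doc.pdf"))            # prints false: plain boolean test
```
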
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/12384704', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/468345/']}
jdg_85845
stackexchange
llm_judgeable_groundtruth_similarity
1371905
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My questions are about the reasoning made in the note http://folk.uib.no/st00895/MAT112-V12/unif-kont.pdf (which is in Norwegian). To prove that $f(x)=\frac{1}{x}$ is not uniformly continuous, the authors use the following "result" (Sats 2.12 in the note, which I translate below): Result 2.12: If for every $h>0$ we have that $|f(x+h)-f(x)|$ is unbounded on $I$, then $f$ is not uniformly continuous on $I$. Proof: The result follows directly from the definition of uniform continuity. In Example 2.14 (Eksempel 2.14), the authors look at $$|f(x+h)-f(x)|=\left|\frac{1}{x+h}-\frac{1}{x}\right|=\left|\frac{h}{x(x+h)}\right|.$$ They then claim that the above quantity is unbounded for every $h>0$, since $$\lim_{x\rightarrow 0}\left|\frac{h}{x(x+h)}\right|=\infty.$$ Question 1: Is it not possible to choose $h=x^2$, thereby obtaining $$\frac{h}{x(x+h)}=\frac{1}{\frac{x}{h}(x+h)}=\frac{1}{x(1/x+1)}=\frac{1}{1+x}\rightarrow 1 \text{ as } x\rightarrow 0?$$ Then $|f(x+h)-f(x)|$ is not unbounded, so we cannot use "Result 2.12". Is my argumentation correct? Is it OK to choose $h$ like I have done? Question 2: However, it seems to me that my argumentation is all you need to prove that $f(x)=\frac{1}{x}$ is not uniformly continuous on $(0,1)$. Here $x_1=x$, $x_2=x+x^2$, so that $|x_1-x_2|$ can be made arbitrarily small. However, $|f(x_1)-f(x_2)|=1$, which stays the same regardless of how small we make $|x_1-x_2|$. Is this correct? Now provide the response and nothing else.
To use the result you cite, $h$ must be a constant. It cannot depend on $x$. Your argument is that no matter how small $|x_1-x_2|$ is, $$\left|\frac{1}{x}-\frac{1}{x+x^2}\right|=\left|\frac{x^2}{x^2(x+1)}\right|=\left|\frac{1}{1+x}\right|$$ is close to $1$ as $x$ is close to zero. This will work, but you need to use the $\epsilon$-$\delta$ formalism to prove it (or use the result).
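One way the $\epsilon$-$\delta$ argument alluded to above could be written out (a sketch, with $\varepsilon=\tfrac12$ chosen for convenience):

```latex
% For \varepsilon = \tfrac12: given any \delta > 0, pick x \in (0,1) with
% x^2 < \delta and set x_1 = x,\; x_2 = x + x^2.
% Then |x_1 - x_2| = x^2 < \delta, yet
\left| f(x_1) - f(x_2) \right|
    = \left| \frac{1}{x} - \frac{1}{x + x^2} \right|
    = \frac{x^2}{x^2(1 + x)}
    = \frac{1}{1 + x}
    > \frac{1}{2},
% since 0 < x < 1. So no \delta works for \varepsilon = \tfrac12,
% and f(x) = 1/x is not uniformly continuous on (0,1).
```
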
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1371905', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/251368/']}
jdg_85846
stackexchange
llm_judgeable_groundtruth_similarity
32452025
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to allocate a new array of length N and fill it up by repeating a given array. The interface looks like this:

<T> T[] repeat(T[] array, int n);

To clarify what I mean, here is a small example:

String[] a = {"a", "b", "c"};
// b = {"a", "b", "c", "a", "b", "c", "a", "b", "c", "a"}
String[] b = repeat(a, 10);

Most programmers will come up with the following solution (a specific type was chosen for simplicity of array creation):

public String[] repeat(String[] array, int n) {
    String[] repeated = new String[n];
    for (int i = 0; i < n; i++) {
        repeated[i] = array[i % array.length];
    }
    return repeated;
}

Is there a faster way to do this? Now provide the response and nothing else.
I came up with this generic solution:

public static <T> T[] repeat(T[] arr, int newLength) {
    T[] dup = Arrays.copyOf(arr, newLength);
    for (long last = arr.length; last != 0 && last < newLength; last <<= 1) {
        System.arraycopy(dup, 0, dup, (int) last, (int) (Math.min(last << 1, newLength) - last));
    }
    return dup;
}

Theory

System.arraycopy is a native call. That makes it very fast, but it doesn't by itself mean it is the fastest way. Every other solution copies the array element by element; my solution copies larger blocks. Every iteration doubles the number of elements already in place, which means the loop runs at most log2(n) times.

Profiling reports

Input

TEST_ARRAY = { "a", "b", "c", "d", "e", "f" }
NEW_LENGTH = 10000

Here is my benchmark code to reproduce the results:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.stream.Stream;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Threads;
import org.openjdk.jmh.annotations.Warmup;

@Fork(3)
@BenchmarkMode(Mode.AverageTime)
@Measurement(iterations = 10, timeUnit = TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
@Threads(1)
@Warmup(iterations = 5, timeUnit = TimeUnit.NANOSECONDS)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class MyBenchmark {

    private static final String[] TEST_ARRAY = { "a", "b", "c", "d", "e", "f" };
    private static final int NEW_LENGTH = 10_000;

    @Benchmark
    public String[] testNativeCall() {
        String[] dup = Arrays.copyOf(TEST_ARRAY, NEW_LENGTH);
        for (int last = TEST_ARRAY.length; last != 0 && last < NEW_LENGTH; last <<= 1) {
            System.arraycopy(dup, 0, dup, last, Math.min(last << 1, NEW_LENGTH) - last);
        }
        return dup;
    }

    @Benchmark
    public String[] testLoopModulo() {
        String[] arr = new String[NEW_LENGTH];
        for (int i = 0; i < NEW_LENGTH; i++) {
            arr[i] = TEST_ARRAY[i % TEST_ARRAY.length];
        }
        return arr;
    }

    @Benchmark
    public String[] testArrayList() {
        List<String> initialLetters = Arrays.asList(TEST_ARRAY);
        List<String> results = new ArrayList<>();
        int indexOfLetterToAdd = 0;
        for (int i = 0; i < NEW_LENGTH; i++) {
            results.add(initialLetters.get(indexOfLetterToAdd++));
            if (indexOfLetterToAdd == initialLetters.size()) {
                indexOfLetterToAdd = 0;
            }
        }
        return results.toArray(new String[results.size()]);
    }

    @Benchmark
    public String[] testLoopReset() {
        String[] result = new String[NEW_LENGTH];
        for (int i = 0, j = 0; i < NEW_LENGTH && j < TEST_ARRAY.length; i++, j++) {
            result[i] = TEST_ARRAY[j];
            if (j == TEST_ARRAY.length - 1) {
                j = -1;
            }
        }
        return result;
    }

    @Benchmark
    public String[] testStream() {
        return Stream.iterate(TEST_ARRAY, x -> x)
                .flatMap(x -> Stream.of(TEST_ARRAY))
                .limit(NEW_LENGTH)
                .toArray(String[]::new);
    }
}

Results

Benchmark                   Mode  Cnt      Score      Error  Units
MyBenchmark.testNativeCall  avgt   30   4154,553 ±   11,242  ns/op
MyBenchmark.testLoopModulo  avgt   30  19273,717 ±  235,547  ns/op
MyBenchmark.testArrayList   avgt   30  71079,139 ± 2686,136  ns/op
MyBenchmark.testLoopReset   avgt   30  18307,368 ±  202,520  ns/op
MyBenchmark.testStream      avgt   30  68898,278 ± 2488,104  ns/op

As you can see, the native call method is the fastest way to repeat an array.

Additional Results

Further, I was asked to benchmark these methods with various inputs.
Input ranges

// Array size not fixed anymore - filled with random elements
SIZE       = { 100, 1000, 100000, 1000000 }
NEW_LENGTH = { 100, 1000, 100000, 1000000 }

That means there are SIZE x NEW_LENGTH tests, and here are the results:

Benchmark                   (NEW_LENGTH)   (SIZE)  Mode  Cnt        Score        Error  Units
MyBenchmark.testArrayList            100      100  avgt   30      706,274 ±      6,787  ns/op
MyBenchmark.testArrayList            100     1000  avgt   30      692,586 ±     15,076  ns/op
MyBenchmark.testArrayList            100   100000  avgt   30      685,214 ±      6,747  ns/op
MyBenchmark.testArrayList            100  1000000  avgt   30      685,333 ±      5,493  ns/op
MyBenchmark.testArrayList           1000      100  avgt   30     7170,897 ±     63,221  ns/op
MyBenchmark.testArrayList           1000     1000  avgt   30     7180,612 ±     93,280  ns/op
MyBenchmark.testArrayList           1000   100000  avgt   30     6818,585 ±    197,859  ns/op
MyBenchmark.testArrayList           1000  1000000  avgt   30     6810,614 ±    139,456  ns/op
MyBenchmark.testArrayList         100000      100  avgt   30   597614,173 ±   6446,318  ns/op
MyBenchmark.testArrayList         100000     1000  avgt   30   580696,750 ±   5141,845  ns/op
MyBenchmark.testArrayList         100000   100000  avgt   30   598657,608 ±   5126,519  ns/op
MyBenchmark.testArrayList         100000  1000000  avgt   30   595529,027 ±   4981,095  ns/op
MyBenchmark.testArrayList        1000000      100  avgt   30  6836746,484 ±  38848,467  ns/op
MyBenchmark.testArrayList        1000000     1000  avgt   30  6745066,786 ±  57971,469  ns/op
MyBenchmark.testArrayList        1000000   100000  avgt   30  7130391,072 ±  50583,914  ns/op
MyBenchmark.testArrayList        1000000  1000000  avgt   30  8791342,042 ± 172323,938  ns/op
MyBenchmark.testLoopModulo           100      100  avgt   30      301,252 ±      1,195  ns/op
MyBenchmark.testLoopModulo           100     1000  avgt   30      301,988 ±      2,056  ns/op
MyBenchmark.testLoopModulo           100   100000  avgt   30      299,892 ±      1,776  ns/op
MyBenchmark.testLoopModulo           100  1000000  avgt   30      300,468 ±      2,569  ns/op
MyBenchmark.testLoopModulo          1000      100  avgt   30     3277,018 ±     14,880  ns/op
MyBenchmark.testLoopModulo          1000     1000  avgt   30     3275,648 ±     21,742  ns/op
MyBenchmark.testLoopModulo          1000   100000  avgt   30     3258,570 ±     27,360  ns/op
MyBenchmark.testLoopModulo          1000  1000000  avgt   30     3259,617 ±     28,747  ns/op
MyBenchmark.testLoopModulo        100000      100  avgt   30   321483,331 ±   4320,938  ns/op
MyBenchmark.testLoopModulo        100000     1000  avgt   30   326319,662 ±   2419,602  ns/op
MyBenchmark.testLoopModulo        100000   100000  avgt   30   327027,966 ±   3174,011  ns/op
MyBenchmark.testLoopModulo        100000  1000000  avgt   30   319201,057 ±   4472,220  ns/op
MyBenchmark.testLoopModulo       1000000      100  avgt   30  3053122,364 ±  31814,342  ns/op
MyBenchmark.testLoopModulo       1000000     1000  avgt   30  3134151,676 ± 108227,023  ns/op
MyBenchmark.testLoopModulo       1000000   100000  avgt   30  3220082,188 ±  43925,401  ns/op
MyBenchmark.testLoopModulo       1000000  1000000  avgt   30  3204777,236 ±  25365,542  ns/op
MyBenchmark.testLoopReset            100      100  avgt   30      159,828 ±      1,107  ns/op
MyBenchmark.testLoopReset            100     1000  avgt   30      125,461 ±      0,881  ns/op
MyBenchmark.testLoopReset            100   100000  avgt   30      129,912 ±      7,801  ns/op
MyBenchmark.testLoopReset            100  1000000  avgt   30      134,503 ±      7,602  ns/op
MyBenchmark.testLoopReset           1000      100  avgt   30     1809,207 ±     93,642  ns/op
MyBenchmark.testLoopReset           1000     1000  avgt   30     1728,705 ±     70,808  ns/op
MyBenchmark.testLoopReset           1000   100000  avgt   30     1354,887 ±      9,631  ns/op
MyBenchmark.testLoopReset           1000  1000000  avgt   30     1350,327 ±     15,886  ns/op
MyBenchmark.testLoopReset         100000      100  avgt   30   159680,209 ±   2477,183  ns/op
MyBenchmark.testLoopReset         100000     1000  avgt   30   162030,985 ±   1949,660  ns/op
MyBenchmark.testLoopReset         100000   100000  avgt   30   149299,890 ±   1516,486  ns/op
MyBenchmark.testLoopReset         100000  1000000  avgt   30   136059,242 ±   3090,410  ns/op
MyBenchmark.testLoopReset        1000000      100  avgt   30  1407369,992 ±  12979,717  ns/op
MyBenchmark.testLoopReset        1000000     1000  avgt   30  1447001,173 ±  14979,769  ns/op
MyBenchmark.testLoopReset        1000000   100000  avgt   30  1463913,706 ±  12564,617  ns/op
MyBenchmark.testLoopReset        1000000  1000000  avgt   30  1404701,860 ±  21587,436  ns/op
MyBenchmark.testNativeCall           100      100  avgt   30       58,306 ±      0,669  ns/op
MyBenchmark.testNativeCall           100     1000  avgt   30       57,441 ±      0,590  ns/op
MyBenchmark.testNativeCall           100   100000  avgt   30       57,595 ±      0,386  ns/op
MyBenchmark.testNativeCall           100  1000000  avgt   30       60,196 ±      1,995  ns/op
MyBenchmark.testNativeCall          1000      100  avgt   30      450,808 ±      8,259  ns/op
MyBenchmark.testNativeCall          1000     1000  avgt   30      558,079 ±      5,724  ns/op
MyBenchmark.testNativeCall          1000   100000  avgt   30      557,246 ±      4,873  ns/op
MyBenchmark.testNativeCall          1000  1000000  avgt   30      565,005 ±      9,696  ns/op
MyBenchmark.testNativeCall        100000      100  avgt   30    73074,811 ±   3332,432  ns/op
MyBenchmark.testNativeCall        100000     1000  avgt   30    70970,603 ±   2693,394  ns/op
MyBenchmark.testNativeCall        100000   100000  avgt   30    69907,864 ±   2945,072  ns/op
MyBenchmark.testNativeCall        100000  1000000  avgt   30    74041,205 ±   2599,841  ns/op
MyBenchmark.testNativeCall       1000000      100  avgt   30   790679,353 ±  15672,480  ns/op
MyBenchmark.testNativeCall       1000000     1000  avgt   30   812660,137 ±  25490,999  ns/op
MyBenchmark.testNativeCall       1000000   100000  avgt   30   838094,181 ±  12374,194  ns/op
MyBenchmark.testNativeCall       1000000  1000000  avgt   30   925567,535 ±  19091,943  ns/op
MyBenchmark.testStream               100      100  avgt   30      810,262 ±     54,519  ns/op
MyBenchmark.testStream               100     1000  avgt   30     1344,998 ±     14,792  ns/op
MyBenchmark.testStream               100   100000  avgt   30   159901,562 ±   3453,210  ns/op
MyBenchmark.testStream               100  1000000  avgt   30  1407506,571 ± 419985,287  ns/op
MyBenchmark.testStream              1000      100  avgt   30     6464,099 ±    169,665  ns/op
MyBenchmark.testStream              1000     1000  avgt   30     5869,457 ±    260,297  ns/op
MyBenchmark.testStream              1000   100000  avgt   30   165394,656 ±   4943,362  ns/op
MyBenchmark.testStream              1000  1000000  avgt   30  1352900,959 ± 412849,634  ns/op
MyBenchmark.testStream            100000      100  avgt   30   423531,274 ±   3944,801  ns/op
MyBenchmark.testStream            100000     1000  avgt   30   391727,181 ±   5341,826  ns/op
MyBenchmark.testStream            100000   100000  avgt   30   427462,700 ±   7517,953  ns/op
MyBenchmark.testStream            100000  1000000  avgt   30   981304,769 ±  10206,849  ns/op
MyBenchmark.testStream           1000000      100  avgt   30  4528465,859 ±  72959,405  ns/op
MyBenchmark.testStream           1000000     1000  avgt   30  4121720,516 ±  60283,781  ns/op
MyBenchmark.testStream           1000000   100000  avgt   30  5920334,609 ±  63051,631  ns/op
MyBenchmark.testStream           1000000  1000000  avgt   30  6227476,270 ±  84066,493  ns/op

As expected, the native call is always ahead (about 2 times faster than the loop version).
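A quick sanity check of the doubling version, runnable outside JMH (the class name RepeatDemo is just for illustration):

```java
import java.util.Arrays;

public class RepeatDemo {
    // Doubling copy: each System.arraycopy doubles the filled prefix,
    // so only about log2(newLength / arr.length) native calls are needed.
    public static <T> T[] repeat(T[] arr, int newLength) {
        T[] dup = Arrays.copyOf(arr, newLength);
        for (int last = arr.length; last != 0 && last < newLength; last <<= 1) {
            System.arraycopy(dup, 0, dup, last, Math.min(last << 1, newLength) - last);
        }
        return dup;
    }

    public static void main(String[] args) {
        String[] b = repeat(new String[] { "a", "b", "c" }, 10);
        System.out.println(Arrays.toString(b));
        // prints [a, b, c, a, b, c, a, b, c, a]
    }
}
```

The `last != 0` guard matters: without it, an empty input array would loop forever, since doubling zero never reaches newLength.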
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32452025', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4105457/']}
jdg_85847
stackexchange
llm_judgeable_groundtruth_similarity
287816
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question:

public class MyClass {
    public object Prop1 { get; set; }
    public object Prop2 { get; set; }
    public object Prop3 { get; set; }
}

Suppose I have an object myObject of MyClass and I need to reset its properties. Is it better to create a new object or to reassign each property? Assume I have no further use for the old instance.

myObject = new MyClass();

or

myObject.Prop1 = null;
myObject.Prop2 = null;
myObject.Prop3 = null;

Now provide the response and nothing else.
Instantiating a new object is always better; then you have one place to initialise the properties (the constructor) and can easily update it. Imagine you add a new property to the class: you would rather update the constructor than update a separate method that also re-initialises all the properties. Now, there are cases where you might want to re-use an object, e.g. one where a property is very expensive to re-initialise and you want to keep it. That is more specialist, however, and there you would have dedicated methods to reinitialise all the other properties. Even in that situation you would still sometimes want to create a new object.
{}
{'log_upvote_score': 7, 'links': ['https://softwareengineering.stackexchange.com/questions/287816', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/184809/']}
jdg_85848
stackexchange
llm_judgeable_groundtruth_similarity
24424377
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got a class that inherits from another class like so: class TestClass : public BaseClass I am wondering if it is possible to make this a test class using the TEST_CLASS macro or some other macro that is part of the Microsoft Unit Testing Framework for C++. I tried: class TEST_CLASS(TestClass : public BaseClass) But the IDE gives the error 'Error: expected either a definition or a tag name' and the compiler error is error C3861: '__GetTestClassInfo': identifier not found I know it's probably bad practice to inherit on a test class but it would make implementing the test easier. I am relatively new to C++ so I am wondering if it is something simple I have missed or if it's just not possible. Thanks, Now provide the response and nothing else.
There is one other option you didn't include, and others may be tripping over this question without knowing the solution. You can actually derive from any arbitrary type, as you can see by looking at the macro itself:

///////////////////////////////////////////////////////////////////////////
// Macro to define your test class.
// Note that you can only define your test class at namespace scope,
// otherwise the compiler will raise an error.
#define TEST_CLASS(className) \
ONLY_USED_AT_NAMESPACE_SCOPE class className : public ::Microsoft::VisualStudio::CppUnitTestFramework::TestClass<className>

As C++ supports multiple inheritance, you can easily derive by using code similar to the following:

class ParentClass
{
public:
    ParentClass();
    virtual ~ParentClass();
};

TEST_CLASS(MyTestClass), public ParentClass
{
};

Just remember that if you are working with resources you will need a virtual destructor so that it actually gets called. You will also have to call the initialize and cleanup methods directly if you are going to use them, because the static methods they create are not called automagically. Good luck, good testing!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24424377', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/512965/']}
jdg_85849
stackexchange
llm_judgeable_groundtruth_similarity
22157596
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When a user loads a page, it makes one or more ajax requests, which hit ASP.NET Web API 2 controllers. If the user navigates to another page, before these ajax requests complete, the requests are canceled by the browser. Our ELMAH HttpModule then logs two errors for each canceled request: Error 1: System.Threading.Tasks.TaskCanceledException: A task was canceled. at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult() at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__0.MoveNext()--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult() at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__2.MoveNext()--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Web.Http.Filters.AuthorizationFilterAttribute.<ExecuteAuthorizationFilterAsyncCore>d__2.MoveNext()--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at 
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult() at System.Web.Http.Controllers.ExceptionFilterResult.<ExecuteAsync>d__0.MoveNext() Error 2: System.OperationCanceledException: The operation was canceled. at System.Threading.CancellationToken.ThrowIfCancellationRequested() at System.Web.Http.WebHost.HttpControllerHandler.<WriteBufferedResponseContentAsync>d__1b.MoveNext()--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Web.Http.WebHost.HttpControllerHandler.<CopyResponseAsync>d__7.MoveNext()--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Web.Http.WebHost.HttpControllerHandler.<ProcessRequestAsyncCore>d__0.MoveNext()--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Web.TaskAsyncHelper.EndTask(IAsyncResult ar) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Looking at the stacktrace, I see that the exception is being thrown from here: 
https://github.com/ASP-NET-MVC/aspnetwebstack/blob/master/src/System.Web.Http.WebHost/HttpControllerHandler.cs#L413 My question is: How can I handle and ignore these exceptions? It appears to be outside of user code... Notes: I am using ASP.NET Web API 2 The Web API endpoints are a mix of async and non-async methods. No matter where I add error logging, I am unable to catch the exception in user code Global.asax Application_Error TaskScheduler.UnobservedTaskException ELMAH Error Filtering void ErrorLog_Filtering ( https://code.google.com/p/elmah/wiki/ErrorFiltering ) Now provide the response and nothing else.
This is a bug in ASP.NET Web API 2 and unfortunately, I don't think there's a workaround that will always succeed. We filed a bug to fix it on our side. Ultimately, the problem is that we return a cancelled task to ASP.NET in this case, and ASP.NET treats a cancelled task like an unhandled exception (it logs the problem in the Application event log). In the meantime, you could try something like the code below. It adds a top-level message handler that removes the content when the cancellation token fires. If the response has no content, the bug shouldn't be triggered. There's still a small possibility it could happen, because the client could disconnect right after the message handler checks the cancellation token but before the higher-level Web API code does the same check. But I think it will help in most cases. David config.MessageHandlers.Add(new CancelledTaskBugWorkaroundMessageHandler());class CancelledTaskBugWorkaroundMessageHandler : DelegatingHandler{ protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken) { HttpResponseMessage response = await base.SendAsync(request, cancellationToken); // Try to suppress response content when the cancellation token has fired; ASP.NET will log to the Application event log if there's content in this case. if (cancellationToken.IsCancellationRequested) { return new HttpResponseMessage(HttpStatusCode.InternalServerError); } return response; }}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/22157596', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3376511/']}
jdg_85850
stackexchange
llm_judgeable_groundtruth_similarity
1507405
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I created a simple class to benchmark some methods of mine. But is it accurate? I am kind of new to benchmarking, timing, et cetera, so thought I could ask for some feedback here. Also, if it is good, maybe somebody else can make use of it as well :) public static class Benchmark{ public static IEnumerable<long> This(Action subject) { var watch = new Stopwatch(); while (true) { watch.Reset(); watch.Start(); subject(); watch.Stop(); yield return watch.ElapsedTicks; } }} You can use it like this: var avg = Benchmark.This(() => SomeMethod()).Take(500).Average(); Any feedback? Does it look to be pretty stable and accurate, or have I missed something? Now provide the response and nothing else.
It is about as accurate as you can get for a simple benchmark. But there are some factors not under your control: load on the system from other processes state of the heap before/during the benchmark You could do something about that last point, a benchmark is one of the rare situations where calling GC.Collect can be defended. And you might call subject once beforehand to eliminate any JIT issues. But that requires calls to subject to be independent. public static IEnumerable<TimeSpan> This(Action subject){ subject(); // warm up GC.Collect(); // compact Heap GC.WaitForPendingFinalizers(); // and wait for the finalizer queue to empty var watch = new Stopwatch(); while (true) { watch.Reset(); watch.Start(); subject(); watch.Stop(); yield return watch.Elapsed; // TimeSpan }} For bonus, your class should check the System.Diagnostics.Stopwatch.IsHighResolution field . If it is off, you only have a very coarse (20 ms) resolution. But on an ordinary PC, with many services running in the background, it is never going to be very accurate.
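If it helps to see the shape of that measurement loop outside C#, here is a rough Python analogue of the generator pattern (names and the warm-up step are my own; Python has no one-to-one equivalent of Stopwatch or GC.Collect, so treat this as a sketch of the idea, not a port):

```python
import time
from itertools import islice

def benchmark(subject):
    """Yield one elapsed-time sample (in seconds) per call to subject."""
    subject()  # warm-up call, loosely analogous to eliminating JIT effects
    while True:
        start = time.perf_counter()
        subject()
        yield time.perf_counter() - start

# Usage mirrors Benchmark.This(() => SomeMethod()).Take(500).Average()
samples = list(islice(benchmark(lambda: sum(range(1000))), 500))
avg = sum(samples) / len(samples)
assert len(samples) == 500 and all(t >= 0 for t in samples)
```

The same caveats apply: load from other processes and allocator state still leak into the numbers, which is why the warm-up and a large sample count matter more than the timing call itself.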
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1507405', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/39321/']}
jdg_85851
stackexchange
llm_judgeable_groundtruth_similarity
2653938
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to prove that $$-2\log(2) = -2 + \sum_{n=1}^{\infty}\frac{1}{n(2n+1)}$$ I know that this sum is equal to $\phi(1/2)+\gamma$ where $\phi(x)$ is the digamma function and $\gamma$ is the Euler-Mascheroni constant, but I want to evaluate the sum without knowing it. Now provide the response and nothing else.
$$\frac{1}{2n(2n+1)}=\frac1{2n}-\frac1{2n+1}.$$$$\sum_{n=1}^\infty\frac1{n(2n+1)}=2\sum_{n=1}^\infty\left(\frac1{2n}-\frac1{2n+1}\right)=2\sum_{m=2}^\infty\frac{(-1)^m}{m}.$$Of course,$$\log2=-\sum_{m=1}^\infty\frac{(-1)^m}{m}.$$
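If you want a numerical sanity check before chasing the proof, the partial sums converge to $2-2\log 2$ quickly (a Python sketch; the tail of the series is $O(1/N)$, so the tolerance below is safe):

```python
import math

def partial_sum(N):
    """Sum of 1/(n(2n+1)) for n = 1..N."""
    return sum(1.0 / (n * (2 * n + 1)) for n in range(1, N + 1))

target = 2 - 2 * math.log(2)  # the claimed value of the full series
assert abs(partial_sum(100_000) - target) < 1e-4
```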
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2653938', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/483385/']}
jdg_85852
stackexchange
llm_judgeable_groundtruth_similarity
13882
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Background: When Ueno builds the fully faithful functor from Var/k to Sch/k he mentions that the variety $V$ can be identified with the rational points of $t(V)$ over $k$. I know how to prove this on affine everything and will work out the general case at some future time. The question that this got me thinking about was if $X$ is a $k$-scheme where $k$ is algebraically closed, then are the $k$-rational points of $X$ just the closed points? This is probably extremely well known, but I can't find it explicitly stated nor can I find a counterexample. For $k$ not algebraically closed, I can come up with examples where this is not true. So in general is there some relation between the closed points and rational points on schemes (everything over $k$)? This would give a bit more insight into what this functor does. It takes the variety and makes all the points into closed points of a scheme, then adds the generic points necessary to actually make it a legitimate scheme. General tangential thoughts on this are welcome as well. Now provide the response and nothing else.
If $k$ is algebraically closed and $X$ is a $k$-scheme locally of finite type, then the $k$-rational points are precisely the closed points. (See EGA 1971, Ch. I, Corollaire 6.5.3). More generally: if $k$ is a field and $X$ is a $k$-scheme locally of finite type, then $X$ is a Jacobson scheme (i.e. it is quasi-isomorphic to its underlying ultrascheme) and the closed points are precisely the points $x \in X$ such that $\kappa(x)|k$ is a finite extension. You should also confer the appendix of EGA 1971. There it is shown that for any field $k$ the category of $k$-schemes locally of finite type with morphisms locally of finite type is equivalent to the category of $k$-ultraschemes (a $k$-ultrascheme is locally the maximal spectrum of a $k$-algebra).
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/13882', 'https://mathoverflow.net', 'https://mathoverflow.net/users/14672/']}
jdg_85853
stackexchange
llm_judgeable_groundtruth_similarity
2167339
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working through the problems on project Euler and am not too certain if my understanding of the question is correct. Problem 8 is as follows: Find the greatest product of five consecutive digits in the 1000-digit number. I have taken this to mean the following: I need to find any five numbers that run consecutively in the 1000 digit number and then add these up to get the total. I am assuming that the size of the numbers could be anything, i.e. 1,2,3 or 12,13,14 or 123,124,124 or 1234,1235,1236 etc. Is my understanding of this correct, or have I misunderstood the question? Note: Please don't supply code or the solution, that I need to solve myself. Now provide the response and nothing else.
The number is: 73167176531330624919225119674426574742355349194934 96983520312774506326239578318016984801869478851843 85861560789112949495459501737958331952853208805511 12540698747158523863050715693290963295227443043557 66896648950445244523161731856403098711121722383113 62229893423380308135336276614282806444486645238749 30358907296290491560440772390713810515859307960866 70172427121883998797908792274921901699720888093776 65727333001053367881220235421809751254540594752243 52584907711670556013604839586446706324415722155397 53697817977846174064955149290862569321978468622482 83972241375657056057490261407972968652414535100474 82166370484403199890008895243450658541227588666881 16427171479924442928230863465674813919123162824586 17866458359124566529476545682848912883142607690042 24219022671055626321111109370544217506941658960408 07198403850962455444362981230987879927244284909188 84580156166097919133875499200524063689912560717606 05886116467109405077541002256983155200055935729725 71636269561882670428252483600823257530420752963450 The first five consecutive digits are: 73167. Their product is 7*3*1*6*7=882 The next five consecutive digits are: 31671. Their product is 3*1*6*7*1=126 The next five consecutive digits are: 16717. Their product is 1*6*7*1*7=294 And so on. Note the overlap. Now, find the five consecutive digits whose product is maximal over the whole 1000-digit number.
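Since you asked not to be given the solution, here is only the sliding-window mechanic sketched in Python (max_window_product is a made-up helper name, and the tests deliberately avoid the real 1000-digit input):

```python
def max_window_product(digits, k):
    """Greatest product of k consecutive digits in the string `digits`."""
    best = 0
    for i in range(len(digits) - k + 1):
        prod = 1
        for ch in digits[i:i + k]:
            prod *= int(ch)
        best = max(best, prod)
    return best

# The first window of the big number, "73167", gives 7*3*1*6*7 = 882.
assert max_window_product("73167", 5) == 882
# In "0123456789" the best window is the last one: 5*6*7*8*9 = 15120.
assert max_window_product("0123456789", 5) == 15120
```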
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2167339', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/105254/']}
jdg_85854
stackexchange
llm_judgeable_groundtruth_similarity
2205695
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to build a "class" (with properties and methods) upon which will be created a lot of instances? Now provide the response and nothing else.
The modern way is to use class , as introduced in ES6. class Movie { constructor(name) { this.name = name; // The "_" prefix is commonly used to denote "private" members. this._id = +new Date(); } getName() { return `${this.name} ${this._id}`; } setName(name) { this.name = name; }}const movie = new Movie('Beerfest');console.log(movie.getName()); The above example provides an equivalent interface to the original examples from 2010. Original answer from 2010: In "modern" JavaScript, there are three popular methods for defining objects. The first method is the classic method and is still popular because of its simplicity; however, it is discouraged by MDC in favor of the second method because of the inefficiency of having to redefine each function every time an instance of the object is created. // Constructor, methods and members all rolled up into one definitionvar Movie = function(name) { this.name = name; // Note that private members can be created using the closure property var _id = +(new Date()); this.getName = function() { return this.name + " " + _id; }; this.setName = function(name) { this.name = name; };};var m = new Movie("Beerfest"); The second method is a variation of and can be used interchangeably with the first. It is also useful for adding new methods to existing objects via the prototype property. Private members and methods are not possible in this form. // Constructor is separate from its methodsvar Movie = function(name) { this.name = name;}Movie.prototype.getName = function() { return name;};Movie.prototype.setName = function(name) { this.name = name;};var m = new Movie("Kill Bill"); The third method is to use the module pattern , which makes it possible to instantiate objects without having to use the new operator. 
var Movie = function (name) { var _id = +(new Date()); var privateMethod = function() { alert(_id); }; // All methods and members defined here are public return { name: name, getName: function() { return this.name + " " + _id; }, setName: function(name) { this.name = name; } };};var m = Movie("Stackoverflow: the movie"); Note that in the first and third methods, you can have private members and methods. But be aware that using this within private methods requires some extra care.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2205695', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/266863/']}
jdg_85855
stackexchange
llm_judgeable_groundtruth_similarity
4206543
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is an exercise in Serge Lang’s Algebra in the first chapter. I am wondering why q $\not\equiv$ 1 mod $p$ is assumed considering it is unnecessary. Indeed, if that is excluded from the requirements, then let $H_q$ and $H_p$ be the Sylow subgroups of orders $q$ and $p$ , respectively. Then they are cyclic and thus have trivial intersection. Since they have trivial intersection, the product of groups $H_qH_p$ (which is a group since $H_q$ is normal) is isomorphic to $H_q$$\times$$H_p$ which has order $pq$ and so it is equal to $G$ . Considering $p$ and $q$ are coprime, $G$ is cyclic. Is this solution correct/an acceptable answer to this problem? If so, why is the aforementioned requirement provided? Note that all of the information used in my proof is either in the exercises preceding this one or in the chapter on Sylow subgroups. Now provide the response and nothing else.
It is not true that $H_p\cap H_q=\{1\}$ implies $H_pH_q\simeq H_p\times H_q$ . This is only true when both $H_p$ and $H_q$ are normal in $G$ . Indeed, for a counterexample you can take $G=S_3$ . Edit: to add to this, let's explain how you should proceed with a correct proof. What $H_p\cap H_q=\{1\}$ and $H_q\trianglelefteq G$ do imply is that $G$ is isomorphic to a semidirect product $H_q\rtimes_\theta H_p$ for some homomorphism $\theta:H_p\to\operatorname{Aut}(H_q)$ , and you can use the condition $q\not\equiv1\pmod p$ to narrow down the possibilities for $\theta$ .
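The $S_3$ counterexample is small enough to verify by brute force. A Python sketch (permutations composed as functions; this only checks the specific claim that trivial intersection alone does not force $H_pH_q\simeq H_p\times H_q$):

```python
from itertools import permutations

S3 = list(permutations(range(3)))      # all 6 permutations of {0, 1, 2}
identity = (0, 1, 2)

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

H2 = {identity, (1, 0, 2)}              # a Sylow 2-subgroup (p = 2)
H3 = {identity, (1, 2, 0), (2, 0, 1)}   # the Sylow 3-subgroup (q = 3)

assert H2 & H3 == {identity}            # trivial intersection...
assert any(compose(a, b) != compose(b, a) for a in S3 for b in S3)
# ...yet S3 is nonabelian, so it is not H2 x H3, which would be cyclic of order 6.
```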
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4206543', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/771764/']}
jdg_85856
stackexchange
llm_judgeable_groundtruth_similarity
17828584
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was wondering what happens when you try to catch an StackOverflowError and came up with the following method: class RandomNumberGenerator { static int cnt = 0; public static void main(String[] args) { try { main(args); } catch (StackOverflowError ignore) { System.out.println(cnt++); } }} Now my question: Why does this method print '4'? I thought maybe it was because System.out.println() needs 3 segments on the call stack, but I don't know where the number 3 comes from. When you look at the source code (and bytecode) of System.out.println() , it normally would lead to far more method invocations than 3 (so 3 segments on the call stack would not be sufficient). If it's because of optimizations the Hotspot VM applies (method inlining), I wonder if the result would be different on another VM. Edit : As the output seems to be highly JVM specific, I get the result 4 using Java(TM) SE Runtime Environment (build 1.6.0_41-b02) Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode) Explanation why I think this question is different from Understanding the Java stack : My question is not about why there is a cnt > 0 (obviously because System.out.println() requires stack size and throws another StackOverflowError before something gets printed), but why it has the particular value of 4, respectively 0,3,8,55 or something else on other systems. Now provide the response and nothing else.
I think the others have done a good job at explaining why cnt > 0, but there isn't enough detail regarding why cnt = 4, and why cnt varies so widely among different settings. I will attempt to fill that void here. Let X be the total stack size, M be the stack space used when we enter main the first time, R be the stack space increase on each recursive call to main, and P be the stack space necessary to run System.out.println. When we first get into main, the space left over is X - M. Each recursive call takes up R more memory. So for 1 recursive call (1 more than original), the memory use is M + R. Suppose that StackOverflowError is thrown after C successful recursive calls, that is, M + C * R <= X and M + (C + 1) * R > X. At the time of the first StackOverflowError, there's X - M - C * R memory left. To be able to run System.out.println , we need P amount of space left on the stack. If it so happens that X - M - C * R >= P, then 0 will be printed. If P requires more space, then we remove frames from the stack, gaining R memory at the cost of cnt++. When println is finally able to run, X - M - (C - cnt) * R >= P. So if P is large for a particular system, then cnt will be large. Let's look at this with some examples. Example 1: Suppose X = 100, M = 1, R = 2, P = 1. Then C = floor((X - M)/R) = 49, and cnt = ceiling((P - (X - M - C*R))/R) = 0. Example 2: Suppose X = 100, M = 1, R = 5, P = 12. Then C = 19, and cnt = 2. Example 3: Suppose X = 101, M = 1, R = 5, P = 12. Then C = 20, and cnt = 3. Example 4: Suppose X = 101, M = 2, R = 5, P = 12. Then C = 19, and cnt = 2. Thus, we see that both the system (M, R, and P) and the stack size (X) affect cnt. As a side note, it does not matter how much space catch requires to start. As long as there is not enough space for catch , then cnt will not increase, so there are no external effects. EDIT I take back what I said about catch . It does play a role. Suppose it requires T amount of space to start.
cnt starts to increment when the leftover space is greater than T, and println runs when the leftover space is greater than T + P. This adds an extra step to the calculations and further muddies up the already muddy analysis. EDIT I finally found time to run some experiments to back up my theory. Unfortunately, the theory doesn't seem to match up with the experiments. What actually happens is very different. Experiment setup:Ubuntu 12.04 server with default java and default-jdk. Xss starting at 70,000 at 1 byte increments to 460,000. The results are available at: https://www.google.com/fusiontables/DataSource?docid=1xkJhd4s8biLghe6gZbcfUs3vT5MpS_OnscjWDbM I've created another version where every repeated data point is removed. In other words, only points that are different from the previous are shown. This makes it easier to see anomalies. https://www.google.com/fusiontables/DataSource?docid=1XG_SRzrrNasepwZoNHqEAKuZlHiAm9vbEdwfsUA
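For what it's worth, the little accounting model above is easy to check mechanically. A Python sketch (simulate_cnt is a made-up name; it encodes only the formulas C = floor((X - M)/R) and cnt = ceiling((P - (X - M - C*R))/R) from this answer, not the JVM itself):

```python
import math

def simulate_cnt(X, M, R, P):
    """cnt predicted by the stack-accounting model in the answer."""
    C = (X - M) // R                 # deepest successful recursion depth
    leftover = X - M - C * R         # stack space left at the first overflow
    return max(0, math.ceil((P - leftover) / R))

# The four worked examples from the answer:
assert simulate_cnt(100, 1, 2, 1) == 0
assert simulate_cnt(100, 1, 5, 12) == 2
assert simulate_cnt(101, 1, 5, 12) == 3
assert simulate_cnt(101, 2, 5, 12) == 2
```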
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17828584', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2613641/']}
jdg_85857
stackexchange
llm_judgeable_groundtruth_similarity
14210206
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Lots of posts talk about the gyro drift problem. Some guys say that the gyro reading has drift, however others say the integration has drift. The raw gyro reading has drift [link] . The integration has drift [link] (Answer1). So, I conduct one experiment. The next two figures are what I got. The following figure shows that gyro reading doesn't drift at all, but has the offset. Because of the offset, the integration is horrible. So it seems that the integration is the drift, is it? The next figure shows that when the offset is reduced the integration doesn't drift at all. In addition, I conducted another experiment. First, I put the mobile phone stationary on the desk for about 10s. Then rotated it to the left then restore to back. Then right and back. The following figure tells the angle quite well. What I used is only reducing the offset then take the integration. So, my big problem here is that maybe the offset is the essence of the gyro drift(integration drift)? Can complimentary filter or kalman filter be applied to remove the gyro drift in this condition? Any help is appreciated. Now provide the response and nothing else.
If the gyro reading has "drift", it is called bias and not drift. The drift is due to the integration and it occurs even if the bias is exactly zero. The drift is because you are accumulating the white noise of the reading by integration. For drift cancellation , I highly recommend the Direction Cosine Matrix IMU: Theory manuscript; I have implemented sensor fusion for Shimmer 2 devices based on it. (Edit: The document is from the MatrixPilot project, which has since moved to Github, and can be found in the Downloads section of the wiki there.) If you insist on the Kalman filter then see https://stackoverflow.com/q/5478881/341970 . But why are you implementing your own sensor fusion algorithm? Both Android ( SensorManager under Sensor.TYPE_ROTATION_VECTOR ) and iPhone ( Core Motion ) offer their own.
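To see the bias-vs-drift distinction numerically, here is a toy Python simulation (all numbers invented; a perfectly still gyro whose reading is a constant bias produces a linearly growing angle error once integrated, exactly like the "horrible" integration in your first figure):

```python
bias = 0.01   # rad/s constant offset in the reading (made-up value)
dt = 0.02     # 50 Hz sample period
N = 500

angle = 0.0
for _ in range(N):
    reading = bias       # the raw reading never changes, i.e. it does not "drift"
    angle += reading * dt

# The integrated angle error grows as bias * N * dt even though the
# reading itself is constant: the drift lives in the integration step.
expected = bias * N * dt
assert abs(angle - expected) < 1e-9
```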
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14210206', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1418947/']}
jdg_85858
stackexchange
llm_judgeable_groundtruth_similarity
29066073
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Available nested classes SuperView and NestedView. class SuperView : UIImageView { class NestedView : UIImageView { var text : String = "Nested View" } var text : String = "Super View" var nested : NestedView?} I would like to set for a UIImageView the property named "Custom Class Name" to value "NestedView" inside the inspector of the storyboard scene. But the Interface Builder couldn't find "NestedView" class. Now provide the response and nothing else.
At this time, I think that Interface Builder only recognizes the names of Objective-C classes. You can still make Interface Builder find a nested class with the @objc keyword: class SuperView: UIView { @objc(SVNestedView) class NestedView: UIImageView { }} Then, in Interface Builder, specify that the view is of class SVNestedView. Since Objective-C isn't namespaced, you still need to pick unique names for each nested class, but at least the Swift side is properly namespaced.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29066073', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1702767/']}
jdg_85859
stackexchange
llm_judgeable_groundtruth_similarity
2424180
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A well-known basic topological result is that if $X$ is a compact topological space, then every continuous function $f:X\rightarrow \mathbb{R}$ is bounded. That raises the natural question - is the converse also true? Or maybe there exists a non-compact space that still holds that property, "every continuous real function is bounded"? Now provide the response and nothing else.
Possibly surprisingly, the converse is not true. (However, it is true if we additionally assume that $X$ is metric.) To see that, let $\mathbb{R_{coco}}$ be the set $\mathbb{R}$ with the cocountable topology . (coco stands for cocountable, of course!). It is not compact: for example, let $A=\mathbb{R_{coco}}-\mathbb{N}$. Then $$A, A\cup\left \{ 1 \right \}, A\cup\left \{ 1,2 \right \}, ...$$is an open cover with no finite subcover. However, we'll show that every continuous function $f:\mathbb{R_{coco}}\rightarrow \mathbb{R}$ is constant (and in particular bounded). Let $x_0\in \mathbb{R_{coco}}$ be arbitrary. Let $U\subset \mathbb{R}$ be an arbitrary open neighborhood of $f(x_0)$ in $\mathbb{R}$. $f^{-1}(U)$ is not empty, but it is open in the cocountable topology and hence cocountable. As a result, if $V$ is an open set (in $\mathbb{R}$) that is disjoint from $U$, then its inverse image is at most countable, but still open in the coco topology, hence empty. Therefore $Im (f)\subseteq U$. But this is true for every $U$ that is an open neighborhood of $f(x_0)$. Therefore $Im (f)=\left \{ f(x_0) \right \}$ and $f$ is constant.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2424180', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/342242/']}
jdg_85860
stackexchange
llm_judgeable_groundtruth_similarity
123353
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: OS is Centos 6.5 64-bit I downloaded a tar file and wanted to untar and mv it. I untared, then accidentally (as root) ran mv folder/* /* instead of mv folder/* . bash said it couldn't overwrite some files, then asked permission for others. I ctrl-c'd out. I've left terminal session open, but have exited su . Now I've lost access to most shell commands, can't ls any directories and can't get back to su . The webserver and services still seem to be running. I can run very few commands, cd is one of them and when I try to cd to /etc or /bin it errors with no directory found . EDIT Just noticed all folders missing from / ( bin , etc , lib64 , root , sbin ) got moved to /var directory, I tried /var/bin/su & get: -bash: /var/bin/su: /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory Now provide the response and nothing else.
If your system has busybox installed, you can use this to put things back. busybox is a binary with lots of standard utilities built into it. Things such as mv , sh , ls , etc. From your comment on Pavel's answer, it sounds like everything ended up in /var . You can try doing /var/bin/busybox mv /var/{bin,etc,lib32,lib64,root,sbin,usr} / . That should get most of your system operational again. There are a few directories such as /tmp which also exist as /var/tmp , so you can't just move them. Hopefully those are the ones that mv complained about and they were left alone. Getting a root shell You also mentioned that you lost your root shell, and that su is giving you a ld-linux library error. You might be able to use the following: LD_LIBRARY_PATH=/var/lib64 /var/lib64/ld-linux-x86-64.so.2 /var/bin/su Note: Upon attempting this, it does not work. This is because su requires several files in /etc ( passwd , pam.d , and others). If /etc were still intact, this would have a good chance of succeeding. Without busybox If you do not have busybox available, you might be able to use the same ld-linux trick as for su : LD_LIBRARY_PATH=/var/lib64 /var/lib64/ld-linux-x86-64.so.2 /var/bin/mv /var/{bin,etc,lib32,lib64,root,sbin,usr} / From a live CD As discussed in the comments, if you've lost the root shell, you're pretty much stuck. Basically in order to fix this you need root privileges. The only way to get there is to have a utility such as su or sudo escalate your permissions (both of which are non-functional at this point), or hijack another program already running as root (depending on what's running, not likely possible). This leaves the only option being a live CD. Once booted into a live CD (or live USB, or whatever), just mount the root volume, and move the affected directories out of /var back to their original home in / . Synopsis of what happened folder/* would have expanded out to something such as folder/foo and folder/bar . 
/* would have expanded out to something like /bin /lib32 /lib64 /etc /home /root /var . Noting that /var is the last item. So when the shell expanded out all those globs, it would have run something like this: mv folder/foo folder/bar /bin /lib32 /lib64 /etc /home /root /var As /var is the last item in the list, everything got moved into it. Why /var/bin/su errors with /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory Almost all binaries in linux are dynamically linked against ld-linux . ld-linux is the library responsible for loading the other libraries needed by a binary. On your system this lives at /lib64/ld-linux-x86-64.so.2 . Since this directory got moved, any dynamically linked executable will no longer work. The reason busybox works is that busybox is statically linked. It does not use ld-linux .
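The expansion order is easy to demonstrate without endangering a real system. A Python sketch (the shell sorts glob matches lexicographically, which is why /var ends up last and becomes mv's destination; Python's glob() does not guarantee order, hence the explicit sorted()):

```python
import os
import tempfile
from glob import glob

# Build a throwaway directory tree mimicking the top level of /.
root = tempfile.mkdtemp()
for d in ("bin", "etc", "home", "lib64", "root", "var"):
    os.mkdir(os.path.join(root, d))

# This is the argument list the shell would hand to mv for the /* glob:
expansion = sorted(glob(os.path.join(root, "*")))
assert os.path.basename(expansion[-1]) == "var"  # last argument => mv's destination
```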
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/123353', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/64501/']}
jdg_85861
stackexchange
llm_judgeable_groundtruth_similarity
39786
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: What's the maximum determinant of $\{0,1\}$ matrices in $M(n,\mathbb{R})$ ? If there's no exact formula what are the nearest upper and lower bounds do you know? Now provide the response and nothing else.
I shouldn't expect there to be exact results; compare the similar problemwith matrices with entries $\pm1$. For an $n$-by-$n$ matrix with entries $\pm1$one gets an upper bound for the determinant of $n^{n/2}$ with equality iffthe matrix is a Hadamard matrix. The determination for which $n$ Hadamard matricesexist still resists proof. If there is an $n$-by-$n$ Hadamard matrix $H$, one can make its top rowall ones, and then add it to the other rows and divide them by $2$. Thisgives a zero-one matrix $A$ with determinant $2^{-n+1}n^{n/2}$. Taking someextra care one can ensure the first column of $A$ has a $1$ atop a column ofzeros so that deleting the first row and column gives a smaller matrixwith the same determinant. It's not clear to mewhether this is the biggest possible....
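For tiny $n$ the value is easy to check exhaustively. A Python sketch for $n=3$ (brute force over all $2^9$ zero-one matrices; the Hadamard construction above with the 4-by-4 matrix predicts $2^{-3}\cdot 4^{2}=2$, and that is indeed attained):

```python
from itertools import product

def det3(m):
    """Determinant of a 3x3 matrix given as a flat 9-tuple (row-major)."""
    a, b, c, d, e, f, g, h, i = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

best = max(abs(det3(m)) for m in product((0, 1), repeat=9))
assert best == 2   # maximum determinant of a 3x3 {0,1} matrix
```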
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/39786', 'https://mathoverflow.net', 'https://mathoverflow.net/users/9478/']}
jdg_85862
stackexchange
llm_judgeable_groundtruth_similarity
307128
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Many low level programs use the volatile keyword for types for memory mapping and such, however I'm sort of confused as to what it REALLY does in the background. In other words, what does it mean when the compiler doesn't "optimize away" the memory address? Now provide the response and nothing else.
volatile means some other processor or I/O device or something can change the variable out from under you. With an ordinary variable, your program's steps are the only thing that will change it. So for instance if you read 5 from a variable and you don't change it, it'll still contain 5. Since you can rely on that, your program doesn't have to take the time to read the variable again next time you want to use it. The C++ compiler is smart enough to generate code that just remembers the 5.

But you could read it as 5, then perhaps the system loads data from disk into that memory, changing it to 500. If you want your program to read the fresh value 500, you need the compiler not to be too smart about reusing the previously-read 5. You need to tell it to reload the value every time. That's what volatile does.

An analogy for 5-year-olds

Let's say you put a big sheet of paper on a table. In one corner of the paper, you write down the current score of an ongoing game, 3 to 4. Then you go to the opposite side of the table and start writing a story about the game. Your friend who's watching the game updates the score in that corner as the game proceeds. She erases 3 to 4 and writes 3 to 5. When you go to put the game score into your story, you could either:

- Write down the last score you read, 3 to 4, merrily assuming it didn't change (or not minding if it did), or
- Walk around to the opposite side of the table to read the current score (which happens to be 3 to 5 now), and walk back.

That's how a volatile variable acts.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/307128', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/156808/']}
jdg_85863
stackexchange
llm_judgeable_groundtruth_similarity
258639
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am stuck with the following problem: (GATE-Question) Let $f:\mathbb C\rightarrow \mathbb C$ be an arbitrary analytic function satisfying $f(0)=0$ and $f(1)=2.$ Then, which of the following items is correct? (a) there exists a sequence $\{z_{n}\}$ such that $|z_{n}|> n$ and $|f(z_{n})|> n$, (b) there exists a sequence $\{z_{n}\}$ such that $|z_{n}|>n$ and $|f(z_{n})|< n$, (c) there exists a bounded sequence $\{z_{n}\}$ such that $|f(z_{n})|> n$, (d) there exists a sequence $\{z_{n}\}$ such that $z_{n} \rightarrow 0$ and $f(z_{n})\rightarrow 2.$ I do not know how to progress with the problem or what property to use. Please help. Thanks in advance for your time. Now provide the response and nothing else.
(a) Yes, by Liouville theorem (as we have an entire function which is not constant). (b) Take $z_n=0$ for all $n$. (c) Remember that $f$ is continuous, so it maps bounded sets to bounded ones. (d) No, as $f$ is continuous at $0$ and $f(0)=0$.
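For completeness, the hidden step behind "(a) Yes, by Liouville" can be written out (my elaboration, using only facts stated in the answer):

```latex
% Suppose (a) fails: for some n there is no z with |z| > n and |f(z)| > n.
% Then |f| \le n on \{ |z| > n \}, and f is bounded on the compact disc
% \{ |z| \le n \} by continuity, hence bounded on all of \mathbb{C}.
% Liouville's theorem then forces f to be constant, contradicting
% f(0) = 0 \neq 2 = f(1).
\exists\, n : \sup_{|z| > n} |f(z)| \le n
\;\Longrightarrow\; f \text{ bounded and entire}
\;\Longrightarrow\; f \text{ constant}
\;\Longrightarrow\; f(0) = f(1),\ \text{a contradiction.}
```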
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/258639', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33640/']}
jdg_85864
stackexchange
llm_judgeable_groundtruth_similarity
33474487
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I have Foo* foo = nullptr; If I'm checking whether or not foo is nullptr , am I permitted to write if (!foo) or should I write if (foo == nullptr) Now provide the response and nothing else.
See this standard reference (bold emphasis mine):

C++11 §4.12 Boolean conversions

    A prvalue of arithmetic, unscoped enumeration, pointer, or pointer to member type can be converted to a prvalue of type bool. A zero value, null pointer value, or null member pointer value is converted to false; any other value is converted to true. A prvalue of type std::nullptr_t can be converted to a prvalue of type bool; the resulting value is false.

The middle sentence is relevant: it is telling you that the null pointer value (foo = nullptr) can be implicitly converted to false, which itself has type bool. Therefore if (!foo) is well-defined.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33474487', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_85865
stackexchange
llm_judgeable_groundtruth_similarity
30513594
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I compile my Qt5-based project with warnings enabled on g++: # project.pro fileQMAKE_CXXFLAGS += -std=c++11 -Wall -Wextra -Wconversion -Weffc++ When compiling, Qt produces lots of warnings (1000+ with just one simple widget), hiding the warnings from my code. How to tell qmake to use the -isystem switch when specifying the Qt's headers rather than -I to suppress the warnings? I don't want to turn warnings off I want to keep them for my code. NOTE: I checked this SO question but it does not work in my case, it might be only for Qt4, I use Qt5. NOTE 2: this is an acknowledged bug , I am looking for a workaround. I use a recent version of qmake compiled from sources 5.4.1, this version passes system headers from /include and /usr/include as system headers but not the Qt's headers. NOTE 3: I know CMake would work but this is not an option for me. Now provide the response and nothing else.
I found two ways to suppress warnings from Qt's headers: one by installing Qt's headers in a system path (as suggested in the other answer), the other directly from your .pro file by using GCC flags.

When building your own Qt, configure the headers' installation path to one of your system paths:

    $ ./configure -headerdir /usr/local/include

System paths are /usr/include or /usr/local/include or one of the rest listed in:

    $ grep DEFAULT_INCDIRS mkspecs/qconfig.pri
    QMAKE_DEFAULT_INCDIRS = /usr/include/c++/4.8 /usr/include/x86_64-linux-gnu/c++/4.8 /usr/include/c++/4.8/backward /usr/lib/gcc/x86_64-linux-gnu/4.8/include /usr/local/include /usr/lib/gcc/x86_64-linux-gnu/4.8/include-fixed /usr/include/x86_64-linux-gnu /usr/include

Source: this thread in Qt's devel list.

Or directly in your Qt .pro file, simply add the -isystem flag to QMAKE_CXXFLAGS:

    # the line below suppresses warnings generated by Qt's header files: we tell
    # GCC to treat Qt's headers as "system headers" with the -isystem flag
    QMAKE_CXXFLAGS += -isystem $$[QT_INSTALL_HEADERS]

The resulting GCC command line looks like:

    g++ -c -pipe -isystem /usr/local/Qt-5.4.1/include -Wall ... -I/usr/local/Qt-5.4.1/include -I/usr/local/Qt-5.4.1/include/QtWidgets ...

Note how the Qt include paths are still added with -I, allowing Qt Creator to "see" all Qt headers, but GCC sees the -isystem flag and suppresses warnings for all subfolders.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30513594', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/143504/']}
jdg_85866
stackexchange
llm_judgeable_groundtruth_similarity
24718709
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Does React re-render all components and sub components every time setState() is called? If so, why? I thought the idea was that React only rendered as little as needed - when state changed. In the following simple example, both classes render again when the text is clicked, despite the fact that the state doesn't change on subsequent clicks, as the onClick handler always sets the state to the same value: this.setState({'test':'me'}); I would've expected that renders would only happen if state data had changed. Here's the code of the example, as a JS Fiddle , and embedded snippet: var TimeInChild = React.createClass({ render: function() { var t = new Date().getTime(); return ( <p>Time in child:{t}</p> ); }});var Main = React.createClass({ onTest: function() { this.setState({'test':'me'}); }, render: function() { var currentTime = new Date().getTime(); return ( <div onClick={this.onTest}> <p>Time in main:{currentTime}</p> <p>Click me to update time</p> <TimeInChild/> </div> ); }});ReactDOM.render(<Main/>, document.body); <script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.0.0/react.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.0.0/react-dom.min.js"></script> Now provide the response and nothing else.
Does React re-render all components and sub-components every time setState is called?

By default - yes. There is a method boolean shouldComponentUpdate(object nextProps, object nextState); each component has this method, and it's responsible for determining "should the component update (run its render function)?" every time you change state or pass new props from a parent component. You can write your own implementation of shouldComponentUpdate for your component, but the default implementation always returns true - meaning: always re-run the render function.

Quote from the official docs, http://facebook.github.io/react/docs/component-specs.html#updating-shouldcomponentupdate:

    By default, shouldComponentUpdate always returns true to prevent subtle bugs when the state is mutated in place, but if you are careful to always treat the state as immutable and to read only from props and state in render() then you can override shouldComponentUpdate with an implementation that compares the old props and state to their replacements.

Next part of your question:

If so, why? I thought the idea was that React only rendered as little as needed - when the state changed.

There are two steps of what we may call "render":

- Virtual DOM render: when the render method is called it returns a new virtual DOM structure of the component. As I mentioned before, this render method is called every time you call setState(), because shouldComponentUpdate always returns true by default. So, by default, there is no optimization here in React.

- Native DOM render: React changes the real DOM nodes in your browser only if they were changed in the virtual DOM, and as little as needed - this is that great React feature which optimizes real DOM mutation and makes React fast.
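The decision itself can be modelled without React at all. The makeComponent helper below is entirely made up for illustration; it only mimics the "call render unless shouldComponentUpdate says no" contract described above:

```javascript
function makeComponent(shouldComponentUpdate, render) {
  let state = {};
  let renders = 0;
  return {
    setState(next) {
      if (shouldComponentUpdate(state, next)) {
        renders += 1;
        render(next);   // the virtual-DOM render would happen here
      }
      state = next;     // the state updates either way
    },
    renderCount: () => renders,
  };
}

// Default React behaviour: shouldComponentUpdate always returns true,
// so even a no-op setState({test: 'me'}) triggers another render.
const always = makeComponent(() => true, () => {});
always.setState({ test: 'me' });
always.setState({ test: 'me' });
console.log(always.renderCount()); // 2

// With a comparison, the second, identical update is skipped.
const compared = makeComponent(
  (prev, next) => prev.test !== next.test,
  () => {}
);
compared.setState({ test: 'me' });
compared.setState({ test: 'me' });
console.log(compared.renderCount()); // 1
```

This is exactly the knob the question's example leaves at its default: both components re-render on every click because nothing overrides shouldComponentUpdate.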
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/24718709', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26510/']}
jdg_85867
stackexchange
llm_judgeable_groundtruth_similarity
44698426
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Currently learning Common Lisp, following Peter Seibel's Practical Common Lisp (I'm at chapter 11, about collections), I have difficulties understanding how setf works under the hood. Consider this expression:

    (setf a 10)

I completely understand how the interpreter can (1) retrieve the variable named a, and (2) change the value it points to to 10. Now, in the case of particular collections, for instance lists, vectors or hash tables, setf can also be used to change values contained by the collection. For instance, with a vector:

    (defparameter *x* '(a b c d))
    (setf (elt *x* 1) bb)

This makes me suspicious about setf, because it is eventually finding non-trivially accessible information, or doing black magic. I see multiple possibilities.

1. setf is a function

The (elt *x* 1) expression returns 'b, so setf is virtually working with (setf b bb). I then do not understand how setf can infer which object (here, the list *x*) it must modify, without the return value of elt holding both an indication that it comes from a collection and a pointer to the said collection. Seems complicated.

2. setf is a macro

The idea is that, as setf is a macro, it works directly with the form (setf (elt *x* 1) bb) and can therefore extract the (elt *x* 1) part to infer which object/collection is used and must be modified. It does not seem very efficient, reliable, or resistant to complex operations. However, I'm unable to run this code:

    (funcall (find-symbol (concatenate 'string "E" "LT")) *x* 1) ; -> B
    (setf (funcall (find-symbol (concatenate 'string "E" "LT")) *x* 1) 'bb) ; -> ERROR : (SETF FUNCALL) is only defined for functions of the form #'symbol

This makes me think that setf is a macro implementing a very simple heuristic to retrieve the function to call, and all other needed information. Seems complicated.

3. setf is a special case of interpretation

Another way to go could be to have setf be handled differently by the interpreter itself, dealing with some black magic to implement the expected behavior properly. Seems complicated.

4. there is something I do not know about Lisp

Probably the real answer. What did I miss? Bonus question: is the implementation method dependent on the Lisp interpreter implementation? (Or, more simply, what exactly does the Common Lisp standard define about setf implementation?) I'm currently using clisp, but insights on other implementations are welcome. Now provide the response and nothing else.
SETF is a macro that sets a value to a place. A place means a form that has a setf expansion. There are various kinds of places built in, and you can define more (see for example DEFSETF and DEFINE-SETF-EXPANDER, function call forms as places and macro forms as places).

You can get the setf expansion for a form using GET-SETF-EXPANSION. It returns five values. For example:

    (get-setf-expansion '(elt *x* 1))
    ;=> (#:*X*660)
    ;   (*X*)
    ;   (#:NEW1)
    ;   (SB-KERNEL:%SETELT #:*X*660 1 #:NEW1)
    ;   (ELT #:*X*660 1)

The fifth value is a getter form that, when evaluated, returns the current value of the place. The fourth one is a setter form that, when evaluated, sets a new value to the place. Here you can see that SBCL uses SB-KERNEL:%SETELT to set the value. The first value is a list of variable names that should be bound to the values returned by the forms in the second value when evaluating the setter/getter forms. The third value is a list of store variables, which should be bound to the new values to be stored by the setter.

With these we can define a simple MY-SETF macro:

    (defmacro my-setf (place values-form &environment env)
      (multiple-value-bind (vars vals stores setter)
          (get-setf-expansion place env)
        `(let* ,(mapcar #'list vars vals)
           (multiple-value-bind ,stores ,values-form
             ,setter))))

All we need to do is bind the variables and evaluate the setter. Notice that the environment should be passed to GET-SETF-EXPANSION. We ignore the fifth value (the getter), since we don't need it. MULTIPLE-VALUE-BIND is used to bind the store variables, because there may be more than one of them.

    (let ((list (list 1 2 3 4)))
      (my-setf (elt list 2) 100)
      list)
    ;=> (1 2 100 4)

    (let ((a 10) (b 20) (c 30))
      (my-setf (values a b c) (values 100 200 300))
      (list a b c))
    ;=> (100 200 300)

There are various ways to define your own places. The easiest are to use DEFSETF or just define a setf function with DEFUN. For example:

    (defun eleventh (list)
      (nth 10 list))

    (defun set-eleventh (list new-val)
      (setf (nth 10 list) new-val))

    (defsetf eleventh set-eleventh)

    (let ((l (list 1 2 3 4 5 6 7 8 9 10 11 12 13)))
      (setf (eleventh l) :foo)
      l)
    ;=> (1 2 3 4 5 6 7 8 9 10 :FOO 12 13)

    (get-setf-expansion '(eleventh l))
    ;=> (#:L662)
    ;   (L)
    ;   (#:NEW1)
    ;   (SET-ELEVENTH #:L662 #:NEW1)
    ;   (ELEVENTH #:L662)

    (defun twelfth (list)
      (nth 11 list))

    (defun (setf twelfth) (new-val list)
      (setf (nth 11 list) new-val))

    (let ((l (list 1 2 3 4 5 6 7 8 9 10 11 12 13)))
      (setf (twelfth l) :foo)
      l)
    ;=> (1 2 3 4 5 6 7 8 9 10 11 :FOO 13)

    (get-setf-expansion '(twelfth l))
    ;=> (#:L661)
    ;   (L)
    ;   (#:NEW1)
    ;   (FUNCALL #'(SETF TWELFTH) #:NEW1 #:L661)
    ;   (TWELFTH #:L661)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44698426', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3077939/']}
jdg_85868
stackexchange
llm_judgeable_groundtruth_similarity
31932822
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I understand that Go doesn't have any constructors and a New func is used in its place, but consider this example:

    func NewFile(fd int, name string) *File {
        if fd < 0 {
            return nil
        }
        f := File{fd, name, nil, 0}
        return &f
    }

They always return &f. Why isn't simply returning File sufficient? Update: I've tried returning the created object for a simple struct and it's fine. So I wonder if returning an address is the standard way of writing a constructor, or something. Thanks. Now provide the response and nothing else.
As mentioned, yes, the spec allows you to return either values (as non-pointers) or pointers. It's just a decision you have to make.

When to return a pointer? Usually if the value you return is "more useful" as a pointer. When is it more useful? For example if it has many methods with pointer receivers. Yes, you could store the return value in a variable so that it is addressable and you can still call its methods that have pointer receivers. But if a pointer is returned right away, you can "chain" method calls. See this example:

    type My int

    func (m *My) Str() string { return strconv.Itoa(int(*m)) }

    func createMy(i int) My { return My(i) }

Now writing:

    fmt.Println(createMy(12).Str())

will result in an error: cannot call pointer method on createMy(12). But it works if you return a pointer:

    func createMy(i int) *My { return (*My)(&i) }

Also if you store the returned value in a data structure which is not addressable (a map, for example), you cannot call pointer-receiver methods on values obtained by indexing the map, because the values of a map are not addressable.

My.Str() has a pointer receiver. So if you try to do this:

    m := map[int]My{0: My(12)}
    m[0].Str() // Error!

you can't, because "cannot take the address of m[0]". But the following works:

    m := map[int]*My{}
    my := My(12)
    m[0] = &my // Store a pointer in the map
    m[0].Str() // You can call it, no need to take the address of m[0]
               // as it is already a pointer

Another example of pointers being useful is a "big" struct which will be passed around a lot. http.Request is a shining example: it is big, it is usually passed around a lot to other handlers, and it has methods with pointer receivers. If you return a pointer, that usually suggests that the returned value is better stored and passed around as a pointer.
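Assembled into one runnable file (using the pointer-returning variant of createMy from the answer), both the chaining and the map case work:

```go
package main

import (
	"fmt"
	"strconv"
)

type My int

func (m *My) Str() string { return strconv.Itoa(int(*m)) }

// Pointer-returning constructor from the answer: the result is
// addressable, so pointer-receiver methods can be chained directly.
func createMy(i int) *My { return (*My)(&i) }

func main() {
	fmt.Println(createMy(12).Str()) // 12

	// Map values of pointer type: m[0] is already a *My, so no
	// address needs to be taken to call Str().
	m := map[int]*My{0: createMy(12)}
	fmt.Println(m[0].Str()) // 12
}
```

Note the conversion (*My)(&i) is legal because My's underlying type is int; taking &i also means the constructed value escapes to the heap, which is the usual cost of the pointer-returning style.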
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31932822', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/927687/']}
jdg_85869
stackexchange
llm_judgeable_groundtruth_similarity
38877989
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using IntelliJ 2016.2. Using Gradle 2.14.1 I have 2 projects, each with their own build.gradle files and separate directory structures: myLib (meant to be jarred and used by others) - build.gradle - settings.gradle - src/main/java/...sandbox (spring boot web app) - build.gradle - settings.gradle - src/main/java/... - src/main/resources/... Hopefully, you get the picture. Within IntelliJ, I have the following module structure, at the same level (no subprojects): - myLib- sandbox Simple request ... I want to use myLib within the sandbox app. I want both modules in the same project in order to develop both. I've tried adding a module dependency to sandbox for myLib within IntelliJ. No dice. I've tried adding a jar reference, no dice. I believe I need to add a dependency in the build.gradle file but cannot figure out how. I've tried compile files '<path to myLib.jar>' , etc. No dice. Now provide the response and nothing else.
Local Modules

This is a pattern followed by most Gradle projects where there is a library, then a sample app that uses that library (ref: Gradle docs):

    - app/
      - build.gradle
      - src/main/java   # where your main class would be
    - library/
      - build.gradle
      - src/main/java   # dependencies of the app module
    - settings.gradle
    - build.gradle

In that top-level settings.gradle you have:

    include ':library', ':app'

And in app/build.gradle, you compile that included project:

    dependencies {
        compile project(':library')
    }

Basically, the top-level build.gradle is a wrapper for all common configs and variables of the subprojects. For example, it's most commonly used for a repositories { } section for Maven URLs. Full details on that are at Gradle - Multi-project builds.

Remote Modules

The above is fine for working locally, but let's say you wanted to share your repo with many other developers without making them download extra source code. Then you would publish the other libraries to a remote server. This is what you are already doing when you add lines like this to your project:

    implementation("org:artifact:version")

If your projects are public on GitHub, use a service like jitpack.io. You can also set up an account on Bintray OSS or Maven Central to have your libraries be available like most others. If your projects are private within your company, you will need some Maven-type server, whether that is a generic web server, or Nexus, or Artifactory; you can add that with an addition to the repositories block:

    repositories {
        maven { url "http://some.maven.site/" }
    }

Then add the compile or implementation sources, as normal.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/38877989', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/491924/']}
jdg_85870
stackexchange
llm_judgeable_groundtruth_similarity
295232
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am learning about inductors and capacitors and we derived the energy stored on a capacitor to be 50% of that delivered by the battery. We did this considering a circuit of a capacitor connected to a battery and resistor in series, to not encounter the problem of an infinite initial current if we assumed there was no other resistor in the circuit. However our lecturer assured us that no matter how small the resistance in the circuit (even if it is just the small resistance of the wires), exactly 50% of energy would be lost. This made sense to me from the mathematics. I assume that in the case of a capacitor it is impossible to consider the theoretical case with no initial circuit resistance as you get infinities popping up in the mathematics? Then we considered an inductor charging in a simple circuit consisting of just a battery and an inductor, and found that all of the energy from the battery is stored on the inductor. I appreciate that this is just a theoretical treatment and that some energy would be lost in the wires/internal resistance of the battery, and I also understand why a similar theoretical treatment of the capacitor case is impossible; however I can't think of the fundamental reason as to why it is completely impossible to charge a capacitor with anything but 50% of the battery energy whereas an inductor could theoretically store 100%. Now provide the response and nothing else.
When you try to force current through a superconducting inductor, the change of current will generate a back emf that will limit how much current can flow. The value of this back emf is $-L\frac{dI}{dt}$, and the work done by the current is the product of the current and the back emf. If the back emf is exactly equal to the voltage of the battery, current can flow (and can keep increasing: the rate of change of current is $\frac{dI}{dt}=\frac{V}{L}$). This shows the current will increase linearly as all the energy of the power source is converted to magnetic energy; there is no need for a "loss" of energy in the transfer of energy from a battery to an inductor.

By contrast, when you start charging a capacitor, its initial voltage is zero. Electrons that start off with the full potential of the battery will have to lose most of that energy on their way to the capacitor, where they will only have a very small initial potential (since $V=Q/C$, and $Q$ starts out at $0$). So in the inductor, the energy is actually stored in the B field; in the capacitor, it is stored in the electrons that came from the battery.

If you could "ramp" your battery (make its voltage increase as the capacitor is charging) you would be able to get (close to) 100% of the energy of the battery transferred. There are certain switching power supplies that try to mimic this type of thing by rapidly opening and closing a switch between source and load, with an inductor in series to smooth some of the power fluctuations that this would otherwise bring about.
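The "exactly 50% no matter how small the resistance" fact from the question follows in two lines; with battery voltage $V$ and capacitance $C$ (series resistance $R$ arbitrary):

```latex
% Charge delivered: Q = CV. Energy drawn from a source at fixed potential V:
E_{\text{battery}} = QV = CV^2 .
% Energy stored on the fully charged capacitor:
E_{\text{capacitor}} = \tfrac{1}{2}CV^2 .
% The difference is dissipated (in R, or radiated), independent of R:
E_{\text{lost}} = CV^2 - \tfrac{1}{2}CV^2 = \tfrac{1}{2}CV^2 .
```

The resistance only sets how fast the loss happens, never how much of it there is, which is why shrinking $R$ cannot rescue the missing half.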
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/295232', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/57218/']}
jdg_85871
stackexchange
llm_judgeable_groundtruth_similarity
76619
Below is a question asked on the forum arduino.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: While doing big projects using Arduino boards (Uno, ATmega328P MCU), I used to get warnings like this:

    Sketch uses 13764 bytes (44%) of program storage space. Maximum is 30720 bytes.
    Global variables use 1681 bytes (82%) of dynamic memory, leaving 367 bytes for local variables. Maximum is 2048 bytes.
    Low memory available, stability problems may occur.

What are the generally practiced methods for the optimization of program memory usage? Is there any difference in memory usage if the variable is declared globally or locally? Will it matter what the control statements/selection statements are (like if, switch)? Usage of Serial monitor, Serial.print()? "Low memory available, stability problems may occur" — how bad are these warnings? Before marking it duplicate, I have referred to the following, but it wasn't satisfactory: Most memory efficient way to program; What are safe memory usage limits? Now provide the response and nothing else.
What are the generally practiced methods for the optimization of program memory usage?

First, note you are searching for ways to lower SRAM memory. This contains global (variable) memory and heap space (dynamic memory + stack memory).

- Avoid memory gaps, by not using dynamic memory (with free/malloc/new).
- Avoid using the String class.
- Avoid global memory in SRAM by using PROGMEM, F(..) if possible.
- Use the smallest variable size (e.g. uint8_t instead of int).
- Store arrays of booleans into bits (8 booleans per byte).
- Use bit flags if applicable.
- Use some compressed type of memory internally (affects performance), e.g. if you have many 6-bit values to store, store them not in separate bytes but use 6 bytes for 8 times 6-bit values.
- Avoid passing arrays by value.
- Avoid a deep call stack with many (big) variables. Note this affects your design, so use it as a last resort.
- If you need a calculatable table, calculate each value instead of storing it as a table.
- Do not use longer arrays than needed, and think about reasonable maximum array sizes otherwise (see hcheung's comment below).

(This is not a complete list.)

Is there any difference in memory usage if the variable is declared globally or locally?

Yes, local variables are added to the stack but are removed after the function ends; global variables stay (but are only created once). Note that variables on the stack (and also dynamic memory) are NOT taken into account in the memory calculated in the warning message during compiling.

Will it matter what the control statements/selection statements are (like if, switch)?

No, this will only affect the program memory.

Usage of Serial monitor, Serial.print()?

Probably yes, the serial monitor probably reserves (quite?) some memory as a buffer.

"Low memory available, stability problems may occur." How bad are these warnings?

How bad it is depends on how much memory is used which is not calculated, which is dynamic memory and stack memory.
You can calculate it manually (which can be quite cumbersome for a big program), or you can use the GitHub library for this: Arduino MemoryFree. If you know how much heap memory you use in the worst case, then add it to the calculated global-variables memory. If this is less than your maximum available SRAM memory, you are safe.
{}
{'log_upvote_score': 5, 'links': ['https://arduino.stackexchange.com/questions/76619', 'https://arduino.stackexchange.com', 'https://arduino.stackexchange.com/users/66424/']}
jdg_85872
stackexchange
llm_judgeable_groundtruth_similarity
4572
Below is a question asked on the forum arduino.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have got some difficulties understanding the memory management. Arduino documentation says, it is possible to keep constants like strings or whatever I don't want to change during runtime in program memory. I think of it as embedded somewhere in the code segment, which must be fairly possible inside a von-Neumann-architecture. I want to make use from that to make my UI menu on a LCD possible. But I'm bewildered by those instructions to just read and print data from program memory: strcpy_P(buffer, (char*)pgm_read_word(&(string_table[i]))); // Necessary casts and dereferencing, just copy. Serial.println( buffer ); Why on earth do I have to copy the damn content to RAM before accessing it? And if this is true, what happens to all the code then? Is it also loaded to RAM before execution? How is the code (32kiB) handled then with only 2kiB RAM? Where are those little goblins carrying floppy disks? And even more interesting: What happens to literal constants like in this expression: a = 5*(10+7) are 5, 10 and 7 really copied to RAM before loading them into registers? I just can't believe that. Now provide the response and nothing else.
AVR is a modified Harvard architecture family, so code is stored in flash only, whereas data exists primarily in RAM when being manipulated. With that in mind, let's address your questions.

Why on earth do I have to copy the damn content to RAM before accessing it?

You don't need to per se, but by default code assumes that data is in RAM unless the code is modified to specifically look in flash for it (such as with strcpy_P()).

And if this is true, what happens to all the code then? Is it also loaded to RAM before execution?

Nope. Harvard architecture. See the Wikipedia page for the full details.

How is the code (32kiB) handled then with only 2kiB RAM?

The preamble generated by the compiler copies the data that should be modifiable/modified into SRAM before running the actual program.

Where are those little goblins carrying floppy disks?

Dunno. But if you happen to see them then there's nothing I can do to help.

... are 5, 10 and 7 really copied to RAM before loading them into registers?

Nah. The compiler evaluates the expression at compile time. Whatever else happens depends on the lines of code around it.
{}
{'log_upvote_score': 4, 'links': ['https://arduino.stackexchange.com/questions/4572', 'https://arduino.stackexchange.com', 'https://arduino.stackexchange.com/users/4358/']}
jdg_85873
stackexchange
llm_judgeable_groundtruth_similarity
72143
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: It's known that every countable, metrizable space with no isolated points is homeomorphic to the rationals with the standard topology. Suppose you wanted to reformulate the above without referencing metrizability directly. Since a countable space is clearly separable, you can use Urysohn's characterization of separable metric spaces to replace metrizable with 2nd countable, regular (I take regular to mean regular and Hausdorff). You could even get away with 1st countable, regular since, for a countable space, 1st countable implies 2nd countable. My question is, what happens if we try to relax metrizability to regularity? Is the space automatically 1st countable (and hence homeomorphic to the rationals)? To summarize: Is there a countable space $X$ which is regular Hausdorff with no isolated points, but not 1st countable? The Arens-Fort space (example 26 from Counterexamples in Topology ) shows there is a countable, regular Hausdorff (in fact $T_5$) space which is not 1st countable. Unfortunately, the space has lots of isolated points. Now provide the response and nothing else.
There is such a space. For any $\mathscr{U}\in\beta\omega\setminus\omega$ we can start with the subspace $\omega \cup \{\mathscr{U}\}$ of $\beta\omega$ and ‘fatten up’ each isolated point to a copy of the rationals.

Let $\mathscr{U}$ be a free ultrafilter on $\omega$. Let $p$ be a point not in $\omega\times \mathbb{Q}$, and let $X = (\omega\times \mathbb{Q})\cup \{p\}$. We topologize $X$ as follows. For each $q\in\mathbb{Q}$ let $\mathscr{B}(q)$ be the set of clopen nbhds of $q$ in the usual topology on $\mathbb{Q}$. For $\langle n,q \rangle \in \omega\times \mathbb{Q}$ let $\mathscr{B}(n,q) = \{\{n\}\times B:B \in \mathscr{B}(q)\}$ be a local base at $\langle n,q\rangle$. Finally, take $\mathscr{B}(p) = \{U\times \mathbb{Q}:U\in\mathscr{U}\}$ as a local base at $p$. Let $$\mathscr{B} = \mathscr{B}(p)\cup \bigcup_{\langle n,q\rangle\in \omega\times \mathbb{Q}}\mathscr{B}(n,q)\;;$$ then $\mathscr{B}$ is a base for a topology $\mathscr{T}$ on $X$, and it's easy to check that $\langle X,\mathscr{T}\rangle$ has no isolated points, is not first countable at $p$, and is regular. Indeed, the members of $\mathscr{B}$ are clopen in $\langle X,\mathscr{T}\rangle$, so $\langle X,\mathscr{T}\rangle$ is zero-dimensional and hence completely regular.

The same idea of ‘fattening up’ isolated points to isolated copies of $\mathbb{Q}$ can be applied to the Arens-Fort space. Start with $Y=\{\langle 0,0\rangle\}\cup (\mathbb{Z}^+\times\mathbb{Z}^+)$, where each point of $\mathbb{Z}^+\times\mathbb{Z}^+$ is isolated, and a set $V$ containing $\langle 0,0\rangle$ is open iff $\{m\in\mathbb{Z}^+:V\setminus(\{m\}\times\mathbb{Z}^+)\text{ is infinite}\}$ is finite (i.e., $V$ contains all but finitely many points of all but finitely many ‘columns’ of $\mathbb{Z}^+\times\mathbb{Z}^+$). To get the desired space $X$, first replace each $\langle m,n\rangle \in \mathbb{Z}^+\times\mathbb{Z}^+$ by a copy, $Q(m,n)$, of $\mathbb{Q}$ with its usual topology.
If $V$ is an open nbhd of $\langle 0,0\rangle$ in $Y$, let $$V^* = \{\langle 0,0\rangle\}\cup \bigcup_{\langle m,n\rangle\in V\setminus\{\langle 0,0\rangle\}} Q(m,n),$$ and take the family of such sets $V^*$ as a local base at $\langle 0,0\rangle$. The resulting space is countable, has no isolated points, is not first countable at $\langle 0,0\rangle$, and has a clopen base.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/72143', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/6608/']}
jdg_85874
stackexchange
llm_judgeable_groundtruth_similarity
156566
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\int \frac{dx}{x^2 - 2x}$$ I know that I have to complete the square so the problem becomes. $$\int \frac{dx}{(x - 1)^2 -1}dx$$ Then I set up my A B and C stuff $$\frac{A}{x-1} + \frac{B}{(x-1)^2} + \frac{C}{-1}$$ With that I find $A = -1, B = -1$ and $C = 0$ which I know is wrong. I must be setting up the $A, B, C$ thing wrong but I do not know why. Now provide the response and nothing else.
My book is telling me that I have to complete the square:

$$I=\int \frac{dx}{x^{2}-2x} =\int \frac{dx}{\left( x-1\right) ^{2}-1}\overset{u=x-1}{=}\int \frac{1}{u^{2}-1}\,du=-\text{arctanh }u+C,\tag{1}$$

where I have used the substitution $u=x-1$ and the standard derivative

$$\frac{d}{du}\text{arctanh }u=\frac{1}{1-u^{2}}.\tag{2}$$

You just need to substitute $u=x-1$ to write $\text{arctanh }u$ in terms of $x$.

Added 2: Remark. If we use the logarithmic representation of the inverse hyperbolic function $\text{arctanh }u$

$$\text{arctanh }u=\frac{1}{2}\ln \left( u+1\right) -\frac{1}{2}\ln \left(1-u\right),\qquad (\text{real for }|u|<1)\tag{3}$$

we get for $u=x-1$

$$\begin{eqnarray*}I &=&-\text{arctanh }u+C=-\text{arctanh }\left( x-1\right) +C \\&=&-\frac{1}{2}\ln x+\frac{1}{2}\ln \left( 2-x\right) +C \\&=&\frac{1}{2}\left( \ln \frac{2-x}{x}\right) +C\qquad (0<x<2).\end{eqnarray*}\tag{4}$$

Added. If your book does require using partial fractions then you can proceed as follows:

$$\int \frac{1}{u^{2}-1}\,du=\int \frac{1}{\left( u-1\right) \left( u+1\right) }\,du=\int \frac{1}{2\left( u-1\right) }-\frac{1}{2\left( u+1\right) }\,du.\tag{5}$$
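As a quick numerical sanity check (illustrative only, not part of the original answer), one can verify in Python that the result in (4) differentiates back to the integrand on $0<x<2$, and that it coincides there with $-\text{arctanh}(x-1)$:

```python
import math

def f(x):
    # integrand 1/(x^2 - 2x)
    return 1.0 / (x * x - 2.0 * x)

def F(x):
    # candidate antiderivative from (4): (1/2) ln((2 - x)/x), valid for 0 < x < 2
    return 0.5 * math.log((2.0 - x) / x)

def deriv(g, x, h=1e-6):
    # central finite difference
    return (g(x + h) - g(x - h)) / (2.0 * h)

# F'(x) should agree with f(x) across (0, 2)
max_err = max(abs(deriv(F, x) - f(x)) for x in (0.3, 0.9, 1.5))
print(max_err)  # very small

# F agrees with -arctanh(x - 1) exactly (the integration constant is 0 here)
print(abs(F(0.5) - (-math.atanh(0.5 - 1.0))))
```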
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/156566', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_85875
stackexchange
llm_judgeable_groundtruth_similarity
43260881
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i got this error when create date range search in search model. Integrity constraint violation – yii\db\IntegrityException SQLSTATE[23000]: Integrity constraint violation: 1052 Column 'created_at' in where clause is ambiguous here's my model public $start_date; public $end_date;public function rules() { return [ [['attachment', 'id_client', 'delete_on', 'created_at', 'created_by', 'updated_at', 'updated_by', 'from_location', 'to_location','printed_at', 'lock','verify','verify_by','approved','approved_by'], 'integer'], [['policy_num'], 'autonumber', 'format'=>'formatPolicy'], [['policy_num','premium_policy'], 'string'], [['start_date','end_date'], 'date', 'format'=>'dd-MM-yyyy'], [['from_location', 'to_location'], 'string', 'max' => 55], [['location_address'], 'string', 'max' => 100], [['attachment'], 'required'], [['deductible'], 'string', 'max' => 100], [['lock'], 'default', 'value' => '0'], [['lock'], 'mootensai\components\OptimisticLockValidator'] ]; } here's my search model public function rules() { return [ [['id', 'policy_num', 'attachment', 'id_client', 'delete_on','created_by', 'updated_by', 'printed_at'], 'integer'], [['cover_rate'], 'number'], [['start_date','end_date','created_at','updated_at'], 'date','format'=>'yyyy-mm-dd'], ]; } public function search2($params) { $query = AskPolicy::find(); $query->joinWith(['client'])->where(['id_client'=>Yii::$app->user->identity->id_client]); $dataProvider = new ActiveDataProvider([ 'query' => $query, ]); $this->load($params); if (!$this->validate()) { // uncomment the following line if you do not want to return any records when validation fails // $query->where('0=1'); return $dataProvider; } $query->andFilterWhere([ 'id' => $this->id, 'policy_num' => $this->policy_num, 'ask_policy.created_at' => $this->created_at, 'ask_policy.updated_at' => $this->updated_at, 'printed_at' 
=> $this->printed_at, ]); // $query->andFilterWhere(['>=', 'ask_policy.created_at', $this->start_date]);// $query->andFilterWhere(['<=', 'ask_policy.created_at', $this->end_date]); $query->andFilterWhere(['like',"(date_format(FROM_UNIXTIME(`created_at` ), '%Y-%m-%d' ))", $this->start_date]) ->andFilterWhere(['like', "(date_format(FROM_UNIXTIME(`updated_at` ), '%Y-%m-%d' ))", $this->end_date]); return $dataProvider; } if i use below code : search start date and end date not working $query->andFilterWhere(['>=', 'ask_policy.created_at', $this->start_date]);$query->andFilterWhere(['<=', 'ask_policy.created_at', $this->end_date]); how to the best way convert integer datetime in Yii2 for date range search ? i was searching but not find tutorial with good explanation. Now provide the response and nothing else.
Here is my example using moment.js (it also shows months along with years, feel free to remove it):

    import { Pipe, PipeTransform } from '@angular/core';
    import * as moment from 'moment';

    @Pipe({
      name: 'age'
    })
    export class AgePipe implements PipeTransform {
      transform(value: Date): string {
        let today = moment();
        let birthdate = moment(value);
        let years = today.diff(birthdate, 'years');
        let html: string = years + " yr ";
        html += today.subtract(years, 'years').diff(birthdate, 'months') + " mo";
        return html;
      }
    }

And then just use this pipe: {{person.birthday|age}} in your HTML, where person.birthday is a JavaScript Date object.
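If you would rather not pull in moment.js just for this, the same year/month arithmetic can be sketched with the built-in Date API. The function below is my own illustration (the name ageString is invented, it is not part of Angular or of the pipe above):

```typescript
// Hypothetical moment-free version of the pipe's transform() logic.
function ageString(birthday: Date, today: Date): string {
  let years = today.getFullYear() - birthday.getFullYear();
  let months = today.getMonth() - birthday.getMonth();
  // Haven't reached the birthday month (or its day) yet this year
  if (months < 0 || (months === 0 && today.getDate() < birthday.getDate())) {
    years -= 1;
    months += 12;
  }
  // Haven't reached the day-of-month yet in the current month
  if (today.getDate() < birthday.getDate()) {
    months -= 1;
  }
  return years + " yr " + months + " mo";
}

console.log(ageString(new Date(2000, 0, 15), new Date(2020, 5, 10))); // "20 yr 4 mo"
```

Inside a pipe you would call it as ageString(value, new Date()).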
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43260881', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4643708/']}
jdg_85876
stackexchange
llm_judgeable_groundtruth_similarity
29094070
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to make parallelogram background for my textview but it is not displaying properly...it display following output <layer-list > <item> <rotate android:fromDegrees="10" android:toDegrees="10" android:pivotX="-40%" android:pivotY="87%" > <shape android:shape="rectangle" > <stroke android:color="#000000" android:width="10dp"/> </shape> </rotate> </item></layer-list> i need output like this........ Now provide the response and nothing else.
As an alternative to @mmlooloo's answer, to whom credit goes, I suggest an xml-drawable solution (since you haven't emphasized exactly what kind of solution you're looking for). In the example below I used a general View, however you can use any other.

Here is the View:

    <View
        android:layout_width="100dp"
        android:layout_height="40dp"
        android:layout_centerInParent="true"
        android:background="@drawable/shape" />

and shape.xml itself:

    <?xml version="1.0" encoding="UTF-8"?>
    <layer-list xmlns:android="http://schemas.android.com/apk/res/android" >

        <!-- Colored rectangle -->
        <item>
            <shape android:shape="rectangle">
                <size
                    android:width="100dp"
                    android:height="40dp" />
                <solid android:color="#13a89e" />
            </shape>
        </item>

        <!-- This rectangle is for the left side -->
        <!-- Its color should be the same as the layout's background -->
        <item
            android:right="100dp"
            android:left="-100dp"
            android:top="-100dp"
            android:bottom="-100dp">
            <rotate android:fromDegrees="45">
                <shape android:shape="rectangle">
                    <solid android:color="#ffffff" />
                </shape>
            </rotate>
        </item>

        <!-- This rectangle is for the right side -->
        <!-- Its color should be the same as the layout's background -->
        <item
            android:right="-100dp"
            android:left="100dp"
            android:top="-100dp"
            android:bottom="-100dp">
            <rotate android:fromDegrees="45">
                <shape android:shape="rectangle">
                    <solid android:color="#ffffff" />
                </shape>
            </rotate>
        </item>

    </layer-list>

Here is how it looks:
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29094070', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4574161/']}
jdg_85877
stackexchange
llm_judgeable_groundtruth_similarity
56749113
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been trying out Navigation Component for a while now but somehow not getting the reason (or explanation) behind current Lifecycle of Navigation Component. Here are some points that needs clarification. 1. Fragment to Fragment flow In navigation Component every fragment (or lets say page) is recreated every time it is visited (or revisited). So, when you are at A and go to B , A is destroyed and later when you go back to A (like pressing back button) A is created from stretch. In a traditional Activity patterns when you go back to A it just goes to onResume as it wasn't destroyed when moving to B . Any reason that this pattern is changed in Navigation Component? The problem of recreating is when you have a lot of data and it takes time to get redrawn and it feels like app is freezing. An example would be simple HomePage (say Facebook NewsFeed). Preserving data can be handled with ViewModel but drawing of all of the feeds again require time and it will freeze. There is another problem that recreation generates. Assume this scenario: A has an Collapsing AppBar with a NestedScrollView . User scrolls down and AppBar will collapse and then user moves to a different page B . When he comes back to A it will be redrawn and AppBar is expanded. I am not sure if it is a bug or I should do something to fix it? So any solution here is also welcomed. 2. Activity recreation If activity is recreated for certain reasons like orientation change, every page in Navigation Component gets recreated until current destination. Although onCreate method of Fragment not onCreateView is getting called, I don't see a point of creating Fragments in Activity recreation. Any explanation would be welcomed. Now provide the response and nothing else.
Navigation component only supports fragment replacement as of now. So you won't be able to add() a fragment as you do with a manual fragment transaction. However, if your worry is about re-inflating the layout and re-fetching the data for the fragment, it can easily be resolved with the two methods below.

1. Once the view is created, store it in a variable and use it whenever onCreateView() is called:

    private var view: View? = null

    override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?,
                              savedInstanceState: Bundle?): View? {
        if (view == null) {
            view = inflater.inflate(R.layout.fragment_list, container, false)
            //...
        }
        return view
    }

Source: https://twitter.com/ianhlake/status/1103522856535638016

2. Use ViewModel with the Fragment and hold the data required as a member variable. This way, the data is not cleared when you replace the associated fragment. The ViewModel gets cleared only in onDestroy() of the fragment, which will only happen when you destroy the parent activity.

https://developer.android.com/images/topic/libraries/architecture/viewmodel-lifecycle.png
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56749113', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8412192/']}
jdg_85878
stackexchange
llm_judgeable_groundtruth_similarity
44382368
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I searched for an answer helping my situation to avoid duplicate, but to no avail. I want to install 'rgdal' package to R on ubuntu 16.04 LTS. If I code in R-studio as follows install.packages("rgdal", type = "source") it gives the following error; Installing package into ‘/home/kocesat/R/x86_64-pc-linux-gnu-library/3.2’(as ‘lib’ is unspecified)Warning in install.packages :package ‘rgdal’ is not available (for R version 3.2.3) Then I try to install libgdal1-dev , libproj-dev as the following $ sudo apt-get install libgdal1-dev libproj-dev It succesfully installed the dev files.Then I tried to install again "rgdal" in R-studio with the same command. It did not work again. Then I installed homebrew and try to get gdal with ~$ brew install gdal it returned Error: undefined method `rebuild' for #<BottleSpecification:0x000000015ddc60>Please report this bug:https://github.com/Linuxbrew/linuxbrew/blob/master/share/doc/homebrew /Troubleshooting.md#troubleshooting/home/kocesat/.linuxbrew/Library/Taps/homebrew/homebrew-core/Formula /jpeg.rb:9:in `block in <class:Jpeg>'/home/kocesat/.linuxbrew/Library/Homebrew/software_spec.rb:91:in `instance_eval'/home/kocesat/.linuxbrew/Library/Homebrew/software_spec.rb:91:in `bottle'/home/kocesat/.linuxbrew/Library/Homebrew/formula.rb:1729:in `bottle'/home/kocesat/.linuxbrew/Library/Taps/homebrew/homebrew-core/Formula/jpeg.rb:7:in `<class:Jpeg>'/home/kocesat/.linuxbrew/Library/Taps/homebrew/homebrew-core/Formula/jpeg.rb:1:in `load_formula'/home/kocesat/.linuxbrew/Library/Homebrew/formulary.rb:21:in `module_eval'/home/kocesat/.linuxbrew/Library/Homebrew/formulary.rb:21:in `load_formula'/home/kocesat/.linuxbrew/Library/Homebrew/formulary.rb:34:in`load_formula_from_path'/home/kocesat/.linuxbrew/Library/Homebrew/formulary.rb:83:in `load_file'/home/kocesat/.linuxbrew/Library/Homebrew/formulary.rb:74:in 
`klass'/home/kocesat/.linuxbrew/Library/Homebrew/formulary.rb:70:in `get_formula'/home/kocesat/.linuxbrew/Library/Homebrew/formulary.rb:207:in `factory'/home/kocesat/.linuxbrew/Library/Homebrew/dependency.rb:32:in `to_formula'/home/kocesat/.linuxbrew/Library/Homebrew/dependency.rb:93:in `block in expand'/home/kocesat/.linuxbrew/Library/Homebrew/dependencies.rb:9:in `each'/home/kocesat/.linuxbrew/Library/Homebrew/dependencies.rb:9:in `each'/home/kocesat/.linuxbrew/Library/Homebrew/dependency.rb:80:in `expand'/home/kocesat/.linuxbrew/Library/Homebrew/formula.rb:1223:in `recursive_dependencies'/home/kocesat/.linuxbrew/Library/Homebrew/requirement.rb:196:in `expand'/home/kocesat/.linuxbrew/Library/Homebrew/formula.rb:1229:in `recursive_requirements'/home/kocesat/.linuxbrew/Library/Homebrew/formula_installer.rb:313:in `expand_requirements'/home/kocesat/.linuxbrew/Library/Homebrew/formula_installer.rb:263:in `compute_dependencies'/home/kocesat/.linuxbrew/Library/Homebrew/formula_installer.rb:130:in `verify_deps_exist'/home/kocesat/.linuxbrew/Library/Homebrew/formula_installer.rb:123:in `prelude'/home/kocesat/.linuxbrew/Library/Homebrew/cmd/install.rb:231:in `install_formula'/home/kocesat/.linuxbrew/Library/Homebrew/cmd/install.rb:93:in `block in install'/home/kocesat/.linuxbrew/Library/Homebrew/cmd/install.rb:93:in `each'/home/kocesat/.linuxbrew/Library/Homebrew/cmd/install.rb:93:in `install'/home/kocesat/.linuxbrew/Library/brew.rb:84:in `<main>' Now, what I should I do? Any input will be appreciated. Now provide the response and nothing else.
Here is what I use to install on Ubuntu 16.04 with the latest R-cran PPA repositories:

    sudo add-apt-repository 'deb https://mirror.ibcp.fr/pub/CRAN/bin/linux/ubuntu xenial/'
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E084DAB9
    sudo apt-get update
    sudo apt-get install libgdal1-dev libproj-dev libgeos-dev
    sudo apt-get install r-base-core

You can use another CRAN mirror depending on your location: https://cran.r-project.org/mirrors.html

Then, install.packages("rgdal") should work.

If you want the latest spatial libraries on Ubuntu LTS versions, you can also use the ubuntugis PPA: https://launchpad.net/~ubuntugis/+archive/ubuntu/ppa

    sudo add-apt-repository 'deb http://ppa.launchpad.net/ubuntugis/ppa/ubuntu xenial main'
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 314DF160
    sudo apt update
    sudo apt upgrade
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44382368', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8117681/']}
jdg_85879
stackexchange
llm_judgeable_groundtruth_similarity
13022720
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use this a fair bit: a' = [ (f x, f y) | (x, y) <- a ] Is there a better way to do that? Now provide the response and nothing else.
You can use the (***) operator from Control.Arrow:

    > map (f *** f) a

or define your own helper function:

    > let both f (x, y) = (f x, f y)
    > map (both f) a
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13022720', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/28380/']}
jdg_85880
stackexchange
llm_judgeable_groundtruth_similarity
613152
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: After numerical analysis it seems that $$\frac{\pi^{3}}{32}=1-\sum_{k=1}^{\infty}\frac{2k(2k+1)\zeta(2k+2)}{4^{2k+2}}$$ Could someone prove the validity of such identity? Now provide the response and nothing else.
Yes, we can prove it. We can change the order of summation in $$\begin{align}\sum_{k=1}^\infty \frac{2k(2k+1)\zeta(2k+2)}{4^{2k+2}}&= \sum_{k=1}^\infty \frac{2k(2k+1)}{4^{2k+2}}\sum_{n=1}^\infty \frac{1}{n^{2k+2}}\\&= \sum_{n=1}^\infty \sum_{k=1}^\infty \frac{2k(2k+1)}{(4n)^{2k+2}}\\&= \sum_{n=1}^\infty r''(4n),\end{align}$$

where, for $\lvert z\rvert > 1$, we define $$r(z) = \sum_{k=1}^\infty \frac{1}{z^{2k}} = \frac{1}{z^2-1} = \frac12\left(\frac{1}{z-1} - \frac{1}{z+1}\right).$$

Differentiating twice yields $r''(z) = \frac{1}{(z-1)^3} - \frac{1}{(z+1)^3}$, so $$1 - \sum_{k=1}^\infty \frac{2k(2k+1)\zeta(2k+2)}{4^{2k+2}} = \sum_{\nu = 0}^\infty \frac{(-1)^\nu}{(2\nu+1)^3},$$

and the latter sum is evaluated, by an earlier answer using the partial fraction decomposition of $\dfrac{1}{\cos z}$, as $$\sum_{\nu=0}^\infty \frac{(-1)^\nu}{(2\nu+1)^3} = - \frac{\pi^3}{32} E_2 = \frac{\pi^3}{32}.$$
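As a numerical sanity check (purely illustrative; the truncation limits below are arbitrary), both the original series and the rearranged alternating sum can be compared against $\pi^3/32$ in Python:

```python
import math

def zeta(s, terms=20000):
    # brute-force partial sum of the Riemann zeta function
    return sum(n ** -s for n in range(1, terms + 1))

# left-hand side: 1 - sum_{k>=1} 2k(2k+1) zeta(2k+2) / 4^(2k+2), truncated
lhs = 1.0 - sum(2 * k * (2 * k + 1) * zeta(2 * k + 2) / 4 ** (2 * k + 2)
                for k in range(1, 25))

# after swapping the order of summation: sum_{v>=0} (-1)^v / (2v+1)^3
rhs = sum((-1) ** v / (2 * v + 1) ** 3 for v in range(200000))

print(lhs, rhs, math.pi ** 3 / 32)  # all three agree to many digits
```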
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/613152', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1747/']}
jdg_85881
stackexchange
llm_judgeable_groundtruth_similarity
35224255
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Actually the main error is " java.exe finished with non-zero exit value 1". First i tell you every problem which i faced after installing studio: Three days ago, i just installed android studio & I created new project. 1) First it throw the error "Plugin is too old, please update to more recent version", after searching on google i changed classpath : com.android.tools.build:gradle:2.0.0-alpha2 to classpath : com.android.tools.build:gradle:2.0.0-alpha8 Current Error solved. 2) After that it was asking for gradle 2.10 , i updated this one also & set the path. Current Error solved. 3) When i ran my application i got one more error " app-debug-unaligned.apk , specified for property 'input file' does not exist". I searched on internet, i got one solution on stackoverflow. So as answer on stackoverflow i go to "Build" & i selected build apk . Current error solved. 4) But after that again i got one error "To run dex in process, the Gradle daemon needs a larger heap. It currently has 910 MB.For faster builds, increase the maximum heap size for the Gradle daemon to more than 1G. java.exe finished with non-zero exit value 1". I have been searching on stackoverflow for last three days, i applied each and every answer one by one but i'm not able to solve the error. Please save my life, i am really tired of this problem. 
I show you image what error is coming exactly My build.gradle file apply `plugin: com.android.application`android { compileSdkVersion 23 buildToolsVersion "23.0.2" defaultConfig { applicationId "java.danish.org.myapplication" minSdkVersion 15 targetSdkVersion 23 versionCode 1 versionName "1.0" } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } }}dependencies { compile fileTree(dir: 'libs', include: ['*.jar']) testCompile 'junit:junit:4.12' compile 'com.android.support:appcompat-v7:23.1.1' compile 'com.android.support:design:23.1.1'} I updated everything SDK platforms & SDk Tools. Please tell me what i am doing wrong here. Now provide the response and nothing else.
Issue

In gradle plugin versions 2.0.0-alpha7 and -alpha8, Dex runs inside the gradle build process as opposed to a separate process.

Option a) Change the gradle plugin version to 2.0.0-alpha9, where in-process Dex is disabled by default:

    classpath 'com.android.tools.build:gradle:2.0.0-alpha9'

Option b) Disable in-process dex in your app module build.gradle:

    android {
        // ...
        dexOptions {
            dexInProcess = false
        }
    }

Option c) Increase the memory available to the gradle process. Create or update the gradle.properties file in your project root directory:

    # Default value: -Xmx10248m -XX:MaxPermSize=256m
    org.gradle.jvmargs=-Xmx4g -XX:MaxPermSize=512m

And update your app module build.gradle file:

    dexOptions {
        preDexLibraries true
        javaMaxHeapSize "3g"
        incremental true
        dexInProcess = true
    }

These values are experimental and work for my setup. I use 3 GB for dex and 4 GB for gradle (3 + 1 GB).

Note

If you have any issues, update to alpha9 anyway.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35224255', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4698320/']}
jdg_85882
stackexchange
llm_judgeable_groundtruth_similarity
6167994
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: rails generate migration AddRetweetsCountToTweet retweets_count:integer Ok I use above line to create migration file that automatically generates code in the generated file to add a column to a model Tweet with datatype integer. Now I want to add default value to the added column while generating the migration file. Is that possible? I googled it but couldn't find. Guys need help. Now provide the response and nothing else.
The default migration generator does not handle default values (column modifiers are supported but do not include default or null), but you could create your own generator. You can also manually update the migration file prior to running rake db:migrate by adding the options to add_column:

    add_column :tweet, :retweets_count, :integer, :null => false, :default => 0

... and read the Rails API.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6167994', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/667210/']}
jdg_85883
stackexchange
llm_judgeable_groundtruth_similarity
6718672
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was trying to fix a problem in git and accidentally used git reset --hard to some previous commit. So now I cannot get back to the old head. However I did clone the repository before I did this, and so I pushed the missing commits back to the original. This seemed to work yesterday, but today I see that the original is still stuck on an old commit and the new ones seemingly don't exist.Trying to push the new commits from the clone again don't work as git tells me everything is up to date. How do I fix this? Now provide the response and nothing else.
To get your HEAD back in the right place:

1. git reflog to get a list of where HEAD has been lately.
2. git show <sha1> to find the spot you want your HEAD to be.
3. Once you find the commit you want, git merge it to get your master back into the right spot.

Some explanation: in a git commit there is nothing pointing one commit to the one that happened after it. When you reset the HEAD, you pointed it to an older commit. Your previous head is now dangling without anything pointing to it. We use reflog to see where HEAD has been lately. Once it is set back to where you want it, you point the master, or some other branch, back to that spot and all is well!
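To make the recovery concrete, here is a throwaway-repository walkthrough (the directory, file names, and commit messages are invented for the demo; it assumes git is installed):

```shell
#!/bin/sh
set -e

dir=/tmp/reflog-demo
rm -rf "$dir" && mkdir -p "$dir" && cd "$dir"

git init -q .
git config user.email demo@example.com
git config user.name "Demo User"

echo first  > notes.txt && git add notes.txt && git commit -q -m "first"
echo second > notes.txt && git commit -q -a -m "second"

git reset -q --hard HEAD~1        # oops: commit "second" is now dangling

git reflog                        # step 1: see where HEAD has been
lost=$(git rev-parse 'HEAD@{1}')  # the position just before the reset

git merge -q "$lost"              # step 3: fast-forward back to "second"
grep second notes.txt             # the lost work is back
```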
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6718672', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/847930/']}
jdg_85884
stackexchange
llm_judgeable_groundtruth_similarity
57173
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If we have a feasible space for a given LPP (linear programming problem), how is it that its optimum solution lies on one of the corner points of the graphical solution? (I am here concerned only with those LPP's which have a graphical solution with more than one corner/end point.) I was asked to take this as a lemma in the class, but got curious about the proof. Any help is sincerely appreciated. Now provide the response and nothing else.
In the two-dimensional case, linear optimization (linear programming) is specified as follows: find the values $(x, y)$ such that the goal function $$g(x, y) = a x + b y \;\;\; (Eq. 1)$$ is maximized (or minimized) subject to the linear inequalities $$a_1 x + b_1 y + c_1 \ge 0 \;\; (or \le 0) $$$$a_2 x + b_2 y + c_2 \ge 0 \;\; (or \le 0) $$$$ ... $$ Each of these linear inequalities defines a half-plane bounded by the line obtained by replacing the inequality by equality. The solution $(x, y)$ that maximizes the goal function must lie in the intersection of all these half-planes, which is obviously a convex polygon. This polygon is called the feasible region. Let the value of the goal function at a point $(x, y)$ of the feasible region be $m$: $$g(x, y) = a x + b y = m \;\;\; (Eq. 2)$$ The value $m$ of the goal function will obviously not change when we move $(x, y)$ on the line defined by (Eq. 2). But the value of $g()$ will be increased when we increase $m$. This leads to a new line which is parallel to (Eq. 2). We can do this as long as the line contains at least one point of the feasible region. We conclude that the maximum of the goal function is achieved at an extreme point of the feasible region which - for a convex polygon - is a vertex (or an edge when the goal line is parallel to the restriction line going through the extreme vertex).
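A brute-force illustration in Python (the polygon and goal function are my own toy example, not from the question): a dense grid search over the feasible region never beats the best vertex value, in line with the lemma:

```python
import itertools

def g(x, y):
    # goal function g(x, y) = 3x + 2y
    return 3 * x + 2 * y

def feasible(x, y):
    # half-plane constraints: x >= 0, y >= 0, x + y <= 4, x <= 3
    return x >= 0 and y >= 0 and x + y <= 4 and x <= 3

# the vertices of that convex polygon
vertices = [(0, 0), (3, 0), (3, 1), (0, 4)]
best_vertex = max(g(x, y) for x, y in vertices)

# dense grid over the bounding box of the region
grid = [i / 100 for i in range(0, 401)]
best_grid = max(g(x, y) for x, y in itertools.product(grid, grid) if feasible(x, y))

print(best_vertex, best_grid)  # 11 11.0: the optimum is attained at the vertex (3, 1)
```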
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/57173', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/15195/']}
jdg_85885
stackexchange
llm_judgeable_groundtruth_similarity
33170109
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to understand the difference between the Optional<T>.orElse() and Optional<T>.orElseGet() methods. The description for the orElse() method is "Return the value if present, otherwise return other." While, the description for the orElseGet() method is "Return the value if present, otherwise invoke other and return the result of that invocation." The orElseGet() method takes a Supplier functional interface, which essentially does not take any parameters and returns T . In which situation would you need to use orElseGet() ? If you have a method T myDefault() why wouldn't you just do optional.orElse(myDefault()) rather than optional.orElseGet(() -> myDefault()) ? It does not seem that orElseGet() is postponing the execution of the lambda expression to some later time or something, so what's the point of it? (I would have thought that it would be more useful if it returned a safer Optional<T> whose get() never throws a NoSuchElementException and isPresent() always returns true... but evidently its not, it just returns T like orElse() ). Is there some other difference I am missing? Now provide the response and nothing else.
Take these two scenarios:

    Optional<Foo> opt = ...
    Foo x = opt.orElse( new Foo() );
    Foo y = opt.orElseGet( Foo::new );

If opt doesn't contain a value, the two are indeed equivalent. But if opt does contain a value, how many Foo objects will be created?

P.S.: of course in this example the difference probably wouldn't be measurable, but if you have to obtain your default value from a remote web service, for example, or from a database, it suddenly becomes very important.
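One way to see the difference concretely (a sketch; the class and counter names are invented for the demo) is to count constructor calls:

```java
import java.util.Optional;

public class OrElseDemo {
    static int constructed = 0;

    static class Foo {
        Foo() { constructed++; }
    }

    static int demo() {
        constructed = 0;
        Optional<Foo> opt = Optional.of(new Foo()); // 1st construction
        opt.orElse(new Foo());     // argument evaluated eagerly: 2nd construction, then discarded
        opt.orElseGet(Foo::new);   // supplier never invoked, no construction
        return constructed;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 2
    }
}
```

So when the value is present, orElse still pays for building the default, while orElseGet does not.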
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/33170109', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/340088/']}
jdg_85886
stackexchange
llm_judgeable_groundtruth_similarity
200500
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We already have multiple threads tagged as p-values that reveal lots of misunderstandings about them. Ten months ago we had a thread about psychological journal that "banned" $p$-values , now American Statistical Association (2016) says that with our analysis we "should not end with the calculation of a $p$-value". American Statistical Association (ASA) believes that the scientific community could benefit from a formal statement clarifying several widely agreed upon principles underlying the proper use and interpretation of the $p$-value. The committee lists other approaches as possible alternatives or supplements to $p$-values: In view of the prevalent misuses of and misconceptions concerning $p$-values, some statisticians prefer to supplement or even replace $p$-values with other approaches. These include methods that emphasize estimation over testing, such as confidence, credibility, or prediction intervals; Bayesian methods; alternative measures of evidence, such as likelihood ratios or Bayes Factors; and other approaches such as decision-theoretic modeling and false discovery rates. All these measures and approaches rely on further assumptions, but they may more directly address the size of an effect (and its associated uncertainty) or whether the hypothesis is correct. So let's imagine post-$p$-values reality. ASA lists some methods that can be used in place of $p$-values, but why are they better? Which of them can be real-life replacement for a researcher that used $p$-values for all his life? I imagine that this kind of questions will appear in post-$p$-values reality, so maybe let's try to be one step ahead of them. What is the reasonable alternative that can be applied out-of-the-box? Why this approach should convince your lead researcher, editor, or readers? 
As this follow-up blog entry suggests, $p$-values are unbeatable in their simplicity: The p-value requires only a statistical model for the behavior of a statistic under the null hypothesis to hold. Even if a model of an alternative hypothesis is used for choosing a “good” statistic (which would be used for constructing the p-value), this alternative model does not have to be correct in order for the p-value to be valid and useful (i.e.: control type I error at the desired level while offering some power to detect a real effect). In contrast, other (wonderful and useful) statistical methods such as Likelihood ratios, effect size estimation, confidence intervals, or Bayesian methods all need the assumed models to hold over a wider range of situations, not merely under the tested null. Are they, or maybe it is not true and we can easily replace them? I know, this is broad, but the main question is simple: what is the best (and why), real-life alternative to $p$-values that can be used as a replacement? ASA (2016). ASA Statement on Statistical Significance and $P$-values. The American Statistician. (in press) Now provide the response and nothing else.
I will focus this answer on the specific question of what are the alternatives to $p$ -values. There are 21 discussion papers published along with the ASA statement (as Supplemental Materials): by Naomi Altman, Douglas Altman, Daniel J. Benjamin, Yoav Benjamini, Jim Berger, Don Berry, John Carlin, George Cobb, Andrew Gelman, Steve Goodman, Sander Greenland, John Ioannidis, Joseph Horowitz, Valen Johnson, Michael Lavine, Michael Lew, Rod Little, Deborah Mayo, Michele Millar, Charles Poole, Ken Rothman, Stephen Senn, Dalene Stangl, Philip Stark and Steve Ziliak (some of them wrote together; I list all for future searches). These people probably cover all existing opinions about $p$ -values and statistical inference. I have looked through all 21 papers. Unfortunately, most of them do not discuss any real alternatives, even though the majority are about the limitations, misunderstandings, and various other problems with $p$ -values (for a defense of $p$ -values, see Benjamini, Mayo, and Senn). This already suggests that alternatives, if any, are not easy to find and/or to defend. So let us look at the list of "other approaches" given in the ASA statement itself (as quoted in your question): [Other approaches] include methods that emphasize estimation over testing, such as confidence, credibility, or prediction intervals; Bayesian methods; alternative measures of evidence, such as likelihood ratios or Bayes Factors; and other approaches such as decision-theoretic modeling and false discovery rates. Confidence intervals Confidence intervals are a frequentist tool that goes hand-in-hand with $p$ -values; reporting a confidence interval (or some equivalent, e.g., mean $\pm$ standard error of the mean) together with the $p$ -value is almost always a good idea. Some people (not among the ASA disputants) suggest that confidence intervals should replace the $p$ -values.
One of the most outspoken proponents of this approach is Geoff Cumming who calls it new statistics (a name that I find appalling). See e.g. this blog post by Ulrich Schimmack for a detailed critique: A Critical Review of Cumming’s (2014) New Statistics: Reselling Old Statistics as New Statistics. See also We cannot afford to study effect size in the lab blog post by Uri Simonsohn for a related point. See also this thread (and my answer therein) about the similar suggestion by Norm Matloff where I argue that when reporting CIs one would still like to have the $p$ -values reported as well: What is a good, convincing example in which p-values are useful? Some other people (not among the ASA disputants either), however, argue that confidence intervals, being a frequentist tool, are as misguided as $p$ -values and should also be disposed of. See, e.g., Morey et al. 2015, The Fallacy of Placing Confidence in Confidence Intervals linked by @Tim here in the comments. This is a very old debate. Bayesian methods (I don't like how the ASA statement formulates the list. Credible intervals and Bayes factors are listed separately from "Bayesian methods", but they are obviously Bayesian tools. So I count them together here.) There is a huge and very opinionated literature on the Bayesian vs. frequentist debate. See, e.g., this recent thread for some thoughts: When (if ever) is a frequentist approach substantively better than a Bayesian? Bayesian analysis makes total sense if one has good informative priors, and everybody would be only happy to compute and report $p(\theta|\text{data})$ or $p(H_0:\theta=0|\text{data})$ instead of $p(\text{data at least as extreme}|H_0)$ —but alas, people usually do not have good priors.
An experimenter records 20 rats doing something in one condition and 20 rats doing the same thing in another condition; the prediction is that the performance of the former rats will exceed the performance of the latter rats, but nobody would be willing or indeed able to state a clear prior over the performance differences. (But see @FrankHarrell's answer where he advocates using "skeptical priors".) Die-hard Bayesians suggest using Bayesian methods even if one does not have any informative priors. One recent example is Kruschke, 2012, Bayesian estimation supersedes the $t$ -test, humbly abbreviated as BEST. The idea is to use a Bayesian model with weak uninformative priors to compute the posterior for the effect of interest (such as, e.g., a group difference). The practical difference with frequentist reasoning seems usually to be minor, and as far as I can see this approach remains unpopular. See What is an "uninformative prior"? Can we ever have one with truly no information? for the discussion of what is "uninformative" (answer: there is no such thing, hence the controversy). An alternative approach, going back to Harold Jeffreys, is based on Bayesian testing (as opposed to Bayesian estimation) and uses Bayes factors. One of the more eloquent and prolific proponents is Eric-Jan Wagenmakers, who has published a lot on this topic in recent years. Two features of this approach are worth emphasizing here. First, see Wetzels et al., 2012, A Default Bayesian Hypothesis Test for ANOVA Designs for an illustration of just how strongly the outcome of such a Bayesian test can depend on the specific choice of the alternative hypothesis $H_1$ and the parameter distribution ("prior") it posits. Second, once a "reasonable" prior is chosen (Wagenmakers advertises Jeffreys' so-called "default" priors), resulting Bayes factors often turn out to be quite consistent with the standard $p$ -values, see e.g.
this figure from this preprint by Marsman & Wagenmakers: So while Wagenmakers et al. keep insisting that $p$ -values are deeply flawed and Bayes factors are the way to go, one cannot but wonder... (To be fair, the point of Wetzels et al. 2011 is that for $p$ -values close to $0.05$ Bayes factors only indicate very weak evidence against the null; but note that this can be easily dealt with in a frequentist paradigm simply by using a more stringent $\alpha$ , something that a lot of people are advocating anyway.) One of the more popular papers by Wagenmakers et al. in the defense of Bayes factors is 2011, Why psychologists must change the way they analyze their data: The case of psi where he argues that Bem's infamous paper on predicting the future would not have reached its faulty conclusions if only it had used Bayes factors instead of $p$ -values. See this thoughtful blog post by Ulrich Schimmack for a detailed (and IMHO convincing) counter-argument: Why Psychologists Should Not Change The Way They Analyze Their Data: The Devil is in the Default Prior. See also The Default Bayesian Test is Prejudiced Against Small Effects blog post by Uri Simonsohn. For completeness, I mention that Wagenmakers 2007, A practical solution to the pervasive problems of $p$ -values suggested using BIC as an approximation to the Bayes factor to replace the $p$ -values. BIC does not depend on the prior and hence, despite its name, is not really Bayesian; I am not sure what to think about this proposal. It seems that more recently Wagenmakers is more in favour of Bayesian tests with uninformative Jeffreys' priors, see above. For further discussion of Bayes estimation vs. Bayesian testing, see Bayesian parameter estimation or Bayesian hypothesis testing? and links therein. Minimum Bayes factors Among the ASA disputants, this is explicitly suggested by Benjamin & Berger and by Valen Johnson (the only two papers that are all about suggesting a concrete alternative).
Their specific suggestions are a bit different but they are similar in spirit. The ideas of Berger go back to the Berger & Sellke 1987 and there is a number of papers by Berger, Sellke, and collaborators up until last year elaborating on this work. The idea is that under a spike and slab prior where point null $\mu=0$ hypothesis gets probability $0.5$ and all other values of $\mu$ get probability $0.5$ spread symmetrically around $0$ ("local alternative"), then the minimal posterior $p(H_0)$ over all local alternatives, i.e. the minimal Bayes factor , is much higher than the $p$ -value. This is the basis of the (much contested) claim that $p$ -values "overstate the evidence" against the null. The suggestion is to use a lower bound on Bayes factor in favour of the null instead of the $p$ -value; under some broad assumptions this lower bound turns out to be given by $-ep\log(p)$ , i.e., the $p$ -value is effectively multiplied by $-e\log(p)$ which is a factor of around $10$ to $20$ for the common range of $p$ -values. This approach has been endorsed by Steven Goodman too. Later update: See a nice cartoon explaining these ideas in a simple way. Even later update: See Held & Ott, 2018, On $p$ -Values and Bayes Factors for a comprehensive review and further analysis of converting $p$ -values to minimum Bayes factors. Here is one table from there: Valen Johnson suggested something similar in his PNAS 2013 paper ; his suggestion approximately boils down to multiplying $p$ -values by $\sqrt{-4\pi\log(p)}$ which is around $5$ to $10$ . For a brief critique of Johnson's paper, see Andrew Gelman's and @Xi'an's reply in PNAS. For the counter-argument to Berger & Sellke 1987, see Casella & Berger 1987 (different Berger!). Among the APA discussion papers, Stephen Senn argues explicitly against any of these approaches: Error probabilities are not posterior probabilities. 
Certainly, there is much more to statistical analysis than $P$ -values but they should be left alone rather than being deformed in some way to become second class Bayesian posterior probabilities. See also references in Senn's paper, including the ones to Mayo's blog. The ASA statement lists "decision-theoretic modeling and false discovery rates" as another alternative. I have no idea what they are talking about, and I was happy to see this stated in the discussion paper by Stark: The "other approaches" section ignores the fact that the assumptions of some of those methods are identical to those of $p$ -values. Indeed, some of the methods use $p$ -values as input (e.g., the False Discovery Rate). I am highly skeptical that there is anything that can replace $p$ -values in actual scientific practice such that the problems that are often associated with $p$ -values (replication crisis, $p$ -hacking, etc.) would go away. Any fixed decision procedure, e.g. a Bayesian one, can probably be "hacked" in the same way as $p$ -values can be $p$ -hacked (for some discussion and demonstration of this see this 2014 blog post by Uri Simonsohn). To quote from Andrew Gelman's discussion paper: In summary, I agree with most of the ASA’s statement on $p$ -values but I feel that the problems are deeper, and that the solution is not to reform $p$ -values or to replace them with some other statistical summary or threshold, but rather to move toward a greater acceptance of uncertainty and embracing of variation. And from Stephen Senn: In short, the problem is less with $P$ -values per se but with making an idol of them. Substituting another false god will not help. And here is how Cohen put it in his well-known and highly-cited (3.5k citations) 1994 paper The Earth is round ( $p<0.05$ ), where he argued very strongly against $p$ -values: [...] don't look for a magic alternative to NHST, some other objective mechanical ritual to replace it. It doesn't exist.
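As a purely numerical footnote to the minimum-Bayes-factor proposals discussed above (this snippet is my own illustration, not part of any of the cited papers): both calibrations are one-line formulas, the Berger-Sellke style lower bound $-ep\log(p)$ and the Johnson-style conversion $p\sqrt{-4\pi\log(p)}$, so it is easy to tabulate how they translate common $p$ -values.

```python
import math

def min_bf_berger(p):
    """Berger-Sellke style lower bound on the Bayes factor for H0: -e * p * ln(p).

    Valid for p < 1/e; smaller values mean stronger evidence against H0.
    """
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("bound applies only for 0 < p < 1/e")
    return -math.e * p * math.log(p)

def min_bf_johnson(p):
    """Johnson (2013) style calibration: multiply p by sqrt(-4 * pi * ln(p))."""
    if not 0.0 < p < 1.0:
        raise ValueError("need 0 < p < 1")
    return p * math.sqrt(-4.0 * math.pi * math.log(p))

for p in (0.05, 0.01, 0.005, 0.001):
    print(f"p = {p:<6}  Berger bound = {min_bf_berger(p):.4f}"
          f"  Johnson calibration = {min_bf_johnson(p):.4f}")
```

For $p=0.05$ the Berger bound is about $0.41$ and the Johnson calibration about $0.31$; i.e., the multipliers $-e\log(p)$ and $\sqrt{-4\pi\log(p)}$ come out around $8$ and $6$ here, consistent with the "around $10$ to $20$" and "around $5$ to $10$" ranges quoted above.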
{}
{'log_upvote_score': 8, 'links': ['https://stats.stackexchange.com/questions/200500', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/35989/']}
jdg_85887
stackexchange
llm_judgeable_groundtruth_similarity
225076
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: A partition regular system is a linear system of equations of the form $A\cdot x=0$, which satisfies a Ramsey-type result (namely, that for each $r>0$, whenever we colour the integers in $r$ classes, there is a class which contains a monochromatic solution). The well-known Rado's theorem gives a characterization of such matrices, but we are not interested in this here. A strengthening of this notion is density regularity: a matrix $A$ is density regular if for every $\varepsilon>0$ and $Y\subset \{1,\dots,n\}$ with $|Y| \geq \varepsilon n$, the set $Y$ contains a solution to the equation $A\cdot x=0$. This is the strong counterpart of Rado's theorem, and it was proven by Frankl, Rödl and Graham that $A$ is density regular iff the vector $x=(1,\dots, 1)$ is a solution of the system (namely, the column vectors of $A$ sum to $0$). For instance, the matrix equation associated to $k$-APs satisfies this condition, so Szemerédi's theorem is covered by this result. After this comes my question. Take the Schur equation (x+y=z), which is partition regular but NOT density regular. Easily, there are sets of linear size (for instance, take the odd numbers) which are solution-free. However, it is very easy to show that if $X\subset \{1,\dots,n\}$ satisfies $|X|\geq \left({\frac{1}{2}+\varepsilon}\right) n$, then $X$ contains a Schur triple. My question is the following: is a similar result true for general partition regular systems? In other words, is the following statement true: let $A$ be a partition regular system. Then there exists $C:=C(A)<1$ such that for $n$ large enough every subset $X\subset \{1,\dots, n\}$ with $|X|> C n$ contains a solution to the equation $A \cdot x=0$. Now provide the response and nothing else.
This doesn't have anything to do with partition regularity: There is such a constant $C(A)<1$ provided only that there exists at least one solution to $Ax=0$ in positive integers. Indeed suppose $x = (x_1,\dots,x_m)$ is a solution. Then $jx = (jx_1,\dots,jx_m)$ is a solution for each $j\geq 1$. Now take a large integer $n$ much larger than $m$ and $\max x_i$ and allow $j$ to range between $1$ and $J = \lfloor n/\max x_i\rfloor $. Every element of $\{1,\dots,n\}$ appears in at most $m$ of the solutions $x,2x,\dots,Jx$, so if none of $x,2x,\dots,Jx$ is wholly contained in our set $X$ then $X$ is missing at least $J/m$ elements, so $$|X| \leq \left(1-\frac{1}{m \max x_i}\right) n + 1.$$
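To see the counting argument in action, here is a small Python sketch (my own illustration; the scaling trick and the size bound are exactly those of the argument above, instantiated for the Schur equation $x+y=z$ with the particular solution $x=(1,1,2)$, so $m=3$ and $\max x_i = 2$). Note that it only searches the one-parameter family $jx$, which is all the argument needs.

```python
def find_scaled_copy(X, sol):
    """Return j >= 1 such that every j*s (s in sol) lies in the set X, or None."""
    n = max(X) if X else 0
    for j in range(1, n // max(sol) + 1):
        if all(j * s in X for s in sol):
            return j
    return None

def density_bound(n, sol):
    """Size bound (1 - 1/(m * max s)) * n + 1 from the counting argument."""
    m, mx = len(sol), max(sol)
    return (1.0 - 1.0 / (m * mx)) * n + 1

sol = (1, 1, 2)          # x = y = 1, z = 2 solves x + y = z
n = 600

# The odd numbers avoid every scaled copy (j, j, 2j), as in the question
# (2j is always even), so the search comes back empty:
odds = set(range(1, n + 1, 2))
print(find_scaled_copy(odds, sol))

# But any X exceeding the bound must contain some scaled copy:
X = set(range(1, n + 1)) - {5, 17, 123}
assert len(X) > density_bound(n, sol)
print(find_scaled_copy(X, sol))
```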
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/225076', 'https://mathoverflow.net', 'https://mathoverflow.net/users/46573/']}
jdg_85888
stackexchange
llm_judgeable_groundtruth_similarity
27730337
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a T* addressing a buffer with len elements of type T . I need this data in the form of an std::vector<T> , for certain reasons. As far as I can tell, I cannot construct a vector which uses my buffer as its internal storage. Why is that? Notes: Please don't suggest I use iterators - I know that's usually the way around such issues. I don't mind the vector having to copy data around if it's resized later. This question especially baffles me now that C++ has move semantics. If we can pull an object's storage from under its feet, why not be able to shove in our own? Now provide the response and nothing else.
You can. You write about std::vector<T>, but std::vector takes two template arguments, not just one. The second template argument specifies the allocator type to use, and vector's constructors have overloads that allow passing in a custom instance of that allocator type. So all you need to do is write an allocator that uses your own internal buffer where possible, and falls back to asking the default allocator when your own internal buffer is full. The default allocator cannot possibly hope to handle it, since it would have no clue on which bits of memory can be freed and which cannot. A sample stateful allocator with an internal buffer containing already-constructed elements that should not be overwritten by the vector, including a demonstration of a big gotcha:

struct my_allocator_state {
    void *buf;
    std::size_t len;
    bool bufused;
    const std::type_info *type;
};

template <typename T>
struct my_allocator {
    typedef T value_type;

    my_allocator(T *buf, std::size_t len)
        : def(), state(std::make_shared<my_allocator_state, my_allocator_state>({ buf, len, false, &typeid(T) })) { }

    template <std::size_t N>
    my_allocator(T(&buf)[N])
        : def(), state(std::make_shared<my_allocator_state, my_allocator_state>({ buf, N, false, &typeid(T) })) { }

    template <typename U>
    friend struct my_allocator;

    template <typename U>
    my_allocator(my_allocator<U> other) : def(), state(other.state) { }

    T *allocate(std::size_t n) {
        if (!state->bufused && n == state->len && typeid(T) == *state->type) {
            state->bufused = true;
            return static_cast<T *>(state->buf);
        } else
            return def.allocate(n);
    }

    void deallocate(T *p, std::size_t n) {
        if (p == state->buf)
            state->bufused = false;
        else
            def.deallocate(p, n);
    }

    template <typename...Args>
    void construct(T *c, Args... args) {
        if (!in_buffer(c))
            def.construct(c, std::forward<Args>(args)...);
    }

    void destroy(T *c) {
        if (!in_buffer(c))
            def.destroy(c);
    }

    friend bool operator==(const my_allocator &a, const my_allocator &b) {
        return a.state == b.state;
    }

    friend bool operator!=(const my_allocator &a, const my_allocator &b) {
        return a.state != b.state;
    }

private:
    std::allocator<T> def;
    std::shared_ptr<my_allocator_state> state;

    bool in_buffer(T *p) {
        return *state->type == typeid(T)
            && points_into_buffer(p, static_cast<T *>(state->buf), state->len);
    }
};

int main()
{
    int buf [] = { 1, 2, 3, 4 };
    std::vector<int, my_allocator<int>> v(sizeof buf / sizeof *buf, {}, buf);
    v.resize(3);
    v.push_back(5);
    v.push_back(6);
    for (auto &i : v)
        std::cout << i << std::endl;
}

Output:

1
2
3
4
6

The push_back of 5 fits into the old buffer, so construction is bypassed. When 6 is added, new memory is allocated, and everything starts acting as normal. You could avoid that problem by adding a method to your allocator to indicate that from that point onward, construction should not be bypassed any longer. points_into_buffer turned out to be the hardest part to write, and I've omitted that from my answer. The intended semantics should be obvious from how I'm using it. Please see my question here for a portable implementation in my answer there, or if your implementation allows it, use one of the simpler versions in that other question. By the way, I'm not really happy with how some implementations use rebind in such ways that there is no avoiding storing run-time type info along with the state, but if your implementation doesn't need that, you could make it a bit simpler by making the state a template class (or a nested class) too.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27730337', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1593077/']}
jdg_85889
stackexchange
llm_judgeable_groundtruth_similarity
311131
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Just a quick and supposedly simple question which I don't understand how to do. How would you prove $$S_n\to 0 \iff|S_n|\to 0$$ Now provide the response and nothing else.
Just play with the definition:$$\begin{align}\lim\limits_{n\to\infty}S_n=0&\Longleftrightarrow(\forall\varepsilon>0\quad\exists N\in\mathbb{N}\quad\forall n>N\implies|S_n-0|\leq\varepsilon)\\&\Longleftrightarrow(\forall\varepsilon>0\quad\exists N\in\mathbb{N}\quad\forall n>N\implies||S_n|-0|\leq\varepsilon)\Longleftrightarrow\lim\limits_{n\to\infty}|S_n|=0\end{align}$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/311131', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/26864/']}
jdg_85890
stackexchange
llm_judgeable_groundtruth_similarity
664877
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My project is in C, CodeBlocks is my IDE and I'm running on Windows Vista. I need to apply a Butterworth filter to my data. I could do this automatically with Matlab, as it had this (and other) filters as built-in functions. Computers and programming aren't exactly my area and I have never "imported" a new library, and don't know how to do this. Is it complicated? And what would be best to do? Write the code for the Butterworth filter as a function? Or "import" a library to do this for me? (Personally, if it isn't that complicated, I'd prefer to get a library because it would probably have other types of filters I could also test.) Now provide the response and nothing else.
We had to apply a Butterworth Filter to some data on a project that I worked on recently. (Pressure measurements that drift over time because the measuring instrument heats up.) It's actually even simpler than including a library. You really just need to include a single function and add it to whatever C file you need to use it in. This is the site that we used to generate our filter: http://www-users.cs.york.ac.uk/~fisher/mkfilter/ If you specify the parameters, it will generate the function for you. Here is an example of the function we used in our application, based on the code generated by the above website. (We typedef DOUBLE because it's being developed on a PC, but targeted for an embedded platform -- we need to make sure that the size doesn't change.) I love reading this site, glad to finally have something to contribute!

/* Butterworth filter constants */
#define NZEROS 4
#define NPOLES 4

static DOUBLE xv[NZEROS+1], yv[NPOLES+1];

/* Sampling rate 3000 for testing existing */
/* raw data. Change to 300 for final product */
#define SAMPLING_RATE 3000

/*******************************************************************************
 * Function:    ButterworthFilter
 *
 * Description: This function uses the Butterworth filter and returns a new
 *              value for an individual floating point value.
 *
 * Access:      PRIVATE
 *
 * Parameters:  DOUBLE input - value to be converted
 *
 * Globals:     None
 *
 * Returns:     DOUBLE - new value that has been converted
 ******************************************************************************/
static DOUBLE ButterworthFilter (DOUBLE input)
{
    /* http://www-users.cs.york.ac.uk/~fisher/cgi-bin/mkfscript */
    /* Butterworth Bandpass filter */
    /* 2nd order */
    /* sample rate - choice of 300 or 3000 Hz */
    /* corner1 freq. = 0.5 or 1 Hz */
    /* corner2 freq. = 20 Hz */
    /* removes high and low frequency noise */

    DOUBLE dCoefficient1 = 0.0;
    DOUBLE dCoefficient2 = 0.0;
    DOUBLE dCoefficient3 = 0.0;
    DOUBLE dCoefficient4 = 0.0;
    DOUBLE dCoefficient5 = 0.0;
    DOUBLE dGain = 0.0;

    /* coefficients will vary depending on sampling rate */
    /* and cornering frequencies */
    switch(SAMPLING_RATE)
    {
    case 300:
        /* 1 to 20 Hz */
        dCoefficient1 = 2.0;
        dCoefficient2 = -0.5698403540;
        dCoefficient3 = 2.5753677309;
        dCoefficient4 = -4.4374523505;
        dCoefficient5 = 3.4318654424;
        dGain = 3.198027802e+01;
        break;

    case 3000:
    default:
        /* 0.5 to 20 Hz */
        dCoefficient1 = 2.0;
        dCoefficient2 = -0.9438788347;
        dCoefficient3 = 3.8299315572;
        dCoefficient4 = -5.8282241502;
        dCoefficient5 = 3.9421714258;
        dGain = 2.406930558e+03;
        break;
    }

    xv[0] = xv[1]; xv[1] = xv[2]; xv[2] = xv[3]; xv[3] = xv[4];
    xv[4] = (DOUBLE)(input / dGain);

    yv[0] = yv[1]; yv[1] = yv[2]; yv[2] = yv[3]; yv[3] = yv[4];
    yv[4] = (DOUBLE)((xv[0] + xv[4]) - (dCoefficient1 * xv[2])
                     + (dCoefficient2 * yv[0]) + (dCoefficient3 * yv[1])
                     + (dCoefficient4 * yv[2]) + (dCoefficient5 * yv[3]));

    return (yv[4]);
}
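If you want to sanity-check the generated coefficients without setting up a C build, here is a direct Python transcription of the routine's 3000 Hz branch (my own port; the constants are copied verbatim from the C code above, and the port is an illustration rather than something tested against the original project):

```python
# Python transcription of the ButterworthFilter() C routine above
# (3000 Hz / 0.5 to 20 Hz branch only; constants copied verbatim).

NZEROS = NPOLES = 4

class ButterworthFilter:
    GAIN = 2.406930558e+03
    # Coefficients from the C switch branch for SAMPLING_RATE == 3000.
    C1 = 2.0
    C2 = -0.9438788347
    C3 = 3.8299315572
    C4 = -5.8282241502
    C5 = 3.9421714258

    def __init__(self):
        # Per-instance state instead of the C file-scope statics.
        self.xv = [0.0] * (NZEROS + 1)
        self.yv = [0.0] * (NPOLES + 1)

    def step(self, sample):
        xv, yv = self.xv, self.yv
        xv[0:4] = xv[1:5]                 # shift the input delay line
        xv[4] = sample / self.GAIN
        yv[0:4] = yv[1:5]                 # shift the output delay line
        yv[4] = ((xv[0] + xv[4]) - self.C1 * xv[2]
                 + self.C2 * yv[0] + self.C3 * yv[1]
                 + self.C4 * yv[2] + self.C5 * yv[3])
        return yv[4]

f = ButterworthFilter()
out = [f.step(s) for s in [1.0] * 8]
print(out[0])   # first output is input/GAIN, since all state starts at zero
```

Because all state starts at zero, the very first output is simply input/GAIN, which gives a cheap smoke test that the difference equation was transcribed correctly.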
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/664877', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_85891
stackexchange
llm_judgeable_groundtruth_similarity
11065821
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

type
  TSomeRecord = record
    field1: integer;
    field2: string;
    field3: boolean;
  end;

var
  SomeRecord: TSomeRecord;
  SomeRecAr: array of TSomeRecord;

This is the most basic example of what I have, and since I want to reuse SomeRecord (with certain fields remaining empty, because without freeing everything some fields would be carried over when I'm reusing SomeRecord, which is obviously undesired) I am looking for a way to free all of the fields at once. I've started out with string[255] and used ZeroMemory(), which was fine until it started leaking memory; that was because I switched to string. I still lack the knowledge to get why, but it appears to be related to it being dynamic. I am using dynamic arrays as well, so I assume that trying ZeroMemory() on anything dynamic would result in leaks. One day wasted figuring that out. I think I solved this by using Finalize() on SomeRecord or SomeRecAr before ZeroMemory(), but I'm not sure if this is the proper approach or just me being stupid. So the question is: how to free everything at once? Does some single procedure exist at all for this that I'm not aware of? On a different note, alternatively I would be open to suggestions on how to implement these records differently to begin with, so I don't need to make complicated attempts at freeing stuff. I've looked into creating records with New() and then getting rid of them with Dispose(), but I have no idea what it means when a variable after a call to Dispose() is undefined, instead of nil. In addition, I don't know what the difference is between a variable of a certain type ( SomeRecord: TSomeRecord ) versus a variable pointing to a type ( SomeRecord: ^TSomeRecord ). I'm looking into the above issues at the moment; unless someone can explain it quickly, it might take some time. Now provide the response and nothing else.
Assuming you have a Delphi version that supports implementing methods on a record, you could clear a record like this:

type
  TSomeRecord = record
    field1: integer;
    field2: string;
    field3: boolean;
    procedure Clear;
  end;

procedure TSomeRecord.Clear;
begin
  Self := Default(TSomeRecord);
end;

If your compiler doesn't support Default then you can do the same quite simply like this:

procedure TSomeRecord.Clear;
const
  Default: TSomeRecord = ();
begin
  Self := Default;
end;

You might prefer to avoid mutating a value type in a method. In which case create a function that returns an empty record value, and use it with the assignment operator:

type
  TSomeRecord = record
    // fields go here
    class function Empty: TSomeRecord; static;
  end;

class function TSomeRecord.Empty: TSomeRecord;
begin
  Result := Default(TSomeRecord);
end;

....
Value := TSomeRecord.Empty;

As an aside, I cannot find any documentation reference for Default(TypeIdentifier). Does anyone know where it can be found? As for the second part of your question, I see no reason not to continue using records, and allocating them using dynamic arrays. Attempting to manage the lifetime yourself is much more error prone.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11065821', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1120113/']}
jdg_85892
stackexchange
llm_judgeable_groundtruth_similarity
3520780
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm a bit confused as to when the window.onload event is fired. For example: I have a page that has lots of external js files and even on-demand loading of scripts (dynamically creating a script tag). All of this code is in the page (i.e. it does not get fired on click or something, it should execute while loading). Now let's say that I have window.onload = somefunction() in the last on-demand javascript. Is it possible that window.onload will fire before all the scripts actually get loaded? Now provide the response and nothing else.
window.onload (a.k.a body.onload ) gets fired after the main HTML, all CSS, all images and all other resources have been loaded and rendered. So if your images stall, that can take some time. If you just need the HTML (DOM), you can use jQuery's $(document).ready() - it will fire when the HTML has been parsed but before the browser has finished loading all external resources (images and style sheets that come after your script element in the HTML ). Scripts embedded in the page get executed when the browser parses the </script> of each. So to execute a script before any other script, add a <script> tag in the header just after <head> . This means you can "emulate" window.onload by adding a <script> tag as the last element of the <body> .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3520780', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/422778/']}
jdg_85893
stackexchange
llm_judgeable_groundtruth_similarity
3834256
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given the following interfaces:

interface IEntity
{
    int Id { get; }
}

interface IPerson : IEntity
{
    string Name { get; }
    int Age { get; }
}

interface ITeacher : IPerson
{
    string StaffId { get; }
}

interface IStudent : IPerson
{
    string StudentId { get; }
    string Courses { get; }
}

interface IRepository
{
    T Get<T>(int id) where T : IEntity;
}

I have the following classes in my namespace:

public class EntityBase : IEntity
{
    public int Id { get; set; }
}

public class Teacher : EntityBase, ITeacher { }
public class Student : EntityBase, IStudent { }

Currently I am implementing this IRepository as follows:

class Repository : IRepository
{
    IDataContext Context { get; set; }

    T Get<T>(int id) where T : EntityBase
    {
        if (typeof(T) == typeof(Teacher))
            return Context.Get<ITeacher>(id);
        if (typeof(T) == typeof(Student))
            return Context.Get<IStudent>(id);
        throw new Exception("Unknown Interface " + typeof(T).Name);
    }
}

Is there a better way of implementing this? Given that our Context has no knowledge of our data types (Teacher, Student), just their interfaces (ITeacher, IStudent). Can something like this work?

class Repository : IRepository
{
    T Get<T>(int id) where T : EntityBase
    {
        var MyInterface = FindInterface<T>();
        return Context.Get<MyInterface>(id);
    }
}

Now provide the response and nothing else.
I think this will do:

class Repository : IRepository
{
    IDataContext Context { get; set; }

    T Get<T>(int id) where T : EntityBase
    {
        string[] interfaceList = new string[] { "ITeacher", "IStudent" };
        Type interfaceType = null;
        foreach (string s in interfaceList)
        {
            var types = typeof(T).FindInterfaces((x, y) => x.Name == y.ToString(), s);
            if (types.Length > 0)
                interfaceType = types[0];
        }
        if (interfaceType == null)
            throw new Exception("Unknown Interface " + typeof(T).Name);

        // Build and invoke Context.Get<interfaceType>(id) via reflection.
        MethodInfo method = Context.GetType().GetMethod("Get");
        MethodInfo generic = method.MakeGenericMethod(interfaceType);
        var returnValue = generic.Invoke(Context, new object[] { id });
        return (T)Convert.ChangeType(returnValue, typeof(T));
    }
}

EDIT: As I don't know the name of your namespace, I have used the Name property to filter the interfaces. In real-world usage I would suggest that you use FullName just to be sure, like this:

...
string[] interfaceList = new string[] { "MyNamespace.ITeacher", "MyNamespace.IStudent" };
...
var types = typeof(T).FindInterfaces((x, y) => x.FullName == y.ToString(), s);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3834256', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/93141/']}
jdg_85894
stackexchange
llm_judgeable_groundtruth_similarity
4779442
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an instance of class foo and I want to return it as IEnumerable. Can I do it without creating a new list etc.? Perhaps something like the following: IEnumerable<foo>.fromInstance(foo) Now provide the response and nothing else.
Options:

- Create an instance of a collection class, like an array or a list. This would be mutable by default, which would be slightly unhelpful if this is a sequence you want to be able to hand out in your API. You could create a ReadOnlyCollection<T> wrapper around such a collection though.
- Write your own iterator block as per Botz3000's answer.
- Use Enumerable.Repeat(item, 1) from LINQ, if you're using .NET 3.5.

The best answer here depends on the usage. If you only need this to call another method which uses a sequence, and you know it won't be modified, I'd probably use an array. For example, in order to call Concat on some other sequence, you might want:

var wholeList = regularList.Concat(new[] { finalValue });

I have confidence that Concat isn't going to mutate the array, and nothing else will ever see the reference to the array itself. If you need to return the sequence to some other code, and you don't know what it might do with it, I'd probably use Enumerable.Repeat.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4779442', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/407511/']}
jdg_85895
stackexchange
llm_judgeable_groundtruth_similarity
39060558
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given two sequences, A and B , how can I generate a list of all the possible ways that B can be removed from A ? For example, In JavaScript, if I had a function removeSubSeq taking two array arguments that did what I want, it would work as follows: removeSubSeq([1,2,1,3,1,4,4], [1,4,4]) would return [ [2,1,3,1], [1,2,3,1], [1,2,1,3] ] because the 4s at the end would match, and there are three possible places for the 1 to match removeSubSeq([8,6,4,4], [6,4,8]) would return [] because the second argument is not actually a subsequence removeSubSeq([1,1,2], [1]) would return [ [1,2], [1,2] ] because there's two ways that the 1 could be removed, even though it results in duplicates Now provide the response and nothing else.
This problem can be solved in O(n*m + r) time, where r is the total length of the results, using the classic longest common subsequence algorithm. Once the table is made, as in Wikipedia's example, replace it with a list of the cells with a diagonal arrow that also have a value corresponding with their row. Now traverse backwards from each cell with a diagonal in the last row, accumulating the relevant index in the string and duplicating and splitting the accumulation such that each cell with a diagonal arrow will have a continuation to all cells with a diagonal in the preceding row that are to the left of it (store that count as well, as you build the matrix) and one less in value. When an accumulation reaches a zero cell, splice the accumulated indexes from the string and add that as a result. (The arrows correspond with whether the LCS so far came from LCS(X[i-1],Y[j]) and/or LCS(X[i],Y[j-1]), or LCS(X[i-1],Y[j-1]), see the function definition.) For example:

  0 a  g b  a  b c  c
0 0 0  0 0  0  0 0  0
a 0 ↖1 1 1  ↖1 1 1  1
b 0 1  1 ↖2 2  ↖2 2  2
c 0 1  1 2  2  2 ↖3 ↖3

JavaScript code:

function remove(arr, sub){
  var _arr = [];
  arr.forEach(function(v, i){
    if (!sub.has(i)) _arr.push(arr[i]);
  });
  return _arr;
}

function f(arr, sub){
  var res = [],
      lcs = new Array(sub.length + 1),
      nodes = new Array(sub.length + 1);
  for (var i = 0; i < sub.length + 1; i++){
    nodes[i] = [];
    lcs[i] = [];
    for (var j = 0; j < (i == 0 ? arr.length + 1 : 1); j++){
      // store lcs and node count on the left
      lcs[i][j] = [0, 0];
    }
  }
  for (var i = 1; i < sub.length + 1; i++){
    for (var j = 1; j < arr.length + 1; j++){
      if (sub[i-1] == arr[j-1]){
        lcs[i][j] = [1 + lcs[i-1][j-1][0], lcs[i][j-1][1]];
        if (lcs[i][j][0] == i){
          // [arr index, left node count above]
          nodes[i].push([j - 1, lcs[i-1][j-1][1]]);
          lcs[i][j][1] += 1;
        }
      } else {
        lcs[i][j] = [Math.max(lcs[i-1][j][0], lcs[i][j-1][0]), lcs[i][j-1][1]];
      }
    }
  }
  function enumerate(node, i, accum){
    if (i == 0){
      res.push(remove(arr, new Set(accum)));
      return;
    }
    for (var j = 0; j < node[1]; j++){
      var _accum = accum.slice();
      _accum.push(nodes[i][j][0]);
      enumerate(nodes[i][j], i - 1, _accum);
    }
  }
  nodes[sub.length].forEach(function(v, i){
    enumerate(nodes[sub.length][i], sub.length - 1, [nodes[sub.length][i][0]]);
  });
  return res;
}

console.log(JSON.stringify(f([1,2,1,3,1,4,4], [1,4,4])));
console.log(JSON.stringify(f([8,6,4,4], [6,4,8])));
console.log(JSON.stringify(f([1,1,2], [1])));
console.log(JSON.stringify(f(['a','g','b','a','b','c','c'], ['a','b','c'])));
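For readers who want something to check the LCS-based version against, here is a brute-force reference sketch in Python (my own illustration, not part of the answer above): it enumerates every way the subsequence can be embedded in the array and deletes each embedding. It is exponential in the worst case, unlike the O(n*m + r) table method, so it is only suitable as a correctness oracle on small inputs.

```python
def removals(arr, sub):
    """Return every result of deleting one embedding of `sub` from `arr`."""
    out = []

    def rec(start, j, picked):
        if j == len(sub):
            chosen = set(picked)
            out.append([v for k, v in enumerate(arr) if k not in chosen])
            return
        # try every position where the next subsequence element can match
        for i in range(start, len(arr)):
            if arr[i] == sub[j]:
                rec(i + 1, j + 1, picked + [i])

    rec(0, 0, [])
    return out

print(removals([1, 2, 1, 3, 1, 4, 4], [1, 4, 4]))
# [[2, 1, 3, 1], [1, 2, 3, 1], [1, 2, 1, 3]]
```

The three example cases from the question all come out as expected, including the duplicate results for removals([1,1,2], [1]).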
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39060558', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2646526/']}
jdg_85896
stackexchange
llm_judgeable_groundtruth_similarity
1822243
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\limsup_{n\to\infty} (a_n+b_n)\leq \limsup_{n\to\infty} a_n + \limsup_{n\to\infty} b_n$$ I know there's a proof somewhere on here but I just can't seem to understand it. Can anyone give me an example of why this is true? In my head, I always see it as equal... it's like I take the supremum of the set $A$, take the supremum of the set $B$, and add them up... where am I wrong? Thank you for your support. Now provide the response and nothing else.
Suppose $a_n = (-1)^n$ and $b_n = (-1)^{n+1}$, so we have this:\begin{align}(a_n : n=1,2,3,\ldots) & = \left( \begin{array}{rrrrrrrr} -1, & 1, & -1, & 1, & -1, & 1, & \ldots \end{array} \right) \\(b_n : n=1,2,3,\ldots) & = \left( \begin{array}{rrrrrrrr} 1, & -1, & 1, & -1, & 1, & -1, & \ldots \end{array} \right)\end{align}Then $a_n+b_n=0$ for all $n$, so $\limsup\limits_{n\to\infty} (a_n+b_n) = 0$. But $\limsup\limits_{n\to\infty} a_n + \limsup\limits_{n\to\infty} b_n = 1+1=2$. That shows how the two sides can fail to be equal. Why the opposite inequality cannot hold is another question; maybe I'll come back and add that later.
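The counterexample can also be checked numerically. A small Python sketch (truncating the sequences to a finite prefix; for a periodic sequence the limsup is just the maximum over one period):

```python
N = 100
a = [(-1) ** n for n in range(1, N + 1)]
b = [(-1) ** (n + 1) for n in range(1, N + 1)]

# a_n + b_n is identically 0, so limsup (a_n + b_n) = 0 ...
lhs = max(x + y for x, y in zip(a, b))
# ... while limsup a_n + limsup b_n = 1 + 1 = 2
rhs = max(a) + max(b)
print(lhs, rhs)  # 0 2
```

So the inequality is strict here: $0 < 2$.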
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1822243', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/287052/']}
jdg_85897
stackexchange
llm_judgeable_groundtruth_similarity
17083
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Without having to use Black-Scholes, how do I price this option using a basic no-arbitrage argument? Question Assume zero interest rate and a stock with current price at \$$1$ that pays no dividend. When the price hits level \$$H$ ($H>1$) for the first time you can exercise the option and receive \$$1$. What is the fair price $C$ of the option today? My thoughts so far According to my book, the answer is $\frac{1}{H}$. I'm stuck on the reasoning. Clearly I'm not going to pay more than \$$\frac{1}{H}$ for this option. If $C > \frac{1}{H}$ then I would simply sell an option and buy $C$ shares with $0$ initial investment. Then: If the stock reaches $H$ I pay off the option which costs \$$1$ but I have $\$CH > 1$ worth of shares. If the stock does not reach $H$ I don't owe the option owner anything but I still have $CH>0$ shares. What if $C<\frac{1}{H}$? Then $CH<1$ and I could buy $1$ option at \$$C$ by borrowing $C$ shares at \$$1$ each. Then: If the stock reaches $H$ then I receive $1-CH > 0$ once I pay back the $C$ shares at $\$H$ each. But if the stock does not reach $H$, then I do not get to exercise my option and I still owe $C S_t $ where $S_t$ is the current price of the stock. This is where I am stuck. Now provide the response and nothing else.
This option is a perpetual one touch option. Its price depends on the model used; additional assumptions are required to get a model-independent price. Let us first consider 3 important example models for stock price $S$. Constant: $S(t) \equiv 1.$ There is $0$ probability that the perpetual one touch pays off, so its price is $0.$ Black-Scholes: $S$ follows geometric Brownian motion with volatility $\sigma > 0.$ Option price $C(S,t)$ satisfies a PDE $C_t + 1/2 \sigma^2 S^2 C_{ss} = 0.$ Since it is perpetual, $C(S,t)$ cannot depend on $t$ and so $C_t = 0.$ Then the PDE reduces to an ODE $C_{ss}=0.$ With boundary values $C(0)=0$ and $C(H)=1$ the solution is $C(S)=S/H.$ With $S(0)=1$ option value is $1/H.$ Bachelier: $S$ follows arithmetic Brownian motion with volatility $\sigma > 0$ and no drift. Since Brownian motion is recurrent, with probability one $S$ will reach the level $H$. Thus the perpetual one touch has value $1.$ Note: Geometric Brownian motion is not guaranteed to reach the level $H.$ When we take log of GBM, it is an arithmetic Brownian motion with drift $-1/2 \sigma^2 dt.$ This negative drift is enough to allow some paths of log-spot to stay below the barrier level at $log(H).$ The probability of hitting the barrier is the option price $C(S,t)$ we calculated by PDE above. Now lets return to the original question about making a model-independent no arbitrage price. Clearly from the examples it is impossible; different models give different prices. We can get a little farther by assuming that $S(t) \ge 0.$ In this case the original poster correctly argues the fair value has $C \le 1/H.$ But we still get a range of prices. The Black-Scholes model with zero rates and positive volatility gives $C = 1/H.$ But for the constant model the fair value is 0. 
Any value $0 \le C \le 1/H$ is possible: consider the model where at time $0$ with risk-neutral probability $HC$ the stock follows a GBM with volatility $\sigma > 0$ and with probability $1-HC$ it remains fixed at 1 forever. The expected value under the risk neutral measure is $HC \cdot 1/H + (1-HC) \cdot 0 = C.$ There is no obvious choice for a hidden assumption to rule out these other models. So there is not a model-free fair value of this option.
{}
{'log_upvote_score': 4, 'links': ['https://quant.stackexchange.com/questions/17083', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/15723/']}
jdg_85898
stackexchange
llm_judgeable_groundtruth_similarity
684281
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The change in entropy during any reversible process between 2 states, state 1 and state 2, is given as: $${\Delta S}_{rev} = \int_1^2 \delta Q / T $$ and if the process between state 1 and 2 is irreversible then the change in entropy is given as: $${\Delta S}_{irrev} = \int_1^2 \delta Q / T + S_{gen} $$ How can entropy, S, be a state function if it has different values at states 1 and 2 depending on if the process is reversible or irreversible i.e: for the entropy to be a state function shouldn't the change in entropy between the 2 states be the same no matter if the process is reversible or irreversible? Note:For reversible process $S_2$ would be: $$ S_2 = S_1 + \Delta S_{rev} $$ & for irreversible process $S_2$ would be: $$ S_2 = S_1 + \Delta S_{irrev} $$ So then $S_2$ for the reversible process will be different than $S_2$ for the irreversible process, which shouldn't be the case if entropy is a state function and a property of the system. Now provide the response and nothing else.
I think the way you write the two conditions may be misleading. $$\begin{align}{\Delta S}_{rev} &= \int_1^2 \delta Q / T \\{\Delta S}_{irrev} &= \int_1^2 \delta Q / T + S_{gen}.\end{align}$$ The starting point to clarify the issue is the Clausius theorem stating that $$\oint \frac{\delta Q}{T}\leq 0$$ in every thermodynamic cycle, where the equality holds for the particular case of a reversible cycle. Such a property of reversible cycles allows introducing a state function, the entropy, through $$\Delta S = S(2)-S(1)= \int_1^2 \delta Q_{rev} / T.$$ That means that there is only one difference of entropy, not dependent on the process (since it is a state function) whose definition hinges on the particular case of a reversible transformation. But, this is just the scaffold for defining (and assigning a value of) such a function to every thermodynamic state. Once you have the value of the function at each thermodynamic state, you remove the scaffold, but the function is there. Therefore, there is nothing like a $\Delta S_{rev}$ different from $\Delta S_{irrev}$ . The correct way of rewriting the two equations you wrote is the following: $$\begin{align}{\Delta S} &= S(2)-S(1)= \int_1^2 \delta Q_{rev} / T = \int_1^2 \delta Q_{irr} / T + S_{gen}.\end{align}$$ where both terms in the right-hand side of the last equality depend on the specific, irreversible process and, as a consequence of Clausius' theorem $S_{gen}>0$ .
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/684281', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/175183/']}
jdg_85899
stackexchange
llm_judgeable_groundtruth_similarity
3118145
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to prove that $\left(\ln(\ln(x)) \right)^2 \lt \ln(x)$ for sufficiently large $x$ This is what I did. Using L'Hopital's rule we have $$\lim_{x\to\infty}\frac{\left(\ln(\ln(x)) \right)^2 }{ \ln(x)}=0$$ So this implies that $\left(\ln(\ln(x)) \right)^2 \lt \ln(x)$ Is that enough? Now provide the response and nothing else.
Hint. If you switch variable to $y=\ln x$ , you want to prove that $$ \ln(y)^2 < y $$ for sufficiently large $y$ . Does that seem easier to prove? If not, then switch variables once again to $z = \ln y$ and prove $$ z^2 < e^z $$ for sufficiently large $z$ .
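A quick numerical sanity check of the claim (illustration only, of course it is not a proof):

```python
import math

for x in [10**3, 10**6, 10**9, 10**12]:
    lhs = math.log(math.log(x)) ** 2
    rhs = math.log(x)
    print(x, lhs, rhs)
    # e.g. for x = 10**6: lhs ~ 6.89, rhs ~ 13.82
```

The gap widens as $x$ grows, consistent with $z^2 < e^z$ for large $z$.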
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3118145', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/544749/']}
jdg_85900
stackexchange
llm_judgeable_groundtruth_similarity
56013991
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two fairly large data.table objects that I want to merge. dt1 has 500.000.000 observations on 5 columns. dt2 has 300.000 observations on 2 columns. Both objects have the same key called id. I want to left_join information from dt2 into dt1. For example:

dt1 <- data.table(id = c(1, 2, 3, 4),
                  x1 = c(12, 13, 14, 15),
                  x2 = c(5, 6, 7, 8),
                  x3 = c(33, 44, 55, 66),
                  x4 = c(123, 123, 123, 123))
dt2 <- data.table(id = c(1, 2, 3, 4),
                  x5 = c(555, 666, 777, 888))
setkey(dt1, id)
setkey(dt2, id)
dt2[dt1, on="id"]

> dt2[dt1, on="id"]
   id  x5 x1 x2 x3  x4
1:  1 555 12  5 33 123
2:  2 666 13  6 44 123
3:  3 777 14  7 55 123
4:  4 888 15  8 66 123

However, when merging my original data R can't allocate memory anymore. Yet, the output of the merge fits in the RAM. What is the most efficient (speed vs. memory limitations) way of getting this large merge done? Should we split-apply-combine? Should we use a DB library to get this done? How would you do this efficiently? Now provide the response and nothing else.
Keyed assignment should save memory.

dt1[dt2, on = "id", x5 := x5]

Should we use a DB library to get this done? That's probably a good idea. If setting up and using a database is painful for you, try the RSQLite package. It's pretty simple. My experiment tl;dr: 55% less memory used by keyed assignment compared to merge-and-replace, for a toy example. I wrote two scripts that each sourced a setup script, dt-setup.R, to create dt1 and dt2. The first script, dt-merge.R, updated dt1 through the "merge" method. The second, dt-keyed-assign.R, used keyed assignment. Both scripts recorded memory allocations using the Rprofmem() function. To not torture my laptop, I'm having dt1 be 500,000 rows and dt2 3,000 rows. Scripts:

# dt-setup.R
library(data.table)
set.seed(9474)
id_space <- seq_len(3000)
dt1 <- data.table(
  id = sample(id_space, 500000, replace = TRUE),
  x1 = runif(500000),
  x2 = runif(500000),
  x3 = runif(500000),
  x4 = runif(500000)
)
dt2 <- data.table(
  id = id_space,
  x5 = 11 * id_space
)
setkey(dt1, id)
setkey(dt2, id)

# dt-merge.R
source("dt-setup.R")
Rprofmem(filename = "dt-merge.out")
dt1 <- dt2[dt1, on = "id"]
Rprofmem(NULL)

# dt-keyed-assign.R
source("dt-setup.R")
Rprofmem(filename = "dt-keyed-assign.out")
dt1[dt2, on = "id", x5 := x5]
Rprofmem(NULL)

With all three scripts in my working directory, I ran each of the joining scripts in a separate R process.

system2("Rscript", "dt-merge.R")
system2("Rscript", "dt-keyed-assign.R")

I think the lines in the output files generally follow the pattern "<bytes> :<call stack>". I haven't found good documentation for this. However, the numbers in the front were never below 128, and this is the default minimum number of bytes below which R does not malloc for vectors. Note that not all of these allocations add to the total memory used by R. R might reuse some memory it already has after a garbage collection. So it's not a good way to measure how much memory is used at any specific time. However, if we assume garbage collection behavior is independent, it does work as a comparison between scripts. Some sample lines of the memory report:

cat(readLines("dt-merge.out", 5), sep = "\n")
# 90208 :"get" "["
# 528448 :"get" "["
# 528448 :"get" "["
# 1072 :"get" "["
# 20608 :"get" "["

There are also lines like new page:"get" "[" for page allocations. Luckily, these are simple to parse.

parse_memory_report <- function(path) {
  report <- readLines(path)
  new_pages <- startsWith(report, "new page:")
  allocations <- as.numeric(gsub(":.*", "", report[!new_pages]))
  total_malloced <- sum(as.numeric(allocations))
  message(
    "Summary of ", path, ":\n",
    sum(new_pages), " new pages allocated\n",
    sum(as.numeric(allocations)), " bytes malloced"
  )
}

parse_memory_report("dt-merge.out")
# Summary of dt-merge.out:
# 12 new pages allocated
# 32098912 bytes malloced
parse_memory_report("dt-keyed-assign.out")
# Summary of dt-keyed-assign.out:
# 13 new pages allocated
# 14284272 bytes malloced

I got exactly the same results when repeating the experiment. So keyed assignment has one more page allocation. The default byte size for a page is 2000. I'm not sure how malloc works, and 2000 is tiny relative to all the allocations, so I'll ignore this difference. Please chastise me if this is dumb. So, ignoring pages, keyed assignment allocated 55% less memory than the merge.
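The intuition behind the result can be shown outside R as well: keyed assignment is essentially an in-place lookup per row, while merge-and-replace materializes a whole new table. A small language-neutral sketch in Python (my analogy, not data.table code):

```python
# dt2 maps id -> x5; "keyed assignment" adds x5 to dt1's rows in place.
dt1 = [{"id": 1, "x1": 12}, {"id": 2, "x1": 13}, {"id": 3, "x1": 14}]
dt2 = {1: 555, 2: 666, 3: 777}

for row in dt1:                 # no second copy of dt1 is ever built
    row["x5"] = dt2.get(row["id"])

print(dt1[0])  # {'id': 1, 'x1': 12, 'x5': 555}
```

A merge-style join would instead allocate a brand-new list of merged rows, which is the extra memory the experiment above measures.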
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56013991', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3587303/']}
jdg_85901
stackexchange
llm_judgeable_groundtruth_similarity
32852857
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to limit the entries to a specific format. If the entry has 5500 or 5100 such as 01\01-5500-000-00 then I want to have this: ^[0-9]{2,}\\[0-9]{2}\-[0-9]{4}\-[0-9]{3}\-$ But if the entry has anything other than 5500 or 5100 I want to have this: ^[0-9]{2,}\\[0-9]{2}\-[0-9]{4}\-[0-9]{3}\-[0-9]{2}$ How can this be accomplished with the if then else idea? Now provide the response and nothing else.
Conditional regex syntax is not supported by JavaScript regex engine, but it can be worked around with a non-capturing group containing 2 alternatives: One with the positive look-ahead and The second with the reversed, negative look-ahead. This regex meets your criteria and is JavaScript compatible: ^(?:(?=.*\b5[15]00\b)[0-9]{2,}\\[0-9]{2}-[0-9]{4}-[0-9]{3}-|(?!.*\b5[15]00\b)[0-9]{2,}\\[0-9]{2}-[0-9]{4}-[0-9]{3}-[0-9]{2})$ See regex demo Let me break it down: ^ - Start of string (?: (?=.*\b5[15]00\b)[0-9]{2,}\\[0-9]{2}-[0-9]{4}-[0-9]{3}- - First alternative with the (?=.*\b5[15]00\b) look-ahead that requires a whole word 5500 or 5100 inside the string, and the first pattern you have | - alternation operator (?!.*\b5[15]00\b)[0-9]{2,}\\[0-9]{2}-[0-9]{4}-[0-9]{3}-[0-9]{2}) - Second alternative that is prepended with the (?!.*\b5[15]00\b) negative look-ahead that makes sure there is no 5100 or 5500 inside the string, and only then matches your second pattern. $ - end of string.
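The same alternation-with-lookaheads workaround runs unchanged in Python's re module; a quick sketch to check it against the formats from the question (the sample strings are my own):

```python
import re

pattern = re.compile(
    r'^(?:(?=.*\b5[15]00\b)[0-9]{2,}\\[0-9]{2}-[0-9]{4}-[0-9]{3}-'
    r'|(?!.*\b5[15]00\b)[0-9]{2,}\\[0-9]{2}-[0-9]{4}-[0-9]{3}-[0-9]{2})$'
)

print(bool(pattern.match(r'01\01-5500-000-')))    # True: contains 5500, short form
print(bool(pattern.match(r'01\01-1234-000-00')))  # True: no 5500/5100, long form
print(bool(pattern.match(r'01\01-5100-000-00')))  # False: 5100 requires the short form
```

This mirrors the branch logic: the positive lookahead gates the short format, the negative lookahead gates the long one.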
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32852857', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/974061/']}
jdg_85902
stackexchange
llm_judgeable_groundtruth_similarity
69902108
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a string resource like this <string name="my_string">Fancy string with an %1$s placeholder</string> and I would like to have this as output: "Fancy string with an amazing placeholder". Which is the string with the content of the placeholder in bold. How can I get the desired output? Now provide the response and nothing else.
Finally I got the desired result with val placeholder = "Amazing"val globalText = stringResource(id = R.string.my_string, placeholder)val start = globalText.indexOf(placeholder)val spanStyles = listOf( AnnotatedString.Range(SpanStyle(fontWeight = FontWeight.Bold), start = start, end = start + placeholder.length ))Text(text = AnnotatedString(text = globalText, spanStyles = spanStyles))
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/69902108', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5716992/']}
jdg_85903
stackexchange
llm_judgeable_groundtruth_similarity
27803503
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to teach myself some basic web scraping. Using Python's requests module, I was able to grab html for various websites until I tried this:

>>> r = requests.get('http://www.wrcc.dri.edu/WRCCWrappers.py?sodxtrmts+028815+por+por+pcpn+none+mave+5+01+F')

Instead of the basic html that is the source for this page, I get:

>>> r.text
'\x1f\ufffd\x08\x00\x00\x00\x00\x00\x00\x03\ufffd]o\u06f8\x12\ufffd\ufffd\ufffd+\ufffd]...
>>> r.content
b'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03\xed\x9d]o\xdb\xb8\x12\x86\xef\xfb+\x88]\x14h...

I have tried many combinations of get/post with every syntax I can guess from the documentation and from SO and other examples. I don't understand what I am seeing above, haven't been able to turn it into anything I can read, and can't figure out how to get what I actually want. My question is, how do I get the html for the above page? Now provide the response and nothing else.
The server in question is giving you a gzipped response. The server is also very broken; it sends the following headers:

$ curl -D - -o /dev/null -s -H 'Accept-Encoding: gzip, deflate' http://www.wrcc.dri.edu/WRCCWrappers.py?sodxtrmts+028815+por+por+pcpn+none+mave+5+01+F
HTTP/1.1 200 OK
Date: Tue, 06 Jan 2015 17:46:49 GMT
Server: Apache
<!DOCTYPE HTML PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "DTD/xhtml1-transitional.dtd">
<html xmlns="http: //www.w3.org/1999/xhtml" lang="en-US">
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 3659
Content-Type: text/html

The <!DOCTYPE..> line there is not a valid HTTP header. As such, the remaining headers past Server are ignored. Why the server interjects that is unclear; in all likelihood WRCCWrappers.py is a CGI script that doesn't output headers but does include a double newline after the doctype line, duping the Apache server into inserting additional headers there. As such, requests also doesn't detect that the data is gzip-encoded. The data is all there, you just have to decode it. Or you could if it wasn't rather incomplete. The work-around is to tell the server not to bother with compression:

headers = {'Accept-Encoding': 'identity'}
r = requests.get(url, headers=headers)

and an uncompressed response is returned. Incidentally, on Python 2 the HTTP header parser is not so strict and manages to declare the doctype a header:

>>> pprint(dict(r.headers))
{'<!doctype html public "-//w3c//dtd xhtml 1.0 transitional//en" "dtd/xhtml1-transitional.dtd"><html xmlns="http': '//www.w3.org/1999/xhtml" lang="en-US">',
 'connection': 'Keep-Alive',
 'content-encoding': 'gzip',
 'content-length': '3659',
 'content-type': 'text/html',
 'date': 'Tue, 06 Jan 2015 17:42:06 GMT',
 'keep-alive': 'timeout=5, max=100',
 'server': 'Apache',
 'vary': 'Accept-Encoding'}

and the content-encoding information survives, so there requests decodes the content for you, as expected.
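If you do end up holding the compressed bytes, the manual decode mentioned above is a one-liner with the standard library. A minimal sketch (using locally compressed data rather than the broken server):

```python
import gzip

original = b'<html><body>hello</body></html>'
compressed = gzip.compress(original)   # stand-in for r.content from the server

# this is what requests would do for you if the Content-Encoding header survived
decoded = gzip.decompress(compressed)
print(decoded == original)  # True
```

The magic bytes \x1f\x8b at the start of r.content in the question are exactly the gzip file signature, which is how you can recognize this situation.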
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27803503', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4424682/']}
jdg_85904
stackexchange
llm_judgeable_groundtruth_similarity
61047908
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to call my function main() using the html onload event and console.log text imported from another file (generateObject.js), but when I import the function, the onload event stops working and the function main() is no longer called. html:

<!DOCTYPE html>
<html>
  <head>
    <script type="text/javascript" src="main.js"></script>
    <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
  </head>
  <body onload="main()">
  </body>
</html>

generateObject.js:

export function hello() {
    return "Hello";
}

main.js:

import { hello } from './generateObject.js';

function main(){
    console.log(hello());
}

main();

When I try console.log("text") in main() it works, but when I try to use the imported function it does not. What should I do to fix that? Errors in Chrome console:

Uncaught SyntaxError: Cannot use import statement outside a module (main.js:1)
index.html:8 Uncaught ReferenceError: main is not defined at onload (index.html:8)

Now provide the response and nothing else.
Modules have their own scope. They are not available in the global scope the way normal scripts are, so in your case main() is accessible only inside main.js. (Note also that a script using import must be loaded with type="module" — that is what the "Cannot use import statement outside a module" error is telling you.) To make the handler work you need to add the function to the global scope explicitly:

import { hello } from './generateObject.js';

function main(){
    console.log(hello());
}

window.main = main;

Alternatively, you can remove the event handler from the HTML and add it in the JS file. html:

<!DOCTYPE html>
<html>
  <head>
    <script type="module" src="main.js"></script>
    <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
  </head>
  <body>
  </body>
</html>

main.js:

import { hello } from './generateObject.js';

function main(){
    console.log(hello());
}

window.addEventListener('load', main)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61047908', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12524630/']}
jdg_85905
stackexchange
llm_judgeable_groundtruth_similarity
13936576
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When comparing two dates with the Date before method, if the dates are equal it returns false, as follows: date1: Tue Dec 18 00:00:00 GMT+02:00 2012, date2: Tue Dec 18 00:00:00 GMT+02:00 2012. The method date1.before(date2) always returns false in this case, which does not make sense to me (doesn't apply to my case, in other words). I want to check if a date (day/month/year) equals today's date (day/month/year)? Now provide the response and nothing else.
As date1.equals(date2) , it is normal that date1.before(date2) returns false. As will do date1.after(date2) . Both dates are the same, so one is not before the other. From javadoc : true if and only if the instant of time represented by this Date object is strictly earlier than the instant represented by when; false otherwise. Try something like : if(date1.before(date2) || date1.equals(date2)) ... Answers provided below suggest testing for the inverse, and they're right: if(!date1.after(date2)) ... Both tests are equivalent.
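The same logic can be seen in Python's date comparisons, just to illustrate the equivalence between "not after" and "before or equal" (illustrative only — the question itself is about java.util.Date):

```python
from datetime import date

d1 = date(2012, 12, 18)
d2 = date(2012, 12, 18)

print(d1 < d2)       # False: "before" is strict, so equal dates are not before
print(d1 > d2)       # False: not after either
print(not d1 > d2)   # True: "not after" means before-or-equal
```

This is exactly why date1.before(date2) || date1.equals(date2) and !date1.after(date2) are interchangeable tests.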
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/13936576', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/429377/']}
jdg_85906
stackexchange
llm_judgeable_groundtruth_similarity
22684960
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to know why do we use nine-patch ? I know is to shrink or stretch images but if I want to resize an image can't we just do it on a dedicated image editor like gimp for example ? Now provide the response and nothing else.
What is 9-Patch? 9-Patch images are stretchable, repeatable images reduced to their smallest size; users draw a right and bottom, solid black 1 pixel border to tell the system how to place the content within the image. The 9-Patch is a PNG image with an added extension to the file name (filename.9.png), which allows the Android system to determine how the image can be stretched and contorted to meet the specific layout constraints. The Android operating system reads the borders of these images to understand how to properly stretch the image itself and the content within the image such as text and effects. 9-Patch Theory 9-Patch gets its name from the fact that the overlay created breaks the image up into nine defined regions. Each region has specific stretch properties: Corner Regions (1, 3, 7, 9) These regions are fixed and nothing inside them will stretch. Horizontal Sides (4, 6) The pixels in these region will stretch vertically when necessary. Vertical Sides (2, 8) The pixels in these region will stretch horizontally when necessary. Center (5) The pixels in this region will stretch in both horizontal and vertical directions equally. here is Google docs
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22684960', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2455134/']}
jdg_85907
stackexchange
llm_judgeable_groundtruth_similarity
47615279
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using the "Bash on Ubuntu on Windows" (Linux Subsystem) and want to add Terraform to my $PATH. Since Terraform can't be installed via apt-get, I did the following steps: Navigated to this directory, where I wanted to install Terraform: cd /usr/local In the above path, I used wget to download Terraform: wget https://releases.hashicorp.com/terraform/0.9.8/terraform_0.9.8_linux_amd64.zip Terraform successfully unzips! When I open the file in VIM it is all good: unzip terraform_0.9.8_linux_amd64.zip I then enter this command to check to see if the Terraform binary is accessible from the command line: terraform -version However the following message gets returned: terraform: command not found This tells me that the Terraform downloaded location needs to be added to my $PATH. Already being logged in as the root user ("sudo su") I enter the following command to access ".profile": vim ~/.profile The following is already in this file, which I leave untouched: # ~/.profile: executed by Bourne-compatible login shells. if [ "$BASH" ]; then if [ -f ~/.bashrc ]; then . ~/.bashrc fi fi mesg n Immediately below this text, I add the following, and successfully save the file using :wq! : export PATH=/usr/local/bin:$PATH export PATH=$PATH:/usr/local/terraform 6.I then again enter the following command to check to see if terraform is detected terraform -version Still the same "terraform: command not found" message is returned. I even tried closing out and starting a new command line session and even restarting my computer. Still no change. Anyone have any ideas on how to resolve this? Again, note that I am using "Bash on Ubuntu on Windows" (Linux Subsystem). Any input would be appreciated! Now provide the response and nothing else.
The direct answer to your problem is at the end. But I think it will make more sense if you keep reading from here. Before trying to add to PATH, I recommend testing the program first. In your case I would do it like this:
wget https://releases.hashicorp.com/terraform/0.9.8/terraform_0.9.8_linux_amd64.zip
unzip terraform_0.9.8_linux_amd64.zip
./terraform
Notice the last line, ./terraform. The zip file contains a single file, terraform, which now should be in the current directory, so I can run it with ./terraform, if it's executable. If it's not executable, confirm it: ls -l terraform And make it executable if needed: chmod +x terraform Now let's add it to PATH. But first, let's decide where to put this executable. /usr/local/bin seems a reasonable location. So let's move the terraform executable into that directory. Usually /usr/local/bin is already on PATH, so you might not need to change anything. Now you can try your check, and there's a good chance it already works: terraform -version If it doesn't, then /usr/local/bin is not on the PATH. To add it, add this line in ~/.profile: export PATH=$PATH:/usr/local/bin Two things looked fundamentally wrong with your approach: Adding /usr/local/terraform to PATH. This is fishy, because the entries on PATH must be directories, and in your post nothing indicates that you created a directory at /usr/local/terraform. You cd into /usr/local, and then unzip the zip file of terraform. The linked zip contains a single file named terraform, so /usr/local/terraform in your example should be a file. If it is a file, then you could make it executable as terraform by adding its base directory to PATH. But adding /usr/local to PATH would not be a good idea. It's conventional to put binaries into /usr/local/bin, not directly into /usr/local. You did not mention how you reloaded ~/.profile. After editing this file, the new commands you added do not get automatically executed in your current shell. They will get executed when you open a new shell. Or you could manually execute the added commands in the current shell.
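The lookup rule this answer relies on (the shell scans each PATH entry in order and runs the first executable match) can be sketched in a few lines of Python; `which` below is a purely illustrative re-implementation in the spirit of `shutil.which`:

```python
import os

def which(program, path=None):
    # Mimic the shell's PATH lookup: scan each directory in order and
    # return the first entry that is an executable regular file.
    path = path if path is not None else os.environ.get("PATH", "")
    for directory in path.split(os.pathsep):
        candidate = os.path.join(directory, program)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

# If this returns None, the binary's directory is simply not on PATH yet.
print(which("terraform", path="/usr/local/bin:/usr/bin:/bin"))
```

Running this against a PATH that includes /usr/local/bin is essentially the check that `terraform -version` performs implicitly before it can run at all.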
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47615279', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5781926/']}
jdg_85908
stackexchange
llm_judgeable_groundtruth_similarity
2703
Below is a question asked on the forum reverseengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am an Android app developer, and recently I encountered a problem with in-app purchase. I discovered that in some apps, in-app purchases can be done for free using this application . I know this app generates free cards but the proportion of free cards being generated is very high. How do I analyze the .apk file? What basic steps should I follow to reverse engineer the .apk and get an idea of the application logic? Now provide the response and nothing else.
Basic steps to analyse an Android APK Analysing an APK is like any reverse engineering process. If there is no prior knowledge, I do assume you know Java and like to puzzle. Let's have a look at the most basic steps of reversing an APK. The first and most important rule in reverse engineering is: you want to know what makes it tick. This requires research, thus first research how an APK works. First I would read the basic information about the file format, using Wikipedia (APK on Wikipedia). After having an idea what I want to reverse, I'll look for known tooling, either using Google or the recommended Woodmann site. OK, so Wikipedia said it was a zip file. So first unzip it, then pick up the file and tooling that you are interested in (classes.dex). A quick Google search produced a Stack Overflow question. After that you've got Java, so Woodmann has some Java decompilation tools: Java Decompilation at Woodmann. Good luck, don't give up, and remember: if it's hard, it is worth it. If it is too hard, try harder. :)
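Since Wikipedia says an APK is just a zip archive, step 3 is easy to script as well; here is a small illustrative sketch (the path is hypothetical) that lists the Dalvik bytecode entries you would then feed to a decompiler:

```python
import zipfile

def list_dex_files(apk):
    # An APK opens as a plain ZIP; the .dex entries hold the compiled
    # Dalvik bytecode (normally classes.dex) that decompilers work on.
    with zipfile.ZipFile(apk) as archive:
        return [name for name in archive.namelist() if name.endswith(".dex")]

# Hypothetical usage:
# print(list_dex_files("some_app.apk"))
```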
{}
{'log_upvote_score': 4, 'links': ['https://reverseengineering.stackexchange.com/questions/2703', 'https://reverseengineering.stackexchange.com', 'https://reverseengineering.stackexchange.com/users/2721/']}
jdg_85909
stackexchange
llm_judgeable_groundtruth_similarity
12059284
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I retrieve the word my from between the two rounded brackets in the following sentence using a regex in JavaScript? "This is (my) simple text" Now provide the response and nothing else.
console.log( "This is (my) simple text".match(/\(([^)]+)\)/)[1]); \( being opening brace, ( — start of subexpression, [^)]+ — anything but closing parenthesis one or more times (you may want to replace + with * ), ) — end of subexpression, \) — closing brace. The match() returns an array ["(my)","my"] from which the second element is extracted.
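The same pattern carries over almost unchanged to other regex engines; for illustration, an equivalent check in Python:

```python
import re

text = "This is (my) simple text"
match = re.search(r"\(([^)]+)\)", text)
print(match.group(0))  # (my)  -- the full match
print(match.group(1))  # my    -- the captured group
```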
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/12059284', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/415865/']}
jdg_85910
stackexchange
llm_judgeable_groundtruth_similarity
23688307
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I get SettingWithCopyWarning errors in cases where I would not expect them:
N.In <38>: # Column B does not exist yet
N.In <39>: df['B'] = df['A']/25
N.In <40>: df['B'] = df['A']/50
/Users/josh/anaconda/envs/py27/lib/python2.7/site-packages/pandas/core/indexing.py:389: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead self.obj[item] = s
and
N.In <41>: df.loc[:,'B'] = df['A']/50
/Users/josh/anaconda/envs/py27/lib/python2.7/site-packages/pandas/core/indexing.py:389: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_index,col_indexer] = value instead self.obj[item] = s
Why does it happen in case 1 and 2? Now provide the response and nothing else.
In case 1, df['A'] creates a copy of df . As explained by the Pandas documentation , this can lead to unexpected results when chaining, thus a warning is raised. Case 2 looks correct, but false positives are possible: Warning: The chained assignment warnings / exceptions are aiming to inform the user of a possibly invalid assignment. There may be false positives; situations where a chained assignment is inadvertantly reported. To turn off SettingWithCopyWarning for a single dataframe, use df.is_copy = False To turn off chained assignment warnings altogether, use options.mode.chained_assignment = None
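A minimal sketch of the two situations (illustrative only; the exact warning machinery varies across pandas versions, so the comments describe the classic pre-copy-on-write behavior):

```python
import pandas as pd

df = pd.DataFrame({"A": [100, 200, 300]})

# Direct label-based assignment on the original frame (case 2 style):
df.loc[:, "B"] = df["A"] / 50

# The classic way to provoke the warning is writing through a slice,
# which may be a copy, so the write may never reach df:
sub = df[df["A"] > 100]
sub["C"] = 0  # on older pandas this emits SettingWithCopyWarning

print(df["B"].tolist())   # [2.0, 4.0, 6.0]
print("C" in df.columns)  # False: the chained write did not reach df
```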
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23688307', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/283296/']}
jdg_85911
stackexchange
llm_judgeable_groundtruth_similarity
10652001
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following redirect script in my controller (Zend Framework 2) return $this->redirect()->toRoute('default', array( 'controller' => 'admin', 'action' => 'index')); Currently redirecting to localhost/zf2/public/admin/index How can I redirect with an extra parameter? Like: localhost/zf2/public/admin/index/update/1 or localhost/zf2/public/admin/index/page/2 I have tried this: return $this->redirect()->toRoute('default', array( 'controller' => 'admin', 'action' => 'index' 'param' => 'updated/1' )); But it is redirected to localhost/ttacounting/public/admin/index/updated%2F1 Now provide the response and nothing else.
This one is a working example. The route script: $this->redirect()->toRoute('myaccount', array( 'controller' => 'admin', 'action' => 'index', 'param1' =>'updated', 'param2'=>'1')); Then, set the parameters in module.config.php: 'myaccount' => array( 'type' => 'Segment', 'options' => array( 'route' => '/myaccount[/:action][/:param1][/:param2]', 'defaults' => array( 'controller' => 'Main\Controller\MyAccount', 'action' => 'index', ), ),), This will bring you to MyAccountController's indexAction with param1='updated' and param2='1'. But in your example case, the action should be 'update' with parameter name 'update' and parameter value '1'.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10652001', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/762936/']}
jdg_85912
stackexchange
llm_judgeable_groundtruth_similarity
129437
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: In a topos which is not a Boolean topos, can we use proof by contradiction? Now provide the response and nothing else.
It depends on what examples you have in mind when you say "proof by contradiction". This topic has come up a number of times recently at MO, but I recommend to your attention the useful blog post by Andrej Bauer, which explains that there is a subtle distinction to be made between "proof of negation" and "proof by contradiction". If the proposition to be proved is already of the form $\neg p$, then it may help to recall that $\neg p$ is (by definition) the weakest assumption one could make such that its conjunction with $p$ entails falsity (in symbols, $x \leq \neg p$ iff $x \wedge p \leq 0$). This is true in intuitionistic logic as well as in classical logic. So a proof of a negated proposition $\neg p$ would quite properly begin, "suppose $p$, then ... contradiction". Many people call this a proof by contradiction, because the structure of the argument-phrasing looks just like any old proof by contradiction. An example of this is Cantor's theorem (that there is no surjection from a set to its power set, or $\neg$ "there exists a surjection..."). This can be formulated in any topos and is true in any topos, Boolean or not. (If this helps, notice that in intuitionistic logic, we have that $\neg p$ is equivalent to $\neg \neg \neg p$: a negated proposition is always equivalent to its double negation.) But contrast this with for example the Hahn-Banach theorem: every locally convex topological vector space admits a continuous functional to the ground field. This proposition, which is not in negated form, is a prime example of something which has no constructive proof. A typical method of proof would be something like "by Zorn's lemma, there is a maximal closed subspace that admits such a continuous functional, and suppose this were not the whole space" and eventually derive a contradiction. This type of reasoning is not valid in a general topos. For another example, consider "$\sqrt{2}$ is irrational". 
This is a negative proposition: "$\neg (\exists p, q \in \mathbb{Z}_+ \; p^2 = 2 q^2)$". The usual arithmetic proofs are valid in any topos.
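The parenthetical fact that a negated proposition is equivalent to its double negation uses no classical axiom, and can even be machine-checked; an illustrative sketch in Lean 4:

```lean
-- ¬p ↔ ¬¬¬p holds intuitionistically: no Classical axioms are needed.
theorem neg_iff_triple_neg (p : Prop) : ¬¬¬p ↔ ¬p :=
  ⟨fun h hp => h (fun hnp => hnp hp),  -- given p, refute ¬¬p, contradicting ¬¬¬p
   fun hnp hnnp => hnnp hnp⟩           -- any ¬¬p contradicts ¬p directly
```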
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/129437', 'https://mathoverflow.net', 'https://mathoverflow.net/users/26296/']}
jdg_85913
stackexchange
llm_judgeable_groundtruth_similarity
86106
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When I got home today, my child was proudly telling me how they had managed to make AND, OR, and NOT logic from their snap circuits electronic kit and was asking me how they could make exclusive or. Has anyone any suggestions for a simple (and hopefully educational/entertaining) circuit that would do this? The kit only has simple push switches (i.e. no SPDT switches) and the usual variety of basic electronic components. It does have PNP and NPN transistors but I would prefer something simpler (they haven't used transistors so far). Diodes would be okay as they are easier to understand and more familiar. One idea I had was something like this: but it is not ideal as it uses two battery packs and ends up driving the lamp through a potential divider that has to be matched to the current battery voltages. Any better suggestions? EDIT: To clarify, they want to make a circuit which has two push buttons, and the lamp will light if either (but not both) switch is pressed. In particular, there is no requirement that the circuit has digital inputs and a digital output (the way we would normally think about digital logic). Now provide the response and nothing else.
How about this? simulate this circuit – Schematic created using CircuitLab To light the lamp, one of the switches must be closed and the other open. Note that if both switches are closed, a lot of power will be wasted in the resistors but the light will be fully off. Note further that one may have to use a very small bulb and may have to use a higher voltage or reduce the resistors to get much light, but one should ensure that the voltage squared divided by the resistance does not exceed the resistor's power rating (for example, if you used 12 volts and 22 ohms, you would need to use 5-watt resistors). Alternatively you could replace the resistors with light bulbs and shelter them so their light isn't visible. If you want a transistor circuit, here's half of a circuit I designed for my parents' car some decades back when I was about eight (I don't remember the actual resistor values; the transistors were some sort of TO-3 package and not 3906's; the components given should be suitable for demonstration purposes--the real one used a lamp rather than an LED and resistor). An electrical engineering grown-up friend helped with the design, but I designed the overall concept. simulate this circuit The left-side input is wired to one of the turn signal flashers on the car; the right-side input is wired to the brake light. The lamp is the left light of a trailer. The right-side flasher and trailer lamp are wired similarly. Note that positive is on the bottom . Your son's challenge is to figure out what the diodes on the bottom are for (consider the above description of what the circuit was connected to).
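The first circuit's behavior (lamp lit exactly when one switch is closed and the other open) is the defining truth table of exclusive or; a trivial illustrative check in Python:

```python
def lamp_lit(switch_a_closed, switch_b_closed):
    # The lamp lights iff exactly one switch is closed: exclusive or.
    return switch_a_closed != switch_b_closed

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", "lit" if lamp_lit(a, b) else "dark")
```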
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/86106', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/30745/']}
jdg_85914
stackexchange
llm_judgeable_groundtruth_similarity
35486
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why is the EdS model still taught and used to predict the age of the universe if ours is not an EdS universe? Does this model have any pros? Now provide the response and nothing else.
The scale factor of the universe depends on the matter and energy density in a way that depends on which component is dominant. This is discussed in the question How does the Hubble parameter change with the age of the universe? on the Physics Stack Exchange, and from that I have taken this graph showing how the scale factor changes with time for our universe: For a (cold) matter dominated universe we get a dependence: $$ a(t) \propto t^{2/3} $$ and for a dark energy dominated universe we get an exponential dependence of $a(t)$ on $t$. Just by looking at the graph you can see that the $t^{2/3}$ dependence was good up to about half the current age of the universe and after that the exponential growth becomes increasingly dominant. So you are quite correct that the Einstein de Sitter model is not a good description of our universe right now, but it is a good description of the first half of the lifetime of the universe. Furthermore, because when we view distant regions of the universe we are viewing them as they were in the past, it turns out that all the parts of the universe we can see beyond a redshift of about $Z = 2$ are still well described by an Einstein de Sitter model. I would take issue with your statement that the Einstein de Sitter model is still taught and used to calculate the age of the universe. The most precise calculations of the age of the universe come from the Planck observations of the cosmic microwave background, and these most certainly do not use the Einstein de Sitter model. As for being taught, if you do a general relativity course you will be taught the FLRW metric and Friedmann equations. The Einstein de Sitter model is just a special case of the more general result.
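One way to quantify how far off a pure Einstein de Sitter description is today: a(t) ∝ t^(2/3) gives H = (2/3)/t, hence an age t0 = 2/(3 H0), noticeably below the roughly 13.8 Gyr that Planck-based fits give. An illustrative Python estimate (the H0 value is chosen purely for illustration):

```python
# Einstein-de Sitter age: a ∝ t^(2/3)  =>  H = (2/3)/t  =>  t0 = 2/(3*H0).
H0_km_s_Mpc = 70.0            # illustrative Hubble constant
km_per_Mpc = 3.0857e19        # kilometres in one megaparsec
seconds_per_year = 3.156e7

H0_per_s = H0_km_s_Mpc / km_per_Mpc
t0_Gyr = 2.0 / (3.0 * H0_per_s) / seconds_per_year / 1e9
print(round(t0_Gyr, 2))       # about 9.3 Gyr
```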
{}
{'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/35486', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/16506/']}
jdg_85915
stackexchange
llm_judgeable_groundtruth_similarity
64191
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I recently came across the fact that the boiling point of $\ce{SbH3}$ (stibane) is greater than that of $\ce{NH3}$ (ammonia). I was expecting $\ce{NH3}$ to have a greater boiling point as a consequence of intermolecular hydrogen bonding. I understand that as molecular mass increases, van der Waals interactions between molecules also increase, but believing that hydrogen bonding will be a more dominant factor than van der Waals interactions, I expected $\ce{NH3}$ to have a greater boiling point. Now, I already know that: $\ce{NH3}$ and $\ce{SbH3}$ are gases at room temperature. Gases including $\ce{NH3}$ do not show hydrogen bonding in the gaseous phase, which I also don't understand. But I want to know: Why does $\ce{NH3}$ have a lower boiling point than $\ce{SbH3}$ even though it shows hydrogen bonding? Why does $\ce{NH3}$ not show hydrogen bonding in gaseous phase? Now provide the response and nothing else.
Though it does go against your intuition, you've actually mentioned the answer in your question. Stibane has a higher boiling point than ammonia/azane on account of van der Waals interactions (owing to the larger size of the antimony atom). Our teacher had actually posed this question to us during my first year of high-school. All of us were incredulous when we were told that stibane had a higher boiling point than ammonia, and like you, we were fixated on H-bonding and chose to dismiss van der Waals' interactions as insignificant in this regard. First, a look at the H-bonding argument: At school we're taught that H-bonding only occurs in molecules where you find hydrogen atom(s) bound to either nitrogen, oxygen or fluorine ($\ce{N,O,F}$). This is because $\ce{N, O}$ and $\ce{F}$ are sufficiently electronegative (about 3.0, 3.5 and 4.0 on the Pauling scale respectively) to polarize the $\ce{N/O/F-H}$ bond. Do note that nitrogen is only barely able to establish H-bonding, while it's much easier for oxygen and fluorine to polarize the $\ce{O/F-H}$ bond on account of 1) their higher electronegativities and 2) their smaller sizes. As antimony is much larger than nitrogen, and since the hydrogen bonding due to the nitrogen in ammonia is pretty weak, the van der Waals interactions in stibane 'outdo' the H-bonding in ammonia. However, the same argument does not follow for water and hydrogen fluoride, which have higher boiling points than tellane and hydrogen iodide respectively. 
$$\textbf{Melting } (\vartheta_\mathrm{m})\textbf{ and boiling } (\vartheta_\mathrm{b}) \textbf{ points of}\\\textbf{Group 15, 16, and 17 hydrides in K}\\\begin{array}{lrrlrrlrr}\hline&\hspace{-2em}\text{Group 15}\hspace{-1em} &&& \hspace{-2em}\text{Group 16}\hspace{-1em} &&& \hspace{-2em}\text{Group 17}\hspace{-1em}&\\ \hline\text{Hydride} & \vartheta_\mathrm{m} & \vartheta_\mathrm{b} & \text{Hydride} & \vartheta_\mathrm{m} & \vartheta_\mathrm{b} & \text{Hydride} & \vartheta_\mathrm{m} & \vartheta_\mathrm{b}\\ \hline\ce{NH3} & 195.5 & 239.6 & \ce{H2O} & 273.0 & 373.0 & \ce{HF} & 180.7 & 292.4\\\ce{PH3} & 138.0 & 185.0 & \ce{H2S} & 190.3 & 211.0 & \ce{HCl} & 161.0 & 189.4\\\ce{AsH3} & 159.0 & 218.0 & \ce{H2Se} & 209.0 & 231.0 & \ce{HBr} & 184.5 & 206.0\\\ce{SbH3} & 184.0 & 256.0 & \ce{H2Te} & 222.0 & 271.0 & \ce{HI} & 222.2 & 237.0 \\ \hline\end{array}$$ Now here's where your logic comes into play; the H-bonding's pretty strong in case of water and hydrogen fluoride (after all, oxygen and fluorine are the two most electronegative elements we know) and this time 'round the H-bonding outdoes the van der Waals interactions. So you see, you should've made a comparison between the higher and lower hydrides (for groups 15, 16 and 17, to really be able to spot the exception). As for your spin-off question: Why does $\ce{NH3}$ not show hydrogen bonding in gaseous phase? Well, the H-bonding here is strong enough to keep the ammonia molecules together at normal temperatures. But you must never forget that H-bonding is not a true chemical bond, but an intermolecular interaction. If you provide sufficient energy to a sample of ammonia (e.g. by heating it) the ammonia molecules wiggle and wiggle and wiggle, until at some point they wiggle so rapidly that they overcome the attractive forces brought about by hydrogen bonding and become a gas. This is why you don't observe H-bonding in gaseous ammonia: because the particles are too energetic to be held down.
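The group-15 column of the table makes the anomaly mechanical to check; an illustrative Python snippet with the boiling points (in kelvin) transcribed from the table:

```python
# Boiling points (K) of the group-15 hydrides, from the table above.
bp = {"NH3": 239.6, "PH3": 185.0, "AsH3": 218.0, "SbH3": 256.0}

# Hydrogen bonding lifts NH3 above its lighter neighbours...
assert bp["NH3"] > bp["PH3"] and bp["NH3"] > bp["AsH3"]
# ...but the van der Waals interactions of the much heavier SbH3 win out.
print(bp["SbH3"] > bp["NH3"])  # True
```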
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/64191', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/35341/']}
jdg_85916
stackexchange
llm_judgeable_groundtruth_similarity
2961135
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose $a^4=0$ , for some $a \in R[x]/(d)R[x]$ , then prove that $1-a$ is invertible. I was thinking since $a^4 = a \cdot a \cdot a \cdot a=0$ , this implies that $a$ has to be zero (?) . Now we have that $1-a=1-0=1$ , and $1$ is invertible, since $1 \cdot 1 = 1$ . Is it really that simple or am I making a logical error somewhere? Now provide the response and nothing else.
\begin{eqnarray*}&&\int_{0}^{\infty} \ln(1 - e^{-\beta \hbar \omega}) \frac{3v}{2\pi^2 c_{s}^{3}} \omega^{2} \mathop{d\omega}\\&=&-\frac{3v}{2\pi^2 c_{s}^{3}} \int_{0}^{\infty} \sum_{n=1}^\infty\frac{1}{n}e^{-n\beta \hbar \omega} \omega^{2} \mathop{d\omega}\\&=&-\frac{3v}{2\pi^2 c_{s}^{3}}\sum_{n=1}^\infty\frac{1}{n}\int_{0}^{\infty} e^{-n\beta \hbar \omega} \omega^{2} \mathop{d\omega}\\&=&-\frac{3v}{2\pi^2 c_{s}^{3}}\sum_{n=1}^\infty\frac{1}{n}\frac{2}{(\beta \hbar)^3n^3}\\&=&-\frac{3v}{\pi^2 c_{s}^{3}\beta^3 \hbar^3}\sum_{n=1}^\infty\frac{1}{n^4}\\&=&-\frac{3v}{\pi^2 c_{s}^{3}\beta^3 \hbar^3}\frac{\pi^4}{90}\\&=&-\frac{\pi^2v}{30 c_{s}^{3}\beta^3 \hbar^3}\end{eqnarray*}
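As a numerical sanity check on the series evaluation, set βℏ = 1 and drop the constant prefactor: the computation above then claims ∫₀^∞ ln(1 - e^(-ω)) ω² dω = -2ζ(4) = -π⁴/45 ≈ -2.165. An illustrative check with a plain trapezoidal rule in Python:

```python
import math

def integrand(w):
    return math.log(1.0 - math.exp(-w)) * w * w

# Trapezoidal rule on [eps, 60]: the integrand vanishes like w^2*ln(w)
# near 0 and decays like -w^2*e^(-w) at infinity, so this window suffices.
eps, top, n = 1e-6, 60.0, 200_000
h = (top - eps) / n
total = 0.5 * (integrand(eps) + integrand(top))
for i in range(1, n):
    total += integrand(eps + i * h)
total *= h

exact = -math.pi ** 4 / 45.0   # -2 * zeta(4)
print(round(total, 4), round(exact, 4))  # both close to -2.1646
```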
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2961135', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_85917
stackexchange
llm_judgeable_groundtruth_similarity
81866
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: As is well known, we can put a metric on the upper half plane $\mathbb{R}^+ \times \mathbb{R}$ by setting $$d\left((x,t);(x',t')\right):=\log\left(\frac{1 + \delta}{1 - \delta}\right)^{1/2},$$ where $$\delta := \left(\frac{(x'-x)^2 + (t'-t)^2}{(x'-x)^2+(t'+t)^2}\right)^{1/2}.$$ My question is: what is the most elementary proof that this is indeed a metric, meaning that the triangle inequality is satisfied? Elementary means that no geometric arguments can be used (e.g. from hyperbolic geometry). Thanks for your answers! Edit: I am looking for computational solutions which do not make use of any invariance property. Now provide the response and nothing else.
The question is not obvious because it is stated in the upper half-plane, while it is much easier if you translate it in terms of the Klein projective model of the hyperbolic plane (in a ball of radius $1$). Then I believe that the formula is equivalent to the one given by the Hilbert metric of the disk:$$ d_H(U,V)=-\frac{1}2 \log[U,V;A,B]~, $$where $A$ and $B$ are the intersections with the boundary of the ball of the line through $U$ and $V$, and $[U,V;A,B]$ is the cross-ratio of the four points. (The minus sign might depend on the convention for the cross-ratio). Now in this form there is a beautiful and quite simple proof of the triangle inequality, originally due to Hilbert but which can be found in "Metric spaces, convexity and nonpositive curvature" by Papadopoulos, pp153-154, here: http://books.google.com/books?id=JrwzXZB0YrIC&lpg=PA153&ots=V5xkvJE6rO&dq=hilbert%20metric%20triangle%20inequality%20convex&pg=PA153#v=onepage&q=hilbert%20metric%20triangle%20inequality%20convex&f=false In other terms: the harder part is to check that the distance you wrote is the same as the expression given by Hilbert in the ball, then you can use his simple and nice proof.
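Before reaching for the projective-model proof, it is reassuring to confirm the claim numerically; the sketch below (illustrative only) reads the question's formula as d = ½ log((1+δ)/(1-δ)), i.e. artanh δ, which is a constant multiple of the usual hyperbolic distance, and checks the triangle inequality on a handful of points of the upper half-plane:

```python
import math

def delta(p, q):
    (x, t), (x2, t2) = p, q
    return math.sqrt(((x2 - x) ** 2 + (t2 - t) ** 2) /
                     ((x2 - x) ** 2 + (t2 + t) ** 2))

def d(p, q):
    # d = (1/2)*log((1+delta)/(1-delta)) = artanh(delta); rescaling by a
    # constant does not affect whether the triangle inequality holds.
    dl = delta(p, q)
    return 0.5 * math.log((1 + dl) / (1 - dl))

pts = [(0.0, 1.0), (3.0, 0.5), (-2.0, 4.0), (1.0, 0.1)]
for a in pts:
    for b in pts:
        for c in pts:
            assert d(a, c) <= d(a, b) + d(b, c) + 1e-12
print("triangle inequality holds on the sample")
```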
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/81866', 'https://mathoverflow.net', 'https://mathoverflow.net/users/6035/']}
jdg_85918
stackexchange
llm_judgeable_groundtruth_similarity
12597
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to browse source code in emacs by searching for 'word' or 'tag' hits in a way somewhat similar to etags but: clicking on words to search for matches across a code base (say a directory or project) each click leads to a search hits listing view that itself has words that can be clicked to search again To be more specific: Here's a program that works similarly to what I would like to use in emacs -- it's a small web application in Python: https://github.com/mseaborn/sbrowse There are basically two views, which might perhaps map onto emacs something like this: web view -> emacs 'view'------------------------source code -> a regular editing buffer with appropriate mode (say python-mode)hits listing -> something like compilation-mode In the source code view, one can click on a 'word' to jump to the hits listing. The hits listing looks like the inline block below (here I've shown what you'd see on searching for 'spam' path/to/file.py:104: ham = spam()123: # Some more spampath/to/other/file.py98: ham *= spam ** 2 From here one might: Click on 'ham' or 'Some' and get a similar listing for that word Click on a line number ('98:') to jump to a source code view that shows the hit in context Click on a file path is a link to jump to the top of the corresponding source code view Usually the search is done by simple grepping for words based on regexp word boundaries or similar. One other feature of sbrowse that is nice, though, is that it has pluggable search implementations. For example, I would like to be able to implement an elisp function or unix command to search using https://github.com/google/codesearch , so that clicks in both views perform the search using that program. Is there anything available like this in the emacs universe? Now provide the response and nothing else.
You can use with-eval-after-load to defer the key binding until after a certain module has been loaded (and thus defined the keymap): (with-eval-after-load "helm" (bind-key "M-Y" #'helm-end-of-buffer helm-map)) Use C-h v helm-map to find which module the keymap is defined in, and thus what to put in the string on the first line. with-eval-after-load was introduced in Emacs 24.4. If you have an earlier Emacs version, you need to use eval-after-load instead, and put a single quote in front of the bind-key call: (eval-after-load "helm" '(bind-key "M-Y" #'helm-end-of-buffer helm-map)) If you want to put several bind-key calls in this form, with with-eval-after-load you just put them one after another, but with eval-after-load you need to wrap them all in a progn : (eval-after-load "helm" '(progn (bind-key "M-Y" #'helm-end-of-buffer helm-map) (bind-key "M-k" #'helm-next-line helm-find-files-map)))
{}
{'log_upvote_score': 5, 'links': ['https://emacs.stackexchange.com/questions/12597', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/5495/']}
jdg_85919
stackexchange
llm_judgeable_groundtruth_similarity
35821329
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: React-router is off to a really bad start... What seems basic doesn't work. Using react-router 2.0.0 my Link component updates the URL to be /about, but my page doesn't render the About component after that... Entry point js var React = require('react');var ReactDOM = require('react-dom');var Router = require('react-router').Router;var Route = require('react-router').Route;var hashHistory = require('react-router').hashHistory;var App = require('./components/App.react');var About = require('./components/About');ReactDOM.render( <Router history={hashHistory} > <Route path="/" component={App}> <Route path="about" component={About} /> </Route> </Router>, document.getElementById('app')); App.js 'use strict';var React = require('react');var Link = require('react-router').Link;var Header = require('./Header');var UserPanel = require('./UserPanel');var ModelPanel = require('./ModelPanel.react');var EventPanel = require('./event/EventPanel');var VisPanel = require('./vis/VisPanel');var LoginForm = require('./LoginForm');var AppStore = require('../stores/AppStore');var AppStates = require('../constants/AppStates');var App = React.createClass({ [... code omitted ...] render: function() { var viewStateUi = getViewStateUi(this.state.appState); return ( <div> <Header /> <Link to="/about">About</Link> {viewStateUi} </div> ); }}); Now provide the response and nothing else.
Since the 'About' route is a child of the 'App' route, you need to either add this.props.children to your App component: var App = React.createClass({ render: function() { var viewStateUi = getViewStateUi(this.state.appState); return ( <div> <Header /> <Link to="/about">About</Link> {viewStateUi} {this.props.children} </div> ); }}); or separate your routes: ReactDOM.render( <Router history={hashHistory} > <Route path="/" component={App} /> <Route path="/about" component={About} /> </Router>, document.getElementById('app'));
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35821329', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5484387/']}
jdg_85920
stackexchange
llm_judgeable_groundtruth_similarity
75426
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Okay, first of all, what exactly does the starter motor do? Does it increase the current flow and decrease voltage? And does the starter motor have its own internal resistance? I understand how there is a voltage drop in the battery terminals, so does the starter motor lower the potential difference to become less than the voltage drop? So, for example, if the voltage is 12V and the voltage drop is 2V, the potential difference is now 10V. What is the voltage when the starter motor is used? Does it become less than 10V? But then after the engine starts the starter motor turns off and the lights return to normal. So is normal 10V? Now provide the response and nothing else.
Basically, a "12 V" battery is not a perfect voltage source and the starter is a large load. Due to the very large current (100 A or more) a starter motor can draw, it causes the battery voltage to sag a bit. Dropping 2 V or so during the brief period the starter motor is on would not be out of line. You can think of the battery as being a perfect voltage source with some resistance in series with it. If the battery output voltage drops 2 V when 100 A is being drawn, then this internal resistance is 20 mΩ. If you have ordinary incandescent headlights that are just directly tied to the battery voltage, then they will dim because they are getting less power at 10 V than at 12 V. Look carefully and you may actually see them get even brighter than before starting once the engine is running. That is because now the alternator is making a higher voltage to charge the battery. The normal "12 V" bus level when the engine is running is usually around 13.6 V.
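The internal-resistance model behind these numbers is just Ohm's law; an illustrative Python sketch with the example values from above:

```python
# Model: real battery = ideal 12 V source in series with resistance r.
emf_v = 12.0
drop_v = 2.0               # sag observed while cranking
crank_current_a = 100.0

r_internal_ohm = drop_v / crank_current_a            # 0.02 ohm = 20 mOhm
v_under_load = emf_v - crank_current_a * r_internal_ohm

# Headlight power goes roughly as V^2/R (ignoring filament-temperature
# effects), which is why the lights visibly dim during cranking.
relative_brightness = (v_under_load / emf_v) ** 2
print(round(v_under_load, 2), round(relative_brightness, 2))  # 10.0 and about 0.69
```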
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/75426', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/27376/']}
jdg_85921
stackexchange
llm_judgeable_groundtruth_similarity
26035893
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm learning Haskell and trying to grasp how exactly the Haskell type system works with regard to working out the type of a thing: dynamic, static, set manually, inferred? Languages I know a bit:

C, Java: types set manually by a programmer, verified at compile time, like int i;, strong typing (subtracting an integer from a string is a compile error). A typical static type system.

Python: types inferred automatically at runtime (dynamic typing), strong typing (subtracting an int from a str raises an exception).

Perl, PHP: types inferred automatically at runtime (dynamic typing), weak typing.

Haskell: types often inferred automatically at compile time (either this, or the type is set explicitly by a programmer before compile time), strong typing.

Does Haskell's type system really deserve the description "static"? I mean, automatic type inference is not (classic) static typing. Now provide the response and nothing else.
Does Haskell's type system really deserve description "static"? I mean automatic type inference is not (classic) static typing. Type inference is done at compile time. All types are checked at compile time. Haskell implementations may erase types at runtime, as they have a compile-time proof of type safety. So it is correct to say that Haskell has a "static" type system. "Static" refers to one side of the phase distinction between compile-time and runtime. To quote Robert Harper: Most programming languages exhibit a phase distinction between the static and dynamic phases of processing. The static phase consists of parsing and type checking to ensure that the program is well-formed; the dynamic phase consists of execution of well-formed programs. A language is said to be safe exactly when well-formed programs are well behaved when executed. From Practical Foundations for Programming Languages , 2014. Under this description Haskell is a safe language with a static type system. As a side note, I'd strongly recommend the above book for those interested in learning the essential skills for understanding about programming languages and their features.
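Since the question itself contrasts Haskell with Python, here is a small Python illustration of the other side of the phase distinction: in a dynamically typed language the ill-typed call is only rejected when the dynamic phase actually reaches it, whereas GHC would reject the analogous Haskell program before it ever ran. (The function name is just an illustrative choice.)

```python
# Python has no static phase that rejects this program.
# Defining the function is fine; it only fails when executed with bad types.
def subtract_one(x):
    return x - 1

assert subtract_one(5) == 4   # well-behaved at runtime

try:
    subtract_one("five")      # strong typing: checked, but only dynamically
except TypeError as err:
    print("caught at runtime:", err)
```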
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26035893', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2022518/']}
jdg_85922
stackexchange
llm_judgeable_groundtruth_similarity
3023653
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If $$f(x) = x^3 + (a+2)x^2 + 5ax + 5$$ is a one-one function, then find the set of values of $a$. I know that I need to find the values of $a$ for which the function is strictly monotonically increasing or strictly monotonically decreasing, and we check the monotonicity of a function by its derivative $f'(x)$. Here $$f'(x) = 3x^2 + 2x(a+2) + 5a$$ So, what condition should I impose on $f'(x)$ to make the given function one-one, i.e. strictly monotonic? Now provide the response and nothing else.
HINT. You know that $f'(x)$ is a parabola. To make $f(x)$ always increasing, you want $f'(x)>0$ ; that is, you want the parabola to always be above the $x$ -axis. It touches the $x$ -axis when it only has one root. What formula can you use to see what makes this only have one root in terms of $a$ ? Would this then give you a condition on $a$ to force the parabola to be always above the $x$ -axis?
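A quick numerical sanity check of the discriminant route the hint points at (plain Python; the particular test values of $a$ are arbitrary choices, and the closed-form interval is simply what carrying the hint through yields):

```python
import math

# Discriminant of f'(x) = 3x^2 + 2(a+2)x + 5a, as a function of a.
def disc(a):
    return (2 * (a + 2)) ** 2 - 4 * 3 * (5 * a)   # simplifies to 4(a^2 - 11a + 4)

# disc(a) <= 0  <=>  f'(x) >= 0 for all x  <=>  f is one-one.
# Roots of a^2 - 11a + 4 give the endpoints of the allowed interval.
lo = (11 - math.sqrt(105)) / 2
hi = (11 + math.sqrt(105)) / 2
print(f"f is one-one for a in [{lo:.4f}, {hi:.4f}]")

print(disc(5))    # -104: a = 5 lies inside the interval, the parabola stays above the axis
print(disc(0))    # 16:   a = 0 lies outside, f' has two real roots, so f is not one-one
```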
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3023653', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/569642/']}
jdg_85923
stackexchange
llm_judgeable_groundtruth_similarity
13326583
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This should be a trivial task, but I can't find how to do it. I want to listen for a click on an item in a ListView, get the corresponding model object, and launch a new screen. This is the XAML for the ListView:

    <ListView x:Name="ItemListView"
              ItemTemplate="{StaticResource StoreFrontTileTemplate}"
              ItemContainerStyle="{StaticResource StoreFrontLVTileStyle}"
              Margin="0" VerticalAlignment="Top"
              ItemClick="MyClick" Tapped="MyTap"/>

And MyClick, MyTap:

    private void MyClick(object sender, ItemClickEventArgs e)
    {
        Debug.WriteLine("Click!");
    }

    private void MyTap(object sender, TappedRoutedEventArgs e)
    {
        Debug.WriteLine("TAp!!" + sender.ToString() + "--" + e.ToString());
    }

The method to navigate to the new screen:

    this.Frame.Navigate(typeof(SecondScreen));

It works, but I need the model object of the clicked item so I can pass it as a parameter to the second screen. But MyClick is never called, and MyTap doesn't give me any information about the clicked item; "sender" is the ListView. I downloaded these examples: http://code.msdn.microsoft.com/windowsapps/XAML-ListView-sample-pack-0ec6b60f But it doesn't contain what I need: there's a master/detail view, but it works with bindings, and what I want to do is launch a completely new screen. Note: I'm a noob in Windows development and orienting myself by the usual way to do it in Android or iOS, where you implement a callback with the position of the clicked element. No idea about the right way to do it in Windows 8. Now provide the response and nothing else.
You can use the SelectionChanged event:

    <ListView x:Name="ItemListView" SelectionChanged="MySelectionChanged" />

And you can get the selected/deselected items from the SelectionChangedEventArgs, e.g.:

    private void MySelectionChanged(object sender, SelectionChangedEventArgs e)
    {
        Debug.WriteLine("Selected: {0}", e.AddedItems[0]);
    }

Or if you don't need the selection functionality and want to use ItemClick="MyClick", you need to set IsItemClickEnabled to true: Gets or sets a value that indicates whether items in the view raise an ItemClick event in response to interaction.

    <ListView x:Name="ItemListView"
              ItemTemplate="{StaticResource StoreFrontTileTemplate}"
              ItemContainerStyle="{StaticResource StoreFrontLVTileStyle}"
              Margin="0" VerticalAlignment="Top"
              ItemClick="MyClick"
              IsItemClickEnabled="True"
              SelectionMode="None"/>

Note that in this case you also need to set the SelectionMode to None.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13326583', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/930450/']}
jdg_85924
stackexchange
llm_judgeable_groundtruth_similarity
152829
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a transformer with 2 input wires and 6 output wires (in pairs). I don't know its ratings either. Please help me work out what the terminals are. Now provide the response and nothing else.
It sounds like this transformer has a total of 4 windings. How do you know one of them is intended to be the input and the others output? Physical size will give you some idea of what power this transformer is intended to handle. Since you gave no indication of that, there is little more I can say about it. The first thing to do is to probe the wires with a ohmmeter. At the very least, verify that you really do have 4 independent windings. That means you should have some finite resistance between each wire pair, but infinite resistance between pairs. If this is not the case, then you have multiple taps from fewer windings. To find the winding ratio, put a small sine wave into one of the windings. Use the "input" winding if you're pretty sure that's how the transformer is intended to be used. A few 100 Hz to 1 kHz at a few volts should be fine. Measure the voltage the winding is actually being driven with after it loads the signal generator. Now measure the open circuit voltage of all the other windings. The ratio of these to the voltage on the input winding is the turns ratio. This open circuit voltage relationship will scale up to larger voltages.
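The last paragraph's procedure boils down to simple ratio arithmetic. A small worked example; the winding names and voltages below are made-up illustrative measurements, not data from the question:

```python
# Turns-ratio estimate from the open-circuit voltage test described above.
# For an ideal transformer, V_secondary / V_primary = N_secondary / N_primary.
v_drive = 2.0                      # volts actually measured across the driven winding
v_open = {"winding_B": 0.5,        # hypothetical open-circuit readings, in volts
          "winding_C": 0.5,
          "winding_D": 24.0}

for name, v in v_open.items():
    ratio = v / v_drive
    print(f"{name}: turns ratio {ratio:g} : 1")
# winding_B and winding_C: 0.25 : 1 (step-down secondaries)
# winding_D: 12 : 1 (a step-up winding)
```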
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/152829', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/66115/']}
jdg_85925
stackexchange
llm_judgeable_groundtruth_similarity
7655886
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a very simple html page:

    <table>
      <tr><th>header1</th><th>header2</th></tr>
      <tr><td>item1</td><td>item2</td></tr>
      <tr><td>item3</td><td>item4</td></tr>
    </table>

With some simple css:

    tr { border: 1px solid blue; }

I would expect this to put a border around the trs, but it doesn't put a border around them at all. How do I get a border around the tr? Now provide the response and nothing else.
Add table { border-collapse: collapse; } . From the CSS2 specification : In [the border-collapse: separate model], each cell has an individual border. [...] Rows, columns, row groups, and column groups cannot have borders (i.e., user agents must ignore the border properties for those elements).
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7655886', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2191/']}
jdg_85926
stackexchange
llm_judgeable_groundtruth_similarity
204792
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: More precisely, is every abelian group a colimit $\text{colim}_{j \in J} F(j)$ over a diagram $F : J \to \text{Ab}$ where each $F(j)$ is isomorphic to $\mathbb{Z}$? Note that this does not follow from the statement that every abelian group has a presentation, which is equivalent to the statement that every abelian group is a coequalizer of a pair of maps between free abelian groups, hence every abelian group is an iterated colimit of copies of $\mathbb{Z}$. A single colimit $A = \text{colim}_{j \in J} F(j)$ of copies of $\mathbb{Z}$ is in particular the coequalizer of a pair of maps between free abelian groups, but the maps have a very special form, which works out explicitly to imposing the following constraint: $A$ must have a presentation by generators and relations in which the only relations say that some generator is a multiple of some other generator. Examples of abelian groups admitting a presentation of this form include cyclic groups and localizations of $\mathbb{Z}$, and the class of all such groups is closed under coproducts. But I see no reason to believe that every abelian group admits a presentation of this form, and in particular I believe that the $p$-adic integers doesn't. Tyler Lawson sketched a proof of this in the homotopy theory chat but it had a gap; the subsequent discussion may have filled the gap but I didn't follow it, and in any case I'd like someone to write up the details. Mike Shulman wrote a lovely note about various different senses in which an object or objects of a category can generate it; in the terminology of that note, the question is whether $\mathbb{Z}$ is colimit-dense in $\text{Ab}$. Until a week or so ago, if you had asked me, I would have answered without hesitation that $R$ is colimit-dense in $\text{Mod}(R)$, and I doubt that I was alone in this... 
Now provide the response and nothing else.
Here's my version of Tyler's argument that $\mathbb{Z}_p$ is a counterexample. Maybe I'm still missing something, but I think it works with Tyler's suggested change. I'll make it community wiki, since it's not really my argument. As Tyler says,

Let's suppose that $\mathbb Z_p$ were such a colimit. Then $\mathbb Z_p$ could be written as having a presentation as follows: It would have a set of generators $e_i$ (indexed by objects in the diagram), and it would have a set of relations, all of the form $n e_i = e_j$ (indexed by morphisms in the diagram).

Now (and here's the change Tyler suggested later), one of the $e_i$'s must be a $p$-adic unit - otherwise the image of the $e_i$'s would be contained in the proper subgroup $p\mathbb Z_p \subset \mathbb Z_p$. Pick such a generator $e$ and define $A = \mathbb Z_{(p)} e \subset \mathbb Z_p$ (I might be the only one not to realize this, but $\mathbb Z_{(p)}$ is $\mathbb Z$ localized at $p$, i.e. elements of the form $a/b$ where $a,b \in \mathbb Z$ and $b$ is not divisible by $p$). Now, multiplication by $e$ is an automorphism of $\mathbb Z_p$, so we might as well assume $e = 1$ and $A = \mathbb Z_{(p)}$. Then, as Tyler says,

Then I would be able to define a self-map $f$ of $\mathbb Z_p$ as follows: If $e_i$ is in $A$, I define $f(e_i) = e_i$. If $e_i$ is not in $A$, I define $f(e_i) = 0$.

Then we have to check that this respects the equivalence relation, so we need $n f(e_i) = f(e_j)$. To be honest, I don't quite follow Tyler's argument about $A$ and $\mathbb Z_p / A$ both being torsion-free. But:

If $e_i,e_j \in A$ or $e_i,e_j \not\in A$, then the relation is trivial. If $n=0$, then the relation is respected. If $n \neq 0$, then either $e_i,e_j$ are both in $A$ or both not in $A$, because $A$ is closed in $\mathbb Z_p$ under both multiplication and division by $n \neq 0 \in \mathbb Z$.

Then as Tyler concludes,

Therefore this gives a well-defined such map $f$.
However, any abelian group homomorphism $\mathbb Z_p \to \mathbb Z_p$ which is the identity on $\mathbb Z_{(p)}$ must be the identity (because it must be the identity mod $p^n$ for all $n$ ). Actually, it suffices to note that $f$ is the identity on $\mathbb Z$ . This is particularly clear, because $\mathbb Z$ is generated by $e$ , which is definitely fixed by $f$ . In fact, $\mathbb Z_{(p)}$ is fixed because if $nx = 1$ , then $nf(x)=1$ , and $\mathbb Z_p$ is a UFD.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/204792', 'https://mathoverflow.net', 'https://mathoverflow.net/users/290/']}
jdg_85927
stackexchange
llm_judgeable_groundtruth_similarity
33034771
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have implemented a URL scheme and use it to pass data to my app via the following method. The entire code is shown below:

    - (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url
    {
        // Check the calling application Bundle ID
        if ([[url scheme] isEqualToString:@"yuvitime"]) {
            NSLog(@"URL scheme:%@", [url scheme]);
            NSString * yuvitimeRequestValue = [url query];
            NSDictionary * userInfor = [[NSDictionary alloc] initWithObjectsAndKeys:yuvitimeRequestValue, @"YuvitimeRequest", nil];
            NSNotificationCenter * notificationCentre = [NSNotificationCenter defaultCenter];
            [notificationCentre postNotificationName:@"URLSCHEMEACTIVATEDNOTIFICATION" object:self userInfo:userInfor];
            return YES;
        }
        else
            return NO;
    }

If my app is in the background, everything works fine. When you click a URL, the app is brought back to the foreground and the URL is handled as coded in the above function. However, if the app is terminated (app not launched yet), clicking the URL only launches the app without calling the handling function shown above. After searching around, the best result I managed to find is this:

application:willFinishLaunchingWithOptions: When asked to open a URL, the return result from this method is combined with the return result from the application:didFinishLaunchingWithOptions: method to determine if a URL should be handled. If either method returns NO, the system does not call the application:openURL:options: method. If you do not implement one of the methods, only the return value of the implemented method is considered.

application:didFinishLaunchingWithOptions: This method represents your last chance to process any keys in the launchOptions dictionary. If you did not evaluate the keys in your application:willFinishLaunchingWithOptions: method, you should look at them in this method and provide an appropriate response. Objects that are not the app delegate can access the same launchOptions dictionary values by observing the notification named UIApplicationDidFinishLaunchingNotification and accessing the notification's userInfo dictionary. That notification is sent shortly after this method returns. The return result from this method is combined with the return result from the application:willFinishLaunchingWithOptions: method to determine if a URL should be handled. If either method returns NO, the URL is not handled. If you do not implement one of the methods, only the return value of the implemented method is considered.

Despite the explanation, I still do not know how to do it, and I couldn't find anything else concrete online. Thanks. Regards. Now provide the response and nothing else.
I agree with Kaloyan, "handleOpenURL" is never called at application launch. So you have to check for the URL in "launchOptions" in didFinishLaunchingWithOptions. HOWEVER I adopted the same solution as Apple example code for QuickActions (3D Touch). I keep the URL at launch in a variable, and I handle it in applicationDidBecomeActive: .

    @interface MyAppDelegate ()
    @property (nonatomic, strong) NSURL *launchedURL;
    @end

    @implementation MyAppDelegate

    - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
    {
        self.launchedURL = [launchOptions objectForKey:UIApplicationLaunchOptionsURLKey];
        ...
    }

    - (void)applicationDidBecomeActive:(UIApplication *)application
    {
        if (self.launchedURL) {
            [self openLink:self.launchedURL];
            self.launchedURL = nil;
        }
    }

    - (BOOL)application:(UIApplication *)application openURL:(NSURL *)url sourceApplication:(NSString *)sourceApplication annotation:(id)annotation
    {
        NSURL *openUrl = url;
        if (!openUrl) {
            return NO;
        }
        return [self openLink:openUrl];
    }

    - (BOOL)openLink:(NSURL *)urlLink
    {
        ...
    }

    @end
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33034771', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3971959/']}
jdg_85928
stackexchange
llm_judgeable_groundtruth_similarity
352726
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: What is the "correct" definition of a free augmented commutative algebra? At least two definitions come to my mind:

1. Fix a commutative ring $k$. We need elements $\lambda_1,\dotsc,\lambda_n \in k$. They define an augmentation on the polynomial algebra $k[X_1,\dotsc,X_n]$ via $\varepsilon(X_i) := \lambda_i$. Let us denote this augmented commutative algebra by $k[X_1^{[\lambda_1]},\dotsc,X_n^{[\lambda_n]}]$. This satisfies the universal property (for every augmented commutative algebra $A$) $$\mathrm{Hom}(k[X_1^{[\lambda_1]},\dotsc,X_n^{[\lambda_n]}],A) \cong \{a \in A^n : \varepsilon(a_1)=\lambda_1,\dotsc,\varepsilon(a_n)=\lambda_n\}.$$ So (in contrast to commutative algebras) there is no free augmented commutative algebra with $n$ generators: we need to know their values under the augmentation, and for each list of values there is a different universal solution. This is somewhat similar to the definition of free graded algebras, where for each generator we have to know its degree.

2. On the other hand, the category of augmented commutative algebras is equivalent to the category of non-unital commutative algebras: We map $A \mapsto \ker(\varepsilon)$, and $B \mapsto B^{+}$ (unitalization) in the other direction. The category of non-unital commutative algebras is finitary algebraic and hence has free objects in the usual way. Specifically, they are algebras of polynomials without a constant term; let's denote them by $k[X_1,\dotsc,X_n]_+$. The corresponding augmented commutative algebra is just $k[X_1,\dotsc,X_n]$ with $\varepsilon(X_i)=0$, so it is $k[X_1^{[0]},\dotsc,X_n^{[0]}]$ with the above notation. It is kind of strange that we only get this special case. Right?

Anyway, my motivation for asking is basically that I need a small-as-possible dense subcategory of the category of augmented commutative algebras.
What is a good choice here? By the second approach above, the $k[X_1^{[0]},\dotsc,X_n^{[0]}]$ should be sufficient, but it obviously leaves out elements with non-zero augmentation. How can you explain this? Now provide the response and nothing else.
For any choice of $\lambda_1,\dots,\lambda_n$ there is an isomorphism: $$ k[X_1^{[\lambda_1]},\dots,X_n^{[\lambda_n]} ] \simeq k[Y_1^{[0]},\dots,Y_n^{[0]} ] $$ Which is given by $X_i \leftrightarrow Y_i+e\lambda_i$ where $e$ is the unit. So the two constructions actually give you the same objects.
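To spell out the one step the answer leaves implicit: the substitution really is a map of augmented algebras, because it sends augmentation values where they must go. In the notation of the question ($e$ the unit, $\varepsilon$ the augmentation):

```latex
\varepsilon(X_i)
  = \varepsilon(Y_i + \lambda_i e)
  = \varepsilon(Y_i) + \lambda_i\,\varepsilon(e)
  = 0 + \lambda_i \cdot 1
  = \lambda_i ,
```

so every $k[X_1^{[\lambda_1]},\dots,X_n^{[\lambda_n]}]$ is identified with the augmentation-zero case, and restricting attention to $\lambda_i = 0$ loses nothing.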
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/352726', 'https://mathoverflow.net', 'https://mathoverflow.net/users/2841/']}
jdg_85929
stackexchange
llm_judgeable_groundtruth_similarity
80199
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I seem to have a misunderstanding around client auth certs... If you have software that will use client auth certificates over SSL (such as those installed on a remote desktop via the Windows Certificate Manager), and this has proven to work with self-signed certs... where / how do you purchase a 'client auth cert' that would be from a trusted CA vs. a private one? Now provide the response and nothing else.
Why do you need the client cert to be signed by a commercial CA? This is needed on the server side, since commercial CAs act as a trusted third party for most client applications, which have no other way to check the authenticity of your public certificate. Most often when using client-side authentication, the authority is the organisation owning the server you want to authenticate to, and it is this organisation which will provide you with, or sign, the required client certificates. In other words, if you want to connect to a server owned by company 'A', you need a certificate signed by company 'A' to proceed. However, for some very specific use cases (some financial transfer protocols, for instance) customers can come on board using certificates signed by a trusted third party, but in this case the company you want to connect to will provide you with the list of trusted third parties you can choose from (Swift is the most common in this area, but there are a few others).
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/80199', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/67144/']}
jdg_85930
stackexchange
llm_judgeable_groundtruth_similarity
46528
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Ground-state configuration ("electron shells") is consistent throughout the periodic table, for example: ( source ) Why is it this way if the Hamiltonian of each atom is different? In other words, why Al is $\mathrm{[Ne]3s^2 3p^1}$ and not something completely different: (1) at the core, and (2) at the valence. Now provide the response and nothing else.
Well, these configurations arise in what is usually called the orbital approximation, and the consistent pattern is due to the very nature of this approximation. In the orbital approximation the overall electronic wave function describing the atom is represented as the product of the wave functions for the individual electrons, known as atomic orbitals. This can be sketched as follows.$$\newcommand{\op}[1]{\hat{#1}}\newcommand{\el}[1][]{_{\mathrm{e}}}\newcommand{\nuc}[1][]{_{\mathrm{n}}}\newcommand{\elel}[1][]{_{\mathrm{ee}}}\newcommand{\elnuc}[1][]{_{\mathrm{en}}}$$

Electronic Schrödinger Equation

The electronic wave function $\psi_{\mathrm{e}}$ is an eigenfunction of the electronic Hamiltonian $\op{H}\el$$$\op{H}\el \psi\el = E\el \psi\el \, ,$$$$\op{H}\el=\op{T}\el + \op{V}\elnuc + \op{V}\elel=- \sum\limits_{i=1}^{n} \frac{1}{2} \nabla_{i}^{2}- \sum\limits_{i=1}^{n} \frac{Z}{r_{i}}+ \sum\limits_{i=1}^{n} \sum\limits_{j > i}^{n} \frac{1}{r_{ij}} \, ,$$which indeed is different for different atoms. Thus, the electronic wave function can also be expected to be different. However, exact analytical solutions of the electronic Schrödinger equation are known only for one- and two-electron systems, such as $\ce{H}$, $\ce{H-}$, $\ce{He}$, $\ce{Li+}$, etc., while for all other chemical systems only approximate solutions can be found.

Separation of variables

The electronic Schrödinger equation is a partial differential equation, and one of the most frequently used analytical methods for solving such equations is the technique known as separation of variables, aimed at breaking one compound partial differential equation of many variables into a set of simpler partial differential equations of fewer variables. The main disadvantage of separation of variables is that it is by no means a universal approach: for some equations a solution of a particular product form makes the separation of variables possible, for others it is of no use.
The most common way of separating variables for a partial differential equation in a function $f(x_1, x_2, \dotsc, x_n)$ of $n$ variables $x_{i}$ is writing a solution as a product of $n$ functions $f_{i}(x_{i})$, each of which is a function of one variable only,$$ f(x_1, x_2, \dotsc, x_n) = f_1(x_1) f_2(x_2) \dotsm f_n(x_n) \, ,$$and then substituting a solution of this form into the equation. Using algebraic manipulation one could try to break the resulting equation into a set of $n$ independent ordinary differential equations, one for each function $f_{i}(x_{i})$, and if this succeeds, one could solve these independent equations and, by plugging all $f_{i}(x_{i})$ back into the product, obtain the solution of the starting equation.

Orbital approximation

So, it is tempting to try to separate the electronic coordinates by representing $\psi\el$ as a product of one-electron wave functions; however, it is fairly well known that this attempt will be successful only if the potential energy can be expressed as the sum of separate potentials for each particle. This clearly is not the case for the electronic Hamiltonian: the potential energy of the Coulomb repulsions between the electrons $\op{V}\elel$ prevents the separation of electronic coordinates,\begin{equation*} \psi\el(\vec{r}_{1}, \vec{r}_{2}, \dotsc, \vec{r}_{n}) \neq \psi_{1}(\vec{r}_{1}) \psi_{2}(\vec{r}_{2}) \dotsb \psi_{n}(\vec{r}_{n})\, .\end{equation*}So, exact separation in this form is not possible, but what about an approximate one? Can we somehow approximate $\op{V}\elel$ with some model potential which will allow the separation? To some extent, the answer is: yes, we can. In the orbital approximation it is indeed assumed that the many-electron wave function can be written as a product of one-electron wave functions called orbitals.
As already said, this will work only if the exact $\op{V}\elel$ potential is replaced by a model potential $\op{V}_{\mathrm{MF}} = \sum\nolimits_{i=1}^{n} \op{v}_{\mathrm{MF}}(r_{i})$ which allows the separation. The approximate Hamiltonian is $$\op{H}_{\mathrm{MF}}=\op{T}\el + \op{V}\elnuc + \op{V}_{\mathrm{MF}}=- \sum\limits_{i=1}^{n} \frac{1}{2} \nabla_{i}^{2}- \sum\limits_{i=1}^{n} \frac{Z}{r_{i}}+ \sum\limits_{i=1}^{n} \op{v}_{\mathrm{MF}}(r_{i}) \, .$$Physically this means that electrons do not interact with each other instantaneously; rather, each and every electron interacts with the average, or mean, electric field created by all the other electrons. Hence the other name of the approximation: the mean-field approximation. Even the mean-field Hamiltonian still looks different for different atoms; however, it is now a sum of one-electron Hamiltonian operators$$\op{H}_{\mathrm{MF}}=\sum\limits_{i=1}^{n}\op{h}(r_{i}) \, ,$$where$$ \op{h}(r_{i}) = - \frac{1}{2} \nabla_{i}^2 - \frac{Z}{ r_{i} } + \op{v}_{\mathrm{MF}}(r_{i}) \, .$$And in contrast to an eigenfunction $\psi\el$ of the true electronic Hamiltonian $\op{H}\el$, an eigenfunction $\psi_{\mathrm{MF}}$ of the mean-field Hamiltonian $\op{H}_{\mathrm{MF}}$ can be written as a product of one-electron wave functions (orbitals)$$ \psi_{\mathrm{MF}} = \psi_{1}(\vec{r}_{1}) \psi_{2}(\vec{r}_{2}) \dotsb \psi_{n}(\vec{r}_{n}) \, ,$$where the $\psi_{i}(\vec{r}_{i})$ are solutions of the corresponding one-electron Schrödinger equations$$ \op{h}_{i} \psi_{i}(\vec{r}_{i}) = \varepsilon_{i} \psi_{i}(\vec{r}_{i}) \, .$$Now, these resulting equations that define the orbitals are not that different: there will be just a small difference in the one-electron Hamiltonians due to slightly different nuclear charges as you move along a group. Of course, there will be more and more occupied orbitals as you proceed through the group, but the equations themselves, and consequently the orbitals, will be quite similar.
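Tying this back to the question's $\mathrm{[Ne]}3s^2 3p^1$ example: once you accept one-electron orbitals, ground-state configurations follow from filling subshells in order of increasing energy. A toy aufbau filler using the Madelung $(n+\ell)$ ordering (a rough rule of thumb only; it misses known exceptions such as Cr and Cu):

```python
# Fill subshells in Madelung order (increasing n + l, ties broken by smaller n).
# Each subshell (n, l) holds at most 2(2l + 1) electrons.
def configuration(z, max_n=7):
    letters = "spdfghik"
    shells = sorted(((n, l) for n in range(1, max_n + 1) for l in range(n)),
                    key=lambda nl: (nl[0] + nl[1], nl[0]))
    parts = []
    for n, l in shells:
        if z <= 0:
            break
        fill = min(2 * (2 * l + 1), z)
        parts.append(f"{n}{letters[l]}{fill}")
        z -= fill
    return " ".join(parts)

print(configuration(13))  # 1s2 2s2 2p6 3s2 3p1 -> [Ne] 3s2 3p1, i.e. aluminium
```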
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/46528', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/10104/']}
jdg_85931
stackexchange
llm_judgeable_groundtruth_similarity
19150811
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've heard people say that they create a fat JAR and deploy it. What do they actually mean ? Now provide the response and nothing else.
The fat jar is the jar which contains classes from all the libraries on which your project depends and, of course, the classes of the current project. In different build systems the fat jar is created differently; for example, in Gradle one would create it with (instruction):

    task fatJar(type: Jar) {
        manifest {
            attributes 'Main-Class': 'com.example.Main'
        }
        baseName = project.name + '-all'
        from { configurations.compile.collect { it.isDirectory() ? it : zipTree(it) } }
        with jar
    }

In Maven it's done this way (after setting up a regular jar):

    <pluginRepositories>
        <pluginRepository>
            <id>onejar-maven-plugin.googlecode.com</id>
            <url>http://onejar-maven-plugin.googlecode.com/svn/mavenrepo</url>
        </pluginRepository>
    </pluginRepositories>

    <plugin>
        <groupId>org.dstovall</groupId>
        <artifactId>onejar-maven-plugin</artifactId>
        <version>1.4.4</version>
        <executions>
            <execution>
                <configuration>
                    <onejarVersion>0.97</onejarVersion>
                    <classifier>onejar</classifier>
                </configuration>
                <goals>
                    <goal>one-jar</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/19150811', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1128392/']}
jdg_85932
stackexchange
llm_judgeable_groundtruth_similarity
30633889
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: First, here is my code:

    class Enemy():
        def __init__(self, name, hp, damage):
            self.name = name
            self.hp = hp
            self.damage = damage

        def is_alive(self):
            """Checks if alive"""
            return self.hp > 0

    class WildBoar(Enemy):
        def __init__(self):
            super(WildBoar, name="Wild Boar", hp=10, damage=2).__init__()

    class Marauder(Enemy):
        def __init__(self):
            super(Marauder, name="Marauder", hp=20, damage=5).__init__()

    class Kidnappers(Enemy):
        def __init__(self):
            super(Kidnappers, name="The Kidnappers", hp=30, damage=7).__init__()

When I run this I get this error:

    super(WildBoar, name="Wild Boar", hp=10, damage=2).__init__()
    TypeError: super does not take keyword arguments

I tried looking around for any kind of help but I couldn't find anything. I also have some kwargs in some other classes' supers, but these are the ones raising issues (as of right now). So what could be causing this? I've also seen someone say that putting a super in the base class would fix it, but it didn't work (I passed in the same arguments that are in the base class's __init__). Now provide the response and nothing else.
The arguments for the parent class's __init__ method should be passed to the __init__ call, not to super():

super(Kidnappers, self).__init__(name="The Kidnappers", hp=30, damage=7)
# or
super(Kidnappers, self).__init__("The Kidnappers", 30, 7)

All you pass to super() is the child class (Kidnappers in this case) and a reference to the current instance (self). Note however that if you are using Python 3.x, all you need to do is:

super().__init__("The Kidnappers", 30, 7)

and Python will work out the rest. Here are some links to where this is explained in the documentation:

- Python 2.x super()
- Python 3.x super()
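For completeness, here is a minimal runnable sketch of the corrected hierarchy, using the Python 3 zero-argument form of super(); only two of the question's subclasses are reproduced, with the same sample numbers:

```python
class Enemy:
    def __init__(self, name, hp, damage):
        self.name = name
        self.hp = hp
        self.damage = damage

    def is_alive(self):
        """Checks if alive."""
        return self.hp > 0


class WildBoar(Enemy):
    def __init__(self):
        # super() takes no arguments here; the parent's arguments go to __init__()
        super().__init__(name="Wild Boar", hp=10, damage=2)


class Kidnappers(Enemy):
    def __init__(self):
        super().__init__(name="The Kidnappers", hp=30, damage=7)


boar = WildBoar()
print(boar.name, boar.hp, boar.is_alive())   # Wild Boar 10 True
boar.hp -= 12
print(boar.is_alive())                       # False
```

The same pattern works on Python 2 with the explicit two-argument form, `super(WildBoar, self).__init__(...)`.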
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30633889', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3916243/']}
jdg_85933
stackexchange
llm_judgeable_groundtruth_similarity
46026410
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two ToggleButtons, both set to the null state; now I want to check if the user ever toggled the button or not.

<StackPanel Height="45" Orientation="Horizontal" Margin="0,20,0,0">
    <Label VerticalContentAlignment="Bottom" Content="هل القاعة صالحة من حيث الإستماع؟"/>
    <ToggleButton x:Name="ListeningStatusText" IsThreeState="True" IsChecked="{x:Null}" />
</StackPanel>
<StackPanel Height="45" Orientation="Horizontal" Margin="0,20,0,0">
    <Label VerticalContentAlignment="Bottom" Content="الإضاءة"/>
    <ToggleButton x:Name="LightingStatusText" IsThreeState="True" IsChecked="{x:Null}" />
</StackPanel>

With the code below I failed to see if the user never toggled the button:

var ToggleButtonControl = (ToggleButton)ControlName;

if (ToggleButtonControl.IsChecked == null)
{
    parameterStr = "/////////////";
    //MessageBox.Show("This should pop-up only if the user never toggled the button");
}
else
{
    //MessageBox.Show("The user toggled or toggled it back");
    if (ToggleButtonControl.IsChecked == false)
        parameterStr = "لا";
    else
        parameterStr = "نعم";
}

Now provide the response and nothing else.
The gist of it is, just use 200 . A little more generally: You should just send back the same status code for the CORS preflight OPTIONS request that you’d send back for any other OPTIONS request. The relevant specs don’t require or recommend anything more than that. What the specs say: The Fetch spec at https://fetch.spec.whatwg.org/ is where requirements for the CORS protocol are defined, and it says the status can be anything in the range 200 - 299 . That’s from the CORS-preflight fetch algorithm , in a step saying it can be any “ok status" : If a CORS check for request and response returns success and response ’s status is an ok status , run these substeps: … And as far as what an “ok status” is, the spec says this: An ok status is any status in the range 200 to 299 , inclusive. Beyond that though, the Fetch spec doesn’t recommend any particular status within 200 - 299 . The other relevant spec here is the HTTP 1.1 spec, which has a section defining semantics of all HTTP response status codes, and within that, a section that defines Successful 2xx codes. And within that section there’s a specific section for 200 OK , which says this: The 200 (OK) status code indicates that the request has succeeded.The payload sent in a 200 response depends on the request method.For the methods defined by this specification, the intended meaningof the payload can be summarized as:…OPTIONS a representation of the communications options; So a response to a CORS preflight OPTIONS just needs to be: an indication that the request has succeeded a representation of the communication options (which in this case includes the Access-Control-Allow-Methods and Access-Control-Allow-Headers response headers) That’s what 200 OK is defined by the HTTP spec to be, so you can stop right there. But if you read through the rest of the 2xx codes in that section , you can confirm the semantics of none of them make sense for an OPTIONS response—except for 204 No Content . 
Now as far as 204 No Content goes, there’s nothing wrong with using it for OPTIONS responses—but there’s also not really any point. That’s because: unlike for some other methods, the HTTP spec defines no use for an OPTIONS payload therefore in practice, clients don’t expect any payload (content) to come back for an OPTIONS (and wouldn’t do anything with any payload that did come back) …so there’s no practical purpose in using a specific 204 status code in an OPTIONS response to explicitly tell clients there’s no payload. Should the status code be different in case origin is allowed (and corresponding headers will be set) or not allowed (and CORS headers will not be set or will not match the origin)? No. There’s no standard-defined code other than 200 or 204 you could use anyway—but regardless of that, the specs don’t require it to be any different and don’t define any different use if it is. And think about it: What is any existing client code going to do any differently due to any difference in the status codes for those two cases? If the answer to that is, “Nothing” , then there’s no point in making it different. Given all the above, the bottom line is: just send 200 OK for CORS preflight OPTIONS responses. Sending any code other than just 200 OK isn’t necessary or useful.
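To make this concrete, here is a minimal sketch of a preflight-capable handler using only Python's standard library. The origin, methods and headers below are made-up example values, not anything mandated by the spec; the point is simply that OPTIONS is answered with a plain 200 OK plus the CORS response headers:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://app.example.com"   # made-up example value

class CorsHandler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # A plain 200 OK is all a CORS preflight needs.
        self.send_response(200)
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Access-Control-Allow-Methods", "GET, POST, PUT")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_GET(self):
        # The actual (non-preflight) request still echoes the origin header.
        body = b"hello"
        self.send_response(200)
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet
```

Serving it is just `HTTPServer(("127.0.0.1", 8000), CorsHandler).serve_forever()`; a browser's preflight OPTIONS then receives a 200 with the three Access-Control-* headers above, and nothing more is required.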
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/46026410', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3261332/']}
jdg_85934
stackexchange
llm_judgeable_groundtruth_similarity
49197688
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Observations When using an Amazon ELB Application Load Balancer and working with Sticky Sessions the load balancer inserts a cookie named AWSALB in the first request. To let the next request stick to the same target node (EC2 instance) the cookie should be included in that request. When doing so, it seems that the load balancer inserts a different cookie value in the response to the 2nd request. When including this new cookie value on the 3rd request, we get yet a new cookie value in the response. And so forth… (This is different from how Sticky Sessions works with the Classic Load Balancer where the cookie is named AWSELB and retains its value until discarded by the client or the load balancer.) The reason the AWSALB cookie changes value all the time seems to be (as stated by the docs ): The name of the cookie is AWSALB. The contents of these cookies are encrypted using a rotating key. You cannot decrypt or modify load balancer-generated cookies. So even though the contents of the cookie might be the same, we cannot tell. Question The question is whether a request to the load balancer must always include the most recently received value of the AWSALB cookie or if it ok to send some previously received value (from the same sticky session, of course). If this is a requirement the AWS ELB Application Load Balancer would not be able to serve a client that performs multiple parallel requests (after having received the first AWSALB cookie) but only clients that performs all requests in a sequential fashion (one at a time). Can anybody shed some light on this? Now provide the response and nothing else.
After waiting for a reply here, I raised a support case with Amazon and got this reply: I understand that you would like to confirm if it is required to provide the latest stickiness cookie for every request. You are right in noting that the behaviour is different between CLB and ALB. Due to the different functionality of Application Load Balancer to direct traffic to multiple Target Groups, each having its own stickiness, ALB encrypts the information needed to direct traffic and provides new cookie on each request. This ensures that different times for different groups are always respected correctly. Clients can always obtain the latest cookie, as internally the information would ensure correct routing to the same target. If you want to reuse single cookie it is also possible, ALB will respect it and correctly route the traffic as per the stickiness in the cookie. I would not recommend using the same cookie for periods longer than 60 seconds though. This is to ensure that in case of target becoming unavailable you can acquire new cookie with new stickiness information that would route you to new target.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/49197688', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1199564/']}
jdg_85935
stackexchange
llm_judgeable_groundtruth_similarity
643130
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Sorry if this sounds too silly, but my math skills are very poor and I just need this problem fixed. I made this graphic with GeoGebra 3D and it was quite easy there, but I don't know how to write the equations for this. In the image you can see:

- Sphere with radius 1.
- Point $B$ on the sphere with spherical coordinates.
- Point $C$ on the sphere with spherical coordinates.
- $\alpha$ is the angle between $B$ and $C$.
- Circle $d$ on the sphere from segment $b$ through point $C$.
- $G$ and $D$ are points on that circle.
- $\theta$ is the angle between $C$ and $D$ from the center of the circle.

What I want is an equation that, given an initial point ($B$) and an angle $\alpha$, gives me points on the circle $d$ (like $C$, $G$ or $D$) with $\theta$ as a parameter. And if possible in spherical coordinates... Thanks Now provide the response and nothing else.
Construct a parametrization of the circle $d$ in $3$ stages of increasing generality. 1. Suppose that $B$ is the north pole (colatitude $\varphi = 0$). The circle is the parallel (line of constant colatitude $\varphi = \alpha$), which has $z = \cos \alpha$ and $r = \sin \alpha$; hence, it is parametrized in rectangular coordinates by $t \mapsto \big( x(t), y(t), z(t) \big)$, where$$\left\{\begin{align}x &= \sin \alpha \cos t \\y &= \sin \alpha \sin t \\z &= \cos \alpha\end{align}\right.\qquad \text{for } 0 \le t < 2\pi.$$ 2. Suppose that $B$ is on the prime meridian (longitude $\theta = 0$) but colatitude has some value $\varphi = \beta$, where $0 < \beta \le \pi$. We take the coordinates $(x, y, z)$ of the circle with center at the north pole (from 1.) and rotate them through an angle $\beta$ along the great circle that includes the prime meridian. Using standard formulas for rotating coordinates, we have$$\begin{bmatrix}x \\ y \\ z\end{bmatrix}\mapsto\begin{bmatrix}x \cos \beta + z \sin \beta \\ y \\ -x \sin \beta + z \cos \beta\end{bmatrix}.$$ 3. Suppose that $B$ is anywhere on the sphere with spherical coordinates $(\varphi, \theta) = (\beta, \gamma)$. Take the resulting coordinates from 2. 
and rotate them about the polar axis of the sphere through an angle of $\gamma$:$$\begin{bmatrix}x \\ y \\ z\end{bmatrix}\mapsto\begin{bmatrix}x \cos \gamma - y \sin \gamma \\ x \sin \gamma + y \cos \gamma \\ z\end{bmatrix}.$$ Putting these together, the general case for a circle $d$ with center at $(\varphi, \theta) = (\beta, \gamma)$ and central angle $\alpha$ (that determines the radius) has rectangular coordinates:$$\left\{\begin{align}x &= \phantom{-}( \sin \alpha \cos \beta \cos \gamma ) \cos t - ( \sin \alpha \sin \gamma ) \sin t + (\cos \alpha \sin \beta \cos \gamma ) \\y &= \phantom{-}( \sin \alpha \cos \beta \sin \gamma ) \cos t + ( \sin \alpha \cos \gamma ) \sin t + (\cos \alpha \sin \beta \sin \gamma ) \\z &= -( \sin \alpha \sin \beta ) \cos t + \cos \alpha \cos \beta.\end{align}\right.$$ (One checks directly that these satisfy $x^2+y^2+z^2=1$ and that the dot product with the center direction $B=(\sin\beta\cos\gamma,\ \sin\beta\sin\gamma,\ \cos\beta)$ equals $\cos\alpha$ for every $t$, as it must; for instance $t=0$ gives the point $(\sin(\alpha+\beta)\cos\gamma,\ \sin(\alpha+\beta)\sin\gamma,\ \cos(\alpha+\beta))$.)
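A quick numerical sanity check (plain Python, composing the three rotation stages exactly as described; the sample angles are arbitrary) confirms that every point lies on the unit sphere and keeps the constant angle $\alpha$ to the center $B$:

```python
import math

def circle_point(alpha, beta, gamma, t):
    """Point at parameter t on the circle of angular radius alpha
    whose center lies at colatitude beta, longitude gamma."""
    # Stage 1: the circle around the north pole.
    x = math.sin(alpha) * math.cos(t)
    y = math.sin(alpha) * math.sin(t)
    z = math.cos(alpha)
    # Stage 2: rotate through beta along the prime-meridian great circle.
    x, z = x * math.cos(beta) + z * math.sin(beta), -x * math.sin(beta) + z * math.cos(beta)
    # Stage 3: rotate through gamma about the polar axis.
    x, y = x * math.cos(gamma) - y * math.sin(gamma), x * math.sin(gamma) + y * math.cos(gamma)
    return (x, y, z)

alpha, beta, gamma = 0.7, 1.1, 2.3           # arbitrary sample angles
# Center of the circle: the image of the north pole under stages 2 and 3.
B = (math.sin(beta) * math.cos(gamma), math.sin(beta) * math.sin(gamma), math.cos(beta))

for t in (0.0, 1.0, 2.5, 4.0):
    p = circle_point(alpha, beta, gamma, t)
    on_sphere = abs(sum(c * c for c in p) - 1.0) < 1e-12
    fixed_angle = abs(sum(pc * bc for pc, bc in zip(p, B)) - math.cos(alpha)) < 1e-12
    print(on_sphere, fixed_angle)            # True True for every t
```

Converting a returned point back to spherical coordinates, if desired, is just `phi = acos(z)` and `theta = atan2(y, x)`.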
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/643130', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/122298/']}
jdg_85936
stackexchange
llm_judgeable_groundtruth_similarity
811843
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Assume $f(x_{1},x_{2})$ is a real-valued continuously differentiable function, and assume it holds that $x_2D_{1}f(x_1,x_2) - x_1D_2f(x_1,x_2) = 0$ where $D_1$ is the partial derivative with respect to $x_1$, similar for $D_2$. Show there exists a function $\phi:\mathbb{R} \to \mathbb{R}$, $\phi$ continuously differentiable, such that $f(x_1,x_2) = \phi(x_1^{2} + x_2^2)$. So far I have thought that assuming $\phi$ would exist it must satisfy $2x_1x_2D_1\phi - 2x_1x_2D_2\phi = 0$ and thus this leads to $D_1\phi = D_2\phi$, however I'm not sure if we can assume that $\phi$ exists as that is what the question asks us to prove... really not sure on how to go about it at all! Now provide the response and nothing else.
OK, I'll write this down as a new answer because it

- uses Jyrki Lahtonen's idea to use the three-square theorem instead of the four-square theorem,
- uses Jyrki's method of distinguishing by the remainder $n\pmod 8$,
- uses my idea of subtracting numbers that can be written as different sums of squares to eat up zeroes, and,
- most of all, it produces the exact threshold with virtually no additional work!

The "main trick" is as follows: Assume
$$ A=a^2+b^2=c^2+d^2+e^2=f^2+g^2+h^2+i^2$$
with $a,\ldots ,i\in\mathbb N$. Then if $n>A$ and $n\not\equiv A\pmod 8$ and $n\not\equiv A-1\pmod 8$, we can write $n-A$ as a sum of three squares by the three-square theorem. Depending on how many of these three squares are zero (not all of them, because $n-A>0$), we have
$$n-A=\begin{cases}u^2&\text{or}\\u^2+v^2&\text{or}\\u^2+v^2+w^2\end{cases}$$
with $u,v,w\in\mathbb N$ and hence obtain a representation of $n$ as a sum of five nonzero squares:
$$n=\begin{cases}f^2+g^2+h^2+i^2+u^2&\text{or}\\c^2+d^2+e^2+u^2+v^2&\text{or}\\a^2+b^2+u^2+v^2+w^2.\end{cases}$$

Theorem. Every integer greater than $60$ has at least two distinct representations as a sum of five nonzero squares.

Proof. Let $n\in\mathbb N$ with $n>61$ (sic!) and write $n=8r+k$ with $0\le k\le 7$. Consider first the case that $k\in\{0,1,2,3,6,7\}$. As $61\equiv 5\pmod 8$, the observation
$$\tag1\begin{align}61=6^2+5^2=6^2+4^2+3^2&=5^2+4^2+4^2+2^2\\&=7^2+2^2+2^2+2^2\end{align}$$
allows us to apply the main trick and obtain a representation
$$\tag2 n=a^2+b^2+c^2+d^2+e^2$$
of $n$ as a sum of five nonzero squares. In fact, if four of the summands in $(2)$ add up to $61$, we already obtain two distinct representations, so we assume that $61$ is formed by either two or three of the summands in $(2)$. As also $53\equiv 5\pmod 8$ and
$$\tag3 53 = 7^2+2^2 = 6^2+4^2+1^2=6^2+3^2+2^2+2^2,$$
we obtain another representation of $n$. If this is a different representation, we are done.
Otherwise we need to check how the summands from $(1)$ and $(3)$ can occur in $(2)$, and we distinguish cases depending on the summands these two sub-sums have in common:

- None: Then four or five of the summands in $(2)$ add up to $61+53=114$. Replacing these with $114=10^2+3^2+2^2+1^2=9^2+4^2+3^2+2^2+2^2$, we therefore obtain a distinct representation (it is distinct because the number of $10^2$'s resp. $9^2$'s occurring differs).
- $6^2$: Then four or five squares add up to $61+53-36=78$. Replacing these per $78=8^2+3^2+2^2+1^2=6^2+6^2+2^2+1^2+1^2$ gives us a distinct representation (distinct because the number of $6^2$'s differs).
- $4^2$: A summand $4^2$ occurs in $(3)$ only in $6^2+4^2+1^2$ and in $(1)$ only in $6^2+4^2+3^2$ (remember that the case of four summands in $(1)$ has already been dealt with). Thus we can refer to the case "$6^2+4^2$" below.
- $3^2$: Again, we conclude that $6^2$ is also a common summand, so see "$6^2+3^2$" below.
- $6^2+4^2$: We conclude that $$\tag4 n=\rlap{\underbrace{3^2+6^2+4^2}_{61}}\hphantom{3^2+}\overbrace{\hphantom{6^2+4^2}+1^2}^{53}+a^2$$ for some $a\in\mathbb N$. Since $a^2\bmod 8$ is in $\{0,1,4\}$, we conclude that $k\in\{2,6,7\}$. If $a=1$, we have the two representations $n=63=6^2+4^2+3^2+1^2+1^2=5^2+4^2+3^2+3^2+2^2$. If $a>1$, then $n>65$ and, as $65\equiv 1\pmod 8$ and $k\notin\{0,1\}$, we use the main trick with $$\tag5 65 = 8^2+1^2 = 6^2+5^2+2^2=6^2+4^2+3^2+2^2$$ to obtain a representation of $n$. This can coincide with the representation $(4)$ only as follows (with alternate representations exhibited): $$\begin{align}n&= 6^2+4^2+3^2+\overbrace{1^2+\underset{=a^2}{8^2}}^{65}=10^2+3^2+3^2+2^2+2^2\quad\text{or}\\ n&= 1^2+\overbrace{6^2+4^2+3^2+\underset{=a^2}{2^2}}^{65}=\hphantom{1}4^2+4^2+4^2+3^2+3^2.\end{align}$$
- $6^2+3^2$: This can happen only if $n=\rlap{\underbrace{4^2+6^2+3^2}_{61}}\hphantom{4^2+}\overbrace{\hphantom{6^2+3^2}+2^2+2^2}^{53}=69$, but then also $n=7^2+3^2+3^2+1^2+1^2$.
This completes the proof for the case $k\in\{0,1,2,3,6,7\}$. Assume now that $k\in\{4,5\}$. From
$$\begin{align}50&=5^2+5^2=5^2+4^2+3^2=4^2+4^2+3^2+3^2\\&=7^2+1^2\hphantom{{}=5^2+4^2+3^2}=6^2+3^2+2^2+1^2\end{align}$$
with $50\equiv 2\pmod 8$ we see that the main trick can be applied, and we obtain a representation of $n$ as a sum of five nonzero squares. Unless what we obtain is $n=5^2+4^2+3^2+a^2+b^2$, we already get two distinct representations and are done. Doing the same with
$$26 = 5^2+1^2 = 4^2+3^2+1^2=3^2+3^2+2^2+2^2$$
(where also $26\equiv 2\pmod 8$), we get another representation unless the following coincidence happens:
$$\tag6 n=\rlap{\underbrace{4^2+3^2+5^2}_{50}}\hphantom{4^2+3^2+}\overbrace{\hphantom{5^2}+1^2}^{26}+a^2=\rlap{\underbrace{5^2+4^2+3^2}_{50}}\hphantom{5^2+}\overbrace{\hphantom{4^2+3^2}+1^2}^{26}+a^2. $$
If $a$ is even, we get $n\equiv 3\pmod 4$, contradicting $k\in\{4,5\}$; hence $a$ is odd. As $n>61$, we conclude $a\ge 5$, hence $n>74$, and from
$$ 74 = 7^2+5^2=7^2+4^2+3^2=6^2+6^2+1^2+1^2$$
(with $74\equiv 2\pmod 8$ as well) we obtain another representation unless the following coincidence occurs:
$$ n= 4^2+3^2+1^2+\overbrace{5^2+\underset{=a^2}{7^2}}^{74}=5^2+1^2+\overbrace{4^2+3^2+\underset{=a^2}{7^2}}^{74}=100,$$
but then we have the second representation
$$n = 9^2+4^2+1^2+1^2+1^2.$$
In summary, we have shown that for all $n>61$ there exist at least two distinct representations of $n$ as a sum of five nonzero squares. From
$$ 61 = 7^2+3^2+1^2+1^2+1^2 = 5^2+5^2+3^2+1^2+1^2$$
we see that the result is in fact true for all $n>60$.
$_\square$ On the other hand, one verifies that the representation
$$\tag7 60=5^2+4^2+3^2+3^2+1^2$$
is unique, hence that the bound $60$ in the theorem is sharp, either by exhaustive search or as follows: Assume $60=a^2+b^2+c^2+d^2+e^2$ with $a\ge b\ge c\ge d\ge e$. Then $4\le a\le 7$ because $5\cdot 3^2<60<8^2$. If $a=7$, then $60-7^2-3\ge b^2\ge \frac{60-7^2}{4}$ implies $b=2$, but $60-7^2-2^2=7$ cannot be written as a sum of three squares. If $a=6$, then $60-6^2-3\ge b^2\ge \frac{60-6^2}{4}$ implies $b=3$ or $b=4$, but $60-6^2-4^2=8$ and $60-6^2-3^2=15$ cannot be written as sums of three nonzero squares. If $a=4$ then also $b=4$ because $4^2+4\cdot 3^2<60$, and $c=4$ because $2\cdot4^2+3\cdot 3^2<60$; but $60-3\cdot 4^2=12$ is not the sum of two squares. Remains $a=5$, $b^2+c^2+d^2+e^2=35$. From $4\cdot 3^2>35$ we conclude $e\le 2$. Then from $3\cdot 3^2+2^2<35$, we conclude $b\ge 4$. If $b=5$ then $10=c^2+d^2+e^2$ is written as a sum of three nonzero squares, which is not possible. Therefore $b=4$, and we have to write $19$ as a sum of three, necessarily odd, squares. The only solution leads to $(7)$.

Generalization. Let $r(n)$ denote the number of representations of $n$ as a sum of five nonzero squares. If $n>5408$ then $r(n)\ge \lfloor\frac18\sqrt{n-101}\rfloor$; if additionally $n\not\equiv 1\pmod 8$ then $r(n)\ge\frac{n}{720}$. If $k\ge10$ and $n\ge 64k^2+101$ then $r(n)\ge k$. If $k\ge 7$, $n>720(k-1)$, and $n\not\equiv 1\pmod 8$, then $r(n)\ge k$.

Proof. As pointed out by Jyrki Lahtonen in his answer, unless $n\equiv 1\pmod 8$, we obtain a representation of $n=a^2+b^2+c^2+d^2+e^2$ as a sum of five nonzero squares for each choice $(a,b)$ with $a^2,b^2<\frac n2$ where $a\bmod 4$, $b\bmod 4$ have a prescribed value (depending on $n\bmod 8$). For $a$ (and also for $b$) there are at least $\left\lfloor\frac14\sqrt{\frac{n-1}2}\right\rfloor$ choices. Each representation is found repeatedly at most $5\cdot 4$ times (the ways $a$ and $b$ can be picked among the five summands).
This gives us at least
$$ \tag8 r(n)\ge\left\lceil\frac1{20}\left\lfloor \frac14\sqrt{\frac{n-1}2}\right\rfloor^2\right\rceil\sim\frac{n}{640}$$
representations. To make the "$\sim$" more explicit, one verifies
$$r(n)\ge\begin{cases}\frac n{784}&\text{if $n\ge1000$ (with $n=1568$ being critical)},\\\frac n{720}&\text{if $n\ge3873$ (with $n=7200$ being critical)},\\\frac n{648}&\text{if }n>10^6.\end{cases} $$
We now come to the case $n\equiv 1\pmod 8$. If $a\equiv 0\pmod 4$ and $a^2<n-100$, then $n-10^2-a^2>0$ and $\equiv 5\pmod 8$ can be written as a sum of up to three nonzero squares. One of these squares must be $\equiv 1\pmod 8$, one must be $\equiv 4\pmod 8$, and the third square, if any, is $\equiv 0\pmod 8$. Hence either
$$\tag{9a}n=8^2+6^2+a^2+b^2+c^2\text{ with $b\equiv 1\pmod 2, c\equiv 2\pmod 4$}$$
or
$$\tag{9b}n=10^2+a^2+b^2+c^2+d^2\text{ with additionally $d\equiv 0\pmod 4$}.$$
Let us agree that we use $(9a)$ only if for the given choice of $a$ no representation of type $(9b)$ exists. There are at least $\left\lfloor \frac14\sqrt{n-101}\right\rfloor$ choices for $a$. Distinct $a$ can never lead to the same type $(9a)$ representation (only $a$ is a multiple of $4$); distinct $a$ can lead to the same representation of type $(9b)$ only if $a\leftrightarrow d$ are swapped; by our preference for $(9b)$ over $(9a)$, it is not possible for two choices of $a$ to lead to the same representation where one is of type $(9a)$ and the other is of type $(9b)$. We conclude that
$$ \tag{10}r(n)\ge\frac 12\left\lfloor \frac14\sqrt{n-101}\right\rfloor\ge \left\lfloor \frac18\sqrt{n-101}\right\rfloor.$$
It is clear that this bound is lower than the one found in $(8)$ if $n$ is large enough. One verifies (standard estimates for floor/ceiling will work, but, hey, just look at the graphs of the functions) that the largest $n$ where the bound $(8)$ is lower than the bound $(10)$ is $n=5408$, hence the first claim. The other claims follow by solving the inequalities for $n$.
$_\square$ Remark: Donovan Johnson (2013) reports on OEIS (sequence A080673) that no $n\le 10^6$ with $r(n)=188$ exists. According to the estimate in the generalization, extending the search range up to $2286245$ (where one may restrict to $n\equiv 1\pmod 8$) would prove that no $n$ with $r(n)=188$ exists at all (or find all such $n$). In the proof above I made use only of $10^2=6^2+8^2$. One may find more representations by using more even-sided Pythagorean triangles, such as $20^2=12^2+16^2$ or $26^2=24^2+10^2$. While I wanted to avoid the more complex bookkeeping about duplicates, pursuing this might show that D. Johnson's finding is sufficient to conclude that $r(n)\ne 188$ for all $n$.
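These counts are easy to double-check by brute force. A short script (plain Python; exhaustive and unoptimized, so keep the range modest) confirms the uniqueness of the representation of $60$ and the theorem's two-representation threshold:

```python
from itertools import combinations_with_replacement

def representations(n):
    """All multisets (a <= b <= c <= d <= e) of positive integers
    whose squares sum to n."""
    m = int(n ** 0.5)
    return [t for t in combinations_with_replacement(range(1, m + 1), 5)
            if sum(x * x for x in t) == n]

print(representations(60))                                        # [(1, 3, 3, 4, 5)] -> unique
print(all(len(representations(n)) >= 2 for n in range(61, 201)))  # True
```

A dynamic-programming count of $r(n)$ would be needed to push the search anywhere near the $2286245$ range mentioned above; the sketch here is only meant as a sanity check of the small cases.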
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/811843', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/136586/']}
jdg_85937
stackexchange
llm_judgeable_groundtruth_similarity
23059580
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using a custom view, and in it I am using a canvas on which a user can draw anything; after that I want to save that image to the SD card, but I was not able to do that. I don't know what is going on.

else if(view.getId()==R.id.save_btn){
    //save drawing
    AlertDialog.Builder saveDialog = new AlertDialog.Builder(this);
    saveDialog.setTitle("Save drawing");
    saveDialog.setMessage("Save drawing to device Gallery?");
    saveDialog.setPositiveButton("Yes", new DialogInterface.OnClickListener(){
        private FileOutputStream fOut;
        public void onClick(DialogInterface dialog, int which){
            //save drawing
            drawView.setDrawingCacheEnabled(true);
            //attempt to save
            String imgSaved = MediaStore.Images.Media.insertImage(
                    getContentResolver(), drawView.getDrawingCache(),
                    UUID.randomUUID().toString()+".png", "drawing");
            //feedback
            if(imgSaved!=null){
                Toast savedToast = Toast.makeText(getApplicationContext(),
                        "Drawing saved to Gallery!", Toast.LENGTH_SHORT);
                savedToast.show();
            }
            else{
                Toast unsavedToast = Toast.makeText(getApplicationContext(),
                        "Oops! Image could not be saved.", Toast.LENGTH_SHORT);
                unsavedToast.show();
            }
            drawView.destroyDrawingCache();
        }
    });
    saveDialog.setNegativeButton("Cancel", new DialogInterface.OnClickListener(){
        public void onClick(DialogInterface dialog, int which){
            dialog.cancel();
        }
    });
    saveDialog.show();
}

HERE IS THE ERROR DETAILS

04-14 11:24:28.700: E/MediaStore(6866): Failed to insert image
04-14 11:24:28.700: E/MediaStore(6866): java.io.FileNotFoundException: No such file or directory
04-14 11:24:28.700: E/MediaStore(6866): at android.database.DatabaseUtils.readExceptionWithFileNotFoundExceptionFromParcel(DatabaseUtils.java:146)
04-14 11:24:28.700: E/MediaStore(6866): at android.content.ContentProviderProxy.openAssetFile(ContentProviderNative.java:577)
04-14 11:24:28.700: E/MediaStore(6866): at android.content.ContentResolver.openAssetFileDescriptor(ContentResolver.java:673)
04-14 11:24:28.700: E/MediaStore(6866): at android.content.ContentResolver.openOutputStream(ContentResolver.java:537)
04-14 11:24:28.700: E/MediaStore(6866): at android.content.ContentResolver.openOutputStream(ContentResolver.java:513)
04-14 11:24:28.700: E/MediaStore(6866): at android.provider.MediaStore$Images$Media.insertImage(MediaStore.java:891)
04-14 11:24:28.700: E/MediaStore(6866): at com.example.clent.MainActivity$9.onClick(MainActivity.java:238)
04-14 11:24:28.700: E/MediaStore(6866): at com.android.internal.app.AlertController$ButtonHandler.handleMessage(AlertController.java:166)
04-14 11:24:28.700: E/MediaStore(6866): at android.os.Handler.dispatchMessage(Handler.java:99)
04-14 11:24:28.700: E/MediaStore(6866): at android.os.Looper.loop(Looper.java:137)
04-14 11:24:28.700: E/MediaStore(6866): at android.app.ActivityThread.main(ActivityThread.java:5103)
04-14 11:24:28.700: E/MediaStore(6866): at java.lang.reflect.Method.invokeNative(Native Method)
04-14 11:24:28.700: E/MediaStore(6866): at java.lang.reflect.Method.invoke(Method.java:525)
04-14 11:24:28.700: E/MediaStore(6866): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:737)
04-14 11:24:28.700: E/MediaStore(6866): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553)
04-14 11:24:28.700: E/MediaStore(6866): at dalvik.system.NativeStart.main(Native Method)

I am always getting this message while trying to save the image: "Oops! Image could not be saved." Now provide the response and nothing else.
I had this issue in the Emulator (Android 4.4), and it turns out it's due to an Android bug: it happens when the user hasn't taken a photo on the device before (i.e. the gallery is empty and hasn't been initialized). The workaround is to initialize the photo directory manually:

void fixMediaDir() {
    File sdcard = Environment.getExternalStorageDirectory();
    if (sdcard != null) {
        File mediaDir = new File(sdcard, "DCIM/Camera");
        if (!mediaDir.exists()) {
            mediaDir.mkdirs();
        }
    }
}

Not sure if this is fixed in later versions of Android.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23059580', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3432987/']}
jdg_85938
stackexchange
llm_judgeable_groundtruth_similarity
14378181
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My areaspline chart has Y axis values up to approximately 6000. Highcharts automatically changes the "000" part on my Y axis to a "k". As I'm French and the site is meant to be in that same language, this "k" abbreviation won't make sense in my case, and I'd like to have a plain "000" display instead of it. Is this possible? How? Now provide the response and nothing else.
You can do this by explicitly overriding the lang.numericSymbols* with null in the defaultOptions as follows:

Highcharts.setOptions({
    lang: {
        numericSymbols: null // otherwise by default ['k', 'M', 'G', 'T', 'P', 'E']
    }
});

The documentation reads as follows:

numericSymbols: Array<String> Metric prefixes used to shorten high numbers in axis labels. Replacing any of the positions with null causes the full number to be written. Setting numericSymbols to null disables shortening altogether. Defaults to ['k', 'M', 'G', 'T', 'P', 'E'].

*officially available from v1.2.0 (2012-08-24). This won't work before v1.2.0, as the suffixes were hard coded then.

Alternate solution (should work on all versions that support formatter): use yAxis.labels.formatter and return the value as it is:

yAxis: {
    labels: {
        formatter: function () {
            return this.value;
        }
    }
}

Disabling metric notation on axis values | Highchart & Highstock (v1.2+) @ jsFiddle
Disabling metric notation on axis values | Highchart & Highstock (old versions) @ jsFiddle
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14378181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1728946/']}
jdg_85939
stackexchange
llm_judgeable_groundtruth_similarity
194398
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am studying for a test. The textbook problem is: The vectors $v_1, v_2, v_3, v_4$ are linearly independent. Determine if the following vectors are also independent. $v_1-v_2, 2(v_2-v_3), 3(v_3-v_4)$ I have no idea how to determine this. I would greatly appreciate any help. Thank you! Now provide the response and nothing else.
If you write down the linear independence equation for the new vectors, you have the following:$$a(\mathbf{v_1} - \mathbf{v_2}) + 2b(\mathbf{v_2} - \mathbf{v_3}) + 3c(\mathbf{v_3} - \mathbf{v_4}) = \mathbf{0}$$for scalars $a, b, c$. Now this can be rearranged to$$a\mathbf{v_1} + (2b - a)\mathbf{v_2} + (3c - 2b)\mathbf{v_3} -3c\mathbf{v_4} = \mathbf{0}$$What can you say about the coefficients of this equation? Edit: I feel I need to add a bit more to this. First, recall the definition of linear independence. If a set of vectors $\{\mathbf{v_1}, \cdots, \mathbf{v_n}\}$ is linearly independent, then the only solution to the equation$$a_1\mathbf{v_1} + \cdots + a_n\mathbf{v_n}=0$$is if all the scalars $a_1,\ \cdots,\ a_n$ are zero. Note we are not aiming to sum the vectors to zero, but rather we are interested in how they sum to zero. If there is a non-trivial solution, i.e. you can add the vectors such that a non-zero linear combination makes the zero vector, then the vectors are said to be linearly dependent. There is a dependence amongst the vectors in the sense that some of the vectors in the set can be written as a linear combination of the others. In your case, we are interested in a set of four vectors. So we care about how these four vectors $\{\mathbf{v_1}, \mathbf{v_2}, \mathbf{v_3}, \mathbf{v_4}\}$ sum to zero. I could use $a_1, a_2, a_3$ as the scalar coefficients, or I could use $b_1, b_2, b_3$, but I chose $a, b, c$ for convenience (no subscripts). You should note that how we choose to represent the scalars (or the vectors) has no real effect on the question; it doesn't matter what we name the coefficients, they're just names. Therefore, we are interested in which $a$s, $b$s and $c$s can make this equation zero:$$a(\mathbf{v_1} - \mathbf{v_2}) + 2b(\mathbf{v_2} - \mathbf{v_3}) + 3c(\mathbf{v_3} - \mathbf{v_4}) = \mathbf{0}$$Now we know nothing about these vectors, but we know about the constituents.
So we separate each of the vectors and we get$$a\mathbf{v_1} + (2b - a)\mathbf{v_2} + (3c - 2b)\mathbf{v_3} -3c\mathbf{v_4} = \mathbf{0}$$This is an equation we recognize. We know that the set of vectors $\{\mathbf{v_1}, \mathbf{v_2}, \mathbf{v_3}, \mathbf{v_4}\}$ is linearly independent, so all the coefficients that solve the above equation must be $0$. Namely$$a=0$$$$2b - a = 0$$$$3c - 2b = 0$$$$-3c = 0$$From this simple system, we can see that the only solution for $a, b, c$ is $a=b=c=0$. That means the new vectors are indeed linearly independent. It would be impossible to find a non-zero solution to the linear independence equation. The way to prove that a set of vectors is linearly independent is to show that they cannot "make" the zero vector in a non-trivial way. What I mean is the following. Say we work in $\mathbb{R}^3$. Then the equation$$a_1\begin{pmatrix}1 \\ 0 \\ 0\end{pmatrix} + a_2\begin{pmatrix}0 \\ 1 \\ 0\end{pmatrix} + a_3\begin{pmatrix}0 \\ 0 \\ 1\end{pmatrix} = \begin{pmatrix}0 \\ 0 \\ 0\end{pmatrix}$$is the equation which determines if the three vectors above are linearly independent. Now we don't know what $a_1$ or $a_2$ or $a_3$ is, but we'd like to find out. Certainly there does exist a solution, namely $a_1 = a_2 = a_3 = 0$ will solve the equation. But this solution is too "obvious" and rather uninteresting, so we say this is the trivial solution. Now we can ask, are there other solutions to the equation? Non-trivial solutions where the coefficients are not all zero? The answer for this particular set of vectors is no, and it's rather obvious to see. $a_1$ must be zero or the first component will be non-zero. Likewise $a_2$ must be zero or the second component will be non-zero, same with $a_3$. So we've shown that the above equation has only one solution, the trivial solution. Linearly independent sets are by definition the sets in which there exists only the trivial solution.
In that sense, we have proven that the above set is linearly independent. If there exist non-trivial solutions, then the set is called linearly dependent. One example is the following:$$a_1\begin{pmatrix}1 \\ 1 \\ 0\end{pmatrix} + a_2\begin{pmatrix}1 \\ 2 \\ 1\end{pmatrix} + a_3\begin{pmatrix}1 \\ 0 \\ 0\end{pmatrix} + a_4\begin{pmatrix}0 \\ 0 \\ 1\end{pmatrix} = \begin{pmatrix}0 \\ 0 \\ 0\end{pmatrix}$$Again, we want to find out which values of $a_1, \cdots, a_4$ solve the equation. We still have the trivial solution. It's always there. But more importantly, you can verify that$$a_1 = -2, a_2 = 1, a_3 = 1, a_4 = -1$$also solves the equation. In fact, there exist infinitely many values of the coefficients which solve the above equation. Since there exist non-trivial solutions to the above equation, we have proven that the above set is linearly dependent. This is how the method works in general, perhaps not as easily and clearly as the above examples, but the same principles carry over. To prove that a set is linearly independent, you must prove that the set cannot add to the zero vector in a non-trivial way. If you have any more confusion after this, please ask me.
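To make this concrete computationally, here is a small sketch in plain Python (no external libraries): it takes the standard basis of $\mathbb{R}^4$ as a certainly-independent set $v_1,\dots,v_4$, forms the three new vectors from the problem, and checks via Gaussian elimination that they have full rank $3$, i.e. only the trivial combination vanishes:

```python
def rank(rows, eps=1e-9):
    """Rank of a small real matrix via Gauss-Jordan elimination."""
    m = [list(r) for r in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > eps), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > eps:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# v1..v4: the standard basis of R^4, certainly a linearly independent set.
v1, v2, v3, v4 = [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]
sub = lambda x, y: [a - b for a, b in zip(x, y)]
scale = lambda c, x: [c * a for a in x]

new_vectors = [sub(v1, v2), scale(2, sub(v2, v3)), scale(3, sub(v3, v4))]
print(rank(new_vectors))   # 3 -> only the trivial combination vanishes
```

For contrast, feeding `rank` the four vectors of the dependent example above returns $3$, less than $4$, detecting the dependence.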
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/194398', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/40083/']}
jdg_85940