Dataset columns:
  source                  stringclasses (1 value)
  task_type               stringclasses (1 value)
  in_source_id            stringlengths (1-8)
  prompt                  stringlengths (209-40.4k)
  gold_standard_solution  stringlengths (0-56.7k)
  verification_info       stringclasses (1 value)
  metadata                stringlengths (138-225)
  problem_id              stringlengths (9-10)
stackexchange
llm_judgeable_groundtruth_similarity
17909294
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What I need is: prog [-a xxx | [-b yyy -c zzz]] I tried this, but it does not work. Could someone help me out?

group = parser.add_argument_group('Model 2')
group_ex = group.add_mutually_exclusive_group()
group_ex.add_argument("-a", type=str, action = "store", default = "", help="test")
group_ex_2 = group_ex.add_argument_group("option 2")
group_ex_2.add_argument("-b", type=str, action = "store", default = "", help="test")
group_ex_2.add_argument("-c", type=str, action = "store", default = "", help="test")

Thanks! Now provide the response and nothing else.
add_mutually_exclusive_group doesn't make an entire group mutually exclusive. It makes options within the group mutually exclusive. What you're looking for is subcommands. Instead of prog [ -a xxxx | [-b yyy -c zzz]], you'd have:

prog
  command 1
    -a: ...
  command 2
    -b: ...
    -c: ...

To invoke with the first set of arguments:

prog command_1 -a xxxx

To invoke with the second set of arguments:

prog command_2 -b yyyy -c zzzz

You can also set the sub command arguments as positional.

prog command_1 xxxx

Kind of like git or svn:

git commit -am
git merge develop

Working Example

# create the top-level parser
parser = argparse.ArgumentParser(prog='PROG')
parser.add_argument('--foo', action='store_true', help='help for foo arg.')
subparsers = parser.add_subparsers(help='help for subcommand', dest="subcommand")

# create the parser for the "command_1" command
parser_a = subparsers.add_parser('command_1', help='command_1 help')
parser_a.add_argument('a', type=str, help='help for bar, positional')

# create the parser for the "command_2" command
parser_b = subparsers.add_parser('command_2', help='help for command_2')
parser_b.add_argument('-b', type=str, help='help for b')
parser_b.add_argument('-c', type=str, action='store', default='', help='test')

Test it

>>> parser.print_help()
usage: PROG [-h] [--foo] {command_1,command_2} ...

positional arguments:
  {command_1,command_2}
                        help for subcommand
    command_1           command_1 help
    command_2           help for command_2

optional arguments:
  -h, --help            show this help message and exit
  --foo                 help for foo arg.
>>>
>>> parser.parse_args(['command_1', 'working'])
Namespace(subcommand='command_1', a='working', foo=False)
>>> parser.parse_args(['command_1', 'wellness', '-b x'])
usage: PROG [-h] [--foo] {command_1,command_2} ...
PROG: error: unrecognized arguments: -b x

Good luck.
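If the single-command shape prog [-a xxx | [-b yyy -c zzz]] from the question really is a hard requirement, a common fallback is to declare all three options on one parser and validate the combination yourself after parsing, since argparse's mutually exclusive groups cannot express "either -a alone, or -b and -c together". A minimal sketch in Python (option names taken from the question; the None defaults and error messages are illustrative assumptions):

import argparse

parser = argparse.ArgumentParser(prog='prog')
parser.add_argument('-a', default=None, help='standalone option')
parser.add_argument('-b', default=None, help='must be used together with -c')
parser.add_argument('-c', default=None, help='must be used together with -b')
args = parser.parse_args()

# Enforce: either -a alone, or -b and -c together.
if args.a is not None and (args.b is not None or args.c is not None):
    parser.error('-a cannot be combined with -b/-c')
if (args.b is None) != (args.c is None):
    parser.error('-b and -c must be given together')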
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/17909294', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/591506/']}
jdg_86241
stackexchange
llm_judgeable_groundtruth_similarity
32734
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Does there exist a notion of Jordan curve homotopy? In particular, suppose we have two Jordan curves $C_0 : S^1 \rightarrow \mathbb{R}^2$ and $C_1 : S^1 \rightarrow \mathbb{R}^2$. When does there exist a continuous function $f: S^1 \times [0,1] \rightarrow \mathbb{R}^2$ such that: $f(x,0) = C_0(x)$, $f(x,1) = C_1(x)$, and for all $t \in [0,1]$, the function $C_t: S^1 \rightarrow \mathbb{R}^2$ defined by $C_t(x) = f(x,t)$ is a Jordan curve. My intuition tells me that such a function always exists, but I'm unsure about how to go about proving this. Also, if this is a known result, are there similar results for manifolds other than $\mathbb{R}^2$? Now provide the response and nothing else.
Homotopies through embeddings are usually called isotopies. There is a subtlety called local flatness that comes up in higher dimensions. Let $E$ be any embedding of $\mathbb R$ in $\mathbb R^3$ such that $E(s)=(s,0,0)$ when $s<-1$ or $s>1$. Define a homotopy $H_t$ with $E_0(s)=(s,0,0)$ for all $s\in \mathbb R$ and $E_1=E$, as follows: $E_t(s)=tE(s/t)$ if $-t\le s\le t$ and otherwise $E_t(s)=(s,0,0)$. This is a homotopy through embeddings, but it (un)ties the knot. This is easily adapted to apply to examples of embeddings of $S^1$ in $\mathbb R^3$, for example. The way to fix this problem is to only consider embeddings that are locally flat and isotopies that are locally flat. An embedding $E:M\to N$ is (topologically) locally flat if for every point $p\in M$ there exist charts around $p$ and $E(p)$ such that $E$ looks like $(x_1,\dots,x_m)\mapsto (x_1,\dots,x_m,0,\dots,0)$. An isotopy $E_t$ is locally flat if for each point $p\in M$ and time $\tau\in I$ there are charts around $(p,\tau)\in M\times I$ and around $(E_\tau(p),\tau)\in N\times I$, both of them using projection to $I$ as last coordinate, such that locally $(x,t)\mapsto (E_t(x),t)$ looks like $(x_1,\dots,x_m,t)\mapsto (x_1,\dots,x_m,0,\dots,0,t)$. Local flatness is automatic when $m=1$ and $n=2$. The example I gave (with $m=1$ and $n=3$) was such that the isotopy was not locally flat although if the original embedding $E_1$ was locally flat then for every $t$ the embedding $E_t$ was, too. The Alexander horned sphere ($m=2$, $n=3$, not locally flat) can be smoothed out by such a procedure, too. Another way of limiting oneself to the right kind of isotopies is to use ambient isotopies: to require $E_t$ to be $H_t\circ E_0$ where $H_t$ is a homeomorphism $N\to N$ depending on $t$. (Local flatness in the case $m=n$ follows from invariance of domain.) Another way is to limit oneself to smooth embeddings (meaning, as usual, smooth maps that are topological embeddings and that are one to one on the tangent-space level, or equivalently locally flat in the smooth category) and smooth isotopies.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/32734', 'https://mathoverflow.net', 'https://mathoverflow.net/users/7726/']}
jdg_86242
stackexchange
llm_judgeable_groundtruth_similarity
26675814
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When you upgrade from Gradle 2.0 to 2.1, you may see the following error:

Could not HEAD 'https://jcenter.bintray.com/de/undercouch/gradle-download-task/1.0/gradle-download-task-1.0.pom'.
> peer not authenticated

To be fair, this error is not due to Gradle. It is a problem with how Java handles SSL connections. I am using the following Java version:

java -version
java version "1.7.0_40"
Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

You can use the following answer to import the security certificate into your Java installation. I do not care about HTTPS downloads; HTTP is enough for me. So see my answer. Now provide the response and nothing else.
Change your repository syntax to one of the following:

repositories {
    jcenter {
        url "http://jcenter.bintray.com/"
    }
}

repositories {
    maven {
        url "http://repo1.maven.org/maven2"
    }
}

The current (2.1) Gradle docs claim that:

Warning: Be aware that the central Maven 2 repository is HTTP only and HTTPS is not supported. If you need a public HTTPS enabled central repository, you can use the JCenter public repository (see Section 51.6.3, “Maven JCenter repository”).

I think Maven Central went for HTTPS. If I use the following syntax,

repositories {
    mavenCentral()
}

I get the following error:

D:\tools\gradle-2.1-bin\bin\gradle.bat -q

FAILURE: Build failed with an exception.

What went wrong:
A problem occurred configuring root project 'XXXX'.
Could not resolve all dependencies for configuration ':classpath'.
Could not resolve de.undercouch:gradle-download-task:1.0.
Required by:
    :XXXX:unspecified
Could not HEAD 'https://repo1.maven.org/maven2/de/undercouch/gradle-download-task/1.0/gradle-download-task-1.0.pom'.
peer not authenticated

Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26675814', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41782/']}
jdg_86243
stackexchange
llm_judgeable_groundtruth_similarity
5300
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Give an example of functions $f$ and $g$ from $\mathbb{R}$ to $\mathbb{R}$ such that:

a. For every real number $c$, neither $f$ nor $g$ is continuous at $c$.
b. For every real number $c$, the sum $f + g$ is continuous at $c$.
c. For every real number $c$, the product $fg$ is continuous at $c$.

Hint: consider functions defined one way on rationals and another on irrationals. Now provide the response and nothing else.
Take $f=1_\mathbb Q$ (that is, $f(x)=1$ if $x$ is rational and $f(x)=0$ otherwise) and $g=1-f$. Then neither $f$ nor $g$ is continuous at any point but $f+g$ and $fg$ are very simple.
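Spelling the "very simple" part out, directly from the definition of $f$ above:
$$(f+g)(x) = f(x) + \bigl(1-f(x)\bigr) = 1, \qquad (fg)(x) = f(x)\bigl(1-f(x)\bigr) = 0 \quad\text{for all } x\in\mathbb{R},$$
since $f(x)$ is always either $0$ or $1$. Constant functions are continuous at every $c$, which gives (b) and (c), while $f$ and $g$ each take both values $0$ and $1$ on every open interval, so neither is continuous at any point, which gives (a).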
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/5300', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1958/']}
jdg_86244
stackexchange
llm_judgeable_groundtruth_similarity
36587023
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using the Tomcat JDBC connection pool along with Spring Boot, JDBC template and SQL Server. I need to know what is going on inside the connection pool while the application is waiting for a database connection, such as:

No. of active connections
No. of idle connections
No. of blocked connections, plus additional info on why a connection is blocked
No. of available connections
and ...

Is there any way to get this info by debugging or by using logging frameworks like log4j? Any ideas will be appreciated. Now provide the response and nothing else.
After a lot of research, I am able to find 3 ways to log & monitor database connection pool. https://tomcat.apache.org/tomcat-8.0-doc/jdbc-pool.html Monitoring using Spring Boot properties . Monitoring using JMX ( Java Management Extensions ) (as @nitin suggested) Monitoring using Spring Aspects . 1st Way: Monitoring using Spring Boot properties. I found below Spring boot properties which will be much useful to log & monitor database connection pool. These properties (and some more too) were not documented .Please refer below github issue for more details. https://github.com/spring-projects/spring-boot/issues/1829 #Maximum no.of active connectionsspring.datasource.max-active=10#Log the stack trace of abandoned connectionspring.datasource.log-abandoned=true#Remove abandoned connection,So, new connection will be created and made available to threads which are waiting for DB connectionspring.datasource.remove-abandoned=true#If any connection is not used for 10 seconds, consider that connection as "abandoned"spring.datasource.remove-abandoned-timeout=10 #Number of ms to wait before throwing an exception if no connection is available.spring.datasource.max-wait=1000 This list contains more properties which are related to datasource only .(taken from the link above) spring.datasource.abandon-when-percentage-fullspring.datasource.access-to-underlying-connection-allowedspring.datasource.alternate-username-allowedspring.datasource.auto-commitspring.datasource.catalogspring.datasource.commit-on-returnspring.datasource.connection-customizerspring.datasource.connection-customizer-class-namespring.datasource.connection-init-sqlspring.datasource.connection-init-sqlsspring.datasource.connection-propertiesspring.datasource.connection-test-queryspring.datasource.connection-timeoutspring.datasource.data-sourcespring.datasource.data-source-class-namespring.datasource.data-source-j-n-d-ispring.datasource.data-source-propertiesspring.datasource.db-propertiesspring.datasource.default-auto-commitspring.datasource.default-catalogspring.datasource.default-read-onlyspring.datasource.default-transaction-isolationspring.datasource.driver-class-loaderspring.datasource.fair-queuespring.datasource.idle-timeoutspring.datasource.ignore-exception-on-pre-loadspring.datasource.init-s-q-lspring.datasource.initialization-fail-fastspring.datasource.isolate-internal-queriesspring.datasource.jdbc-interceptorsspring.datasource.jdbc-urlspring.datasource.jdbc4-connection-testspring.datasource.leak-detection-thresholdspring.datasource.log-abandonedspring.datasource.log-validation-errorsspring.datasource.log-writerspring.datasource.login-timeoutspring.datasource.max-agespring.datasource.max-lifetimespring.datasource.max-open-prepared-statementsspring.datasource.maximum-pool-sizespring.datasource.metrics-tracker-class-namespring.datasource.minimum-idlespring.datasource.num-tests-per-eviction-runspring.datasource.pool-namespring.datasource.pool-prepared-statementsspring.datasource.pool-propertiesspring.datasource.propagate-interrupt-statespring.datasource.read-onlyspring.datasource.record-metricsspring.datasource.register-mbeansspring.datasource.remove-abandonedspring.datasource.remove-abandoned-timeoutspring.datasource.rollback-on-returnspring.datasource.suspect-timeoutspring.datasource.test-on-connectspring.datasource.thread-factoryspring.datasource.transaction-isolationspring.datasource.use-disposable-connection-facadespring.datasource.use-equalsspring.datasource.use-lockspring.datasource.validation-intervalspring.datasource.validation
-query-timeoutspring.datasource.validatorspring.datasource.validator-class-namespring.datasource.xaspring.datasource.xa.data-source-class-namespring.datasource.xa.properties 2nd Way: Monitoring using JMX ( Java Management Extensions ) Tomcat JDBC pool provides a MBean namely ConnectionPoolMBean. https://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/tomcat/jdbc/pool/jmx/ConnectionPoolMBean.html Spring Boot registers JMX MBeans automatically.So, no need to register/export this MBean into MBean server. Just open the JConsole which is coming with JDK, To open, In Windows-> Command prompt ->jconsole, thats it.Refer below screenshot for more info. This MBean also notifies whenever a connection is abandoned, connection failed, when a query is taking long time etc. Refer screenshot below. 3rd Way: Monitoring using Spring Aspects (only for development/QA environment). I use this aspect to log TomcatJdbc Connection Pool. I created a Spring Aspect which will intercept every database call.This will surely affect the performance . So, use this aspect in development/QA environment, comment out this method when it is not required (for example : during production deployment). @Before("execution(* com.test.app.db.dao.*.*(..))") public void logBeforeConnection(JoinPoint jp) throws Throwable { String methodName = ""; methodName += jp.getTarget().getClass().getName(); methodName += ":"; methodName += jp.getSignature().getName(); logger.info("before method call : " + methodName + " : number of connections in use by the application (active) : "+ tomcatJdbcPoolDataSource.getNumActive()); logger.info("before method call : " + methodName + " : the number of established but idle connections : "+ tomcatJdbcPoolDataSource.getNumIdle()); logger.info("before method call : " + methodName + " : number of threads waiting for a connection : "+ tomcatJdbcPoolDataSource.getWaitCount()); }@After("execution(* com.test.app.db.dao.*.*(..)) ")public void logAfterConnection(JoinPoint jp) throws Throwable { String methodName = ""; methodName += jp.getTarget().getClass().getName(); methodName += ":"; methodName += jp.getSignature().getName(); logger.info("after method call : " + methodName + " : number of connections in use by the application (active) : "+ tomcatJdbcPoolDataSource.getNumActive()); logger.info("after method call : " + methodName + " : the number of established but idle connections : "+ tomcatJdbcPoolDataSource.getNumIdle()); logger.info("after method call : " + methodName + " : number of threads waiting for a connection : "+ tomcatJdbcPoolDataSource.getWaitCount()); //tomcatJdbcPoolDataSource.checkAbandoned();} Now, you can easily identify the particular database call which creates connection leak in your application.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/36587023', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1401019/']}
jdg_86245
stackexchange
llm_judgeable_groundtruth_similarity
26346336
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Immediately after publishing a new version of an application on Google Play, I get a burst of new crashes from an unknown device called "unknown Full Android on Emulator". Here is a screenshot from Crashlytics after the application was uploaded to Google Play. This crash did not come from a real device. Does anyone know what this emulator is? Crashlytics console: A similar pattern of crashes can be seen in this screen clip from the Splunk console: Now provide the response and nothing else.
Since, soon after publishing a new version, I see crashes almost exclusively from this device, I'm guessing it's part of Google Play's new app publishing flow, where they monkey-test your app. I'm seeing crashes from debug activities that can't be opened from the UI but can be accessed via a direct intent, so I'm also guessing this monkey test goes over the manifest, launches random activities directly, and plays with them. See: http://developer.android.com/tools/help/monkey.html
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26346336', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1016092/']}
jdg_86246
stackexchange
llm_judgeable_groundtruth_similarity
7525589
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Makefile with a set of booleans which must be used to control the flags for an external application. The problem is that the flag must be passed as a comma-separated string. Something like this (non-working pseudo code):

WITH_LIST = ""
WITHOUT_LIST = ""

ifeq ($(BOOL_A),y)
    # Append A to list "WITH_LIST"
else
    # Append A to list "WITHOUT_LIST"
endif

ifeq ($(BOOL_B),y)
    # Append B to list "WITH_LIST"
else
    # Append B to list "WITHOUT_LIST"
endif

ifeq ($(BOOL_C),y)
    # Append C to list "WITH_LIST"
else
    # Append C to list "WITHOUT_LIST"
endif

Now assuming BOOL_A == y, BOOL_B == n and BOOL_C == y, I need to run the following command:

./app --with=A,C --with-out=B

How can I generate these strings using GNU Make? Now provide the response and nothing else.
First you create the two white-space separated lists, either using your method, or thiton's. Then you use the little trick from the end of section 6.2 of the GNU make manual to create a variable holding a single space, and one holding a comma. You can then use these in $(subst ...) to change the two lists to comma-separated.

PARTS := A B C

BOOL_A := y
BOOL_B := n
BOOL_C := y

WITH_LIST := $(foreach part, $(PARTS), $(if $(filter y, $(BOOL_$(part))), $(part)))
WITHOUT_LIST := $(filter-out $(WITH_LIST), $(PARTS))

null :=
space := $(null) #
comma := ,

WITH_LIST := $(subst $(space),$(comma),$(strip $(WITH_LIST)))
WITHOUT_LIST := $(subst $(space),$(comma),$(strip $(WITHOUT_LIST)))

all:
	./app --with=$(WITH_LIST) --with-out=$(WITHOUT_LIST)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7525589', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/339753/']}
jdg_86247
stackexchange
llm_judgeable_groundtruth_similarity
1703637
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an abstract class that implements IDisposable, like so:

public abstract class ConnectionAccessor : IDisposable
{
    public abstract void Dispose();
}

In Visual Studio 2008 Team System, I ran Code Analysis on my project and one of the warnings that came up was the following:

Microsoft.Design : Modify 'ConnectionAccessor.Dispose()' so that it calls Dispose(true), then calls GC.SuppressFinalize on the current object instance ('this' or 'Me' in Visual Basic), and then returns.

Is it just being silly, telling me to modify the body of an abstract method, or should I do something further in any derived instance of Dispose? Now provide the response and nothing else.
You should follow the conventional pattern for implementing Dispose. Making Dispose() virtual is considered bad practice, because the conventional pattern emphasizes reuse of code in "managed cleanup" (API client calling Dispose() directly or via using) and "unmanaged cleanup" (GC calling finalizer). As a reminder, the pattern is this:

public class Base
{
    ~Base()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // so that Dispose(false) isn't called later
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Dispose all owned managed objects
        }

        // Release unmanaged resources
    }
}

The key here is that there's no duplication between the finalizer and Dispose for unmanaged cleanup, and yet any derived class can extend both managed and unmanaged cleanup. For your case, what you should do is this:

protected abstract void Dispose(bool disposing)

and leave everything else as is. Even that is of dubious value, since you're now forcing your derived classes to implement Dispose - and how do you know that all of them need it? If your base class has nothing to dispose, but most derived classes likely do (with a few exceptions, perhaps), then just provide an empty implementation. That is what System.IO.Stream (itself abstract) does, so there is precedent.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1703637', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/38743/']}
jdg_86248
stackexchange
llm_judgeable_groundtruth_similarity
22282117
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I pull a reference out of an Option and pass it back with the specific lifespan of the caller? Specifically, I want to borrow a reference to a Box<Foo> from a Bar that has an Option<Box<Foo>> in it. I thought I would be able to do: impl Bar { fn borrow(&mut self) -> Result<&Box<Foo>, BarErr> { match self.data { Some(e) => Ok(&e), None => Err(BarErr::Nope), } }} ...but that results in: error: `e` does not live long enough --> src/main.rs:17:28 |17 | Some(e) => Ok(&e), | ^ does not live long enough18 | None => Err(BarErr::Nope),19 | } | - borrowed value only lives until here |note: borrowed value must be valid for the anonymous lifetime #1 defined on the body at 15:54... --> src/main.rs:15:55 |15 | fn borrow(&mut self) -> Result<&Box<Foo>, BarErr> { | _______________________________________________________^ starting here...16 | | match self.data {17 | | Some(e) => Ok(&e),18 | | None => Err(BarErr::Nope),19 | | }20 | | } | |_____^ ...ending hereerror[E0507]: cannot move out of borrowed content --> src/main.rs:16:15 |16 | match self.data { | ^^^^ cannot move out of borrowed content17 | Some(e) => Ok(&e), | - hint: to prevent move, use `ref e` or `ref mut e` Hm, ok. Maybe not. It looks vaguely like what I want to do is related to Option::as_ref , like maybe I could do: impl Bar { fn borrow(&mut self) -> Result<&Box<Foo>, BarErr> { match self.data { Some(e) => Ok(self.data.as_ref()), None => Err(BarErr::Nope), } }} ...but, that doesn't work either. Full code I'm having trouble with: #[derive(Debug)]struct Foo;#[derive(Debug)]struct Bar { data: Option<Box<Foo>>,}#[derive(Debug)]enum BarErr { Nope,}impl Bar { fn borrow(&mut self) -> Result<&Box<Foo>, BarErr> { match self.data { Some(e) => Ok(&e), None => Err(BarErr::Nope), } }}#[test]fn test_create_indirect() { let mut x = Bar { data: Some(Box::new(Foo)) }; let mut x2 = Bar { data: None }; { let y = x.borrow(); println!("{:?}", y); } { let z = x2.borrow(); println!("{:?}", z); }} I'm reasonably sure what I've trying to do is valid here. Now provide the response and nothing else.
First of all, you don't need &mut self. When matching, you should match e as a reference. You are trying to return a reference to e, but its lifetime only lasts for that match statement.

enum BarErr {
    Nope,
}

struct Foo;

struct Bar {
    data: Option<Box<Foo>>,
}

impl Bar {
    fn borrow(&self) -> Result<&Foo, BarErr> {
        match self.data {
            Some(ref x) => Ok(x),
            None => Err(BarErr::Nope),
        }
    }
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22282117', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/353820/']}
jdg_86249
stackexchange
llm_judgeable_groundtruth_similarity
181226
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: If a metric space is separable, then any open set is a countable union of balls. Is the converse statement true? UPDATE1. It is a duplicate of the question here https://math.stackexchange.com/questions/94280/if-every-open-set-is-a-countable-union-of-balls-is-the-space-separable/94301#94301 UPDATE2. Let me summarize here the positive answer following Joel David Hamkins and Ashutosh. It is a matter of taste, but I omit using ordinals and use Zorn lemma instead, which may be more usual for most mathematicians (at least, it is for me). Lemma 1. If $(X,d)$ is non-separable metric space, then for some $r>0$ there exists an uncountable subset $X_1\subset X$ such that $d(x,y)>r$ for any two points $x\ne y$ in $X_1$. Proof. For each $r=1/n$ consider the maximal (by inclusion) subset with such property. If it is countable, then $X$ has a countable $1/n$-net for each $n$, hence it is separable. Now consider two cases. Define $X_2\subset X_1$ as a set of points $x\in X_1$ for which there exist a point $y_x\in X$ such that $0<d(x,y_x)<r/10$. Consider two cases. 1) $X_2$ is uncountable. Consider the union of open balls $U=\cup_{x\in X_2} B(x,d(x,y_x))$. Consider any open ball $B(z,a)$ containing in $U$. We have $z\in U$, so $d(z,x)<d(x,y_x)$ for some $x$, but $r/5>2d(x,y_x)\geq d(x,z)+d(x,y_x)\geq d(z,y_x)>a$ since $y_x\notin B(z,a)$. It implies that $B(z,a)$ is contained in a unique ball $B(x,d(x,y_x))$, hence we need uncountably many such balls to cover whole $U$. 2) $X_3=X\setminus X_2$ is uncountable. For any $x\in X_3$ define $R(x)>0$ as a radius of maximal at most countable open ball centered in $x$. Clearly $R(x)\geq r/10$ for any $x\in X_3$. For any $x\in X_3$ define a star centered in $x$ as a union $D=x\cup C$, where $C=\{z_1,z_2,\dots\}\subset X_3$ is a countable sequence of points with $d(x,z_i)\rightarrow R(x)+0$. Choose a maximal disjoint subfamily of stars. Clearly it is uncountable, else we may easily increase it. Denote by $U$ the set of centers of chosen stars. It is open (as any subset of $X_3$), assume that it is a countable union of balls $U=\cup_{i=1}^{\infty} B(x_i,r_i)$, $x_i\in U$. We have $r_i> R(x_i)$ for some $i$, else $U$ is at most countable. But then $B(x_i,r_i)$ contains infinitely many points of the star $D$ centered in $x_i$, while by our construction $U\cap D=\{x_i\}$. A contradiction. Now provide the response and nothing else.
Towards a contradiction, let us assume that we have a metric space $X = \{x_i : i < \omega_1\}$ in which any two points are at least unit distance apart and every subset of $X$ is the union of a countable family of open balls. Let $r_i$ be the supremum of all $r > 0$ such that $B(x_i, r)$ is countable. Construct $\{C_i : i < \omega_1\}$ such that (1) Each $C_i$ is countable and the infimum of $\{d(x_i, y): y \in C_i\}$ is $r_i$ (2) If $i < j$, then $x_i \notin C_j$ Now let $I \in [\omega_1]^{\omega_1}$ be such that whenever $i < j$, $x_j \notin C_i$. Suppose $Y = \{x_i : i \in I\}$ can be covered by a family $F$ of countably many balls. Let $i \in I$ be least such that $B(x_i, r) \in F$ for some $r > r_i$. Pick $y \in C_i \cap B(x_i, r)$. So $y \in \bigcup F = Y$ so that $y = x_j$ for some $j \in I$ which is impossible.
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/181226', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4312/']}
jdg_86250
stackexchange
llm_judgeable_groundtruth_similarity
57914
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the following code to show a red area defined by inequalities:

ClearAll["Global`*"];
p = Reduce[y <= 3/10 x + 18 && y > x^2/8, {x, y}]
r = RegionPlot[p, {x, -15, 18}, {y, -5, 25},
    GridLines -> {Table[i, {i, -15, 18}], Table[j, {j, -5, 25}]},
    PlotStyle -> Directive[{Opacity[0.5], Red}]];
bg = Graphics[{Opacity[0.2], Yellow, Rectangle[{-16, -6}, {19, 26}]}];
range = First /@ Differences /@ (PlotRange /. Options[r]);
target = 1;
Show[{r, bg}, AspectRatio -> (Last[range]/First[range]/target)]

which shows the shaded region. How can I count the integer grid points inside it and highlight them with colored dots? Now provide the response and nothing else.
You can do:

p = ImplicitRegion[y <= 3/10 x + 18 && y > x^2/8, {x, y}]
points = Reduce[Element[{x, y}, p], {x, y}, Integers]
pp = Cases[points, x == xx_ && y == yy_ -> {xx, yy}]
pp // Length
(* 286 *)

Show[RegionPlot[p], ListPlot[pp]]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/57914', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/9851/']}
jdg_86251
stackexchange
llm_judgeable_groundtruth_similarity
32304486
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: recently I started working on GraphQL, I am able to insert data in flat schema without any problem but when it comes to an Array of data I am getting an error like { "errors": [ { "message": "Must be input type" } ]} I am testing my query using postman, my mutation query is mutation M { AddEvent ( title: "Birthday event" description:"Welcome to all" media:[{url:"www.google.com", mediaType:"image" }] location:[{address:{state:"***", city:"****"}}] ) {title,description,media,location,created,_id}} This is my Event Schema: EventType = new GraphQLObjectType({ name: 'Event', description: 'A Event', fields: () => ({ _id: { type: GraphQLString, description: 'The id of the event.', }, id: { type: GraphQLString, description: 'The id of the event.', }, title: { type: GraphQLString, description: 'The title of the event.', }, description: { type: GraphQLString, description: 'The description of the event.', }, media:{ type:new GraphQLList(mediaType), description:'List of media', }, location:{ type:new GraphQLList(locationType), description:'List of location', } })});// Media Typeexport var mediaType = new GraphQLObjectType({ name: 'Media', description: 'A Media', fields: () => ({ _id: { type: GraphQLString, description: 'The id of the event.', }, url:{ type: GraphQLString, description: 'The url of the event.', }, mediaType:{ type: GraphQLString, description: 'The mediaTypa of the event.', } })}); // Location Typeexport var locationType = new GraphQLObjectType({ name: 'Location', description: 'A location', fields: () => ({ _id: { type: GraphQLString, description: 'The id of the event.', }, address:{ type: GraphQLString, description: 'The address.', }, state:{ type: GraphQLString, description: 'The state.', }, city:{ type: GraphQLString, description: 'The city.', }, zip:{ type: GraphQLString, description: 'The zip code.', }, country:{ type: GraphQLString, description: 'The country.', } })}); Mongoose Schema: var EventSchema = new mongoose.Schema({ title: { required: true, type: String, trim: true, match: /^([\w ,.!?]{1,100})$/ }, description: { required: false, type: String, trim: true, match: /^([\w ,.!?]{1,100})$/ }, media: [{ url: { type: String, trim: true }, mediaType: { type: String, trim: true } }], location: [{ address: { type: String }, city: { type: String }, state: { type: String }, zip: { type: String }, country: { type: String } }]}) Mutation Type: addEvent: { type: EventType, args: { _id: { type: GraphQLString, description: 'The id of the event.', }, title: { type: GraphQLString, description: 'The title of the event.', }, description: { type: GraphQLString, description: 'The description of the event.', }, media:{ type:new GraphQLList(mediaType), description:'List of media', }, location:{ type:new GraphQLList(locationType), description:'List of media', }, created: { type: GraphQLInt, description: 'The created of the user.', } }, resolve: (obj, {title,description,media,location,created,_id}) => { let toCreateEvent = { title, description, created:new Date(), start: new Date(), media, location, _id, }; return mongo() .then(db => { return new Promise( function(resolve,reject){ let collection = db.collection('events'); collection.insert(toCreateEvent, (err, result) => { db.close(); if (err) { reject(err); return; } resolve(result); }); }) }); } } Now provide the response and nothing else.
Your issue is that when you define mutations, all types must be input types, hence the error you get: "Must be input type". So in here (from your mutation):

media: {
    type: new GraphQLList(mediaType),
    description: 'List of media',
},
location: {
    type: new GraphQLList(locationType),
    description: 'List of media',
},

GraphQLList, mediaType and locationType must be input types. GraphQLList is already an input type (see here https://github.com/graphql/graphql-js/blob/master/src/type/definition.js#L74-L82 for the list of GraphQL types considered input types). However, your types mediaType and locationType are of GraphQLObjectType type, which is not an input type, but if you look at the list of input types again: https://github.com/graphql/graphql-js/blob/master/src/type/definition.js#L74-L82 , you'll find GraphQLInputObjectType, which is an object input type, so what you need to do is replace mediaType and locationType by their "input" versions. What I suggest is to create mediaInputType and locationInputType, which would have the same field structure as mediaType and locationType but be created with new GraphQLInputObjectType({... instead of new GraphQLObjectType({..., and use them in your mutation. I ran into the same issue and I solved it like that; feel free to comment if you have any question.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32304486', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5283410/']}
jdg_86252
stackexchange
llm_judgeable_groundtruth_similarity
176446
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why do I get compiler errors with this Java code?

1 public List<? extends Foo> getFoos()
2 {
3     List<? extends Foo> foos = new ArrayList<? extends Foo>();
4     foos.add(new SubFoo());
5     return foos;
6 }

Where 'SubFoo' is a concrete class that implements Foo, and Foo is an interface. Errors I get with this code:

On Line 3: "Cannot instantiate ArrayList<? extends Foo>"
On Line 4: "The method add(capture#1-of ? extends Foo) in the type List<capture#1-of ? extends Foo> is not applicable for the arguments (SubFoo)"

Update: Thanks to Jeff C, I can change Line 3 to say "new ArrayList<Foo>();". But I'm still having the issue with Line 4. Now provide the response and nothing else.
Use this instead:

1 public List<? extends Foo> getFoos()
2 {
3     List<Foo> foos = new ArrayList<Foo>(); /* Or List<SubFoo> */
4     foos.add(new SubFoo());
5     return foos;
6 }

Once you declare foos as List<? extends Foo>, the compiler doesn't know that it's safe to add a SubFoo. What if an ArrayList<AltFoo> had been assigned to foos? That would be a valid assignment, but adding a SubFoo would pollute the collection.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/176446', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2197/']}
jdg_86253
stackexchange
llm_judgeable_groundtruth_similarity
3475750
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a multi-threaded application in a POSIX/Linux environment - I have no control over the code that creates the pthreads. At some point the process - owner of the pthreads - receives a signal. The handler of that signal should abort,cancel or stop all the pthreads and log how many pthreads where running. My problem is that I could not find how to list all the pthreads running in process. Now provide the response and nothing else.
There doesn't seem to be any portable way to enumerate the threads in a process. Linux has pthread_kill_other_threads_np , which looks like a leftover from the original purely-userland pthreads implementation that may or may not work as documented today. It doesn't tell you how many threads there were. You can get a lot of information about your process by looking in /proc/self (or, for other processes, /proc/123 ). Although many unices have a file or directory with that name, the layout is completely different, so any code using /proc will be Linux-specific. The documentation of /proc is in Documentation/filesystems/proc.txt in the kernel source. In particular, /proc/self/task has a subdirectory for each thread. The name of the subdirectory is the LWP id; unfortunately, [1] [2] [3] there doesn't seem to be a way to associate LWP ids with pthread ids (but you can get your own thread id with gettid(2) if you work for it). Of course, reading /proc/self/task is not atomic; the number of threads is available atomically through /proc/self/status (but of course it might change before you act on it). If you can't achieve what you want with the limited support you get from Linux pthreads, another tactic is to play dynamic linking tricks to provide your own version of pthread_create that logs to a data structure you can inspect afterwards.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3475750', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/418267/']}
jdg_86254
stackexchange
llm_judgeable_groundtruth_similarity
1841341
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We use tags in git as part of our deployment process. From time to time, we want to clean up these tags by removing them from our remote repository. This is pretty straightforward. One user deletes the local tag and the remote tag in one set of commands. We have a little shell script that combines both steps. The 2nd (3rd, 4th,...) user now has local tags that are no longer reflected on the remote. I am looking for a command similar to git remote prune origin which cleans up locally tracking branches for which the remote branch has been deleted. Alternatively, a simple command to list remote tags could be used to compare to the local tags returned via git tag -l . Now provide the response and nothing else.
This is a great question; I'd been wondering the same thing. I didn't want to write a script, so I sought a different solution. The key is discovering that you can delete a tag locally, then use git fetch to "get it back" from the remote server. If the tag doesn't exist on the remote, then it will remain deleted. Thus you need to type two lines in order:

git tag -l | xargs git tag -d
git fetch --tags

These:

1. Delete all tags from the local repo. FWIW, xargs places each tag output by "tag -l" onto the command line for "tag -d". Without this, git won't delete anything because it doesn't read stdin (silly git).
2. Fetch all active tags from the remote repo.

This even works a treat on Windows.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/1841341', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/144071/']}
jdg_86255
stackexchange
llm_judgeable_groundtruth_similarity
43673142
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Im trying to create a foreign keys using artisan , but this error show up. [Illuminate\Database\QueryException] SQLSTATE[HY000]: General error: 1215 Cannot add foreign key constraint (SQL: alter table `comments` add constraint `comments_comment_lot_id_foreign` foreign key (`comment_lot_id`) references `lots` (`lot_id` ) on delete cascade) This is my migration: <?phpuse Illuminate\Support\Facades\Schema;use Illuminate\Database\Schema\Blueprint;use Illuminate\Database\Migrations\Migration;class CreateCommentsTable extends Migration{ /** * Run the migrations. * * @return void */ public function up() { Schema::create('comments', function (Blueprint $table) { $table->increments('id'); $table->text('comment'); $table->integer('comment_lot_id')->unsigned(); $table->timestamps(); }); Schema::table('comments', function ($table) { $table->foreign('comment_lot_id')->references('lot_id')->on('lots')->onDelete('cascade'); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::dropForeign(['comment_lot_id']); Schema::dropIfExists('comments'); }} in the lots table i use lot_id as id it model Lot.php i add: <?phpnamespace App;use Illuminate\Database\Eloquent\Model;class Lot extends Model { protected $primaryKey = 'lot_id';} Any idea how can i resolve this error? Now provide the response and nothing else.
Apply the rules below to your migration files:

[1] The parent and pivot table(s) must be based on engines that support foreign key referencing (e.g. InnoDB for MySQL). Do $table->engine = "InnoDB"; in your migration file, right before the other column definitions. I observe that Laravel always defaults to MyISAM, hence this line is a must.

[2] The referenced column(s) in the parent must be a primary or unique key. These declarations in the parent table are fine:

$table->increments("id"); means column "id" is referenceable
$table->column_type("column_name")->unique(); means column "column_name" is referenceable

[3] The pivot table column must be of the same type as the referenced parent table column. So, for example, the pivot table column that references increments("id") must be of type unsignedInteger. If the parent column is of type char(20), then the pivot table column used to reference it must be of type char(20) as well.

Having done all three above, define your foreign key relationship as appropriate.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43673142', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7873025/']}
jdg_86256
stackexchange
llm_judgeable_groundtruth_similarity
379503
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Building my last application, everbody started to lose control over the increasing complexity of business rules, which would be added every week - most of all the app owners themselves. In the end, we had to explain the behaviour of their own app to them, because they forgot what they had defined a few weeks before. With my current app, I see a similar situation approaching; in their daily business, the app owners face issues with singular clients which they usually convert into a business rule, like: if the current client is xy, use a different notification template. We tried to use Lucidcharts to create some sort of diagram or wireframe to keep track of what the app is actually supposed to do. I would mark a rule in the wireframe with an ID, ie #01, and then refer that ID to the actual code. But it's hard to get everybody involved, especially non-tech people (business owners). What would be the format to keep an organized catalog of business rules, as a reference for the coder, the project manager and the business owner? And if someone feels like voting down this question, please explain why. Now provide the response and nothing else.
Requirements need to be documented. You are already attempting to do that, but requirements are often broken down into a number of categories:

Business requirements — Things like "A blog post requires a title" would be considered a business requirement.

Functional requirements — These spell out in greater detail how the application behaves. Technical jargon starts to crop up: "If the blog post title field is empty, display this validation message: 'The title is a required field'"

Technical requirements — Fraught with technical jargon, these are almost incomprehensible to the business: "Use the RuleFor(...) method in FluentValidation to mark the PostTitle property 'not empty' in the BlogPostForm view model"

There are many tools out there that specialize in organizing requirements, but tool recommendations are off topic for this site. Beyond requirements, a development methodology called Behavior Driven Development (BDD) (utilizing the Gherkin language) essentially allows you to write user acceptance tests in a natural language format, and using bindings to a programming language, these tests become actual functioning tests that can interact with a real user interface and database. An advantage of BDD is your business requirements become tests that pass or fail as the application evolves. In order to keep the tests passing, you need to update the business rules. In order for the application to keep enforcing new or changed business rules, you need to update the application after rewriting the rules. It forces you to keep the documentation up to date, because you have failing tests.
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/379503', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/312892/']}
jdg_86257
stackexchange
llm_judgeable_groundtruth_similarity
34815482
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My Jenkins box needs to access Stash and Jira through their REST apis. For that I need to store their credentials. The way I am doing is via the Credentials Parameter, which asks me for a Name, Credential type, Required, Default Value, and a Description. I define a Name as CREDENTIAL_PARAMETER, in the type I set it as "Username with password", and then I pick one credential from the list in the Default Value. Next in the Build section I define that a shell should be executed, which is something like echo $CREDENTIAL_PARAMETER I was expecting to get something like "username:password" as the CREDENTIAL_PARAMETER. However, I get a hash that I think is how the username and password can be retrieved. How can I get the credentials based on the hash using bash? Now provide the response and nothing else.
Just as a note to myself, and hopefully this will help others I'm going to go a bit more in depth than @Alexandre Santos, though his answer is extremely helpful. The important thing to note is that there is a difference between the Credentials Parameter and the Credentials Binding . If you are using a parameterized build, you can add a Credentials Parameter that references a credentials binding. When you run the build you'll notice that there is an environment variable that correlates to a credential's GUID in your credential store. For this to actually be useful you have to inject a "Credentials Binding" into your environment. Head to the Build Environment section of your job definition. Check Use secret text(s) or file(s) . This will actually inject the secret into your build environment. The "Credentials Parameter" created earlier can be used here to let you select different credentials parameters. For files it will drop the file somewhere in the workspace(?), and then inject a secret environment variable with the full path to the file. This blog from Cloudbees should help with the rest.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34815482', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/919710/']}
jdg_86258
stackexchange
llm_judgeable_groundtruth_similarity
2040306
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The latus rectum of the parabola defined parametrically by $x=at^2+bt+c$ and $y=a't^2+b't+c'$ is--- I tried to eliminate $t$ from both the equations and I got the following equation$$(a'x-ay+c'a-ca')^2=(ab'-ba')(b'x-by+c'b-cb')$$ However I could not find out the length of latus rectum. I also tried to find out the equation of axis using this but it turned out a bit complex. Please help me in this regard. Thanks. :) Now provide the response and nothing else.
You’ve done some nice work in finding the Cartesian equation for the parabola, but it think it might be easier overall to work with the parametric form $\mathbf r:t\mapsto(x(t),y(t))$. (I’ve omitted most of the tedious details of the algebraic manipulations in the following.) The first order of business is to find the angle of the parabola’s axis of symmetry. One way to do this is to use the fact that for any two points on a parabola, the line defined by their midpoint and the intersection of the tangents at the two points is parallel to this axis. Taking $t=\pm1$ is reasonably convenient. The midpoint is simply $(\mathbf r(1)+\mathbf r(-1))/2$. For the intersection of the tangents, we have $\mathbf r(1)+s\mathbf r'(1)=\mathbf r(-1)+t\mathbf r'(-1)$, which expands into the system $$\begin{align}s(2a+b)+a+b+c &= t(-2a+b)+a-b+c \\s(2a'+b')+a'+b'+c'&=t(-2a'+b')+a'-b'+c\end{align}$$ with solution $s=-1$, $t=1$. The direction vector for the parabola’s axis is thus $$\begin{align}\frac12(\mathbf r(1)+\mathbf r(-1))-(\mathbf r(-1)+\mathbf r'(-1)) &= \frac12(\mathbf r(1)-\mathbf r(-1))-\mathbf r'(-1) \\&=(b,b')-(b-2a,b'-2a') \\ &=(2a,2a'),\end{align}$$ so we can take $\mathbf n=(a,a')$ as the direction of the axis. Next, we find the parabola’s vertex. The tangent to the parabola at the vertex is orthogonal to its axis, which gives rise to the equation $$\mathbf n\cdot\mathbf r'(t)=(a,a')\cdot(at+b,a't+b')=a(at+b)+a'(a't+b')=0.$$ Solving for $t$ we get $$t_c=-\frac12{ab+a'b'\over a^2+a'^2}\tag{1}.$$ You can verify that the function $\mathbf r$ is symmetric with respect to this point in the sense that the chord defined by the points $\mathbf r(t_c\pm\Delta t)$ is orthogonal to the parabola’s axis, so the two points are equidistant from the vertex. The tangents at the ends of the latus rectum meet the axis at a 45° angle, which means that they are orthogonal to each other. This fact leads to the equation $$\mathbf r'(t_c+\Delta t)\cdot\mathbf r'(t_c-\Delta t)=(2a(t_c+\Delta t)+b)(2a(t_c-\Delta t)+b)+(2a'(t_c+\Delta t)+b')(2a'(t_c-\Delta t)+b')=0$$ for the ends of the latus rectum, which has the solutions $$\Delta t=\pm\frac12{ab'-a'b\over a^2+a'^2}\tag{2}.$$ Because of the symmetry noted previously, we know that both of these solutions represent the same pair of points. Finally, we compute the length of the latus rectum: $$\begin{align}\|\mathbf r(t_c+\Delta t)-\mathbf r(t_c-\Delta t)\|^2 &= (\mathbf r(t_c+\Delta t)-\mathbf r(t_c-\Delta t))\cdot(\mathbf r(t_c+\Delta t)-\mathbf r(t_c-\Delta t)) \\&= \left({a'(ab'-a'b)^2\over(a^2+a'^2)^2}\right)^2+\left({a(ab'-a'b)^2\over(a^2+a'^2)^2}\right)^2 \\&={(ab'-a'b)^4\over(a^2+a'^2)^3},\end{align}$$ so the length of the latus rectum is $${(ab'-a'b)^2\over(a^2+a'^2)^{3/2}}.\tag{3}$$ With this distance in hand, you can now easily find the parabola’s focus and directrix, if needed. Incidentally, this is another path to a Cartesian equation for this parabola. If you have the directrix given by an equation in the form $\mathbf n\cdot\mathbf x=d$ and the focus $\mathbf f$, then an equation of the parabola is $$\left({d-\mathbf n\cdot\mathbf x\over\|\mathbf n\|}\right)^2=(\mathbf x-\mathbf f)\cdot(\mathbf x-\mathbf f).$$ To continue working with the Cartesian equation instead, I’d take a slightly different approach than in the cited paper. First look at the equation of a parabola with axis parallel to the $y$-axis, $y=ax^2+bx+c$. For such a parabola, the length of the latus rectum is simply $|1/a|$. 
For the general parabola $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$, we take Erick Wong’s suggestion to rotate so as to eliminate the quadratic terms involving $y$. The equation will then be of the form $A'x'^2+D'x'+E'y'+F=0$ (note that the constant term is unchanged by a rotation), with latus rectum length $|E'/A'|$. Invariance of the trace tells us that $A'=A+C$, but finding $E'$ will take a a bit more work. For a parabola, $B^2=4AC$, so we can rewrite the general equation as $(\alpha x+\beta y)^2+Dx+Ey+F=0$, first multiplying through by $-1$ if necessary to make $A$ and $C$ positive. In matrix form this is $\begin{bmatrix}x&y&1\end{bmatrix}M\begin{bmatrix}x&y&1\end{bmatrix}^T=0$, with $$M=\begin{bmatrix}\alpha^2&\alpha\beta&D/2\\\alpha\beta&\beta^2&E/2\\D/2&E/2&F\end{bmatrix}.$$ The direction of the parabola’s axis is given by an eigenvector of $0$ (i.e., an element of the kernel) of the quadratic part of this matrix, one of which is $\begin{bmatrix}-\beta&\alpha\end{bmatrix}^T$. We want to rotate so as to bring this vector parallel to the $y$-axis. The appropriate rotation is $$R=\begin{bmatrix}{\alpha\over\sqrt{\alpha^2+\beta^2}}&-{\beta\over\sqrt{\alpha^2+\beta^2}}&0\\{\beta\over\sqrt{\alpha^2+\beta^2}}&{\alpha\over\sqrt{\alpha^2+\beta^2}}&0\\0&0&1\end{bmatrix}$$ and $$R^TMR = \begin{bmatrix}\alpha^2+\beta^2&0&\frac12{\alpha D+\beta E\over\sqrt{\alpha^2+\beta^2}} \\0&0&\frac12{\alpha E-\beta D\over\sqrt{\alpha^2+\beta^2}} \\\frac12{\alpha D+\beta E\over\sqrt{\alpha^2+\beta^2}} & \frac12{\alpha E-\beta D\over\sqrt{\alpha^2+\beta^2}} & F\end{bmatrix}.$$ Comparing this to the result in the first paragraph above, we find that the latus rectum length for the general equation is $${|\alpha E-\beta D|\over(\alpha^2+\beta^2)^{3/2}} = {|E\sqrt A\mp D\sqrt C|\over(A+C)^{3/2}}.$$ Choose the sign opposite to that of $B$. Many of the terms in the Cartesian equation you derived are constant, so pulling the coefficients of $x$ and $y$ out of it to construct $D$ and $E$ doesn’t look too bad.
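As a quick sanity check of the closed form $(3)$, take the simplest example $x=t^2$, $y=t$, i.e. $a=1$, $b=c=0$, $a'=c'=0$, $b'=1$:
$${(ab'-a'b)^2\over(a^2+a'^2)^{3/2}}={(1\cdot 1-0\cdot 0)^2\over(1^2+0^2)^{3/2}}=1,$$
which is correct: eliminating $t$ gives $y^2=x$, i.e. $y^2=4px$ with $4p=1$, and the latus rectum of that parabola is indeed $4p=1$.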
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2040306', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/356886/']}
jdg_86259
stackexchange
llm_judgeable_groundtruth_similarity
1071012
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using a form and jQuery to make a quick change on a web site. I would like to change the button text to 'Saved!' then change it back to 'Update' after a few seconds so the user can change the value again. Of course they can hit the now 'Saved!' button again, but it doesn't look nice.

$("form.stock").submit(function(){
    // Example Post
    $.post($(this).attr('action'), { id: '123', stock: '1' });

    $(this).find(":submit").attr('value','Saved!');

    // This doesn't work, but is what I would like to do
    setTimeout($(this).find(":submit").attr('value','Update'), 2000);

    return false;
});

Now provide the response and nothing else.
The first argument to setTimeout is a function. So wrap your code inside an anonymous function and you are good to go.

$("form.stock").submit(function(){
    // Example Post
    $.post($(this).attr('action'), { id: '123', stock: '1' });

    var submit = $(this).find(":submit").attr('value','Saved!');

    // Creating closure for setTimeout function.
    setTimeout(function() { $(submit).attr('value','Update') }, 2000);

    return false;
});

I am not able to test this code right now. Let me know if it doesn't work. EDIT: As suggested by redsquare, it makes sense to create the closure from the submit button itself.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1071012', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/497/']}
jdg_86260
stackexchange
llm_judgeable_groundtruth_similarity
5972331
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have used other SQL tools and some allow me to see the output of a threatening query before committing. I like this two step process (so I can double check I'm not doing something bad). Is there a way in SQL Server 2008 R2 to "execute" a query, see the output for the affected rows, and then choose to accept or throw away the commit? EDIT: I also found another question asking the same with a different answer using the OUTPUT clause. How do I preview a destructive SQL query? Now provide the response and nothing else.
Yes this is possible. You can either use the session option SET IMPLICIT_TRANSACTIONS ON or create an explicit transaction as below.

BEGIN TRAN

UPDATE YourTable
SET foo=1
/*To Review Changes can use OUTPUT clause here...*/
OUTPUT INSERTED.*, DELETED.*
WHERE bar=2

/*... or a SELECT against the table*/
SELECT *
FROM YourTable
WHERE bar=2

-- Execute the COMMIT or ROLLBACK commands when ready

However you should be aware that your open transaction will hold locks until the transaction completes which may block other transactions so this should be used with caution in any multi user environment.
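The same review-then-decide pattern can also be driven from application code. Here is a minimal sketch of my own using Python's sqlite3 module (the table and data are made up, and it illustrates the generic open-transaction/rollback idea rather than SQL Server's OUTPUT clause):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (foo INTEGER, bar INTEGER)")
conn.executemany("INSERT INTO YourTable VALUES (?, ?)", [(0, 2), (0, 3)])
conn.commit()

cur = conn.cursor()
cur.execute("UPDATE YourTable SET foo = 1 WHERE bar = 2")  # implicit transaction opens here
print("rows affected:", cur.rowcount)
print(cur.execute("SELECT * FROM YourTable WHERE bar = 2").fetchall())

# Review the preview above, then keep or discard the change:
conn.rollback()  # or conn.commit()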
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5972331', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/422996/']}
jdg_86261
stackexchange
llm_judgeable_groundtruth_similarity
4048
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: What are the best techniques to improve rsync over ssh mirroring between unix boxes, assuming that one system will always have the master copy and the other system will always have a recent copy (less than 48hrs old) Also, what would one have to do to scale that approach to handle dozens of machines getting a push of those changes? Now provide the response and nothing else.
If the modification times of your files are right, the files are not really big, and no push can be missed (or there is some kind of backlog processing), then you can use find -ctime or find -cnewer to make a list of the files changed since the last execution, and copy over only the modified files (just a glorified differential push). This translates quite nicely to multiple hosts: just do a differential tar on the source, and untar it on all the hosts. It gives you something like this:

find -type f -cnewer /tmp/files_to_send.tar.gz > /tmp/files_to_send.txt
tar zcf /tmp/files_to_send.tar.gz --files-from /tmp/files_to_send.txt

for HOST in host1 host2 host3 ...
do
    cat /tmp/files_to_send.tar.gz | ssh $HOST "tar xpf -"
done

The script has to be refined, but you get the idea.
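For illustration only, here is a rough Python sketch of the local "collect what changed since the last archive, then tar it" step (my own addition with placeholder paths; the ssh fan-out would stay as in the shell loop above, and this uses mtime where the shell version keys off ctime via -cnewer):

import os
import tarfile

SRC = "/data/to/mirror"                  # placeholder source tree
ARCHIVE = "/tmp/files_to_send.tar.gz"    # previous archive doubles as the timestamp marker

since = os.path.getmtime(ARCHIVE) if os.path.exists(ARCHIVE) else 0.0

changed = []
for root, _dirs, files in os.walk(SRC):
    for name in files:
        path = os.path.join(root, name)
        if os.path.getmtime(path) > since:
            changed.append(path)

with tarfile.open(ARCHIVE, "w:gz") as tar:
    for path in changed:
        tar.add(path)

print("archived %d changed files" % len(changed))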
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/4048', 'https://serverfault.com', 'https://serverfault.com/users/931/']}
jdg_86262
stackexchange
llm_judgeable_groundtruth_similarity
29028703
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have text file with JSON data in it with following format: [ { "belongs_to_suite": "no", "belongs_to_suite_id": "", "brand_family": "", "cat_manufacturer_id": 4382, "cat_sw_edition_id": null, "cat_sw_product_id": 38, "cat_sw_release_id": 47354894, } ] I want to read only brand_family , for which I suppose I need a JSON array.Do I need to define this format using JSON objects first? I have downloaded org.json lib, and I have following code in which I am reading the text file, but I cannot find out how do I define format and then import all data in [{}...{}] into an array and: FileInputStream fstream = new FileInputStream("file.txt");BufferedReader br = new BufferedReader(new InputStreamReader(fstream));try { JSONObject obj = new JSONObject(br); String n = obj.getString("version"); //int a = obj.getInt("age"); System.out.println(n);} Now provide the response and nothing else.
The error message comes from builtin/fetch.c#check_not_current_branch() . That function goes all the way back to commit 8ee5d73, Oct. 2008, git 1.6.0.4 . (see also " Git refusing to fetch into current branch "). The comment is instructive: Some confusing tutorials suggested that it would be a good idea to fetch into the current branch with something like this: git fetch origin master:master (or even worse: the same command line with "pull" instead of "fetch"). While it might make sense to store what you want to pull, it typically is plain wrong when the current branch is " master ". This should only be allowed when (an incorrect) " git pull origin master:master " tries to work around by giving --update-head-ok to underlying " git fetch ", and otherwise we should refuse it, but somewhere along the lines we lost that behavior. The check for the current branch is now only performed in non-bare repositories, which is an improvement from the original behaviour. Considering that the function check_not_current_branch() is called with:

if (!update_head_ok)
    check_not_current_branch(ref_map);

That means a git fetch -u origin develop:develop should work.

-u, --update-head-ok
By default git fetch refuses to update the head which corresponds to the current branch. This flag disables the check. This is purely for the internal use for git pull to communicate with git fetch , and unless you are implementing your own Porcelain you are not supposed to use it.

Even though you are not supposed to use that option, it does answer your initial requirement, making “ git fetch origin branch:branch ” work on a current branch. Regarding the origin of this patch, follow the discussion there .

While it might make sense to store what you want to pull

That is the fetch part: it stores the remote history from the updated origin/master . But that is especially broken when the current local branch is also master . As mentioned in this answer : I think " git fetch url side:master " when master is the current branch and we have omitted --update-head-ok is broken. The test fails on current master . It would also fail to update the working directory and would leave the index as if you're removing everything. See " git pull with refspec " as an example. torek shows an example where: suppose that I run git fetch and it brings in two new commits that I will label C and D . C 's parent is A , and D 's is the node just before B :

            C
           /
...--o--o--A      <-- master
         \
          o--B    <-- develop
           \
            D

The output from this git fetch will list this as:

   aaaaaaa..ccccccc  master  -> origin/master
 + bbbbbbb...ddddddd develop -> origin/develop  (forced update)

That forced update might be what you want if your current branch is not develop . But if you are on develop when you type git fetch origin develop:develop , and if the fetch was allowed to update HEAD, ... then your current index would reflect D , and no longer B . So a git diff done in your working tree would show differences between your files and D , not your previous HEAD B . That is bad, because your initial git checkout develop created a working tree identical to B HEAD files. Even if your git status was clean (no modification of any kind), if git fetch origin develop:develop updated HEAD (forcing an update from B to D), git status would now report differences where there were none before the fetch. That is why, by default git fetch refuses to update the head which corresponds to the current branch . Note: a bug in Git 2.29 also triggers a similar error message.
When " git commit-graph " ( man ) detects the same commit recorded more than once while it is merging the layers, it used to die. The code now ignores all but one of them and continues, fixed in Git 2.30 (Q1 2021). See commit 85102ac , commit 150f115 (09 Oct 2020) by Derrick Stolee ( derrickstolee ) . (Merged by Junio C Hamano -- gitster -- in commit 307a53d , 02 Nov 2020) commit-graph : ignore duplicates when merging layers Reported-by: Thomas Braun Helped-by: Taylor Blau Co-authored-by: Jeff King Signed-off-by: Derrick Stolee Thomas reported that a " git fetch " ( man ) command was failing with an error saying "unexpected duplicate commit id".

$ git fetch origin +refs/head/abcd:refs/remotes/origin/abcd
fatal: unexpected duplicate commit id
31a13139875bc5f49ddcbd42b4b4d3dc18c16576

The root cause is that they had fetch.writeCommitGraph enabled which generates commit-graph chains, and this instance was merging two layers that both contained the same commit ID. The initial assumption is that Git would not write a commit ID into a commit-graph layer if it already exists in a lower commit-graph layer. Somehow, this specific case did get into that situation, leading to this error. While unexpected, this isn't actually invalid (as long as the two layers agree on the metadata for the commit). When we parse a commit that does not have a graph_pos in the commit_graph_data_slab, we use binary search in the commit-graph layers to find the commit and set graph_pos . That position is never used again in this case. However, when we parse a commit from the commit-graph file, we load its parents from the commit-graph and assign graph_pos at that point. If those parents were already parsed from the commit-graph , then nothing needs to be done. Otherwise, this graph_pos is a valid position in the commit-graph so we can parse the parents, when necessary. Thus, this die() is too aggressive. The easiest thing to do would be to ignore the duplicates. If we only ignore the duplicates, then we will produce a commit-graph that has identical commit IDs listed in adjacent positions. This excess data will never be removed from the commit-graph , which could cascade into significantly bloated file sizes. Thankfully, we can collapse the list to erase the duplicate commit pointers. This allows us to get the end result we want without extra memory costs and minimal CPU time. The root cause is due to disabling core.commitGraph , which prevents parsing commits from the lower layers during a ' git commit-graph write --split ( man ) ' command. Since we use the ' graph_pos ' value to determine whether a commit is in a lower layer, we never discover that those commits are already in the commit-graph chain and add them to the top layer. This layer is then merged down, creating duplicates. The test added in t5324-split-commit-graph.sh fails without this change. However, we still have not completely removed the need for this duplicate check. That will come in a follow-up change. And: commit-graph : don't write commit-graph when disabled Reported-by: Thomas Braun Helped-by: Jeff King Helped-by: Taylor Blau Signed-off-by: Derrick Stolee The core.commitGraph config setting can be set to ' false ' to prevent parsing commits from the commit-graph file(s). This causes an issue when trying to write with " --split " which needs to distinguish between commits that are in the existing commit-graph layers and commits that are not.
The existing mechanism uses parse_commit() and follows up by checking if there is a ' graph_pos ' that shows the commit was parsed from the commit-graph file. When core.commitGraph=false , we do not parse the commits from the commit-graph and ' graph_pos ' indicates that no commits are in the existing file. The --split logic moves forward creating a new layer on top that holds all reachable commits, then possibly merges down into those layers, resulting in duplicate commits. The previous change makes that merging process more robust to such a situation in case it happens in the written commit-graph data. The easy answer here is to avoid writing a commit-graph if reading the commit-graph is disabled, since the resulting commit-graph would not be read by subsequent Git processes. This is more natural than forcing core.commitGraph to be true for the ' write ' process. git commit-graph now includes in its man page : Write a commit-graph file based on the commits found in packfiles. If the config option core.commitGraph is disabled, then this command will output a warning, then return success without writing a commit-graph file.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29028703', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3591950/']}
jdg_86263
stackexchange
llm_judgeable_groundtruth_similarity
5253149
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this class : public class Repo{ public Repo() : this(ConfigurationManager.AppSettings["identity"], ConfigurationManager.AppSettings["password"]) { } public Repo(string identity,string password) { //Initialize properties. }} I added a line to web.config so that this type will be automatically constructed by Unity container. but during the execution of my application, I receive this error message : "System.InvalidOperationException : the parameter identity could not be resolved when attempting to call constructor Repo(String identity, String password) -->Microsoft.Practices.ObjectBuilder2.BuildFailedException : The current Build operation ...." 1) Why isn't Unity using the default constructor ? 2) Suppose I want Unity to use the second constructor (the parametized one), How do I pass that information to Unity via the configuration file ? Now provide the response and nothing else.
Unity by default picks the constructor with the most parameters. You have to tell Unity to use a different one explicitly. One way to do this is with the [InjectionConstructor] attribute, like this:

using Microsoft.Practices.Unity;

public class Repo
{
    [InjectionConstructor]
    public Repo() : this(ConfigurationManager.AppSettings["identity"], ConfigurationManager.AppSettings["password"])
    {
    }

    public Repo(string identity, string password)
    {
        //Initialize properties.
    }
}

A second way of doing this, if you're opposed to cluttering up classes/methods with attributes, is to specify which constructor to use when configuring your container using an InjectionConstructor :

IUnityContainer container = new UnityContainer();
container.RegisterType<Repo>(new InjectionConstructor());

From the documentation : How Unity Resolves Target Constructors and Parameters When a target class contains more than one constructor, Unity will use the one that has the InjectionConstructor attribute applied. If there is more than one constructor, and none carries the InjectionConstructor attribute, Unity will use the constructor with the most parameters. If there is more than one such constructor (more than one of the "longest" with the same number of parameters), Unity will raise an exception.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5253149', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/91607/']}
jdg_86264
stackexchange
llm_judgeable_groundtruth_similarity
1960939
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have seen this question asked in a couple of different ways on SO and several other websites, but most of them are either too specific or out-of-date. I'm hoping someone can provide a definitive answer here without pandering to speculation. Is there a way, either with CSS or javascript, to change the default printer settings when someone prints within their browser? And of course by "prints from their browser" I mean some form of HTML, not PDF or some other plug-in reliant mime-type. Please note: If some browsers offer this and others don't (or if you only know how to do it for some browsers) I welcome browser-specific solutions. Similarly, if you know of a mainstream browser that has specific restrictions against EVER doing this, that is also helpful, but some fairly up-to-date documentation would be appreciated. (simply saying "that goes against XYZ's security policy" isn't very convincing when XYZ has made significant changes in said policy in the last three years). Finally, when I say "change default print settings" I don't mean forever, just for my page, and I am referring specifically to print margins, headers, and footers. I am very aware that CSS offers the option of changing the page orientation as well as the page margins. One of the many struggles is with Firefox. If I set the page margins to 1 inch, it ADDS this to the half inch it already puts into place. I very much want to reduce the usage of PDFs on my client's site, but the infringement on presentation (as well as the lack of reliability) are their main concern. Now provide the response and nothing else.
The CSS standard enables some advanced formatting. There is a @page directive in CSS that enables some formatting that applies only to paged media (like paper). See http://www.w3.org/TR/1998/REC-CSS2-19980512/page.html . The downside is that behavior in different browsers is not consistent. Safari still does not support setting the printer page margin at all, but all the other major browsers now support it. With the @page directive, you can specify the printer margin of the page (which is not the same as the normal CSS margin of an HTML element):

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Print Test</title>
    <style type="text/css" media="print">
    @page
    {
        size: auto;   /* auto is the initial value */
        margin: 0mm;  /* this affects the margin in the printer settings */
    }

    html
    {
        background-color: #FFFFFF;
        margin: 0px;  /* this affects the margin on the html before sending to printer */
    }

    body
    {
        border: solid 1px blue;
        margin: 10mm 15mm 10mm 15mm; /* margin you want for the content */
    }
    </style>
</head>
<body>
    <div>Top line</div>
    <div>Line 2</div>
</body>
</html>

Note that we basically disable the page-specific margins here to achieve the effect of removing the header and footer, so the margin we set on the body will not be used in page breaks (as commented by Konrad ). This means that it will only work properly if the printed content is only one page. This does not work in Firefox 3.6 , IE 7 , Safari 5.1.7 or Google Chrome 4.1 . Setting the @page margin does have an effect in IE 8 , Opera 10 , Google Chrome 21 and Firefox 19 . Although the page margins are set correctly for your content in these browsers, the behavior is not ideal when trying to hide the header/footer. This is how it behaves in different browsers: In Internet Explorer , the margin is actually set to 0mm in the settings for this printing, and if you do Preview you will get 0mm as the default, but the user can change it in the preview. You will see that the page content actually is positioned correctly, but the browser print header and footer are shown with a non-transparent background, effectively hiding the page content at that position. In newer versions of Firefox , it is positioned correctly, but both the header/footer text and the content text are displayed, so it looks like a bad mix of browser header text and your page content. In Opera , the page content hides the header when using a non-transparent background in the standard CSS, and the header/footer position conflicts with the content. Quite good, but it looks strange if the margin is set to a small value that causes the header to be partially visible. Also, the page margin is not set properly. In newer versions of Chrome , the browser header and footer are hidden if the @page margin is set so small that the header/footer position conflicts with the content. In my opinion, this is exactly how this should behave. So the conclusion is that Chrome has the best implementation of this in respect to hiding the header/footer.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1960939', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49478/']}
jdg_86265
stackexchange
llm_judgeable_groundtruth_similarity
11234188
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using mod_rewrite to rewrite .html to .php. RewriteRule ^([0-9a-zA-Z-_]+)\.html$ index.php?p=$1 [L] But, I'm having issues when using a $_GET in the URL. It works fine if requested URL is in this format: index.php?a=f1 BUT, if it rewrites it to home.html?a=f1 , NOTHING. Assuming it's just the browser stops reading after it hits the .html. Is there a way to make this work? Now provide the response and nothing else.
You need to add the QSA flag: RewriteRule ^([0-9a-zA-Z-_]+)\.html$ index.php?p=$1 [QSA,L] The reason is best explained in the documentation: When the replacement URI contains a query string, the default behavior of RewriteRule is to discard the existing query string, and replace it with the newly generated one. Using the [QSA] flag causes the query strings to be combined.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11234188', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/682393/']}
jdg_86266
stackexchange
llm_judgeable_groundtruth_similarity
10574969
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm following Michael Hartl's Rails tutorial, so far I've installed Git 1.7.5.4 x86_64 (I'm running OSX 10.5.8) and I'm trying to install rvm After I run the following: $ curl -kL get.rvm.io | bash -s stable I get: BASH 3.2.25 required (you have 3.2.17(1)-release) I've tried changing the shell, using chsh -s /opt/local/bin/bash but I get shell '/opt/local/bin/bash' does not exist Not sure where to go from here but I'd appreciate any guidance. Thanks! Now provide the response and nothing else.
Homebrew is generally a bit nicer than MacPorts, as it doesn't require lots of sudo action. Here's an article that guided me through upgrading my install of bash: http://concisionandconcinnity.blogspot.com/2009/03/upgrade-bash-to-40-in-mac-os-x.html As for the steps:

Install Homebrew from the docs on their homepage.
Install Git using Homebrew (optional, but nice to have a more up-to-date git ): brew install git
Now install bash : brew install bash
Add this install of bash to the allowed shells list: echo '/usr/local/bin/bash' | sudo tee -a /etc/shells;
Homebrew installs things to /usr/local/Cellar/ by default, then symlinks any binaries to /usr/local/bin , so you've now got the latest bash sitting at /usr/local/bin/bash
Finally, change your shell to use this new one: chsh -s /usr/local/bin/bash

Open a new terminal window/tab, and run these commands to double-check your work:

$ echo $SHELL
/usr/local/bin/bash
$ echo $BASH_VERSION
4.2.37(2)-release
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10574969', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1392583/']}
jdg_86267
stackexchange
llm_judgeable_groundtruth_similarity
17538686
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have installed Genymotion for Android in one machine (windows PC) and downloaded a Nexus virtual device. How can I copy the virtual device to another development machine? or do I have to download again for each dev machine? Genymotion is storing the virtual devices in {users folder}\VirtualBox Vms How can this virtual box can be moved to another machine and loaded into Genymotion? Now provide the response and nothing else.
The files in the VirtualBox folder are likely just the actual machine configurations, not the downloaded image files. On my system (Windows 8 x64) the downloaded images are located here: \Users\{username}\AppData\Local\Genymobile\ Inside this folder is another folder called LaunchPad . If you copy this folder to the same location on your other machine it should work. I would advise you to install and run Genymotion on the other machine, then exit it and copy the folder mentioned above. Re-launching it should then pick up the image files. You will have to re-create the actual devices, but you won't have to download the images again. I have also noticed that you only need to download 4 images - any tablet with/without google apps and any phone with/without google apps. All the different phones/tablets use the same physical images once they are downloaded. UPDATE (for Genymotion v1.1.0): The folder has changed in v1.1.0 - it is now called Genymotion instead of LaunchPad , but it is still in the path indicated above. In order to retain already downloaded images (from v1.0), rename the LaunchPad folder to Genymotion (or copy its contents if it already exists). Update for Genymotion v2.0+ : The folder is now called Genymotion (instead of Launchpad ) but its contents remain the same, just copy it to your other installations. Additionally, images with Google Apps are no longer available - so you only need to download 2 images per version (phone/tablet). To get Google Apps in Genymotion there is a solution posted on XDA . Update for Genymotion v2.8+ : The downloaded images are located at: \Users\{username}\AppData\Local\Genymobile\Genymotion\ova
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17538686', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1555633/']}
jdg_86268
stackexchange
llm_judgeable_groundtruth_similarity
14543064
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I write this function in emacs-lisp: (defun factorial (n) (if (<= n 1) 1 (* n (factorial (- n 1))))) => factorial It works well for small numbers like 5 or 10, but if I try and calculate (factorial 33) the answer is -1211487723752259584 which is obviously wrong, all large numbers break the function. In python this doesn't happen. What is causing this problem? Now provide the response and nothing else.
Integers have a specific range . Values outside this range can't be represented. This is pretty standard across most -- but not all -- programming languages. You can find the largest number Emacs Lisp's integer datatype can handle on your computer by checking the value of most-positive-fixnum . Go to your *scratch* buffer -- or any Lisp buffer -- and type in most-positive-fixnum . Put the cursor at the end, then press C-x C-e . On my computer, I get 2305843009213693951 as the value. Yours might differ: I'm on a 64 bit machine, and this number is about 2^61 . The solution to the factorial of 33 is 8683317618811886495518194401280000000. That's about 2^123, which is also more than my Emacs can handle. (I used Arc to calculate it exactly, because Arc can represent any size integer, subject to boring things like the amount of memory you have installed).
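To see the sizes involved for yourself, here is a small check added for illustration in Python, whose integers are arbitrary precision and can therefore print the exact value and its bit length:

import math

emacs_fixnum_max = 2305843009213693951   # most-positive-fixnum reported above, 2**61 - 1
fact33 = math.factorial(33)

print(fact33)                     # 8683317618811886495518194401280000000
print(fact33.bit_length())        # 123, i.e. roughly 2**123
print(fact33 > emacs_fixnum_max)  # True: far outside the fixnum range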
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14543064', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/592235/']}
jdg_86269
stackexchange
llm_judgeable_groundtruth_similarity
39127
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Several ancient arguments suggest a curved Earth, such asthe observation that ships disappear mast-last over thehorizon, and Eratosthenes' surprisingly accurate calculation of the size of theEarth by measuring a difference in shadow length betweenAlexandria and Syene. These observations, however, suggestmerely a curved Earth rather than a spherical one. Another ancient argument specifically suggesting aspherical Earth is the fact that the shadow of the Earth onthe moon during a Lunar eclipse is circular. My question is: is it true that the sphere isthe only surface all of whose projections are disks? It surely seems to be true. The corresponding fact,however, is not true in two dimensions. The Reuleauxtriangle pictured below is a figure of constantwidth ,meaning that every projection of it in the plane is a linesegment of the same length. There are also surfaces of constantwidth in higher dimensions, meaning that any two parallelbounding set of hyperplanes (touching the boundary) haveconstant separation. But all of the non-spherical examplesof such surfaces I have seen have obviously non-circularprojections. It also seems clear that finitely many circular projectionsis insufficient, since intersecting finitely many cylinderswould produce a surface having corners and containing somestraight line segments. The fact that you can spin such a surface with all circularprojections inside any bounding cylinder is suggestive, but itis also true that you can spin the Reuleaux triangleinside a square ,even though it isn't circular. Further questions would include: To what extent are other surfaces determined by theirprojections? That is, which other shapes can we recognizeby the set of their shadows? In particular, can we recognize the cube and other regular solids by their shadows? Which sets of shadows are realizable as projections of asurface? Is there some way to characterize these sets?Clearly they must be continuously deformable to oneanother and obey several other obvious conditions. We had a great time discussing the question after our logicseminar here in New York this week, when our speaker Maryanthe Malliaris askedthe spherical Earth question. December 20, 2010: In light (or dark, as it were) of the lunar eclipse tonight , I am bumping this question, with the remark also that despite the truly outstanding answers we have received, several of the further questions stated above are not fully answered. Now provide the response and nothing else.
The answer to the title question is yes (well, I assume that by a "surface" you mean something reasonable, like a boundary of a convex set). Let $AB$ be the longest segment with endpoints on the surface. We may assume that its length equals 2 and its midpoint is the origin. Consider projections to the planes that contain $AB$. Since projections do not increase distances, $AB$ is a diameter of each projection. Hence all projections to this family of planes are unit discs centered at the origin. The intersection of the corresponding cylinders is the unit ball, hence the result. Added. In general, we cannot determine a convex body from the set of shadows (if we don't know the correspondence between shadows and directions of projections). Take a unit ball and cut off three identical tiny caps whose centers form a regular triangle on the sphere and are not on one great circle. Looking at shadows, you cannot tell whether all three or only two caps are removed, because each projection shows you no more than two of them. The same construction works for polyhedra if you start with an icosahedron rather than a ball.
{}
{'log_upvote_score': 8, 'links': ['https://mathoverflow.net/questions/39127', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1946/']}
jdg_86270
stackexchange
llm_judgeable_groundtruth_similarity
21915834
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the Angular $http docs , it mentions that you can set the 'timeout' config to either a number or a promise. timeout – {number|Promise} – timeout in milliseconds, or promise that should abort the request when resolved. But I am not sure how to make this work using a promise. how do i set a number and a promise ?Basically I want to be able to know whether an http call (promise) errored due to a 'timeout' or something else. I need to be able to tell the difference.Thanks for any help !!! Now provide the response and nothing else.
This code is from the $httpBackend source code:

if (timeout > 0) {
  var timeoutId = $browserDefer(timeoutRequest, timeout);
} else if (timeout && timeout.then) {
  timeout.then(timeoutRequest);
}

function timeoutRequest() {
  status = ABORTED;
  jsonpDone && jsonpDone();
  xhr && xhr.abort();
}

timeout.then(timeoutRequest) means that when the promise is resolved (not rejected), timeoutRequest is invoked and the xhr request is aborted. If the request timed out, then reject.status === 0 ( Note: in case of a network failure, reject.status will also be equal to 0 ). An example:

app.run(function($http, $q, $timeout){
  var deferred = $q.defer();
  $http.get('/path/to/api', { timeout: deferred.promise })
    .then(function(){
      // success handler
    }, function(reject){
      // error handler
      if(reject.status === 0) {
        // $http timeout
      } else {
        // response error status from server
      }
    });
  $timeout(function() {
    deferred.resolve(); // this aborts the request!
  }, 1000);
});
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21915834', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/130560/']}
jdg_86271
stackexchange
llm_judgeable_groundtruth_similarity
788
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Then Joseph could not restrain himself before all those who stood by him; and he cried, "Cause every man to go out from me!" And there stood no man with him while Joseph made himself known unto his brethren. And he wept aloud, and the Egyptians and the house of Pharaoh heard. And Joseph said unto his brethren, "I am Joseph. Doth my father yet live?" And his brethren could not answer him, for they were troubled at his presence (Genesis 45:1 - 3 KJV ) Nahmanides phrases the question eloquently: "How is it that Joseph, after living many years in Egypt, having attained a high and influential position in the house of an important Egyptian official, did not send his father even one message to inform him (that he was alive) and comfort him? Egypt is only six days' travel from Hebron, and respect for his father would have justified even a year's journey! (Commenting on Gen. 42:9)?" So...why didn't Joseph ever try to contact his father? EDIT: Some of my own thoughts on this question, to offer a concrete direction for inquiry. In Genesis 13, Abraham separates from his nephew Lot. At that moment, Lot is excluded from the covenantal relationship that God forges with Abraham and his offspring. In Genesis 25, Abraham rejects the children of his concubines, including Ishmael, and excludes them from the covenantal relationship which he bequeaths to Isaac. Genesis 27 - Isaac's son Esau is excluded from the covenant in a dramatic tale which includes Jacob stealing a blessing from his blind father. Jacob's earlier stealing of the birthright in chapter 25 suggests that the siblings understood they were in competition - one of the two might not be included in the Abrahamic covenant. This familial tension may be useful in trying to understand the hatred that Joseph's brothers harbored toward Joseph and their desire to kill him: Now Israel loved Joseph more than all his children, because he was the son of his old age; and he made him a coat of many colors. And when his brethren saw that their father loved him more than all his brethren, they hated him and could not speak peaceably unto him. And Joseph dreamed a dream, and he told it to his brethren; and they hated him yet the more (Genesis 37:3-5 KJV ). How does the lack of communication between Joseph in Egypt and his father play into this larger family dynamic? How does the unfolding of the Joseph story play with and twist assumptions held by Joseph, his brothers and the reader as the story progresses? Now provide the response and nothing else.
Genesis, in many ways, is the story of the Abrahamic Covenant being fulfilled: The Lord said to Abram, “Go forth from your native land and from your father’s house to the land that I will show you. I will make of you a great nation, And I will bless you; I will make your name great, And you shall be a blessing. I will bless those who bless you And curse him that curses you; And all the families of the earth Shall bless themselves by you.”—Genesis 12:1-3 ( NJPS ) The drama from the individual stories arises mostly from the conflict between this promise and a variety of threats to its fulfillment. Central to the movement of the story from one generation to the next is the question of which member of Abraham's family will be the inheritor of this covenant. We see several stories of close calls when some more powerful man threatens to father children with the wives of a patriarch. In fact, the first occurs immediately following the establishment of the covenant in Genesis 12:10-20 when Abraham and Sarai go down to Egypt because of a famine and pharaoh nearly takes her as a wife. Meanwhile, each Patriarch has his own set of problems producing a legitimate heir. Abraham produces Ishmael and Issac, but only Issac is chosen. Issac produces Esau and Jacob, but only Jacob is chosen. Jacob intensifies the problem by producing 12 sons. Normally, the inheritance would pass to the oldest, but Jacob has no loyalty to that law and has clearly signaled his favor to Joseph. What's more, Joseph receives two dreams that indicate God favors him too. The second involved not only Joseph's brothers, but his parents as well: He dreamed another dream and told it to his brothers, saying, “Look, I have had another dream: And this time, the sun, the moon, and eleven stars were bowing down to me.” And when he told it to his father and brothers, his father berated him. “What,” he said to him, “is this dream you have dreamed? Are we to come, I and your mother and your brothers, and bow low to you to the ground? So his brothers were wrought up at him, and his father kept the matter in mind.—Genesis 37:9-11 ( NJPS ) Now we know the bigger story of jealousy, revenge and despair, but from Joseph's perspective, it must have been bewildering. Might not his father have changed his mind about giving his property and rights to an arrogant teenager who can't keep from boasting? Perhaps God intended to put Joseph through the trial his grandfather endured—being placed on the altar of blood sacrifice only to be rescued by an angel at the last moment? Whatever else might be true, Joseph had to assume the blessing and promise would pass to one of his brothers rather than to him. For the next 13 or so years, Joseph was not able to move where he pleased but was held as a slave and a prisoner. But the situation changed after he was able to interpret Pharaoh's dreams: Pharaoh said to Joseph, “I am Pharaoh; yet without you, no one shall lift up hand or foot in all the land of Egypt.” Pharaoh then gave Joseph the name Zaphenath-paneah; and he gave him for a wife Asenath daughter of Poti-phera, priest of On. Thus Joseph emerged in charge of the land of Egypt.—Joseph was thirty years old when he entered the service of Pharaoh king of Egypt.—Leaving Pharaoh’s presence, Joseph traveled through all the land of Egypt.—Genesis 41:44-46 ( NJPS ) Two point in relation to the question: Joseph had considerable freedom within the land of Egypt, but he seems to have exchanged his Hebrew identity with an Egyptian one. 
He was given and Egyptian name, family, and position. Contacting his father at this point, might have raised questions about his loyalty. More so, since his new position would have required him to be exceptionally busy . Joseph could not know anything of the drama back home. As far as he was concerned, his father might have disowned him or established one of the other brothers as favorite. Maybe the sons killed their father too? Perhaps God fulfilled the dreams when Joseph became vizier of the land. At any rate, when his brothers show up, Joseph has lived in Egypt more than half his life and while he recognizes them, they don't know him. He has become an Egyptian. (Even his burial would be an Egyptian one.) The tension in chapters 42-44 turns on Joseph not knowing if he can trust his brothers. Joseph seems especially concerned about his mother's son, Benjamin. The tension is released in one of the climaxes of the Bible: Joseph could no longer control himself before all his attendants, and he cried out, “Have everyone withdraw from me!” So there was no one else about when Joseph made himself known to his brothers. His sobs were so loud that the Egyptians could hear, and so the news reached Pharaoh’s palace. Joseph said to his brothers, “I am Joseph. Is my father still well?” But his brothers could not answer him, so dumfounded were they on account of him. Then Joseph said to his brothers, “Come forward to me.” And when they came forward, he said, “I am your brother Joseph, he whom you sold into Egypt. Now, do not be distressed or reproach yourselves because you sold me hither; it was to save life that God sent me ahead of you. It is now two years that there has been famine in the land, and there are still five years to come in which there shall be no yield from tilling. God has sent me ahead of you to ensure your survival on earth, and to save your lives in an extraordinary deliverance. So, it was not you who sent me here, but God; and He has made me a father to Pharaoh, lord of all his household, and ruler over the whole land of Egypt.—Genesis 45:1-8 ( NJPS ) Similarly, after Jacob died: His brothers went to him themselves, flung themselves before him, and said, “We are prepared to be your slaves.” But Joseph said to them, “Have no fear! Am I a substitute for God? Besides, although you intended me harm, God intended it for good, so as to bring about the present result—the survival of many people. And so, fear not. I will sustain you and your children.” Thus he reassured them, speaking kindly to them.—Genesis 50:18-21 ( NJPS ) While there's an issue of Joseph recalling the story in light of his current status (and an even bigger issue of Moses reinterpreting the story as well), we can see that Joseph saw the time apart from his family and homeland as a provision from God. Better than anyone, Joseph can see how the fate of God's promise to Abraham was achieved by God's grace and not by Joseph's efforts. In turn, Joseph can be gracious to his brothers who tried to destroy him. Summary Joseph could not know his status with Jacob and the other 11 brothers when he finally was given the freedom to contact his birth family. Simultaneously, he received an Egyptian identity. Reading between the lines, he neither expected nor desired to see them again. But when he discovered the Lord had used him to fulfill the covenant with his great-grandfather, Joseph was able to see the bigger picture and forgive his bothers.
{}
{'log_upvote_score': 5, 'links': ['https://hermeneutics.stackexchange.com/questions/788', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/76/']}
jdg_86272
stackexchange
llm_judgeable_groundtruth_similarity
60905518
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: TypeScript allows us to alias an array-typed variable with a variable of a supertype (TypeScript arrays are covariant): const nums: number[] = [];const things: (number | string)[] = nums;things.push("foo");nums[0] *= 3;console.log(nums[0]); // `NaN` !! Why? This seems like a nice place to protect us from runtime errors. Given how Java was mocked for having covariant arrays, it seems this was an intentional TS feature. This was asked by someone else on a stale TypeScript issue , but I didn't see any answers. Now provide the response and nothing else.
As you've noted, array covariance is unsound and can lead to errors at runtime. One of TypeScript's Design Non-Goals is:

Apply a sound or "provably correct" type system. Instead, strike a balance between correctness and productivity.

which means that if some unsound language feature is very useful, and if requiring soundness would make the language very difficult or annoying to use, then it's likely to stay, despite potential pitfalls. Apparently there comes a point when it is "a fool's errand" to try to guarantee soundness in a language whose primary intent is to describe JavaScript. I'd say that the underlying issue here is that TypeScript wants to support some very useful features, which unfortunately play poorly together. The first is subtyping , where types form a hierarchy, and individual values can be of multiple types. If a type S is a subtype of type T , then a value s of type S is also a value of type T . For example, if you have a value of type string , then you can also use it as a value of type string | number (since string is a subtype of string | X for any X ). The entire edifice of interface and class hierarchy in TypeScript is built on the notion of subtyping. When S extends T or S implements T , it means that S is a subtype of T . Without subtyping, TypeScript would be harder to use. The second is aliasing , whereby you can refer to the same data with multiple names and don't have to copy it. JavaScript allows this: const a = {x: ""}; const b = a; b.x = 1; . Except for primitive data types, JavaScript values are references. If you tried to write JavaScript without passing around references, it would be a very different language. If TypeScript enforced that in order to pass an object from one named variable to another you had to copy all of its data over, it would be harder to use. The third is mutability . Variables and objects in JavaScript are generally mutable; you can reassign variables and object properties. Immutable languages are easier to reason about / cleaner / more elegant, but it's useful to mutate things. JavaScript is not immutable, and so TypeScript allows it. If I have a value const a: {x: string} = {x: "a"}; , I can follow up with a.x = "b"; with no error. If TypeScript required that all aliases be immutable, it would be harder to use. But put these features together and things can go bad:

let a: { x: string } = { x: "" };  // subtype
let b: { x: string | number };     // supertype

b = a;              // aliasing
b.x = 1;            // mutation
a.x.toUpperCase();  // explosion

Playground link to code Some languages solve this problem by requiring variance markers. Java's wildcards serve this purpose, but they are fairly complicated to use properly and (anecdotally) considered annoying and difficult. TypeScript has decided not to do anything here and treat all property types as covariant, despite suggestions to the contrary . Productivity is valued above correctness in this aspect. For similar reasons, function and method parameters were checked bivariantly until TypeScript 2.6 introduced the --strictFunctionTypes compiler option, at which point only method parameters are still always checked bivariantly. Bivariant type checking is unsound. But it's useful because it allows mutations, aliasing, and subtyping (without harming productivity by requiring developers to jump through hoops). And method parameter bivariance results in array covariance in TypeScript . Okay, hope that helps; good luck!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60905518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2482570/']}
jdg_86273
stackexchange
llm_judgeable_groundtruth_similarity
11048835
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to pass parameters from PHP Command Line Interface, and then read in the values using PHP script, something like this: <?php $name1 = $argv[1]; echo $name1;?> I pass the variable from CLI like this: C:\xampp\php\php.exe name.php Robby The above works, I get Robby as the output. But I want to do something like this: C:\xampp\php\php.exe name.php -inputFirstName="Robby" So that the user is well informed to enter the correct parameters in the correct places. What is the appropriate way to parse these parameters? Now provide the response and nothing else.
When calling a PHP script from the command line you can use $argc to find out how many parameters are passed and $argv to access them. For example, running the following script:

<?php
var_dump($argc); // number of arguments passed
var_dump($argv); // the arguments passed
?>

like this:

php script.php arg1 arg2 arg3

will give the following output:

int(4)
array(4) {
  [0]=> string(21) "d:\Scripts\script.php"
  [1]=> string(4) "arg1"
  [2]=> string(4) "arg2"
  [3]=> string(4) "arg3"
}

See $argv and $argc for further details. To do what you want, let's say

php script.php arg1=4

You would need to explode the argument on the equals sign:

list($key, $val) = explode('=', $argv[1]);
var_dump(array($key=>$val));

That way you can have whatever you want in front of the equals sign without having to parse it; just check the key=>value pairs are correct. However, that is all a bit of a waste; just instruct the user on the correct order to pass the arguments.
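Purely as an aside of my own (not PHP), the same name=value convention is just as easy to handle in other scripting languages, for example with Python's sys.argv:

import sys

# run as: python script.py inputFirstName=Robby
args = {}
for arg in sys.argv[1:]:
    if "=" in arg:
        key, value = arg.split("=", 1)
        args[key] = value

print(args.get("inputFirstName", "no name given"))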
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11048835', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/809901/']}
jdg_86274
stackexchange
llm_judgeable_groundtruth_similarity
286457
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A charged particle fixed to a frame $S^\prime$ is accelerating w.r.t an inertial frame $S$. For an observer A in the $S$ frame, the charged particle is accelerating (being attached to frame $S^\prime$) and therefore, he observes it to radiate. However, for the non-inertial observer B standing on the non-inertial frame $S^\prime$, the charged particle is at rest, and therefore, does not radiate at all. A infers that the charged particle radiates but B infers it doesn't. Can both inferences (mutually contradictory) be correct simultaneously? If yes, how? Now provide the response and nothing else.
First, I'll note that unless the non-inertial frame has a changing acceleration, there is some doubt as to whether it radiates at all. https://en.wikipedia.org/wiki/Paradox_of_a_charge_in_a_gravitational_field Assuming there is changing acceleration, yes the particle radiates, and this can be observed in both frames. Seeing it radiate would be a way of determining that you are in fact in a non-inertial frame, assuming your inner-ear was not already telling you that.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/286457', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/36793/']}
jdg_86275
stackexchange
llm_judgeable_groundtruth_similarity
14932739
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working with cakephp. I need to add three validation on email field. First validation if email not given, second for valid email address, third if email address is given then it should be unique. Because its a signup form. How I have add three validations on one field I try with the following code but it did not work for me. public $validate = array( 'email' => array( 'email' => array( 'rule' => array('email'), 'message' => 'Invalid email address', 'allowEmpty' => false, //'required' => false, //'last' => false, // Stop validation after this rule //'on' => 'create', // Limit validation to 'create' or 'update' operations ) ), 'email' => array( 'rule' => 'isUnique', 'message' => 'Email already registered' ) ); Now provide the response and nothing else.
You have two identical 'email' keys in that array; PHP keeps only the last one, so one of your rule sets is silently discarded. Change it to something like:

array(
    'email' => array(
        'notEmpty' => array(
            'rule' => 'notEmpty',
            'message' => 'Provide an email address'
        ),
        'validEmailRule' => array(
            'rule' => array('email'),
            'message' => 'Invalid email address'
        ),
        'uniqueEmailRule' => array(
            'rule' => 'isUnique',
            'message' => 'Email already registered'
        )
    )
);

Otherwise only one of your rules will be used.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14932739', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1124894/']}
jdg_86276
stackexchange
llm_judgeable_groundtruth_similarity
60095
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Question: How hot is the water in the pot? More precisely speaking, how can I get a temperature of the water as a function of time a priori? Background & My attempt: Recently I started spend some time on cooking. And I'm curious about it. I have learned mathematics as a undergraduate student for four years, but I know a little about thermodynamics. (I listened to such a lecture once. So I've heard of $dU = TdS - pdV$, Entropy and Gibbs energy for example though I forgot almost everything; anyway I think I've never seen a formula depending on time.) So I conduct a small experiment first: I heat 100ml of water by IH correspond approximately to 700W and measure its temperature every 30 seconds. Here is the results. It looks almost linear, but I think linear approximation is inappropriate; because if so, the water gets higher than $100^\circ\mathrm{C}$. So I guess it's some convex increasing function like $T(t) = 100 - \alpha e^{-t/\beta}$ for some positive constant $\alpha$ and $\beta$. But it doesn't fit the data. (It does fit the data. I just made a mistake in simple calculation. See my answer .) I think I ignored too many factors. So feel free to assume anything reasonable. I would greatly appreciate if you help me. Thank you. Additional question: I do a experiment and some calculation to deal with a problem pointed out in the comments of my answer: bad fitting at lower temperature. However, I cannot get a better solution. Fitting seems worse than before... Here is the results what I got. I heated 100ml water in pot with 9cm radius by IH correspond to 700W. (For calculation, I added linear interpolation values in the graph.) How can I get a better solution? (Light blue curve is a logistic approximation defined by $T = \dfrac{100}{1 + 1.62 e^{-0.0168 t}}$ as mentioned here .) Now provide the response and nothing else.
To close this post, I write an answer myself, though it turned out that I had just made a simple calculation mistake. Since the temperature increase should be monotonic and approach zero at the boiling point, it's reasonable to assume that the temperature increase $dT/dt$ is proportional to the difference $T - 100 \mathrm{^\circ C}$, that is, $$ \frac{dT}{dt} = -k(T - 100 \mathrm{^\circ C}) $$ holds for some positive constant $k$. Solving this equation gives $$ T = 100 \mathrm{^\circ C} + (T(t_0) - 100 \mathrm{^\circ C})e^{-k(t - t_0)}. $$ Let's determine the coefficient $k$ from $N$ measurements by linear regression. Let $c$ be the time interval between measurements and $x_n$ the temperature of the water at $t_n = cn$. Then estimate the slope of the tangent line by $$ y_n = \mathrm{mean}\big(\frac{x_{n + 1} - x_{n}}{c}, \frac{x_{n} - x_{n - 1}}{c}\big) = \frac{x_{n + 1} - x_{n - 1}}{2c} $$ for $0 < n < N$. From the above equation, there should be a relation of the form $$ y_n = -k(x_n - 100 \mathrm{^\circ C}) + \varepsilon_n $$ where $\varepsilon_n$ stands for experimental errors. I denote this equation by $$ y = -kx +\varepsilon $$ as shorthand (here $x$ stands for the vector of deviations $x_n - 100 \mathrm{^\circ C}$). The best estimator $\hat{k}$ is obtained when $x$ and $\varepsilon$ are orthogonal to each other. Therefore $$ \hat{k} = -\frac{(x, y)}{(x, x)} \approx 0.00667 \mathrm{s^{-1}} $$ from the calculations . And this value fits the experimental data well. Note: I completely rewrote this answer. Here, I would like to review where my previous answer went wrong. It was not a problem of physics but a problem of statistics. Last time, I solved the DE first and took the logarithm to make it linear. However, the experimental errors were also transformed. In particular, $\ln(100 - x_n) \to -\infty$ as $x_n \to 100$. This seems to cause overfitting at higher temperatures and a bad fit at lower temperatures. (Considering the effect of the pot looks like a good idea, but everything I tried failed to fit the data; that part is still open, though I already have a reasonable approximation.) Thank you so much for helping me, Chris, Stefan Bischof, Michael Brown and Christoph.
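Here is a short numerical sketch of that estimation procedure (my own addition; the temperature samples below are made up for illustration, not the actual measurements):

import numpy as np

c = 30.0                                   # sampling interval in seconds
temps = np.array([20.0, 35.0, 47.0, 57.0,  # made-up temperature readings
                  65.0, 72.0, 77.5, 82.0])

# central-difference slope estimates y_n at the interior points
y = (temps[2:] - temps[:-2]) / (2 * c)
x = temps[1:-1] - 100.0                    # deviations from the boiling point

# least-squares fit of y = -k x  =>  k = -(x . y) / (x . x)
k = -np.dot(x, y) / np.dot(x, x)
print("estimated k = %.5f 1/s" % k)

# fitted curve T(t) = 100 + (T0 - 100) * exp(-k t)
t = np.arange(len(temps)) * c
fit = 100.0 + (temps[0] - 100.0) * np.exp(-k * t)
print(np.round(fit, 1))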
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/60095', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/22786/']}
jdg_86277
stackexchange
llm_judgeable_groundtruth_similarity
10793
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a string contained in a variable, and I want to extract substrings based on position relative to another substring. My solution seems to work unless the string is sent to a function as an arg. I am using the bash shell. #!/usr/bin/bashvar0="-a check one two three"var1="check"function getsubstr() {echo ${*#*"${2}"} }# this prints 'one two three' which is what I wantecho ${var0#*"${var1}"}# this prints '-a one two three', not what I want.getsubstr $var0 note that when I put echo $* in the getsubstr function it prints the same string as $var0 (-> '-a check one two three'), and that when I put echo $2 in the getsubstr function is prints the same string as $var1 (-> 'check'). So, it seems to me that I'm asking to print the same substring in both circumstances. A further conundrum is that if, instead of echo ${*#*"${2}"} in the getsubstr function I use echo ${*%"${2}"*} , I get the exact same result. Any help in understanding this behavior would be greatly appreciated. BTW, I realize that ${*:3} inside the getsubstr function works to return the substring I want, but I'm trying to understand the #*<regexp> and %<regextp>* behavior. Now provide the response and nothing else.
Your getsubstr $var0 is passing 5 args to the function. Also, $* and $@ tests each individual $1 $2 etc.. arg against the # patttern. Regarding RegEx in bash : I've added some examples at the end, and btw, '*' is only a special regex char when it is used in a regex context, ie. when using =~ .In your first use of * in ${* , the asterisk's special use is as the (psuedo)name of a var which expands to a concatenation of all the vars: $1 $2 $...etc... Your second use of an asterisk, in #*"${2}" , means "$2" preceded by anything including nothing , is to be matched against each passed $1 etc arg separately/individually. The following script may help with $@ and $* (by example)... #!/bin/bash# getsubstr() { echo -n " ${#@} args"; [[ "$1$2$3$4$5$6" == *\ * ]] && echo " (with embedded spaces)" || echo " (no spaces)" echo ' "${*}" '\|"${*}"\| echo ' ${*} '\|${*}\| echo ' "${@}" '\|"${@}"\| echo ' ${@} '\|${@}\| echo ' "${*#*"${2}}" '\|"${*#*"${2}"}"\| echo ' ${*#*"${2}} '\|${*#*"${2}"}\| echo ' "${@#*"${2}}" '\|"${@#*"${2}"}"\| echo ' ${@#*"${2}} '\|${@#*"${2}"}\| echo ' ${*#B} '\|${*#B}\| echo ' "${*#B}" '\|"${*#B}"\| echo ' ${@#B} '\|${@#B}\| echo ' "${@#B}" '\|"${@#B}"\|}var0="a B c "echoecho -n "Passing "; getsubstr "$var0" ; echoecho -n "Passing "; getsubstr $var0 ; echoecho -n "Passing "; getsubstr "$var0" "$var0" ; echoecho -n "Passing "; getsubstr $var0 $var0 ; echoechoexit ################################################################### RegEx in bash # Regex checks: "=~" uses extended regular expression#+ Parenthesized subexpressions within the regular expression are saved#+ in the array variable BASH_REMATCH#+ $BASH_REMATCH / ${BASH_REMATCH[0]} is the string matching the entire regular expression. #+ ${BASH_REMATCH[n]} is the sub string matching the nth parenthesized subexpression [[ "abcdef" =~ (.)(.)(.) ]] && echo "# $BASH_REMATCH"# abc [[ "abcdef" =~ (.)(.)(.) ]] && echo "# ${BASH_REMATCH[0]}"# abc [[ "abcdef" =~ (.)(.)(.) ]] && echo "# ${BASH_REMATCH[2]}"# b [[ "abcdef" =~ (.)(.)(.) ]] && echo "# ${BASH_REMATCH[@]}"# abc a b c
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/10793', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/3127/']}
jdg_86278
stackexchange
llm_judgeable_groundtruth_similarity
9150702
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My regex does not pick the closest 'cont' pair to the inner text. How can I fix that? Input: cont cont ItextI /cont /cont Regex: cont.*?I(.*?)I.*?/cont Match: cont cont ItextI /cont Match I need: cont ItextI /cont Now provide the response and nothing else.
cont(?:(?!/?cont).)*I(.*?)I(?:(?!/?cont).)*/cont will only match the innermost block. Explanation: cont # match "cont"(?: # Match... (?!/?cont) # (as long as we're not at the start of "cont" or "/cont") . # any character.)* # Repeat any number of times.I # Match "I"(.*?) # Match as few characters as possible, capturing them.I # Match "I"(?: # Same as above (?!/?cont) .)*/cont # Match "/cont" This explicitly forbids cont or /cont to appear between the opening cont and the to-be-captured text (and between that text and the closing /cont ).
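As a quick sanity check of the pattern, here is a minimal Python sketch using the exact input from the question (this is my own illustration, not part of the original answer; any regex flavor with lookaheads behaves the same way):

import re

pattern = r'cont(?:(?!/?cont).)*I(.*?)I(?:(?!/?cont).)*/cont'
text = 'cont cont ItextI /cont /cont'

m = re.search(pattern, text)
print(m.group(0))  # cont ItextI /cont   -- only the innermost block is matched
print(m.group(1))  # text                -- the captured inner text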
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9150702', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/792313/']}
jdg_86279
stackexchange
llm_judgeable_groundtruth_similarity
12119031
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to have a loop be executed once every minute when datetime.utcnow().second is zero.So far I have this while True: while datetime.utcnow().second != 0: pass do_something() But the problem with this is that I am wasting cpu processes. I would use time.sleep(60) , but I don't know how it would sync with the UTC clock, because time.sleep(60) could stray from the official UTC time as time passes. Now provide the response and nothing else.
Best way I can think of would be to sleep until the next minute: while True: sleeptime = 60 - datetime.utcnow().second time.sleep(sleeptime) ... If you want to be really precise: while True: t = datetime.utcnow() sleeptime = 60 - (t.second + t.microsecond/1000000.0) time.sleep(sleeptime) ... This sleeps for exactly the amount of time necessary to reach the next minute, with subsecond precision. EDITED to fix minute rollover bug.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12119031', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/299648/']}
jdg_86280
stackexchange
llm_judgeable_groundtruth_similarity
1539367
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using jQuery, I'd like to remove the whitespace and line breaks between HTML tags. var widgetHTML = ' <div id="widget"> <h2>Widget</h2><p>Hi.</p> </div>'; Should be: alert(widgetHTML); // <div id="widget"><h2>Widget</h2><p>Hi.</p></div> I think the pattern I will need is: >[\s]*< Can this be accomplished without using regex? Now provide the response and nothing else.
I tried the technique that user76888 laid out and it worked nicely. I packaged it into a jQuery plugin for convenience, and thought the community might enjoy it, so here: jQuery.fn.cleanWhitespace = function() { this.contents().filter( function() { return (this.nodeType == 3 && !/\S/.test(this.nodeValue)); }) .remove(); return this;} To use this, just include it in a script tag, then select a tag to clean with jQuery and call the function like so: $('#widget').cleanWhitespace();
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1539367', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/169109/']}
jdg_86281
stackexchange
llm_judgeable_groundtruth_similarity
293545
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know that $$S=\{(x,y,z)\in \mathbb R^3: z^2=x^2+y^2\}$$ is not a regular surface, because it has a vertex at $(0,0,0)$. But how can I show this precisely? Maybe the theorem that a regular surface is locally the graph of an infinitely differentiable function of the form $z=f(x,y)$ or $y=g(x,z)$ or $x=h(y,z)$ is useful here? Now provide the response and nothing else.
Consider the upper cone $S_1$ where $$x^2 + y^2 = z^2$$ with $z\geq0$ and the lower cone $S_2$ where $$x^2 + y^2 = z^2$$ with $z\lt0$. Notice that any open set in $\mathbb{R^3}$ containing the vertex $(0,0,0)$ must also contain points of $S_1$ (besides the vertex) and of $S_2$. We have that $S = S_1 \bigcup S_2$. Suppose there is an open set $U$ in $\mathbb{R^2}$, an open set $W$ in $\mathbb{R^3}$ containing $(0,0,0)$, and a homeomorphism $H:U\to S\cap W$. Let $a$, $b$ and $c$ be three distinct points in $U$ such that $H(a)=u$, $H(b)=(0,0,0)$ and $H(c)=v$, with $u\in S_1$ and $v \in S_2$. You can find a path connecting $a$ and $c$ in $U$ such that $b$ is not in this path, but as $H$ is a homeomorphism, it means that the image of this path by $H$ does not pass through the vertex, i.e., the image of this path by $H$ would connect a point in $S_1$ (different from the vertex) and a point of $S_2$ but not contain the vertex. Contradiction.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/293545', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/14284/']}
jdg_86282
stackexchange
llm_judgeable_groundtruth_similarity
4220864
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to quickly sum values from 1 to 100, but exclude values with digits of 7 and/or 8 (e.g., 7,8,17,18,70,78,....) from the sum. This is a mental math problem that I want to do in a really quick way. The fastest approach I could come up with is note that sum of 1 to 100 is $100*101/2=5050$ . Then subtract sum of 70 to 79, which is $10 * 70 + 9 * 10 / 2 = 745$ . Then subtract sum of 80 to 89 which is $745 + 100 = 845$ . Then we need to subtract $7 + 8 + 17 + 18 + 27 + 28 + \ldots + 67 + 68 + 97 + 98 = 6 * 7 / 2 * 10 * 2 + 180 + 8 * (7 * 8) = 720$ . Let me explain the last expression. The $6 * 7 / 2 * 10 * 2 + 180$ is the sum of just the tens and the $8 * (7 * 8)$ is the sum of the ones. The $6*7/2 * 10$ is the sum of $10 + \ldots + 60$ . We multiply this by 2 because there're two of these sums. 180 is sum of the tens part of 97 and 98. Then finally $8 * (7 * 8)$ is saying that there are eight 7s and 8s in the ones spot. Can you come up with something better? Now provide the response and nothing else.
Here's a much faster way. Hint: Ignore 100 first. Claim: The sum of all the other numbers is $$(1+2+3+4+5+6+9+0) \times 8 \times (10 + 1) = 30 \times 8 \times 11 = 2640.$$ Hence, the total is $2640 + 100 = 2740$ . Why is the claim true? How many times does each allowed digit appear in the units place? How many times does each allowed digit appear in the tens place?
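If you want to double-check the counting argument, a short brute-force sketch in Python (my own addition, not part of the answer) confirms the total:

# Sum 1..100, skipping every number whose decimal digits contain a 7 or an 8.
total = sum(n for n in range(1, 101) if '7' not in str(n) and '8' not in str(n))
print(total)  # 2740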
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4220864', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/512679/']}
jdg_86283
stackexchange
llm_judgeable_groundtruth_similarity
28654
Below is a question asked on the forum arduino.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have an Arduino data collector with a W5100 module (see bottom of picture). Simple fixed IP connection: byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED }; byte ip[] = { 192, 168, 1, 242 }; Ethernet.begin(mac,ip); This works with my old routers (Asus WL500, TP-Link MR3420), but fails to connect when plugged into a gigabit switch or my new router (D-Link DWR-116). The connection LED on the new devices does not light up or blinks slowly, showing repeated connection attempts. So the W5100 fails to make the physical connection. I made it work by adding my 10-year-old Surecom switch to the mix, but wonder what the problem could be. Is there some cable configuration that is used by the W5100 but has been removed from these new network devices? (Not sure if the cable is cross or straight, trying another cable is messy, the whole thing is built in the wall.) Now provide the response and nothing else.
I think your problem is due to the MAC address that is being loaded into the W5100. Try setting the two low-order bits of the first byte to 0. These bits define whether the MAC address is unicast or multicast and whether it is private or universal. Some switches appear to block ARP requests if these bits are set incorrectly. I changed the first byte of the MAC address from x'47' to x'44' and things started working.
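To see which flag bits are involved, here is a small Python sketch (my own illustration, not from the original answer) that inspects the two low-order bits of the first MAC octet: bit 0 is the multicast/unicast flag and bit 1 is the locally-administered/universal flag.

def mac_flag_bits(first_octet: int) -> dict:
    """Decode the two low-order flag bits of the first MAC octet."""
    return {
        "multicast": bool(first_octet & 0b01),            # I/G bit: 1 = multicast/group address
        "locally_administered": bool(first_octet & 0b10), # U/L bit: 1 = locally administered
    }

print(mac_flag_bits(0x47))  # both flags set: the combination the answer says caused problems
print(mac_flag_bits(0x44))  # both flags clear: ordinary unicast, universally administered
print(mac_flag_bits(0xDE))  # the MAC from the question: unicast, locally administered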
{}
{'log_upvote_score': 4, 'links': ['https://arduino.stackexchange.com/questions/28654', 'https://arduino.stackexchange.com', 'https://arduino.stackexchange.com/users/26011/']}
jdg_86284
stackexchange
llm_judgeable_groundtruth_similarity
67134
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need to find the element of a list with the highest weight. Example: {{1, 3.80737}, {2, 4.48538}, {3, 2.64947}, {4, 1.06387}, {5, 5.07804}, {6, 1.33265}, {7, 9.11426}, {8, 6.90628}, {9, 5.34919}, {10, 3.90156}} the program should pick {7, 9.11426}. I've done the following working example: n=50;tabint = Table[{i, RandomReal[{1, n}]}, {i, 1, n}] (*to generate a random table. This command does not matter for the efficiency of my programme. It's only for this example*)tabsort = Sort[tabint, (#1[[2]] >= #2[[2]]) &];initialvls = tabsort[[1]]; The thing that bothers me with my code is the sort. It seems that sorting the whole list takes more time than I would need if I just looked for the maximum weight. I bet there is a faster way to do this. Any help would be appreciated. P.S.: I've tried to find other questions similar to this one, but the answers don't seem to be applicable to my problem. I may be wrong though... Now provide the response and nothing else.
lst = {{1, 3.80737}, {2, 4.48538}, {3, 2.64947}, {4, 1.06387}, {5, 5.07804}, {6, 1.33265}, {7, 9.11426}, {8, 6.90628}, {9, 5.34919}, {10, 3.90156}};f = #[[Ordering[#[[All, 2]], -1]]][[1]] &;f@lst(* {7, 9.11426} *) Timing: n = 500000;tabint = Table[{i, RandomReal[{1, n}]}, {i, 1, n}];f@tabint // AbsoluteTiming(* {0.017019,{378308,499999.}} *)First@Cases[#, {_, Max@#[[All, 2]]}] &@tabint // AbsoluteTiming(* {0.130194,{378308,499999.}} *)
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/67134', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/11313/']}
jdg_86285
stackexchange
llm_judgeable_groundtruth_similarity
15985
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Again, I am reading this. I am finding it a bit difficult to understand the definition of an n-dimensional smooth manifold. Now, $\{U_a; x^1_a, x^2_a, ..., x^n_a\}$ ----(1) Is the thing (1) a set? (I think it is not). Is it a tuple? Also, is $U_a$ a set or a set of sets? What is the significance of the subscript $a$? I would like it very much if someone could explain the definition in a way that is easier to understand, with good examples. Now provide the response and nothing else.
This is there in the notes. $\{ U_a \}$ is an open cover of $M$. This is a set where each element $U_a$ is an open subset of $M$. The $a$ is used as an index for this set; so it would help to have put $a\in A$, say, where appropriate. Each $x_a^i$ is a coordinate function on the open set $U_a$ (for given $a$). The notes do give an example of the unit sphere which has an open cover consisting of two open subsets.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/15985', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/705/']}
jdg_86286
stackexchange
llm_judgeable_groundtruth_similarity
24459
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When working on a project under version control with git, I often want to do some things in a shell that affect many of my open files, then revert every buffer that I have open to make sure that I don't accidentally clobber the new version with whatever I had open. I know magit can be helpful here, but I'm used to my workflow in the shell and I'd like to keep it for now. So instead, I'd like to revert all open buffers, and maybe close any that have stopped existing (e.g. because of a git checkout of a branch that no longer has that file). I have the following snippet of elisp that I grabbed from a Google search: (defun revert-all-buffers () "Refreshes all open buffers from their respective files" (interactive) (let* ((list (buffer-list)) (buffer (car list))) (while buffer (when (and (buffer-file-name buffer) (not (buffer-modified-p buffer))) (set-buffer buffer) (revert-buffer t t t)) (setq list (cdr list)) (setq buffer (car list)))) (message "Refreshed open files")) But this breaks if it hits an error in one of my open files, i.e. when reverting B1 , B2 , B3 ,..., Bn an error while trying to revert B2 prevents B3 - Bn from being reverted. How can I tell emacs to ignore any errors that pop up in this case? I don't want to use global-auto-revert-mode because each revert triggers some heavy duty stuff like my auto-complete and syntax checker re-parsing the file, hanging emacs for a second or so. Now provide the response and nothing else.
Original Here is my slightly improved version of the snippet in the question. Reviewing my VC history, I confirm that the below snippet started out as the snippet posted by the OP. So I do pay attribute to that. Here's the code that has been stable for me: (defun modi/revert-all-file-buffers () "Refresh all open buffers from their respective files." (interactive) (let* ((list (buffer-list)) (buffer (car list))) (while buffer (let ((filename (buffer-file-name buffer))) ;; Revert only buffers containing files, which are not modified; ;; do not try to revert non-file buffers like *Messages*. (when (and filename (not (buffer-modified-p buffer))) (if (file-exists-p filename) ;; If the file exists, revert the buffer. (with-current-buffer buffer (revert-buffer :ignore-auto :noconfirm :preserve-modes)) ;; If the file doesn't exist, kill the buffer. (let (kill-buffer-query-functions) ; No query done when killing buffer (kill-buffer buffer) (message "Killed non-existing file buffer: %s" filename))))) (setq buffer (pop list))) (message "Finished reverting buffers containing unmodified files."))) Update Here's an improved and a better documented version of above after looking at @Drew's solution . (defun modi/revert-all-file-buffers () "Refresh all open file buffers without confirmation.Buffers in modified (not yet saved) state in emacs will not be reverted. Theywill be reverted though if they were modified outside emacs.Buffers visiting files which do not exist any more or are no longer readablewill be killed." (interactive) (dolist (buf (buffer-list)) (let ((filename (buffer-file-name buf))) ;; Revert only buffers containing files, which are not modified; ;; do not try to revert non-file buffers like *Messages*. (when (and filename (not (buffer-modified-p buf))) (if (file-readable-p filename) ;; If the file exists and is readable, revert the buffer. (with-current-buffer buf (revert-buffer :ignore-auto :noconfirm :preserve-modes)) ;; Otherwise, kill the buffer. (let (kill-buffer-query-functions) ; No query done when killing buffer (kill-buffer buf) (message "Killed non-existing/unreadable file buffer: %s" filename)))))) (message "Finished reverting buffers containing unmodified files.")) Reference
{}
{'log_upvote_score': 5, 'links': ['https://emacs.stackexchange.com/questions/24459', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/10368/']}
jdg_86287
stackexchange
llm_judgeable_groundtruth_similarity
3820
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been asked a question - if I calculate some sd, can I change one value and still keep the same sd. The answer is simply yes. For example: sd(c(2,3,4))sd(c(3,4,5)) But what I wondered then is, assuming you change k values, under what rules do you change them so to always keep the same sd (is it true to ask what are the degrees of freedom here?!) I am not sure how this applies to anything practical, but I imagine there was a good deal of theoretical work on such questions - but I don't know where to even look for them. Thanks. Now provide the response and nothing else.
The question is about the data, not random variables. Let $X = (x_1, x_2, \ldots, x_n)$ be the data and $Y = (y_1, y_2, \ldots, y_n)$ be additive changes to the data so that the new values are $(x_1+y_1, \ldots, x_n+y_n)$. From $$\text{Var}(X) = \text{Var}(X+Y) = \text{Var}(X) + 2 \text{Cov}(X,Y) + \text{Var}(Y)$$ we deduce that $$(*) \quad \text{Var}(Y) + 2 \text{Cov}(X,Y) = 0$$ is necessary for the variance to be unchanged. Add in $n-k$ additional constraints to zero out all but $k$ of the $y_i$ (there are ${n \choose k}$ ways to do this) and note that all $n-k+1$ constraints almost everywhere have linearly independent derivatives. By the Implicit Function Theorem, this defines a manifold of $n - (n-k+1) = k-1$ dimensions (plus perhaps a few singular points): those are your degrees of freedom. For example, with $X = (2, 3, 4)$ we compute $$3 \text{Var}(y) = y_1^2 + y_2^2 + y_3^2 - (y_1+y_2+y_3)^2/3$$ $$3 \text{Cov}(x,y) = (2 y_1 + 3 y_2 + 4 y_3) - 3(y_1 + y_2 + y_3)$$ If we set (arbitrarily) $y_2 = y_3 = 0$ the solutions to $(*)$ are $y_1 = 0$ (giving the original data) and $y_1 = 3$ (the posted solution). If instead we require $y_1=y_3 = 0$ the only solution is $y_2 = 0$: you can't keep the SD constant by changing $y_2$. Similarly we can set $y_3 = -3$ while zeroing the other two values. That exhausts the possibilities for $k=1$. If we set only $y_3 = 0$ (one of the cases where $k = 2$) then we get a set of solutions $$y_2^2 - y_1 y_2 + y_1^2 - 3y_1 = 0$$ which consists of an ellipse in the $(y_1, y_2)$ plane. Similar sets of solutions arise in the choices $y_2 = 0$ and $y_1 = 0$.
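A quick numerical check of these claims, as a Python sketch of my own (statistics.stdev is the sample SD, but the conclusion is the same for the population SD since the constraint is scale-consistent):

from statistics import stdev

x = [2, 3, 4]
print(stdev(x))                  # 1.0  -- the original SD

# k = 1: change only the first value by y1 = 3
print(stdev([2 + 3, 3, 4]))      # 1.0  -- unchanged, as claimed

# k = 2 (y3 = 0): points (y1, y2) on the ellipse y2^2 - y1*y2 + y1^2 - 3*y1 = 0,
# e.g. (3, 3) and (1, 2), also leave the SD unchanged
print(stdev([2 + 3, 3 + 3, 4]))  # 1.0
print(stdev([2 + 1, 3 + 2, 4]))  # 1.0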
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/3820', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/253/']}
jdg_86288
stackexchange
llm_judgeable_groundtruth_similarity
16724669
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to inverse the true/false value in my numpy masked array. So in the example below i don't want to mask out the second value in the data array, I want to mask out the first and third value. Below is just an example. My masked array is created by a longer process than runs before. So I can not change the mask array itself. Is there another way to inverse the values? import numpydata = numpy.array([[ 1, 2, 5 ]])mask = numpy.array([[0,1,0]])numpy.ma.masked_array(data, mask) Now provide the response and nothing else.
import numpydata = numpy.array([[ 1, 2, 5 ]])mask = numpy.array([[0,1,0]])numpy.ma.masked_array(data, ~mask) #note this probably wont work right for non-boolean (T/F) values#ornumpy.ma.masked_array(data, numpy.logical_not(mask)) for example >>> a = numpy.array([False,True,False])>>> ~aarray([ True, False, True], dtype=bool)>>> numpy.logical_not(a)array([ True, False, True], dtype=bool)>>> a = numpy.array([0,1,0])>>> ~aarray([-1, -2, -1])>>> numpy.logical_not(a)array([ True, False, True], dtype=bool)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16724669', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1738154/']}
jdg_86289
stackexchange
llm_judgeable_groundtruth_similarity
1644362
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a file with lines like account = "TEST1" Qty=100 price = 20.11 subject="some value" values="3=this, 4=that" There is no special delimiter and each key has a value that is surrounded by double quotes if its a string but not if it is a number. There is no key without a value though there may exist blank strings which are represented as "" and there is no escape character for a quote as it is not needed I want to know what is a good way to parse this kind of line with python and store the values as key-value pairs in a dictionary Now provide the response and nothing else.
We're going to need a regex for this. import re, decimalr= re.compile('([^ =]+) *= *("[^"]*"|[^ ]*)')d= {}for k, v in r.findall(line): if v[:1]=='"': d[k]= v[1:-1] else: d[k]= decimal.Decimal(v)>>> d{'account': 'TEST1', 'subject': 'some value', 'values': '3=this, 4=that', 'price': Decimal('20.11'), 'Qty': Decimal('100.0')} You can use float instead of decimal if you prefer, but it's probably a bad idea if money is involved.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1644362', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/178437/']}
jdg_86290
stackexchange
llm_judgeable_groundtruth_similarity
14385
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: From what I know, potentiometers vary an analog current with an input usually in the form of a knob. And transistors vary an analog current with an input in the form of a smaller current. Is there any practical difference between the two other than one varies between the position of a knob and the other varies between a smaller current? Now provide the response and nothing else.
They are entirely different. Any resistor is a linear device; a transistor is nonlinear. Simply put, a linear device has a linear relationship between voltage and current, while transistors show much more complex behavior. One is an interface, the other is not. You cannot twiddle a transistor to change its characteristics. A potentiometer is used to change a setting of a device, permanently at production or during use. As long as it is not touched, the setting stays the same, since it is a passive device. Transistors can't be used to do that. A transistor, being an active device, ties some of its characteristics (say, its output resistance) to one of its operating quantities (say, the collector current). A transistor does exhibit resistance in the sense that there is a voltage and a current, and when I measure those, it looks like a resistance. But when you change the CE voltage, the current will not change in the same ratio. Say you double the CE voltage: you will not get double the current, but rather nearly the same current, which makes it a differential resistance. With any resistor, you would get a linear response: double voltage ==> double current. This leads to the possibility of self-feedback on a transistor, making it possible to really regulate something, say in a constant current source. Twice the voltage, same current? Completely impossible with only passive devices; no resistor can do that.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/14385', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/4161/']}
jdg_86291
stackexchange
llm_judgeable_groundtruth_similarity
30739244
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Flask as a REST endpoint which adds an application request to a queue. The queue is then consumed by a second thread. server.py def get_application(): global app app.debug = True app.queue = client.Agent() app.queue.start() return [email protected]("/api/v1/test/", methods=["POST"])def test(): if request.method == "POST": try: #add the request parameters to queue app.queue.add_to_queue(req) except Exception: return "All the parameters must be provided" , 400 return "", 200 return "Resource not found",404 client.py class Agent(threading.Thread): def __init__(self): threading.Thread.__init__(self) self.active = True self.queue = Queue.Queue(0) def run(self): while self.active: req = self.queue.get() #do something def add_to_queue(self,request): self.queue.put(request) Is there a shutdown event handler in flask so that I can cleanly shutdown the consumer thread whenever the flask app is shutdown (like when the apache service is restarted)? Now provide the response and nothing else.
There is no app.stop() if that is what you are looking for, however using module atexit you can do something similar: https://docs.python.org/2/library/atexit.html Consider this: import atexit#defining function to run on shutdowndef close_running_threads(): for thread in the_threads: thread.join() print "Threads complete, ready to finish"#Register the function to be called on exitatexit.register(close_running_threads)#start your processapp.run() Also of note- atexit will not be called if you force your server down using Ctrl-C. For that there is another module- signal . https://docs.python.org/2/library/signal.html
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30739244', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2912843/']}
jdg_86292
stackexchange
llm_judgeable_groundtruth_similarity
4962290
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If an item in memcached is set to never expire, is it exempt from LRU eviction? The docs that I've seen don't paint a clear picture as to which takes precedence. In my mind, it would be ideal (perhaps very complicated internally) to have LRU only apply to items that had an expiry > 0. Now provide the response and nothing else.
No, it is not exempt. Memcached is a cache, not persistent storage. Any item within it, or the entire cache itself may disappear at any moment (but it's not likely unless it's full, or there's a major problem). Under heavy memory pressure, the LRU algorithm will remove whatever it feels necessary. What is memcached's cache? The cache structure is an LRU (Least Recently Used), plus expiration timeouts. When you store items into memcached, you may state how long it should be valid in the cache. Which is forever, or some time in the future. If the server is out of memory, expired slabs are replaced first, then the oldest unused slabs go next. If the system has no areas of expired data, it will throw away the least recently used block (slab) of memory.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4962290', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/125487/']}
jdg_86293
stackexchange
llm_judgeable_groundtruth_similarity
16500040
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following type class class MyClass c where aFunction :: c -> Bool and two instances for two different data types data MyDataType1 = MyDataType1 instance MyClass MyDataType1 where aFunction c = True data MyDataType2 = MyDataType2 instance MyClass MyDataType2 where aFunction c = False I want to write a function which takes two parameters of typeclass MyClass (which might be the same data type or might be different) and returns one of them. I'm struggling to work out the type signature for this and I think I might be taking the wrong approach. Would this be correct? If not, what should I use instead? chooseOne :: (MyClass a, MyClass b) => a -> b -> ? chooseOne x y = if (aFunction x) then x else y Now provide the response and nothing else.
Your return value could be of either type, so the compiler will complain unless you use the same type for both, giving chooseOne :: (MyClass a, MyClass a) => a -> a -> a which isn't what you mean. To combine two potentially different types into one, you can use the Either data type: data Either a b = Left a | Right b so you would have chooseOne :: (MyClass a, MyClass b) => a -> b -> Either a bchooseOne x y = if (aFunction x) then Right x else Left y But I'd rather write that chooseOne :: (MyClass a, MyClass b) => a -> b -> Either a bchooseOne x y | aFunction x = Right x | otherwise = Left y
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16500040', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/368186/']}
jdg_86294
stackexchange
llm_judgeable_groundtruth_similarity
1589214
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here's my scenario: Let's say I have a stored procedure in which I need to call another stored procedure on a set of specific ids; is there a way to do this? i.e. instead of needing to do this: exec p_MyInnerProcedure 4exec p_MyInnerProcedure 7exec p_MyInnerProcedure 12exec p_MyInnerProcedure 22exec p_MyInnerProcedure 19 Doing something like this: *magic where I specify my list contains 4,7,12,22,19*DECLARE my_cursor CURSOR FAST_FORWARD FOR*magic select*OPEN my_cursor FETCH NEXT FROM my_cursor INTO @MyIdWHILE @@FETCH_STATUS = 0BEGINexec p_MyInnerProcedure @MyIdFETCH NEXT FROM my_cursor INTO @MyIdEND My Main goal here is simply maintainability (easy to remove/add id's as the business changes), being able to list out all Id's on a single line... Performance shouldn't be as big of an issue Now provide the response and nothing else.
declare @ids table(idx int identity(1,1), id int) insert into @ids (id) select 4 union select 7 union select 12 union select 22 union select 19 declare @i int declare @cnt int declare @id int select @i = min(idx) - 1, @cnt = max(idx) from @ids while @i < @cnt begin select @i = @i + 1 select @id = id from @ids where idx = @i exec p_MyInnerProcedure @id end
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1589214', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/30006/']}
jdg_86295
stackexchange
llm_judgeable_groundtruth_similarity
6296235
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Anyone know how to get around this? On OSX, trying to get RSpec running with Rails 3.0.7. Full details at: https://gist.github.com/1017044 it "renders buttons_widgets partial" do get :buttons_widgets response.should render_template("buttons_widgets") end→ rspec tools_model_spec.rb/Users/mm/.rvm/gems/ruby-1.9.2-p0@evergreen/bundler/gems/rspec-core-bea2366c817e/lib/rspec/core/version.rb:4: warning: already initialized constant STRING/Users/mm/.rvm/gems/ruby-1.9.2-p0@evergreen/bundler/gems/rspec-core-bea2366c817e/lib/rspec/core/metadata.rb:48: warning: already initialized constant RESERVED_KEYS/Users/mm/.rvm/gems/ruby-1.9.2-p0@evergreen/bundler/gems/rspec-core-bea2366c817e/lib/rspec/core/pending.rb:6: warning: already initialized constant DEFAULT_MESSAGE/Users/mm/.rvm/gems/ruby-1.9.2-p0@evergreen/bundler/gems/rspec-core-bea2366c817e/lib/rspec/core/world.rb:6: warning: already initialized constant PROC_HEX_NUMBER/Users/mm/.rvm/gems/ruby-1.9.2-p0@evergreen/bundler/gems/rspec-core-bea2366c817e/lib/rspec/core/world.rb:7: warning: already initialized constant PROJECT_DIR/Users/mm/.rvm/gems/ruby-1.9.2-p0@evergreen/bundler/gems/rspec-core-bea2366c817e/lib/rspec/core/configuration.rb:43: warning: already initialized constant CONDITIONAL_FILTERS/Users/mm/.rvm/gems/ruby-1.9.2-p0@evergreen/bundler/gems/rspec-core-bea2366c817e/lib/rspec/core/configuration.rb:48: warning: already initialized constant DEFAULT_BACKTRACE_PATTERNS/Users/mm/.rvm/gems/ruby-1.9.2-p0@evergreen/bundler/gems/rspec-core-bea2366c817e/lib/rspec/core/runner.rb:13: warning: already initialized constant AT_EXIT_HOOK_BACKTRACE_LINE/Users/mm/.rvm/gems/ruby-1.9.2-p0@evergreen/bundler/gems/rspec-core-bea2366c817e/lib/rspec/core.rb:35: warning: already initialized constant SharedContextRun filtered excluding {:if=>#<Proc:/Users/mm/.rvm/gems/ruby-1.9.2-p0@evergreen/gems/rspec-core-2.6.4/lib/rspec/core/configuration.rb:43>, :unless=>#<Proc:/Users/mm/.rvm/gems/ruby-1.9.2-p0@evergreen/gems/rspec-core-2.6.4/lib/rspec/core/configuration.rb:44>}FFailures: 1) ToolsController renders buttons_widgets partial Failure/Error: get :buttons_widgets NoMethodError: undefined method `get' for #<RSpec::Core::ExampleGroup::Nested_1:0x00000106db51f8># ./tools_model_spec.rb:7:in `block (2 levels) in <top (required)>' Now provide the response and nothing else.
RSpec doesn't know that your spec is a controller spec, so your examples don't have access to a get method. RSpec 2.x assumes that everything in the controllers directory is a controller spec. This was changed in RSpec 3: File-type inference disabled by default Previously we automatically inferred spec type from a file location, this was a surprising behaviour for new users and undesirable for some veteran users so from RSpec 3 onwards this behaviour must be explicitly opted into with: RSpec.configure do |config| config.infer_spec_type_from_file_location!end https://www.relishapp.com/rspec/rspec-rails/docs/upgrade#file-type-inference-disabled In the rspec-rails README : Controller specs default to residing in the spec/controllers folder . Tagging any context with the metadata :type => :controller treats it's examples as controller specs. An example of setting the controller context metadata for RSpec: describe ToolsController, :type => :controller do # ...end
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6296235', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/214365/']}
jdg_86296
stackexchange
llm_judgeable_groundtruth_similarity
606781
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider a ball free falling from a certain height onto a flat surface. Assuming there are no energy losses, the ball would keep on bouncing and execute a periodic motion. The problem is to find the period of the motion using action-angle variables. I am struggling to write out the Hamiltonian. I think $H=\frac{p^2}{2m}+mgy$ is not enough as it doesn't account for the periodic motion. Its phase diagram is not a closed curve or a periodic function. What should the Hamiltonian be? Now provide the response and nothing else.
You're right that the Hamiltonian you've provided is incomplete, you need to include the fact that there's an impenetrable barrier underneath the ball (i.e., the "floor"). As a result, your potential energy can't be a smooth function, but would rather be something like: $$V(y) = \begin{cases}mg y \quad &y\geq0 \\\infty\quad &y < 0\end{cases}.$$ The Hamiltonian for the system would then just be $$H = \frac{p^2}{2m} + V(y).$$ In the case of such problems it's usually much more convenient to model the barrier as a constraint rather than as a potential. But nevertheless, the potential will be as shown below (left), and I'll leave it to you to show why the phase diagram looks like the image on the right (ask yourself what happens when the ball hits the floor, and it should be clear). In the diagrams above I've assumed units in which $m=1$ , $g=1$ , and that the ball is released at a height of $y=2$ , so that its total energy is $E=2$ in these units. To find the "action" variable, you'll need to calculate $$J = \oint p \text{d}y = \int_\text{Going down} \text{(something)} + \int_\text{Going up} \text{(something else)},$$ where I'll leave it to you to find the appropriate limits and integrand.
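For reference, here is one way the integral left as an exercise can be worked out; this is my own sketch under the answer's conventions (same symbols $p$, $y$, $m$, $g$, with $E$ the total energy), so treat it as a check rather than part of the original answer. $$ J = \oint p\,\mathrm{d}y = 2\int_0^{E/mg}\sqrt{2m\,(E-mgy)}\,\mathrm{d}y = \frac{4\sqrt{2}}{3}\,\frac{E^{3/2}}{\sqrt{m}\,g}, \qquad T = \frac{\mathrm{d}J}{\mathrm{d}E} = \frac{2}{g}\sqrt{\frac{2E}{m}}, $$ which for a drop from height $h$ (so $E = mgh$) gives $T = 2\sqrt{2h/g}$, i.e. twice the free-fall time, as expected.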
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/606781', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/285337/']}
jdg_86297
stackexchange
llm_judgeable_groundtruth_similarity
44001
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: This is a question from Jech's Set Theory (Ex. 17.12) which I'm reading at the moment and pretty much stuck on. If $D$ is a normal measure on $\kappa$ and $\{ \aleph_\alpha \colon> 2^{\aleph_\alpha} \le> \aleph_{\alpha+\beta}\} \in D$ (for some constant $\beta < \kappa$), then $2^\kappa> \le \aleph_{\kappa + \beta}$ He gives the following hint: If $f$ is such that $f(\aleph_\alpha) = \aleph_{\alpha+\beta}$ for all $\alpha < \kappa$, then $[f]_D = (\aleph _{ \kappa+j(\beta)})^M$ I think that I am just confused about the whole representation in $M$ and how to use it to solve this problem. Hints, partial or complete solutions are most welcomed. Now provide the response and nothing else.
The question you've stated isn't the question in Jech, you've made a minor typo. Here's the actual problem: If $\beta < \kappa$ and {$\aleph _{\alpha} : 2^{\aleph _{\alpha}} \leq \aleph _{\alpha + \beta}$} $\in D$ and $D$ is a normal measure on $\kappa$, then $2^{\aleph _{\kappa}} \leq \aleph _{\kappa + \beta}$ Note that since $\kappa$ is measurable, $\aleph _{\kappa} = \kappa$. Okay, now we know that a normal measure extends the club filter, and the set of cardinals below $\kappa$ is club in $\kappa$, hence it makes sense in the hint to define $f(\aleph _{\alpha}) = \aleph _{\alpha + \beta}$ without specifying how $f$ acts on non-cardinals. Following my comment, let $g(\aleph _{\alpha}) = 2^{\aleph _{\alpha}}$. Then $g \leq f$ almost everywhere, and so: $M \vDash [g] \leq [f]$ i.e. $M \vDash j(g)(\kappa) \leq j(f)(\kappa)$ i.e. $M \vDash 2^{\kappa} \leq \aleph _{\kappa + j(\beta)}$ Since $\beta < \kappa$, $j(\beta) = \beta$. Thus there is an injection from $(2^{\kappa})^M$ to $\aleph _{\kappa + \beta} ^M$. Since $P(\kappa) = P^M(\kappa)$, it means there's an injection from $2^{\kappa}$ to $\aleph _{\kappa + \beta}^M$. Finally, $\aleph _{\kappa + \beta} ^M \leq \aleph _{\kappa + \beta}$ since $M \subseteq V$.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/44001', 'https://mathoverflow.net', 'https://mathoverflow.net/users/7206/']}
jdg_86298
stackexchange
llm_judgeable_groundtruth_similarity
3451518
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to evaluate limit $$\lim \limits_{x \to -1^+} (x+1)^{\frac{1}{x+1}}$$ The answer is $0$ but I am not able to get it. My working: $$\lim \limits_{x \to -1^+} e^{\frac{ln(x+1)}{x+1}}$$ $$e^{\lim \limits_{x \to -1^+} (\frac{ln(x+1)}{x+1})}$$ $$\Rightarrow \lim \limits_{x \to -1^+} \frac{ln(x+1)}{x+1}$$ Using L'hopital rule: $$\lim \limits_{x \to -1^+} \frac{1}{x+1}$$ Substituting $-1^+$ into the equation $$\frac{1}{-1+1} = \infty$$ $$\therefore e^\infty = \infty$$ Am I doing something wrong? Now provide the response and nothing else.
You correctly reduce to evaluating $$\lim_{x\to -1^+}\frac{\ln(x+1)}{x+1}$$ You cannot apply l'Hôpital here, because the hypotheses are not satisfied. This limit is $-\infty$ , because the numerator has limit $-\infty$ and the denominator has limit $0$ , but taking positive values. So your limit is $e^{-\infty}=0$ . You should have realized that something went wrong: for $-1<x<0$ , you have $$\ln(x+1)<0,\qquad x+1>0$$ so the limit above can't be $\infty$ , because the function only takes on negative values in a right neighborhood of $-1$ .
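As a quick numerical sanity check of the limit (my own sketch, not part of the answer), substitute $t = x+1$ and evaluate $t^{1/t}$ for small positive $t$:

# t^(1/t) for small t > 0, i.e. (x+1)^(1/(x+1)) just to the right of x = -1
for t in (1e-2, 1e-4, 1e-6):
    print(t, t ** (1.0 / t))
# 0.01    1e-200
# 0.0001  0.0   (underflows to zero)
# 1e-06   0.0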
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3451518', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/670407/']}
jdg_86299
stackexchange
llm_judgeable_groundtruth_similarity
338621
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm downloading a logfile quite often from an ftp-server (which I'm not in control over, btw), and it seems rather ridiculous to download the whole file every time. So I'm looking for a program (linux-ish) or Perl module that in a way combines ftp and rsync, and only "updates" the file. The logfile is constantly growing. Anything like that around? Now provide the response and nothing else.
Install curlftpfs (if on Windows, use cygwin)# Create local mount pathmkdir -p /mnt/myftp# Mount the destination ftp site using curlftpfscurlftpfs -o allow_other ftp://myusername:[email protected] /mnt/myftp# rsync inplace using append option# use a long timeout value as the first long phase# (the inplace comparison) takes a whilersync -rzvvv --inplace --append --progress --stats --timeout=7200 /mnt/myftp/path/to/source/file.log /path/to/local/destination/file.log# When you need to umount the ftp sitesudo umount myftp# You can also mount from /etc/fstab by appending the following line# curlftpfs#myusername:[email protected] /mnt/myftp fuse allow_other,rw,user,noauto 0 0# References:# http://linux.byexamples.com/archives/344/mounting-ftp-host-to-local-directory-on-top-of-fuse/# http://lists.samba.org/archive/rsync/2007-May/017762.html
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/338621', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/33232/']}
jdg_86300
stackexchange
llm_judgeable_groundtruth_similarity
18674475
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a page that includes a php file I have written like so HTML <div id="playlists_div_holder"> <?php include(dirname(__FILE__) . '/includes/get_playlists.php'); ?></div> I have created a button that will refresh the playlists , incase the user has uploaded (added/removed) anything, and I am trying to do so with an ajax request to get the file and include it back into that div, but I am receiving a 500 error. I am doing this in WordPress so that may be an issue. The Button <span class="button-primary refresh-playlists" onclick="ReloadPlaylists()">Refresh Playlists</span> Ajax Request function ReloadPlaylists() { jQuery.get('<?php echo plugins_url();?>/Player/includes/get_playlists.php', function(data) { jQuery('#playlists_div_holder').html(data); alert('Load was performed.'); });} Including it works but an ajax get throws a 500 error Now provide the response and nothing else.
In version 6.0 the bin directory is missing the scripts which run javacc. That is why you are getting the error from the Windows command prompt. What you have is a jar file javacc.jar located in the lib directory. All you need to do is add that jar file to your classpath, run java.exe, and pass it the main class which runs javacc; the latter happens to be named javacc too, so to run javacc just proceed like this: cmd> java -cp C:\javacc-6.0\bin\lib\javacc.jar javacc In the latest version they seem to have forgotten to add the scripts to the bin folder of the package. You can download version 5.0; it contains all the script files you need, among others a file with the name javacc.bat, which is the one the Windows command prompt is looking for and not finding in your case. Of course, you can just copy those scripts from the 5.0 version to the bin directory of the 6.0 version; they will also work. And since you have already set the path to contain C:\javacc-6.0\bin, you can then run it like you tried before, without closing the command prompt window or even restarting your whole computer! Edit - new links The links above are unfortunately no longer valid; luckily the content has been moved to GitHub. Here are the new links: Project url: https://javacc.org/ Project url on github: https://javacc.github.io/javacc/ Earlier versions: https://github.com/javacc/javacc/branches/
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18674475', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2172218/']}
jdg_86301
stackexchange
llm_judgeable_groundtruth_similarity
56548
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've heard that differential forms are related to densities, however I'm still a little confused about that. I thought on the case of charge density and I came to that: let $U\subset\mathbb{R}^3$ be a region of $3$-space, and let $\rho : U \to \mathbb{R}$ give the charge density at every point of $U$. I can then create the $3$-form $\omega = \rho \ dx \wedge dy \wedge dz$, which in my understanding gives me the approximate amount of charge enclosed by a volume determined by $3$ vectors when they're given. So, if I give the vectors $v, u, w$, the value of $dx \wedge dy \wedge dz(v,u,w)$ should be the volume enclosed by those vectors, and hence $\omega(u, v, w)$ should be an approximation of the charge enclosed. Is this correct? My only problem is: in this point of view, the form isn't giving me the density, the density itself is being given by a scalar field, while the form gives me the charge instead of the density. Is this correct? The form is always meant to give the charge instead of the density? The density should always be regarded as a scalar field? Now provide the response and nothing else.
By 'density' in this case I think you just mean "something on a manifold that can be integrated to give you a scalar". By this definition, on an $n$-manifold, a density would be an $n$-form (since if you integrate over a form of lower dimension you get zero). So in your 3d case, take 3 smooth functions $f,g,h:M^3\to \mathbb{R}$; the form $df\wedge dg\wedge dh$ is a density. Now, in your example you are integrating a scalar field multiplied by a 3-form, which is again a 3-form, and this can be integrated over a 3-manifold to give you the charge in the region. But the scalar field $\rho:M^3 \to\mathbb{R}$ is NOT a density (not a 3-form), so it cannot be integrated over to find the total charge. The charge density is $\omega=\rho dx\wedge dy\wedge dz,$ and $\rho$ just tells us how 'big' this should be. In other words, the mathematical term 'density' can be stated as '$n$-form on an $n$-manifold', whereas the colloquial 'density' for 'something per unit length/area/volume' is shorthand for what we really mean (the $n$-form).
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/56548', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/21146/']}
jdg_86302
stackexchange
llm_judgeable_groundtruth_similarity
3018616
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need to show that if $\mathbf{S}$ is symmetric, then it's trace sums to the sum of the eigenvalues. But I don't know how to show this. Can anybody give me a hint? P.S. Shame on my google skills, buy I really can't find any pages on this specific issue. Not with the assumption that $\mathbf{S}$ is symmetric, and no proofs. Now provide the response and nothing else.
If $S$ is a symmetric matrix then $S$ has a spectral decomposition $S=PDP'$ where $D$ is the diagonal matrix consisting of the eigenvalues of $S$ and $P$ is orthogonal. Then $tr(S)=tr(PDP')=tr(DP'P)=tr(D)=\sum \text{eigenvalues of } S.$
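A quick numerical illustration of this identity, as a NumPy sketch of my own (not part of the original answer):

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
S = (A + A.T) / 2                      # build a random symmetric matrix

trace = np.trace(S)
eig_sum = np.linalg.eigvalsh(S).sum()  # eigvalsh handles symmetric/Hermitian matrices
print(np.isclose(trace, eig_sum))      # True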
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3018616', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/385675/']}
jdg_86303
stackexchange
llm_judgeable_groundtruth_similarity
15623698
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a directive with an isolate-scope (so that I can reuse the directive in other places), and when I use this directive with an ng-repeat , it fails to work. I have read all the documentation and Stack Overflow answers on this topic and understand the issues. I believe I have avoided all the usual gotchas. So I understand that my code fails because of the scope created by the ng-repeat directive. My own directive creates an isolate-scope and does a two-way data-binding to an object in the parent scope. My directive will assign a new object-value to this bound variable and this works perfectly when my directive is used without ng-repeat (the parent variable is updated correctly). However, with ng-repeat , the assignment creates a new variable in the ng-repeat scope and the parent variable does not see the change. All this is as expected based on what I have read. I have also read that when there are multiple directives on a given element, only one scope is created. And that a priority can be set in each directive to define the order in which the directives are applied; the directives are sorted by priority and then their compile functions are called (search for the word priority at http://docs.angularjs.org/guide/directive ). So I was hoping I could use priority to make sure that my directive runs first and ends up creating an isolate-scope, and when ng-repeat runs, it re-uses the isolate-scope instead of creating a scope that prototypically inherits from the parent scope. The ng-repeat documentation states that that directive runs at priority level 1000 . It is not clear whether 1 is a higher priority level or a lower priority level. When I used priority level 1 in my directive, it did not make a difference, so I tried 2000 . But that makes things worse: my two-way bindings become undefined and my directive does not display anything. I have created a fiddle to show my issue . I have commented out the priority setting in my directive. I have a list of name objects and a directive called name-row that shows the first and last name fields in the name object. When a displayed name is clicked, I want it to set a selected variable in the main scope. The array of names, the selected variable are passed to the name-row directive using two-way data-binding. I know how to get this to work by calling functions in the main scope. I also know that if selected is inside another object, and I bind to the outer object, things would work. But I am not interested in those solutions at the moment. Instead, the questions I have are: How do I prevent ng-repeat from creating a scope that prototypically inherits from the parent scope, and instead have it use my directive's isolate-scope? Why is priority level 2000 in my directive not working? Using Batarang, is it possible to know what type of scope is in use? Now provide the response and nothing else.
Okay, through a lot of the comments above, I have discovered the confusion. First, a couple of points of clarification: ngRepeat does not affect your chosen isolate scope the parameters passed into ngRepeat for use on your directive's attributes do use a prototypically-inherited scope the reason your directive doesn't work has nothing to do with the isolate scope Here's an example of the same code but with the directive removed: <li ng-repeat="name in names" ng-class="{ active: $index == selected }" ng-click="selected = $index"> {{$index}}: {{name.first}} {{name.last}}</li> Here is a JSFiddle demonstrating that it won't work. You get the exact same results as in your directive. Why doesn't it work? Because scopes in AngularJS use prototypical inheritance. The value selected on your parent scope is a primitive . In JavaScript, this means that it will be overwritten when a child sets the same value. There is a golden rule in AngularJS scopes: model values should always have a . in them. That is, they should never be primitives. See this SO answer for more information. Here is a picture of what the scopes initially look like. After clicking the first item, the scopes now look like this: Notice that a new selected property was created on the ngRepeat scope. The controller scope 003 was not altered. You can probably guess what happens when we click on the second item: So your issue is actually not caused by ngRepeat at all - it's caused by breaking a golden rule in AngularJS. The way to fix it is to simply use an object property: $scope.state = { selected: undefined }; <li ng-repeat="name in names" ng-class="{ active: $index == state.selected }" ng-click="state.selected = $index"> {{$index}}: {{name.first}} {{name.last}}</li> Here is a second JSFiddle showing this works too. Here is what the scopes look like initially: After clicking the first item: Here, the controller scope is being affected, as desired. Also, to prove that this will still work with your directive with an isolate scope (because, again, this has nothing to do with your problem), here is a JSFiddle for that too, the view must reflect the object. You'll note that the only necessary change was to use an object instead of a primitive . Scopes initially: Scopes after clicking on the first item: To conclude: once again, your issue isn't with the isolate scope and it isn't with how ngRepeat works. Your problem is that you're breaking a rule that is known to lead to this very problem. Models in AngularJS should always have a . .
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/15623698', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1725795/']}
jdg_86304
stackexchange
llm_judgeable_groundtruth_similarity
197993
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: From the Wikipedia article on Primitive recursive arithmetic : "Primitive recursive arithmetic, or PRA, is a quantifier-free formalization of the natural numbers. It was first proposed by Skolem[1] as a formalization of his finitist conception of the foundations of arithmetic, and it is widely agreed that all reasoning of PRA is finitist. " Later: "It is possible to formalise PRA in such a way that it has no logical connectives at all - a sentence of PRA is just an equation between two terms." This suggests it may be possible to present primitive recursive arithmetic as a purely algebraic system, in the style of universal algebra. Is this possible? References? Now provide the response and nothing else.
According to unpublished notes by Gavin Wraith ("Notes on arithmetic universes and Gödel incompleteness theorems" (1985)), PRA can be described as an equational theory or as a Lawvere theory, and is abstractly characterized as initial among all Lawvere theories whose generating object is a parametrized natural numbers object. (For some details on Wraith's notes, see Alan Morrison's Master's Thesis ; see particularly chapter 5.) This statement may take a bit of unpacking. Lawvere theories are a way of doing universal algebra categorically. Formally, a Lawvere theory is a category $T$ with finite products together with a product-preserving functor $\Phi: \text{Fin}^{op} \to T$ that is a bijection on objects. Here $\text{Fin}$ is the category whose objects are finite cardinals $k = \{0, \ldots, k-1\}$ and whose morphisms are functions between them. The cardinal $1$ generates (the objects of) $\text{Fin}$ by taking finite coproducts of copies of $1$ ; similarly it generates $\text{Fin}^{op}$ under finite products. Thus the special object $x = \Phi(1)$ generates the objects of $T$ by taking finite products: every object of $T$ is of the form $x^n$ . General morphisms $x^n \to x^m$ are $m$ -tuples of morphisms $x^n \to x$ ; we think of the morphisms $x^n \to x$ in $T$ as parametrizing the definable $n$ -ary operations of an equational theory. [This categorical description of equational theories is closely related to the concept of clone . If you accept that every equational theory gives rise to a finitary monad on $Set$ , then the corresponding Lawvere theory is the category opposite to the finitary Kleisli category consisting of finitely generated free objects. But this is a somewhat hurried discussion which I'll cut short here.] A morphism $F: S \to T$ is a product-preserving functor which is compatible with the given product-preserving functors $\text{Fin}^{op} \to S$ and $\text{Fin}^{op} \to T$ . A parametrized natural numbers object in a category with finite products $\mathbf{C}$ is an object $N$ that comes equipped with maps $z: 1 \to N$ (here $1$ denotes the terminal object, and read ' $z$ ' as 'zero') and $s: N \to N$ (successor), such that given any objects $A, X$ of $\mathbf{C}$ and maps $f: A \to X$ , $g: X \to X$ , there exists a unique map $h: N \times A \to X$ such that the following diagram commutes: $$\begin{array}{ccc}A & \stackrel{\langle z \circ !, 1_A\rangle}{\to} & N \times A & \stackrel{s \times 1_A}{\leftarrow} & N \times A \\ & f \searrow & \downarrow h & & \downarrow h \\ & & X & \underset{g}{\leftarrow} & X\end{array}$$ (here $!$ denotes the unique map $A \to 1$ ). This axiom is what you need to internalize primitive recursion in a category with finite products. So now the Lawvere theories we are interested in are those for which the generator $x = \Phi(1)$ is a parametrized natural numbers object. A concrete example of such is the full subcategory of $Set$ whose objects are finite powers $\mathbb{N}^n$ of the set of natural numbers. For that matter, for any category with finite products and a natural numbers object $N$ (for example, a Grothendieck topos), you can cook up a Lawvere theory by considering the full subcategory consisting of finite powers of $N$ . Or, the subcategory needn't be full: just retain enough arrows to retain finite product structure and primitive recursive structure guaranteed by the axiom of parametrized NNO's. Finally, we are interested in the initial such Lawvere theory. 
As is the case with any initial algebraic object, the explicit construction is syntactic: we start with $N$ and $z: 1 \to N$ and $s: N \to N$ and use the axiom of parametrized NNO's together with finite cartesian product structure (products of copies of $N$ , projection maps, diagonal maps) and categorical composition to generate formally all the arrows. (As a simple exercise, show how to construct formal addition and formal multiplication on $N$ .) One thing to check is that morphisms $1 \to N$ (the definable constants of the equational theory, considered up to provable equality) correspond bijectively to standard natural numbers. If $T$ is the initial such theory (which according to Wraith is PRA), with generator denoted $N$ , and if $f: T \to Set$ is the unique Lawvere theory morphism sending $N$ to $\mathbb{N}$ , then the functorial map $\hom_T(N^n, N) \to \hom(f(N^n), f(N)) \cong \hom(\mathbb{N}^n, \mathbb{N})$ is a surjection onto the total $n$ -ary primitive recursive functions on the standard natural numbers. Is is an injection? No. For example, consider the primitive recursive "function" $G: N \to N$ defined by $G(n) = 1$ if $n$ codes the proof of a contradiction in ZFC, and $G(n) = 0$ otherwise. (This is primitive recursive because no unbounded searches are required to verify the validity of a proof.) In the standard model of arithmetic living in the ZFC model $Set$ , $G$ would be sent to the constant $0$ function mapping $\mathbb{N} \to \mathbb{N}$ . But we could equally consider a model $\mathcal{M}$ of $ZFC + \neg Con(ZFC)$ and its natural number object $\mathbb{N}_\mathcal{M}$ , in which $G$ would not be sent to the constant $0$ function. Thus $G$ and the constant $0$ function must be distinct in the category $T$ . (Thanks to Zhen Lin Low for supplying this argument, here .)
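To make the "simple exercise" above concrete, here is one way formal addition falls out of the parametrized NNO axiom as stated; this is only a sketch in the notation already introduced, and the name "add" is mine rather than anything fixed by the theory. Take
$$A = X = N, \qquad f = 1_N : N \to N, \qquad g = s : N \to N.$$
The axiom then supplies a unique map
$$\mathrm{add} := h : N \times N \to N, \qquad \mathrm{add} \circ \langle z \circ !,\, 1_N \rangle = 1_N, \qquad \mathrm{add} \circ (s \times 1_N) = s \circ \mathrm{add},$$
i.e. $\mathrm{add}(0,a) = a$ and $\mathrm{add}(s(n),a) = s(\mathrm{add}(n,a))$. Multiplication is similar, except its step map needs access to the parameter as well; one standard trick is to recurse with values in $N \times N$ (carrying the parameter along) and project afterwards.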
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/197993', 'https://mathoverflow.net', 'https://mathoverflow.net/users/10110/']}
jdg_86305
stackexchange
llm_judgeable_groundtruth_similarity
6930839
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Google Map that won't show up. The Problem seems to be the same in FF & Chrome, but even more "bad" in IE (always latest version). In FF & Chrome I have a problem with the position: relative; css element style. As soon as I switch to (with dev tools) position: absolute(or: fixed); everything displays fine in FF. In Chrome the map only shows the upper 30% (from top). In IE the map doesn't even get loaded. Here's the script stuff from the <head> . Content is only for testing and means nothing. Note: I only use this to get the map loaded. This will be exchanged later. <!-- Script inside <head> tag --> <script type="text/javascript" src="http://maps.google.com/maps/api/js? file=api&amp; v=2&amp; key=<?php echo self::GOOGLE_API_KEY; ?>&amp; sensor=false"> </script> <script type="text/javascript"> function initialize() { var startpos = new google.maps.LatLng( 50.978056,11.029167 ); var ops = { zoom: 6 ,center: startpos ,mapTypeId: google.maps.MapTypeId.ROADMAP ,tileSize: new google.maps.Size( 256, 256 ) } var map = new google.maps.Map( document.getElementById("map_canvas"), ops ); var pos1 = new google.maps.LatLng( 50.7510776,12.4820724 ); var contentString1 = '<div align="left" dir="ltr" class="infowin"><h3>test</h3>testen</div>'; var infowindow1 = new google.maps.InfoWindow( { content: contentString1 ,maxWidth: 5 } ); var marker1 = new google.maps.Marker( { position: pos1 ,map: map ,title: 'test' } ); google.maps.event.addListener( marker1 ,'click' ,function() { infowindow1.open( map, marker1 ); } ); } </script> This is the whole mark up for the page. <!-- html markup - There *really* isn't anything else --><body onload="initialize()"><div id="map_canvas" style="width:100%; height:100%;"></div></body> I have spend a lot of time on google without finding anything. Any ideas? Thanks! Now provide the response and nothing else.
html, body should be height:100%; . But keep in mind, if your map holder element is a child of another element then that element should also have height:100%; Otherwise, setting just the html and body won't do you any good. An Example to explain my point:

<html>
<head>
  <style>
    html, body { height:100%; }
  </style>
</head>
<body>
  <div id="wrapper">
    <div id="google-map-holder" style="width:100%; height:100%;"></div>
  </div>
</body>
</html>

So, if you're doing something like the above. The height:100%; won't work here. To make this work, you have to do the same thing with all of the parents that the #google-map-holder is a child of, in this case we would add height:100%; to #wrapper element. IF, the #google-map-holder element was directly outside the #wrapper element and a child of the body directly then just doing html, body would be enough.
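For completeness, here is a rough sketch of the same markup with the fix applied; the only addition is the #wrapper rule, and the id names are just the ones used in the example above.

<html>
<head>
  <style>
    html, body { height:100%; }
    #wrapper   { height:100%; }  /* every ancestor of the map holder needs a height */
  </style>
</head>
<body>
  <div id="wrapper">
    <div id="google-map-holder" style="width:100%; height:100%;"></div>
  </div>
</body>
</html>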
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6930839', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/376483/']}
jdg_86306
stackexchange
llm_judgeable_groundtruth_similarity
17665977
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to filter the following list to return all items beginning with the first item that starts with "Group", up until, but not including the next item that starts with "Group" (or the up until the last item). List<string> text = new List<string>();text.Add("Group Hear It:");text.Add(" item: The Smiths");text.Add(" item: Fernando Sor");text.Add("Group See It:");text.Add(" item: Longmire");text.Add(" item: Ricky Gervais Show");text.Add(" item: In Bruges"); After filtering, I want to have the following items in the first grouping: "Group Hear It:"" item: The Smiths"" item: Fernando Sor" And the following items in the second grouping: "Group See It:"" item: Longmire"" item: Ricky Gervais Show"" item: In Bruges" This doesn't work because I'm filtering the list in the first where to exclude the "item:" items... Am I close with TakeWhile , or way off? var group = text.Where(t => t.StartsWith("Group "))) .TakeWhile(t => t.ToString().Trim().StartsWith("item")); Now provide the response and nothing else.
Like Jeff Mercado's but doesn't pre-process entire enumerable: public static class Extensions{ public static IEnumerable<IList<T>> ChunkOn<T>(this IEnumerable<T> source, Func<T, bool> startChunk) { List<T> list = new List<T>(); foreach (var item in source) { if(startChunk(item) && list.Count > 0) { yield return list; list = new List<T>(); } list.Add(item); } if(list.Count > 0) { yield return list; } }} Use like: List<string> text = new List<string>();text.Add("Group Hear It:");text.Add(" item: The Smiths");text.Add(" item: Fernando Sor");text.Add("Group See It:");text.Add(" item: Longmire");text.Add(" item: Ricky Gervais Show");text.Add(" item: In Bruges");var chunks = text.ChunkOn(t => t.StartsWith("Group"));
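If it helps, here is a rough sketch of consuming the result, assuming the extension class above is in scope along with using System; and using System.Linq; — the two printed lines are only an approximation of what you should see.

foreach (var chunk in text.ChunkOn(t => t.StartsWith("Group")))
{
    // each chunk starts with a "Group ..." line followed by its items
    Console.WriteLine(string.Join(" | ", chunk));
}

// Expected, roughly:
// Group Hear It: |  item: The Smiths |  item: Fernando Sor
// Group See It: |  item: Longmire |  item: Ricky Gervais Show |  item: In Bruges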
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17665977', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/70870/']}
jdg_86307
stackexchange
llm_judgeable_groundtruth_similarity
579783
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to detect IP address changes on the local machine in Linux programmatically using C++? Now provide the response and nothing else.
here you go.. this does it without polling. it only listens for RTM_NEWADDR but it should be easy to change to support RTM_DELADDR if you need #include <stdio.h>#include <string.h>#include <netinet/in.h>#include <linux/netlink.h>#include <linux/rtnetlink.h>#include <net/if.h>intmain(){ struct sockaddr_nl addr; int sock, len; char buffer[4096]; struct nlmsghdr *nlh; if ((sock = socket(PF_NETLINK, SOCK_RAW, NETLINK_ROUTE)) == -1) { perror("couldn't open NETLINK_ROUTE socket"); return 1; } memset(&addr, 0, sizeof(addr)); addr.nl_family = AF_NETLINK; addr.nl_groups = RTMGRP_IPV4_IFADDR; if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) == -1) { perror("couldn't bind"); return 1; } nlh = (struct nlmsghdr *)buffer; while ((len = recv(sock, nlh, 4096, 0)) > 0) { while ((NLMSG_OK(nlh, len)) && (nlh->nlmsg_type != NLMSG_DONE)) { if (nlh->nlmsg_type == RTM_NEWADDR) { struct ifaddrmsg *ifa = (struct ifaddrmsg *) NLMSG_DATA(nlh); struct rtattr *rth = IFA_RTA(ifa); int rtl = IFA_PAYLOAD(nlh); while (rtl && RTA_OK(rth, rtl)) { if (rth->rta_type == IFA_LOCAL) { char name[IFNAMSIZ]; if_indextoname(ifa->ifa_index, name); char ip[INET_ADDRSTRLEN]; inet_ntop(AF_INET, RTA_DATA(rth), ip, sizeof(ip)); printf("interface %s ip: %s\n", name, ip); } rth = RTA_NEXT(rth, rtl); } } nlh = NLMSG_NEXT(nlh, len); } } return 0;}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/579783', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/70121/']}
jdg_86308
stackexchange
llm_judgeable_groundtruth_similarity
37133210
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've spent quite some time but I can't get past this (configuration) issue. Technology stack : Java (1.8), Springboot (starter-parent, starter-web), Maven, IntelliJ IDEA Description: Trying to create a multi module Java application composed (at first) of 2 modules : core module : main module (main business logic, every other module should see and interact through this one). This module contains the main application class. webgateway module : Simple Rest Controller that will map a request and make calls to the core module Issue: Springboot doesn't load/scan the RestController from the webgateway module => 404 error when sending the http request Github repo : https://github.com/Sorin-J/Greeter Project configuration : Greeter | + pom.xml (parent pom) | + -- core | | | + ... | | | + pom.xml | + -- webgateway | + ... | + pom.xml (depends on core pom.xml) Parent pom.xml : <?xml version="1.0" encoding="UTF-8"?><project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.bet.jbs</groupId> <artifactId>Greeter</artifactId> <packaging>pom</packaging> <version>1.0-SNAPSHOT</version> <modules> <module>core</module> <module>webgateway</module> </modules> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.3.3.RELEASE</version> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> </dependencies></project> core module pom.xml : <?xml version="1.0" encoding="UTF-8"?><project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <parent> <artifactId>Greeter</artifactId> <groupId>com.bet.jbs</groupId> <version>1.0-SNAPSHOT</version> </parent> <modelVersion>4.0.0</modelVersion> <artifactId>core</artifactId></project> webgateway module pom.xml : <?xml version="1.0" encoding="UTF-8"?><project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <parent> <artifactId>Greeter</artifactId> <groupId>com.bet.jbs</groupId> <version>1.0-SNAPSHOT</version> </parent> <modelVersion>4.0.0</modelVersion> <artifactId>webgateway</artifactId> <dependencies> <dependency> <groupId>com.bet.jbs</groupId> <artifactId>core</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies></project> MainApplication class from core module : package com.bet.jbs.core;import org.springframework.boot.SpringApplication;import org.springframework.boot.autoconfigure.EnableAutoConfiguration;import org.springframework.context.annotation.ComponentScan;import org.springframework.context.annotation.Configuration;@Configuration@ComponentScan(basePackages = {"com.bet.jbs.core", "com.bet.jbs.webgateway"})@EnableAutoConfigurationpublic class MainApplication { public static void main(String[] args) throws Exception { SpringApplication.run(MainApplication.class, args); }} GreetingController class from webgateway module : package com.bet.jbs.webgateway.controller;import 
com.bet.jbs.core.util.GreetingGenerator;import org.springframework.web.bind.annotation.RequestMapping;import org.springframework.web.bind.annotation.RequestMethod;import org.springframework.web.bind.annotation.RestController;@RestControllerpublic class GreetingController { @RequestMapping(value = "/webgreeting", method = RequestMethod.GET) public String getGreeting() { return "WEBGATEWAY module says " + GreetingGenerator.getRandomGreeting(); }} Just to test that an identical REST controller would work fine if it is located in the core module, I've created a similar GreetingController class also in core module (and this one works fine) : package com.bet.jbs.core.controller;import com.bet.jbs.core.util.GreetingGenerator;import org.springframework.web.bind.annotation.RequestMapping;import org.springframework.web.bind.annotation.RequestMethod;import org.springframework.web.bind.annotation.RestController;/* * This REST controller should not be in the CORE component. * It is just for proving that this controller is recognized and the other one from WEBGATEWAY component is not. * */@RestControllerpublic class GreetingController { @RequestMapping(value = "/coregreeting", method = RequestMethod.GET) public String getGreeting() { return "CORE module says " + GreetingGenerator.getRandomGreeting(); }} Now provide the response and nothing else.
The Spring Boot main application is in the core module, which does not have a dependency on the webgateway module. Therefore the class with the controller will not be present on the classpath at runtime and cannot be discovered by Spring's component scan. Fix: add a dependency on the webgateway module to the core module, or move the launcher/main class to the webgateway module. You can also use a third module that does the launching and has dependencies on both core and webgateway.
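As a hedged illustration of the third-module option: the parent could gain a small launcher module that owns MainApplication and depends on the two existing modules. The module name app below is made up, and the POM is trimmed to the essentials.

<!-- app/pom.xml (hypothetical launcher module) -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>com.bet.jbs</groupId>
        <artifactId>Greeter</artifactId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <artifactId>app</artifactId>
    <dependencies>
        <dependency>
            <groupId>com.bet.jbs</groupId>
            <artifactId>core</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>com.bet.jbs</groupId>
            <artifactId>webgateway</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
    </dependencies>
</project>

MainApplication (with its @ComponentScan over both packages) would then move into this module, and <module>app</module> would be added to the parent POM's module list.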
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37133210', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4733319/']}
jdg_86309
stackexchange
llm_judgeable_groundtruth_similarity
57842232
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to dplyr::summarise a dataset (collapse) by different summarise_at / summarise_if functions so that I have the same named variables in my output dataset. Example: library(tidyverse)data(iris)iris$year <- rep(c(2000,3000),each=25) ## for groupingiris$color <- rep(c("red","green","blue"),each=50) ## character columniris$letter <- as.factor(rep(c("A","B","C"),each=50)) ## factor columnhead(iris, 3) Sepal.Length Sepal.Width Petal.Length Petal.Width Species year color letter1 5.1 3.5 1.4 0.2 setosa 2000 red A2 4.9 3.0 1.4 0.2 setosa 2000 red A3 4.7 3.2 1.3 0.2 setosa 2000 red A The resulting dataset should look like this: full Species year Sepal.Width Petal.Width Sepal.Length Petal.Length letter color <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <fct> <chr>1 setosa 2000 87 6.2 5.8 1.9 A red 2 setosa 3000 84.4 6.1 5.5 1.9 A red 3 versicolor 2000 69.4 33.6 7 4.9 B green4 versicolor 3000 69.1 32.7 6.8 5.1 B green5 virginica 2000 73.2 51.1 7.7 6.9 C blue 6 virginica 3000 75.5 50.2 7.9 6.4 C blue I can achieve this by doing the following which is a bit repetitive: sums <- iris %>% group_by(Species, year) %>% summarise_at(vars(matches("Width")), list(sum))max <- iris %>% group_by(Species, year) %>% summarise_at(vars(matches("Length")), list(max))last <- iris %>% group_by(Species, year) %>% summarise_if(is.factor, list(last))first <- iris %>% group_by(Species, year) %>% summarise_if(is.character, list(first))full <- full_join(sums, max) %>% full_join(last) %>% full_join(first) I have found similar approaches below but can't figure out the approach I've tried here. I would prefer not to make my own function as I think something like this is cleaner by passing everything through a pipe and joining: test <- iris %>% #group_by(.vars = vars(Species, year)) %>% #why doesnt this work? group_by_at(.vars = vars(Species, year)) %>% #doesnt work {left_join( summarise_at(., vars(matches("Width")), list(sum)), summarise_at(., vars(matches("Length")), list(max)), summarise_if(., is.factor, list(last)), summarise_if(., is.character, list(first)) ) } #doesnt work This doesnt work, any suggestions or other approaches? Helpful: How can I use summarise_at to apply different functions to different columns? Summarize different Columns with different Functions Using dplyr summarize with different operations for multiple columns Now provide the response and nothing else.
Using .indices() is not a workaround, it is a proper way of doing it. Alternatively, you can also use the code in the release notes for an indexed() array: struct ContentView: View { var persons = ["Boris", "Anna", "Tom"] var body: some View { VStack { List(persons.indexed(), id: \.1.self) { idx, person in Text("\(idx) - \(person)") } } }}// This is taken from the Release Notes, with a typo correction, marked belowstruct IndexedCollection<Base: RandomAccessCollection>: RandomAccessCollection { typealias Index = Base.Index typealias Element = (index: Index, element: Base.Element) let base: Base var startIndex: Index { base.startIndex } // corrected typo: base.endIndex, instead of base.startIndex var endIndex: Index { base.endIndex } func index(after i: Index) -> Index { base.index(after: i) } func index(before i: Index) -> Index { base.index(before: i) } func index(_ i: Index, offsetBy distance: Int) -> Index { base.index(i, offsetBy: distance) } subscript(position: Index) -> Element { (index: position, element: base[position]) }}extension RandomAccessCollection { func indexed() -> IndexedCollection<Self> { IndexedCollection(base: self) }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57842232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4083743/']}
jdg_86310
stackexchange
llm_judgeable_groundtruth_similarity
5714616
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I see some usage of an internal struct in a C++ function. There is a common interface IBase. Here is the draft code.

class IBase{ virtual Method()=0;}
vector<IBase*> baseList;

Then a function defines an internal class based on that IBase and pushes an object of the internal class into the baseList.

void func(){
    struct Object : public IBase {
        virtual Method() {
            // Method of Object in func
        }
    }
    IBase* base = new Object();
    baseList->push(base);
}

It seems like a strange usage, but a nice implementation of a message/event creation pattern. Other threads may use this baseList to handle the incoming event. What's the scope of the internal struct "struct Object"? It's very interesting. Are there any documents talking about this? Now provide the response and nothing else.
What's the scope of internal struct of "struct Object"? The scope of the local classes is the function in which they're defined.But that isn't interesting in itself. What makes local classes interesting is that if they implement some interface (like your code does), then you can create instances of it (using new ) and return them (for example, as std::vector<IBase*> ), thereby making the implementation accessible through the base class pointer even outside the function. Some other facts about local classes: They cannot define static member variables. They cannot access nonstatic "automatic" local variables of the enclosing function. But they can access the static variables. They can be used in template functions. If they are defined inside a template function, then they can use the template parameters of the enclosing function. Local classes are final, that means users outside the function cannot derive from local class to function. Without local classes, you'd have to add an unnamed namespace in separate translation unit. Local classes are used to create trampoline functions usually known as thunks . EDIT Some references from the Standard (2003) 9.8 Local class declarations [class.local] \1. A class can be defined within a function definition; such a class is called a local class. The name of a local class is local to its enclosing scope. The local class is in the scope of the enclosing scope, and has the same access to names outside the function as does the enclosing function. Declarations in a local class can use only type names, static variables, extern variables and functions, and enumerators from the enclosing scope. [Example:int x;void f(){ static int s ; int x; extern int g(); struct local { int g() { return x; } // error: x is auto int h() { return s; } // OK int k() { return ::x; } // OK int l() { return g(); } // OK };// ...}local* p = 0; // error: local not in scope—end example] \2. An enclosing function has no special access to members of the local class; it obeys the usual access rules (clause 11). Member functions of a local class shall be defined within their class definition, if they are defined at all. \3. If class X is a local class a nested class Y may be declared in class X and later defined in the definition of class X or be later defined in the same scope as the definition of class X. A class nested within a local class is a local class. \4. A local class shall not have static data members.
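To tie this back to the question's pattern, here is a small self-contained sketch (names invented, written against C++14 for brevity) of a local class implementing an interface and escaping the function through a base pointer:

#include <iostream>
#include <memory>
#include <vector>

struct IBase {
    virtual ~IBase() = default;
    virtual void Method() = 0;
};

std::vector<std::unique_ptr<IBase>> baseList;

void func() {
    // Local class: its *name* is only visible inside func(), but the object
    // created from it outlives the call and stays usable through IBase*.
    struct Object : IBase {
        void Method() override { std::cout << "handled by func's local class\n"; }
    };
    baseList.push_back(std::make_unique<Object>());
}

int main() {
    func();
    for (auto& p : baseList) p->Method();  // prints once
}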
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5714616', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/610846/']}
jdg_86311
stackexchange
llm_judgeable_groundtruth_similarity
8547827
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a master/slave replication on my MySQL DB. My slave DB was down for a few hours and is back up again (the master was up all the time). When issuing show slave status I can see that the slave is X seconds behind the master. The problem is that the slave doesn't seem to catch up with the master; the X seconds behind master doesn't seem to drop... Any ideas on how I can help the slave catch up? Now provide the response and nothing else.
Here is an idea. In order to know whether MySQL is fully processing the SQL from the relay logs, try the following: STOP SLAVE IO_THREAD; This will stop replication from downloading new entries from the master into its relay logs. The other thread, known as the SQL thread, will continue processing the SQL statements it already downloaded from the master. When you run SHOW SLAVE STATUS\G, keep your eye on Exec_Master_Log_Pos. Run SHOW SLAVE STATUS\G again. If Exec_Master_Log_Pos does not move after a minute, you can go ahead and run START SLAVE IO_THREAD;. This may reduce the number of Seconds_Behind_Master. Other than that, there is really nothing you can do except to:
- Trust replication
- Monitor Seconds_Behind_Master
- Monitor Exec_Master_Log_Pos
- Run SHOW PROCESSLIST; and take note of the SQL thread to see if it is processing long-running queries.
BTW, keep in mind that when you run SHOW PROCESSLIST; with replication running, there should be two DB connections whose user name is system user. One of those DB connections will have the current SQL statement being processed by replication. As long as a different SQL statement is visible each time you run SHOW PROCESSLIST;, you can trust MySQL is still replicating properly.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8547827', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/553823/']}
jdg_86312
stackexchange
llm_judgeable_groundtruth_similarity
817658
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm in a linear algebra class and am having a hard time wrapping my head around what subspaces of a vector space are useful for (among many other things!). My understanding of a vector space is that, simplistically, it defines a coordinate plane that you can plot points on and figure out some useful things about the relationship between vectors/points. I think what I'm curious about is more application of some of these ideas. Such as, is a subspace useful for a reason other than you don't have to look at the entire space something exists in (I guess one way I've been thinking about it is if you want to make a map of a city, you don't necessarily need to make a map of the state it's in) or am I even wrong about that much? Also, even though I feel like I should know this at this point, is if the subspace is linearly independent, is it still a subspace? If it is, what exactly does that describe and/or why is that still useful? If it's not, is it still useful for something? I think the most difficult part of this for me is I'm having a hard time being able to visualize what exactly we're talking about and I have a hard time thinking that abstractly. I know one or two examples of this might be too specific and doesn't generalize the concept enough, but I think if I have some example to relate back to when applying the idea to new things it might be helpful. Now provide the response and nothing else.
It can help to think of these concepts geometrically. In the context of our 3d world, subspaces might be thought of as lines or planes (through the origin). Why do we care about subspaces? Again, a geometric picture of linear transformations (which is what we use matrices to model) helps with these ideas. A linear transformation (matrix) might leave certain lines invariant: they simply map the line to the same line, within a scaling factor. Any vector on such a line is an eigenvector , and the scale factor by which the line is magnified or shrunk is the eigenvalue . A linear transformation (matrix) might, even when given any vector in 3d, only spit out vectors on a certain plane or line. The set of vectors spat out in this way is the image of the transformation, and you should see that the dimensionality of the image is clear from its geometric dimension: if the image is a plane, then the image has dimension 2, and so on. For inputs to the transformation (matrix), some lines or planes might be wholly annihilated by the transformation---the transformation (matrix) forces them to zero. These lines or planes form the kernel of the transformation (matrix). Now, something you might not be taught about are whole planes that are left invariant under a transformation, even though no individual line is kept invariant. These planes might be scaled by some factor, and so they can be thought as "eigenplanes". Rotation maps are an example of transformations that leave whole planes invariant without leaving any individual (real) line in that plane invariant.
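A tiny worked example of the geometric picture above (standard, but worth writing out): take the projection of $\mathbb{R}^3$ onto the $xy$-plane,
$$P=\begin{pmatrix}1&0&0\\0&1&0\\0&0&0\end{pmatrix},\qquad P\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}x\\y\\0\end{pmatrix}.$$
Its image is the $xy$-plane (a 2-dimensional subspace), its kernel is the $z$-axis (a 1-dimensional subspace), vectors in that plane are eigenvectors with eigenvalue $1$, and vectors on the $z$-axis are eigenvectors with eigenvalue $0$; note $2+1=3$, matching rank–nullity. For the "eigenplane" remark, a rotation about the $z$-axis (by an angle that is not a multiple of $\pi$) keeps the whole $xy$-plane invariant while moving every individual line inside it.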
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/817658', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/153190/']}
jdg_86313
stackexchange
llm_judgeable_groundtruth_similarity
504913
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a large set of files in a directory. The files contain arbitrary text. I want to search for the file name inside that particular file's text. To clarify, I have file1.py.txt (yes, two dots, .py.txt ) and file2.py.txt, both containing text. I want to search for the existence of the string @code prefix.file1.py inside file1.py.txt and for the string @code prefix.file2.py inside file2.py.txt. How can I customize grep such that it goes through every file in the directory and searches for the string in each file using that particular file name? EDIT: The output I am looking for is written to a separate file, result.txt, which contains: the filename (if a match is found) and the line text (where the match is found). Now provide the response and nothing else.
With GNU awk : gawk ' BEGINFILE{search = "@code prefix." substr(FILENAME, 3, length(FILENAME) - 6)} index($0, search)' ./*.py.txt Would report the matching lines. To print the file name and matching line, change index($0, search) to index($0, search) {print FILENAME": "$0} Or to print the file name only: index($0, search) {print FILENAME; nextfile} Replace FILENAME with substr(FILENAME, 3) to skip outputting the ./ prefix. The list of files is lexically sorted. The ones whose name starts with . are ignored (some shells have a dotglob option to add them back; with zsh , you can also use the (D) glob qualifier).
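Since the question also asks for the matches to land in result.txt, one way (an untested sketch, same GNU awk approach as above) is to print the filename with the matching line and redirect the whole run:

gawk '
  BEGINFILE { search = "@code prefix." substr(FILENAME, 3, length(FILENAME) - 6) }
  index($0, search) { print FILENAME ": " $0 }
' ./*.py.txt > result.txt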
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/504913', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/299440/']}
jdg_86314
stackexchange
llm_judgeable_groundtruth_similarity
43317970
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is the table:

class APIBackoffInterrupt(Base):
    """
    When a user calls a Google API method, and it returns 403 or 429,
    we should apply an exponential backoff
    """
    __tablename__ = 'backoff_interrupt'
    _id = Column(Unicode(36), primary_key=True)
    user_id = Column(Unicode(36))
    date_created_utc = Column(DateTime)
    expiry_date_utc = Column(DateTime, nullable=True)
    exponent = Column(Integer, default=-1)

I'd like to get all unique user_id values, sorted by date_created_utc (desc), where expiry_date_utc is < the current UTC time. How can I do this? Now provide the response and nothing else.
You need to remove the mode: 'no-cors' setting from your request. Setting no-cors mode is exactly the cause of the problem you’re having. A no-cors request makes the response type opaque . The log snippet in the question shows that. Opaque means your frontend JavaScript code can’t see the response body or headers. https://developer.mozilla.org/en-US/docs/Web/API/Request/mode explains: no-cors — JavaScript may not access any properties of the resulting Response So the effect of setting no-cors mode is essentially to tell browsers, “Don’t let frontend JavaScript code access the response body or headers under any circumstances.” People sometimes try setting no-cors mode when a response doesn’t include the Access-Control-Allow-Origin response header or else because the request is one that triggers a CORS preflight , and so your browser does an OPTIONS preflight. But using no-cors mode isn’t a solution to those problems. The solution is either to: configure the server to which you’re making the request such that it sends the Access-Control-Allow-Origin response header, and such that it handles OPTIONS requests or set up a CORS proxy using code from https://github.com/Rob--W/cors-anywhere/ or such; see the How to use a CORS proxy to get around “No Access-Control-Allow-Origin header” problems section of the answer at No 'Access-Control-Allow-Origin' header is present on the requested resource—when trying to get data from a REST API
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/43317970', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/118644/']}
jdg_86315
stackexchange
llm_judgeable_groundtruth_similarity
3199099
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a navigation controller. One of the views adds custom subviews in its viewDidAppear: . I notice that the first time I navigate to an instance of this view controller after launching the app, viewDidAppear: invokes twice. If I pop this view off the stack and navigate to it again, viewDidAppear: invokes only once per appearance. All subsequent appearances invoke viewDidAppear: once. The problem for me is that the first time I get to this view I end up with twice the number of subviews. I work around this problem by introducing a flag variable or some such, but I'd like to understand what is happening and how come I get two invocations in these circumstances. Now provide the response and nothing else.
You should never rely on -viewWillAppear: / -viewDidAppear: being called appropriately balanced with the disappear variants. While the system view controllers will do the best they can to always bracket the calls properly, I don't know if they ever guarantee it, and certainly when using custom view controllers you can find situations where these can be called multiple times. In short, your -viewWillAppear: / -viewDidAppear: methods should be idempotent, meaning if -viewDidAppear: is called twice in a row on your controller, it should behave properly. If you want to load custom views, you may want to do that in -viewDidLoad instead and then simply put the on-screen (if they aren't already) in -viewDidAppear: . You could also put a breakpoint in your -viewDidAppear: method to see why it's being called twice the first time it shows up.
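As a rough sketch of the "idempotent" advice — written in modern Swift/UIKit rather than the era of the question, with an invented guard-flag name:

import UIKit

final class MyViewController: UIViewController {
    private var customViewsAdded = false   // hypothetical one-shot guard

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        guard !customViewsAdded else { return }   // safe even if called twice
        customViewsAdded = true

        let banner = UILabel(frame: CGRect(x: 0, y: 0, width: 200, height: 40))
        banner.text = "Added exactly once"
        view.addSubview(banner)
    }
}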
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3199099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/271594/']}
jdg_86316
stackexchange
llm_judgeable_groundtruth_similarity
6857807
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way to make an Android application download and use a Java library at runtime? Here is an example: Imagine that the application needs to make some calculations depending on the input values. The application asks for these input values and then checks if the required Classes or Methods are available. If not, it connects to a server, downloads the needed library, and loads it at runtime to call the required methods using reflection techniques. The implementation could change depending on various criteria, such as the user who is downloading the library. Now provide the response and nothing else.
Sorry, I'm late and the question has already an accepted answer, but yes , you can download and execute external libraries. Here is the way I did: I was wondering whether this was feasible so I wrote the following class: package org.shlublu.android.sandbox;import android.util.Log;public class MyClass { public MyClass() { Log.d(MyClass.class.getName(), "MyClass: constructor called."); } public void doSomething() { Log.d(MyClass.class.getName(), "MyClass: doSomething() called."); }} And I packaged it in a DEX file that I saved on my device's SD card as /sdcard/shlublu.jar . Then I wrote the "stupid program" below, after having removed MyClass from my Eclipse project and cleaned it: public class Main extends Activity { @SuppressWarnings("unchecked") @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); try { final String libPath = Environment.getExternalStorageDirectory() + "/shlublu.jar"; final File tmpDir = getDir("dex", 0); final DexClassLoader classloader = new DexClassLoader(libPath, tmpDir.getAbsolutePath(), null, this.getClass().getClassLoader()); final Class<Object> classToLoad = (Class<Object>) classloader.loadClass("org.shlublu.android.sandbox.MyClass"); final Object myInstance = classToLoad.newInstance(); final Method doSomething = classToLoad.getMethod("doSomething"); doSomething.invoke(myInstance); } catch (Exception e) { e.printStackTrace(); } }} It basically loads the class MyClass that way: create a DexClassLoader use it to extract the class MyClass from "/sdcard/shlublu.jar" and store this class to the application's "dex" private directory (internal storage of the phone). Then, it creates an instance of MyClass and invokes doSomething() on the created instance. And it works... I see the traces defined in MyClass in my LogCat: I've tried on both an emulator 2.1 and on my physical HTC cellphone (which is running Android 2.2 and which is NOT rooted). This means you can create external DEX files for the application to download and execute them. Here it was made the hard way (ugly Object casts, Method.invoke() ugly calls...), but it must be possible to play with Interface s to make something cleaner. Wow. I'm the first surprised. I was expecting a SecurityException . Some facts to help investigating more: My DEX shlublu.jar was signed, but not my app My app was executed from Eclipse / USB connection. So this is an unsigned APK compiled in DEBUG mode
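Regarding the "play with Interfaces to make something cleaner" remark, here is a hedged sketch of how that usually looks; the interface name is invented, checked-exception handling is omitted, and the interface must be compiled into the host app (so it is resolved by the parent class loader) as well as implemented by the class shipped in the downloaded DEX.

// Shared contract, present in the app and implemented by org.shlublu.android.sandbox.MyClass in the DEX:
public interface MyContract {
    void doSomething();
}

// In the app, instead of reflection on Method:
DexClassLoader loader = new DexClassLoader(
        libPath, tmpDir.getAbsolutePath(), null, getClass().getClassLoader());
Class<?> loaded = loader.loadClass("org.shlublu.android.sandbox.MyClass");
MyContract instance = (MyContract) loaded.newInstance();  // cast works because MyContract comes from the parent loader
instance.doSomething();  // ordinary virtual call, no Method.invoke()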
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6857807', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9350/']}
jdg_86317
stackexchange
llm_judgeable_groundtruth_similarity
931257
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to learn about reverse engineering, using Minesweeper as a sample application. I've found this MSDN article on a simple WinDbg command that reveals all the mines but it is old, is not explained in any detail and really isn't what I'm looking for. I have IDA Pro disassembler and the WinDbg debugger and I've loaded winmine.exe into both of them. Can someone provide some practical tips for either of these programs in terms of finding the location of the data structure that represents the mine field? In WinDbg I can set breakpoints, but it is difficult for me to imagine at what point to set a breakpoint and at what memory location. Similarly, when I view the static code in IDA Pro, I'm not sure where to even begin to find the function or data structure that represents the mine field. Are there any Reverse Engineers on Stackoverflow that can point me in the right direction? Now provide the response and nothing else.
Part 1 of 3 If you are serious into reverse engineering - forget about trainers and cheat engines. Good reverse engineer should first get to know OS, core API functions, program general structure (what is run loop, windows structures, event handling routines), file format (PE). Petzold's classics "Programming Windows" can help (www.amazon.com/exec/obidos/ISBN=157231995X) as well as online MSDN. First you should think about where minefield initialization routine can be called. I thought of following: When you launch the game When you click happy face When you click Game->New or press F2 When you change level difficulty I decided to check out F2 accelerator command. To find accelerator handling code you are to find window message handling procedure (WndProc). It can be traced down by CreateWindowEx and RegisterClass calls. To read: CreateWindowEx http://msdn.microsoft.com/en-us/library/ms632680%28VS.85%29.aspx RegisterClass http://msdn.microsoft.com/en-us/library/ms633586%28VS.85%29.aspx Petzold's Chapter 3 "Windows and Messages" Open up IDA, Imports window, find "CreateWindow*", jump to it and use "Jump xref to operand (X)" command to see where it is called. There should be just one call. Now look above for RegisterClass function and it's parameter WndClass.lpfnWndProc. I already named function mainWndProc in my case. .text:0100225D mov [ebp+WndClass.lpfnWndProc], offset mainWndProc.text:01002264 mov [ebp+WndClass.cbClsExtra], edi.text:01002267 mov [ebp+WndClass.cbWndExtra], edi.text:0100226A mov [ebp+WndClass.hInstance], ecx.text:0100226D mov [ebp+WndClass.hIcon], eax.text:01002292 call ds:RegisterClassW Hit Enter on function name (use 'N' to rename it to something better) Now take a look at .text:01001BCF mov edx, [ebp+Msg] This is message id, which in case of F2 button press should contain WM_COMMAND value. You are to find where it is compared to 111h. It can be done either by tracing down edx in IDA or by setting conditional breakpoint in WinDbg and pressing F2 in the game. Either way leads to something like .text:01001D5B sub eax, 111h.text:01001D60 jz short loc_1001DBC Right click on 111h and use "Symbolic constant" -> "Use standard symbolic constant", type WM_ and Enter. You should now have .text:01001D5B sub eax, WM_COMMAND.text:01001D60 jz short loc_1001DBC It is an easy way to find out message id values. To understand accelerator handling check out: Using Keyboard Accelerators Resource hacker ( http://angusj.com/resourcehacker/ ) It's quite a lot of text for a single answer. If you are interested I can write another couple of posts. Long story short minefield stored as an array of bytes [24x36], 0x0F shows that byte is not used (playing smaller field), 0x10 - empty field, 0x80 - mine. Part 2 of 3 Ok, let's go on with F2 button. According to Using Keyboard Accelerators when F2 button is pressed wndProc function ... receives a WM_COMMAND or WM_SYSCOMMAND message. The low-order word of the wParam parameter contains the identifier of the accelerator. Ok, we already found where WM_COMMAND is processed, but how to determine corresponding wParam parameter value? This is where Resource hacker comes into play. Feed it with binary and it shows you everything. Like accelerators table for me. alt text http://files.getdropbox.com/u/1478671/2009-07-29_161532.jpg You can see here, that F2 button corresponds to 510 in wParam. Now let's get back to code, that handles WM_COMMAND. It compares wParam with different constants. 
.text:01001DBC HandleWM_COMMAND: ; CODE XREF: mainWndProc+197j.text:01001DBC movzx eax, word ptr [ebp+wParam].text:01001DC0 mov ecx, 210h.text:01001DC5 cmp eax, ecx.text:01001DC7 jg loc_1001EDC.text:01001DC7.text:01001DCD jz loc_1001ED2.text:01001DCD.text:01001DD3 cmp eax, 1FEh.text:01001DD8 jz loc_1001EC8 Use context menu or 'H' keyboard shortcut to display decimal values and you can see our jump .text:01001DBC HandleWM_COMMAND: ; CODE XREF: mainWndProc+197j.text:01001DBC movzx eax, word ptr [ebp+wParam].text:01001DC0 mov ecx, 528.text:01001DC5 cmp eax, ecx.text:01001DC7 jg loc_1001EDC.text:01001DC7.text:01001DCD jz loc_1001ED2.text:01001DCD.text:01001DD3 cmp eax, 510.text:01001DD8 jz loc_1001EC8 ; here is our jump It leads to code chunk that calls some proc and exits wndProc. .text:01001EC8 loc_1001EC8: ; CODE XREF: mainWndProc+20Fj.text:01001EC8 call sub_100367A ; startNewGame ?.text:01001EC8.text:01001ECD jmp callDefAndExit ; default Is that the function that initiates new game? Find that out in the last part! Stay tuned. Part 3 of 3 Let's take a look at the first part of that function .text:0100367A sub_100367A proc near ; CODE XREF: sub_100140C+CAp.text:0100367A ; sub_1001B49+33j ....text:0100367A mov eax, dword_10056AC.text:0100367F mov ecx, uValue.text:01003685 push ebx.text:01003686 push esi.text:01003687 push edi.text:01003688 xor edi, edi.text:0100368A cmp eax, dword_1005334.text:01003690 mov dword_1005164, edi.text:01003696 jnz short loc_10036A4.text:01003696.text:01003698 cmp ecx, dword_1005338.text:0100369E jnz short loc_10036A4 There are two values (dword_10056AC, uValue) read into registers eax and ecx and compared to another two values (dword_1005164, dword_1005338). Take a look at actual values using WinDBG ('bp 01003696'; on break 'p eax; p ecx') - they seemed like minefield dimensions for me. Playing with custom minefield size showed that first pair are new dimensions and second - current dimensions. Let's set new names. .text:0100367A startNewGame proc near ; CODE XREF: handleButtonPress+CAp.text:0100367A ; sub_1001B49+33j ....text:0100367A mov eax, newMineFieldWidth.text:0100367F mov ecx, newMineFieldHeight.text:01003685 push ebx.text:01003686 push esi.text:01003687 push edi.text:01003688 xor edi, edi.text:0100368A cmp eax, currentMineFieldWidth.text:01003690 mov dword_1005164, edi.text:01003696 jnz short loc_10036A4.text:01003696.text:01003698 cmp ecx, currentMineFieldHeight.text:0100369E jnz short loc_10036A4 A little bit later new values overwrite current and subroutine is called .text:010036A7 mov currentMineFieldWidth, eax.text:010036AC mov currentMineFieldHeight, ecx.text:010036B2 call sub_1002ED5 And when I saw it .text:01002ED5 sub_1002ED5 proc near ; CODE XREF: sub_1002B14:loc_1002B1Ep.text:01002ED5 ; sub_100367A+38p.text:01002ED5 mov eax, 360h.text:01002ED5.text:01002EDA.text:01002EDA loc_1002EDA: ; CODE XREF: sub_1002ED5+Dj.text:01002EDA dec eax.text:01002EDB mov byte ptr dword_1005340[eax], 0Fh.text:01002EE2 jnz short loc_1002EDA I was completely sure that I found minefield array. Cause of cycle which inits 360h bytes length array (dword_1005340 ) with 0xF. Why 360h = 864? There are some cues below that row takes 32 bytes and 864 can be divided by 32, so array can hold 27*32 cells (although UI allows max 24*30 field, there is one byte padding around array for borders). Following code generates minefield top and bottom borders (0x10 byte). 
I hope you can see loop iteration in that mess ;) I had to use paper and pen .text:01002EE4 mov ecx, currentMineFieldWidth.text:01002EEA mov edx, currentMineFieldHeight.text:01002EF0 lea eax, [ecx+2].text:01002EF3 test eax, eax.text:01002EF5 push esi.text:01002EF6 jz short loc_1002F11 ; .text:01002EF6.text:01002EF8 mov esi, edx.text:01002EFA shl esi, 5.text:01002EFD lea esi, dword_1005360[esi].text:01002EFD.text:01002F03 draws top and bottom borders.text:01002F03 .text:01002F03 loc_1002F03: ; CODE XREF: sub_1002ED5+3Aj.text:01002F03 dec eax.text:01002F04 mov byte ptr MineField?[eax], 10h ; top border.text:01002F0B mov byte ptr [esi+eax], 10h ; bottom border.text:01002F0F jnz short loc_1002F03.text:01002F0F.text:01002F11.text:01002F11 loc_1002F11: ; CODE XREF: sub_1002ED5+21j.text:01002F11 lea esi, [edx+2].text:01002F14 test esi, esi.text:01002F16 jz short loc_1002F39 And the rest of subroutine draws left and right borders .text:01002F18 mov eax, esi.text:01002F1A shl eax, 5.text:01002F1D lea edx, MineField?[eax].text:01002F23 lea eax, (MineField?+1)[eax+ecx].text:01002F23.text:01002F2A.text:01002F2A loc_1002F2A: ; CODE XREF: sub_1002ED5+62j.text:01002F2A sub edx, 20h.text:01002F2D sub eax, 20h.text:01002F30 dec esi.text:01002F31 mov byte ptr [edx], 10h.text:01002F34 mov byte ptr [eax], 10h.text:01002F37 jnz short loc_1002F2A.text:01002F37.text:01002F39.text:01002F39 loc_1002F39: ; CODE XREF: sub_1002ED5+41j.text:01002F39 pop esi.text:01002F3A retn Smart usage of WinDBG commands can provide you cool minefield dump (custom size 9x9). Check out the borders! 0:000> db /c 20 01005340 L36001005340 10 10 10 10 10 10 10 10-10 10 10 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................01005360 10 0f 0f 0f 0f 0f 0f 0f-0f 0f 10 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................01005380 10 0f 0f 0f 0f 0f 0f 0f-0f 0f 10 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................010053a0 10 0f 0f 0f 0f 0f 0f 0f-0f 0f 10 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................010053c0 10 0f 0f 0f 0f 0f 0f 0f-0f 0f 10 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................010053e0 10 0f 0f 0f 0f 0f 0f 0f-0f 0f 10 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................01005400 10 0f 0f 0f 0f 0f 0f 0f-0f 0f 10 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................01005420 10 0f 0f 0f 0f 0f 0f 0f-0f 0f 10 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................01005440 10 0f 0f 0f 0f 0f 0f 0f-0f 0f 10 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................01005460 10 0f 0f 0f 0f 0f 0f 0f-0f 0f 10 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................01005480 10 10 10 10 10 10 10 10-10 10 10 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................010054a0 0f 0f 0f 0f 0f 0f 0f 0f-0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................010054c0 0f 0f 0f 0f 0f 0f 0f 0f-0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................010054e0 0f 0f 0f 0f 0f 0f 0f 0f-0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f 0f ................................ 
Hmm, looks like I'll need another post to close the topic
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/931257', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/64878/']}
jdg_86318
stackexchange
llm_judgeable_groundtruth_similarity
63768921
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a terraform configuration which needs to: Create a lambda Invoke the lambda Iterate on the lambda's json result which returns an array and create a CloudWatch event rule per entry in the array The relevant code looks like: Create lambda code...data "aws_lambda_invocation" "run_lambda" { function_name = "${aws_lambda_function.deployed_lambda.function_name}" input = <<JSON {} JSON depends_on = [aws_lambda_function.deployed_lambda]}resource "aws_cloudwatch_event_rule" "aws_my_cloudwatch_rule" { for_each = {for record in jsondecode(data.aws_lambda_invocation.run_lambda.result).entities : record.entityName => record} name = "${each.value.entityName}-event" description = "Cloudwatch rule for ${each.value.entityName}" schedule_expression = "cron(${each.value.cronExpression})"} The problem is that when I run it, I get: Error: Invalid for_each argument on lambda.tf line 131, in resource "aws_cloudwatch_event_rule" "aws_my_cloudwatch_rule": 131: for_each = {for record in jsondecode(data.aws_lambda_invocation.aws_lambda_invocation.result).entities : record.entityName => record}The "for_each" value depends on resource attributes that cannot be determineduntil apply, so Terraform cannot predict how many instances will be created.To work around this, use the -target argument to first apply only theresources that the for_each depends on. I've read a bunch of posts on the problem but couldn't find a workaround. The problem is that Terraform needs to know the size of the array returned by the lambda in the planning phase before the lambda was created. What is the best approach to solving such a task? Since it is run as part of a CI/CD pipeline I prefer a solution that doesn't include the "-target" flag. Now provide the response and nothing else.
If you want to solve this in pure terraform, the workarounds are at the moment to either split your deployment into multiple stacks/phases (e.g. first deploy a stack with the lambda, and then the second stack that is using the lambda as a data source) or as you already found out, partially deploy your stack using -target and then deploy the full stack. (Be sure to remove the depends_on in this case as it will defer reading the data source to the apply phase all the time.) Another option is to use a tool like terragrunt that solves the partial apply issue by deploying a set of terraform modules in the right order if all dependencies between those modules are defined. with terragrunt you can deploy everything in a single run of e.g. terragrunt apply-all . The Downside is you still won't get a nice preview of changes in your CI to be reviewed by your peers. I would suggest splitting this into two phases as you might actually want to review both before applying the final changes. Else you might end up with a setup where a broken lambda results in destroying all your existing cloudwatch rules unnoticed by you or your team.
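To make the two-stack option concrete, a rough sketch (resource names and the function name my-entity-lister are placeholders): stack 1 deploys the lambda exactly as before; stack 2, applied afterwards, contains only the invocation and the rules, so the for_each keys are known at plan time.

# stack 2 / rules.tf  (hypothetical second stack, applied after the lambda exists)
data "aws_lambda_invocation" "run_lambda" {
  function_name = "my-entity-lister"
  input         = "{}"
}

resource "aws_cloudwatch_event_rule" "per_entity" {
  for_each = { for r in jsondecode(data.aws_lambda_invocation.run_lambda.result).entities : r.entityName => r }

  name                = "${each.value.entityName}-event"
  description         = "Cloudwatch rule for ${each.value.entityName}"
  schedule_expression = "cron(${each.value.cronExpression})"
}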
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/63768921', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/927477/']}
jdg_86319
stackexchange
llm_judgeable_groundtruth_similarity
31608
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For reference I'm working on a 1986 Corvette at about 5,000 feet above sea level (I'm in Colorado). The outside temperature is about 80 degrees, and humidity is relatively low. The old R12 system had supposedly sprung a leak and the system had emptied, so I decided to convert it over to R-134. I flushed the evaporator, condenser, and all of the hoses separately, and installed a new compressor and orifice tube that came with the conversion kit I ordered. I also replaced a couple of the O-rings that were warped on the hoses. I now have everything hooked back up with the new R-134 valves on and a new accumulator in place, and am in the process of pulling a vacuum on the system before I recharge it. The vacuum only gets down to about -21 in/Hg, and won't go any further even though I let it run for at least 2 hours. The thing is, when I close the valves on the gauge the vacuum will hold in the system at that point when I let it sit for an hour and a half, which leads me to believe there aren't any major leaks in the system. What are some possible reasons the vacuum wouldn't pull close to -30 in/Hg like it should, and is it absolutely necessary that the vacuum get this low to remove all the moisture? Now provide the response and nothing else.
I don't know the devices used for this job, but due to the negative sign, the pressure seems to be relative, i.e. difference to ambient pressure. (When the device is disconnected and doesn't run, the gauge should show a pressure of zero, too) The absolute air pressure at altitude is shown in this plot: (Sorry for the odd values, but I'm used to SI units...) 30inHg is the absolute air pressure at sealevel at standard conditions. At your altitude, the absolute pressure is 25inHg. (Sanity check: Denver Airport (1656m) currently has 832hPa, which fits really good into the curve.) So you can't pull 30inHg, since then you would have a negative amount of gas molecules inside the system. The highest value you can get is 25inHg. You get 21inHg which corresponds to an altitude of about 11,000ft, so this can't be explained by you being at not exactly 5,000ft. Since the system holds the vacuum, I'd say there's a problem with the device. Either the pump itself or the pipes between pump and valve has a small leak. The pump may be defect, or the oil also used as sealant is low, old or contaminated. (Though it looks and sometimes smells like motor oil, it isn't, and using the wrong oil can heavily affect the performance) Creating a vacuum is not only done to remove water (seriously, how much water could be in there, plus there's a dry cartridge), it's more because it's important to have no other gases except the R-134 inside system. From that point of view, if you can't create a good vacuum, it may be possible to flush the system by adding a little R-134 and removing it again a few times. (I really wonder why the pressure is measured relative to ambient pressure, since this leads to exactly the problem occurring here, too: Not knowing what max. value should be achieved at your location...) Edit: Since people seems to have interest in this graph, here's the formula you can use for excel etc.: Pressure = 1013hPa * exp (- altitude / 8400m) 1013hPa is the normalized air pressure at sea level in hecto-pascal. You can replace it by 760 Torr , 1.013bar , 14.7psi , 29.9inHg or whatever the pressure is in your favorite unit. 8400m is a constant in meters, you can replace it by 27559ft or whatever to match your unit of altitude.
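For the numbers in the question, a quick back-of-the-envelope check of that formula (rounded constants, so treat the result as approximate):

import math

altitude_m = 5000 * 0.3048          # 5,000 ft in metres
p_inhg = 29.9 * math.exp(-altitude_m / 8400.0)
print(round(p_inhg, 1))             # ~24.9 inHg absolute

So roughly 25 inHg is the best gauge vacuum the system can read at that altitude, which is why the pump tops out well short of 30 inHg; the remaining gap down to the observed 21 inHg is what still points at the pump or its connections.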
{}
{'log_upvote_score': 4, 'links': ['https://mechanics.stackexchange.com/questions/31608', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/18952/']}
jdg_86320
stackexchange
llm_judgeable_groundtruth_similarity
712013
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently revising a course on graph theory that I took earlier this year. While thinking about planar graphs, I noticed that a finite planar graph corresponds to a (finite) polygonisation of the Euclidean plane (or whichever surface you're working with). By considering, for example a full triangulation of the plane, you can find an object that I feel perfectly comfortable considering as a countably infinite planar graph. Can we take this even further? As soon as you start considering uncountably many vertices, you necessarily get accumulation points in the plane (as the plane is separable). Can we find a (formal) way for the property of being planar to make sense in such a context? Perhaps something along the lines of: "an infinite graph is planar if every finite subgraph is planar." It seems that our graph can have cardinality at most the continuum, as a planar graph defines a natural injection from the set of vertices into the plane (although perhaps this breaks down if we use the above definition?) For finite graphs, you can always extend a planar graph to a maximal planar graph. Can we use Zorn's lemma to do the same for arbitrary uncountable planar graph? For example, if we take the real line as an uncountable path in the sense of graph theory, it certainly feels like it should give an example of a continuum-cardinality planar graph. Our course mainly focused on finite graphs. Still, we considered a couple of uncountable examples every now and again when there was something interesting to say. I would be grateful for any insight/references/nudges in a fruitful direction that anybody could provide. Now provide the response and nothing else.
It depends on what you consider a plane drawing to be, but how about something like: Put a vertex at $(0,0)$ and one at $(1,x)$ for every $x\in\mathbb R$, with a straight edge going between it and $(0,0)$. Naively this seems to satisfy the requirements of a planar graph drawing: No point lies on two edges (except endpoints), no vertex lies on an edge it is not an endpoint of, every edge is represented by a continuous path between its endpoints.

On the other hand: A problem with this is that if we consider an abstract graph as a topological space (with one point per vertex and a copy of the unit interval for each edge, stitched together in the obvious way), then the topology of the above drawing as a subspace of $\mathbb R^2$ is not the same as that of the graph. Indeed, if we define a "plane drawing" of a graph to be a subset of $\mathbb R^2$ which (with the subspace topology) is homeomorphic to the graph, then there can't be any uncountable planar graph, simply because there's no uncountable discrete subset of $\mathbb R^2$ that can be the image of the vertices.

On the other other hand: There are some topological subtleties hidden beneath "stitched together in the obvious way" here. Actually, as soon as there's a node with countably infinite degree, the most obvious way of stitching together (resulting in a quotient space) gives something that is not homeomorphic to a straightforward drawing of the graph -- such as a node at $(0,0)$ and one at $(1,n)$ for every $n\in\mathbb N$, with a straight edge to $(0,0)$. This can be fixed, though, by defining the stitching in a slightly more ad-hoc way which gives a different topology on the abstract graph.
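The countability step used here -- that $\mathbb R^2$ has no uncountable discrete subset -- can be spelled out; the following short argument is standard and is added only for completeness (it is not part of the original answer): if $V\subseteq\mathbb R^2$ is discrete, choose for each $v\in V$ an open ball $B_v$ with rational center and rational radius such that $B_v\cap V=\{v\}$. The map $v\mapsto B_v$ is then injective, and since there are only countably many such balls, $V$ is at most countable.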
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/712013', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/43511/']}
jdg_86321
stackexchange
llm_judgeable_groundtruth_similarity
4482392
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Django forms. I'm validating in the model layer: def clean_title(self): title = self.cleaned_data['title'] if len(title) < 5: raise forms.ValidationError("Headline must be more than 5 characters.") return title However, there are some things that I need to validate in the views.py . For example...was the last time the user posted something more than a minute ago? That kind of stuff requires request.user, which the models layer cannot get. So, I must validate in the views.py. How do I do something in the views.py to do the exact thing as this? raise forms.ValidationError("Headline must be more than 5 characters.") Now provide the response and nothing else.
I think gruszczy's answer is a good one, but if you're after generic validation involving variables that you think are only available in the view, here's an alternative: pass in the vars as arguments to the form and deal with them in the form's main clean() method. The difference/advantage here is that your view stays simpler and all things related to the form content being acceptable happen in the form. eg:

# IN YOUR VIEW
# pass request.user as a keyword argument to the form
myform = MyForm(user=request.user)

# IN YOUR forms.py
# at the top:
from myapp.foo.bar import ok_to_post  # some abstracted utility you write to rate-limit posting

# and in your particular Form definition
class MyForm(forms.Form):
    ... your fields here ...

    def __init__(self, *args, **kwargs):
        self.user = kwargs.pop('user')  # cache the user object you pass in
        super(MyForm, self).__init__(*args, **kwargs)  # and carry on to init the form

    def clean(self):
        # test the rate limit by passing in the cached user object
        if not ok_to_post(self.user):  # use your throttling utility here
            raise forms.ValidationError("You cannot post more than once every x minutes")
        return self.cleaned_data  # never forget this! ;o)

Note that raising a generic ValidationError in the clean() method will put the error into myform.non_field_errors, so you'll have to make sure that your template contains {{ form.non_field_errors }} if you're manually displaying your form.
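The ok_to_post() helper is deliberately left abstract above. Purely as an illustration (this is an assumption, not part of the original answer), one simple way to implement such a throttle with Django's cache framework could look like this -- the cache-key format and the one-minute window are made up for the example:

from django.core.cache import cache

POST_INTERVAL_SECONDS = 60  # assumed rate limit: at most one post per minute

def ok_to_post(user):
    # cache.add() stores the key only if it is not already present and returns
    # True in that case, so the first call in each window succeeds and any
    # further call within POST_INTERVAL_SECONDS returns False.
    return cache.add('last-post-%s' % user.pk, True, POST_INTERVAL_SECONDS)

Note that this variant records the attempt as a side effect of the check, which is usually fine for simple rate limiting.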
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4482392', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/179736/']}
jdg_86322
stackexchange
llm_judgeable_groundtruth_similarity
17475166
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The allegedly "clever" (but actually inefficient) way of swapping two integer variables, instead of using temporary storage, often involves this line: int a = 10;int b = 42;a ^= b ^= a ^= b; /*Here*/printf("a=%d, b=%d\n", a, b); But I'm wondering, compound assignment operators like ^= are not sequence points, are they?Does this mean it's actually undefined behavior? Now provide the response and nothing else.
a ^= b ^= a ^= b; /*Here*/ It is undefined behavior. You are modifying an object (a) more than once between two sequence points. (C99, 6.5p2) "Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression." Simple assignments as well as compound assignments don't introduce a sequence point. Here there is a sequence point before the expression of the expression statement and one after the full expression statement. Sequence points are listed in Annex C (informative) of the C99 and C11 standards.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17475166', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1455631/']}
jdg_86323
stackexchange
llm_judgeable_groundtruth_similarity
36573510
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The maven-dependency-plugin identifies what it believes to be unused dependencies when you compile by producing warnings at compile time. [WARNING] Unused declared dependencies found:[WARNING] org.foo:bar-api:jar:1.7.5:compile In some cases this message is a false positive and the dependency is required transitively. Question : How can I identify in my pom.xml that this is the case? Now provide the response and nothing else.
You should configure the ignoredDependencies element in your pom: "List of dependencies that will be ignored. Any dependency on this list will be excluded from the 'declared but unused' and the 'used but undeclared' list. The filter syntax is: [groupId]:[artifactId]:[type]:[version] where each pattern segment is optional and supports full and partial * wildcards. An empty pattern segment is treated as an implicit wildcard." As also specified by the official Exclude dependencies from dependency analysis. A sample configuration would be:

<build>
  <plugins>
    <plugin>
      <artifactId>maven-dependency-plugin</artifactId>
      <version>2.10</version>
      <executions>
        <execution>
          <id>analyze-dep</id>
          <goals>
            <goal>analyze-only</goal>
          </goals>
          <configuration>
            <ignoredDependencies>
              <ignoredDependency>org.foo:bar-api:jar:1.7.5</ignoredDependency>
            </ignoredDependencies>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36573510', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1118307/']}
jdg_86324
stackexchange
llm_judgeable_groundtruth_similarity
1609353
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to get all the email addresses associated to a given AD user. For the user I have the domain and the login name (ex. DOMAIN\UserName) and I the AD is storing the email addresses in: The mail attribute. In proxyAddresses attributes. So far, I don't know what C# API to use to connect to the AD, and how to properly filter by the user to fetch all the email addresses. I'm using .NET 3.5. Thank you. Now provide the response and nothing else.
Here's a possible solution using various classes in the System.DirectoryServices namespace.

string username = "username";
string domain = "domain";

List<string> emailAddresses = new List<string>();

PrincipalContext domainContext = new PrincipalContext(ContextType.Domain, domain);
UserPrincipal user = UserPrincipal.FindByIdentity(domainContext, username);

// Add the "mail" entry
emailAddresses.Add(user.EmailAddress);

// Add the "proxyaddresses" entries.
PropertyCollection properties = ((DirectoryEntry)user.GetUnderlyingObject()).Properties;
foreach (object property in properties["proxyaddresses"])
{
    emailAddresses.Add(property.ToString());
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1609353', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7277/']}
jdg_86325
stackexchange
llm_judgeable_groundtruth_similarity
8627211
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a paragraph of text and when a button is clicked I want that text to fade out, change to some other text, then fade back in. I have some code but it doesn't do the fade out animation just the fade in. final TextView mSwitcher = (TextView) findViewById(R.id.bookContent); mSwitcher.setText("old text"); final Animation in = new AlphaAnimation(0.0f, 1.0f); in.setDuration(3000); final Animation out = new AlphaAnimation(1.0f, 0.0f); out.setDuration(3000); Button moveOn = (Button) findViewById(R.id.moveOn); moveOn.setOnClickListener( new OnClickListener() { public void onClick(View v) { mSwitcher.startAnimation(out); mSwitcher.setText("new text"); mSwitcher.startAnimation(in); } }); Now provide the response and nothing else.
You seem to be setting the animation to in right after you had set it to out. This makes only the "in" animation work. To make the second animation start right after the first, you can add a listener to your first animation:

out.setAnimationListener(new AnimationListener() {
    @Override
    public void onAnimationStart(Animation animation) { }  // required by the interface; no-op here

    @Override
    public void onAnimationRepeat(Animation animation) { }  // required by the interface; no-op here

    @Override
    public void onAnimationEnd(Animation animation) {
        mSwitcher.setText("New Text");
        mSwitcher.startAnimation(in);
    }
});

Then, in your onClick() method:

public void onClick(View v) {
    mSwitcher.startAnimation(out);
}

That should do the trick. Another approach is to use AnimationSet.

final Animation in = new AlphaAnimation(0.0f, 1.0f);
in.setDuration(3000);
final Animation out = new AlphaAnimation(1.0f, 0.0f);
out.setDuration(3000);

AnimationSet as = new AnimationSet(true);
as.addAnimation(out);
in.setStartOffset(3000);
as.addAnimation(in);

Then, instead of starting out, start as. I hope this helps!
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8627211', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/964206/']}
jdg_86326
stackexchange
llm_judgeable_groundtruth_similarity
9053883
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've created a function to run through a vector of strings and remove any strings of length 3 or less. This is a lesson in using the STL Algorithm library. I'm having trouble in that the functions work but not only does it delete strings of length 3 or less but it also appends the string "vector" to the end. The output should be This test vector and instead it is This test vector vector" How can I fix it? /** using remove_if and custom call back function, write RemoveShortWords * that accepts a vector<string> and removes all strings of length 3 or* less from it. *shoot for 2 lines of code in functions.*/#include <iostream>#include <string>#include <algorithm>#include <vector>#include <iterator>using namespace std;bool StringLengthTest(string test) //test condition for remove_if algo. { return test.length() <= 3;}void RemoveShortWords(vector<string> &myVector){ //erase anything in vector with length <= 3 myVector.erase(remove_if(myVector.begin(), myVector.end(), StringLengthTest));}int main (){ //add some strings to vector vector<string> myVector; myVector.push_back("This"); myVector.push_back("is"); myVector.push_back("a"); myVector.push_back("test"); myVector.push_back("vector"); //print out contents of myVector (debugging) copy(myVector.begin(), myVector.end(), ostream_iterator<string>(cout," ")); cout << endl; //flush the stream RemoveShortWords(myVector); //remove words with length <= 3 //print out myVector (debugging) copy(myVector.begin(), myVector.end(), ostream_iterator<string>(cout," ")); cout << endl; system("pause"); return 0;} Now provide the response and nothing else.
It is easiest to understand this if you separate the statements:

auto iter(remove_if(myVector.begin(), myVector.end(), StringLengthTest));
myVector.erase(iter);

These 2 lines do the same as your single line. And it should be clear now what the "bug" is. remove_if works first. It iterates over the whole vector and moves all "selected" entries "to the end" (better said: it moves the non-selected entries to the front). After it has run, it returns an iterator to the "last" position of the left-over entries, something like:

this
test
vector
test    <- iterator points here
vector

Then you run erase with a single iterator. That means you erase the single element pointed at - so you erase the "test" element. - What is left over is what you are seeing. To fix it, simply erase from the iterator returned by remove_if to the end():

myVector.erase(remove_if(myVector.begin(), myVector.end(), StringLengthTest),
               myVector.end());  // erase anything in vector with length <= 3
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9053883', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1130069/']}
jdg_86327
stackexchange
llm_judgeable_groundtruth_similarity
2445665
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: IN PHP:Is there a way for the user to fake a session variable? Is it secure to trust in the value of a session variable for a login system? Now provide the response and nothing else.
The session data is stored on the server. Only the session id is transferred back and forth between the client and the server. Unless a server-side script messes up (or there is a bug), the client cannot change the session data directly. But you have to ensure that only the "correct" client knows the session id, as it ties this particular client to a particular session. E.g. (since you mentioned a login) use session_regenerate_id() whenever a login (attempt) is performed, to prevent session fixation.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2445665', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/92462/']}
jdg_86328
stackexchange
llm_judgeable_groundtruth_similarity
51453152
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am attempting calculate the rolling auto-correlation for a Series object using Pandas (0.23.3) Setting up the example: dt_index = pd.date_range('2018-01-01','2018-02-01', freq = 'B')data = np.random.rand(len(dt_index))s = pd.Series(data, index = dt_index) Creating a Rolling object with window size = 5: r = s.rolling(5) Getting: Rolling [window=5,center=False,axis=0] Now when I try to calculate the correlation (Pretty sure this is the wrong approach): r.corr(other=r) I get only NaNs I tried another approach based on the documentation: : df = pd.DataFrame()df['a'] = sdf['b'] = s.shift(-1)df.rolling(window=5).corr() Getting something like: ...2018-03-01 a NaN NaN b NaN NaN Really not sure where I'm going wrong with this. Any help would be immensely appreciated! The docs use float64 as well. Thinking it's because the correlation is very close to zero and so it's showing NaN? Somebody had raised a bug report here , but jreback solved the problem in a previous bug fix I think. This is another relevant answer, but it's using pd.rolling_apply , which does not seem to be supported in Pandas version 0.23.3? Now provide the response and nothing else.
IIUC,

>>> s.rolling(5).apply(lambda x: x.autocorr(), raw=False)
2018-01-01         NaN
2018-01-02         NaN
2018-01-03         NaN
2018-01-04         NaN
2018-01-05   -0.502455
2018-01-08   -0.072132
2018-01-09   -0.216756
2018-01-10   -0.090358
2018-01-11   -0.928272
2018-01-12   -0.754725
2018-01-15   -0.822256
2018-01-16   -0.941788
2018-01-17   -0.765803
2018-01-18   -0.680472
2018-01-19   -0.902443
2018-01-22   -0.796185
2018-01-23   -0.691141
2018-01-24   -0.427208
2018-01-25    0.176668
2018-01-26    0.016166
2018-01-29   -0.876047
2018-01-30   -0.905765
2018-01-31   -0.859755
2018-02-01   -0.795077
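As a quick cross-check on what is being computed (Series.autocorr() with the default lag of 1 is just the correlation of the window with its own one-step shift), the sketch below rebuilds the same column by hand; it is illustrative only, and the numbers will differ from the listing above because the input data is random:

import numpy as np
import pandas as pd

dt_index = pd.date_range('2018-01-01', '2018-02-01', freq='B')
s = pd.Series(np.random.rand(len(dt_index)), index=dt_index)

rolled = s.rolling(5).apply(lambda x: x.autocorr(), raw=False)
manual = s.rolling(5).apply(lambda x: x.corr(x.shift()), raw=False)

# Both columns should agree (NaN for the first incomplete windows).
pd.testing.assert_series_equal(rolled, manual)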
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51453152', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5163606/']}
jdg_86329
stackexchange
llm_judgeable_groundtruth_similarity
2588645
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to understand the concept of ruled surfaces but I'm still a bit confused about how they are parametrized. Let $M$ be:$$M = \{(x,y,z \in \mathbb{R}^3: x^2 + y^2 - z^2 = 1\}$$ A parametrization of $M$ is:$$\Psi:]-\pi, \pi[\times\mathbb{R} \to \mathbb{R}^3$$$$(u,v)\mapsto(\cos(u)-v \sin (u), \sin(u) + v\cos(u), v)$$ I have two questions: Show that the parametrization in one-one in the given domain; Show that the angle between the unit circunference and the lines that generate $M$ is $\frac{\pi}{4}$. In the first question my doubt is how to arrive at a conclusion that $\psi(u_1,v_1)=\psi(u_2,v_2)$ because I can't simplify the expression enough to arrive at that. Regarding the second question: this is an Hyperboloid of One Sheet so we know it is a ruled surface. What is the equation of the lines that generate this surface? I believe if I have the vector of these lines I would be able to apply the formula:$$\cos(\alpha)=\frac{uv}{|u||v|}$$ Now provide the response and nothing else.
You can write, for $n \in \mathbb{N}^{*}$, $$a_n=\sum_{k=n}^{2n}\frac{1}{k}.$$ What I use to prove this (without using an already known equivalent) is the fact that the function $\displaystyle x \mapsto \frac{1}{x}$ is decreasing (and is positive and tends to $0$). Hence, if you draw a decreasing function, you can bound the function on $[n,n+1]$ by two rectangular areas: a small one and a big one (picture a decreasing curve with an inscribed and a circumscribed rectangle on each unit interval). So it is written as $$\int_{k}^{k+1}\frac{\text{d}t}{t} \leq \frac{1}{k} \leq \int_{k-1}^{k}\frac{\text{d}t}{t}.$$ By summing, using the Chasles relation, for $n \geq 2$, $$\int_{n}^{2n+1}\frac{\text{d}t}{t} \leq a_n \leq \int_{n-1}^{2n}\frac{\text{d}t}{t},$$ hence $$\ln\left(\frac{2n+1}{n}\right) \leq a_n \leq \ln\left(\frac{2n}{n-1}\right).$$ Therefore the sequence $\displaystyle \left(a_n\right)_{n \in \mathbb{N}^{*}}$ converges and $\displaystyle a_n \underset{n \rightarrow +\infty}{\longrightarrow}\ln\left(2\right)$.
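For readers who want to see the convergence numerically, here is a small illustrative Python check (not part of the original answer) of $a_n$ against the two logarithmic bounds derived above:

import math

def a(n):
    # a_n = sum_{k=n}^{2n} 1/k
    return sum(1.0 / k for k in range(n, 2 * n + 1))

for n in (10, 100, 10000):
    lower = math.log((2 * n + 1) / n)
    upper = math.log(2 * n / (n - 1))
    print(n, lower, a(n), upper, "->", math.log(2))

Each row should show lower <= a_n <= upper, with all three values approaching ln 2.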
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2588645', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/399843/']}
jdg_86330
stackexchange
llm_judgeable_groundtruth_similarity
48498686
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I try to forward X11 using Windows 10 machine as the client and a Ubuntu Linux machine as the server. The ssh is running under Cygwin, which contains xinit and xorg. The steps: Turn on the linux server machine. I enabled the forwarding both in the client and server side using sshd_config configuration file. I set display in Cygwin by executing: export DISPLAY=r1:0.0where r1 is the name of the server I enabled access list on the server side: xhost 192.168.1.143 Connect to the server from Cygwin using ssh. Run lxterminal application, but got: r1@r1:~$ lxterminal debug1: client_input_channel_open: ctype x11 rchan 3 win 65536 max 16384debug1: client_request_x11: request from 127.0.0.1 37184connect r1 port 6000: Connection refused Is r1 server port 6000 is rejecting? If yes, how so? Now provide the response and nothing else.
That is possible with numpy.matmul, which can do multiplication of "matrix stacks". In this case we want to multiply a stack of column vectors with a stack of row vectors. First bring matrix A to shape (b, i, 1) and B to shape (b, 1, o). Then use matmul to perform b times the outer product:

import numpy as np

i, b, o = 3, 4, 5
A = np.ones((b, i))
B = np.ones((b, o))

print(np.matmul(A[:, :, np.newaxis], B[:, np.newaxis, :]).shape)  # (4, 3, 5)

An alternative could be to use numpy.einsum, which can directly represent your index notation:

np.einsum('bi,bo->bio', A, B)
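As an illustrative sanity check (not from the original answer), both forms -- and an explicit per-batch loop -- should produce the same stack of outer products:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # shape (b, i)
B = rng.standard_normal((4, 5))   # shape (b, o)

via_matmul = np.matmul(A[:, :, np.newaxis], B[:, np.newaxis, :])
via_einsum = np.einsum('bi,bo->bio', A, B)
via_loop = np.stack([np.outer(A[k], B[k]) for k in range(A.shape[0])])

assert np.allclose(via_matmul, via_einsum)
assert np.allclose(via_matmul, via_loop)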
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48498686', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3326499/']}
jdg_86331
stackexchange
llm_judgeable_groundtruth_similarity
3123748
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a complex regular expression I've built with code. I want to normalize it to the simplest (canonical) form that will be an equivalent regular expression but without the extra brackets and so on. I want it to be normalized so I can understand if it's correct and find bugs in it. Here is an example for a regular expression I want to normalize: ^(?:(?:(?:\r\n(?:[ \t]+))*)(<transfer-coding>(?:chunked|(?:(?:[\x21\x23-\x27\x2A\x2B\x2D\x2E0-9A-Z\x5E\x7A\x7C\x7E-\xFE]+)(?:(?:;(?:(?:[\x21\x23-\x27\x2A\x2B\x2D\x2E0-9A-Z\x5E\x7A\x7C\x7E-\xFE]+)=(?:(?:[\x21\x23-\x27\x2A\x2B\x2D\x2E0-9A-Z\x5E\x7A\x7C\x7E-\xFE]+)|(?:"(?:(?:(?:|[^\x00-\x31\x127\"])|(?:\\[\x00-\x127]))*)))))*))))(?:(?:(?:\r\n(?:[ \t]+))*),(?:(?:\r\n(?:[ \t]+))*)(<transfer-coding>(?:chunked|(?:(?:[\x21\x23-\x27\x2A\x2B\x2D\x2E0-9A-Z\x5E\x7A\x7C\x7E-\xFE]+)(?:(?:;(?:(?:[\x21\x23-\x27\x2A\x2B\x2D\x2E0-9A-Z\x5E\x7A\x7C\x7E-\xFE]+)=(?:(?:[\x21\x23-\x27\x2A\x2B\x2D\x2E0-9A-Z\x5E\x7A\x7C\x7E-\xFE]+)|(?:"(?:(?:(?:|[^\x00-\x31\x127\"])|(?:\\[\x00-\x127]))*)))))*))))*))$ Now provide the response and nothing else.
I'm with the other answers and comments so far. Even if you could define a reduced form, it's unlikely that the reduced form is going to be any more understandable than this thing, which resembles line noise on a 1200 baud modem. If you did want to find a canonical form for regular expressions, I'd start by defining precisely what you mean by "canonical form". For example, suppose you have the regular expression [ABCDEF-I] . Is the canonical form (1) [ABCDEF-I] , (2) [ABCDEFGHI] or (3) [A-I] ? That is, for purposes of canonicalization, do you want to (1) ignore this subset of regular expressions for the purposes of canonicalization, (2) eliminate all "-" operators, thereby simplifying the expression, or (3) make it shorter?

The simplest way would be to go through every part of the regular expression specification and work out which subexpressions are logically equivalent to another form, and decide which of the two is "more canonical". Then write a recursive regular expression analyzer that goes through a regular expression and replaces each subexpression with its canonical form. Keep doing that in a loop until you find the "fixed point", the regular expression that doesn't change when you put it in canonical form. That, however, will not necessarily do what you want. If what you want is to reorganize the regular expression to minimize the complexity of grouping or some such thing, then what you might want to do is to canonicalize the regular expression so that it is in a form such that it only has grouping, union and Kleene star operators. Once it is in that form you can easily translate it into a deterministic finite automaton, and once it is in DFA form then you can run a graph simplification algorithm on the DFA to form an equivalent simpler DFA. Then you can turn the resulting simplified DFA back into a regular expression. Though that would be fascinating, like I said, I don't think it would actually solve your problem.

Your problem, as I understand it, is a practical one. You have this mess, and you want to understand that it is right. I would approach that problem by a completely different tack. If the problem is that the literal string is hard to read, then don't write it as a literal string. I'd start "simplifying" your regular expression by making it read like a programming language instead of reading like line noise:

Func<string, string> group = s => "(?:" + s + ")";
Func<string, string> capture = s => "(" + s + ")";
Func<string, string> anynumberof = s => s + "*";
Func<string, string> oneormoreof = s => s + "+";

var beginning = "^";
var end = "$";
var newline = @"\r\n";
var tab = @"\t";
var space = " ";
var semi = ";";
var comma = ",";
var equal = "=";
var chunked = "chunked";
var transfer = "<transfer-coding>";
var backslash = @"\\";
var escape = group(backslash + @"[\x00-\x7f]");
var or = "|";
var whitespace = group( anynumberof( group( newline + group( oneormoreof(@"[ \t]")))));
var legalchars = group( oneormoreof(@"[\x21\x23-\x27\x2A\x2B\x2D\x2E0-9A-Z\x5E\x7A\x7C\x7E-\xFE]"));

var re = beginning + group(
    whitespace + capture(
        transfer + group(
            chunked + or + group(
                legalchars + group(
                    group(
                        semi + anynumberof(
                            group(
                                legalchars + equal + ...

Once it looks like that it'll be a lot easier to understand and optimize.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3123748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/310733/']}
jdg_86332
stackexchange
llm_judgeable_groundtruth_similarity
375131
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $\sigma$ be a permutation of $[k]=\{1,2, \dots , k\}$ . Consider all the ordered triples $(\pi, s_{1},s_{2})$ , such that $\pi$ is a permutation of length $2k-1$ that is a union of its two subsequences $s_{1}$ and $s_{2}$ , each of which is of length $k$ and is order-isomorphic to $\sigma$ . Example: $\sigma = 312$ , If $\pi = 54213$ , then there are $4$ such triples: $(\pi, 523,413)$ $(\pi, 513,423)$ $(\pi, 413,523)$ $(\pi, 423,513)$ Indeed, each of the listed sequences $s_{1}$ and $s_{2}$ , namely $523$ , $413$ , $513$ and $423$ are order isomorphic to $\sigma=312$ , i.e., if the triple is $xyz$ , then $x>z>y$ . Denote the number of these triples by $N_{2k-1}^{\sigma}$ . Prove that $N_{2k-1}^{\sigma}>\binom{2k-1}{k}^{2}$ for every $\sigma$ . Example: $k=2$ . It suffices to show that $N_{3}^{21}>\binom{3}{2}^{2}=9$ since $N_{3}^{21}=N_{3}^{12}.$ In fact, we have 10 triples that are listed below: $\sigma = 321$ : $(321,32,31)$ , $(321,31,32)$ , $(321,32,21)$ , $(321,21,32)$ , $(321,31,21)$ , $(321,21,31)$ . $\sigma = 312$ : $(312,31,32)$ , $(312,32,31)$ . $\sigma = 231$ : $(231,21,31)$ , $(231,31,21)$ . Conjectured generalisation [showed to be false in the answer of @Ilya Bogdanov]: For $1\leq v \leq k$ , denote by $N_{2k-v}^{\sigma}$ the number of the triples $(\pi, s_{1},s_{2})$ for which $\pi$ is of length $2k-v$ and $s_{1}$ and $s_{2}$ have $v$ common elements. Is it true that $N_{2k-v}^{\sigma}>\binom{2k-v}{k}^{2}$ for every $\sigma$ . Note that for $v=k$ , we always have $1$ triple and the conditions holds trivially. When $v=0$ , we obviously have $N_{2k}^{\sigma} = \binom{2k}{k}^{2}$ for every $\sigma$ of length $k$ . LAST EDIT: 2020-04-13. Below is an interpretation of the right-hand side that may lead to a new, intuitive proof: Denote by $N_{2k-1}^{\sigma , \sigma'}$ the number of merges of length $2k-1$ for the two patterns $\sigma = \sigma_{1}\cdots\sigma_{k}$ and $\sigma'=\sigma'_{1}\cdots\sigma'_{k}$ of length $k$ . Furthermore, let $f(i,j,k) = \binom{i+j-2}{i-1}\binom{2k-i-j}{k-i}$ . Note that there exist exactly $f(i,j,k)$ merges of $\sigma$ and $\sigma'$ , which have a common element corresponding to $\sigma_{i}$ and $\sigma'_{j}$ . Consider a fixed $\sigma$ and $\sigma'$ chosen uniformly at random from $S_{k}$ . By linearity of expectation: $$\mathbb{E}(N_{2k-1}^{\sigma , \sigma'}) = \sum\limits_{i=1}^{k}\sum\limits_{j=1}^{k}[\mathbb{E}(f(\sigma_{i},\sigma'_{j},k))\cdot f(i,j,k)].$$ Since $\sigma'_{j}$ has a uniform distribution over $[k]$ , for every $j\in [k]$ , we have: $$\mathbb{E}(f(\sigma_{i},\sigma'_{j},k)) = \frac{1}{k}\sum\limits_{u=1}^{k}f(\sigma_{i},u,k) = \binom{2k-1}{k},$$ since for every fixed $\sigma_{i} = x\in [k]$ , $$\sum\limits_{u=1}^{k}f(x,u,k) = \sum\limits_{u=1}^{k}\binom{x+u-2}{x-1}\binom{2k-x-u}{k-x} = \binom{2k-1}{k}.$$ Then, $$ \mathbb{E}(N_{2k-1}^{\sigma , \sigma'}) = \frac{1}{k}\binom{2k-1}{k}\sum\limits_{i=1}^{k}\sum\limits_{j=1}^{k}f(i,j,k) = \frac{1}{k}\binom{2k-1}{k}\sum\limits_{i=1}^{k}\binom{2k-1}{k} = \\\frac{1}{k}\binom{2k-1}{k}k\binom{2k-1}{k} = \binom{2k-1}{k}^{2}.$$ Therefore, we have to prove that $$N_{2k-1}^{\sigma} > \mathbb{E}(N_{2k-1}^{\sigma , \sigma'}),$$ when $\sigma'$ is chosen uniformly at random. Is there a way to use this new form of the statement? Note: The same interpretation of the RHS, as the given expectation, can be obtained combinatorially, as well. 
Now provide the response and nothing else.
By @Max Alexeyev's solution above $N_{2k-1}^{\sigma}=tr(M_{k}(P_{\sigma}M_{k}P_{\sigma}^{-1}))$ . The eigenvalues and eigenvectors of $M_k$ are given here: Result attribution for eigenvalues of a matrix of Pascal-type .In particular $\mathbf{e}:=(1,\ldots,1)$ (the all-ones vector) is an eigenvector to the eigenvalue ${2k-1 \choose k}$ for $M_k$ (and then also for $P_\sigma M_k P_\sigma^{-1}$ ). Hence $\mathbf{e}$ is an eigenvector of $M_{k}(P_{\sigma}M_{k}P_{\sigma}^{-1})$ to the eigenvalue ${2k-1 \choose k}^2$ .The desired inequality follows (since the product of symmetric positive definite matrices has only positive eigenvalues). UPDATE : the inequality was already proved in the same way here (Lemma 4.3) https://doi.org/10.1016/j.ejc.2009.02.004
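As an illustrative numerical check (this code is not from the cited sources; it assumes the natural reading that the matrix has entries $(M_k)_{ij} = f(i,j,k)$, with $f$ as defined in the question), one can verify the strict inequality for small $k$ over all permutations:

import numpy as np
from math import comb
from itertools import permutations

def M(k):
    # (M_k)_{ij} = C(i+j-2, i-1) * C(2k-i-j, k-i), 1 <= i, j <= k   (assumed)
    return np.array([[comb(i + j - 2, i - 1) * comb(2 * k - i - j, k - i)
                      for j in range(1, k + 1)]
                     for i in range(1, k + 1)], dtype=float)

k = 4
Mk = M(k)
bound = comb(2 * k - 1, k) ** 2          # C(2k-1, k)^2
for sigma in permutations(range(k)):
    P = np.eye(k)[list(sigma)]           # permutation matrix; P^{-1} = P^T
    assert np.trace(Mk @ P @ Mk @ P.T) > bound

For k = 2 and the identity permutation this gives a trace of 10 against a bound of 9, matching the count of triples listed in the question.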
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/375131', 'https://mathoverflow.net', 'https://mathoverflow.net/users/85939/']}
jdg_86333
stackexchange
llm_judgeable_groundtruth_similarity
1936
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I can run a site to site VPN, NAT & Reflexive ACLs on a edge router (plus normal routing functions), what would be the use of running a stock firewall behind it? For instance, Core Switch --> Firewall --> Edge Router --> ISP connections Asked another way, why would you want to run VPNs & Reflexive ACLs on the firewall, and NAT + routing functions on the router, if the router can do it all? Now provide the response and nothing else.
What you don't mention, and what is important generally for these kinds of questions, is specifically what kind of hardware you have, what your requirements are (throughput, concurrent connections, cryptography load, etc.). What does a snapshot of your traffic look like, where's it coming from, and what, ideally, should be going in and out? (And you've also got to worry about worst-case scenarios -- depending on who you are supporting -- the world, external customers with SLA's, internal clients, two people in an office -- you might have to worry about worst-case just a little, or potentially it might be at the core of your design).

With that missing, here are some general reasons why I would avoid all-in-one solutions:
- all-in-one devices leave you with a single point of failure (or at least a collapsed data path that is more prone to being disabled by failure)
- most role devices -- routers, or switches, or firewalls -- particularly once you get out of the low-end commodity stuff -- are optimized for their primary roles.

This means hardware is optimized -- cryptography for IPSec or other VPN is offloaded to ASICs, switch backplanes become superhighways (or not), specialized chips handle layer 2 flows, etc. Cisco switches have used ASICs forever. Juniper and Cisco have been doing routing for a long time and know how to offload to hardware appropriately. ...and software is optimized. Components might work, but if they are additions, not integrated from the beginning, you'll see the result in flexibility, resilience, and stability. In many cases, companies have been acquired because they have devices that shore up a weakness and do one or two things very well -- over time features have been added (and added, and added), generally only in software, so that you, the customer, can buy a nice "well-rounded" product. You can see the consequences of this in everything from basic user interfaces to the rigidity of reporting tools for some components versus others.

If you have reflexive ACL's and VPN's and IPSec and packet capture and deep packet inspection on a device -- someone has to configure it, someone has to monitor it, and someone will have to, at some point, troubleshoot it. Are these things that come naturally, flow from the design of the UI, or do you have to call support just to explain menu verbiage, or to analyze a core dump? Ridiculous analogy number one -- you can put a spoiler on a civic, and it might even do a little something, but that spoiler is really just a decoration. (If your security is all about hitting the right boxes for audits, sometimes you can "get away" with decorations. But this isn't real security, and we know it.)

Having a product that does too much can show up in product support as well. If you have a problem on your Everything Device, how much support is there for it? How long have they been supporting the 96th and 97th add-on capability of this device? If it's an afterthought, this could be trouble. How many developers does the vendor have involved in regression testing, bug fixes, and new code releases? For hardware, often you can add modules that give you extra power in some areas. Cisco sells a lot of cards and modules that will offload crypto, or 'anti-X', or IDS/IPS work, and these go in everything from ASA's to 6500 core switches. Great idea? That depends. Can you afford another device, or will you do more with less?
And routers are still best at routing traffic, core switches are generally better off not fooling with access-lists, and firewalls are better off not running OSPF and BGP. To end, here's another ridiculous analogy -- an athlete can be a fantastic swimmer, or basketball player, or gymnast, but that doesn't mean that the same athlete should also fill up a position on the football or rugby roster. If you're an IT person, and you have a set of requirements, you can have one device take up all those checkboxes on your list -- fill all the available positions on your roster. But if you've got the budget and resources to buy and deploy a stronger solution for each need you have, I would certainly go for that as a more sustainable strategy.
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/1936', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/44/']}
jdg_86334
stackexchange
llm_judgeable_groundtruth_similarity
48408470
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I used to use ngshow when I wanted to hide or show divs in ionic 1. I'm using ionic 3 and I'm trying to make the button do nothing (no click event) if the indexCount == 0. I currently have... <div ng-if=" indexCount == 0"><button ion-button class="previous"> Previous </button><div ng-if=" indexCount != 0"> <button ion-button class="next" (click)="previousButtonClick"> Next </button> Now provide the response and nothing else.
I think that you want to disable or hide the 'previous' button on the page with indexCount == 0 and disable/hide the 'next' button on the last page, indexCount == length - 1, where length is the number of pages. You can either disable a button for consistent layout

<button [disabled]="indexCount == 0" ion-button class="previous">Previous</button>
<button [disabled]="indexCount >= length - 1" ion-button class="next" (click)="previousButtonClick()">Next</button>

or remove it with *ngIf

<button *ngIf="indexCount != 0" ion-button class="previous">Previous</button>
<button *ngIf="indexCount < length - 1" ion-button class="next" (click)="previousButtonClick()">Next</button>

Note that [disabled] prevents click events firing.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48408470', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8891391/']}
jdg_86335
stackexchange
llm_judgeable_groundtruth_similarity
47710572
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am currently working on a quiz application whereby i need to review answered questions after each quiz session. So, am passing the number of questions attempted as an int value with a bundle to the next activity. So, am looping forward and backwards against my array of questions in which the index limit is the int which was passed from previous activity . However, am having some trouble as it isn't looping correctly. How do i get round this? Question Class class Question{ // array of questions private String mQuestions [] = { "", "", "", "", "", "", "", "", "", "", "", "", // method returns number of questions int getLength(){ return mQuestions.length; } // method returns question from array textQuestions[] based on array index String getQuestion(int a) { return mQuestions[a]; }} MainActivity.java Intent intent = new Intent(MainActivity.this, ResultsActivity.class); intent.putExtra("quizNumber", mquizNumber);startActivity(intent); ResultsActivity.java Intent intent = new Intent(ResultsActivity.this, ReviewActivity.class); Bundle exBundle= getIntent().getExtras(); int intValue= 0; if (exBundle != null) { intValue = exBundle.getInt("quizNumber"); } intent.putExtra("quizNumber", intValue); startActivity(intent); ReviewActivity.java public class ReviewActivity extends AppCompatActivity { private Question mQuestionLibrary = new Question();private TextView mQuestionText; private int intValue; // int value from quiz activity to be passed here private int mQuestionNumber = 1; // current question number//navigation buttons for looping through array private Button mPrevious; private Button mNext; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); } setContentView(R.layout.activity_beginner_review); mQuestionText = (TextView) findViewById(R.id.txtQuestion); //setting buttons mPrevious = (Button) findViewById(R.id.previous); mNext = (Button) findViewById(R.id.next); // retrieve the number of attempted questions int value from quizActivity in Bundle Bundle exBundle= getIntent().getExtras(); intValue = 0; if (exBundle != null) { intValue = exBundle.getInt("quizNumber"); } mNext.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { updateQuestion(true); } }); mPrevious.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { updateQuestion(false); } }); }// logic for looping through array. private void updateQuestion(boolean forward) { animateAnswerButton(); if(forward && mQuestionNumber < intValue //using mQuestion.getLength() works for looping full length of array but i want to check against int data) mQuestionNumber++; else if (mQuestionNumber>0) mQuestionNumber--; mQuestionText.setText(mQuestionLibrary.getQuestion(mQuestionNumber)); }} Now provide the response and nothing else.
get_execution_role() is a function helper used in the Amazon SageMaker Examples GitHub repository. These examples were made to be executed from the fully managed Jupyter notebooks that Amazon SageMaker provides. From inside these notebooks, get_execution_role() will return the IAM role name that was passed in as part of the notebook creation. That allows the notebook examples to be executed without code changes. From outside these notebooks, get_execution_role() will raise an exception because it does not know what the role name is that SageMaker requires. To solve this issue, pass the IAM role name instead of using get_execution_role(). Instead of:

role = get_execution_role()
kmeans = KMeans(role=role,
                train_instance_count=2,
                train_instance_type='ml.c4.8xlarge',
                output_path=output_location,
                k=10,
                data_location=data_location)

you need to do:

role = 'role_name_with_sagemaker_permissions'
kmeans = KMeans(role=role,
                train_instance_count=2,
                train_instance_type='ml.c4.8xlarge',
                output_path=output_location,
                k=10,
                data_location=data_location)
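If what you have is only the role name and you need its full ARN, one illustrative way (an addition to the answer, not part of it) to resolve it programmatically is through the IAM API with boto3 -- the role name below is the same placeholder used above, and the code assumes the caller is allowed to call iam:GetRole:

import boto3

iam = boto3.client('iam')
role = iam.get_role(RoleName='role_name_with_sagemaker_permissions')['Role']['Arn']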
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47710572', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7338944/']}
jdg_86336
stackexchange
llm_judgeable_groundtruth_similarity
19417670
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a number of simple object types that need to be persisted to a database. I am using Spring JPA to manage this persistence. For each object type I need to build the following: import org.springframework.data.jpa.repository.JpaRepository;public interface FacilityRepository extends JpaRepository<Facility, Long> {}public interface FacilityService { public Facility create(Facility facility);}@Servicepublic class FacilityServiceImpl implements FacilityService { @Resource private FacilityRepository countryRepository; @Transactional public Facility create(Facility facility) { Facility created = facility; return facilityRepository.save(created); }} It occurred to me that it may be possible to replace the multiple classes for each object type with three generics based classes, thus saving a lot of boilerplate coding. I am not exactly sure how to go about it and in fact if it is a good idea? Now provide the response and nothing else.
First of all, I know we're raising the bar here quite a bit but this is already tremendously less code than you had to write without the help of Spring Data JPA. Second, I think you don't need the service class in the first place, if all you do is forward a call to the repository. We recommend using services in front of the repositories if you have business logic that needs orchestration of different repositories within a transaction or has other business logic to encapsulate. Generally speaking, you can of course do something like this:

interface ProductRepository<T extends Product> extends CrudRepository<T, Long> {

  @Query("select p from #{#entityName} p where ?1 member of p.categories")
  Iterable<T> findByCategory(String category);

  Iterable<T> findByName(String name);
}

This will allow you to use the repository on the client side like this:

class MyClient {

  @Autowired
  public MyClient(ProductRepository<Car> carRepository,
                  ProductRepository<Wine> wineRepository) { … }
}

and it will work as expected. However there are a few things to notice: This only works if the domain classes use single table inheritance. The only information about the domain class we can get at bootstrap time is that it will be Product objects. So for methods like findAll() and even findByName(…) the relevant queries will start with select p from Product p where… . This is due to the fact that the reflection lookup will never ever be able to produce Wine or Car unless you create a dedicated repository interface for it to capture the concrete type information. Generally speaking, we recommend creating repository interfaces per aggregate root. This means you don't have a repo for every domain class per se. Even more important, a 1:1 abstraction of a service over a repository is completely missing the point as well. If you build services, you don't build one for every repository (a monkey could do that, and we're no monkeys, are we? ;). A service exposes a higher-level API, is much more use-case driven and usually orchestrates calls to multiple repositories. Also, if you build services on top of repositories, you usually want to make the clients use the service instead of the repository (a classical example here is that a service for user management also triggers password generation and encryption, so that it would by no means be a good idea to let developers use the repository directly, as they'd effectively work around the encryption). So you usually want to be selective about who can persist which domain objects, to not create dependencies all over the place.

Summary: Yes, you can build generic repositories and use them with multiple domain types but there are quite strict technical limitations. Still, from an architectural point of view, the scenario you describe above shouldn't even pop up as this means you're facing a design smell anyway.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/19417670', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/808655/']}
jdg_86337
stackexchange
llm_judgeable_groundtruth_similarity
713786
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a .tsv file (values separated by tabs) with four values. So each line should have only three tabs and some text around each tab like this: value value2 value3 value4 But it looks that some lines are broken (there is more than three tabs). I need to find out these lines. I came up with following grep pattern. grep -v "^[^\t]+\t[^\t]+\t[^\t]+\t[^\t]+$" My thinking: first ^ matches the beggining [^\t]+ matches more than one "no tab character" \t matches single tab character $ matches end And than I just put it into right order with correct number of times. That should match correct lines. So I reverted it by -v option to get the wrong lines. But with the -v option it matches any line in the file and also some random text I tried that don't have any tabs inside. What is my mistake please? EDIT: I am using debian and bash. Now provide the response and nothing else.
As you already saw, \t isn't special for Basic Regular Expressions, and grep uses BRE by default. GNU grep, the default on Linux, has -P for Perl Compatible Regular Expressions, which lets you use \t for tab characters. However, what you want is much easier to do with awk . Just set the input field separator to a tab ( -F '\t' ) and then print any lines whose number of fields ( NF ) is not 3:

awk -F'\t' 'NF!=3' file

That will print all lines in file with more or less than three fields. To limit to only more than three fields, use:

awk -F'\t' 'NF>3' file
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/713786', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/102791/']}
jdg_86338
stackexchange
llm_judgeable_groundtruth_similarity
66696828
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Java 16 introduced Records , which help to reduce boilerplate code when writing classes that carry immutable data. When I try to use a Record as @ConfigurationProperties bean as follows I get the following error message: @ConfigurationProperties("demo")public record MyConfigurationProperties( String myProperty) {} ***************************APPLICATION FAILED TO START***************************Description:Parameter 0 of constructor in com.example.demo.MyConfigurationProperties required a bean of type 'java.lang.String' that could not be found. How can I use Records as @ConfigurationProperties ? Now provide the response and nothing else.
Answering my own question. The error above arises from Spring Boot not being able to construct the bean because of the lack of a no-argument constructor. Records implicitly declare a constructor with a parameter for every member. Spring Boot allows us to use the @ConstructorBinding annotation to enable property binding by constructor instead of setter methods (as stated in the docs and the answer to this question). This also works for records, so this works:

@ConfigurationProperties("demo")
@ConstructorBinding
public record MyConfigurationProperties(
        String myProperty) {}

Update: As of Spring Boot 2.6, using records works out of the box and @ConstructorBinding is not required anymore when the record has a single constructor. See the release notes.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/66696828', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3090506/']}
jdg_86339
stackexchange
llm_judgeable_groundtruth_similarity
246863
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: How do I store the last part of directory in a variable? For example I have the following path: A\B\C\D , I want to store D in variable like file_name=D . Now provide the response and nothing else.
Because of your Windows tag, I assume your cmd.exe has extensions built-in. If that is the case, you can use two of FOR 's special substitution variable references: Given a variable %A , containing a path and file, %~nA will output the file name and %~xA will output the file extension. The following example uses the pipe character | as a delimiter. The pipe is an invalid character for files and paths and should not appear in a path. This will allow for spaces in paths and filenames. See FOR /? for full details.

C:\> SET FSPATH=C:\WINDOWS\Temp\file.txt
C:\> echo %FSPATH%
C:\WINDOWS\Temp\file.txt
C:\> FOR /F "delims=|" %A IN ("%FSPATH%") do echo %~nxA
file.txt

Alternatively, should you not have extensions in your cmd.exe , you can use delims=\ , count the directory separators and split your path/file string based on that number.

Edit: Per your comment about the error. Above is an example on the command line. If you want to perform the same within a batch script, you need to double the % on the variables:

FOR /F "delims=|" %%A IN ("%FSPATH%") do echo %%~nxA

To use the value outside of the FOR loop, you would need to assign the value to another variable. The variable %%A is limited to the scope of FOR .

:: example.bat
SET FSPATH=C:\Windows\bfsvc.exe
FOR /F "delims=|" %%A IN ("%FSPATH%") DO (
    echo Inside loop %%~nxA
    SET SOMEFILE=%%~nxA
)
ECHO Outside loop %SOMEFILE%
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/246863', 'https://serverfault.com', 'https://serverfault.com/users/52749/']}
jdg_86340