stackexchange
llm_judgeable_groundtruth_similarity
31111032
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I used Counter on a list to compute this variable: final = Counter(event_container) print final gives: Counter({'fb_view_listing': 76, 'fb_homescreen': 63, 'rt_view_listing': 50, 'rt_home_start_app': 46, 'fb_view_wishlist': 39, 'fb_view_product': 37, 'fb_search': 29, 'rt_view_product': 23, 'fb_view_cart': 22, 'rt_search': 12, 'rt_view_cart': 12, 'add_to_cart': 2, 'create_campaign': 1, 'fb_connect': 1, 'sale': 1, 'guest_sale': 1, 'remove_from_cart': 1, 'rt_transaction_confirmation': 1, 'login': 1}) Now I want to convert final into a Pandas DataFrame , but when I'm doing: final_df = pd.DataFrame(final) but I got an error. I guess final is not a proper dictionary, so how can I convert final to a dictionary? Or is it an other way to convert final to a DataFrame ? Now provide the response and nothing else.
You can construct it using from_dict and pass param orient='index', then call reset_index so you get a two-column df:

In [40]:
from collections import Counter
d = Counter({'fb_view_listing': 76, 'fb_homescreen': 63, 'rt_view_listing': 50,
             'rt_home_start_app': 46, 'fb_view_wishlist': 39, 'fb_view_product': 37,
             'fb_search': 29, 'rt_view_product': 23, 'fb_view_cart': 22,
             'rt_search': 12, 'rt_view_cart': 12, 'add_to_cart': 2,
             'create_campaign': 1, 'fb_connect': 1, 'sale': 1, 'guest_sale': 1,
             'remove_from_cart': 1, 'rt_transaction_confirmation': 1, 'login': 1})
df = pd.DataFrame.from_dict(d, orient='index').reset_index()
df

Out[40]:
                          index   0
0                         login   1
1   rt_transaction_confirmation   1
2                  fb_view_cart  22
3                    fb_connect   1
4               rt_view_product  23
5                     fb_search  29
6                          sale   1
7               fb_view_listing  76
8                   add_to_cart   2
9                  rt_view_cart  12
10                fb_homescreen  63
11              fb_view_product  37
12            rt_home_start_app  46
13             fb_view_wishlist  39
14              create_campaign   1
15                    rt_search  12
16                   guest_sale   1
17             remove_from_cart   1
18              rt_view_listing  50

You can rename the columns to something more meaningful:

In [43]:
df = df.rename(columns={'index': 'event', 0: 'count'})
df

Out[43]:
                          event  count
0                         login      1
1   rt_transaction_confirmation      1
2                  fb_view_cart     22
3                    fb_connect      1
4               rt_view_product     23
5                     fb_search     29
6                          sale      1
7               fb_view_listing     76
8                   add_to_cart      2
9                  rt_view_cart     12
10                fb_homescreen     63
11              fb_view_product     37
12            rt_home_start_app     46
13             fb_view_wishlist     39
14              create_campaign      1
15                    rt_search     12
16                   guest_sale      1
17             remove_from_cart      1
18              rt_view_listing     50
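As a dependency-free sketch of what from_dict(orient='index').reset_index() is doing, each (key, count) pair of the Counter simply becomes one row; a truncated Counter and the column names event/count are used here just for illustration:

```python
from collections import Counter

# A truncated Counter, just for illustration.
final = Counter({'fb_view_listing': 76, 'fb_homescreen': 63, 'login': 1})

# The same reshaping as from_dict(orient='index').reset_index():
# one row per (key, count) pair, with named columns.
rows = [{'event': event, 'count': count} for event, count in final.items()]

# rows can be passed straight to pd.DataFrame(rows) to get the
# 'event'/'count' columns without a later rename step.
```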
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/31111032', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1754181/']}
jdg_83341
stackexchange
llm_judgeable_groundtruth_similarity
1085709
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some questions regarding the usage and significance of the synchronized keyword. What is the significance of the synchronized keyword? When should methods be synchronized ? What does it mean programmatically and logically? Now provide the response and nothing else.
The synchronized keyword is all about different threads reading and writing to the same variables, objects and resources. This is not a trivial topic in Java, but here is a quote from Sun: synchronized methods enable a simple strategy for preventing thread interference and memory consistency errors: if an object is visible to more than one thread, all reads or writes to that object's variables are done through synchronized methods. In a very, very small nutshell: When you have two threads that are reading and writing to the same 'resource', say a variable named foo , you need to ensure that these threads access the variable in an atomic way. Without the synchronized keyword, your thread 1 may not see the change thread 2 made to foo , or worse, it may only be half changed. This would not be what you logically expect. Again, this is a non-trivial topic in Java. To learn more, explore topics here on SO and the Interwebs about: Concurrency Java Memory Model Keep exploring these topics until the name "Brian Goetz" becomes permanently associated with the term "concurrency" in your brain.
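The idea translates beyond Java; as a rough, hedged analogy (not Java's actual monitor implementation), Python's threading.Lock plays the role of the lock that a synchronized method acquires, making each read-modify-write of the shared variable atomic with respect to the other threads:

```python
import threading

# Shared state touched by several threads, like the 'foo' variable above.
counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # like entering a synchronized block
            counter += 1    # the read-modify-write can no longer interleave

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; without it, updates can be lost
```

Without the `with lock:` line the final count is frequently below 40000, which is exactly the "half changed" interference the answer describes.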
{}
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/1085709', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/124339/']}
jdg_83342
stackexchange
llm_judgeable_groundtruth_similarity
711753
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to obfuscate a large amount of data. I've created a list of words (tokens) which I want to replace and I am replacing the words one by one using the StringBuilder class, like so: var sb = new StringBuilder(one_MB_string); foreach(var token in tokens) { sb.Replace(token, "new string"); } It's pretty slow! Are there any simple things that I can do to speed it up? tokens is a list of about one thousand strings, each 5 to 15 characters in length. Now provide the response and nothing else.
Instead of doing replacements in a huge string (which means that you move around a lot of data), work through the string and replace a token at a time. Make a list containing the next index for each token, locate the token that comes first, then copy the text up to the token to the result, followed by the replacement for the token. Then check where the next occurrence of that token is in the string to keep the list up to date. Repeat until there are no more tokens found, then copy the remaining text to the result. I made a simple test: this method did 125000 replacements on a 1000000 character string in 208 milliseconds. Token and TokenList classes:

public class Token
{
    public string Text { get; private set; }
    public string Replacement { get; private set; }
    public int Index { get; set; }

    public Token(string text, string replacement)
    {
        Text = text;
        Replacement = replacement;
    }
}

public class TokenList : List<Token>
{
    public void Add(string text, string replacement)
    {
        Add(new Token(text, replacement));
    }

    private Token GetFirstToken()
    {
        Token result = null;
        int index = int.MaxValue;
        foreach (Token token in this)
        {
            if (token.Index != -1 && token.Index < index)
            {
                index = token.Index;
                result = token;
            }
        }
        return result;
    }

    public string Replace(string text)
    {
        StringBuilder result = new StringBuilder();
        foreach (Token token in this)
        {
            token.Index = text.IndexOf(token.Text);
        }
        int index = 0;
        Token next;
        while ((next = GetFirstToken()) != null)
        {
            if (index < next.Index)
            {
                result.Append(text, index, next.Index - index);
                index = next.Index;
            }
            result.Append(next.Replacement);
            index += next.Text.Length;
            next.Index = text.IndexOf(next.Text, index);
        }
        if (index < text.Length)
        {
            result.Append(text, index, text.Length - index);
        }
        return result.ToString();
    }
}

Example of usage:

string text = "This is a text with some words that will be replaced by tokens.";
var tokens = new TokenList();
tokens.Add("text", "TXT");
tokens.Add("words", "WRD");
tokens.Add("replaced", "RPL");
string result = tokens.Replace(text);
Console.WriteLine(result);

Output:

This is a TXT with some WRD that will be RPL by tokens.

Note: This code does not handle overlapping tokens. If you for example have the tokens "pineapple" and "apple", the code doesn't work properly.

Edit: To make the code work with overlapping tokens, replace this line:

next.Index = text.IndexOf(next.Text, index);

with this code:

foreach (Token token in this)
{
    if (token.Index != -1 && token.Index < index)
    {
        token.Index = text.IndexOf(token.Text, index);
    }
}
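The same single-pass strategy can be sketched in a few lines of Python (a hypothetical replace_tokens helper, not a line-for-line translation of the C# above; it refreshes every stale index after each match, so overlapping tokens are handled):

```python
def replace_tokens(text, replacements):
    """Single pass over text: always consume the earliest pending token."""
    # Next occurrence of each token (-1 means no more occurrences).
    positions = {tok: text.find(tok) for tok in replacements}
    out, index = [], 0
    while True:
        live = {t: i for t, i in positions.items() if i != -1}
        if not live:
            break
        token = min(live, key=live.get)   # earliest match wins
        start = live[token]
        out.append(text[index:start])     # copy untouched text
        out.append(replacements[token])   # then the replacement
        index = start + len(token)
        # Refresh every index the cursor has passed, so overlapping
        # tokens (e.g. "pineapple" vs "apple") stay correct.
        for t, i in positions.items():
            if i != -1 and i < index:
                positions[t] = text.find(t, index)
    out.append(text[index:])              # copy the remaining text
    return ''.join(out)
```

Appending pieces to a list and joining once at the end mirrors the StringBuilder usage: the untouched text is copied exactly once instead of being shifted on every replacement.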
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/711753', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/77775/']}
jdg_83343
stackexchange
llm_judgeable_groundtruth_similarity
9343594
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a public async void Foo() method that I want to call from synchronous method. So far all I have seen from MSDN documentation is calling async methods via async methods, but my whole program is not built with async methods. Is this even possible? Here's one example of calling these methods from an asynchronous method: Walkthrough: Accessing the Web by Using Async and Await (C# and Visual Basic) Now I'm looking into calling these async methods from sync methods. Now provide the response and nothing else.
Asynchronous programming does "grow" through the code base; it has been compared to a zombie virus. The best solution is to allow it to grow, but sometimes that's not possible. I have written a few types in my Nito.AsyncEx library for dealing with a partially-asynchronous code base. There's no solution that works in every situation, though.

Solution A

If you have a simple asynchronous method that doesn't need to synchronize back to its context, then you can use Task.WaitAndUnwrapException:

var task = MyAsyncMethod();
var result = task.WaitAndUnwrapException();

You do not want to use Task.Wait or Task.Result because they wrap exceptions in AggregateException. This solution is only appropriate if MyAsyncMethod does not synchronize back to its context. In other words, every await in MyAsyncMethod should end with ConfigureAwait(false). This means it can't update any UI elements or access the ASP.NET request context.

Solution B

If MyAsyncMethod does need to synchronize back to its context, then you may be able to use AsyncContext.RunTask to provide a nested context:

var result = AsyncContext.RunTask(MyAsyncMethod).Result;

*Update 4/14/2014: In more recent versions of the library the API is as follows:

var result = AsyncContext.Run(MyAsyncMethod);

(It's OK to use Task.Result in this example because RunTask will propagate Task exceptions.)

The reason you may need AsyncContext.RunTask instead of Task.WaitAndUnwrapException is because of a rather subtle deadlock possibility that happens on WinForms/WPF/SL/ASP.NET:

1. A synchronous method calls an async method, obtaining a Task.
2. The synchronous method does a blocking wait on the Task.
3. The async method uses await without ConfigureAwait.

The Task cannot complete in this situation because it only completes when the async method is finished; the async method cannot complete because it is attempting to schedule its continuation to the SynchronizationContext, and WinForms/WPF/SL/ASP.NET will not allow the continuation to run because the synchronous method is already running in that context. This is one reason why it's a good idea to use ConfigureAwait(false) within every async method as much as possible.

Solution C

AsyncContext.RunTask won't work in every scenario. For example, if the async method awaits something that requires a UI event to complete, then you'll deadlock even with the nested context. In that case, you could start the async method on the thread pool:

var task = Task.Run(async () => await MyAsyncMethod());
var result = task.WaitAndUnwrapException();

However, this solution requires a MyAsyncMethod that will work in the thread pool context. So it can't update UI elements or access the ASP.NET request context. And in that case, you may as well add ConfigureAwait(false) to its await statements and use Solution A.

Update, 2019-05-01: The current "least-worst practices" are in an MSDN article here.
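For comparison, Python's asyncio has the same sync-over-async problem; as a loose analogy (assuming nothing about the .NET internals above), asyncio.run plays roughly the role of a nested context runner: it creates a private event loop, runs the coroutine to completion, and returns its result (or raises its exception) synchronously:

```python
import asyncio

async def my_async_method():
    # Stands in for real async work (network call, timer, etc.).
    await asyncio.sleep(0)
    return 42

def sync_caller():
    # Block until the coroutine finishes, then hand back its result.
    # This is the Python equivalent of "call async from sync" done safely:
    # a fresh event loop is created and torn down for the call.
    return asyncio.run(my_async_method())

result = sync_caller()
print(result)  # 42
```

Note the same caveat applies as in Solution B: asyncio.run cannot be called from code that is already running inside an event loop, much like the nested-context approach has scenarios where it deadlocks.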
{}
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/9343594', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/283055/']}
jdg_83344
stackexchange
llm_judgeable_groundtruth_similarity
11604946
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: It doesn't seem to work: $ref = new ReflectionObject($obj);if($ref->hasProperty('privateProperty')){ print_r($ref->getProperty('privateProperty'));} It gets into the IF loop, and then throws an error: Property privateProperty does not exist :| $ref = new ReflectionProperty($obj, 'privateProperty') doesn't work either... The documentation page lists a few constants, including IS_PRIVATE . How can I ever use that if I can't access a private property lol? Now provide the response and nothing else.
class A
{
    private $b = 'c';
}

$obj = new A();
$r = new ReflectionObject($obj);
$p = $r->getProperty('b');
$p->setAccessible(true); // <--- make the private property accessible before you read its value
var_dump($p->getValue($obj));
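The same kind of reflection escape hatch exists in other languages; as a loose Python analogy (not PHP), "private" attributes are merely name-mangled, so the object's attribute dictionary exposes them much like setAccessible(true) does:

```python
class A:
    def __init__(self):
        self.__b = 'c'   # name-mangled: actually stored as _A__b

obj = A()

# vars(obj) is the instance's attribute dict; the mangled name is visible there.
print(vars(obj)['_A__b'])  # 'c'
```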
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11604946', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/376947/']}
jdg_83345
stackexchange
llm_judgeable_groundtruth_similarity
23510184
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am getting the following error: There was no endpoint listening at net.pipe://localhost/ServiceModelSamples/service that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details. I am calling WCF self hosted service inside windows service from another WCF call as follows. _host = new ServiceHost(typeof(CalculatorService), new Uri[] { new Uri("net.pipe://localhost/PINSenderService") }); _host.AddServiceEndpoint(typeof(ICalculator), new NetNamedPipeBinding(), ""); _host.Open(); ChannelFactory<ICalculator> factory = new ChannelFactory<ICalculator>( new NetNamedPipeBinding(NetNamedPipeSecurityMode.None), new EndpointAddress("net.pipe://localhost/PINSenderService")); ICalculator proxy = factory.CreateChannel(); proxy.SendPin(pin); ((IClientChannel)proxy).Close(); factory.Close(); Self-Hosted WCF Service namespace PINSender { // Define a service contract. public interface ICalculator { [OperationContract] void SendPin(string pin); } // Implement the ICalculator service contract in a service class. public class CalculatorService : ICalculator { // Implement the ICalculator methods. public void SendPin(string pin) { } } public class CalculatorWindowsService : ServiceBase { public ServiceHost serviceHost = null; public CalculatorWindowsService() { // Name the Windows Service ServiceName = "PINSenderService"; } public static void Main() { ServiceBase.Run(new CalculatorWindowsService()); } // Start the Windows service. protected override void OnStart(string[] args) { if (serviceHost != null) { serviceHost.Close(); } // Create a ServiceHost for the CalculatorService type and // provide the base address. serviceHost = new ServiceHost(typeof(CalculatorService)); // Open the ServiceHostBase to create listeners and start // listening for messages. 
serviceHost.Open(); } protected override void OnStop() { if (serviceHost != null) { serviceHost.Close(); serviceHost = null; } } } // Provide the ProjectInstaller class which allows // the service to be installed by the Installutil.exe tool [RunInstaller(true)] public class ProjectInstaller : Installer { private ServiceProcessInstaller process; private ServiceInstaller service; public ProjectInstaller() { process = new ServiceProcessInstaller(); process.Account = ServiceAccount.LocalSystem; service = new ServiceInstaller(); service.ServiceName = "PINSenderService"; Installers.Add(process); Installers.Add(service); } }} App.Config <?xml version="1.0" encoding="utf-8" ?><configuration><system.serviceModel><services> <service name="PINSender.CalculatorService" behaviorConfiguration="CalculatorServiceBehavior"> <host> <baseAddresses> <add baseAddress="net.pipe://localhost/PINSenderService"/> </baseAddresses> </host> <endpoint address="" binding="netNamedPipeBinding" contract="PINSender.ICalculator" /> <endpoint address="mex" binding="mexNamedPipeBinding" contract="IMetadataExchange" /> </service></services><behaviors> <serviceBehaviors> <behavior name="CalculatorServiceBehavior"> <serviceMetadata httpGetEnabled="False" /> <serviceDebug includeExceptionDetailInFaults="False"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel></configuration> Now provide the response and nothing else.
Make sure IIS is configured to use the Windows Process Activation Service (WAS):

1. From the Start menu, choose Control Panel.
2. Select Programs, then Programs and Features, or in Classic view, select Programs and Features.
3. Click Turn Windows Features on or off.
4. Under Features Summary, click Add Features.
5. Expand the Microsoft .NET Framework 3.0 (or 3.5) node and check the Windows Communication Foundation Non-HTTP Activation feature.

Make sure the Net.Pipe Listener Adapter service is running:

1. Go to Run and open Services.msc.
2. Make sure the Net.Pipe Listener Adapter service is running.

In your App.config you have used a baseAddress with http; try changing that to net.pipe:

<baseAddresses>
  <add baseAddress="net.pipe://localhost/ServiceModelSamples/service"/>
</baseAddresses>

See NetNamedPipeBinding for more details.

Update: You need to add a bindingConfiguration in the endpoint, like:

<endpoint address=""
          binding="netNamedPipeBinding"
          contract="Microsoft.ServiceModel.Samples.ICalculator"
          bindingConfiguration="Binding1" />

and add the actual bindingConfiguration, like:

<bindings>
  <!-- Following is the expanded configuration section for a
       NetNamedPipeBinding. Each property is configured with the
       default value. -->
  <netNamedPipeBinding>
    <binding name="Binding1"
             closeTimeout="00:01:00"
             openTimeout="00:01:00"
             receiveTimeout="00:10:00"
             sendTimeout="00:01:00"
             transactionFlow="false"
             transferMode="Buffered"
             transactionProtocol="OleTransactions"
             hostNameComparisonMode="StrongWildcard"
             maxBufferPoolSize="524288"
             maxBufferSize="65536"
             maxConnections="10"
             maxReceivedMessageSize="65536">
      <security mode="Transport">
        <transport protectionLevel="EncryptAndSign" />
      </security>
    </binding>
  </netNamedPipeBinding>
</bindings>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23510184', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1548183/']}
jdg_83346
stackexchange
llm_judgeable_groundtruth_similarity
1728789
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Unable to find the function which is bounded but not Riemann integrable. Now provide the response and nothing else.
Perhaps think about the binomial expansions of $\left(1 + \frac{1}{n}\right)^n$ and $\left(1 - \frac{1}{n}\right)^n$. The first two terms are $1 + n \frac{1}{n}$ and $1 - n \frac{1}{n}$ respectively. And after that the terms in $\left(1 + \frac{1}{n}\right)^n$ are all positive, whereas the terms in $\left(1 - \frac{1}{n}\right)^n$ alternate. So the difference between the two limits is going to be at least 2.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1728789', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/319679/']}
jdg_83347
stackexchange
llm_judgeable_groundtruth_similarity
2491196
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to prove the convergence of the series: $$\sum_n \dfrac{(\ln(n))^n}{n!}$$ I tried to use the d'Alembert's rule:If we write $u_n = \dfrac{(\ln(n))^n}{n!}$ then we get $$\dfrac{u_{n+1}}{u_n} = \dfrac{1}{n+1} \times \dfrac{(\ln(n+1))^{n+1}}{(\ln(n))^n}$$ Is that possible to show that $\dfrac{u_{n+1}}{u_n} \rightarrow l <1$ ? Now provide the response and nothing else.
Since $\log(n)\leq \sqrt{n}$ and $n!\geq \frac{n^n}{e^n}$ we have$$ 0\leq \sum_{n\geq 1}\frac{\log(n)^n}{n!}\leq \sum_{n\geq 1}\frac{e^n}{n^{n/2}}=\sum_{n\geq 1}\frac{1}{\left(\frac{1}{e}\sqrt{n}\right)^n} $$and convergence by asymptotic comparison with a geometric series.
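The term-by-term bound can be sanity-checked numerically (a quick sketch; the helper name term and the cutoff of 60 terms are arbitrary choices for illustration):

```python
import math

def term(n):
    # The n-th term of the series: log(n)^n / n!
    return math.log(n) ** n / math.factorial(n)

# Check term(n) <= (e / sqrt(n))^n, the bound derived from
# log(n) <= sqrt(n) and n! >= n^n / e^n.
ok = all(term(n) <= (math.e / math.sqrt(n)) ** n for n in range(1, 60))

partial = sum(term(n) for n in range(1, 60))
print(ok, partial)  # the partial sums settle well below 1
```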
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2491196', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/406023/']}
jdg_83348
stackexchange
llm_judgeable_groundtruth_similarity
40511202
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following data frame df : v1 v2 v3 v41 1 5 7 42 2 6 10 3 And I want to obtain the following data frame df2 multiplying columns v1*v3 and v2*v4: v1 v2 v3 v4 v1v3 v2v41 1 5 7 4 7 202 2 6 10 3 20 18 How can I do that using dplyr ? Using mutate_each ? I need a solution that can be generalized to a large number of variables and not only 4 (v1 to v4).This is the code to generate the example: v1 <- c(1, 2)v2 <- c(5,6)v3 <- c(7, 10)v4 <- c(4, 3)df <- data.frame(v1, v2, v3, v4)v1v3 <- c(v1 * v3)v2v4 <- c(v2 * v4)df2 <- cbind(df, v1v3, v2v4) Now provide the response and nothing else.
You are really close.

df2 <- df %>%
  mutate(v1v3 = v1 * v3,
         v2v4 = v2 * v4)

Such a beautifully simple language, right? For more great tricks please see here.

EDIT: Thanks to @Facotton's pointer to this answer: https://stackoverflow.com/a/34377242/5088194 , here is a tidy approach to resolving this issue. It keeps one from having to hard-code a line for each new column desired. While it is a bit more verbose than the base R approach, the logic is at least more immediately transparent/readable. It is also worth noting that there must be at least half as many rows as there are columns for this approach to work.

# prep the product column names (also acting as row numbers)
df <- df %>%
  mutate(prod_grp = paste0("v", row_number(), "v", row_number() + 2))

# convert data to tidy format and pair columns to be multiplied together
tidy_df <- df %>%
  gather(column, value, -prod_grp) %>%
  mutate(column = as.numeric(sub("v", "", column)),
         pair = column - 2) %>%
  mutate(pair = if_else(pair < 1, pair + 2, pair))

# summarize the products for each column
prod_df <- tidy_df %>%
  group_by(prod_grp, pair) %>%
  summarize(val = prod(value)) %>%
  spread(prod_grp, val) %>%
  mutate(pair = paste0("v", pair, "v", pair + 2)) %>%
  rename(prod_grp = pair)

# put the original frame and summary frames together
final_df <- df %>%
  left_join(prod_df) %>%
  select(-prod_grp)
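To double-check the expected values independently of dplyr, the pairing rule (column v_i multiplied by column v_{i+2}) can be sketched in plain Python (hypothetical code, just to verify the arithmetic from the question):

```python
# Columns as plain lists, mirroring the example data frame.
df = {'v1': [1, 2], 'v2': [5, 6], 'v3': [7, 10], 'v4': [4, 3]}

# Pair v1 with v3, v2 with v4, etc.: i runs over the first half of columns.
n_pairs = len(df) // 2
for i in range(1, n_pairs + 1):
    a, b = f'v{i}', f'v{i + 2}'
    df[a + b] = [x * y for x, y in zip(df[a], df[b])]

print(df['v1v3'], df['v2v4'])  # [7, 20] [20, 18]
```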
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40511202', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3225661/']}
jdg_83349
stackexchange
llm_judgeable_groundtruth_similarity
39664141
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Today I'm trying to make my Docker environment working ! In this matter I've encountered quite a problem : my MySQL container, extending the MySQL official Docker image, seems to fail to create the root account, despite the setting of the MYSQL_ROOT_PASSWORD environment variable in my docker-compose`.yml . I copy here my Docker files : docker-compose.yml Most of the environment variables are used in scripts and applications independantly from the MySQL server. Only MYSQL_ROOT_PASSWORD deserves interest (perhaps this statement is the cause of my failure to make this work..! ). mysql: container_name: my_mysql build: mysql environment: - MYSQL_DATABASES=my-database - MYSQL_ROOT_PASSWORD=root - MYSQL_HOST=127.0.0.1 - MYSQL_PORT=33306 - MYSQL_USER=user - MYSQL_PASSWORD=password - MYSQL_MY_DATABASE=my-database ports: - "33306:3306" volumes: - "./volumes/mysql:/var/lib/mysql" mysql/Dockerfile The dos2unix command is meant to convert Windows line endings to Unix. The custom entrypoint is named differently to avoid overidding the default mysql entrypoint script. 
FROM mysql:5.7MAINTAINER Wonderful Dev <[email protected]>RUN apt-get update && apt-get install -y dos2unixCOPY conf.d/custom.cnf /etc/mysql/conf.d/COPY docker-entrypoint-initdb.d/databases.sh /docker-entrypoint-initdb.d/databases.shCOPY my-entrypoint.sh /my-entrypoint.shRUN dos2unix /docker-entrypoint-initdb.d/databases.sh && dos2unix /my-entrypoint.sh && dos2unix /etc/mysql/conf.d/custom.cnf && apt-get --purge remove -y dos2unix && rm -rf /var/lib/apt/lists/*RUN chmod a+x /docker-entrypoint-initdb.d/databases.sh && chown root:root /docker-entrypoint-initdb.d/databases.shRUN chmod a+x /my-entrypoint.sh && chown root:root /my-entrypoint.shENTRYPOINT ["/entrypoint.sh", "/my-entrypoint.sh"]CMD ["mysqld"] mysql/my-entrypoint.sh This chmod command is meant to avoid this kind of error #!bin/bashchmod 664 /etc/mysql/conf.d/custom.cnfexec "$@" mysql/config/custom.cnf [mysqld]default-storage-engine=INNODBinit-connect='SET NAMES utf8'character-set-server=utf8collation-server=utf8_unicode_cibind-address = 0.0.0.0skip-external-lockingkey_buffer = 16Mmax_allowed_packet = 16Mthread_stack = 192Kthread_cache_size = 8query_cache_limit = 1Mquery_cache_size = 16Mexpire_logs_days = 10max_binlog_size = 100M[mysqldump]quickquote-namesmax_allowed_packet = 16M[isamchk]key_buffer = 16M All this stuff is working, except that I haven't any root access, blocking me from creating databases or mysql users for my apps ! Thank you for any advice :D ! EDIT : After days of investigation, and an issue in docker-library/mysql repository, we found out that one of the problems was a deprecated configuratio key in custom.cnf . key_buffer => key_buffer_size Now the container runs a few seconds after building, then crash. The main clue in the logs is that line : Version: '5.7.15' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server (GPL) I already tried a proposed solution to configure manually the port in custom.cnf but it does not work. 
The container tries to connect to MySQL and crash due to the port. The problem was not pretty visible because when launching the container a second time, the initialization was skipped, the good port was configured, and the server worked fine. The thing is that due to the crash, the end of the initialization was not executed, including my scripts and the databases creation along with users. So I'd like to find out why this damn mysqld is listening on port 0 the first time I launch the container after building. If you have a clue, I'd be glad to here about it ! Now provide the response and nothing else.
Finally I figured it out, thanks to the guys on this issue: https://github.com/docker-library/mysql/issues/82

The culprit was the MYSQL_HOST environment variable, which was causing the mysqld --initialize-insecure command to fail. The solution is to replace 127.0.0.1 with localhost, which produces the following docker-compose.yml:

mysql:
  container_name: my_mysql
  build: mysql
  environment:
    - MYSQL_DATABASES=my-database
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_HOST=localhost
    - MYSQL_PORT=33306
    - MYSQL_USER=user
    - MYSQL_PASSWORD=password
    - MYSQL_MY_DATABASE=my-database
  ports:
    - "33306:3306"
  volumes:
    - "./volumes/mysql:/var/lib/mysql"

I hope it will help some people :D !
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39664141', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4165295/']}
jdg_83350
stackexchange
llm_judgeable_groundtruth_similarity
53428120
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a relative large typescript project, I'm using ts-node to run node testing and examples. As far as I understand, ts-node will compile ts files to js files and execute. Recently I heard about deno , which is a typescript runtime. I tried a few examples in typescript, which works using ts-node . I ran the example with deno , there were many compile messages printed in the console, then execute the code. And later I found there's cache files in /username/.deno . I don't feel the deno execution is faster than ts-node It seems both deno and ts-node will compile and run using cache. What's the difference between them? Now provide the response and nothing else.
TL;DR Deno is more like Node than like ts-node, i.e. it is a JS runtime based on V8. Unlike Node, Deno contains the TypeScript compiler. Deno is not part of the Node/npm ecosystem. ts-node on the other hand is a Node.js module that uses the TypeScript compiler to transpile TypeScript code and run it in Node. ts-node is part of the Node/npm ecosystem. Deno is fast. See below. Deno and ts-node similarities They both run TypeScript code They both run on Linux, Mac and Windows (but ts-node also on SmartOS and AIX) They both use the Google V8 JavaScript engine (ts-node via node that it uses under the hood) Deno and ts-node differences ts-node ts-node is a Node.js module it is written in Node.js it's installed with npm it uses the TypeScript compiler as a peer dependency it installs its own dependencies as a runtime it uses Node which is written in C++ using libuv Deno deno is a standalone executable it doesn't use Node.js it is distributed as a single binary it contains the TypeScript compiler as a V8 snapshot it has no dependencies it is a runtime written in Rust using Tokio Maturity ts-node ts-node relies on the Node.js runtime so it is fair to include it here: Node.js was released in 2009, the latest LTS version is 14.15.0 npm was released in 2010, version included in Node LTS is 6.14.8 ts-node was released in 2015, the latest version is 9.0.0 Deno Deno is itself a runtime so it doesn't use anything else: Deno was released in 2018, the latest version is 1.5.2 check latest version here Popularity GitHub: nodejs/node TypeStrong/ts-node denoland/deno Stack Overflow: Questions tagged 'node.js' : 358,962 Questions tagged 'typescript' : 132,043 Questions tagged 'ts-node' : 199 Questions tagged 'deno' : 320 Libraries ts-node You can use all Node libraries available on npm (currently there are 955,263 packages on npm, not all of them for Node but still a lot) The Node libraries that are available on npm even if they were originally written in TypeScript are usually published 
in a form transpiled to JavaScript, with additional type definitions in *.d.ts files (included in the npm package or installed separately from the @types namespace).

Deno

There are 1256 third-party modules on https://deno.land/x/ and 56 libraries and tools on https://github.com/denolib/awesome-deno#modules (I didn't check if all are the same). The Deno libraries are just TypeScript files.

Installation difference

ts-node:

- you install Node.js from https://nodejs.org/en/download/ ; v10 (the current LTS) on Mac and Linux is around 65MB in 4500 files
- you install typescript and ts-node with their dependencies with npm: npm install typescript ts-node ; it installs 10 npm modules and puts 44MB in 212 files into node_modules

Deno:

- you download a single binary from https://github.com/denoland/deno/releases ; the uncompressed binary of v0.3.6 is around 47MB on Mac and Linux and 41MB on Windows

Your code differences

ts-node:

- your code works the same as if it were transpiled with tsc and run with node (because under the hood it is)
- you can use the Node API
- you can use all built-in Node modules
- you can use modules from npm
- you can import files using relative paths (usually without the .ts suffix)
- you can import the dependencies installed with npm (or yarn) in node_modules

Deno:

- your code doesn't work the same as in Node (because it isn't run with Node)
- you use the Deno API
- you can use the Deno built-in modules
- you can use other Deno modules that are available
- you can import files using relative paths (always with the .ts suffix!)
- you can import URLs directly from the Web (no need for npm install)

Examples

Here is an example of publishing a minimal library written in TypeScript and using it.

Creating and using a TypeScript library with Node and ts-node

This is what I am doing right now with an example project on: https://github.com/rsp/node-ts-hello

Creating the library:

- find a name that is free on npm (no longer enough, see below)
- create the repo on GitHub
- create package.json with npm init
- install the TypeScript compiler with npm install typescript
- decide if you're keeping package-lock.json in the repo (there are pros and cons)
- create a src dir where you will keep the TypeScript files
- add hello.ts to src
- add a tsconfig.json file and make sure to:
  - add "src/**/*" to "include"
  - add dependencies and your own types to "paths"
  - add "outDir": "dist" to put the JS files in a known place
- add the dist directory to .gitignore so that compiled files are not in git
- add the same as in .gitignore but without dist to .npmignore (or otherwise you will not publish the most important files, see below)
- add "declaration": true so you have *.d.ts files generated
- add "main": "dist/hello.js" in package.json (note the "js" suffix)
- add "types": "dist/hello.d.ts" in package.json (note the "ts" suffix)
- add "build": "tsc" to package.json (watch out for redundant files, see below)
- login with npm login (you shouldn't be logged in all the time; see: Now Pushing Malware: NPM package dev logins slurped by hacked tool popular with coders)
- compile the project with npm run build
- publish the package with npm publish
- when you get npm ERR! publish Failed PUT 401 you need to login with npm login
- when you get npm ERR! publish Failed PUT 403 your package may be "too similar to existing packages"; try renaming it in package.json, rename the repo, and update all links in the readme, issues, etc.
- logout from npm with npm logout
- see your ~/.npmrc and make sure you have nothing like this left: //registry.npmjs.org/:_authToken=...

Using the library in another project using ts-node:

- create a new directory
- create a package.json file with npm init (so that you can install dependencies locally for your new program)
- install our library with npm install node-ts-hello
- optionally install ts-node with npm install typescript ts-node (unless it's installed globally)
- add a hi.ts file that imports our library with:

    import { hello } from 'node-ts-hello';
    hello('TS');

- run it with npx ts-node hi.ts (if ts-node was installed locally) or ts-node hi.ts (if ts-node was installed globally)
- if you get errors, see below

Potential problems: I simplified the above a little bit; my actual process of creating that library is described here.

Creating and using a TypeScript library with Deno

This is what I am doing right now with an example project on: https://github.com/rsp/deno-hello

Creating the library:

- create the repo on GitHub
- put hello.ts in the repo

Using the library:

Create a file hi.ts with the contents:

    import { hello } from 'https://raw.githubusercontent.com/rsp/deno-hello/master/hello.ts';
    hello('TS');

Run your program with deno run hi.ts

The first run will print:

    $ deno run hi.ts
    Compiling file:///Users/rsp/talks/deno/hello-deno-test/hi.ts
    Downloading https://raw.githubusercontent.com/rsp/deno-hello/master/hello.ts
    Compiling https://raw.githubusercontent.com/rsp/deno-hello/master/hello.ts
    Hello, TS!

The second run:

    $ deno run hi.ts
    Hello, TS!

If you change hi.ts it will be recompiled but the dependencies will not get downloaded again:

    $ deno run hi.ts
    Compiling file:///Users/rsp/talks/deno/hello-deno-test/hi.ts
    Hello, TS!

(Note that touch hi.ts will not be enough; you need to make actual changes, because Deno checks the file checksum, not the timestamp.)

Speed

ts-node

The speed of starting the ts-node version of our hi.ts from the examples above:

    $ time npx ts-node hi.ts
    Hello, TS!

    real    0m0.904s
    user    0m1.300s
    sys     0m0.083s

This is after the dependencies are already installed and after running several times to make sure that all of the caching works. Almost one second.

Deno

The speed of starting the Deno version of our hi.ts from the examples above:

    $ time deno run hi.ts
    Hello, TS!

    real    0m0.028s
    user    0m0.010s
    sys     0m0.015s

This is also after the dependencies are already installed and after running several times to make sure that all of the caching works. More than a 32x speed improvement.

Summary

Deno should be compared with Node rather than with ts-node, because Deno is an entirely new runtime while ts-node is a module for Node, so programs run with ts-node really use the Node runtime.

It is a very young project but it has already gotten a lot of traction. It doesn't have as much documentation or as many libraries as Node, but that means it may be the best time to get involved, because when it gets more popular (and I think it will, for many reasons that are beyond the scope of this answer) people who already have experience with it will be needed on the market, as was the case with Node.

The program startup speed is already very impressive and I expect more improvements there.

The development speed of using single files with no need for configuration like package.json or node_modules, together with the possibility of importing dependencies directly from URLs (like on the frontend), will make it possible to work in a different way, both for end-user code and for libraries. We'll see how it all works in practice, but it already looks promising.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/53428120', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1828515/']}
jdg_83351
stackexchange
llm_judgeable_groundtruth_similarity
2237633
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm having difficulty with a seemingly simple question. "Is it possible to have a finite state machine that detects if a bit string of arbitrary length has the same number of zeros as ones? If so what would it look like"? In trying to solve this problem I have come up with a variety of finite state machines. However, none have been able to solve the problem for a bit string of arbitrary length. Any suggestions? Thank you! Now provide the response and nothing else.
This is clearly possible with Turing machines, so let me assume that you are asking about finite state automata. In this case, it is not possible. The reason is the pumping lemma. Consider the strings $0^n1^n$, where $n$ is larger than the number of states in the machine. This string must be accepted by the machine, and so it can be pumped. But by the pumping lemma, the effect of pumping will be to insert extra $0$s into the string, making it unbalanced with respect to the criterion, but still accepted by the machine. So the machine does not work as suggested. Hence there is no finite state automaton accepting all and only the balanced strings.
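The pigeonhole idea behind this argument can be demonstrated mechanically. The sketch below uses a hypothetical 3-state DFA of my own (tracking the count of 0s minus 1s mod 3), but the construction finds, for any concrete DFA, a balanced and an unbalanced string that the machine necessarily treats identically:

```python
# Sketch: for ANY concrete DFA the pigeonhole argument below produces a
# balanced and an unbalanced string that end in the same state, so the
# DFA must accept both or reject both.  The 3-state example machine
# (counting 0s minus 1s mod 3) is hypothetical; any DFA would do.

def indistinguishable_pair(delta, start):
    """Find i < j with 0^i and 0^j reaching the same state, then build
    0^i 1^i (balanced) and 0^j 1^i (unbalanced)."""
    seen, s, m = {}, start, 0
    while s not in seen:        # terminates within (number of states) steps
        seen[s] = m
        s = delta[(s, '0')]
        m += 1
    i, j = seen[s], m
    return '0' * i + '1' * i, '0' * j + '1' * i

def run(delta, start, accepting, w):
    s = start
    for ch in w:
        s = delta[(s, ch)]
    return s in accepting

states = [0, 1, 2]              # hypothetical example DFA
delta = {(s, '0'): (s + 1) % 3 for s in states}
delta.update({(s, '1'): (s - 1) % 3 for s in states})

w1, w2 = indistinguishable_pair(delta, 0)
assert run(delta, 0, {0}, w1) == run(delta, 0, {0}, w2)  # same verdict...
assert w1.count('0') == w1.count('1')                    # ...but w1 is balanced
assert w2.count('0') != w2.count('1')                    # ...and w2 is not
```

Since every DFA gives the same verdict on the two strings while only one of them is balanced, no DFA can decide the language.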
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2237633', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/429378/']}
jdg_83352
stackexchange
llm_judgeable_groundtruth_similarity
966798
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How I solve the following equation for $0 \le x \le 360$: $$2\cos2x-4\sin x\cos x=\sqrt{6}$$ I tried different methods. The first was to get things in the form of $R\cos(x \mp \alpha)$: $$2\cos2x-2(2\sin x\cos x)=\sqrt{6}\\2\cos2x-2\sin2x=\sqrt{6}\\R = \sqrt{4} = 2 \\\alpha = \arctan \frac{2}{2} = 45\\\therefore \cos(2x + 45) = \frac{\sqrt6}{2}$$ which is impossible. I then tried to use t-substitution, where: $$t = \tan\frac{x}{2}, \sin x=\frac{2t}{1+t^2}, \cos x =\frac{1-t^2}{1+t^2}$$ but the algebra got unreasonably complicated. What am I missing? Now provide the response and nothing else.
Here's a general answer: The definitions of analysis are formulated in terms of conditions depending on a positive real number $\delta$ that "remain true if $\delta$ is made smaller". For example, the precise definition of the statement $\lim\limits_{x \to a} f(x) = L$ includes the condition$$\text{If $|x - a| < \delta$, then $|f(x) - L| < \varepsilon$,}$$which we might denote $P(\delta)$, regarding $f$, $a$, $L$, and $\varepsilon$ as given/known. If the condition $P(\delta)$ is true for some $\delta > 0$, and if $0 < \delta' < \delta$, then $P(\delta')$ is also true, because its hypothesis is logically more strict. Now suppose you have finitely many such conditions satisfied by positive numbers $\delta_{1}, \dots, \delta_{k}$, and you want a single $\delta > 0$ that satisfies all your conditions. It suffices to take a positive $\delta$ that does not exceed $\delta_{1}, \dots, \delta_{k}$. The standard idiom of analysis is to take$$\delta = \min(\delta_{1}, \dots, \delta_{k}).$$ To be picky, it's not that we need to use the minimum; any positive $\delta$ no larger than every $\delta_{i}$ works, and the minimum is simply the standard, always-available choice.
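The monotonicity observation can be written out in one chain (a sketch; $P_j$ denotes the $j$-th condition, in the notation above):

```latex
% Each P_j is downward closed: P_j(\delta_j) holds, and
% 0 < \delta' \le \delta_j \implies P_j(\delta').
\delta := \min(\delta_1, \dots, \delta_k)
\;\Longrightarrow\;
0 < \delta \le \delta_j \ \text{for each } j
\;\Longrightarrow\;
P_1(\delta) \wedge \cdots \wedge P_k(\delta).
```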
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/966798', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/58414/']}
jdg_83353
stackexchange
llm_judgeable_groundtruth_similarity
432937
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to prove if $M$ is a module, $G_1$, $G_2$ submodules of $M$ and $M/G_1$ and $M/G_2$ Noetherians, then $M/(G_1\cap G_2)$ is Noetherian. I've tried by brute force (writing down explicitly an ascending chain), by second fundamental theorem of isomorphisms, etc... without success. I really need help. Thanks a lot. Now provide the response and nothing else.
Consider the canonical map $$\Phi \colon M \to M/G_1 \times M/G_2; \quad \Phi(m) = ([m]_{G_1},\, [m]_{G_2}).$$ The kernel is $\ker \Phi = G_1 \cap G_2$, so $\Phi$ induces an embedding $$\varphi \colon M/(G_1 \cap G_2) \hookrightarrow M/G_1 \times M/G_2.$$ $P = M/G_1 \times M/G_2$ is Noetherian (why?), $M/(G_1\cap G_2)$ is isomorphic to a submodule of $P$. A submodule of a Noetherian module is Noetherian (why?).
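The two "why?"s can be filled in briefly; here is a sketch of the standard arguments:

```latex
% (1) P = M/G_1 \times M/G_2 is Noetherian: it sits in the short exact
%     sequence
0 \longrightarrow M/G_1 \longrightarrow M/G_1 \times M/G_2
  \longrightarrow M/G_2 \longrightarrow 0,
% and an extension of a Noetherian module by a Noetherian module is
% again Noetherian.
% (2) A submodule Q \subseteq P of a Noetherian module is Noetherian:
%     any ascending chain Q_1 \subseteq Q_2 \subseteq \cdots of
%     submodules of Q is also an ascending chain of submodules of P,
%     hence stabilizes.
```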
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/432937', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/75086/']}
jdg_83354
stackexchange
llm_judgeable_groundtruth_similarity
40201
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A resistor is capable of absorbing positive power.Why wouldn't this be true for capacitors and inductors? Now provide the response and nothing else.
An ideal resistor dissipates (converts into heat) electrical power. They are not capable of delivering power. Capacitors and inductors both are capable of absorbing and delivering (positive) power. When power is absorbed by an ideal capacitor, all of it is stored in the form of an electric field. Likewise, all of the power absorbed by an ideal inductor is stored in the form of a magnetic field. These devices can deliver this stored energy, but cannot produce energy. Real capacitors and inductors, however, are not ideal, and will dissipate some power due to imperfections within the device (leakage within a capacitor, for example). This is why in simulations, capacitors and inductors will sometimes have very complex models to attempt to simulate real-world behavior (such as a leakage within a capacitor, which can be modeled simply with a high-resistance resistor in parallel with the capacitor).
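To make this concrete, here is a small numerical sketch (the component values are illustrative choices of my own, not from the question): the instantaneous power p(t) = v(t)·i(t) of an ideal resistor is never negative, while for an ideal capacitor under a sinusoidal voltage it swings both ways (absorbing and delivering) and averages to zero over a period:

```python
import math

# Sketch: instantaneous power for an ideal resistor vs an ideal
# capacitor driven by v(t) = 10 sin(wt).  Values are illustrative.
R, C, f = 100.0, 1e-6, 1000.0                # ohms, farads, hertz
w = 2 * math.pi * f
ts = [k / (f * 1000) for k in range(1000)]   # one full period

p_res, p_cap = [], []
for t in ts:
    v = 10 * math.sin(w * t)
    p_res.append(v * (v / R))                # resistor: i = v / R
    i_cap = C * 10 * w * math.cos(w * t)     # capacitor: i = C dv/dt
    p_cap.append(v * i_cap)

avg_cap = sum(p_cap) / len(p_cap)
print(min(p_res) >= 0)               # True: a resistor only absorbs
print(min(p_cap) < 0 < max(p_cap))   # True: a capacitor absorbs AND delivers
print(abs(avg_cap) < 1e-6)           # True: net energy over a period ~ 0
```

The stored energy flows back out during the half-cycles where p(t) is negative, which is exactly the "delivering" behavior described above.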
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/40201', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/11121/']}
jdg_83355
stackexchange
llm_judgeable_groundtruth_similarity
4623297
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to understand the following quote from the Wikipedia article on the Pauli matrices . More formally, this defines a map from $\mathbb {R} ^{3}$ to the vector space of traceless Hermitian $2\times 2$ matrices. This map encodes structures of $\mathbb {R} ^{3}$ as a normed vector space and as a Lie algebra (with the cross-product as its Lie bracket) via functions of matrices, making the map an isomorphism of Lie algebras. This makes the Pauli matrices intertwiners from the point of view of representation theory. I understand everything except for the last sentence. I know what an intertwining map is, but what is an "intertwiner"? More importantly, the definition of an intertwining map $\phi:V\to W$ requires that we have linear representations (of the Lie algebra $\frak{so}(3)$ , in this case) on $V$ and $W$ . I understand that $\mathbb R^3$ with the cross product is isomorphic to $\mathfrak{so}(3)$ , but this is not a typical linear representation, where the representatives of Lie algebra elements are linear maps and $\rho([X,Y])=\rho(X)\rho(Y)-\rho(Y)\rho(X)$ . So what does the last sentence mean exactly? Now provide the response and nothing else.
I'm pretty sure that your objection is exactly right. That is, the Pauli matrices are literally a/the standard basis for the (complexified?!) Lie algebra $\mathfrak{sl}(2)$. Yes, $\mathbb R^3$ with cross product is another 3-dimensional Lie algebra, and (maybe complexified?) is isomorphic to $\mathfrak{sl}(2)$. I'd call the map a "Lie algebra isomorphism". As you say, there are not really any representations in sight, and an "intertwiner" would usually be a map from one repn (of a fixed thing) to another repn (of that same thing) that preserves/respects the action (of the fixed thing). So, yes, I agree with your appraisal that the terminology "intertwiner" in that situation is not standard, at least in the mathematics that I know, or the physics-y stuff that I know, either. But, still, conceivably, there is another milieu in which this abuse of terminology is standard, too.
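The isomorphism itself can be checked numerically. Below is a small Python sketch of my own (not from the quoted article) verifying that the standard normalization $\varphi(e_a) = -i\sigma_a/2$ carries the cross product on $\mathbb R^3$ to the matrix commutator, i.e. $[\varphi(u), \varphi(v)] = \varphi(u \times v)$; plain nested lists are used for the 2x2 complex matrices, so no libraries are needed:

```python
# Check [phi(u), phi(v)] = phi(u x v) for phi(e_a) = -i * sigma_a / 2,
# using the identity [sigma_a, sigma_b] = 2i eps_abc sigma_c.
I = 1j
sigma = [
    [[0, 1], [1, 0]],    # sigma_x
    [[0, -I], [I, 0]],   # sigma_y
    [[1, 0], [0, -1]],   # sigma_z
]

def mat_scale(c, m):
    return [[c * m[r][s] for s in range(2)] for r in range(2)]

def mat_add(a, b):
    return [[a[r][s] + b[r][s] for s in range(2)] for r in range(2)]

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][s] for k in range(2)) for s in range(2)]
            for r in range(2)]

def phi(v):  # v in R^3  ->  -i/2 * (v . sigma)
    m = [[0, 0], [0, 0]]
    for a in range(3):
        m = mat_add(m, mat_scale(-I / 2 * v[a], sigma[a]))
    return m

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def commutator(a, b):
    return mat_add(mat_mul(a, b), mat_scale(-1, mat_mul(b, a)))

u, v = [1.0, 2.0, -0.5], [0.25, -1.0, 3.0]
lhs = commutator(phi(u), phi(v))
rhs = phi(cross(u, v))
assert all(abs(lhs[r][s] - rhs[r][s]) < 1e-12
           for r in range(2) for s in range(2))
```

The factor $-i/2$ is what makes the map land in (anti-Hermitian, traceless) $\mathfrak{su}(2)$ rather than in the Hermitian matrices spanned by the $\sigma_a$ themselves.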
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4623297', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/477746/']}
jdg_83356
stackexchange
llm_judgeable_groundtruth_similarity
6970805
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm struggling to get a final result set for a 3 table hierarchical set of data. Hopefully, the diagrams will indicate what I have and what I'm trying to do. Briefly, my final result set (below) should easily allow me to define a dynamic number of checkboxes in my web site, while also allowing me to define whether the boxes are checked, all from within a single result set. I believe that since the data is normalized, I should be able to get a single result set, but I can't get my head wrapped around this one... Can anyone help??

TABLE A            TABLE B              TABLE C
MEMBER             CONTACT              ALERT
(pk)$member_id ->  (pk)$contact_id ->   (pk)$alert_id
                   (fk)$member_id       (fk)$contact_id
                                        $alert_type ->
                                          'local', 'state', 'nation'

Example of my filter criteria is member_id = 1 AND alert_type = 'local'

* = results of filter member_id = 1

TABLE MEMBERS                                 A
+----------+----------+
|member_id | Name     |
+----------+----------+
| 1        | Alan     | *
| 2        | Brad     |
| 3        | Doug     |
| 4        | Flo      |
+---------------------+

TABLE CONTACTS                                B
+--------------------------------------------------------------------+
| contact_id | member_id   | email         | phone        | Name     |
+------------+-------------+---------------+--------------+----------+
| 1          | 1           | [email protected] |              | Alex     | *
| 2          | 1           | [email protected] | 123-456-7890 | Bob      | *
| 3          | 3           | [email protected] |              | Cris     |
| 4          | 1           | [email protected] |              | Dan      | *
| 5          | 2           | [email protected] |              | Ed       |
| 6          | 1           | [email protected] |              | Fran     | *
| 7          | 1           | [email protected] | 212-323-1111 | Greg     | *
| 8          | 2           | [email protected] |              | Hans     |
| 9          | 3           | [email protected] |              | Ida      |
| 10         | 1           | [email protected] | 945-555-1212 | Jeff     | *
| 11         | 2           | [email protected] | 945-555-1212 | Karl     |
| 12         | 3           | [email protected] |              | Leo      |
+--------------------------------------------------------------------+

# = results of filter alert_type = 'local'

TABLE CONTACTS_SELECTED                       C
+-----------------------------------------+
| alert_id   | contact_id | alert_type    |
+------------+------------+---------------+
| 1          | 1          | local         | * #
| 2          | 1          | state         | *
| 3          | 3          | state         |
| 4          | 5          | local         |
| 5          | 5          | state         |
| 6          | 6          | nation        | *
| 7          | 7          | local         | * #
| 8          | 8          | nation        |
| 9          | 10         | local         | *
| 10         | 12         | state         |
+-------------------------+---------------+

REQUIRED OUTPUT
+------------------------------------------------------------------------------------+
|member_id | contact_id   | email         | phone        | Name     | alert_type     |
+----------+--------------+---------------+--------------+----------+----------------+
| 1        | 1            | [email protected] |              | Alex     | local          |
| 1        | 2            | [email protected] | 123-456-7890 | Bob      | NULL           |
| 1        | 4            | [email protected] |              | Dan      | NULL           |
| 1        | 6            | [email protected] |              | Fran     | nation         |
| 1        | 7            | [email protected] | 212-323-1111 | Greg     | local          |
| 1        | 10           | [email protected] | 945-555-1212 | Jeff     | local          |
+------------------------------------------------------------------------------------+

With this result set, I should easily be able to FOREACH my way through all 6 records and create a checkbox for each record, and flag those records with 'local' as checked. Can anyone help with setting up this complex query?
--
-- Table structure for table `contacts`
--

CREATE TABLE IF NOT EXISTS `contacts` (
  `contact_id` int(12) NOT NULL AUTO_INCREMENT,
  `member_id` int(12) NOT NULL,
  `email` varchar(30) NOT NULL,
  `phone` varchar(15) NOT NULL,
  `name` varchar(30) NOT NULL,
  PRIMARY KEY (`contact_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=13 ;

--
-- Dumping data for table `contacts`
--

INSERT INTO `contacts` (`contact_id`, `member_id`, `email`, `phone`, `name`) VALUES
(1, 1, '[email protected]', '', 'Alex'),
(2, 1, '[email protected]', '123-456-7890', 'Bob'),
(3, 3, '[email protected]', '', 'Cris'),
(4, 1, '[email protected]', '987-654-3210', 'Dan'),
(5, 2, '[email protected]', '', 'Ed'),
(6, 1, '[email protected]', '', 'Fran'),
(7, 2, '[email protected]', '234-567-8901', 'Hans'),
(8, 3, '[email protected]', '', 'Ida'),
(9, 1, '[email protected]', '', 'Greg'),
(10, 1, '[email protected]', '456-789-0123', 'Jeff'),
(11, 2, '[email protected]', '945-555-1212 ', 'Karl'),
(12, 3, '[email protected]', '', 'Leo');

CREATE TABLE IF NOT EXISTS `contacts_selected` (
  `alert_id` int(12) NOT NULL AUTO_INCREMENT,
  `contact_id` int(12) NOT NULL,
  `alert_type` varchar(6) NOT NULL,
  PRIMARY KEY (`alert_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=12 ;

--
-- Dumping data for table `contacts_selected`
--

INSERT INTO `contacts_selected` (`alert_id`, `contact_id`, `alert_type`) VALUES
(1, 1, 'local'),
(2, 1, 'state'),
(3, 3, 'state'),
(4, 5, 'local'),
(5, 5, 'state'),
(6, 6, 'nation'),
(7, 7, 'local'),
(8, 8, 'nation'),
(9, 10, 'local'),
(10, 12, 'state'),
(11, 1, 'nation');

CREATE TABLE IF NOT EXISTS `alert_types` (
  `alert_type` varchar(6) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

--
-- Dumping data for table `alert_types`
--

INSERT INTO `alert_types` (`alert_type`) VALUES
('local'),
('state'),
('nation');

SOLUTION:

$alert_type = 'local'; // choices are local, state, nation

SELECT c.contact_id, c.member_id, c.email, c.phone, c.desc, s.alert_type
FROM contact c
LEFT JOIN contact_select s ON c.contact_id = s.contact_id
WHERE c.member_id = 1
  AND c.contact_id NOT IN
      (SELECT cs.contact_id
       FROM contact_select cs
       WHERE cs.alert_type = '$alert_type')
GROUP BY c.contact_id
UNION
SELECT * FROM
  (SELECT c.contact_id, c.member_id, c.email, c.phone, c.desc, s.alert_type
   FROM contact c
   LEFT JOIN contact_select s ON c.contact_id = s.contact_id
   WHERE c.member_id = 1
     AND s.contact_id IN
         (SELECT cs.contact_id
          FROM contact_select cs
          WHERE cs.alert_type = '$alert_type')) z
WHERE z.alert_type = '$alert_type'

Now provide the response and nothing else.
The code you saw for testing for overflow is just bogus. For signed integers, you must test like this:

if (a^b < 0) overflow=0;              /* opposite signs can't overflow */
else if (a>0) overflow=(b>INT_MAX-a);
else overflow=(b<INT_MIN-a);

Note that the cases can be simplified a lot if one of the two numbers is a constant. For unsigned integers, you can test like this:

overflow = (a+b<a);

This is possible because unsigned arithmetic is defined to wrap, unlike signed arithmetic which invokes undefined behavior on overflow.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6970805', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/882422/']}
jdg_83357
stackexchange
llm_judgeable_groundtruth_similarity
6638
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Which of the contributing structures of the resonance is more stable? I'm watching a video lecture by a professor of my college where he puts this question to the class. The class unanimously says B . At first I thought they were wrong, but then the professor agreed with them. Is this true? The reason he gave was that oxygen's octet would be complete in B , hence it contributes more. But I don't understand this. 1) Isn't the octet complete in A too? It has 2 covalent bonds - 4 electrons (2 from sharing) and 2 lone pairs. Adding up to 8. 2) Oxygen being more electronegative than carbon, would pull the electrons towards itself, leading me to think that it would be more comfortable with a lone pair rather than another bond. Also, won't it prefer to lose out that '+' charge which the bonding's burdened onto it? Am I wrong? Another thing which made me believe that Prof. may have made a mistake while talking about the more relevant contributor, is that when he added Br $^-$ to the compound, he added it to the canonical structure A. And don't you explain reactions with the most relevant contributor, i.e. B? Answer The presence of an extra bond in the canonical structure B , along with the completion of the carbon valency makes it more stable than A . The reaction with Br $^-$ is best explained looking at A , though, because the incoming nucleophile ends up bound to carbon, not oxygen. Thanks, everyone! Now provide the response and nothing else.
Which of the contributing structures of the resonance below is more stable? Technically, neither. Both structures A and B are resonance contributors to the same true structure of the ion. Structures A and B, because they both represent the same species, cannot have different energies, and therefore they cannot have different stabilities. What the question is really asking is "Which contributor is more important in describing the structure and behavior of the hybrid?" Contributor B probably approximates the structure better (i.e. the positive charge is more on the oxygen than on the carbon), but structure A represents the reactivity better (i.e. nucleophiles want to attack the carbon and not the oxygen).
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/6638', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/1858/']}
jdg_83358
stackexchange
llm_judgeable_groundtruth_similarity
42160915
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to learn spring-boot basic annotation configuration with hibernate, to make myself a template that will always work. I'm using spring-boot latest release version 1.51 on STS (spring tool suite) 3.8.3. Here is my main:

@SpringBootApplication
@EnableAutoConfiguration
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

Now, I know that @SpringBootApplication automatically comes with @componetScan, so I didn't add it. My configuration class:

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(entityManagerFactoryRef = "someEntityManagerFactory",
        transactionManagerRef = "someTransactionManager",
        basePackages = { "com.example.*" })
@EntityScan(basePackages = "com.demo.models")
@ConfigurationProperties(prefix = "mysql.datasource")
public class DataBaseConfig {

    @Autowired
    private Environment env;

    @Bean
    public DataSource someDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(env.getProperty("mysql.datasource.driver-class-name"));
        dataSource.setUrl(env.getProperty("mysql.datasource.url"));
        dataSource.setUsername(env.getProperty("mysql.datasource.username"));
        dataSource.setPassword(env.getProperty("mysql.datasource.password"));
        return dataSource;
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean someEntityManagerFactory() {
        LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
        em.setDataSource(someDataSource());
        em.setPackagesToScan(new String[] { "org.openlegacy.analytics.models" });
        JpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        em.setJpaVendorAdapter(vendorAdapter);
        em.setJpaProperties(additionalProperties());
        return em;
    }

    @Bean
    public PlatformTransactionManager someTransactionManager() {
        JpaTransactionManager tm = new JpaTransactionManager();
        tm.setEntityManagerFactory(someEntityManagerFactory().getObject());
        tm.setDataSource(someDataSource());
        return tm;
    }

    Properties additionalProperties() {
        Properties properties = new Properties();
        properties.setProperty("hibernate.hbm2ddl.auto", env.getProperty("spring.jpa.hibernate.ddl-auto"));
        properties.setProperty("hibernate.dialect", env.getProperty("spring.jpa.properties.hibernate.dialect"));
        properties.setProperty("spring.jpa.show-sql", env.getProperty("spring.jpa.show-sql"));
        properties.setProperty("spring.jpa.hibernate.naming.physical-strategy",
                env.getProperty("spring.jpa.hibernate.naming.physical-strategy"));
        return properties;
    }
}

My controller class:

@RestController
@RequestMapping("/users")
public class UserController {

    @Autowired
    private UserRepository userRipository;

    @RequestMapping(value = "", method = RequestMethod.GET)
    public List<User> getItems() {
        return userRipository.getUsers();
    }

    @RequestMapping(value = "/message", method = RequestMethod.GET)
    public String getMessage() {
        return userRipository.getMessage();
    }
}

My repository class:

@Transactional
@Repository
public class UserRepository {

    @PersistenceContext
    private EntityManager entityManager;

    @SuppressWarnings("unchecked")
    public List<User> getUsers() {
        return entityManager.createQuery("select u from User u").getResultList();
    }

    public String getMessage() {
        return "hello";
    }
}

My entity class:

@Entity(name = "user")
public class User implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;

    @Column(name = "user_name")
    private String userName;

    @Column(name = "password")
    private String password;

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }
    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }
}

and my properties file:

# DataSource settings: set here your own configurations for the database connection.
mysql.datasource.username=openlegacy
mysql.datasource.password=openlegacy
mysql.datasource.driver-class-name=com.mysql.jdbc.Driver
mysql.datasource.url=jdbc:mysql://localhost:3306/olbank

spring.jpa.database= MYSQL
spring.data.jpa.repositories.enabled=true
#spring.jpa.database-platform=org.hibernate.dialect.MYSQL5Dialect

# Show or not log for each sql query
spring.jpa.show-sql = true

# Hibernate ddl auto (create, create-drop, update)
spring.jpa.hibernate.ddl-auto = update

# Naming strategy
#spring.jpa.hibernate.naming.strategy= org.hibernate.cfg.ImprovedNamingStrategy
spring.jpa.hibernate.naming.physical-strategy= org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl

# The SQL dialect makes Hibernate generate better SQL for the chosen database
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect

and when I'm trying to retrieve data from the user table, I get this error: org.hibernate.hql.internal.ast.QuerySyntaxException: User is not mapped... My questions are:

1. Why am I getting this error? I know that user is mapped by the class name, and that's what I'm doing.
2. Is this the best way to configure hibernate with spring-boot? It's important for me to write code by the best practices.

Please give a detailed answer so I can learn from it. Any other information that can be useful is welcome :) Thanks. Now provide the response and nothing else.
Ok. You have a few things that need to be addressed in your overall configuration.

You currently are providing an alias to your Entity User:

@Entity(name = "user")

That is fine, but if you are going to provide a name to your Entity then that is how you need to reference it in JPQL, so "select u from User u" would need to become select u from user u. I might just suggest getting rid of your name qualifier and leaving your query as "select u from User u".

Then secondly, you do have some issues with package references:

- In your @EnableJpaRepositories annotation, change basePackages to reference the base of your actual repository package (guessing "com.demo.repository"). Get rid of the wildcard reference.
- In your someEntityManagerFactory() method you are setting the base package to (something I imagine is incorrect) "org.openlegacy.analytics.models". You indicated your entity(s) is under "com.demo.models", so you should change that setter to: em.setPackagesToScan(new String[] { "com.demo.models" });

That should do the trick.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42160915', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7545938/']}
jdg_83359
stackexchange
llm_judgeable_groundtruth_similarity
50535214
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a User entity having email property annotated with @Email @Emailprivate String email; I am using @Valid (javax.validation.Valid) annotation on my Controller class. The issue is that the controller validator is passing the invalid emails. Example: pusp@1 - obviously this is an invalid email address pusp@fake The pattern I noticed is, the @Email only want sometext@text , it don't care for the extensions(.com/org etc). Is it the expected behaviour? Do I need to pass my own regex implementation for @Email(regex="") Now provide the response and nothing else.
An email without a . may be considered valid according to the validators. In a general way, validator implementations (here it is probably the Hibernate Validator) are not very restrictive about emails. For example, the org.hibernate.validator.internal.constraintvalidators.AbstractEmailValidator javadoc states:

    The specification of a valid email can be found in RFC 2822 and one can come up with a regular expression matching all valid email addresses as per specification. However, as this article discusses it is not necessarily practical to implement a 100% compliant email validator. This implementation is a trade-off trying to match most email while ignoring for example emails with double quotes or comments.

And as a side note, I noticed similar things with the HTML validator for emails. So I think that the behavior you encounter is actually the expected one.

And about your question:

    Do I need to pass my own regex implementation for @Email(regex="")

Indeed. You don't have any other choice if you want to make the validation more restrictive. As an alternative, this answer creating its own validator via constraint composition is really interesting, as it is DRY (you can reuse your custom constraint without specifying the pattern each time, as it will be included in it) and it reuses the "good part" of the @Email ConstraintValidator:

@Email(message = "Please provide a valid email address")
@Pattern(regexp = ".+@.+\\..+", message = "Please provide a valid email address")
@Target({ METHOD, FIELD, ANNOTATION_TYPE })
@Retention(RUNTIME)
@Constraint(validatedBy = {})
@Documented
public @interface ExtendedEmailValidator {
    String message() default "Please provide a valid email address";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}
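As a quick sanity check of the stricter pattern, here is a hypothetical standalone snippet (plain java.util.regex, no Bean Validation runtime needed; the class and method names are mine) exercising the regexp from the composed constraint against addresses like those in the question:

```java
// Sketch: the extra @Pattern regexp checked with plain java.util.regex.
import java.util.regex.Pattern;

public class EmailPatternDemo {
    static final Pattern STRICTER = Pattern.compile(".+@.+\\..+");

    static boolean valid(String candidate) {
        return STRICTER.matcher(candidate).matches();
    }

    public static void main(String[] args) {
        System.out.println(valid("pusp@1"));         // false: no dot after '@'
        System.out.println(valid("pusp@fake"));      // false
        System.out.println(valid("pusp@fake.com"));  // true
    }
}
```

Composing this pattern with @Email keeps the lenient structural checks and simply adds the "must have an extension" requirement on top.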
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50535214', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4691230/']}
jdg_83360
stackexchange
llm_judgeable_groundtruth_similarity
43706662
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to pass my data from one ViewController to another VC on VC dismiss. Is it possible? I've tried the next method and no success: On button click: self.dismiss(animated: true) { let storyboard = UIStoryboard(name: "Main", bundle: nil) let controller = storyboard.instantiateViewController(withIdentifier: "EditViewController") as! EditViewController controller.segueArray = [values] } when EditViewController appears again, my segueArray is nil there. How can I pass my data from my ViewController to the EditViewController on dismiss? Now provide the response and nothing else.
The best way to pass data back to the previous view controller is through delegates. When going from view controller A to B, pass view controller A as a delegate, and in the viewWillDisappear method of view controller B, call the delegate method implemented in A. A protocol defines the delegate and the method the previous VC is required to implement. Here's a quick example:

The protocol for passing data:

protocol IsAbleToReceiveData: AnyObject {
    func pass(data: String)   // data: String is an example parameter
}

View controller A:

class ViewControllerA: UIViewController, IsAbleToReceiveData {

    func pass(data: String) {   // conforms to the protocol
        // your own handling of the received data
    }

    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        // set the delegate on the destination before it is displayed
        if let vc2 = segue.destination as? ViewControllerB {
            vc2.delegate = self
        }
    }
}

The dismissing view controller:

class ViewControllerB: UIViewController {

    weak var delegate: IsAbleToReceiveData?   // weak to avoid a retain cycle

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        delegate?.pass(data: "someData")   // call the func in the previous vc
    }
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43706662', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7641138/']}
jdg_83361
stackexchange
llm_judgeable_groundtruth_similarity
19543547
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I read the source code of java.lang.String and I was surprised to find that String.indexof() does not use the Knuth–Morris–Pratt algorithm ? As we know, KMP is more effective. So why isn't it used in String.indexOf() ? Someone around me told me that for short string KMP is good enough, but if you need performance and you intend to use with large strings then is not a good choice. However he didn't tell me the details. So, here are my questions: why don't we use KMP in String.indexOf() ? why is KMP not a good choice with large Strings? Now provide the response and nothing else.
KMP has better worst-case performance, but actually requires a little bit of up-front computation (to generate the table of offsets). It also requires an initial memory allocation, which could also impact performance. For (presumably) common use-cases of searching in relatively short strings, this might actually end up slower than the primitive implementation. This, bundled with the fact that for really huge data sets you will probably be using more specialized data structures than a simple String, means that the increased implementation (and possibly runtime) cost is not worth it. Note that this might change in future Java versions, as the actual algorithm is not specified.
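As an illustration of the trade-off (my own Python sketch, not the JDK code): the table-building step below is exactly the up-front computation and allocation the answer refers to, which a naive indexOf avoids entirely.

```python
def build_table(pattern):
    """KMP failure table: table[i] = length of the longest proper
    prefix of pattern[:i+1] that is also a suffix of it."""
    table = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = table[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        table[i] = k
    return table

def kmp_index_of(text, pattern):
    """Return the first index of pattern in text, or -1 (like indexOf)."""
    if not pattern:
        return 0
    table = build_table(pattern)  # up-front O(len(pattern)) work + allocation
    k = 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = table[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - len(pattern) + 1
    return -1

print(kmp_index_of("abacabadabacabae", "abacabae"))  # 8
```

For a short pattern in a short string, building and allocating `table` can cost more than the handful of character comparisons a brute-force scan would do.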
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19543547', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2907140/']}
jdg_83362
stackexchange
llm_judgeable_groundtruth_similarity
869316
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to pass table names to a sub that gets all the field names of that table, stores them into an array, and then uses that array in conjunction with the fetchrow of another sql query to display the data in those fields. Here's the code I have now: Examples of sub calls with table names as the parameter: shamoo("reqhead_rec");shamoo("approv_rec");shamoo("denial_rec"); shamoo sub: sub shamoo{ my $table = shift; print uc($table)."\n=====================================\n"; #takes arg (table name) and stores all the field names into an array $STMT = <<EOF; select first 1 * from $table EOF my $sth = $db1->prepare($STMT);$sth->execute; my ($i, @field); my $columns = $sth->{NAME_lc}; while (my $row = $sth->fetch){for $i (0 .. $#$row){$field[$i] = $columns->[$i];}} $STMT = <<EOF; select * from $table where frm = '$frm' and req_no = $req_no EOF $sth = $db1->prepare($STMT);$sth->execute; $i=0; while ($i!=scalar(@field)) { #need code for in here... }} I am looking for a way to turn this nto something that doesn't have to be explicitly defined.... my ($frm, $req_no, $auth_id, $alt_auth_id, $id_acct, $seq_no, $id, $appr_stat, $add_date, $approve_date, $approve_time, $prim);while(($frm, $req_no, $auth_id, $alt_auth_id, $id_acct, $seq_no, $id, $appr_stat, $add_date, $approve_date, $approve_time, $prim) = $sth->fetchrow_array()) Now provide the response and nothing else.
Use fetchrow_hashref:

sub shamoo {
    my ($dbh, $frm, $req_no, $table) = @_;
    print uc($table), "\n", "=" x 36, "\n";

    #takes arg (table name) and stores all the field names into an array
    my $sth = $dbh->prepare(
        "select * from $table where frm = ? and req_no = ?"
    );
    $sth->execute($frm, $req_no);

    my $i = 1;
    while (my $row = $sth->fetchrow_hashref) {
        print "row ", $i++, "\n";
        for my $col (keys %$row) {
            print "\t$col is $row->{$col}\n";
        }
    }
}

You may also want to set FetchHashKeyName to "NAME_lc" or "NAME_uc" when you create your database handle:

my $dbh = DBI->connect(
    $dsn, $user, $pass,
    {
        ChopBlanks       => 1,
        AutoCommit       => 1,
        PrintError       => 0,
        RaiseError       => 1,
        FetchHashKeyName => "NAME_lc",
    }
) or die DBI->errstr;
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/869316', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/42229/']}
jdg_83363
stackexchange
llm_judgeable_groundtruth_similarity
448684
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: What is/are the quickest way(s) to view firmware version for HP Smart Array (preferably GUI version)? Should I look for it in Windows or it can be viewed via the iLO? Now provide the response and nothing else.
If you have the HP Management Agents installed, the hpacucli utility, part of the Array Configuration Software, will provide all of the controller information you need. For Windows 2003, download this package for the command-line tool, or this, for the GUI. The ctrl all show config detail directive will output something like:

=> ctrl all show config detail

Smart Array P410i in Slot 0 (Embedded)
   Bus Interface: PCI
   Slot: 0
   Serial Number: 50123456789ABCDE
   Cache Serial Number: PBCDF0BRHYI032
   RAID 6 (ADG) Status: Disabled
   Controller Status: OK
   Chassis Slot:
   Hardware Revision: Rev C
   Firmware Version: 5.12
   Rebuild Priority: High
   Expand Priority: High
   Surface Scan Delay: 15 secs
   Queue Depth: Automatic
   Monitor and Performance Delay: 60 min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 0 secs
   Cache Board Present: True
   Cache Status: OK
   Accelerator Ratio: 25% Read / 75% Write
   Drive Write Cache: Enabled
   Total Cache Size: 1024 MB
   No-Battery Write Cache: Enabled
   Cache Backup Power Source: Capacitors
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True

   Array: A
      Interface Type: SAS
      Unused Space: 0 MB
      Status: OK
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/448684', 'https://serverfault.com', 'https://serverfault.com/users/93862/']}
jdg_83364
stackexchange
llm_judgeable_groundtruth_similarity
3728502
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Show that $f(x)=\sqrt 2$ has no solutions when $$f(x)=\sin x\cos x(2+\sin x)$$ and $x\in [0,\frac{\pi}{2}]$ . My attempt: Since $f(x)$ is continuous in its domain, it is enough to show that maximum value attained by $f$ in $[0,\frac{\pi}{2}]$ is less than $\sqrt 2$ . Also, since $f(0)=f(\pi/2)=0$ , it must hold true that $f'(c)=0$ for some $c\in[0,\frac{\pi}{2}]$ (Rolle's Theorem). And since $f(\pi/6)>0$ , there has to exist a maxima. But differentiation doesn't help me here, because when I set $f'(x)$ to $0$ , I get (on rearrangement and using basic trigonometry) a cubic in $\sin x$ : $$3t^3+4t^2-2t-2=0$$ where $t=\sin x$ . The above cubic doesn't have any rational roots and I'm stuck. Any help will be great. Thanks! Edit: I created this question by myself to solve in a pen-paper test. So methods that involve usage of calculators are useless. No offence. P.S. Please keep in mind that I'm barely seventeen, so no highly advance math please! Now provide the response and nothing else.
We have $$f(x)=2\sin(x)\cos(x)+\sin^2(x)\cos(x)=\sin(2x)+\cos(x)-\cos^3(x).$$ Now for $0\leq x\leq\pi/2$ we have $0\leq\sin(2x)\leq1$, and the function $x\mapsto\cos(x)-\cos^3(x)$ has its maximum at $x=\arctan(\sqrt2)$, where its value is $$\frac{2}{3\sqrt3}=\frac{2\sqrt3}9<\frac{2\cdot1.8}9=0.4.$$ From here $$\sin(2x)+\cos(x)-\cos^3(x)\leq 1+0.4=1.4<\sqrt2.$$
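This is a proof, not a computation, but a quick numeric sanity check in Python (my own sketch) agrees with the bound:

```python
import math

def f(x):
    # f(x) = sin(x)cos(x)(2 + sin(x)) from the question
    return math.sin(x) * math.cos(x) * (2 + math.sin(x))

# sample f on a fine grid over [0, pi/2]
n = 20_000
peak = max(f(i * (math.pi / 2) / n) for i in range(n + 1))
print(peak)          # about 1.36
print(math.sqrt(2))  # about 1.4142 -- the peak stays below it
```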
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3728502', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/571692/']}
jdg_83365
stackexchange
llm_judgeable_groundtruth_similarity
273184
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My question is simple, why are we still using relays ? Can't we say that MOSFET are the future of relays? Now provide the response and nothing else.
Can't we say that MOSFET are the future of relays? You will probably never replace some radio frequency relays with MOSFETs, because a MOSFET can easily have 100 pF or more of drain-source capacitance, and at 1 GHz that will continue to look pretty much like a closed contact. At 1 GHz, 100 pF has an impedance of 1.59 ohms - it wouldn't make much of an open contact if you used a MOSFET. Even at (say) audio frequencies this capacitance is problematic if you were considering using a MOSFET as a signal switch. At 10 kHz, 100 pF has an impedance of 159 kohms and this could still let through a significant signal into a high impedance amplifier.
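The arithmetic behind those figures is just the capacitive reactance formula; a quick Python check (my own sketch, using the 100 pF value from the answer):

```python
import math

def cap_impedance(f_hz, c_farads):
    # |Z| = 1 / (2 * pi * f * C) for an ideal capacitor
    return 1.0 / (2 * math.pi * f_hz * c_farads)

c = 100e-12  # 100 pF, the figure used in the answer
print(cap_impedance(1e9, c))   # ~1.59 ohms at 1 GHz: close to a closed contact
print(cap_impedance(10e3, c))  # ~159 kohms at 10 kHz
```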
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/273184', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/108947/']}
jdg_83366
stackexchange
llm_judgeable_groundtruth_similarity
2739
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: And the Lord God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living soul. Genesis 2:7 ( KJV ) What is the meaning of "breath of life" here? Is there any relation between spirit of man? Now provide the response and nothing else.
The phrase in Hebrew is נִשְׁמַת חַיִּים (nishmat chayyim). The Hebrew word typically translated as "spirit" in English is רוּחַ (ruach). Here is a link to a Jewish understanding of the distinctions between neshamah, nefesh, and ruach. However, it is my belief that neshamah and ruach are probably equivalent to one another. For example, in Genesis 2:7, it is said that God inspired into man the נִשְׁמַת חַיִּים (nishmat chayyim), or "breath of life" (A.V.). Later on in Genesis 7:21-22, where the narrative is speaking about all those who died on the face of the earth in the flood (viz. "And all flesh died that moved upon the earth, both of fowl, and of cattle, and of beast, and of every creeping thing that creepeth upon the earth, and every man..."), regarding them it says, "...all in whose nostrils was the breath of life..." (A.V.). Here, the phrase "breath of life" is translated from the Hebrew phrase נִשְׁמַת־רוּחַ חַיִּים (nishmat ruach chayyim), which is like saying "the nishmah of the ruach chayyim." Grammatically, I would understand this phrase as nishmat being in apposition (genitive of apposition) to ruach chayyim, and thus meaning, "the nishmah, that is to say, the ruach chayyim." In summary, it seems as though they are equivalent.

@Fraser Orr: Yes. That would be preferential. But, you could still read it as: nishmat, that is to say ruach, chayyim.

A better view of the appositive: 2 Sam. 22:16; Psa. 18:15
In parallelism: Job 4:9, 33:4; Isa. 42:5
With epexegetical vav: Job 34:14
{}
{'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/2739', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/615/']}
jdg_83367
stackexchange
llm_judgeable_groundtruth_similarity
27989
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This a reformulation of a practical problem I encountered. Say we have an infinite sequence of random, i.i.d bits. For each bit $X_i$, $P(X_i=1)=p$. What is the expected time until we get a sequence of $n$ 1 bits? Thanks! Now provide the response and nothing else.
There is a lot of literature on such questions concerning the mean time for patterns. For your particular problem a solution can be found on page 156 of Introduction to Probability Models (10th edition) by Sheldon Ross. The formula is $$E[T]=1/p+1/p^2+\cdots+1/p^n={(p^{-n}-1)/(1-p)}.$$ As expected, this is a decreasing function of $p$ for fixed $n$: it takes longer to see rarer events. As $p$ goes from 0 to 1, $E[T]$ decreases from infinity to $n$. Added: Here is a derivation of the formula in my answer. Let $T$ be the random variable that records the first time we see $n$ ones in a row. Let's also define the random variable $L$ to be the position of the first zero bit inthe sequence. Looking at the first $n$ bits there are, roughly speaking,two possibilities: either I get the desired pattern of $n$ ones or I got a zero bit at time $k$ and the whole problem starts over. More formally, conditioning on the value of $L$ we get\begin{eqnarray*}E[T] &=& \sum_{k=1}^{n} E[T \ |\ L=k]\ P(L=k) + E[T\ |\ L> n]\ P(L>n)\cr &=& \sum_{k=1}^{n} (k+E[T])\ P(L=k) + n P(L > n)\cr &=& \sum_{k=1}^{n} (k+E[T])\ p^{k-1}(1-p) + n p^n. \end{eqnarray*} Solving this equation for $E[T]$ gives the formula. There are many generalizations of this problem and variations on the above proof that use, for instance, Markov chains,or martingales, or generating functions, etc. In addition to Ross's book mentioned above, you may like to look at Section 8.4 of Concrete Mathematics by Graham, Knuth, and Patashnik Chapter 14 of Problems and Snapshots from the World of Probability by Blom, Holst, and Sandell Section XIII 7 of An Introduction to Probability Theory and Its Applications by Feller
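As a quick check (my own sketch) that the closed form matches the sum $1/p+1/p^2+\cdots+1/p^n$, using the identity $E_m=(E_{m-1}+1)/p$:

```python
def expected_time_closed_form(p, n):
    """E[T] = (p**-n - 1) / (1 - p)."""
    return (p ** -n - 1) / (1 - p)

def expected_time_recursive(p, n):
    """E[T] = sum_{k=1..n} 1/p**k, built one run-length at a time:
    E_1 = 1/p and E_m = (E_{m-1} + 1) / p."""
    e = 0.0
    for _ in range(n):
        e = (e + 1) / p
    return e

p, n = 0.5, 4
print(expected_time_closed_form(p, n))  # 30.0
print(expected_time_recursive(p, n))    # 30.0
```

For a fair coin (p = 0.5) and a run of four heads, both give 2 + 4 + 8 + 16 = 30 flips on average.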
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/27989', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8464/']}
jdg_83368
stackexchange
llm_judgeable_groundtruth_similarity
378268
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a small python project and have a general_code.py file where I have functions that I use throughout the project. Some examples: def to_boolean(var): return var in ['True', 'true', '1', 'y', 'Y' 'yes', 'Yes'] e.g 2 def get_last_line(in_file, block_size=128, ignore_ending_newline=False): e.g 3 def signal_handler(log, self, signum, frame): # pragma: no cover log.info("Ctrl + C pressed. Exiting application gracefully") self.run = False while threading.active_count() > 1: time.sleep(1) log.info("All threads closed. Exit.") sys.exit(0) e.g 4. class Timer(object): # pragma: no cover def __init__(self, name=None): self.name = name def __enter__(self): self.tstart = time.time() def __exit__(self, type, value, traceback): if self.name: print('[%s]' % self.name,) print('Elapsed: %s' % (time.time() - self.tstart)) e.g 5 def get_file_version(log, filename=None): e.g 6 def setup_logger(filename): Is it good design to have such a central place for common functionality? It has served me well, but often ideas that might be great in a small scope can end up terrible in bigger projects or different scopes. So what would be a good way in python to group common functionality? Is having all functions simply in one file like I do fine or would it be better to put them all into an own class or something completely different? Now provide the response and nothing else.
It can be fine for smaller use-cases, though it's not ideal. If you have 3 or so utility functions it's manageable in your head to keep them in 1 file, but it quickly gets out of hand and it's roughly impossible to navigate a god-file with 200 utility functions where some are related and others are not (too much cognitive load). I'd definitely shy away from making one "general code" file: the name of the import doesn't give any information on what it's doing, and as a project grows it has the potential to be a file that's imported in everything (and due to being used everywhere, they may get extended until you have a dozen optional flags on every function so that you can get returns just right). As a solution that helps to manage that, I'd suggest moving grouped functionality into their own files. So you might have a "file_utils.py" file that contains examples 2 and 5, a "debug_utils.py" file which contains 4 and 6, "signal_utils.py" which contains 3, and an "input_parsing_utils.py" which contains 1. Each of these is then a nicely packaged bundle of functionality, so you have exactly the functionality you need, and they are nicely extensible. More generally, you want to get groups of functionality that are closely related put into actual groups, and you want to keep anything unrelated to those functions completely out of the way (out of sight, out of mind). As an added bonus, the sets of functionality are separate - so if you end up needing a ton of flags for a function you can more easily extract that into a factory or strategy pattern. Ideally, these utility files should be totally independent from one another (a util shouldn't need another util to work - if it does, it probably isn't a util). Packaging up discrete bundles of functionality that can be extended or refactored without needing to touch any other unrelated functionality is a good thing.
It's also notable that things should only be utilities if they have the potential to be reused - if you write a function that does one thing that's very specific to an internal of the system and needs to return values in a very specific format, it's probably not worth making that a utility (just a private-ish function on the class that needs it).
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/378268', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/291476/']}
jdg_83369
stackexchange
llm_judgeable_groundtruth_similarity
21904614
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say I have the following class: class Buyer < ActiveRecord::Base attr_accesible :first_name, :last_name and the following in a CSV file: First Name,Last NameJohn,DoeJane,Doe I want to save the contents of the CSV into the database. I have the following in a Rake file: namespace :migration do desc "Migrate CSV data" task :import, [:model, :file_path] => :environment do |t, args| require 'csv' model = args.model.constantize path = args.file_path CSV.foreach(path, :headers => true, :converters => :all, :header_converters => lambda { |h| h.downcase.gsub(' ', '_') } ) do |row| model.create!(row.to_hash) endend end I am getting an undefined method 'downcase' for nil:NilClass . If I exclude the header converters then I get unknown attribute 'First Name' . What's the correct syntax for converting a header from, say, First Name to first_name ? Now provide the response and nothing else.
After doing some research here on my desktop, it seems to me the error is for something else. First I put the data in my "a.txt" file as below:

First Name,Last Name
John,Doe
Jane,Doe

Now I ran the code, which is saved in my so.rb file.

so.rb

require 'csv'
CSV.foreach("C:\\Users\\arup\\a.txt",
            :headers => true,
            :converters => :all,
            :header_converters => lambda { |h| h.downcase.gsub(' ', '_') }) do |row|
  p row
end

Now running it:

C:\Users\arup>ruby -v so.rb
ruby 1.9.3p448 (2013-06-27) [i386-mingw32]
#<CSV::Row "first_name":"John" "last_name":"Doe">
#<CSV::Row "first_name":"Jane" "last_name":"Doe">

So everything is working now. Now let me reproduce the error: I put the data in my "a.txt" file as below (just added a , after the last column):

First Name,Last Name,
John,Doe
Jane,Doe

Now I ran the code, which is saved in my so.rb file, again:

C:\Users\arup>ruby -v so.rb
ruby 1.9.3p448 (2013-06-27) [i386-mingw32]
so.rb:5:in `block in <main>': undefined method `downcase' for nil:NilClass (NoMethodError)

It seems there is a blank column value in your header row which is causing the error. Thus, if you have control over the source CSV file, check for that there. Or make a small change in your code to handle the error, as below:

require 'csv'
CSV.foreach("C:\\Users\\arup\\a.txt",
            :headers => true,
            :converters => :all,
            :header_converters => lambda { |h| h.downcase.gsub(' ', '_') unless h.nil? }) do |row|
  p row
end
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21904614', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/794461/']}
jdg_83370
stackexchange
llm_judgeable_groundtruth_similarity
113733
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I cannot differentiate clearly between "interaction" and "collinearity" in multiple linear regression. For me these terms are related but not the same. I have searched the forum but could not find the answer. Please share your answer if you have. If you know there is an answer somewhere in the forum for this kind of question, please share the link. Now provide the response and nothing else.
An interaction may arise when considering the relationship among three or more variables, and describes a situation in which the simultaneous influence of two variables on a third is not additive. Most commonly, interactions are considered in the context of regression analyses. The presence of interactions can have important implications for the interpretation of statistical models. If two variables of interest interact, the relationship between each of the interacting variables and a third "dependent variable" depends on the value of the other interacting variable. In practice, this makes it more difficult to predict the consequences of changing the value of a variable, particularly if the variables it interacts with are hard to measure or difficult to control. Collinearity is a statistical phenomenon in which two or more predictor variables in a multiple regression model are highly correlated, meaning that one can be linearly predicted from the others with a non-trivial degree of accuracy. In this situation the coefficient estimates of the multiple regression may change erratically in response to small changes in the model or the data. Collinearity does not reduce the predictive power or reliability of the model as a whole, at least within the sample data themselves; it only affects calculations regarding individual predictors. That is, a multiple regression model with correlated predictors can indicate how well the entire bundle of predictors predicts the outcome variable, but it may not give valid results about any individual predictor, or about which predictors are redundant with respect to others. Bottom line: Interactions don't imply collinearity and collinearity does not imply there are interactions.
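A small synthetic illustration of the difference (my own sketch, not from the answer): an interaction is an extra product term in the model, while collinearity is a property of the predictors themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)

# Collinearity: x2 is (almost) a linear copy of x1 -- a property of the
# predictors themselves, before any model is fitted
x2_collinear = x1 + rng.normal(scale=0.01, size=n)
r = np.corrcoef(x1, x2_collinear)[0, 1]
print(r)  # ~1.0, so individual coefficient estimates become unstable

# Interaction: x2 is independent of x1, but the effect of x1 on y depends
# on x2 through the extra product term x1*x2 in the model
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 3.0 * x2 + 4.0 * (x1 * x2) + rng.normal(scale=0.1, size=n)
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # close to [1, 2, 3, 4]: an interaction with no collinearity
```

The second model recovers its coefficients cleanly even though it contains an interaction, because x1 and x2 are uncorrelated, which is the bottom line of the answer.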
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/113733', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/54934/']}
jdg_83371
stackexchange
llm_judgeable_groundtruth_similarity
45460660
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some code that has been working fine, but I have recently completely rewritten the spreadsheet to use named ranges. I'm now struggling to rewrite the code to refer to named ranges instead of absolute references. (i think this is the terminology!?) The File_ref range occupies cells A13:A104The Already_Input? range occupies cells B13:B104 I'm using Excel 2013 on Windows The code that works Sub test()Set mybook = Excel.ActiveWorkbookSet entrysheet = mybook.Sheets("Entry")Dim RangeStart As IntegerRangeStart = Range("File_ref").Cells(1).RowDim RangeLength As IntegerRangeLength = Range("File_Ref").CountDim i As LongDim j As LongDim m As Longj = 0m = 0For i = RangeStart To RangeLength + RangeStart If IsEmpty(entrysheet.Range("A" & i)) Then 'it's this bit I cannot get to refer to named range j = j + 1 ElseIf entrysheet.Range("B" & i) = "yes" Then m = m + 1 End IfNext iEnd Sub The code i have tried, but which doesn't work: Sub test()Set mybook = Excel.ActiveWorkbookSet entrysheet = mybook.Sheets("Entry")Dim RangeStart As IntegerRangeStart = Range("File_ref").Cells(1).RowDim RangeLength As IntegerRangeLength = Range("File_Ref").CountDim i As LongDim j As LongDim m As Longj = 0m = 0For i = RangeStart To RangeLength + RangeStart If IsEmpty(entrysheet.Range("File_ref").Cells(i)) Then j = j + 1 ElseIf entrysheet.Range("Already_Input?").Cells(i) = "yes" Then m = m + 1 End IfNext iEnd Sub Now provide the response and nothing else.
Assuming it's well-behaved (no cycles, no duplicate names or multiple parents for one child) you could simply use a "directed graph" and traverse it. To find the root(s) I also used a dictionary containing a boolean that indicates if there is any parent for the name:

lst = [('john', 'marry'), ('mike', 'john'), ('mike', 'hellen'), ('john', 'elisa')]

# Build a directed graph and a list of all names that have no parent
graph = {name: set() for tup in lst for name in tup}
has_parent = {name: False for tup in lst for name in tup}
for parent, child in lst:
    graph[parent].add(child)
    has_parent[child] = True

# All names that have absolutely no parent:
roots = [name for name, parents in has_parent.items() if not parents]

# Traversal of the graph (doesn't care about duplicates and cycles)
def traverse(hierarchy, graph, names):
    for name in names:
        hierarchy[name] = traverse({}, graph, graph[name])
    return hierarchy

traverse({}, graph, roots)
# {'mike': {'hellen': {}, 'john': {'elisa': {}, 'marry': {}}}}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45460660', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4680747/']}
jdg_83372
stackexchange
llm_judgeable_groundtruth_similarity
57291234
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working with react-router for the first time. Here is my piece of code.

<Switch>
  <Route exact path="/" component={Home} />
  <Route path="/A/xx/xxx/:feeros" component={About} />
  <Route path="/B/xx/xxx/:feeros" component={About} />
  <Route path="/C/xx/xxx/:feeros" component={About} />
  <Route path="/D/xx/xxx/:feeros" component={About} />
</Switch>

This part of the <Route path="//xx/xxx/:feeros" component={About} /> code is always repeated. Is there no way to reduce this in react-router? Now provide the response and nothing else.
If you need to handle four specific routes (A | B | C | D):

<Route path="/(A|B|C|D)/xx/xxx/:feeros" component={About} />

If you still need to intercept the parameter, but the values can only be A | B | C | D, then you can write this:

<Route path="/:letter(A|B|C|D)/xx/xxx/:feeros" component={About} />

You will have two parameters:

const { letter, feeros } = match.params;

letter can only be: "A", "B", "C" or "D". You can use another regular expression, for example:

<Route path="/([a-zA-Z])/xx/xxx/:feeros" component={About} />

The route will work for one Latin letter, for example:

'/A/xx/xxx/value'
'/s/xx/xxx/value'
'/F/xx/xxx/value'

Moreover, you can use regular expressions for parameters:

<Route path="/([a-zA-Z])/xx/xxx/:feeros(\d{2})" component={About} />

This matches routes with two-digit feeros values:

'/A/xx/xxx/11'
'/s/xx/xxx/21'
'/F/xx/xxx/45'
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57291234', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_83373
stackexchange
llm_judgeable_groundtruth_similarity
113061
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have gone through the link below, which says that U-Boot supports three types of flash technology: http://www.stlinux.com/u-boot/flash

1. NAND
2. NOR
3. SPI Serial Flash

I would like to know why SPI was chosen for flash devices and not I2C. Why can't we have I2C-based flash technology? Now provide the response and nothing else.
There are three reasons I feel that SPI is a better choice than I2C in most projects.

1. SPI can be clocked faster than I2C. I've used SPI up to 50 MHz on ICs that support that speed, while the max I2C speed is 3.4 MHz.

2. Since I2C is a 'shared' data line, the master needs to release control of the data line in order for the slave to respond. I've been burned on several projects before where the master released control and the slave IC proceeded to hang (mostly due to noise on the line). Now you're in a state where both sides are waiting for the other to respond and you need to reset the whole thing. Very frustrating. With SPI you side-step the whole shared-data-line problem because each line has a dedicated direction.

3. If you have a voltage level difference between your ICs (say 3.3 V vs 5 V), unidirectional voltage translators are easier to deal with than the tri-state handling needed for I2C.

On a side note, I find it easier to debug SPI if you're using the chip select lines, since you can use that line to trigger your scope. Basically it comes down to: if you have the spare pins, use SPI and save yourself the headache. If on the other hand you only have 2 pins, you're stuck with I2C. But seriously, add a separate reset line to your I2C slave IC, so you have a means of recovering from a hung bus.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/113061', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/32914/']}
jdg_83374
stackexchange
llm_judgeable_groundtruth_similarity
638724
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am looking to have a few dozen DHT11 temperature/humidity sensors connected to a raspberry Pi. I would like to be able to determine the temperature/humidity of each unit, without having to use dozens of MCU Pins. I also will eventually be needing a servo motor hooked up to each unit as well. I was thinking about using a multiplexer, but that would result in lots of wire to be used. Some sort of addressable latch seems to be the solution here, but not sure whats the correct route to take here. Now provide the response and nothing else.
Dozens of temperature/humidity sensors sounds like a set of greenhouses, eventually with servos to open windows or run pumps ... If you have dozens of anything, modularity and testability are really important. Instead of an addressable latch -- of itself a perfectly good idea -- consider local intelligence, whcih will give you a lot of benefits. A very small local MCU solves addressability and serial connection . With a local MCU, obviously addressing is easy. But you can also standardise your interface so that later, different sensors only require local changes on the MCU. Long Wires For wires longer than a single enclosure (over 50 cm, say) my suggestion would be: consider RS-485. (In half-duplex 1-pair configuration) Short Wires If your wires are short (say, less than 20 cm) you might consider SPI or I2C . Advantages as above, saving a chip, PCB area, and power requirements. (But I can't immediately think of something that would have dozens of humidity sensors in a small area. Something in a lab?) Architecture outline What immediately jumps to mind is Smallest MCU with enough IO, I often use 8-pin ATTiny85 (with internal oscillator) Some 75176-pinout RS-485 driver and half-duplex 1-pair Your sensor (1 pin IO) + line driver (2 pins IO) + 2 spare for servo MCU transmits Node 7 17.6 C 25 %RH XXXX XXXX is some kind of checksum or CRC On a very lightly loaded net, remote units could transmit periodically (with randomising to reduce clashing, CRC to detect it (but see notes below) Either/or with polling from master, responding to Node 7 ? XXXX Do sensor normalisation on the local MCU so Pi master knows nothing about it. 
Very slow transmission (9600 baud or even 1200) This kind of arrangement Scales extremely well Especially for distance Future-proofs extremely well Especially for different sensors and actuators Is extremely easy to test Broadcast-until-silenced slaves are trivial to perform basic test You can just type commands and read the answers Tip: allow humansaysso as always-good checksum, so can type without special program Protocol I'd imagine very simple text messages as the protocol: node id name value unit name value unit ... XXXX XXXX being whatever checksum/CRC you use. I'd suggest looking at CCITT-16 or even -32. But if you like standards, consider Modbus-RTU or Modbus-ASCII or NMEA-0183 , all of which have very useful ideas for you. Modbus over serial v1.02 Modbus Application Protocol v1.1b3 NMEA-0183 Low-road protocol You might not like having such a "high-road" protocol (unit conversion, readability, parsing). Instead you might consider having the local MCU do as little as possible, ie, just be a translator from the serial bus to the sensor format, and adding addressing. This is a perfectly reasonable approach. It comes down to whether you are more worried about forced changes in sensors (because of supply or functionality) or forced changes in your MCU firmware (because of bug). Lastly, on the master Pi, consider UDP convert daemon translating to and from the bus outlined above. Then the real program interfaces via UDP datagrams. This modularity has huge benefits for futureproofing and testability: Testing is great: can easily make N processes representing N slaves Could put "real program" at any desired distance from "converter" over ethernet, internet PS Re protocol and CRC: Consider actual text messages, Update re broadcast slaves. My suggestion specifically is only good for very lightly loaded networks; concretely, is for the slaves to broadcast at whim after a reset. 
If you can be certain your network is very lightly loaded (perhaps a message every 10 minutes per slave, <100 slaves, ~10 ms/message) and the price of a missed message is very low (an interpolated value on a graph for some slow-moving variable), you can just leave it like that. But for a bit more order, many would consider it better if the master polls the slaves. My preference is for slaves to wake up and announce themselves, the master notes them, and then polls them -- but it depends entirely on what you're doing. If you've got anything which might get anywhere near loading your bus, or have any significant downside to individual missed messages, pay close attention to possibilities of collisions. As noted in comments (thanks @graham, @supercat), collisions are exponentially nasty in message size, frequency, and number of nodes. Where your tradeoffs lie is up to you, and it's quite dependent on the frequencies and speeds etc.
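As a rough host-side illustration of the suggested framing, here is a Python sketch. The field layout and the CRC-16/CCITT choice are assumptions for the example -- the answer deliberately leaves "XXXX" as "some kind of checksum or CRC", and CCITT-16 is one of the options it names:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT-FALSE: poly 0x1021, init 0xFFFF, MSB first, no reflection."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = (crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def make_frame(node: int, temp_c: float, rh_pct: int) -> str:
    """Build 'Node <id> <temp> C <rh> %RH <crc>' -- the layout is illustrative."""
    payload = f"Node {node} {temp_c:.1f} C {rh_pct} %RH"
    return f"{payload} {crc16_ccitt(payload.encode('ascii')):04X}"

def check_frame(frame: str) -> bool:
    """Verify a received frame against its trailing CRC field."""
    payload, _, crc_hex = frame.rpartition(" ")
    return crc16_ccitt(payload.encode("ascii")) == int(crc_hex, 16)
```

The same CRC routine ports directly to a few dozen bytes of AVR C on the ATTiny side, which is the point of keeping the frame plain ASCII.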
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/638724', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/310708/']}
jdg_83375
stackexchange
llm_judgeable_groundtruth_similarity
2543
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Wouldn't data be lost when mapping 6-bit values to 4-bit values in DES's S-Boxes? If so, how can we reverse it so the correct output appears? Now provide the response and nothing else.
DES is a Feistel-based cipher . In such ciphers, the function $\rm F$ need not be invertible. Here's the reason: In each round, the following operation is applied, for $i = 0, 1, \dots, n$:

$L_{i+1} = R_i$
$R_{i+1} = L_i \oplus {\rm F}(R_i, K_i)$

Decryption is performed as follows:

$R_{i} = L_{i+1}$
$L_{i} = R_{i+1} \oplus {\rm F}(L_{i+1}, K_{i})$

As you can see, the decryption does not need $\rm F$ to be invertible. (Since the decryption does not need to compute $\rm F^{-1}$.)
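Because decryption only ever recomputes $\rm F$ (never $\rm F^{-1}$), any round function works. A minimal sketch in Python -- the round function and key values here are made up purely for illustration; real DES uses the S-box-based F:

```python
def f(x: int, k: int) -> int:
    """A deliberately non-invertible round function: e.g. f(1, k) == f(255, k)."""
    return (x * x + k) & 0xFF

def encrypt(l: int, r: int, keys) -> tuple:
    for k in keys:
        l, r = r, l ^ f(r, k)        # L_{i+1} = R_i;  R_{i+1} = L_i xor F(R_i, K_i)
    return l, r

def decrypt(l: int, r: int, keys) -> tuple:
    for k in reversed(keys):
        l, r = r ^ f(l, k), l        # L_i = R_{i+1} xor F(L_{i+1}, K_i);  R_i = L_{i+1}
    return l, r

keys = [0x3A, 0x51, 0xC4, 0x07]      # made-up round keys, purely illustrative
```

Even though many inputs of `f` collide, every block round-trips exactly, because each round only XORs the output of `f` into the other half.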
{}
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/2543', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/-1/']}
jdg_83376
stackexchange
llm_judgeable_groundtruth_similarity
38699786
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have read and almost gone through all the linux kernel documentation on the device tree and device tree overlays.I am not able to understand if we have to create a new entree in the device tree of the platform or to create a new overlay for the device for a new driver based on device tree.I am looking for a simple led glowing driver example where led is connected to GPIO pin and its configuration is mentioned in the device tree overlay or device tree fragment on the board's platform.How can it be build/pushed and tested using the user space application. Now provide the response and nothing else.
I created a custom device in my device tree:

my_device@ffdf0000 {
    compatible = "my_driver";
    reg = <0xffdf0000 0x1000>;
    /*
     * reg = address of device and size
     * (Minimum is System's Pagesize = 0x1000 Byte in my case)
     */
};

I wrote a kernel stub for this device. (Here I took kernel_src/drivers/uio/uio_pdrv_genirq.c and Hans J. Koch: Userspace I/O drivers in a realtime context (device driver tutorial) as basis.) This stub has the following two structs:

The of_device_id struct:

static struct of_device_id my_match_table[] = {
    { .compatible = "my_driver", },
    {0}
};
MODULE_DEVICE_TABLE(of, my_match_table);

and the driver struct itself:

static struct platform_driver my_platform_driver = {
    .probe  = my_probe,
    .remove = my_remove,
    .driver = {
        .name           = "my_driver",
        .owner          = THIS_MODULE,
        .of_match_table = of_match_ptr(my_match_table),
    },
};

Now I have access to the properties of the device tree in my probe function:

static int my_probe(struct platform_device *dev)
{
    struct uio_info *uioinfo;
    struct resource *r = &dev->resource[0];
    [...]
    uioinfo->name = dev->dev.of_node->name;   /* name from device tree: "my_device" */
    uioinfo->mem[0].addr = r->start;          /* device address from device tree */
    uioinfo->mem[0].size = resource_size(r);  /* size from device tree */
    [...]
}

When there is a match in compatible with both the kernel stub's entry and the device tree, the probe function is called.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38699786', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/714831/']}
jdg_83377
stackexchange
llm_judgeable_groundtruth_similarity
31202534
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two iPython notebook installations. One on an AWS Micro Instance, the second using Anaconda on my Macbook (OS X Yosemite).I encountered a difference in the way both of them handle the following code: my_list = [1, 2, 3, 0, 5, 10, 11, 1, 5]your_list = [1, 2, 3, 0, 12, 13]my_set = set(my_list)your_set = set(your_list)print my_setprint len(my_set)print len(my_list) On iPython-AWS, my output is: set([0, 1, 2, 3, 5, 10, 11])79 On iPython-Macbook, my output is: ---------------------------------------------------------------------------TypeError Traceback (most recent call last)<ipython-input-29-cd060f1b0bde> in <module>() 1 my_list = [1, 2, 3, 0, 5, 10, 11, 1, 5] 2 your_list = [1, 2, 3, 0, 12, 13]----> 3 my_set = set(my_list) 4 your_set = set(your_list) 5 print my_setTypeError: 'set' object is not callable Additionally, these are the installation details, if relevant:1. iPython on AWS Micro Instance: http://i.stack.imgur.com/qYrq8.png iPython Notebook on Macbook - http://i.stack.imgur.com/Q6Id5.png I cannot seem to find the reason for this difference, although I did come across many threads on Stackoverflow regarding the "TypeError: 'set' object is not callable" issue.I will appreciate any help in understanding why this is so, and if there is anything I can do to ensure my code runs on both installations. Now provide the response and nothing else.
This error indicates that you may have defined a set with the variable name set ; if you did so, that would overwrite the built-in function set . Example of this issue occurring:

>>> set = set([1,2,3,4,5])
>>> my_set = set([2,3,4,5,6])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'set' object is not callable
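A short sketch of the failure mode and a workaround -- the real fix is simply to choose a different variable name, but the built-in constructor also remains reachable through the `builtins` module:

```python
import builtins

set = builtins.set([1, 2, 3])    # rebinding the name 'set' shadows the built-in
try:
    broken = set([4, 5, 6])      # now calls the set *object*, not the constructor
except TypeError as exc:
    message = str(exc)           # "'set' object is not callable"

# Until the shadowing name is renamed/removed, the constructor is still
# reachable explicitly:
my_set = builtins.set([4, 5, 6])
```

Restarting the kernel (or re-running the notebook from a clean state) clears the stale binding, which is why the same code worked on one machine and not the other.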
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31202534', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2762934/']}
jdg_83378
stackexchange
llm_judgeable_groundtruth_similarity
616665
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've tried to solve the following problem: "If $f,g:[a,b] \rightarrow \mathbb R$ are both Riemann integrable, show that the function $\phi:[a,b] \rightarrow \mathbb R$, defined as $\phi(x)= \max \{f(x), g(x)\}$ is also Riemann integrable". My solution: If $f,g$ are Riemann integrable, given $\epsilon >0$, there are partitions $P, Q$ with respect to the interval [a,b] such that $U(f;P)-L(f;P) < \frac{\epsilon}{2}$ and $U(g;Q)-L(g;Q) < \frac{\epsilon}{2}$, where U = upper sum and L = lower sum. Supose $P$ is more refined than $Q$. Then we can write that $U(f;P)-L(f;P)<\frac{\epsilon}{2}$ and $U(g;P)-L(g;P)<\frac{\epsilon}{2}$. It follows that $U(f;P)+U(g;P)-[L(f;P)+L(g;P)]<\epsilon$ (1). If $P=\{t_0,\dots,t_n\}$, consider $\sum_{i=1}^m (t_i - t_{i-1})$ as the sum of all intervals of $P$ where $M_i=\sup\phi(x)=\sup f(x)$, and $\sum_{i=1}^r (t'_i - t'_{i-1})$ as the sum of all intervals of $P$ where $N_i=\sup\phi(x)=\sup g(x)$, with $n=m+r$. Then we can write: $U(\phi;P)= \sum_{i=1}^m M_i(t_i-t_{i-1}) + \sum_{i=1}^r N_i(t'_i-t'_{i-1})$. It shows that $U(\phi;P)\ge U(f;P)$ and $U(\phi;P)\ge U(g;P)$, or $2U(\phi;P)\ge U(f;P)+U(g;P)$. A similar argument proves that $2L(\phi;P)\ge L(f;P)+L(g;P)$. I'd like to get $U(\phi;P)-L(\phi;P) \le U(f;P)+U(g;P)-[L(f;P)+L(g;P)]<\epsilon$ in order to prove that $\phi$ is Riemann intrgrable, but evidently it is not the case. I commited a mistake, but I don't know where. Please, help to know what is wrong. Now provide the response and nothing else.
Recall that $$\max \{ f,g\} = \frac{f+g + |f-g|}2.$$Then you can just use the fact that if $f,g$ are Riemann integrable, successively using the fact that a difference of Riemann integrable functions is Riemann integrable, composition of a Riemann integrable function with $| \cdot |$ is Riemann integrable, and linear combinations of Riemann integrable functions are Riemann integrable, then you can conclude that $\max \{f,g\}$ is Riemann integrable. I admit I didn't manage to go through your proof to find the "mistake" (if there is a way to do it this way), but I see $2$'s appearing (see that $2$ in my denominator? I don't think it's a surprise..), so I'm guessing there is just a lot of decoration in your proof but the hard part is essentially to show that $| f |$ is Riemann integrable when $f$ is, which is a bit hard to see with all this decoration around. Added : For those who are wondering why $|f|$ should be integrable if $f$ is, I can use this sledgehammer to do it : $f$ has a set of discontinuities of measure zero by integrability, hence $|f|$ has a set of discontinuities of measure less or equal than zero since it is continuous, i.e. a set of discontinuities of measure zero. Therefore it is integrable. Equivalently, $||f(x)| - |f(y)|| \le |f(x) - f(y)|$, hence $U(|f|,P) - L(|f|,P) \le U(f,P) - L(f,P)$. Hope that helps,
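The pointwise identity the proof rests on is elementary and easy to sanity-check numerically (integer inputs keep the arithmetic exact):

```python
def max_via_abs(x, y):
    """max{f, g} pointwise, via (f + g + |f - g|) / 2."""
    return (x + y + abs(x - y)) / 2

def min_via_abs(x, y):
    """The companion identity: min{f, g} = (f + g - |f - g|) / 2."""
    return (x + y - abs(x - y)) / 2
```

The same two formulas give integrability of $\min\{f,g\}$ for free once $|f-g|$ is handled.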
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/616665', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/70788/']}
jdg_83379
stackexchange
llm_judgeable_groundtruth_similarity
158187
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Is it true that the dual of a smooth hypersurface $X$ of $\mathbb{P}^n$ of degree $d\ge 2$ is a hypersurface? If yes, could you give me a simple proof ? Or a reference? Note that in this case, the dual is birational to $X$. Now provide the response and nothing else.
Yes, the dual is a hypersurface. Here is a simple proof when $n\geq 4$. First of all, by the Lefschetz hyperplane theorem, the restriction homomorphism, $$ r : \text{Pic}(\mathbb{P}^n) \to \text{Pic}(X),$$ is an isomorphism. In particular, every invertible sheaf on $X$ is either ample, trivial, or anti-ample. Denote the Gauss map on $X$ by, $$ G : X \to (\mathbb{P}^n)^\vee, \ \ x \mapsto \mathbb{T}_xX, $$ where $\mathbb{T}_x X$ denotes the tangent hyperplane to $X$ at $x$.The invertible sheaf $G^*\mathcal{O}(1)$ is either ample, trivial or anti-ample. If $G$ is nonconstant, then $G^*\mathcal{O}(1)$ has positive degree Hilbert polynomial, hence $G^*\mathcal{O}(1)$ is neither trivial nor anti-ample. Therefore $G^*\mathcal{O}(1)$ is ample. In particular, $G$ can have no positive dimensional fiber, since this would give a subvariety of $X$ on which $G^*\mathcal{O}(1)$ is not ample. Therefore $G$ is finite, and hence $\text{dim}(G(X))$ equals $\text{dim}(X)$. Finally, if $G$ were constant with image $[H]$, for $H$ a fixed hyperplane in $\mathbb{P}^n$, then for every $x\in X$, the tangent hyperplane to $\mathbb{T}_x X$ equals $H$. In particular, for every $x\in X$, $x$ is contained in $H$. Thus $X$ is contained in $H$, contradicting that $X$ is a smooth hypersurface of degree $d>1$. Edit. I just want to add, I heard this argument long ago, but I do not know who first proved this result. Fyodor Zak has done important work on the Gauss map, so you might consult his work to try and find a reference. Second edit. The OP asks about $n=3$. One could probably adapt the "fast" argument above using Noether-Lefschetz, but it is probably more honest just to compute $G^*\mathcal{O}(1)$. 
Indeed, the morphism $G$ arises from the short exact sequence sequence,$$ 0 \to \mathcal{O}_{\mathbb{P}^n}(-d)|_X \xrightarrow{u} \Omega_{\mathbb{P}^n/k}|_X \to \Omega_{X/k} \to 0,$$and the Euler sequence (restricted to $X$),$$ 0 \to \Omega_{\mathbb{P}^n/k}|_X \xrightarrow{v} \mathcal{O}_{\mathbb{P}^n}(-1)^{\oplus(n+1)}|_X \to \mathcal{O}_{\mathbb{P}^n}|_X \to 0. $$The adjoint of $v\circ u$ is an invertible quotient,$$ \mathcal{O}_{\mathbb{P}^n}(1)^{\oplus(n+1)}|_X \to \mathcal{O}_{\mathbb{P}^n}(d)|_X, $$which is equivalent to an invertible quotient (after twisting),$$\mathcal{O}_{\mathbb{P}^n}^{\oplus(n+1)}|_X \to \mathcal{O}_{\mathbb{P}^n}(d-1)|_X. $$This invertible quotient defines the morphism $G:X\to (\mathbb{P}^n)^\vee$. Thus, $G^*\mathcal{O}(1)$ is isomorphic to $\mathcal{O}_{\mathbb{P}^n}(d-1)|_X$. Thus, if $d>1$, then $G^*\mathcal{O}(1)$ is ample. Hence $G$ is finite.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/158187', 'https://mathoverflow.net', 'https://mathoverflow.net/users/23758/']}
jdg_83380
stackexchange
llm_judgeable_groundtruth_similarity
232085
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the following question at hand: Suppose $U,V$ are iid random variables following Unif $(0,1)$. what is the conditional distribution of $U$ given $Z:=\max(U,V)$ ? I tried writing $Z=\Bbb{I}\cdot V+(1-\Bbb{I})\cdot U$ where $\Bbb{I}=\begin{cases}1&U<V\\0&U>V\end{cases}$ But I am not getting anywhere. Now provide the response and nothing else.
A picture might help. Independent uniform distributions on the interval $[0,1]$ may be considered a uniform distribution on the unit square $I^2 = [0,1]\times [0,1]$ . Events are regions in the square and their probabilities are their areas. Let $z$ be any possible value of $\max(U,V)$ . The set of coordinates $(U,V)$ where $\max(U,V)=z$ forms the top and right edges of a square of side $z$ . Let $dz$ be a small positive number. The set of coordinates $(U,V)$ whose maximum lies between $z$ and $z+dz$ forms a narrow thickening of that square, as shaded in the figure. Its area is the difference of the areas of two squares, one of side $z+dz$ and the other of side $z$ , whence $$\Pr(z \le Z \le z+dz) = (z+dz)^2 - z^2 = 2z\,dz + (dz)^2.\tag{1}$$ Let $u$ be any possible value of $U$ : it is marked with a vertical dashed line in the figures. The left panel shows a case where $u \le z$ : The chance that $U\le u$ would be the area to the left of that line (equal to $u$ ); but the event that $U\le u$ and $Z$ lies between $z$ and $z+dz$ is just the brown shaded area. It's a rectangle, so its area is its width $u$ times its height $dz$ . Thus, $$\Pr(U \le u, z \le Z \le z+dz) = u\,dz.\tag{2}$$ The right panel shows a case where $z \lt u \le z+dz$ . Now the chance that $U \le u$ and $z \lt Z \le z+dz$ consists of two rectangles. The top one has base $u$ and height $dz$ ; the right one has base $(u-z)$ and height $z$ . Therefore $$\Pr(U \le u, z \le Z \le z+dz) = u\, dz + (u-z)z.\tag{3}$$ By definition, the conditional probabilities are these chances divided by the total chance that $z \le Z \le z+dz$ , given in $(1)$ above. Divide $(2)$ and $(3)$ by this value. Letting $dz$ be infinitesimal, and retaining the standard part of the result, gives the chances conditional on $Z=z$ . 
Thus, when $0 \le u \le z$ , $$\Pr(U \le u\,|\, Z=z) = \frac{u\,dz}{2z\,dz + (dz)^2} = \frac{u}{2z + dz} \approx \frac{u}{2z}.$$ When $z \lt u \le z+dz$ , write $u = z + \lambda dz$ for $0 \lt \lambda \le 1$ and compute $$\Pr(U \le u|Z=z) = \frac{u\, dz + (u-z)z}{2z\,dz + (dz)^2} = \frac{(z + \lambda dz)dz + (\lambda dz)z}{2z\,dz+(dz)^2}\approx\frac{1+\lambda}{2}.$$ Finally, for $u \gt z+dz$ , the brown area in the right panel has grown to equal the gray area, whence their ratio is $1$ . These results show that the conditional probability grows linearly from $0$ to $z/(2z)=1/2$ as $u$ grows from $0$ to $z$ , then shoots up linearly from $1/2$ to $1$ in the infinitesimal interval between $z$ and $z+dz$ , then stays at $1$ for all larger $u$ . Here's a graph: Because $dz$ is infinitesimal, it is no longer possible to distinguish $z$ from $z+dz$ visually: the plot jumps from a height of $1/2$ to $1$ . Putting the foregoing together into a single formula to be applied to any $z$ for which $0 \lt z \le 1$ , we could write the conditional distribution function as $$F_{U|Z=z}(u) = \left\{\begin{array}{ll}0 & u \le 0 \\\frac{u}{2z} & 0 \lt u\le z \\1 & u \gt z.\end{array}\right.$$ This is a complete and rigorous answer. The jump shows that a probability density function will not adequately describe the conditional distribution at the value $U=z$ . At all other points, though, there is a density $f_{U|Z=z}(u)$ . It is equal to $0$ for $u\le 0$ , $1/(2z)$ for $0 \le u \lt z$ (the derivative of $u/(2z)$ with respect to $u$ ), and $0$ for $u \gt z$ . You could use a "generalized function" to write this in a density-like form. Let $\delta_z$ be the "generalized density" giving a jump of magnitude $1$ at $z$ : that is, it's the "density" of an atom of unit probability located at $z$ . Then the generalized density at $z$ can be written $\frac{1}{2}\delta_z$ to express the fact that a probability of $1/2$ is concentrated at $z$ . 
In full, we could write $$f_{U|Z=z}(u) = \left\{\begin{array}{ll}0 & u \le 0 \\\frac{1}{2z} & 0 \lt u\lt z \\\frac{1}{2}\delta_z(u) & u=z \\0 & u \gt z. \end{array}\right.$$
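One way to convince yourself of the $u/(2z)$ piece plus the atom of mass $1/2$ at $u = z$ is a quick Monte Carlo check, conditioning on $Z$ landing in a thin band $[z, z+dz]$ exactly as in the derivation above (the band width, sample count, and thresholds are arbitrary choices for the sketch):

```python
import random

random.seed(20240101)
z, dz = 0.6, 0.01                 # condition on Z in the thin band [z, z + dz]
in_band = below_03 = below_z = 0
for _ in range(400_000):
    u, v = random.random(), random.random()
    if z <= max(u, v) <= z + dz:
        in_band += 1
        below_03 += (u <= 0.3)    # u well inside (0, z): expect about 0.3 / (2 z)
        below_z += (u <= z)       # all of (0, z): expect about 1/2

p_small = below_03 / in_band      # estimates P(U <= 0.3 | Z ~ z)
p_below = below_z / in_band       # estimates P(U <= z   | Z ~ z)
```

With $z = 0.6$ the estimates land near $0.3/(2 \cdot 0.6) = 0.25$ and near $1/2$, matching the linear ramp and the jump at $u = z$.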
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/232085', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/95845/']}
jdg_83381
stackexchange
llm_judgeable_groundtruth_similarity
197917
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Numerical evidence shows the validity of the following identity$$\int\limits_0^z\frac{xdx}{\sin{x}\sqrt{\sin^2{z}-\sin^2{x}}}=\frac{\pi}{4\sin{z}}\ln{\frac{1+\sin{z}}{1-\sin{z}}},\tag{1}$$if $0< z< \pi/2$.How can it be proved? An indirect proof can be found in the paper http://link.springer.com/article/10.1134%2FS1547477113010044 (Potential of multiphoton exchange in the scattering of light charged particles of a heavy target, by Yu.M. Bystritskiy, E.A. Kuraev and M.G. Shatnev). Formula (1) is equivalent to (12) from the paper where it is called "the marvelous identity". Now provide the response and nothing else.
Actually, I now think that the easiest method is to do this: Write $k=\sin z$, so that $|k|<1$, and make the substitution $x = \arcsin(k\sin\theta)$, where $0\le \theta\le \frac\pi2$. The integral becomes$$\int_0^{\pi/2} \frac{\arcsin(k\sin\theta)}{k\sin\theta \,\,(1-k^2\sin^2\theta)^{(1/2)}}\ \mathrm{d}\theta = \int_0^{\pi/2} \sum_{n=0}^{\infty} c_n\,k^{2n}\sin^{2n}\theta\,\mathrm{d}\theta,$$where the numbers $c_n$ are the coefficients in the even power series$$\frac{\arcsin(t)}{t \,(1{-}t^2)^{(1/2)}} = \sum_{n=0}^{\infty} c_n t^{2n},$$which are easily calculated to be $$c_n = \frac{2^{2n} (n!)^2}{(2n{+}1)!}.$$Combining this with the well-known formula$$\int_0^{\pi/2} \sin^{2n}\theta\,\mathrm{d}\theta = \frac{\pi}{2^{2n+1}}\ {{2n}\choose{n}},$$one obtains$$\int_0^{\pi/2} \frac{\arcsin(k\sin\theta)}{k\sin\theta (1-k^2\sin^2\theta)^{(1/2)}}\ \mathrm{d}\theta = \pi \sum_{n=0}^{\infty} \frac{k^{2n}}{(4n{+}2)}= \frac{\pi}2 \sum_{n=0}^{\infty} \frac{\sin^{2n}z}{(2n{+}1)}\ .$$The rest should be clear.
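The identity is easy to verify numerically from the substituted form of the integral above (the value $k = 0.5$ and the midpoint rule are arbitrary choices for this sketch; the midpoint rule conveniently avoids the removable singularity at $\theta = 0$):

```python
import math

k = 0.5                       # k = sin z, any 0 < k < 1
N = 20000                     # midpoint rule on [0, pi/2]
h = (math.pi / 2) / N
total = 0.0
for i in range(N):
    t = (i + 0.5) * h
    s = k * math.sin(t)
    # integrand arcsin(k sin t) / (k sin t * sqrt(1 - k^2 sin^2 t))
    total += math.asin(s) / (s * math.sqrt(1.0 - s * s)) * h

# closed form: pi/(4k) * ln((1+k)/(1-k)), i.e. the right-hand side of (1)
closed_form = math.pi / (4 * k) * math.log((1 + k) / (1 - k))

# partial sum of the series (pi/2) * sum k^{2n} / (2n + 1)
series = (math.pi / 2) * sum(k ** (2 * n) / (2 * n + 1) for n in range(200))
```

All three agree to high precision, confirming both the term-by-term integration and the final summation.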
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/197917', 'https://mathoverflow.net', 'https://mathoverflow.net/users/32389/']}
jdg_83382
stackexchange
llm_judgeable_groundtruth_similarity
15686814
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I created a simple menu in Java, but I can't figure out how to change the size of a button.My menu looks like this: I want the last button to have same size like other buttons. tlacTisk.setSize(10,10);tlacTisk.setPreferredSize(10,10); doesn't work. Code, where I created buttons and box: JButton tlacSVG = new JButton(); tlacSVG.setText("Export do SVG"); tlacSVG.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { exportujSVG(); } }); JButton tlacPNG = new JButton(); tlacPNG.setText("Export do PNG"); tlacPNG.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { exportujPNG(); } }); JButton tlacTisk = new JButton(); tlacTisk.setText("Tisk..."); tlacTisk.setPreferredSize(new Dimension(50, 25)); tlacTisk.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { tiskni(); } }); Box boxTlacitek = Box.createVerticalBox(); boxTlacitek.add(Box.createVerticalStrut(5)); boxTlacitek.add(tlacSVG); boxTlacitek.add(Box.createVerticalStrut(10)); boxTlacitek.add(tlacPNG); boxTlacitek.add(Box.createVerticalStrut(10)); boxTlacitek.add(tlacTisk); boxTlacitek.setBorder(BorderFactory.createTitledBorder("Menu")); okno.add(boxTlacitek, BorderLayout.EAST); Can you give me advice how I can change size? Thanks. Now provide the response and nothing else.
Different layouts managers treat preferred size differently. Also, setting size with setSize() is not a good idea. Let the layout manager do the layout for you. See A Visual Guide to Layout Managers for more details and examples. For example you can create a separate panel that holds the buttons. Set its layout to GridLayout . In this layout the components takes all the available space within its cell, and each cell is exactly the same size. Add this panel to the container. See How to Use GridLayout for examples. Here is a simple demo of GridLayout and GridBagLayout : import java.awt.*;import javax.swing.*;public class DemoButtons { public DemoButtons() { final JFrame frame = new JFrame("Demo buttons"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); JPanel buttonPanel = new JPanel(new GridLayout(3, 1)); buttonPanel.add(new JButton("Export do SVG")); buttonPanel.add(new JButton("Export do PNG")); buttonPanel.add(new JButton("Tisk...")); JPanel east = new JPanel(new GridBagLayout()); GridBagConstraints gbc = new GridBagConstraints(); gbc.anchor = GridBagConstraints.NORTH; gbc.weighty = 1; east.add(buttonPanel, gbc); JPanel center = new JPanel(){ @Override public Dimension getPreferredSize() { return new Dimension(200, 200); } }; center.setBorder(BorderFactory.createLineBorder(Color.BLACK)); frame.add(east, BorderLayout.EAST); frame.add(center); frame.pack(); frame.setVisible(true); } public static void main(String[] args) { try { UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); } catch (Exception e) { e.printStackTrace(); } SwingUtilities.invokeLater(new Runnable() { @Override public void run() { new DemoButtons(); } }); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15686814', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2100558/']}
jdg_83383
stackexchange
llm_judgeable_groundtruth_similarity
7356638
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: On my local machine, I work on multiple web sites and run them under IIS under a "Default" web site. That way I can access the sites through this type of URL: http://localhost/App1/ . Here's the structure: LocalDev (site) App1 (application) App2 (application) App3 (application) The problem I'm encountering is that in App1, I'm trying to enable Windows authentication on a subdirectory of App1, like this: <configuration> <location path="internal"> <system.web> <authentication mode="Windows"/> <authorization> <allow users="*"/> </authorization> </system.web> </location></configuration> Unfortunately, when I then try to access http://localhost/App1/internal/url.aspx , I get this error: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS. App1 is set up as an application, not a virtual directory. I've tried changing my machine.config to allow changing the authentication section anywhere: <configuration> <configSections> <sectionGroup name="system.web" type="System.Web.Configuration.SystemWebSectionGroup, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"> <section name="authentication" type="System.Web.Configuration.AuthenticationSection, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" allowDefinition="Everywhere"/> </sectionGroup> </configSections></configuration> What do I have to do to allow my sites to set their own authentication modes? Now provide the response and nothing else.
You need to enable Windows authentication at the application level in the Web.config, then further define authorization at the folder level, allowing all users at the root and denying all unauthenticated for the internal folder. In IIS, make sure both Anonymous Authentication and Windows Authentication are enabled for the application. Then, modify your Web.config as follows: <configuration> <system.web> <authentication mode="Windows"/> <authorization> <allow users="*"/> </authorization> </system.web> <location path="internal" allowOverride="true"> <system.web> <authorization> <deny users="?"/> </authorization> </system.web> </location></configuration>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7356638', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/119549/']}
jdg_83384
stackexchange
llm_judgeable_groundtruth_similarity
84531
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I appear not to be able to find a way to simplify the sums of fractions like the following example to their shortest representation.For me shorteste representation means, to have as few as possible repeting terms. As a proxy to this I use LeafCount , and thus want it minimized for expressions consisting of sums of fractions. FullSimplify fails to do much, just as using PowerExpand as additional TransformationFunction . Watch for yourself: Here is the expression: (-3 a - 2 a^3 + 4 Sqrt[1 + a^2] (5 - 9 Log[2]) + 4 a^2 Sqrt[1 + a^2] (5 - 9 Log[2]) + 12 (1 + a^2)^(3/2) Log[1 + Sqrt[1 + 1/a^2]] - 6 (4 (Sqrt[1 + a^2] - a (2 + a^2 - a Sqrt[1 + a^2])) Log[a] + a Log[1 + a^2]))/(12 (1 + a^2)^(3/2) Sqrt[2 π]) $\frac{-2 a^3+4 \sqrt{a^2+1} a^2 (5-9 \log (2))+12 \left(a^2+1\right)^{3/2} \log \left(\sqrt{\frac{1}{a^2}+1}+1\right)-6 \left(4 \left(\sqrt{a^2+1}-a \left(a^2-\sqrt{a^2+1} a+2\right)\right) \log (a)+a \log \left(a^2+1\right)\right)+4 \sqrt{a^2+1} (5-9 \log (2))-3 a}{12 \sqrt{2 \pi } \left(a^2+1\right)^{3/2}}$ You can see that there are many repeating terms, and it is obvious that this cannot be the shortest form.And the following commands will not do anything to better the situation: FullSimplify[%, Assumptions -> {a \[Element] Reals, a > 0}] or FullSimplify[%, Assumptions -> {a \[Element] Reals, a > 0}, TransformationFunctions -> {Automatic, PowerExpand}] or even FullSimplify[Together[Expand[%]], Assumptions -> {a \[Element] Reals, a > 0}] My bet of where the issue is, is currently on Together . 
It does something suboptimal (in terms of LeafCount ): a/e + b/e + c/f + d/f // Together yields: (c e + d e + a f + b f)/(e f) this thing can be further simplified with FullSimplify to the optimal result: (a + b)/e + (c + d)/f However this does obviously not work for my expression, I guess Together increases the LeafCount so much, that FullSimplify simply discards its intermediate results and goes without it. Edit: I just found that there is some simplification done with: FullSimplify[Apart[%]] which yields the somewhat simplified expression: (12 Log[1 + Sqrt[1 + 1/a^2]] - ( 3 a + 2 a^3 - 20 (1 + a^2)^(3/2) + 36 (1 + a^2)^(3/2) Log[2] - 48 a Log[a] - 24 a^3 Log[a] + 24 (1 + a^2)^(3/2) Log[a] + 6 a Log[1 + a^2])/(1 + a^2)^(3/2))/(12 Sqrt[2 \[Pi]]) $\frac{12 \log \left(\sqrt{\frac{1}{a^2}+1}+1\right)-\frac{2 a^3-24 a^3 \log (a)-20 \left(a^2+1\right)^{3/2}+6 a \log \left(a^2+1\right)+24 \left(a^2+1\right)^{3/2} \log (a)+36 \left(a^2+1\right)^{3/2} \log (2)+3 a-48 a \log (a)}{\left(a^2+1\right)^{3/2}}}{12 \sqrt{2 \pi }}$ However the result is still not the shortest form, there is still obviously a simplier expresion possible. Now provide the response and nothing else.
data3 = Style[{##}, ColorData["Rainbow"][Abs[(#1 - 0.5)] + Abs[1/10 (#2 - 5)]]] & @@@ data;Legended[ListPlot[data3, PlotStyle -> PointSize[0.01]], BarLegend["Rainbow"]] Update for your question in the comment: The color function ColorData["Rainbow"] ranges from 0 to 1 so the value has to be used within this range. Abs function basically works as a Ramp function in the two directions and centered at the center of the plot. Row[{Plot3D[Abs[(x - 0.5)] + Abs[1/10 (y - 5)],{x,0,1},{y,0,10}],ContourPlot[Abs[(x - 0.5)] + Abs[1/10 (y - 5)], {x, 0, 1}, {y, 0, 10}, ContourLabels -> (Text[Style[#3, Bold, 14, Red], {#1, #2}] &), ContourShading -> None]}] You can see in this plot that Abs[(#1 - 0.5)] + Abs[1/10 (#2 - 5)] is within 0 and 1 for the range of the plot and it drops in the four directions. So when you apply ColorData["Rainbow"] for the values resulted from Abs[(#1 - 0.5)] + Abs[1/10 (#2 - 5)] you will get the color gradient that you want.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/84531', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/29727/']}
jdg_83385
stackexchange
llm_judgeable_groundtruth_similarity
38401114
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to post to an url https://host:0101 with a request body. I have done this with non-ssl, http, but having trouble with https.I have both the trust store (JKS) and keystore (PKCS12)'s full path, and their passwords as properties in a .properties file. What i have to far is: public sendPost () throws Exception { SSLContext sslContext = getContext(); SSLConnectionSocketFactory factory = new SSLConnectionSocketFactory(sslContext, SSLConnectionSocketFactory.BROWSER_COMPATIBLE_HOSTNAME_VERIFIER); CloseableHttpClient client = HttpClients.custom().setSSLSocketFactory(factory).build(); HttpPost post = new HttpPost("https://host:0101/post"); StringEntity entity = new StringEntity(jsonData); post.setEntity(entity); response=client.execute(request); responseCode = response.getStatusLine().getStatusCode();}public SSLContext getContext() throws Exception { KeyStore keyStore = KeyStore.getInstance("JKS"); FileInputStream instream = new FileInputStream(new File(PATH_TO_KEYSTORE)); try { keyStore.load(instream, "password".toCharArray()); }finally {instream.close()} return SSLContexts.custom().loadTrustMaterial(keyStore).build();} I am testing this by running a Junit test just to verify that it works before deploying. @Testpublic void test() throws Exception { int responseCode = entityUnderTest.sendPost(); assertEquals(200, responseCode);} The error is at line: response=client.execute(request); with the HttpHostConnectException Failed: Connection Refused Now provide the response and nothing else.
You could just use str.count() since you only care about occurrences of a single word: with open("log_file") as f: contents = f.read() count = contents.count("apple") However, to avoid some corner cases, such as erroneously counting words like "applejack" , I suggest that you use a regex : import rewith open("log_file") as f: contents = f.read() count = sum(1 for match in re.finditer(r"\bapple\b", contents)) \b in the regex ensures that the pattern begins and ends on a word boundary (as opposed to a substring within a longer string).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38401114', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1751692/']}
jdg_83386
stackexchange
llm_judgeable_groundtruth_similarity
2393518
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I notice that the characters/symbols '`' and '@' are not used as operators in C/C++. Does anyone know the reason, or historically why this is so? If they really are not used, is it safe to define those symbols as another operator/statement using #define? Now provide the response and nothing else.
Normally, #define only accepts valid identifiers in the macro name - so you cannot do: #define @ at#define @(x) [x] Similarly with back-quote. And you didn't mention '$', which is sometimes allowed in identifiers. There might be a compiler-specific extension to allow such mappings, but I wouldn't use it. As to the historical reason for this, there are parts of the ISO 646 character set that are reserved to national implementations for national characters. These reserved portions include the characters that cause trouble, and the trigraphs and digraphs features in Standard C (and hence Standard C++) were added to ISO C in 1989 and 1994 respectively to provide workarounds for the problems. Trigraphs Trigraphs were added during the C89 standardization process to prevent people from, for example, having to see alphabetic characters (in Scandinavian languages) used in their C code (adapted from an example in B Stroustrup, 'Design and Evolution of C++', using a Danish terminal): #include <stdio.h>int main(int argc, char **argvÆÅ)æ if (argc < 1 øø *argvÆ1Å == 'Ø0') return 0; printf("Hello, %sØn", argvÆ1Å);å Or, in the ISO 8859-1 code set (or any of the ISO 8859-x code sets): #include <stdio.h>int main(int argc, char **argv[]){ if (argc < 1 || argv[1] == '\0') return 0; printf("Hello, %s\n", argv[1]);} The trigraphs were introduced to produce a neutral format for the code: ??=include <stdio.h>int main(int argc, char **argv??(??))??< if (argc < 1 ??!??! *argv??(1??) == '??/0') return 0; printf("Hello, %s??/n", argv??(1??));??> That's not very readable, either, but it is the same for everyone. Trigraph Equivalent to??/ \ backslash??< { open brace??> } close brace??( [ open square bracket??) ] close square bracket??= # hash (pound in American, but a pound is £ in English)??' ^ caret??! | pipe??- ~ tilde The standard says 'there are no other trigraphs'. This is why the escape sequence '\?' is recognized (as a simple question mark - though presumably that is '??/?'). 
Note that the GNU Compiler Collection (GCC) does not interpret trigraphs unless you hold its hand to the fire (specify ' -trigraphs ' on the command line). Digraphs The digraphs were added in 1994, and are not as pervasive or intrusive as trigraphs; they only appear outside strings and string literals. The digraphs are: Digraph Equivalent to<: [:> ]<% {%> }%: #%:%: ## The example using digraphs (and trigraphs): %:include <stdio.h>%:include <iso646.h>int main(int argc, char **argv<::>)<% if (argc < 1 or *argv<:1:> == '??/0') return 0; printf("Hello, %s??/n", argv<:1:>);%> At sign and back quote specifically? If you look at the Wikipedia URL above, you'll see that both '@' and '`' are sometimes replaced by national characters - and hence not good identifiers. An additional reason for not using '@' is that at the time C was introduced, '#' was the default erase character and '@' was the kill (line erase) character for terminals. So, you had to remember to escape them. Since '#' only appeared at the beginning of a line, it wasn't too much of a problem (using '#' and '##' came much, much later - standardization again), but '@' would have wiped out all the preceding typing on the line. And this was in the days before 'vi' - 'ed is the standard Unix editor'.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2393518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/286041/']}
jdg_83387
stackexchange
llm_judgeable_groundtruth_similarity
26076077
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say I have a set like this: #{"word1" "word2" "word3"} How could I list all ways that these words might be ordered, i.e. word1 word2 word3word2 word3 word1word3 word2 word1 etc. Now provide the response and nothing else.
The easiest way is using math.combinatorics : user> (require '[clojure.math.combinatorics :as combo])niluser> (combo/permutations #{"word1" "word2" "word3"})(("word1" "word2" "word3") ("word1" "word3" "word2") ("word2" "word1" "word3") ("word2" "word3" "word1") ("word3" "word1" "word2") ("word3" "word2" "word1")) Edit: I haven't looked at the math.combinatorics implementation, but here's a lazy version because OP asked for some code to follow. (defn permutations [s] (lazy-seq (if (seq (rest s)) (apply concat (for [x s] (map #(cons x %) (permutations (remove #{x} s))))) [s])))
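For comparison outside Clojure, the same enumeration can be sketched in Python (this block is my illustration, not part of the answer): `itertools.permutations` plays the role of `combo/permutations`, and the generator mirrors the recursive lazy version.

```python
# itertools.permutations enumerates all n! orderings of a collection.
from itertools import permutations

words = {"word1", "word2", "word3"}
orderings = list(permutations(words))   # 3! = 6 orderings

# A hand-rolled lazy generator mirroring the recursive Clojure sketch:
# pick each element in turn, then recurse on the remainder.
def perms(items):
    items = list(items)
    if len(items) <= 1:
        yield tuple(items)
        return
    for i, x in enumerate(items):
        for rest in perms(items[:i] + items[i + 1:]):
            yield (x,) + rest
```

Both produce the same set of orderings; the generator is only useful as a study aid, since the library version is faster.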
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26076077', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11755/']}
jdg_83388
stackexchange
llm_judgeable_groundtruth_similarity
1242266
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was refactoring some old code and came across the following line of code to convert bytes to GB. decimal GB = KB / 1024 / 1024 / 1024; Is there a better way to refactor the following piece of code? Update I meant to say bytes to Gigabytes. I gave wrong information. Now provide the response and nothing else.
I developed this method, which works up to TB: private static string FormatBytes(long bytes){ string[] Suffix = { "B", "KB", "MB", "GB", "TB" }; int i; double dblSByte = bytes; for (i = 0; i < Suffix.Length && bytes >= 1024; i++, bytes /= 1024) { dblSByte = bytes / 1024.0; } return String.Format("{0:0.##} {1}", dblSByte, Suffix[i]);}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1242266', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26327/']}
jdg_83389
stackexchange
llm_judgeable_groundtruth_similarity
30032400
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am not sure what will be in the char array after initialization in the following way: char buf[5]={0,}; Is that equivalent to char buf[5]={0,0,0,0,0}; Now provide the response and nothing else.
Yes, it is the same. If there are fewer initializers than elements in the array, then the remaining elements will be initialized as if they were objects having static storage duration (i.e., with 0 ). So, char buf[5]={0,}; is equivalent to char buf[5]={0,0,0,0,0}; Related Reading : From the C11 standard document, chapter 6.7.9, initialization: If there are fewer initializers in a brace-enclosed list than there are elements or members of an aggregate, or fewer characters in a string literal used to initialize an array of known size than there are elements in the array, the remainder of the aggregate shall be initialized implicitly the same as objects that have static storage duration.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30032400', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4857099/']}
jdg_83390
stackexchange
llm_judgeable_groundtruth_similarity
57910449
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to run the jupyter notebooks of fastai's v3 course. My system has ubuntu 16.04 . Here's what I've done: Installed Python Installed Anaconda Ran conda update conda conda install -c pytorch -c fastai fastai pytorch torchvision cuda92 git clone https://github.com/fastai/course-v3 The repo is now cloned at /home/ali/ml/course-v3 Did cd nbs/dl1 , then jupyter notebook Opened the http://localhost:8888/?token=xxxx link from terminal However, when I open any of the notebooks, and select any of the import statements, e.g: %matplotlib inlinefrom fastai.basics import * I get an error of the following kind: ModuleNotFoundError Traceback (most recent call last)<ipython-input-2-d09c56fdb8ce> in <module> 1 get_ipython().run_line_magic('matplotlib', 'inline')----> 2 from fastai.basics import *ModuleNotFoundError: No module named 'fastai' I've tried running conda install -c fastai fastai again as indicated on https://course.fast.ai/start_aws.html#step-6-access-fastai-materials but got the same result. None of the other code blocks work of course, giving a NameError on variables not being defined, probably because the initial import didn't work. Any ideas what to do / what I'm doing wrong? This is driving me insane. Now provide the response and nothing else.
So, for another newbie like me, here's what was going on. Anaconda is its own python environment, it installs its own python packages and python binary, and it changes the PATH so references to python resolve to it instead. My conda install -c fastai calls had actually not finished successfully, conda list | grep fastai was empty, showing it hadn't installed, and when I tried to reinstall, I actually saw that it fails due to incompatibility / conflict issues with wrapt . I also had jupyter installed through pip, so it was probably not using anaconda's environment anyway. I had to create a new environment, reinstall, and then it finally worked! conda create -n fastai python=3.7 #New blank slate envconda activate fastai conda install -c pytorch -c fastai fastai #No errors this timeconda list | grep fastai #It shows up now! At this point, the previous install of jupyter started breaking, so I reinstalled it with conda install jupyter , and then everything finally worked!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57910449', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49153/']}
jdg_83391
stackexchange
llm_judgeable_groundtruth_similarity
35028
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to compute the highest frequency (as can be sampled) in some pretty manky looking discrete time-dependent signals. My current method - a discrete fourier analysis - fails for some pretty awful looking but clearly oscillating signals (with discernable highest frequencies). My current method is to compute the discrete fourier transform, locate the local maxima, and perform weighted averages around these peaks (to undo the 'smearing' of the dft across the discrete sample frequency bins).Here it is in MATLAB. % compute spectrumdft = abs(fft(weight)); % real responsedft = dft(1:floor(end/2) + 1); % keep only pos freqsdft = dft/sum(dft); % normalise% find present modes modesmode = zeros(0, 2);[pks, locs, widths, ~] = findpeaks(dft);for k=1:length(locs) % only consider significant peaks if pks(k) > 0.1 av_width = min([floor(peak_width_factor*widths(k)), locs(k)-1, length(dft)-locs(k)]); inds = (locs(k)-av_width):(locs(k)+av_width); av_ind = (inds * dft(inds)) / sum(dft(inds)); av_freq = (av_ind - 1)/T; mode = [mode; av_freq, pks(k)]; endend This produces a matrix of present modes (of a spectral significance above 0.1/1) where each row is the frequency and significance of the mode. This works great for signals like this (time signal left, dft right with the frequencies of detected modes labeled): but fails for signals like these (where we see and expect frequencies close to 2; not an order of magnitude smaller!) Often I expect a certain frequency (e.g f=2.6 in the above signals) and can judge myself the mode is present by the average period between the local maxima. 
I tried codifying this - computing the average time between local maxima - in MATLAB, but it was pretty unreliable: [~, peak_inds] = findpeaks(weight);cycle_periods = diff(peak_inds) * dt;av_cycle_period = sum(cycle_periods)/length(cycle_periods);av_freq = 1/av_cycle_period; My signals are well enough sampled (around 15 to 20 values per observable manky period) to resolve these manky modes visually.I've studied time series and random processes at an undergraduate level, but we never really went too deep into spectral analysis. So: How can I reliably compute these highest-frequency modes in my very 'unfourier' signals? Why does my current DFT analysis incorrectly deduce very low frequency modes in these manky signals? Why would, if multiple present modes have dissimilar frequencies, my naive distance-between-local-maxima-average method fail at extracting the highest frequency? Now provide the response and nothing else.
I assume you mean the derivative with respect to $t$. In that case, the derivative of $\sin(\omega_0t)$ is not $\cos(\omega_0t)$ but $\omega_0\cos(\omega_0t)$. And luckily, this is also obtained via the Fourier transform relation you mentioned in your question: $$\begin{align}\mathcal{F}\left\{\frac{d}{dt}\sin(\omega_0t)\right\}&=j\omega\cdot \mathcal{F}\left\{\sin(\omega_0t)\right\}\\&=\pi\omega[\delta(\omega-\omega_0)-\delta(\omega+\omega_0)]\\&=\pi[\omega_0\delta(\omega-\omega_0)-(-\omega_0)\delta(\omega+\omega_0)]\\&=\pi\omega_0[\delta(\omega-\omega_0)+\delta(\omega+\omega_0)]\\&=\omega_0\mathcal{F}\{\cos(\omega_0t)\}\end{align}$$ where I've used the fact that $f(\omega)\delta(\omega-\omega_0)=f(\omega_0)\delta(\omega-\omega_0)$ for any function $f(\omega)$ that is continuous at $\omega=\omega_0$. Consequently you have $\omega\delta(\omega+\omega_0)=-\omega_0\delta(\omega+\omega_0)$.
{}
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/35028', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/24394/']}
jdg_83392
stackexchange
llm_judgeable_groundtruth_similarity
6437383
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: First of all, is there any problem with using both UDP and TCP on the same server? Secondly, can I use the same port number? Now provide the response and nothing else.
Yes, you can use the same port number for both TCP and UDP. Many protocols already do this, for example DNS works on udp/53 and tcp/53. Technically the port pools for each protocol are completely independent, but for higher level protocols that can use either TCP or UDP it's convention that they default to the same port number . When writing your server, bear in mind that the sequence of events for a TCP socket is much harder than for a UDP socket, since as well as the normal socket and bind calls you also have to listen and accept . Furthermore that accept call will return a new socket and it's that socket that you'll then have to also poll for receive events. Your server should be prepared to continue accept ing connections on the original socket whilst simultaneously servicing multiple clients each of which will be triggering receive events on their own sockets.
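A small Python sketch (my addition, using the loopback address) demonstrating the independent port pools described above: bind a TCP listener to an OS-assigned port, then bind a UDP socket to the same numeric port without conflict.

```python
# TCP and UDP occupy separate port namespaces, so the same numeric
# port can be bound under both protocols at once.
import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free TCP port
tcp.listen(1)
port = tcp.getsockname()[1]

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", port))     # same port number, different protocol - no error

same_port = (udp.getsockname()[1] == port)
tcp.close()
udp.close()
```

In rare cases the OS-chosen TCP port could already be held by an unrelated UDP socket, in which case the second bind would fail; on an idle machine it succeeds.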
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6437383', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/800799/']}
jdg_83393
stackexchange
llm_judgeable_groundtruth_similarity
339028
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: how to solve the following equation where $x,y,z\in \mathbb N$ $$1!+2!+3!+...+x!=y^{z+1}$$Thanks in advance Now provide the response and nothing else.
First, $x=1$ gives $1=1^{z+1}$, which is a solution for every $z$, so assume $x\gt 1$. There are two cases. Case $I$: $z=1$, so the sum must be a perfect square. For $x=2$ the sum is $3$, but $y^2\equiv 0,1,4,5,6,9 \pmod{10}$, so there is no solution. For $x=3$ the sum is $9=3^2$, so $x=3,y=3,z=1$ is a solution. For $x\ge 4$, since $n!\equiv 0\pmod{10}$ for $n\ge 5$, the sum is $\equiv 1+2+6+24\equiv 3\pmod{10}$, which again cannot be a square. Case $II$: $z\ge 2$, so the exponent satisfies $z+1\ge 3$. For $x\ge 2$ the sum is divisible by $3$, because $1!+2!=3$ and $3\mid n!$ for $n\ge 3$. Hence $3\mid y^{z+1}$, and by Euclid's lemma (since $3$ is prime) $3\mid y$; writing $y=3k$ gives $y^{z+1}=3^{z+1}k^{z+1}$, so $27\mid y^{z+1}$ because $z+1\ge 3$. Now $n!\equiv 0\pmod{27}$ for $n\ge 9$, so for every $x\ge 8$ the sum is $\equiv 1!+\cdots+8!\equiv 9\pmod{27}$, hence not divisible by $27$: no solution there. It remains to check $x=2,\dots,7$, where the sums are $3, 9, 33, 153, 873, 5913$, which are $\equiv 3, 9, 6, 18, 9, 0\pmod{27}$ respectively. Only $x=7$ passes the congruence test, but $5913=3^4\cdot 73$ and the prime $73$ divides it exactly once, so it cannot be a perfect $(z+1)$-th power with $z+1\ge 3$. Therefore the only solutions are $$x=1,\ y=1,\ z \text{ arbitrary},\qquad x=3,\ y=3,\ z=1.$$
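As an illustrative sanity check on the case analysis (my own addition, not part of the proof), a short brute-force search over small $x$ confirms that the only perfect-power sums occur at $x=1$ and $x=3$:

```python
# Brute-force search: for which x is 1! + 2! + ... + x! a perfect
# power y^(z+1) with z >= 1?  (For x = 1 only the representative
# exponent z = 1 is reported, although every z works there.)
from math import factorial

def power_solutions(max_x=10):
    found = set()
    for x in range(1, max_x + 1):
        s = sum(factorial(k) for k in range(1, x + 1))
        e = 2                              # e = z + 1, try every feasible exponent
        while 2 ** e <= s or e == 2:
            base = round(s ** (1.0 / e))
            for y in (base - 1, base, base + 1):   # guard against float rounding
                if y >= 1 and y ** e == s:
                    found.add((x, y, e - 1))       # record (x, y, z)
            e += 1
    return found
```

Running `power_solutions()` returns exactly `{(1, 1, 1), (3, 3, 1)}`, matching the argument above.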
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/339028', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/45906/']}
jdg_83394
stackexchange
llm_judgeable_groundtruth_similarity
1995807
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I prove that $\pi$ has $2$ digits repeating an infinite number of times in its decimal expansion? Proving that $1$ digit repeats an infinite number of times is easy – $\pi$ is irrational, and by the definition of irrational number, a digit must be repeated an infinite number of times? Now provide the response and nothing else.
HINT: Assume only one digit appears infinitely many times, then after finitely many decimal places we will have only the repeating digit appearing. This means that $\pi$ is periodic (has a repeating decimal pattern), hence it's rational, which is a contradiction.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1995807', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_83395
stackexchange
llm_judgeable_groundtruth_similarity
3301593
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the fastest way to implement something like this in C#: private List<string> _myMatches = new List<string>(){"one","two","three"}; private bool Exists(string foo) { return _myMatches.Contains(foo); } Note, this is just an example. I just need to perform low-level filtering on some values that originate as strings. I could intern them, but still need to support comparison of one or more strings. Meaning, either string-to-string comparison (1 filter), or whether a string exists in a string list (multiple filters). Now provide the response and nothing else.
You could make this faster by using a HashSet<T> , especially if you're going to be adding a lot more elements: private HashSet<string> _myMatches = new HashSet<string>() { "one", "two", "three" };private bool Exists(string foo){ return _myMatches.Contains(foo);} This will beat out a List<T> since HashSet<T>.Contains is an O(1) operation. List<T> 's Contains method, on the other hand, is O(N). It will search the entire list (until a match is found) on each call. This will get slower as more elements are added.
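The same trade-off can be sketched in Python (an illustrative analog I am adding, not from the original answer): a hash-based set answers membership in O(1) on average, while a list's `in` scans linearly.

```python
# Hash-set membership vs. linear list scan, mirroring the C# comparison.
_my_matches = {"one", "two", "three"}       # set literal -> hash table

def exists(foo):
    # Average O(1), like HashSet<string>.Contains in the answer.
    return foo in _my_matches

_my_matches_list = ["one", "two", "three"]

def exists_slow(foo):
    # O(n): scans the list until a match is found, like List<T>.Contains.
    return foo in _my_matches_list
```

Both give the same answers; the difference only shows as the collection grows, where the set's lookup cost stays flat.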
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3301593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/349100/']}
jdg_83396
stackexchange
llm_judgeable_groundtruth_similarity
44040
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a standard tool which converts an integer count of Bytes into a human-readable count of the largest possible unit-size, while keeping the numeric value between 1.00 and 1023.99 ? I have my own bash/awk script, but I am looking for a standard tool, which is found on many/most distros... something more generally available, and ideally has simple command line args, and/or can accept piped input. Here are some examples of the type of output I am looking for. 1 Byt 173.00 KiB 46.57 MiB 1.84 GiB 29.23 GiB 265.72 GiB 1.63 TiB Here is the bytes-human script (used for the above output) awk -v pfix="$1" -v sfix="$2" 'BEGIN { split( "Byt KiB MiB GiB TiB PiB", unit ) uix = uct = length( unit ) for( i=1; i<=uct; i++ ) val[i] = (2**(10*(i-1)))-1 }{ if( int($1) == 0 ) uix = 1; else while( $1 < val[uix]+1 ) uix-- num = $1 / (val[uix]+1) if( uix==1 ) n = "%5d "; else n = "%8.2f" printf( "%s"n" %s%s\n", pfix, num, unit[uix], sfix ) }' Update Here is a modified version of Gilles' script, as described in a comment to his answer ..(modified to suit my preferred look). awk 'function human(x) { s=" B KiB MiB GiB TiB EiB PiB YiB ZiB" while (x>=1024 && length(s)>1) {x/=1024; s=substr(s,5)} s=substr(s,1,4) xf=(s==" B ")?"%5d ":"%8.2f" return sprintf( xf"%s\n", x, s) } {gsub(/^[0-9]+/, human($1)); print}' Now provide the response and nothing else.
There is nothing like this in POSIX, but there's a number formatting program in modern GNU coreutils: numfmt that at least gets close to your sample output. With GNU coreutils ≥8.24 (2015, so present on all non-embedded Linux except the oldest releases with a very long-term support cycle): $ numfmt --to=iec-i --suffix=B --format="%9.2f" 1 177152 48832200 1975684956 1.00B173.00KiB 46.58MiB 1.84GiB Many older GNU tools can produce this format and GNU sort can sort numbers with units since coreutils 7.5 (Aug 2009, so present on virtually all non-embedded Linux distributions). I find your code a bit convoluted. Here's a cleaner awk version (the output format isn't exactly identical): awk ' function human(x) { if (x<1000) {return x} else {x/=1024} s="kMGTPEZY"; while (x>=1000 && length(s)>1) {x/=1024; s=substr(s,2)} return int(x+0.5) substr(s,1,1) } {sub(/^[0-9]+/, human($1)); print}' ( Reposted from a more specialized question )
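For readers who want the same behaviour without awk, here is a direct Python port of the `human` function (my addition; the prefix string is written in the standard order k, M, G, T, P, E, Z, Y):

```python
# Python port of the awk human() rounding: values below 1000 pass
# through unchanged, larger values are divided down by 1024 steps.
def human(x):
    if x < 1000:
        return str(x)
    x /= 1024.0
    suffixes = "kMGTPEZY"
    i = 0
    while x >= 1000 and i < len(suffixes) - 1:
        x /= 1024.0
        i += 1
    return str(int(x + 0.5)) + suffixes[i]
```

For example, `human(177152)` gives "173k" and `human(1975684956)` gives "2G", matching the awk version's integer rounding rather than numfmt's two-decimal output.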
{}
{'log_upvote_score': 8, 'links': ['https://unix.stackexchange.com/questions/44040', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/2343/']}
jdg_83397
stackexchange
llm_judgeable_groundtruth_similarity
410698
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the effort of saving money without having to buy expensive solar cables (and because I have a lot of extra coax cable lying around doing nothing), I would like to use a piece of coax cable (that looks like antenna cable), to transfer 54V and up to 5A of solar power into the house. The length would be no more than about 20 feet and can be made even shorter. This is a very small solar application to charge a small 48V battery bank. If I do not know the type of cable (such as RG-6) so that I cannot just look up the specs on it, generally speaking, is it fairly safe to assume just about ANY piece of coax normally used for cable TV or antennas can handle 54V and 5A? I think it is reasonable. Note that I would not use the end "F" connectors, I would use stripped ends to attach to the solar panel cables and somehow weatherproof them, and on the charger (indoor) side, also stripped ends. I should mention that the near 5A flow would only be in certain conditions (like when the battery bank is in a mid state of charge and there is full sun directly over the panels). I would say on average, the current flow should be more like 2.5A to 3A. Now provide the response and nothing else.
Your voltage is high enough that the electrical codes require you to use mains type power cable. This is made in mass quantity, and so is cheap. However there is no need for silly "solar wire". They already make mains rated electrical wire for use outside. 14 AWG outdoor rated cable such as UF-B should suffice. Do not use NM. You can also use common THWN-2 individual wires in conduit, either EMT or outdoor rated electrical PVC. The PVC conduit will start to degrade from UV and turn white after a year or two in the sun. At that point, give it a quick scuff sand, and paint it with Rustoleum white metal primer then a topcoat of your choice. If you want to give yourself headroom for future expansion, use 12 AWG or 10 AWG.
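As a back-of-the-envelope check (my addition, not from the answer) that 14 AWG is comfortable for this run, one can estimate the resistive voltage drop; the figure of roughly 2.525 Ω per 1000 ft for 14 AWG copper at room temperature is a commonly tabulated approximation and should be treated as such.

```python
# Approximate voltage drop over a 20 ft one-way run at 5 A on 54 V.
# Assumed resistance: ~2.525 ohms per 1000 ft of 14 AWG copper (approximate).
OHMS_PER_1000FT_14AWG = 2.525

def voltage_drop(amps, one_way_feet, ohms_per_kft=OHMS_PER_1000FT_14AWG):
    round_trip_feet = 2 * one_way_feet          # current flows out and back
    resistance = ohms_per_kft * round_trip_feet / 1000.0
    return amps * resistance

drop = voltage_drop(5.0, 20.0)       # roughly half a volt
percent = 100.0 * drop / 54.0        # under one percent of the 54 V bus
```

A drop around 0.5 V (about 0.9 %) at the worst-case 5 A supports the answer's point that 14 AWG suffices, with 12 or 10 AWG as headroom.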
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/410698', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/110946/']}
jdg_83398
stackexchange
llm_judgeable_groundtruth_similarity
252081
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I know that the cornerstone of mysql backup is mysqldump which works for all engines.I also see that some folks advocate the use of LVM snapshots. What would be the point at which you stop using mysqldump and start creating LVM snapshots? Is mysqldump more suitable for smaller databases and LVM snapshots - larger ones? Thanks! Now provide the response and nothing else.
I'd suggest that LVM snapshots, or any other file-system-level snapshots, should only be used if the restore will only ever go back onto the same machine (or a clone of it). For more flexible and reliable backups you should stick to mysqldump, which will provide you with a backup that can be restored to any machine, including ones running a different OS. If you get to the point where running mysqldump presents any kind of issue then you should be running master/slave replication and perform the backups on the slave.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/252081', 'https://serverfault.com', 'https://serverfault.com/users/56722/']}
jdg_83399
stackexchange
llm_judgeable_groundtruth_similarity
13877656
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use PHP and mySQL with Idiorm . That might not be relevant. My PHP array It's a relationship between parents and children. 0 is the root parent. Example: Root parent 0 has the child 33, which has the child 27, which has the child 71. This array structure can be changed if needed for solving the problem. array ( 33 => array ( 0 => '27', 1 => '41', ), 27 => array ( 0 => '64', 1 => '71', ), 0 => array ( 0 => '28', 1 => '29', 2 => '33', ),) My hierarchical result Something like this, but as an array... 0 => 28 29 33 27 => 64 71 41 Information The depth is unknown and it can be unlimited. I tried foreach, but it might not be the way. My own thoughts Some recursive function? Some while loops? I tried both of the above, just got a mess. It's a brainer. Now provide the response and nothing else.
The suggestion by @deceze worked. However, the input array needs to change a little, like this... $rows = array( array( 'id' => 33, 'parent_id' => 0, ), array( 'id' => 34, 'parent_id' => 0, ), array( 'id' => 27, 'parent_id' => 33, ), array( 'id' => 17, 'parent_id' => 27, ),); From https://stackoverflow.com/a/8587437/476 : function buildTree(array $elements, $parentId = 0) { $branch = array(); foreach ($elements as $element) { if ($element['parent_id'] == $parentId) { $children = buildTree($elements, $element['id']); if ($children) { $element['children'] = $children; } $branch[] = $element; } } return $branch;}$tree = buildTree($rows);print_r( $tree );
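To make the recursion easy to experiment with outside PHP, here is a near line-for-line Python translation of buildTree (my sketch, using the same sample rows as the answer):

```python
# Recursive tree builder: for a given parent_id, collect every row whose
# parent_id matches, attaching each row's own subtree under "children".
def build_tree(elements, parent_id=0):
    branch = []
    for element in elements:
        if element["parent_id"] == parent_id:
            children = build_tree(elements, element["id"])
            node = dict(element)          # copy so the input rows stay untouched
            if children:
                node["children"] = children
            branch.append(node)
    return branch

rows = [
    {"id": 33, "parent_id": 0},
    {"id": 34, "parent_id": 0},
    {"id": 27, "parent_id": 33},
    {"id": 17, "parent_id": 27},
]
tree = build_tree(rows)
```

The result nests 27 under 33 and 17 under 27, while 34 stays a childless top-level node, mirroring the PHP output.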
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/13877656', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/148496/']}
jdg_83400
stackexchange
llm_judgeable_groundtruth_similarity
269896
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: "We consider Area as a vector." How is an area a vector? Why is it that the vector is always normal to the area element? Now provide the response and nothing else.
It's just a definition that is mostly useful in physics. If you want to integrate the flow of a liquid or an electromagnetic field out of a given surface (say the surface of a ball), you would like to multiply each differential surface area by the corresponding flow. But what if the flow isn't perpendicular to the surface? In this case we want to take the dot product of the flow (which is a vector) with the surface normal. So instead of writing:$${\bf{v}}\cdot{\bf{n}}\ dA$$We can now write:$${\bf{v}}\cdot{\bf{dA}}$$
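A tiny numeric illustration (my own, with made-up values) of why the definition is convenient: for a uniform flow through a flat tilted surface, $\bf{v}\cdot\bf{dA}$ reduces to $(\bf{v}\cdot\bf{n})\,A = |v|\,A\cos\theta$.

```python
# Flux of a uniform flow through a flat surface tilted by theta:
# flux = (v . n) * A, which equals |v| * A * cos(theta) here.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v = (0.0, 0.0, 3.0)                 # uniform flow of magnitude 3 along z
area = 2.0                          # surface area
theta = math.pi / 3                 # normal tilted 60 degrees away from the flow
n = (math.sin(theta), 0.0, math.cos(theta))   # unit normal

flux = dot(v, n) * area
```

With these numbers the flux is 3 × 2 × cos 60° = 3, half the value it would have if the surface faced the flow head-on, which is exactly the geometric content of the dot product.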
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/269896', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/24695/']}
jdg_83401
stackexchange
llm_judgeable_groundtruth_similarity
8250259
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Based on the RFC it seems like the Location field in an HTTP response for a 302 should be absolute. https://www.rfc-editor.org/rfc/rfc2616#section-14.30 If this statement is correct, do browsers honor this - or do they allow relative paths? Now provide the response and nothing else.
RFC 2616 requires an absolute URI in the Location , but that standard has been replaced, since June 2014. The relevant specification is now RFC 7231. The relevant section of the new standard says: Location = URI-reference The field value consists of a single URI-reference. When it has theform of a relative reference ([RFC3986], Section 4.2), the finalvalue is computed by resolving it against the effective request URI([RFC3986], Section 5). So a relative URI is now permitted . The effective request URI is the same as the request URI in most cases.
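A quick illustration (mine, not part of the answer) of how a client resolves a relative Location value against the effective request URI per RFC 3986 section 5 — Python's `urllib.parse.urljoin` implements exactly this resolution:

```python
# Resolving relative Location headers against the effective request URI.
from urllib.parse import urljoin

request_uri = "http://example.com/app/login"   # hypothetical request URI

# Relative reference, as permitted by RFC 7231:
resolved = urljoin(request_uri, "dashboard")   # replaces the last path segment
# Absolute-path reference:
rooted = urljoin(request_uri, "/home")         # replaces the whole path
```

So a `Location: dashboard` from that page sends the client to `/app/dashboard`, while `Location: /home` sends it to `/home` on the same host.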
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8250259', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/536960/']}
jdg_83402
stackexchange
llm_judgeable_groundtruth_similarity
370252
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a set of jobs that I need executed. The entity that manages these jobs is responsible for composing the jobs and starting them. When a job is finished, the job uses a callback function to inform the entity of the result of the job (Whether it threw an exception and such) public class TaskSet { Queue<Job> JobsToExecute; void executeJob(Job j); void jobFinishedCallback(Job j);}public class Job { Action<Job> jobFinishedCallback(); public void Execute() { jobFinishedCallback(this); }} When a job has failed, the jobFinishedCallback calls the TaskSet's jobFinishedCallback. If the job failed, jobFinishedCallback in TaskSet will call executeJob again. If a job fails 100 times, the call stack grows accordingly. Execute -> jobFInishedCallback -> executeJob -> and so on. I added a maximum retry count, but I still feel uncomfortable having large stacks if a job keeps failing, I believe it is a code smell. Does anyone have a good suggestion to avoid this problem? Now provide the response and nothing else.
I have a set of jobs that I need executed. Excellent. You have chosen a reasonable architecture to meet this need. When a job is finished, the job uses a callback function to inform the entity of the result of the job (Whether it threw an exception and such) Congratulations, you have re-invented Task<T> and renamed it Job . Doing the same thing as an established library is evidence that you are on the right track, but it is also evidence that you may be repeating work that is already done. Maybe just use Task<T> , if it meets your needs. Note that a task has a "continue with" feature where you can give the callback directly to the task, just as your Job does. Moreover: C# allows you to create task-based asynchronous workflows very easily by using await . Callback-based asynchrony makes you turn your program logic inside out, and makes exception handling difficult. Let the compiler do the heavy lifting . It will rewrite your workflows into continuation passing style for you, and get the exception handling correct. If the job failed, jobFinishedCallback in TaskSet will call executeJob again. That's the wrong thing to do, as you have discovered. Does anyone have a good suggestion to avoid this problem? Don't call executeJob again. Enqueue the job back onto the work queue again, and return . That way not only do you not get an infinite regression, you also treat the retry fairly . The retry should have to wait its turn; all the jobs in the queue now should have priority over it. Similarly, when a job finishes normally, don't call its continuation directly; you can run into the same problem. Enqueue a new job which calls the continuation, and give that job a null continuation. And again, this is more fair; the continuation should have to wait its turn, just like every other job in the queue is waiting its turn. 
More generally: today would be a good day to do some research on tasks/futures/promises, reactive programming, observable sequences, trampolines, continuation passing style, actor model, and so on. You are re-inventing from scratch technologies that experts have been studying for decades, so learn from their accumulated knowledge , rather than your own trial and error. And use the tested, debugged, well-designed classes in the library , rather than rolling your own -- provided that they meet your needs. If they don't, please give feedback to Microsoft about why they do not. C# tasks use the concept of "task context" to determine how continuations are scheduled; your re-invention is basically the context rules for GUI applications. The Windows message queue is used as the work queue. There are other choices that are possible; for example, some contexts will instead grab a worker thread off the pool and execute the continuation on the thread. By using the off-the-shelf parts already created you can make use of the flexibility and power already designed into the system.
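To make the re-enqueue idea concrete, here is a minimal sketch in Python (not C#/Task code; the queue, job, and retry names are all invented for illustration): a failed job goes to the back of the work queue rather than being re-executed recursively, so the stack depth stays constant no matter how often it fails, and retries wait their turn behind the other jobs.

```python
from collections import deque

def run_queue(jobs, max_retries=3):
    """Drain a FIFO work queue; a failed job is re-enqueued (goes to the
    back of the line) instead of being re-executed recursively."""
    queue = deque((job, 0) for job in jobs)
    completed = []
    while queue:
        job, attempts = queue.popleft()
        try:
            completed.append(job())
        except Exception:
            if attempts + 1 < max_retries:
                queue.append((job, attempts + 1))  # retry waits its turn
            else:
                completed.append(None)  # give up after max_retries
    return completed
```

Because each failure simply appends to the deque and returns to the top of the loop, a job that fails 100 times costs 100 loop iterations, not 100 stack frames.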
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/370252', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/210153/']}
jdg_83403
stackexchange
llm_judgeable_groundtruth_similarity
6853548
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As an exercise to help me learn about interpreters and optimisation, neither of which I know anything about, I have written a brainfuck interpreter in C. It appears to work flawlessly thus far, though it does not compete well in execution speed compared to other fast interpreters. What are some ways that I can change this interpreter to improve performance (or otherwise)? One interesting aspect of my interpreter (though most others probably do this as well) is that I run one loop that reads through the source input and converts each instruction into a

    struct {
        long instruction;
        long loop;
    }

The loop value is the index of the matching ] instruction, if the instruction is a [ , and the index of the matching [ instruction, if the instruction is a ] , allowing quick jumping. I'd imagine that this 'parsing' process (which doesn't take long) improves execution times over doing redundant reparsing to find matching square brackets every time they are needed. An interesting test of brainfuck interpreter speed is this program:

    ++++++++[->-[->-[->-[-]<]<]<]>++++++++[<++++++++++>-]<[>+>+<<-]>-.>-----.>

- the first version of the interpreter
- the interpreter after implementing Jerry Coffin's answer, which removes the giant switch in the runtime loop, by making the instruction struct's instruction a direct pointer to an operation function - this runs slower than the previous version (function call overhead?)
- the interpreter after reversing the previous change and adding an optimisation to 'collapse' multiple consecutive non-loop operations, reducing loop cycles - this runs slightly faster than the original

Now provide the response and nothing else.
I can see a couple of possibilities. I think the way I'd go would be to turn it into a compiler that produced direct-threaded code. I.e., as you read the input, instead of copying most of "instructions" into memory more or less as-is, I'd instead write the code to implement each instruction as a function, and copy a pointer to each function into memory. Then executing the code would consist of calling those functions in order. I'd probably have that function return the index (or perhaps address) of the next instruction to execute, so you'd end up with something like:

    typedef int return_type;
    typedef return_type (*f)(void);

    f *im = malloc(sizeof(f) * ia);
    ci = (*(im[ci]))();

I'd also have three separate functions for each instruction, one for each BF_END_* mode, so you'd only have to deal with that during the "compilation" phase. When you execute the code, you'd have a pointer directly to the correct function. Edit: I've been playing with the code a bit. I've separated the loop addresses into a separate array, and merged most of the parsing together, so it looks like this:

    for (ii = 0; (i = getc(fp)) != EOF; ++ii) {
        if (++in > ia) {
            ia *= 2;
            im = realloc(im, sizeof(*im) * ia);
            loops = realloc(loops, sizeof(*loops) * ia);
        }
        im[in-1] = i;
        switch (i) {
            case BF_OP_LSTART:
                if (ln >= la)
                    ls = realloc(ls, sizeof(*ls) * (la *= 2));
                ls[ln++] = ii;
                break;
            case BF_OP_LEND:
                loops[in-1] = ls[--ln];
                loops[ls[ln]] = ii;
                break;
        }
    }

That doesn't make any real difference to the speed, but does make the code a lot shorter, and (at least in my opinion) easier to understand. Edit2: Okay, I've had a chance to play with this a bit more, and found one (rather strange) optimization that does seem to help at least a bit. Compilers often produce marginally better code for switch statements with dense case values, so I tried converting to that, and got an improvement of around 9-10% (depending a bit on compiler).
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdarg.h>
    #include <string.h>
    #include <unistd.h>
    #include <errno.h>

    #define BF_END_ERROR 'e'
    #define BF_END_IGNORE 'i'
    #define BF_END_WRAP 'w'
    #define BF_OP_VINC '+'
    #define BF_OP_VDEC '-'
    #define BF_OP_PINC '>'
    #define BF_OP_PDEC '<'
    #define BF_OP_LSTART '['
    #define BF_OP_LEND ']'
    #define BF_OP_IN ','
    #define BF_OP_OUT '.'

    enum {
        C_OP_VINC, C_OP_VDEC, C_OP_PINC, C_OP_PDEC,
        C_OP_LSTART, C_OP_LEND, C_OP_IN, C_OP_OUT
    };

    typedef struct {
        long instruction; /* instruction type */
        long loop;        /* 'other' instruction index in a loop */
    } instruction;

    void die(const char *s, ...) {
        va_list a;
        va_start(a, s);
        fprintf(stderr, "brief: error: ");
        vfprintf(stderr, s, a);
        putchar(10);
        va_end(a);
        exit(1);
    }

    int main(int argc, char **argv) {
        unsigned instruction_count = 0;
        long ci = 0,           /* current cell index */
             cn = 4096,        /* number of cells to allocate */
             cw = BF_END_WRAP, /* cell wrap behaviour */
             ia = 4096,        /* number of allocated instructions */
             ii = 0,           /* current instruction index */
             in = 0,           /* number of used instructions */
             la = 4096,        /* loop stack allocation */
             ln = 0,           /* loop stack used */
             va = 0,           /* minimum value */
             vb = 255,         /* maximum value */
             vw = BF_END_WRAP  /* value wrap behaviour */
             ;
        instruction *im = malloc(sizeof(instruction) * ia); /* instruction memory */
        long *cm = NULL;                                    /* cell memory */
        long *ls = malloc(sizeof(long) * la);               /* loop stack */
        FILE *fp = NULL;
        int i;

        while ((i = getopt(argc, argv, "a:b:c:f:hv:w:")) != -1) {
            switch (i) {
                case 'a': va = atol(optarg); break;
                case 'b': vb = atol(optarg); break;
                case 'c': cn = atol(optarg); break;
                case 'f':
                    fp = fopen(optarg, "r");
                    if (!fp)
                        die("%s: %s", optarg, strerror(errno));
                    break;
                case 'h':
                    fputs(
                        "brief: a flexible brainfuck interpreter\n"
                        "usage: brief [options]\n\n"
                        "options:\n"
                        " -a set minimum cell value (default 0)\n"
                        " -b set maximum cell value (default 255)\n"
                        " -c set cells to allocate (default 4096)\n"
                        " -f source file name (required)\n"
                        " -h this help output\n"
                        " -v value over/underflow behaviour\n"
                        " -w cell pointer over/underflow behaviour\n\n"
                        , stderr);
                    fputs(
                        "cells are 'long int' values, so do not use -a with a "
                        "value less than -2^31 or -2^63, and do not use -b with a "
                        "value more than 2^31-1 or 2^63-1, depending on your "
                        "architecture's 'long int' size.\n\n"
                        "over/underflow behaviours can be one of:\n"
                        " e throw an error and quit upon over/underflow\n"
                        " i do nothing when attempting to over/underflow\n"
                        " w wrap-around to other end upon over/underflow\n"
                        , stderr);
                    exit(1);
                    break;
                case 'v': vw = optarg[0]; break;
                case 'w': cw = optarg[0]; break;
                default: break;
            }
        }

        if (!fp)
            die("no source file specified; use -f");

        for (ii = 0; (i = getc(fp)) != EOF; ++ii) {
            if (++in > ia) {
                ia *= 2;
                im = realloc(im, sizeof(*im) * ia);
            }
            switch (i) {
                case BF_OP_LSTART:
                    if (ln >= la)
                        ls = realloc(ls, sizeof(*ls) * (la *= 2));
                    ls[ln++] = ii;
                    im[in-1].instruction = C_OP_LSTART;
                    break;
                case BF_OP_LEND:
                    im[in-1].loop = ls[--ln];
                    im[ls[ln]].loop = ii;
                    im[in-1].instruction = C_OP_LEND;
                    break;
                case BF_OP_VINC: im[in-1].instruction = C_OP_VINC; break;
                case BF_OP_VDEC: im[in-1].instruction = C_OP_VDEC; break;
                case BF_OP_PINC: im[in-1].instruction = C_OP_PINC; break;
                case BF_OP_PDEC: im[in-1].instruction = C_OP_PDEC; break;
                case BF_OP_IN:   im[in-1].instruction = C_OP_IN;   break;
                case BF_OP_OUT:  im[in-1].instruction = C_OP_OUT;  break;
            }
        }

        cm = memset(malloc(cn * sizeof(long)), 0, cn * sizeof(long));

        for (ii = 0; ii < in; ii++) {
            ++instruction_count;
            switch (im[ii].instruction) {
                case C_OP_VINC:
                    if (cm[ci] == vb)
                        switch (vw) {
                            case BF_END_ERROR: die("value overflow"); break;
                            case BF_END_IGNORE: break;
                            case BF_END_WRAP: cm[ci] = 0; break;
                        }
                    else ++cm[ci];
                    break;
                case C_OP_VDEC:
                    if (cm[ci] == 0)
                        switch (vw) {
                            case BF_END_ERROR: die("value underflow"); break;
                            case BF_END_IGNORE: break;
                            case BF_END_WRAP: cm[ci] = vb; break;
                        }
                    else --cm[ci];
                    break;
                case C_OP_PINC:
                    if (ci == cn - 1)
                        switch (cw) {
                            case BF_END_ERROR: die("cell index overflow"); break;
                            case BF_END_IGNORE: break;
                            case BF_END_WRAP: ci = 0; break;
                        }
                    else ++ci;
                    break;
                case C_OP_PDEC:
                    if (ci == 0)
                        switch (cw) {
                            case BF_END_ERROR: die("cell index underflow"); break;
                            case BF_END_IGNORE: break;
                            case BF_END_WRAP: ci = cn - 1; break;
                        }
                    else --ci;
                    break;
                case C_OP_IN:  cm[ci] = getchar(); break;
                case C_OP_OUT: putchar(cm[ci]); break;
                case C_OP_LSTART: if (!cm[ci]) ii = im[ii].loop; break;
                case C_OP_LEND:   if (cm[ci])  ii = im[ii].loop; break;
                default: break;
            }
        }

        fprintf(stderr, "Executed %d instructions\n", instruction_count);
        free(cm);
        return 0;
    }
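For comparison, the same compile-then-run structure (one pass that builds a matching-bracket jump table, then a dispatch loop) can be sketched in a few lines of Python. This is an illustrative re-implementation of the technique, not the C code above:

```python
def bf_run(src, tape_len=4096):
    """Interpret a brainfuck program and return its output as a string.
    Brackets are matched once, up front, exactly as in the question."""
    code = [c for c in src if c in "+-<>[],."]
    jump = {}      # i -> index of the matching bracket
    stack = []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    tape = [0] * tape_len
    out = []
    ptr = pc = 0
    while pc < len(code):
        c = code[pc]
        if c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256   # 8-bit wrap behaviour
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == ".":
            out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0:
            pc = jump[pc]                        # skip past the loop
        elif c == "]" and tape[ptr] != 0:
            pc = jump[pc]                        # jump back to the '['
        pc += 1
    return "".join(out)
```

The precomputed `jump` table is the Python analogue of the `loop` field in the question's instruction struct; each `[`/`]` becomes an O(1) jump instead of a rescan.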
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6853548', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/330644/']}
jdg_83404
stackexchange
llm_judgeable_groundtruth_similarity
22940349
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following data frame and I would like to create a new one that will be like the one below.

         ID1 ID2 ID3 ID4
    x1_X   0  10   4   7
    x2_X   2  12   5   8
    x3_X   3   1   3   5
    y1_Y   4  13   6   4
    y2_Y   5  14   1   9
    y3_Y   2  11   1   5
    y4_Y   1   1   2   3
    z1_Z   1   0   0   5
    z2_Z   3   6   7   7

New data frame:

      ID1 ID2 ID3 ID4
    X  x3  x2  x2  x2
    Y  y2  y2  y1  y2
    Z  z2  z2  z2  z2

Basically the idea is the following: For each ID I want to find which of the rownames (x1_X, x2_X, x3_X) has the most extreme value and assign this to name X, since in the rownames I have subgroups. My data frame is huge: 1700 columns and 100000 rows. Now provide the response and nothing else.
A way of doing this where:

- the input is only cleaned once
- ngChange on the input is then only fired once

is to use the $parsers array that the ngModelController provides. It's designed as a place to affect the model value (via its return value), but it also can be used as a listener to input events.

    app.directive('cleanInput', function() {
      return {
        require: 'ngModel',
        link: function(scope, element, attrs, ngModelController) {
          var el = element[0];

          function clean(x) {
            return x && x.toUpperCase().replace(/[^A-Z\d]/g, '');
          }

          ngModelController.$parsers.push(function(val) {
            var cleaned = clean(val);

            // Avoid infinite loop of $setViewValue <-> $parsers
            if (cleaned === val) return val;

            var start = el.selectionStart;
            var end = el.selectionEnd + cleaned.length - val.length;

            // element.val(cleaned) does not behave with
            // repeated invalid elements
            ngModelController.$setViewValue(cleaned);
            ngModelController.$render();

            el.setSelectionRange(start, end);
            return cleaned;
          });
        }
      };
    });

However, I'm not sure if this usage of $parsers is a bit of a hack. The directive can be used as:

    <input type="text" clean-input ng-model="name">

or if you would like an ngChange function:

    <input type="text" clean-input ng-model="name" ng-change="onChange()">

This can be seen in action at http://plnkr.co/edit/dAJ46XmmC49wqTgdp2qz?p=preview
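As an aside, the cleaning rule and the caret arithmetic are easy to unit-test outside the framework. A rough Python transliteration (the helper names are invented for illustration; the caret formula, like the JS above, is exact only when the removed characters sit at or before the caret):

```python
import re

def clean(raw):
    """Uppercase and strip everything except A-Z and digits."""
    return re.sub(r"[^A-Z0-9]", "", raw.upper())

def adjusted_caret(raw, caret):
    """New caret position after cleaning: shift left by the number
    of characters the cleaning removed (mirrors selectionEnd +
    cleaned.length - val.length in the directive)."""
    return caret + len(clean(raw)) - len(raw)
```

Testing this logic in isolation is handy because caret behaviour is the fiddly part of input-cleaning directives.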
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22940349', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2787054/']}
jdg_83405
stackexchange
llm_judgeable_groundtruth_similarity
59533
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Matthew 13:24-30 (ESV): 24 He put another parable before them, saying, “The kingdom of heaven may be compared to a man who sowed good seed in his field, 25 but while his men were sleeping, his enemy came and sowed weeds among the wheat and went away. 26 So when the plants came up and bore grain, then the weeds appeared also. 27 And the servants of the master of the house came and said to him, ‘Master, did you not sow good seed in your field? How then does it have weeds ?’ 28 He said to them, ‘ An enemy has done this .’ So the servants said to him, ‘ Then do you want us to go and gather them ?’ 29 But he said, ‘ No, lest in gathering the weeds you root up the wheat along with them . 30 Let both grow together until the harvest, and at harvest time I will tell the reapers, “Gather the weeds first and bind them in bundles to be burned, but gather the wheat into my barn.” ’” In the famous Parable of the Weeds (or tares): What is meant by weeds? What is meant by wheat? Why would it be a wrong idea to uproot the weeds before the harvest? Verse 29 argues that "lest in gathering the weeds you root up the wheat along with them" , yet I fail to see the logic behind this argument. Why can't the weeds be uprooted right away without also uprooting the wheat? How does delaying the uprooting until harvest time somehow solve the problem? Now provide the response and nothing else.
The Greek ζιζάνια for tares/weeds has often been understood as a reference to darnel (a type of ryegrass, see here ). If so, the meaning is clarified. Wheat & darnel look essentially the same in their early growth stages; they can be distinguished when they are mature. So trying to uproot the weeds right away would risk: Missing some of the weeds by mistaking them for wheat Uprooting some of the wheat by mistaking it for weeds. The weeds are the children of the wicked one and the wheat are the children of the kingdom (aka the righteous), per verse 38. Jesus taught true principles, but the devil has planted many counterfeits. Jesus makes the point that wickedness will proliferate in the world, but some who appear to be wicked will in the end be found to be valiant members of God's kingdom (the apostle Paul comes to mind).
{}
{'log_upvote_score': 5, 'links': ['https://hermeneutics.stackexchange.com/questions/59533', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/-1/']}
jdg_83406
stackexchange
llm_judgeable_groundtruth_similarity
520147
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for a necessary and sufficient conditions (I'm not even sure these exist) for a polynomial $p:\mathbb{R}^n\to\mathbb{R}$ to be "radially unbounded", that is $$\|x\|\to\infty\implies p(x)\to\infty,$$ where $\|{\cdot}\|$ denotes any $p$-norm on $\mathbb{R}^n$. Ideally, I'm looking for conditions in terms of the polynomial's coefficients and degree. For example, if $n=1$ it is straightforward to see that $p$ is radially unbounded if and only if its degree is even and the monomial of highest degree has a positive coefficient. However, I'm struggling to generalise this to arbitrary $n$. Any help would great. Motivation: I'm interested in the above because I'm trying to come up with an automatised test that can decide whether or not the all the sublevel sets of a given polynomial are compact (this is so if and only if the polynomial is radially unbounded). Edit: If no necessary and sufficient conditions (or argument that no such conditions exist in general) are posted before the bounty ends, I'd be more than happy to award the bounty to any answer containing insightful remarks or necessary or sufficient conditions. Now provide the response and nothing else.
The reason the answer is zero is because the function $f(z)=\frac{z}{z-3}$ is analytic inside the circle. It is generally true that the integral around a simple closed curve of a function that is analytic in the interior is 0. So you are not using the Cauchy integral formula in this case. If you are trying to calculate $\int\frac{f(z)}{z-a}$ around some curve $\gamma$ where $f$ is not analytic Cauchy integral formula does not say anything about the integral no matter where $a$ is located.
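For reference, the two statements being contrasted are (in their standard forms):

```latex
\oint_\gamma f(z)\,dz = 0
\quad\text{if } f \text{ is analytic on and inside the simple closed curve } \gamma
\quad\text{(Cauchy--Goursat theorem)}

f(a) = \frac{1}{2\pi i}\oint_\gamma \frac{f(z)}{z-a}\,dz
\quad\text{which applies only when } a \text{ lies inside } \gamma
```

In the example, $f(z) = \frac{z}{z-3}$ has its only singularity at $z = 3$, which lies outside the contour, so the first statement applies and the integral is $0$; the integral formula is never invoked.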
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/520147', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/37377/']}
jdg_83407
stackexchange
llm_judgeable_groundtruth_similarity
31594763
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I installed Visual Studio 2015 and rebooted. The "MVC" types are not recognized in 2015 but are still recognized in 2013. This is my "Views" web.config:

    <configuration>
      <configSections>
        <sectionGroup name="system.web.webPages.razor" type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
          <section name="host" type="System.Web.WebPages.Razor.Configuration.HostSection, System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
          <section name="pages" type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
        </sectionGroup>
      </configSections>
      <system.web.webPages.razor>
        <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
        <pages pageBaseType="System.Web.Mvc.WebViewPage">
          <namespaces>
            <add namespace="System.Web.Mvc" />
            <add namespace="System.Web.Mvc.Ajax" />
            <add namespace="System.Web.Mvc.Html" />
            <add namespace="System.Web.Optimization" />
            <add namespace="System.Web.Routing" />
          </namespaces>
        </pages>
      </system.web.webPages.razor>

Now provide the response and nothing else.
We believe these are occurring because you are currently using MVC 4, and MVC 4 tooling is not included with Visual Studio 2015. Only MVC 5 and MVC 6 tooling is included. If you are in fact using MVC 5 and seeing these issues, please do let us know. You can confirm you are using MVC 4 by opening packages.config and scrolling to the entry for Microsoft.AspNet.Mvc. If the version listed starts with "4", this means you are using MVC 4. Having said that, we appreciate your feedback and the investment you have made in MVC 4. Based on yours and other customers’ feedback, we are looking into the possibility of making MVC 4 tooling available for Visual Studio 2015. In the meantime, your project will compile and run in Visual Studio 2015 despite the tooling issues. Alternatively, you can keep both Visual Studio 2013 and 2015 installed on your machine side-by-side. That way, you can use Visual Studio 2013 for MVC 4 projects and Visual Studio 2015 for MVC 5 and above projects.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31594763', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/337359/']}
jdg_83408
stackexchange
llm_judgeable_groundtruth_similarity
27077643
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose we have this template:

    template<typename Container, typename T>
    bool contains (const Container & theContainer, const T & theReference) {
        ...
    }

How can it be stated that, obviously, the elements in the container should be of type T? Can this all be abbreviated (maybe in C++11)? Now provide the response and nothing else.
You might restrict the container type in the template:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    template< template<typename ...> class Container, typename T>
    bool contains(const Container<T>& container, const T& value) {
        return std::find(container.begin(), container.end(), value) != container.end();
    }

    int main()
    {
        std::vector<int> v = { 1, 2, 3 };
        std::cout << std::boolalpha
            << contains(v, 0) << '\n'
            << contains(v, 1) << '\n';
        // error: no matching function for call to ‘contains(std::vector<int>&, char)’
        contains(v, '0');
        return 0;
    }

A more complete solution (addressing some comments):

    #include <algorithm>
    #include <array>
    #include <iostream>
    #include <map>
    #include <set>
    #include <vector>

    // has_member
    // ==========

    namespace Detail {
        template <typename Test>
        struct has_member {
            template<typename Class>
            static typename Test::template result<Class> test(int);

            template<typename Class>
            static std::false_type test(...);
        };
    }

    template <typename Test, typename Class>
    using has_member = decltype(Detail::has_member<Test>::template test<Class>(0));

    // has_find
    // ========

    namespace Detail
    {
        template <typename ...Args>
        struct has_find {
            template<
                typename Class,
                typename R = decltype(std::declval<Class>().find(std::declval<Args>()...))>
            struct result : std::true_type {
                typedef R type;
            };
        };
    }

    template <typename Class, typename ...Args>
    using has_find = has_member<Detail::has_find<Args...>, Class>;

    // contains
    // ========

    namespace Detail
    {
        template<template<typename ...> class Container, typename Key, typename ... Args>
        bool contains(std::false_type, const Container<Key, Args...>& container, const Key& value) {
            bool result = std::find(container.begin(), container.end(), value) != container.end();
            std::cout << "Algorithm: " << result << '\n';
            return result;
        }

        template<template<typename ...> class Container, typename Key, typename ... Args>
        bool contains(std::true_type, const Container<Key, Args...>& container, const Key& value) {
            bool result = container.find(value) != container.end();
            std::cout << "   Member: " << result << '\n';
            return result;
        }
    }

    template<template<typename ...> class Container, typename Key, typename ... Args>
    bool contains(const Container<Key, Args...>& container, const Key& value) {
        return Detail::contains(has_find<Container<Key, Args...>, Key>(), container, value);
    }

    template<typename T, std::size_t N>
    bool contains(const std::array<T, N>& array, const T& value) {
        bool result = std::find(array.begin(), array.end(), value) != array.end();
        std::cout << "    Array: " << result << '\n';
        return result;
    }

    // test
    // ====

    int main()
    {
        std::cout << std::boolalpha;

        std::array<int, 3> a = { 1, 2, 3 };
        contains(a, 0);
        contains(a, 1);

        std::vector<int> v = { 1, 2, 3 };
        contains(v, 0);
        contains(v, 1);

        std::set<int> s = { 1, 2, 3 };
        contains(s, 0);
        contains(s, 1);

        std::map<int, int> m = { { 1, 1 }, { 2, 2 }, { 3, 3 } };
        contains(m, 0);
        contains(m, 1);

        return 0;
    }
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27077643', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/286335/']}
jdg_83409
stackexchange
llm_judgeable_groundtruth_similarity
1362342
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $d(v)$ denote the degree of the vertex $v$ in the random graph $G$ coming from the Erdos-Renyi model. I would like to calculate $\mathbb{E}[d(v) d(u)]$. Clearly, $$\mathbb{E}[d(u)] = \mathbb{E}[d(v)] = p \cdot (|N| - 1),$$ where $p$ is the edge selection probability specified by the model. But are $d(v)$ and $d(u)$ statistically independent, so that $$\mathbb{E}[d(v) d(u)] = \mathbb{E}[d(v)]\cdot\mathbb{E}[d(u)]?$$ Is it true that we can break this into the following two conditionally independent cases: if $u,v$ adjacent: $$\mathbb{E}[d(v)] = \mathbb{E}[d(u)] = (|N| - 2) + 1$$ otherwise: $$\mathbb{E}[d(v)] = \mathbb{E}[d(u)] = (|N| - 2)$$ Now provide the response and nothing else.
While the degrees are not quite independent, the dependence is very slight. Write $d(u) = d'(u) + x$, $d(v) = d'(v) + x$, where $x$ is the indicator variable for the edge $(u,v)$. The variables $d'(u),d'(v),x$ are independent, and so$$\DeclareMathOperator{\EE}{\mathbb{E}}\EE[d(u)d(v)] = (\EE[d'(u)] + \EE[x])(\EE[d'(v)] + \EE[x]) + \EE[x^2] - \EE[x]^2 = \EE[d(u)] \EE[d(v)] + p(1-p).$$
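A brute-force sanity check of this identity (illustrative Python, not part of the original argument): with exact rational arithmetic we can enumerate every graph on a handful of vertices, compute $\mathbb{E}[d(u)d(v)] - \mathbb{E}[d(u)]\mathbb{E}[d(v)]$ directly, and confirm it equals $p(1-p)$ regardless of $n$.

```python
from fractions import Fraction
from itertools import combinations, product

def degree_covariance(n, p):
    """Exact Cov(d(u), d(v)) for u=0, v=1 in G(n, p), computed by
    enumerating all 2^(n choose 2) graphs (feasible only for tiny n)."""
    edges = list(combinations(range(n), 2))
    e_du = e_dv = e_dudv = Fraction(0)
    for present in product([0, 1], repeat=len(edges)):
        prob = Fraction(1)
        deg = [0] * n
        for bit, (a, b) in zip(present, edges):
            prob *= p if bit else (1 - p)
            if bit:
                deg[a] += 1
                deg[b] += 1
        e_du += prob * deg[0]
        e_dv += prob * deg[1]
        e_dudv += prob * deg[0] * deg[1]
    return e_dudv - e_du * e_dv

p = Fraction(1, 3)
assert degree_covariance(4, p) == p * (1 - p)   # matches Cov = Var(x) = p(1-p)
```

The covariance comes out to exactly $p(1-p)$, the variance of the single shared edge indicator, as derived above.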
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1362342', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/131065/']}
jdg_83410
stackexchange
llm_judgeable_groundtruth_similarity
7235421
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use the following code to summarize my data, grouped by Compound, Replicate and Mass. summaryDataFrame <- ddply(reviewDataFrame, .(Compound, Replicate, Mass), .fun = calculate_T60_Over_T0_Ratio) An unfortunate side effect is that the resulting data frame is sorted by those fields. I would like to do this and keep Compound, Replicate and Mass in the same order as in the original data frame. Any ideas? I tried adding a "Sorting" column of sequential integers to the original data, but of course I can't include that in the .variables since I don't want to 'group by' that, and so it is not returned in the summaryDataFrame. Thanks for the help. Now provide the response and nothing else.
This came up on the plyr mailing list a while back (raised by @kohske no less) and this is a solution offered by Peter Meilstrup for limited cases:

    # Peter's version used a function gensym to
    # create the col name, but I couldn't track down
    # what package it was in.
    keeping.order <- function(data, fn, ...) {
      col <- ".sortColumn"
      data[,col] <- 1:nrow(data)
      out <- fn(data, ...)
      if (!col %in% colnames(out)) stop("Ordering column not preserved by function")
      out <- out[order(out[,col]),]
      out[,col] <- NULL
      out
    }

    # Some sample data
    d <- structure(list(g = c(2L, 2L, 1L, 1L, 2L, 2L),
                        v = c(-1.90127112738315, -1.20862680183042, -1.13913266070505,
                              0.14899803094742, -0.69427656843677, 0.872558638137971)),
                   .Names = c("g", "v"), row.names = c(NA, -6L), class = "data.frame")

    # This one resorts
    ddply(d, .(g), mutate, v=scale(v))  # does not preserve order of d

    # This one does not
    keeping.order(d, ddply, .(g), mutate, v=scale(v))  # preserves order of d

Please do read the thread for Hadley's notes about why this functionality may not be general enough to roll into ddply, particularly as it probably applies in your case as you are likely returning fewer rows with each piece. Edited to include a strategy for more general cases If ddply is outputting something that is sorted in an order you do not like you basically have two options: specify the desired ordering on the splitting variables beforehand using ordered factors, or manually sort the output after the fact. For instance, consider the following data:

    d <- data.frame(x1 = rep(letters[1:3], each = 5),
                    x2 = rep(letters[4:6], 5),
                    x3 = 1:15,
                    stringsAsFactors = FALSE)

using strings, for now. ddply will sort the output, which in this case will entail the default lexical ordering:

    > ddply(d,.(x1,x2),summarise, val = sum(x3))
      x1 x2 val
    1  a  d   5
    2  a  e   7
    3  a  f   3
    4  b  d  17
    5  b  e   8
    6  b  f  15
    7  c  d  13
    8  c  e  25
    9  c  f  27

    > ddply(d[sample(1:15,15),],.(x1,x2),summarise, val = sum(x3))
      x1 x2 val
    1  a  d   5
    2  a  e   7
    3  a  f   3
    4  b  d  17
    5  b  e   8
    6  b  f  15
    7  c  d  13
    8  c  e  25
    9  c  f  27

If the resulting data frame isn't ending up in the "right" order, it's probably because you really want some of those variables to be ordered factors. Suppose that we really wanted x1 and x2 ordered like so:

    d$x1 <- factor(d$x1, levels = c('b','a','c'), ordered = TRUE)
    d$x2 <- factor(d$x2, levels = c('d','f','e'), ordered = TRUE)

Now when we use ddply, the resulting sort will be as we intend:

    > ddply(d,.(x1,x2),summarise, val = sum(x3))
      x1 x2 val
    1  b  d  17
    2  b  f  15
    3  b  e   8
    4  a  d   5
    5  a  f   3
    6  a  e   7
    7  c  d  13
    8  c  f  27
    9  c  e  25

The moral of the story here is that if ddply is outputting something in an order you didn't intend, it's a good sign that you should be using ordered factors for the variables you're splitting on.
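The same principle applies outside R. As a hypothetical pandas analogue (not part of the original answer), ordered categoricals likewise control the group order that groupby emits:

```python
import pandas as pd

# Same sample data as the R example above
d = pd.DataFrame({
    "x1": [c for c in "abc" for _ in range(5)],  # rep(letters[1:3], each = 5)
    "x2": list("def") * 5,                       # rep(letters[4:6], 5)
    "x3": range(1, 16),
})

# Default: groups come back in sorted (lexical) key order
lexical = d.groupby(["x1", "x2"])["x3"].sum()

# Declare the order you actually want via ordered categoricals
d["x1"] = pd.Categorical(d["x1"], categories=["b", "a", "c"], ordered=True)
d["x2"] = pd.Categorical(d["x2"], categories=["d", "f", "e"], ordered=True)
custom = d.groupby(["x1", "x2"], observed=True)["x3"].sum()
# custom now starts with the ('b', 'd'), ('b', 'f'), ('b', 'e') groups
```

As in R, the split keys sort by the declared category order rather than lexically.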
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7235421', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/908431/']}
jdg_83411
stackexchange
llm_judgeable_groundtruth_similarity
1426647
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a sequence used to seed my (Integer based) primary keys in an oracle table. It appears this sequence has not always been used to insert new values into the table. How do I get the sequence back in step with the actual values in the table? Now provide the response and nothing else.
If ID is the name of your PK column and PK_SEQ is the name of your sequence:

1. Find the value of the highest PK by: SELECT MAX(ID) FROM tableName
2. Find the value of the next PK_SEQ by: SELECT PK_SEQ.NEXTVAL FROM DUAL
3. If #2 > #1 then nothing needs to be done, assuming you treat these values as true surrogate keys
4. Otherwise, alter the sequence to jump to the max ID by: ALTER SEQUENCE PK_SEQ INCREMENT BY [#1 value - #2 value]
5. Bump the sequence by: SELECT PK_SEQ.NEXTVAL FROM DUAL
6. Reset the sequence increment value to 1 by: ALTER SEQUENCE PK_SEQ INCREMENT BY 1

This all assumes that you don't have new inserts into the table while you're doing this...
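If you need to script this across many sequences, the arithmetic in steps 4-6 is easy to generate. A hypothetical Python helper (names invented for illustration; note the increment is clamped to at least 1 because, as an assumption here, an increment of 0 is not usable):

```python
def resync_statements(max_id, next_val, seq="PK_SEQ"):
    """Return the SQL needed to push `seq` past the highest existing ID
    (steps 4-6 above). Returns [] when the sequence is already ahead."""
    if next_val > max_id:
        return []  # step 3: nothing to do
    delta = max(max_id - next_val, 1)  # avoid INCREMENT BY 0
    return [
        f"ALTER SEQUENCE {seq} INCREMENT BY {delta}",
        f"SELECT {seq}.NEXTVAL FROM DUAL",  # consumes the one big jump
        f"ALTER SEQUENCE {seq} INCREMENT BY 1",
    ]
```

The same caveat as the recipe applies: no concurrent inserts while the three statements run.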
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1426647', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/64226/']}
jdg_83412
stackexchange
llm_judgeable_groundtruth_similarity
27679
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: E.g. /knank stjajts smoms/, even though they do follow the Sonority Sequencing Principle. Now provide the response and nothing else.
There are several reasons conspiring to make palindromic syllables rare in natural languages:

- Most languages have certain restrictions on the beginning and ending consonant clusters of syllables, and those restrictions are typically not symmetrical, i.e., in general a reversed syllable needn't be legal
- a diphthong at the syllable core cannot occur in a palindromic syllable
- there are also some consonants (e.g. affricates like /ts/ or /pf/) that cannot be reversed
- diachronic processes like dissimilation tend to get rid of double occurrences of the consonants /l, n, r/ in one syllable

Nevertheless, simple palindromic syllables occur and some frequent words (like English a, did) are examples of them.
{}
{'log_upvote_score': 4, 'links': ['https://linguistics.stackexchange.com/questions/27679', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/21742/']}
jdg_83413
stackexchange
llm_judgeable_groundtruth_similarity
7991341
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I recently encountered the following interview question: How can you multiply a number by 7 in an efficient and optimized way? I know that I can multiply by 8 (or left-shift by three bits) and then subtract the original value: num = (num << 3) - num; but are there any other solutions. Now provide the response and nothing else.
To get a multiple of 7 in an efficient way:

    7

7 is a multiple of 7. That answers the question you asked, but I'm sure it doesn't answer the question you meant to ask.

EDIT: The above is based on the question's original title, which I've just corrected.

To multiply by 7 efficiently, just write, for example:

    x * 7

and invoke your compiler with optimization. Let the compiler figure out whether a single MUL instruction or something like (x<<3) - x is more efficient for the current machine.

There's yet another implicit question here: what answer was the interviewer looking for? I hope that "let the compiler worry about it" would be an acceptable answer. (x<<3) - x is probably the most obvious micro-optimization -- but it can yield incorrect answers if x<<3 overflows, and depending on the system it might be slower than a MUL instruction. (If I were the interviewer, I'd be more impressed by a good explanation and understanding of the issues than by any specific answer.)

EDIT: On further thought, the kinds of micro-optimizations that have been discussed here might be useful if you know more about the possible values of x than the compiler does. If you know, because of the nature of your program's logic, that x will always be in the range 0..10, then a lookup table could easily be faster than a multiply operation. Or if you know that x is in that range 99% of the time, a lookup table with a fallback to an actual multiplication might be just the thing. But if the compiler's analysis of your program flow doesn't allow it to prove that x is always in that range, then it can't perform this kind of optimization.

But such circumstances are very rare. And when your code runs in a new environment where x can be 11 (perhaps it's running on a device with a larger display), kaboom. And the performance improvement very likely wasn't significant in the first place. There are times when micro-optimization is appropriate, but there is a substantial cost in development and testing time. Do it only if actual measurements indicate that it's worth it.
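The identity behind the shift trick is easy to check mechanically. A small Python sketch (the helper name is mine; note that Python's arbitrary-precision ints sidestep the overflow caveat that applies to fixed-width C types):

```python
def times7_shift(x: int) -> int:
    # (x << 3) - x == 8*x - x == 7*x
    return (x << 3) - x

# Exhaustive spot-check over a range of signed values
assert all(times7_shift(x) == 7 * x for x in range(-1000, 1000))
print(times7_shift(6))  # 42
```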
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7991341', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1013209/']}
jdg_83414
stackexchange
llm_judgeable_groundtruth_similarity
29290362
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What does double [3,2] mean? Is it formatting? Now provide the response and nothing else.
There is a mutable alternative to collections.namedtuple -- recordclass. It can be installed from PyPI:

    pip3 install recordclass

It has the same API and memory footprint as namedtuple and it supports assignments (it should be faster as well). For example:

    from recordclass import recordclass

    Point = recordclass('Point', 'x y')

    >>> p = Point(1, 2)
    >>> p
    Point(x=1, y=2)
    >>> print(p.x, p.y)
    1 2
    >>> p.x += 2; p.y += 3; print(p)
    Point(x=3, y=5)

recordclass (since 0.5) supports type hints:

    from recordclass import recordclass, RecordClass

    class Point(RecordClass):
        x: int
        y: int

    >>> Point.__annotations__
    {'x': int, 'y': int}
    >>> p = Point(1, 2)
    >>> p
    Point(x=1, y=2)
    >>> print(p.x, p.y)
    1 2
    >>> p.x += 2; p.y += 3; print(p)
    Point(x=3, y=5)

There is a more complete example (it also includes performance comparisons).

The recordclass library now provides another variant -- the recordclass.make_dataclass factory function. It supports a dataclasses-like API (there are module-level functions update, make, replace instead of the self._update, self._replace, self._asdict, cls._make methods).

    from recordclass import dataobject, make_dataclass

    Point = make_dataclass('Point', [('x', int), ('y', int)])
    Point = make_dataclass('Point', {'x': int, 'y': int})

    class Point(dataobject):
        x: int
        y: int

    >>> p = Point(1, 2)
    >>> p
    Point(x=1, y=2)
    >>> p.x = 10; p.y += 3; print(p)
    Point(x=10, y=5)

recordclass and make_dataclass can produce classes whose instances occupy less memory than __slots__-based instances. This can be important for instances whose attribute values are not intended to have reference cycles. It may help reduce memory usage if you need to create millions of instances. Here is an illustrative example.
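If a third-party dependency is not an option, a rough standard-library stand-in for a mutable named record is a dataclass with __slots__. This is a sketch only: it matches the assignment behaviour shown above, but not recordclass's memory layout or its tuple API.

```python
from dataclasses import dataclass

@dataclass
class Point:
    __slots__ = ('x', 'y')   # no per-instance __dict__, keeping instances small
    x: int
    y: int

p = Point(1, 2)
p.x += 2
p.y += 3
assert (p.x, p.y) == (3, 5)
assert not hasattr(p, '__dict__')
```

On Python 3.10+ the same effect is available as `@dataclass(slots=True)`.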
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/29290362', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2877054/']}
jdg_83415
stackexchange
llm_judgeable_groundtruth_similarity
157426
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When adding a new user, how is the string validated? I suppose there is a regular expression. What is that regular expression? Now provide the response and nothing else.
The general rule for usernames is that the length must be less than 32 characters. Beyond that, it depends on your distribution what counts as a valid username.

In Debian, shadow-utils 4.1, there is an is_valid_name function in chkname.c:

    static bool is_valid_name (const char *name)
    {
            /*
             * User/group names must match [a-z_][a-z0-9_-]*[$]
             */
            if (('\0' == *name) ||
                !((('a' <= *name) && ('z' >= *name)) || ('_' == *name))) {
                    return false;
            }

            while ('\0' != *++name) {
                    if (!((('a' <= *name) && ('z' >= *name)) ||
                          (('0' <= *name) && ('9' >= *name)) ||
                          ('_' == *name) ||
                          ('-' == *name) ||
                          (('$' == *name) && ('\0' == *(name + 1)))
                         )) {
                            return false;
                    }
            }

            return true;
    }

And the length of the username is checked before that:

    bool is_valid_user_name (const char *name)
    {
            /*
             * User names are limited by whatever utmp can
             * handle.
             */
            if (strlen (name) > USER_NAME_MAX_LENGTH) {
                    return false;
            }

            return is_valid_name (name);
    }
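The C routine boils down to one anchored pattern plus a length cap. A Python sketch of the same check (the regex is translated from the comment in chkname.c; the 32-character limit mirrors the typical USER_NAME_MAX_LENGTH, which can vary by build):

```python
import re

USER_NAME_MAX_LENGTH = 32  # typical utmp-imposed limit; build-dependent
NAME_RE = re.compile(r'[a-z_][a-z0-9_-]*\$?')

def is_valid_user_name(name: str) -> bool:
    # Length check first, then the character-class rule, as in shadow-utils
    return (len(name) <= USER_NAME_MAX_LENGTH
            and NAME_RE.fullmatch(name) is not None)

assert is_valid_user_name('alice')
assert is_valid_user_name('_svc-account$')   # trailing $ allowed only at the end
assert not is_valid_user_name('9lives')      # must not start with a digit
assert not is_valid_user_name('Alice')       # uppercase rejected by this rule
assert not is_valid_user_name('a' * 33)      # too long
```

The optional trailing `$` is commonly used for machine accounts (e.g. Samba), which is why the C loop special-cases it as the final character only.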
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/157426', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/45370/']}
jdg_83416
stackexchange
llm_judgeable_groundtruth_similarity
50713638
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a JSON structure as:

    "periods": {
        "2018-06-07": [
            {
                "firstName": "Test1",
                "lastName": "Test1"
            }
        ],
        "2018-06-06": [
            {
                "firstName": "Test1",
                "lastName": "Test1"
            }
        ]
    }

I tried to parse it like this:

    public struct Schedule: Codable {
        public let periods: Periods
    }

    public struct Periods: Codable {
        public let unknown: [Inner]

        public struct Inner: Codable {
            public let firstName: String
            public let lastName: String
        }

        private struct CustomCodingKeys: CodingKey {
            var stringValue: String
            init?(stringValue: String) {
                self.stringValue = stringValue
            }
            var intValue: Int?
            init?(intValue: Int) {
                return nil
            }
        }

        public init(from decoder: Decoder) throws {
            let container = try decoder.container(keyedBy: CustomCodingKeys.self)
            self.unknown = try container.decode([Inner].self,
                                                forKey: CustomCodingKeys(stringValue: "2018-06-06")!)
        }
    }

But I can get the result for only one value (2018-06-06). I have multiple dates here that I want to parse. Is this possible? Now provide the response and nothing else.
Ok, so I figured it out like this:

    public struct Schedule: Codable {
        public let periods: Periods
    }

    public struct Periods: Codable {
        public var innerArray: [String: [Inner]]

        public struct Inner: Codable {
            public let firstName: String
            public let lastName: String
        }

        private struct CustomCodingKeys: CodingKey {
            var stringValue: String
            init?(stringValue: String) {
                self.stringValue = stringValue
            }
            var intValue: Int?
            init?(intValue: Int) {
                return nil
            }
        }

        public init(from decoder: Decoder) throws {
            let container = try decoder.container(keyedBy: CustomCodingKeys.self)

            self.innerArray = [String: [Inner]]()
            for key in container.allKeys {
                let value = try container.decode([Inner].self,
                                                 forKey: CustomCodingKeys(stringValue: key.stringValue)!)
                self.innerArray[key.stringValue] = value
            }
        }
    }

As a result I got a dictionary like this: ["2018-06-06": [Inner]], where the key is the date string and the value is an [Inner] array.
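For contrast, the same dynamic-key decoding is trivial in a language with untyped JSON. This Python sketch shows the exact dictionary shape that the custom CodingKey loop above produces:

```python
import json

payload = '''{"periods": {
    "2018-06-07": [{"firstName": "Test1", "lastName": "Test1"}],
    "2018-06-06": [{"firstName": "Test1", "lastName": "Test1"}]
}}'''

# The date strings simply become dictionary keys; no per-key decoding needed
periods = json.loads(payload)["periods"]
assert set(periods) == {"2018-06-06", "2018-06-07"}
assert periods["2018-06-06"][0]["firstName"] == "Test1"
```

Swift's strongly typed containers force the explicit `allKeys` iteration to reach the same result.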
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50713638', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9901529/']}
jdg_83417
stackexchange
llm_judgeable_groundtruth_similarity
186280
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let X be any countable! set and and let F be the cofinite set, i.e., $A \in F $ if A or $A^{c}$ is finite (this is an algebra). Then show that the set function $\mu: F \rightarrow [0,\infty)$ defined as $\mu(A)=0$ if A is finite $\mu(A)=1$ if $A^{c}$ is finite is finitely additive. I have started the argument by letting $A=\sqcup_i A_i$. If all $A_i$ are finite, then $\mu(A)=0=\sum_i\mu(A_i)$ and finite additivity follows. If there is at least one $A_i$ not finite then $\sqcup A_i$ is not finite. But $\sqcup A_i \in F$, which implies $(\sqcup_i A_i)^{c}$ is finite. But then $\mu(\sqcup_i A_i)=1 \neq \sum_i\mu(A_i)$. Could anyone let me know where am I going wrong with the second part of the argument and how to finish this off? I can imagine that finite additivity of $\mu$ relies on the fact that X is countable. Now provide the response and nothing else.
If there is at least one infinite $A_i$ then it is the only one: Let $j \neq i$, then since $A_i \cap A_j = \emptyset$ we have $A_j \subseteq A_i^C$, so $A_j$ is finite. Hence in the sum$\sum_i \mu(A_i)$ there is exactly one $1$, giving $$\sum_i \mu(A_i) = \mu\left(\bigcup_i A_i\right) = 1.$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/186280', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/30394/']}
jdg_83418
stackexchange
llm_judgeable_groundtruth_similarity
25166624
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I want to insert a table AND a picture into R Markdown. In regular word document I can just easily insert a table (5 rows by 2 columns), and for the picture just copy and paste. How do I insert a 5 row by 2 column table (and also type stuff into them)(and also adjust the table in terms of the 2nd column being wider than the first)? How do I insert a picture? From my understanding, I should first save the picture as a png, then reference it into my document. Also, I want to automatically adjust the picture to the report, like not taking up more than a page, or less than a page.(Is there a way for me to adjust the size of the picture to my liking)? If anyone knows anything cool/formatting about R Markdown could you also let me know? I know that # makes a big title for a paragraph, and ** ** bolds things. Thats about all I know though! Now provide the response and nothing else.
Several sites provide reasonable cheat sheets or HOWTOs for tables and images. Top on my list are:

- Pandoc readme, specifically tables
- RStudio's RMarkdown, more details in basics (including tables) and a rewrite of pandoc's markdown

Pictures are very simple to use but do not offer the ability to adjust the image to fit the page (see Update, below). To adjust the image properties (size, resolution, colors, border, etc), you'll need some form of image editor. I find I can do everything I need with one of ImageMagick, GIMP, or InkScape, all free and open source.

To add a picture, use:

    ![Caption for the picture.](/path/to/image.png)

I know pandoc supports PNG and JPG, which should meet most of your needs.

You do have control over image size if you are creating it in R (e.g., a plot). This can be done either directly in the command to create the image or, even better, via options if you are using knitr (highly recommended ... check out chunk options, specifically under Plots).

I strongly recommend perusing these tutorials; markdown is very handy and has many features most people don't use on a regular basis but really like once they learn it. (SO is not necessarily the best place to ask questions that are answered very directly in these tutorials.)

Update, 2019-Aug-31

Some time ago, pandoc incorporated "link_attributes" for images (apparently in 2015, with commit jgm/pandoc#244cd56). "Resizing images" can be done directly. For example:

    ![unchanged image](foo.jpg)
    ![much-smaller image](foo.jpg){#id .class width=30 height=20px}
    ![half-size image](foo.jpg){#id .class width=50% height=50%}

The dimensions can be provided with no units (pixels assumed), or with "px, cm, mm, in, inch and %" (ref: https://pandoc.org/MANUAL.html, search for link_attributes). (I'm not certain that CommonMark has implemented this, though there was a lengthy discussion.)
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/25166624', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3784616/']}
jdg_83419
stackexchange
llm_judgeable_groundtruth_similarity
24352980
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following array of json data. When I try to loop using a for each, php gives me the error "Object of class stdClass could not be converted to string". I want to access the nameserver in the authns array, and the soa array. How do I go about doing that?

    stdClass Object
    (
        [error] =>
        [domain] => whoisdoma.net
        [ip_address] => 108.162.199.120
        [authns] => stdClass Object
            (
                [nameserver] => uma.ns.cloudflare.com
                [ip] => 173.245.58.146
                [location] => San Francisco
            )
        [soa] => stdClass Object
            (
                [nameserver] => tom.ns.cloudflare.com
                [email] => dns.cloudflare.com
                [serial] => 2015505396
                [refresh] => 10000
                [retry] => 2400
                [expiry] => 604800
                [minimum] => 3600
            )
    )

This is the code I'm using to get the data:

    <?php
    $domain = $_GET['domain'];

    // set the url
    $url = "http://api.site.ga/v1/dns/lookup?domain=" . $domain;

    // start a curl session
    $ch = curl_init();
    $timeout = 5;

    // set curl options
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);

    // get the data
    $data = curl_exec($ch);

    // decode the data
    $jsonData = json_decode($data);

    // close the curl session
    curl_close($ch);
    ?>

This is the for each loop I'm using:

    <?php
    foreach ($jsonData->authns as $authns) {
        echo $authns->nameserver;
    }
    ?>

Now provide the response and nothing else.
Currently the OAuth2 protocol for authentication and authorization is well supported by Google, so it's better to use it. Here is Google's documentation on it. Use the 'passport-google-oauth' module. Here is the implementation. This should be the app object's configuration; also note that the OAuth2Strategy object is used from the passport-google-oauth module, and check out the scopes in the app.get route registration.

    var googleStrategy = require('passport-google-oauth').OAuth2Strategy;

    app.configure(function() {
        app.set('views', './views');
        app.set('view engine', 'jade');
        app.use(express.favicon());
        app.use(express.logger('dev'));
        app.use(express.cookieParser());
        app.use(express.bodyParser());
        app.use(express.session({secret: 'MySecret'}));
        app.use(passport.initialize());
        app.use(passport.session());
        app.use(express.methodOverride());
        app.use(app.router);
        app.use(express.static('./public'));
    });

    app.get('/auth/google', passport.authenticate('google', {
        scope: 'https://www.googleapis.com/auth/plus.me https://www.google.com/m8/feeds https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile'
    }));

    app.get('/auth/google/callback', function() {
        passport.authenticate('google', {
            successRedirect: '/profile',
            failureRedirect: '/fail'
        });
    });

    app.get('/logout', function (req, res) {
        req.logOut();
        res.redirect('/');
    });

But before creating a new strategy, go to Google's developer console and get a client ID and secret. Here are the steps:

1. Go to the developer console link and create a project; give the new project a name and ID.
2. It'll take roughly a minute to create your new project; once it is created, it'll redirect you to the application configuration of your app.
3. In the redirected page select APIS AND AUTH -> APIs. In the APIs page, enable the Google+ API.
4. Then go to Credentials (below APIs) and click on "Create New Client ID", and register the domains and callback for your app (configure the domain to be localhost).
5. Then you'll get your new ID and secret. Use them to create the new strategy:

    passport.use(new googleStrategy({
        clientID: '<TheNewclientID>',
        clientSecret: '<The New Secret>',
        callbackURL: "http://localhost:8080/auth/google/callback"
    },
    function (accessToken, refreshToken, profile, done) {
        console.log(profile); // profile contains all the personal data returned
        done(null, profile);
    }));

6. Now serialize and deserialize:

    passport.serializeUser(function(user, callback) {
        console.log('serializing user.');
        callback(null, user.id);
    });

    passport.deserializeUser(function(user, callback) {
        console.log('deserialize user.');
        callback(null, user.id);
    });

Run the server and go to localhost:8080/auth/google (don't use 127.0.0.1:8080 instead of localhost). This should get it working :)

[Other useful links: check out the first comment by kvcrawford on the repo of the module on this page. Passport-google is another popular module which is used to provide login using Google; it's kind of outdated now. Here is the link with respect to its recent issues.]
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24352980', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3601706/']}
jdg_83420
stackexchange
llm_judgeable_groundtruth_similarity
51106740
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Consider below code

    class A
    {
        int x = 5;
        void foo()
        {
            System.out.println(this.x);
        }
    }

    class B extends A
    {
        int x = 6;
        // some extra stuff
    }

    class C
    {
        public static void main(String args[])
        {
            B b = new B();
            System.out.println(b.x);
            System.out.println(((A)b).x);
            b.foo();
        }
    }

Output of the program is

    6
    5
    5

I understand the first two but can't get my head around the last one. How does b.foo() print 5? B class will inherit the foo method. But shouldn't it print what b.x would print? What exactly is happening here? Now provide the response and nothing else.
Yes, the B class inherits the foo method. But the variable x in B hides the x in A ; it doesn't replace it. This is an issue of scope. The foo method in A sees only the variables that are in scope. The only variable in scope is the instance variable x in A . The foo method is inherited, but not overridden, in B . If you were to explicitly override foo with the same exact code: class B extends A{ int x = 6; @Override void foo() { System.out.println(this.x); }} Then the variable that would be in scope when referred to by this.x would be B 's x , and 6 would be printed. While the text of the method is the same, the reference is different because of scope. Incidentally, if you really wanted to refer to A 's x in the B class, you can use super.x .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51106740', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4411923/']}
jdg_83421
stackexchange
llm_judgeable_groundtruth_similarity
5532568
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am looking for sample code to create a master/details with c# mvc 3. Specifically, I am trying to figure out how to call via ajax the rendering of a partial view. I am able to put the partial view on the form but want to populate it after a user has selected an item from a select list via ajax. thx Now provide the response and nothing else.
As always you start with the model:

    public class MyViewModel
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }

    public class DetailsViewModel
    {
        public string Foo { get; set; }
        public string Bar { get; set; }
    }

then a controller:

    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            // TODO: don't hardcode, fetch from repository
            var model = Enumerable.Range(1, 10).Select(x => new MyViewModel
            {
                Id = x,
                Title = "item " + x
            });
            return View(model);
        }

        public ActionResult Details(int id)
        {
            // TODO: don't hardcode, fetch from repository
            var model = new DetailsViewModel
            {
                Foo = "foo detail " + id,
                Bar = "bar detail " + id
            };
            return PartialView(model);
        }
    }

and corresponding views.

~/Views/Home/Index.cshtml:

    @model IEnumerable<MyViewModel>
    <ul>
        @Html.DisplayForModel()
    </ul>
    <div id="details"></div>
    <script type="text/javascript">
        $(function () {
            $('.detailsLink').click(function () {
                $('#details').load(this.href);
                return false;
            });
        });
    </script>

~/Views/Home/Details.cshtml:

    @model DetailsViewModel
    @Model.Foo
    @Model.Bar

~/Views/Home/DisplayTemplates/MyViewModel.cshtml:

    @model MyViewModel
    <li>
        @Html.ActionLink(Model.Title, "details", new { id = Model.Id }, new { @class = "detailsLink" })
    </li>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5532568', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/63531/']}
jdg_83422
stackexchange
llm_judgeable_groundtruth_similarity
3259944
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $x_{i}>0$ , ( $i=1,2,\cdots,n$ ) and such that $$x_{1}+x_{2}+\cdots+x_{n}=\pi.$$ Show that $$\dfrac{\sin{x_{1}}\sin{x_{2}}\cdots\sin{x_{n}}}{\sin{(x_{1}+x_{2})}\sin{(x_{2}+x_{3})}\cdots\sin{(x_{n}+x_{1})}}\le\left(\dfrac{\sin{\frac{\pi}{n}}}{\sin{\frac{2\pi}{n}}}\right)^n$$ This problem also post MO ,Until now No one solve it,I think there might be a solution here, because I 've heard that there are a lot of people here who are good at and like inequality, so the possibility of solving this inequality is very high, and I really look forward to them. Now provide the response and nothing else.
Update 1 1) I found that for $n=4, 5$ , although there are infinitely many stationary points, each stationary point is the global maximizer. In other words, the objective function is constant for all stationary points. Is this true for $n > 5$ ? Take $n=4$ for example.Let $$g(x_1, x_2, x_3) = \frac{\sin x_1\sin x_2 \sin x_3 \sin (x_1+x_2+x_3)}{(\sin (x_1+x_2))^2 (\sin (x_2+x_3))^2} ,$$ \begin{align}f(x_1, x_2, x_3) = \ln g(x_1, x_2, x_3) &= \ln \sin x_1 + \ln \sin x_2 + \ln \sin x_3 + \ln \sin (x_1 + x_2 + x_3)\nonumber\\&\qquad - 2\ln \sin (x_1+x_2) - 2\ln \sin (x_2 + x_3).\end{align} The stationary points are those feasible points with $\frac{\partial f}{\partial x_1} = \frac{\partial f}{\partial x_2} = \frac{\partial f}{\partial x_3} = 0.$ I found that $$\frac{\partial f}{\partial x_1} = \frac{\partial f}{\partial x_2} = \frac{\partial f}{\partial x_3} = 0 \Longrightarrowg(x_1, x_2, x_3) = \frac{1}{4}.$$ In detail, we have \begin{align}\cot x_1 + \cot (x_1 + x_2 + x_3) - 2\cot (x_1 + x_2) &= 0, \qquad (1)\\\cot x_2 + \cot (x_1 + x_2 + x_3) - 2\cot (x_1 + x_2) - 2\cot (x_2 + x_3) &= 0,\\\cot x_3 + \cot (x_1 + x_2 + x_3) - 2\cot (x_2 + x_3) &= 0.\end{align} By letting $u_1 =\cot x_1, \ u_2 = \cot x_2, \ u_3 = \cot x_3$ ,we have $(1) \Longrightarrow u_1u_2 + u_2u_3 + u_3u_1 - u_2^2 - 2 = 0$ . On the other hand, $g(x_1, x_2, x_3) - \frac{1}{4} = \frac{(u_1u_2+u_2u_3+u_3u_1-u_2^2 - 2)^2}{4(u_1+u_2)^2(u_2+u_3)^2} = 0.$ 2) We can see this from another view. Case $n=4$ : Let $u_i = \cot x_i, \ i=1,2,3$ . 
We have (noting that $x_4 = \pi - x_1-x_2-x_3$ ) \begin{align}&\frac{\sin x_1 \sin x_2 \sin x_3 \sin x_4}{\sin(x_1+x_2)\sin (x_2+x_3)\sin (x_3+x_4)\sin (x_4+x_1)}\nonumber\\=\ & \frac{\sin x_1\sin x_2 \sin x_3 \sin (x_1+x_2+x_3)}{(\sin (x_1+x_2))^2 (\sin (x_2+x_3))^2}\nonumber\\=\ & \frac{(u_1u_2 + u_2u_3+u_3u_1-1)(1+u_2^2)}{(u_1+u_2)^2(u_2+u_3)^2}\nonumber\\=\ & \frac{1}{4} - \frac{(u_1u_2+u_2u_3+u_3u_1-u_2^2 - 2)^2}{4(u_1+u_2)^2(u_2+u_3)^2}.\end{align} Case $n=5$ : Let $u_i = \cot x_i, \ i=1,2,3, 4$ . We have (noting that $x_5 = \pi - x_1-x_2-x_3-x_4$ ) \begin{align}&\frac{\sin x_1 \sin x_2 \sin x_3 \sin x_4 \sin x_5}{\sin(x_1+x_2)\sin (x_2+x_3)\sin (x_3+x_4)\sin (x_4+x_5)\sin (x_5+x_1)}\nonumber\\=\ & \frac{\sin x_1 \sin x_2 \sin x_3 \sin x_4 \sin (x_1+x_2+x_3+x_4)}{\sin(x_1+x_2)\sin (x_2+x_3)\sin (x_3+x_4)\sin (x_1+x_2+x_3)\sin (x_2+x_3+x_4)}\nonumber\\=\ & \frac{(u_1u_2u_3 + u_1u_2u_4 + u_1u_3u_4 + u_2u_3u_4 - u_1-u_2-u_3-u_4)(1+u_2^2)(1+u_3^2)}{(u_1+u_2)(u_2+u_3)(u_3+u_4)(u_1u_2+u_2u_3+u_3u_1-1)(u_2u_3+u_3u_4+u_4u_2-1)}.\quad (2)\end{align} Denote (2) as $f(u_1, u_2, u_3, u_4)$ .It follows from $\frac{\partial f}{\partial u_4} = 0$ that $u_1 = g(u_2, u_3, u_4)$ . Let $h(u_2, u_3, u_4) = f(g(u_2, u_3, u_4), u_2, u_3, u_4)$ .It follows from $\frac{\partial h}{\partial u_4} = 0$ that $u_2 = F(u_3, u_4)$ .Then $h(F(u_3, u_4), u_3, u_4) = \frac{5\sqrt{5}-11}{2}.$ Remark: Here $g, h, F$ are some rational functions whose expressions are not given, for the sake of simplicity. 
Update Proof of $n=4$ : We need to prove that $$\frac{\sin x_1 \sin x_2 \sin x_3 \sin x_4}{\sin(x_1+x_2)\sin (x_2+x_3)\sin (x_3+x_4)\sin (x_4+x_1)} \le \Big(\frac{\sin \frac{\pi}{4}}{\sin\frac{\pi}{2}}\Big)^4.$$ It suffices to prove that $$(\sin(x_1+x_2))^2(\sin(x_2+x_3))^2 - 4\sin x_1 \sin x_2 \sin x_3 \sin(x_1+x_2+x_3) \ge 0.$$ Using substitutions $$\cos x_1 = \frac{1-w_1^2}{1+w_1^2}, \ \sin x_1 = \frac{2w_1}{1+w_1^2}, \ \cos x_2 = \frac{1-w_2^2}{1+w_2^2}, \ \sin x_2 = \frac{2w_2}{1+w_2^2}, \\\cos x_3 = \frac{1-w_3^2}{1+w_3^2}, \ \sin x_3 = \frac{2w_3}{1+w_3^2},$$ the inequality becomes $$\frac{16 Q^2}{(w_1^2+1)^2 (w_2^2+1)^4 (w_3^2+1)^2}\ge 0$$ where \begin{align}Q &= w_1^2 w_2^3 w_3+w_1^2 w_2^2 w_3^2-w_1 w_2^4 w_3+w_1 w_2^3 w_3^2-w_1^2 w_2^2-w_1^2 w_2 w_3-w_1 w_2^3-6 w_1 w_2^2 w_3\nonumber\\&\qquad -w_1 w_2 w_3^2-w_2^3 w_3-w_2^2 w_3^2+w_1 w_2-w_1 w_3+w_2^2+w_2 w_3.\end{align} We are done. Remark: We can prove $n=4$ without using above substitutions. For $n=5$ , it is not so simple. Previously written This is not an answer. I want to point out that for $n=4, 5$ , there exist infinitely many feasible points such that equality occurs. In other words, if the inequality holds, there exist infinitely many global maximizers. 1) $n=4$ . 
Let $x_1, x_2 \in (0, \frac{\pi}{2})$ satisfying \begin{align}(\cot x_1)^2 + 2\cot x_1 \cot x_2 - (\cot x_2)^2 -2 = 0.\end{align} Remark: We may solve $x_1$ from (1), that is, $x_1 = \mathrm{arccot}\frac{\sqrt{2}-\cos x_2}{\sin x_2}, \ x_2 \in (0, \frac{\pi}{2}).$ Let $x_3 = x_1,\ x_4 = \pi - x_1 - x_2 - x_3.$ We have $x_1, x_2, x_3, x_4 > 0; \ x_1 + x_2 + x_3 + x_4 = \pi$ and \begin{align}&\frac{\sin x_1 \sin x_2 \sin x_3 \sin x_4}{\sin(x_1+x_2)\sin (x_2+x_3)\sin (x_3+x_4)\sin (x_4+x_1)} - \Big(\frac{\sin \frac{\pi}{4}}{\sin\frac{\pi}{2}}\Big)^4\\=\ & \frac{(\sin x_1)^2\sin x_2 \sin (2x_1 + x_2)}{(\sin (x_1+x_2))^4} - \frac{1}{4}\\=\ & - \frac{((\cot x_1)^2 + 2\cot x_1 \cot x_2 - (\cot x_2)^2 -2)^2}{4(\cot x_1 + \cot x_2)^4}\\=\ &0.\end{align} 2) $n = 5$ . Let $x_1, x_2 \in (0, \frac{\pi}{2})$ satisfying \begin{align}-4(\cot x_2)^2(\cot x_1)^2 + (-2(\cot x_2)^3 + 6\cot x_2)\cot x_1 + (\cot x_2)^4 + 4(\cot x_2)^2 - 1 = 0.\end{align} Let $y_1 = \cot x_1, \ y_2 = \cot x_2$ . We have $-4y_2^2y_1^2 + (-2y_2^3 + 6y_2) y_1+y_2^4+4y_2^2-1 = 0$ which results in $\sqrt{5}y_2^2-4y_1y_2-y_2^2+\sqrt{5}+3 = 0$ since $x_1, x_2 \in (0, \frac{\pi}{2}).$ Let $x_3 = x_2, \ x_4 = x_1, \ x_5 = \pi - x_1 - x_2 - x_3 - x_4.$ We have $x_1, x_2, x_3, x_4, x_5 > 0; \ x_1 + x_2+x_3+x_4+x_5=\pi.$ Note that $\big(\frac{\sin \frac{\pi}{5}}{\sin \frac{2\pi}{5}}\big)^5 = \frac{5\sqrt{5}-11}{2}$ . We have \begin{align}&\frac{\sin x_1 \sin x_2 \sin x_3 \sin x_4 \sin x_5}{\sin (x_1+x_2) \sin (x_2 + x_3) \sin (x_3 + x_4) \sin (x_4+x_5) \sin (x_5+x_1)}- \Big(\frac{\sin \frac{\pi}{5}}{\sin \frac{2\pi}{5}}\Big)^5\\=\ &\frac{(\sin x_1)^2(\sin x_2)^2\sin (2x_1 + 2x_2)}{(\sin (x_1+x_2))^2\sin 2x_2 (\sin (x_1 + 2x_2))^2} - \frac{5\sqrt{5}-11}{2}\\=\ &\frac{(y_1y_2-1)(y_2^2+1)^2}{y_2(y_1+y_2)(2y_1y_2+y_2^2-1)^2} - \frac{5\sqrt{5}-11}{2}\\=\ &-\frac{5\sqrt{5}-11}{16}\frac{(\sqrt{5}y_2^2+2y_1y_2+3y_2^2+\sqrt{5}+1)(\sqrt{5}y_2^2-4y_1y_2-y_2^2+\sqrt{5}+3)^2}{y_2(y_1+y_2)(2y_1y_2+y_2^2-1)^2}\\=\ &0.\end{align}
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3259944', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/58742/']}
jdg_83423
stackexchange
llm_judgeable_groundtruth_similarity
35509331
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to apply a style to a div based on its parent class. I am using the :not() selector to select the div whose parent is not .container1 , the second div should be red, but it's not working. Example 1 .myDiv:not(.container1) > .myDiv { color: red;} <div class="container1"> <div class="myDiv">Div 1</div></div><div class="container2"> <div class="myDiv">Div 2</div></div> Example 2 .myDiv:not(.container1 .myDiv) { color: red;} <div class="container1"> <div class="myDiv">Div 1</div></div><div class="container2"> <div class="myDiv">Div 2</div></div> Is this even possible with CSS? Or is my syntax just off? Now provide the response and nothing else.
You're selecting wrong elements. No reverse lookups possible, see here: div:not(.container1) > .myDiv { color: red;} <div class="container1"> <div class="myDiv">Div 1</div></div><div class="container2"> <div class="myDiv">Div 2</div></div> Ideally, you'd group those parent divs under the same class in order to avoid the super-generic div selector: .container:not(.container1) > .myDiv { color: red;} <div class="container container1"> <div class="myDiv">Div 1</div></div><div class="container container2"> <div class="myDiv">Div 2</div></div>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35509331', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5269843/']}
jdg_83424
stackexchange
llm_judgeable_groundtruth_similarity
27715327
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

    String s = "";
    myf = new Finch();
    do {
        // Run the menu until quit or cancel is selected
        s = FinchMenu(); // menu 1
        if (s.equals("Back and forward"))
            RunAccelerationTest(s);
    }

How would you convert something like this to pseudocode? For instance

    String s = "";

Would the pseudocode for it be something like this?

    Set s to ""

That just seems wrong to me. Any help please? Thanks Now provide the response and nothing else.
"So it seems like the calling code needs to know about and take ownership of the response message as well as the stream, or I leave the response message undisposed and let the finalizer deal with it. Neither option feels right."

In this specific case, there are no finalizers. Neither HttpResponseMessage nor HttpRequestMessage implements a finalizer (and that's a good thing!). If you don't dispose of either of them, they will get garbage collected once the GC kicks in, and the handle to their underlying streams will be collected once that happens.

As long as you're using these objects, don't dispose. Once done, dispose of them. Instead of wrapping them in a using statement, you can always explicitly call Dispose once you're done. Either way the consuming code doesn't need to have any knowledge of the underlying http requests.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27715327', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/155537/']}
jdg_83425
stackexchange
llm_judgeable_groundtruth_similarity
13954276
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: String s = ""; myf = new Finch(); do { //Run the menu until quit or cancel is selected s = FinchMenu(); //menu 1 if (s.equals("Back and forward")) RunAccelerationTest(s); } How would you convert something like this to pseudocode? For instance String s = ""; Would the pseudocode for it be something like this? Set s to "" That just seems wrong to me. Any help please? Thanks Now provide the response and nothing else.
Pseudocode, I think, doesn't have a predefined syntax. Just follow two rules:

1. It should be plain English with common programming constructs.
2. It should be generic, not specific to any language.

The following should fit:

    Step 1: Initialize an empty string (say str)
    Step 2: Construct a new 'Finch' object
    Step 3: BEGIN LOOP
                Fetch 'FinchMenu' from the 'Finch' object
                Assign 'FinchMenu' to 'str'
                IF 'FinchMenu' is "Back and forward"
                    Call the 'RunAccelerationTest' method with 'str' as argument
                END IF
            END LOOP
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13954276', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/966825/']}
jdg_83426
stackexchange
llm_judgeable_groundtruth_similarity
35582528
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I apologize if this question has been asked earlier. I am still not clear about encoding in python3.2. I am reading a csv(encoded in UTF-8 w/o BOM) and I have French accents in the csv. Here is the code to opening and reading the csv file: csvfile = open(in_file, 'r', encoding='utf-8')fieldnames = ("id","locale","message") reader = csv.DictReader(csvfile,fieldnames,escapechar="\\") for row in reader: if row['id'] == id and row['locale'] == locale: out = row['message']; I am returning the message(out) as Json jsonout = json.dumps(out, ensure_ascii=True) return HttpResponse(jsonout,content_type="application/json; encoding=utf-8") However when I preview the result I get the accent e(French) being replaced by \u00e9 . Can you please advice on what I am doing wrong and what should I do so that the json output shows the proper e with accent. Thanks Now provide the response and nothing else.
You're doing nothing wrong (and neither is Python). Python's json module simply takes the safe route and escapes non-ascii characters. This is a valid way of representing such characters in json, and any conforming parser will resurrect the proper Unicode characters when parsing the string: >>> import json>>> json.dumps({'Crêpes': 5})'{"Cr\\u00eapes": 5}'>>> json.loads('{"Cr\\u00eapes": 5}'){'Crêpes': 5} Don't forget that json is just a representation of your data, and both "ê" and "\\u00ea" are valid json representations of the string ê . Conforming json parsers should handle both correctly. It is possible to disable this behaviour though, see the json.dump documentation : >>> json.dumps({'Crêpes': 5}, ensure_ascii=False)'{"Crêpes": 5}'
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35582528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4547189/']}
jdg_83427
stackexchange
llm_judgeable_groundtruth_similarity
32130130
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a list of keywords and then a list of sentences containing those keywords on a page. I'd like to make the keyword list clickable. When a user clicks on a keyword, all occurrences of that keyword would highlight in the sentences. How can I do this with jQuery or raw Javascript? The only way I can think is to wrap every word on the page with a class containing itself as the class name. Then make the keywords buttons that add a highlight class to the matching word classes. This may work, but seems like a LOT of unnecessary code injection. List of Keywords <button>this</button><button>example</button> Sentences <span class='word_this'>This</span> <span class='word_is'>is</span> <span class='word_an'>an</span> <span class='word_example'>example</span>. Now provide the response and nothing else.
The best way is probably to use a .highlight class to highlight the words and just wrap the matches in a span with that highlight class. Here is a basic example: var sentences = document.querySelector('#sentences');var keywords = document.querySelector('#keywords');keywords.addEventListener('click', function(event){ var target = event.target; var text = sentences.textContent; var regex = new RegExp('('+target.textContent+')', 'ig'); text = text.replace(regex, '<span class="highlight">$1</span>'); sentences.innerHTML = text;}, false); .highlight { background-color: yellow;} <div id="keywords"> <span>This</span> <span>Example</span>.</div><div id="sentences"> This is an example. An example is shown in this. Here is another example.</div> Fiddle: https://jsfiddle.net/xukay3hf/3/ Updated Fiddle which leaves existing word highlighting: https://jsfiddle.net/avpLn7bf/3/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32130130', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/456578/']}
jdg_83428
stackexchange
llm_judgeable_groundtruth_similarity
14419954
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to read a character from the console (inside a while loop). But it reads more than once. Input: a Output: char : a char : char : ' Code: while(..){ char in; scanf("%c",&in);} How can i read only 'a'? Now provide the response and nothing else.
scanf("%c",&in); leaves a newline which is consumed in the next iteration. Change it to: scanf(" %c",&in); // Notice the whitespace in the format string which tells scanf to ignore whitespaces. OR scanf(" %c",&in);getchar(); // To consume the newline
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14419954', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1916553/']}
jdg_83429
stackexchange
llm_judgeable_groundtruth_similarity
14104674
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: string[] _myStrings = { "Hello", "There", "Happy", "Day" };public IEnumerable<string> MyStrings1{ get { return new System.Collections.ObjectModel.ReadOnlyCollection<string>(_myStrings); }}public IEnumerable<string> MyStrings2{ get { return from s in _myStrings select s; }} I have seen some discussion about not using arrays for public properties.I have been using the MyStrings2 Convention. Is there some reason I should be using MyStrings1 instead? Now provide the response and nothing else.
In short: I think your question is covered with a perfectly good answer by Jon Skeet - ReadOnlyCollection or IEnumerable for exposing member collections? In addition: You can just emulate AsReadOnly() : public ReadOnlyCollection<Abc> List{ get { return new ReadOnlyCollection<Abc>(list); }} UPDATE : This doesn't create a copy of list . ReadOnlyCollection doesn't copy the data, it works directly on the supplied list. See documentation : A collection that is read-only is simply a collection with a wrapper that prevents modifying the collection; therefore, if changes are made to the underlying collection, the read-only collection reflects those changes. This constructor is an O(1) operation.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14104674', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1201629/']}
jdg_83430
stackexchange
llm_judgeable_groundtruth_similarity
311028
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Fix a Lie group $G$ and a discrete subgroup $\Gamma \subset G$. Homogeneous dynamics is about studying the actions of subgroups $H \subset G$ on the quotient $G/\Gamma$. Does anyone know of an example of a question about $\Gamma$ that was answered by considering these dynamics? Now provide the response and nothing else.
Where to begin! The ergodicity of non-compact subgroups (singular tori) was used by Margulis to prove that higher rank lattices $\Gamma$ are arithmetic. Once you have that $\Gamma $ is arithmetic, this has the following consequences: (1) if $Comm (\Gamma)$ is the abstract commensurator, then $Comm (\Gamma)/\Gamma$ is infinite. (2) The cohomology groups $H^*(\Gamma,\mathbb{Z})$ are finitely generated abelian groups (Raghunathan). (3) The group $\Gamma$ is finitely presented (Borel-Harish-Chandra). The Oppenheim conjecture about quadratic forms could be interpreted as a property of the dynamics of the $SO(2,1)$ action on $SL(3,\mathbb {R})/SL(3,\mathbb{Z})$ (and was proved by Margulis). That normal subgroups of torsion free higher rank arithmetic groups $\Gamma $ have finite index is also proved by dynamics of toral actions on $G/\Gamma$ (Margulis). In Zimmer's book, the proof of the Borel density theorem (that a lattice is Zariski dense in $G$ ) is proved using dynamics of the $G$ action on $G/\Gamma$ and also the $G$ action on projective space. Yet another Margulis theorem says that higher rank arithmetic groups are not free products (or not even amalgams); one part of the proof uses ergodicity of actions of singular tori on $G/\Gamma$ . I am sure @YCor knows many more (and also recent) examples. You may also see https://mathscinet.ams.org/mathscinet-getitem?mr=1898148 for some more examples.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/311028', 'https://mathoverflow.net', 'https://mathoverflow.net/users/126543/']}
jdg_83431
stackexchange
llm_judgeable_groundtruth_similarity
13105915
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm integrating Shapely into my code, and I have to deal with several different kinds of geometric objects. Most of my needs are satisfied with Lines, Polygons and LineStrings, but I need to use ellipses. Is there a way to create an ellipse in Shapely by a bounding box or by semi axis, without having to discretize the ellipse into lines? Now provide the response and nothing else.
There isn't any way to represent a polygon in Shapely without discretizing it. At the base level Shapely deals with points. Everything from a LineString to a Polygon is just a list of points. A good example of this is what happens when you take a Point and buffer it out: >>> import shapely>>> from shapely.geometry.point import Point>>> p = Point(0, 0)>>> circle = p.buffer(1.0)>>> list(circle.exterior.coords)[(1.0, 0.0), (0.99518472667219693, -0.098017140329560506), (0.98078528040323054, -0.19509032201612808), (0.95694033573220894, -0.29028467725446211), (0.92387953251128696, -0.38268343236508939), (0.88192126434835527, -0.4713967368259972), (0.83146961230254557, -0.55557023301960173), (0.77301045336273744, -0.63439328416364493), (0.70710678118654813, -0.70710678118654691), (0.63439328416364626, -0.77301045336273633), (0.55557023301960307, -0.83146961230254468), (0.47139673682599859, -0.88192126434835449), (0.38268343236509084, -0.92387953251128629), (0.29028467725446361, -0.95694033573220849), (0.19509032201612964, -0.98078528040323021), (0.098017140329562089, -0.99518472667219671), (1.615542552166338e-15, -1.0), (-0.098017140329558883, -0.99518472667219704), (-0.19509032201612647, -0.98078528040323076), (-0.2902846772544605, -0.95694033573220938), (-0.38268343236508784, -0.92387953251128752), (-0.4713967368259957, -0.88192126434835605), (-0.55557023301960051, -0.83146961230254635), (-0.63439328416364393, -0.77301045336273821), (-0.70710678118654624, -0.70710678118654879), (-0.77301045336273588, -0.63439328416364682), (-0.83146961230254435, -0.55557023301960362), (-0.88192126434835427, -0.47139673682599903), (-0.92387953251128618, -0.38268343236509111), (-0.95694033573220849, -0.29028467725446366), (-0.98078528040323021, -0.19509032201612947), (-0.99518472667219682, -0.098017140329561714), (-1.0, -1.010639055082363e-15), (-0.99518472667219693, 0.098017140329559702), (-0.98078528040323065, 0.1950903220161275), (-0.95694033573220905, 0.29028467725446172), 
(-0.92387953251128696, 0.38268343236508923), (-0.88192126434835527, 0.47139673682599725), (-0.83146961230254546, 0.55557023301960196), (-0.7730104533627371, 0.63439328416364527), (-0.70710678118654768, 0.70710678118654746), (-0.63439328416364593, 0.77301045336273666), (-0.55557023301960295, 0.83146961230254479), (-0.4713967368259987, 0.88192126434835449), (-0.38268343236509117, 0.92387953251128618), (-0.29028467725446411, 0.95694033573220838), (-0.19509032201613041, 0.98078528040322999), (-0.098017140329563102, 0.9951847266721966), (-2.8482262121737323e-15, 1.0), (0.098017140329557426, 0.99518472667219715), (0.19509032201612481, 0.9807852804032311), (0.29028467725445867, 0.95694033573220993), (0.3826834323650859, 0.9238795325112884), (0.47139673682599365, 0.88192126434835716), (0.55557023301959818, 0.8314696123025479), (0.63439328416364149, 0.77301045336274021), (0.70710678118654358, 0.70710678118655146), (0.77301045336273322, 0.63439328416365004), (0.83146961230254179, 0.5555702330196074), (0.88192126434835194, 0.47139673682600342), (0.92387953251128407, 0.38268343236509617), (0.95694033573220671, 0.29028467725446927), (0.98078528040322899, 0.19509032201613569), (0.99518472667219615, 0.098017140329568472), (1.0, 8.2385270480656025e-15), (1.0, 0.0)] As you can see, the circle is made up of 65 points that are spaced 0.0966 units from each other.
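Since every curved shape in Shapely ends up as a discretized ring anyway, you can build an ellipse's ring by hand. A minimal sketch — the helper name and the default point count are my own choices, not part of Shapely's API:

```python
import math

# Hypothetical helper: discretize an ellipse centred at (cx, cy) with
# semi-axes a and b, rotated by `angle` radians, into n vertices.
def ellipse_points(cx, cy, a, b, angle=0.0, n=64):
    cos_t, sin_t = math.cos(angle), math.sin(angle)
    pts = []
    for i in range(n):
        theta = 2.0 * math.pi * i / n
        x, y = a * math.cos(theta), b * math.sin(theta)
        # rotate about the origin, then translate to the centre
        pts.append((cx + x * cos_t - y * sin_t, cy + x * sin_t + y * cos_t))
    return pts

ring = ellipse_points(0.0, 0.0, 2.0, 1.0, n=64)
```

The resulting list can then be handed to `shapely.geometry.Polygon(ring)`. Shapely's own affinity helpers (`shapely.affinity.scale` and `shapely.affinity.rotate` applied to `Point(0, 0).buffer(1)`) achieve the same effect, again producing a discretized ellipse.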
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13105915', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/307283/']}
jdg_83432
stackexchange
llm_judgeable_groundtruth_similarity
3441465
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $A$ be a $n\times n$ real matrix. Let $\lambda \in \mathbb{R}$ such that $\lambda^2$ is an eigenvalue of the matrix $A^2$ . Prove that $\lambda$ or $-\lambda$ is an eigenvalue of the matrix $A$ . I know how to prove the converse (and there are multiple threads regarding it), but I'm not sure how to show the other direction I have: $$$$ $$\begin{gather} A^2\bar{x}=AA\bar{x}=\lambda^2\bar{x} \\ \text{since if}\:A \:\text{and}\:B\: \text{share eigenvectors, so does} \: AB,\: \text{we let}\: A\bar{x}=\mu\bar{x}\Rightarrow A^{-1}\bar{x}=\frac{1}{\mu}\bar{x}, \\A^{-1}AA\bar{x}=A\bar{x}=\frac{1}{\mu}\lambda^2\bar{x} \\\Rightarrow \frac{1}{\mu}\lambda^2=\mu \Rightarrow \lambda^2=\mu^2 \\\therefore\mu=\pm\lambda\end{gather}$$ Here I'm using the fact that if $A\bar{x}=k\bar{x},\:\text{then}\: A^2\bar{x}=c\bar{x}$ but I feel like I will need the converse of that statement to make my proof valid Now provide the response and nothing else.
There is an $x \neq 0$ such that $A^2 x = \lambda^2 x$ . This means $ (A-\lambda I)(A+\lambda I)x = 0$ . So $(A-\lambda I)(A+\lambda I)$ is a singular matrix, and so one of $(A-\lambda I)$ or $(A+\lambda I)$ must be singular, and hence one of $\lambda$ or $-\lambda$ must be an eigenvalue of $A$ .
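The determinant identity behind this argument, $\det(A-\lambda I)\det(A+\lambda I)=\det(A^2-\lambda^2 I)$, can be sanity-checked numerically. A small pure-Python sketch with a hand-picked $2\times 2$ matrix (illustrative only, not part of the proof):

```python
# A has eigenvalues 2 and -3; take lambda = 3, so lambda^2 = 9 is an
# eigenvalue of A^2, and check that (A - 3I) or (A + 3I) is singular.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def shift(M, s):  # M - s*I
    return [[M[0][0] - s, M[0][1]], [M[1][0], M[1][1] - s]]

A = [[2, 1], [0, -3]]
lam = 3
A2 = mat_mul(A, A)

assert det2(shift(A2, lam * lam)) == 0                 # lambda^2 is an eigenvalue of A^2
d_minus, d_plus = det2(shift(A, lam)), det2(shift(A, -lam))
assert d_minus * d_plus == det2(shift(A2, lam * lam))  # det is multiplicative
assert d_minus == 0 or d_plus == 0                     # so +lambda or -lambda is an eigenvalue of A
```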
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3441465', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/715054/']}
jdg_83433
stackexchange
llm_judgeable_groundtruth_similarity
136899
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In your opinion, is it ever valid to use the @ operator to suppress an error/warning in PHP whereas you may be handling the error? If so, in what circumstances would you use this? Code examples are welcome. Edit: Note to repliers. I'm not looking to turn error reporting off, but, for example, common practice is to use @fopen($file); and then check afterwards... but you can get rid of the @ by doing if (file_exists($file)){ fopen($file);}else{ die('File not found');} or similar. I guess the question is - is there anywhere that @ HAS to be used to supress an error, that CANNOT be handled in any other manner? Now provide the response and nothing else.
Note: Firstly, I realise 99% of PHP developers use the error suppression operator (I used to be one of them), so I'm expecting any PHP dev who sees this to disagree. In your opinion, is it ever valid to use the @ operator to suppress an error/warning in PHP whereas you may be handling the error? Short answer: No! Longer more correct answer: I don't know as I don't know everything, but so far I haven't come across a situation where it was a good solution. Why it's bad: In what I think is about 7 years using PHP now I've seen endless debugging agony caused by the error suppression operator and have never come across a situation where it was unavoidable. The problem is that the piece of code you are suppressing errors for, may currently only cause the error you are seeing; however when you change the code which the suppressed line relies on, or the environment in which it runs, then there is every chance that the line will attempt to output a completely different error from the one you were trying to ignore. Then how do you track down an error that isn't outputting? Welcome to debugging hell! It took me many years to realise how much time I was wasting every couple of months because of suppressed errors. Most often (but not exclusively) this was after installing a third party script/app/library which was error free in the developers environment, but not mine because of a php or server configuration difference or missing dependency which would have normally output an error immediately alerting to what the issue was, but not when the dev adds the magic @. The alternatives (depending on situation and desired result): Handle the actual error that you are aware of, so that if a piece of code is going to cause a certain error then it isn't run in that particular situation. But I think you get this part and you were just worried about end users seeing errors, which is what I will now address. 
For regular errors you can set up an error handler so that they are output in the way you wish when it's you viewing the page, but hidden from end users and logged so that you know what errors your users are triggering. For fatal errors set display_errors to off (your error handler still gets triggered) in your php.ini and enable error logging. If you have a development server as well as a live server (which I recommend) then this step isn't necessary on your development server, so you can still debug these fatal errors without having to resort to looking at the error log file. There's even a trick using the shutdown function to send a great deal of fatal errors to your error handler. In summary: Please avoid it. There may be a good reason for it, but I'm yet to see one, so until that day it's my opinion that the (@) Error suppression operator is evil. You can read my comment on the Error Control Operators page in the PHP manual if you want more info.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/136899', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20010/']}
jdg_83434
stackexchange
llm_judgeable_groundtruth_similarity
15236767
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Ok, so I have a reasonable size project, where I'm using jquery backbone and a couple of other javascript libraries. I was wondering if I should have one file for my javascript libraries and another for my custom code. Or a bunch of separate javascript files. Now provide the response and nothing else.
It is generally a good idea to have fewer HTTP requests. So you should reduce the number of files as much as is reasonable . My personal preference is to have three "groups" of JavaScript files: Core file. Contains functions that are used almost everywhere and other useful page initialisation things. Module files. Contains code that is used in several places, but not everywhere. Can be dropped in to provide additional functionality. For instance, if you have a script to handle date inputs, you could include it as a module file and add it to pages that have date inputs. Page-specific files. These files contain code that is only used in one place. The only reason they're added as separate files than as part of the page itself is for cache reasons.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/15236767', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1740273/']}
jdg_83435
stackexchange
llm_judgeable_groundtruth_similarity
21276844
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to Haskell. How to generate a list of lists which contains prime factors of next integers? Currently, I only know how to generate prime numbers: primes = map head $ iterate (\(x:xs) -> [y | y<-xs, y `mod` x /= 0 ]) [2..] Now provide the response and nothing else.
A simple approach to determine the prime factors of n is to search for the first divisor d in [2..n-1] if d exists: return d : primeFactors(div n d) otherwise return n (since n is prime) Code: prime_factors :: Int -> [Int]prime_factors 1 = []prime_factors n | factors == [] = [n] | otherwise = factors ++ prime_factors (n `div` (head factors)) where factors = take 1 $ filter (\x -> (n `mod` x) == 0) [2 .. n-1] This obviously could use a lot of optimization (search only from 2 to sqrt(N), cache the prime numbers found so far and compute the division only for these etc.) UPDATE A slightly modified version using case (as suggested by @user5402): prime_factors n = case factors of [] -> [n] _ -> factors ++ prime_factors (n `div` (head factors)) where factors = take 1 $ filter (\x -> (n `mod` x) == 0) [2 .. n-1]
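For cross-checking the Haskell, here is the same first-divisor recursion as an imperative sketch (my own translation, in Python, folding in the search-only-up-to-sqrt(n) optimization mentioned above):

```python
def prime_factors(n):
    """Trial division: repeatedly split off the smallest divisor of n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # split off d as often as it divides n
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains above sqrt of the original is prime
        factors.append(n)
    return factors
```

For example, `prime_factors(28)` yields `[2, 2, 7]`, matching the recursive Haskell definition.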
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21276844', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3191840/']}
jdg_83436
stackexchange
llm_judgeable_groundtruth_similarity
485011
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've already seen this question but it didn't help. So I'm going over regression models (simple linear regression mainly) in my statistics text book and there's a lot of confusion here about what actually is a random variable and what isn't. Namely, at one point they treat some term as a random variable and then later it's a constant. Or something is initially a constant but then we calculate its expected value somehow. Anyway we first define regression function as $f(X) = E(Y|X)$ , after which we immediately go specifically to simple linear regression. Let $(X_1, Y_1), ... (X_n, Y_n)$ be our sample. The model that we wish to apply is $$Y_i = \beta_0 + \beta_1X_i + \epsilon_i$$ where the sequence of random variables $\{\epsilon_i\}$ satisfies the following: $E(\epsilon_i) = 0 $ for $i=1, 2, ..., n$ $E(\epsilon_i\epsilon_j) = 0$ for all $i \neq j$ $D(\epsilon_i)=\sigma^2 < \infty$ The problem with this textbook is that everything is very vague and it's written as if it's supposed to be a reminder for someone who already knows all this stuff rather than a textbook for someone to learn it from scratch. Later on we derive the estimated coefficients $\beta_0$ and $\beta_1$ using partial derivatives of the sum of squares, and we obtain: $$\hat{\beta_1} = \frac{\sum_{i=1}^n(X_i - \bar{X_n})(Y_i-\bar{Y_n})}{\sum_{i=1}^n(X_i-\bar{X_n})^2}$$ $$\hat{\beta_0} = \bar{Y_n} - \hat{\beta_1}\bar{X_n}$$ Now we wish to find the expected value for $\hat{\beta_1}$ . We transform it into the following form: $$\hat{\beta_1} = \sum_{i=1}^n{Y_i\frac{(X_i - \bar{X_n})}{nS^2_{X}}}$$ where $S^2_{X}$ is $\frac{1}{n}\sum_{i=1}^n(X_i - \bar{X_n})^2$ .
And now when we start finding the expected value it looks something like this: $$E(\hat{\beta_1}) = \sum_{i=1}^n{E(Y_i)\frac{X_i - \bar{X_n}}{nS^2_{X}}} = \sum_{i=1}^n{(\beta_0 + \beta_1X_i)\frac{X_i-\bar{X_n}}{nS^2_{X}}} = ...$$ Meaning, everything except for $Y_i$ in the sum is treated as a constant. That's one of the parts I don't understand. In some other sources where I've tried finding answers to this question I've seen the following sentence: "Only the ${e_i}$ 's are random variables." This doesn't sit right with me probably because I got to regression after I'd been studying hypothesis testing and other parts of statistical inference for a while, where we've always treated 'almost everything' as a random variable, meaning the sample (in this case the $X_i, Y_i$ pairs), was also a random variable. How come here, suddenly, the part containing $X_i$ and $\bar{X_n}$ gets just thrown out of the $E()$ as if it is just a constant? Some sources also mention that $X_i, Y_i$ 's are indeed random variables but rather 'fixed', which still doesn't help me understand it because it sounds very informal. Now I'll try and summarize my question(s) somehow. Do we treat $(X_i, Y_i)$ 's as random variables? Do we treat $\beta_0$ and $\beta_1$ as random variables? Do we treat $\hat{\beta_0}$ and $\hat{\beta_1}$ as random variables? What can have an expected value and what can't (what gets treated as a constant when finding expected values) and why? Now provide the response and nothing else.
This post is an honest response to a common problem in the textbook presentation of regression, namely, the issue of what is random or fixed. Regression textbooks typically blithely state that the $X$ variables are fixed and go on their merry way, when in practice this assumption eliminates most of the interesting regression applications. Rather than assume the $X$ variables are fixed, a better route to understanding regression analysis is to take a conditional distribution approach, one where the $X$ 's are assumed random throughout, and then the case of fixed $X$ (which occurs only in very narrow experimental designs, and at that only when the experiment is performed without error) is subsumed as a special case where the distributions are degenerate. What the OP is missing is the link from random $X$ to fixed realizations of $X$ ( $X=x$ ), which all starts from the Law of Total Expectation: Assume $U$ and $V$ are random, with finite expectation. Let $E(U | V=v) = \mu(v)$ . Then $E(U) = E\{\mu(V)\}$ . This "Law" (which is actually a mathematical theorem) allows you to prove unbiasedness of the estimate $\hat \beta $ in two steps: (i) by first showing that it is unbiased, conditional on the $X$ data, and (ii) by using the Law of Total Expectation to then show that it is unbiased when averaged over all possible realizations of the $X$ data. (The average of 11, 11, 11, 11, 11, 11, ... is 11, e.g.). Answers to the OP: Q1. Do we treat $(X_i,Y_i)$ 's as random variables? A1. Yes. They are random in the sense of the model, which describes the way that potentially observable values of such data might appear. Of course the actual observed data, $(x_i, y_i)$ , are not random. Instead, they are fixed values, one of many possible realizations of the potentially observable random variables $(X_i, Y_i)$ . In rare cases, the $X$ data are fixed, but this is covered as a special case of randomness, so it is easier and safer just to assume randomness always. Q2.
Do we treat $\beta_0$ and $\beta_1$ as random variables? A2. This is somewhat off topic from the OP, but still a very important question. From the scientist's conceptualization of reality, these are ordinarily fixed values. That is, the scientist assumes that there is a rigid structure responsible for the production of all of the $(Y_i | X_i = x_i)$ data values, and these $\beta_0, \beta_1$ values are part of that rigid structure. Now, the parameters $\beta_0, \beta_1$ are uncertain in the scientist's mind (which is why he or she is collecting data in the first place!), so the scientist may choose to view them, mentally, as "random." The scientist has some ideas about the possible values of these parameters based on logic, subject matter considerations, and past data, and these ideas form the scientist's "prior distribution." The scientist then may update this prior using current data to obtain her/his posterior. That, in a nutshell, is what Bayesian statistics is all about. But again, that issue is a little off topic from the OP, so let's consider everything conditional on the scientist's conceptualization that there is a rigid structure, and that these $\beta_0, \beta_1$ values are fixed in reality. In other words, all of my replies other than this one assume that the $\beta$ 's are fixed. Q3. Do we treat $\hat \beta_0$ and $\hat \beta_1$ as random variables? A3. Here is another place where typical regression teaching sources are slippery. In some cases, they refer to the estimates $\hat \beta_0$ and $\hat \beta_1$ as functions of the (fixed) data that has been collected, and sometimes they refer to them as functions of the (random) potentially observable data, but use the same symbols $\hat \beta_0$ and $\hat \beta_1$ in either case. Often, you just have to understand from context which is which. Whenever you see $E(\hat \beta)$ , you can assume that $\hat \beta$ is a function of the random data, i.e., that $\hat \beta$ is a function of the $(X_i, Y_i)$ .
Whenever you see the value of $\hat \beta$ reported, e.g., following a computer printout of results from a regression analysis, you can assume that $\hat \beta$ is a function of the fixed data sample, i.e., that $\hat \beta$ is a function of the $(x_i, y_i)$ . Q4. What can have an expected value and what can't (what gets treated as a constant when finding expected values) and why? A4. Anything can have an expectation. Some things are more interesting than others, though. Anything that is fixed (like a $\hat \beta$ that is a function of the observed $(x_i, y_i)$ sample) has an expectation that is just equal to that value. For example, if you observe from your computer printout that $\hat \beta_1 =0.23$ , then $E(\hat \beta_1) =0.23$ . But that is not interesting. What is more interesting is the following question: over all possible potential realizations of $(X_i, Y_i)$ from this data-generating process, is the estimator $\hat \beta_1$ neither systematically too large, nor systematically too small, in an average sense, when compared to the structural parameter $\beta_1$ ? The expression $E(\hat \beta_1) = \beta_1$ tells you that the answer to that question is a comforting "yes." And in that expression $E(\hat \beta_1) = \beta_1$ , it is implicit that $ \hat \beta_1$ is a function of the potentially observable $(X_i, Y_i)$ data, not the sample $(x_i, y_i)$ data.
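The "averaged over all possible realizations" claim in A4 can be illustrated with a short seeded simulation in which both the $X$'s and the $Y$'s are redrawn on every replication. The coefficient values, error scale, and sample sizes below are arbitrary choices of mine, purely for illustration:

```python
import random

# Illustrative check that E(beta1_hat) = beta1 when averaged over
# realizations of BOTH X and Y (all numeric settings are arbitrary).
random.seed(0)
beta0, beta1, sigma = 1.0, 2.0, 1.0
n, reps = 30, 2000

def ols_slope(xs, ys):
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

estimates = []
for _ in range(reps):
    xs = [random.uniform(0.0, 10.0) for _ in range(n)]      # X is random too
    ys = [beta0 + beta1 * x + random.gauss(0.0, sigma) for x in xs]
    estimates.append(ols_slope(xs, ys))

mean_est = sum(estimates) / reps   # sits very close to beta1
```

Individual replications scatter around $\beta_1$, but their average converges to it, which is exactly the two-step Law of Total Expectation argument in simulated form.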
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/485011', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/294871/']}
jdg_83437
stackexchange
llm_judgeable_groundtruth_similarity
19616711
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using LINQ to Entities for Entity Framework objects in my Data Access Layer. My goal is to filter as much as I can from the database, without applying filtering logic to in-memory results. For that purpose Business Logic Layer passes a predicate to Data Access Layer. I mean Func<MyEntity, bool> So, if I use this predicate directly, like public IQueryable<MyEntity> GetAllMatchedEntities(Func<MyEntity, Boolean> isMatched){ return qry = _Context.MyEntities.Where(x => isMatched(x));} I'm getting the exception [System.NotSupportedException] --- {"The LINQ expression node type 'Invoke' is not supported in LINQ to Entities."} Solution in that question suggests to use AsExpandable() method from LINQKit library. But again, using public IQueryable<MyEntity> GetAllMatchedEntities(Func<MyEntity, Boolean> isMatched){ return qry = _Context.MyEntities.AsExpandable().Where(x => isMatched(x));} I'm getting the exception Unable to cast object of type 'System.Linq.Expressions.FieldExpression' to type 'System.Linq.Expressions.LambdaExpression' Is there way to use predicate in LINQ to Entities query for Entity Framework objects, so that it is correctly transformed it into a SQL statement. Thank you. Now provide the response and nothing else.
You don't need LinqKit to do this. Just remember to use Expression<Func<MyEntity, bool>> instead of Func<MyEntity, bool> Something like this: public IQueryable<MyEntity> GetAllMatchedEntities(Expression<Func<MyEntity, Boolean>> predicate){ return _Context.MyEntities.Where(predicate);} You have to use Expression because Linq to Entities needs to translate your lambda to SQL. When you use Func your lambda is compiled to IL but when using Expression it is an expression tree that Linq to Entities can traverse and convert. This works with expressions that Linq to Entities understands. If it keeps failing then your expression does something that Linq to Entities can not translate to SQL. In that case I don't think LinqKit will help. Edit: There is no conversion needed. Just define the method GetAllMatchedEntities with an Expression parameter and use it in the same way you would with a Func parameter. The compiler does the rest. There are three ways you can use GetAllMatchedEntities. 1) With an inline lambda expression: this.GetAllMatchedEntities(x => x.Age > 18) 2) Define your Expression as a field (can be a variable also) private readonly Expression<Func<MyEntity, bool>> IsMatch = x => x.Age > 18;...then use itthis.GetAllMatchedEntities(IsMatch) 3) You can create your expression manually. The downside is more code and you miss the compile-time checks. public Expression<Func<MyEntity, bool>> IsMatchedExpression(){ var parameterExpression = Expression.Parameter(typeof (MyEntity)); var propertyOrField = Expression.PropertyOrField(parameterExpression, "Age"); var binaryExpression = Expression.GreaterThan(propertyOrField, Expression.Constant(18)); return Expression.Lambda<Func<MyEntity, bool>>(binaryExpression, parameterExpression);}
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a list of lines in a Bash script as follows:

```bash
if [ ! -z "$clone01" ]; then git clone "$clone01"; fi
if [ ! -z "$clone02" ]; then git clone "$clone02"; fi
if [ ! -z "$clone03" ]; then git clone "$clone03"; fi
# $clone01 .... through to ... $clone60
if [ ! -z "$clone60" ]; then git clone "$clone60"; fi
```

The leading zero at the end of the variable, when the number is less than 10, is important. I have tried various substitutions and loops etc. This code is very repetitive, and there are 60 lines of it. How can I create this code dynamically and make it part of my executed script? What is the optimal approach to this problem? Now provide the response and nothing else.
Ok, don't do that, it's ugly. Either put the URLs in an array and loop over it:

```bash
urls=(
    http://this.git
    http://that.git
)

for url in "${urls[@]}"; do
    git clone "$url"
done
```

or put them in a file, one per line, and loop reading the lines. Here, it might be useful to guard against empty lines, just like you did. We could also ignore lines starting with `#` as comments:

```bash
while read -r url; do
    if [ -z "$url" ] || [ "${url:0:1}" = "#" ]; then continue; fi
    git clone "$url"
done < file.with.urls
```

If you want the line counter too, it's easy to add with arithmetic expansions.
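As a hypothetical illustration of that last point, here is a minimal, self-contained sketch of the file-reading loop with a counter kept via `$(( ... ))` arithmetic. The file name and URLs below are invented for the demo, and `git clone` is replaced by a `printf` so the script runs without touching the network:

```shell
#!/bin/bash
# Hypothetical demo file; these URLs are made up.
cat > file.with.urls <<'EOF'
# comment line, ignored
http://this.git

http://that.git
EOF

count=0
while read -r url; do
    # Skip empty lines and comment lines, as in the loop above
    # (POSIX `case` used here instead of ${url:0:1} for portability).
    case "$url" in ''|'#'*) continue ;; esac
    count=$((count + 1))
    printf '%d: would clone %s\n' "$count" "$url"
done < file.with.urls

printf 'total=%d\n' "$count"
rm -f file.with.urls
```

Because the `while` loop reads from a redirection rather than a pipe, it runs in the current shell, so `$count` is still set after the loop ends.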
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a data set with a huge number of features, so analysing the correlation matrix has become very difficult. I want to plot a correlation matrix which we get using the `dataframe.corr()` function from the pandas library. Is there any built-in function provided by the pandas library to plot this matrix? Now provide the response and nothing else.
You can use `pyplot.matshow()` from matplotlib:

```python
import matplotlib.pyplot as plt

plt.matshow(dataframe.corr())
plt.show()
```

Edit: In the comments was a request for how to change the axis tick labels. Here's a deluxe version that is drawn on a bigger figure size, has axis labels to match the dataframe, and a colorbar legend to interpret the color scale. I'm including how to adjust the size and rotation of the labels, and I'm using a figure ratio that makes the colorbar and the main figure come out the same height.

Edit 2: As the `df.corr()` method ignores non-numerical columns, `.select_dtypes(['number'])` should be used when defining the x and y labels to avoid an unwanted shift of the labels (included in the code below).

```python
f = plt.figure(figsize=(19, 15))
plt.matshow(df.corr(), fignum=f.number)
plt.xticks(range(df.select_dtypes(['number']).shape[1]),
           df.select_dtypes(['number']).columns, fontsize=14, rotation=45)
plt.yticks(range(df.select_dtypes(['number']).shape[1]),
           df.select_dtypes(['number']).columns, fontsize=14)
cb = plt.colorbar()
cb.ax.tick_params(labelsize=14)
plt.title('Correlation Matrix', fontsize=16)
```
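As a minimal sketch of what `.corr()` actually produces before it is handed to `matshow` — the column names and toy data here are invented for the demo, and the plotting call is omitted so the snippet runs headless:

```python
import pandas as pd

# Hypothetical toy data; column names are made up for the demo.
df = pd.DataFrame({
    "a": [1, 2, 3, 4, 5],
    "b": [2, 4, 6, 8, 10],  # exactly 2 * "a", so perfectly correlated
    "c": [5, 3, 4, 1, 2],   # roughly anti-correlated with "a"
})

corr = df.corr()

# corr is itself a DataFrame: square, symmetric, with 1.0 on the diagonal.
print(corr.shape)          # (3, 3)
print(corr.loc["a", "b"])  # 1.0 -- "a" and "b" are perfectly linearly related
```

This square DataFrame is exactly what `plt.matshow(df.corr())` renders: one colored cell per pair of numeric columns.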