source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10) |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 22376034 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm a relatively new to development but do know a few stuff. I was doing a tutorial on PHP Beyond the basics from Lynda and was trying to make the database class. after everything was written in the class, i tried to see if the connection is made. require_once("../includes/database.php");if(isset($database)) { echo "true<br />";}else { echo "false<br />";} the above code was my test to see if it was allright the problem was when i tried to use "require_once("config.php");" in the database class. require_once("config.php");//line 13class MySQLDatabase{ //code that is not important for this issue} this gave me the following error(s) on the page: Notice: Use of undefined constant DB_SERVER - assumed 'DB_SERVER' in E:\ProgramFiles\xampp\htdocs\photo_gallery\includes\database.php on line 13 Notice: Use of undefined constant DB_USER - assumed 'DB_USER' in E:\ProgramFiles\xampp\htdocs\photo_gallery\includes\database.php on line 13 Notice: Use of undefined constant DB_PASS - assumed 'DB_PASS' in E:\ProgramFiles\xampp\htdocs\photo_gallery\includes\database.php on line 13 Deprecated: mysql_connect(): The mysql extension is deprecated and will be removed in the future: use mysqli or PDO instead in E:\ProgramFiles\xampp\htdocs\photo_gallery\includes\database.php on line 13 Warning: mysql_connect(): php_network_getaddresses: getaddrinfo failed: No such host is known. in E:\ProgramFiles\xampp\htdocs\photo_gallery\includes\database.php on line 13 Warning: mysql_connect(): php_network_getaddresses: getaddrinfo failed: No such host is known. in E:\ProgramFiles\xampp\htdocs\photo_gallery\includes\database.php on line 13 Database connection failed: php_network_getaddresses: getaddrinfo failed: No such host is known If i used (code below) instead of the require/require_once/include/include_once part it worked! define("DB_SERVER", "localhost");define("DB_USER", "gallery");define("DB_PASS", "123465");define("DB_NAME", "photo_gallery"); It's not the first time i use require/include...but i'm really baffled why this doesen't work with this method :-/(I'll change everything to mysqli once i finish with the DB class as it is the tutorial. And no, I don't have access to the exercise files) (sorry for the long post)
Now provide the response and nothing else.
| If I understand correctly you want to save the attachement somewhere? EWS provides you a byte array for every file stored within Content property of FileAttachment object, and from there its extremely easy to do so: foreach (var a in mail.Attachments){ FileAttachment fa = a as FileAttachment; if(fa != null) { try { //if you don't call this the Content property may be null, //depending on your property loading policy with EWS fa.Load(); } catch { continue; } using(FileStream fs = System.IO.File.OpenWrite("path_to_file")) { fs.Write(fa.Content, 0, fa.Content.Length); } }} If you just want a Stream object to do something else with it just create a MemoryStream: MemoryStream ms = new MemoryStream(fa.Content); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22376034', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3414830/']} | jdg_83441 |
stackexchange | llm_judgeable_groundtruth_similarity | 18890 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The New York Times published an editorial on December 29, 2013 about Pope Francis titled "Radical Pope, Traditional Values" which includes the following statement: As a result of its work in basic health and education — and despite its obtuse views on birth control — in the last 50 years the church has probably lifted more people out of poverty than any other civic institution in history. Is this true? Has the Catholic Church lifted more people out of poverty than any other civic institution?
Now provide the response and nothing else.
| It depends how you define "civic institution" but if political parties count, then China's Communist party wins the prize Defining "civic institution" is a little tricky especially if an organisation has multiple roles. I'd argue that political parties are civic institutions even if they also have a dual role when they are in government (this is consistent with the definition given in wikianswers thanks to @ChrisW for the link). The benefit of including political parties is that it allows us to include some of the biggest recent contributions to poverty reduction in the world. The other definitional issue is the definition of poverty. Here, however, there are widely agreed definitions such as this one from the Economist summarising a widely accepted World Bank definition: The threshold for dire poverty in developing countries is set much lower, at $1.25 a day of consumption (rather than income). This figure is arrived at by averaging the poverty lines in the 15 poorest countries, not because $1.26 spells comfort. This is the yardstick by which poverty reduction in poor countries is measured. On this yardstick, the world has made amazing progress on reducing poverty in the last few decades. The chart, from another Economist article , summarises recent progress and projections: The article also summarises just how much progress has been made (I've highlighted the parts relevant to this question): The country that cut poverty the most was China, which in 1980 had the largest number of poor people anywhere. China saw a huge increase in income inequality—but even more growth. Between 1981 and 2010 it lifted a stunning 680m people out poverty—more than the entire current population of Latin America. This cut its poverty rate from 84% in 1980 to about 10% now. China alone accounts for around three quarters of the world’s total decline in extreme poverty over the past 30 years. What is less often realised is that the recent story of poverty reduction has not been all about China. Between 1980 and 2000 growth in developing countries outside the Middle Kingdom was 0.6% a year. From 2000 to 2010 the rate rose to 3.8%—similar to the pattern if you include China. Mr Ravallion calculates that the acceleration in growth outside China since 2000 has cut the number of people in extreme poverty by 280m. By any reasonable statistical standards the world has made astonishing progress in the last 40 years. Why is this relevant here? Even without doing a detailed analysis of the contributions of the Catholic Church to education and health, it is a mathematical impossibility that they could have lifted a larger number of people out of poverty. We could argue that these numbers are the result of government action, not civil institutions. And they are, but the government actions in China were the result in a change in the ideology of the Chinese Communist party, so arguably the consequence of a change in a civil institution first (though the party-government distinction is blurred in China). But there are other stories of poverty reduction where the church might have a problem keeping up. Another Economist article reminds us that Bangladesh used to be a basket case: Its people are crammed onto a flood plain swept by cyclones and without big mineral and other natural resources. It suffered famines in 1943 and 1974 and military coups in 1975, 1982 and 2007. When it split from Pakistan in 1971 many observers doubted that it could survive as an independent state. 
But: Yet over the past 20 years, Bangladesh has made some of the biggest gains in the basic condition of people’s lives ever seen anywhere. Between 1990 and 2010 life expectancy rose by 10 years, from 59 to 69. Bangladeshis now have a life expectancy four years longer than Indians, despite the Indians being, on average, twice as rich. Even more remarkably, the improvement in life expectancy has been as great among the poor as the rich. Further it is worth noting some of the key factors behind this progress is built on: Family planning was made free and widely accessible Primary education was made nearly universal (and radically improved for women) There was a dramatic improvement in agricultural productivity Microfinance gave access to loans to the poor The role of NGOs, especially BRAC (originally the Bangladesh Rehabilitation Assistance Committee) became significant The focus on Women's education and freely available contraception led to one of the most profound demographic shifts in world history (with a total fertility rate--that is average number of children per woman--moving from 6.3 in 1975 to 2.3 in 2010, just above the replacement rate). The benefit of this shift in reducing poverty and mortality is not expected to be replicated in church-run programmes. But the role of BRAC, a civic institution on most definitions, is worth noting: BRAC began life distributing emergency aid in a corner of eastern Bangladesh after the war of independence. It is now the largest NGO in the world by the number of employees and the number of people it has helped (three-quarters of all Bangladeshis have benefited in one way or another). Unlike Grameen, which is mainly a microfinance and savings operation, BRAC does practically everything. In the 1980s it sent out volunteers to every household in the country showing mothers how to mix salt, sugar and water in the right proportions to rehydrate a child suffering from diarrhoea. This probably did more to lower child mortality in the country than anything else. BRAC and the government jointly ran a huge programme to inoculate every Bangladeshi against tuberculosis. BRAC’s primary schools are a safety net for children who drop out of state schools. BRAC even has the world’s largest legal-aid programme: there are more BRAC legal centres than police stations in Bangladesh. In summary The world has seen enormous numbers of people taken out of poverty in the last few decades. But the biggest numbers are attributable to Chinese economic growth arguably the result of the changed ideology of a political party. Other third world countries have also seen major improvements. But some of the key causes of those improvements are not the sort of things any catholic program could endorse (e.g. the case of contraception in Bangladesh). Other NGOs (Bangladesh's BRAC, for example) have lifted very large numbers from poverty. Compared to these numbers it is a major stretch to argue that the Catholic Church could be anywhere near the top of the league table however much good it has actually done. | {} | {'log_upvote_score': 5, 'links': ['https://skeptics.stackexchange.com/questions/18890', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/8436/']} | jdg_83442 |
stackexchange | llm_judgeable_groundtruth_similarity | 6288571 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a function that adds an additional text field when the user clicks a button. The way it works is that there are actually four text fields and three buttons. Three of the four text fields are hidden using "display:none" and two of the three buttons are hidden.When you click button 1, text field 2 and button 2 shows and when you click button 2, text field 3 and button 3 shows and so on. This is manageable by putting in the code manually but becomes a burden when many text fields must be created. So far, I've used this code: <html><head><style type="text/css">.hide {display:none;}</style><script type="text/javascript" src="jquery.js"></script><script type="text/javascript">$(document).ready(function(){$("#add"+2 ).click(function(){$("#add"+2).hide();$("#text"+2).show();$("#add"+3).show(); });$("#add"+3 ).click(function(){$("#add"+3).hide();$("#text"+3).show();$("#add"+4).show(); });$("#add"+4 ).click(function(){$("#add"+4).hide();$("#text"+4).show(); });});</script></head><body><div id="border"><form action="" method="post"><table><tr><td><input type="text" id="text1" name="text1" /></td><td><input type="button" id="add2" name="add" value="add another field" /><input type="button" id="add3" class="hide" name="add" value="add another field" /><input type="button" id="add4" class="hide" name="add" value="add another field" /></td></tr><tr><td><input type="text" id="text2" class="hide" name="text2" /><br><input type="text" id="text3" class="hide" name="text3" /><br><input type="text" id="text4" class="hide" name="text4" /><td></tr></table></form></div></body></html> I then replaced $("#add"+2 ).click(function(){ $("#add"+2).hide(); $("#text"+2).show(); $("#add"+3).show(); }); $("#add"+3 ).click(function(){ $("#add"+3).hide(); $("#text"+3).show(); $("#add"+4).show(); }); with a for loop to try to do the same thing var i = 2;for (i=2; i<=3; i++){ $("#add"+i ).click(function(){ $("#add"+i).hide(); $("#text"+i).show(); $("#add"+(i+1)).show(); });} after replacing with the for loop, only the fourth text field displays after clicking the first button. Is there some logic I'm not understanding here? Thanks in advance.
Now provide the response and nothing else.
| Your inner function has a closure to the outer i , so when it accesses i , it accesses the variable itself, not its value. You can break this with a self executing function and passing the value to a new local variable. var i = 2;for (i = 2; i <= 3; i++) { (function(j) { $("#add" + j).click(function() { $("#add" + j).hide(); $("#text" + j).show(); $("#add" + (j + 1)).show(); }); })(i);} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6288571', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/701510/']} | jdg_83443 |
stackexchange | llm_judgeable_groundtruth_similarity | 418588 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Today my professor in numerical analysis pointed out that in the exam we will probably have to do LU decomposition by hand. I understand how the decomposition works theoretically, but when it comes actually getting my hands dirty, I'm never sure, if I'm writing the row operation at the right place in the L matrix. Do you know a mnemonic, which allows one to efficiently compute the LU decomposition by hand?
Now provide the response and nothing else.
 | Note that I didn't write this answer; taken from: LU Decomposition Steps This is also useful: Upper and Lower Triangular Matrices Let's go step by step. We want an equation of the following form: (An example is given below) $$\begin{pmatrix}1&2&3&4\\5&6&7&8\\1&-1&2&3\\2&1&1&2\end{pmatrix}=\begin{pmatrix}\star&0&0&0\\\star&\star&0&0\\\star&\star&\star&0\\\star&\star&\star&\star\end{pmatrix}\begin{pmatrix}\star&\star&\star&\star\\0&\star&\star&\star\\0&0&\star&\star\\0&0&0&\star\end{pmatrix}$$ From the first column and first row of our known matrix, it's not too hard to see that we can start with this: $$\begin{pmatrix}1&2&3&4\\5&6&7&8\\1&-1&2&3\\2&1&1&2\end{pmatrix}=\begin{pmatrix}1&0&0&0\\5&\star&0&0\\1&\star&\star&0\\2&\star&\star&\star\end{pmatrix}\begin{pmatrix}1&2&3&4\\0&\star&\star&\star\\0&0&\star&\star\\0&0&0&\star\end{pmatrix}$$ Next, we can choose the diagonal elements of our upper triangular matrix to be $1$, and fill in the lower triangular matrix column by column: $$\begin{pmatrix}1&2&3&4\\5&6&7&8\\1&-1&2&3\\2&1&1&2\end{pmatrix}=\begin{pmatrix}1&0&0&0\\5&-4&0&0\\1&-3&\star&0\\2&-3&\star&\star\end{pmatrix}\begin{pmatrix}1&2&3&4\\0&1&\star&\star\\0&0&1&\star\\0&0&0&1\end{pmatrix}$$ $$\begin{pmatrix}1&2&3&4\\5&6&7&8\\1&-1&2&3\\2&1&1&2\end{pmatrix}=\begin{pmatrix}1&0&0&0\\5&-4&0&0\\1&-3&5&0\\2&-3&1&\star\end{pmatrix}\begin{pmatrix}1&2&3&4\\0&1&2&\star\\0&0&1&\star\\0&0&0&1\end{pmatrix}$$ $$\begin{pmatrix}1&2&3&4\\5&6&7&8\\1&-1&2&3\\2&1&1&2\end{pmatrix}=\begin{pmatrix}1&0&0&0\\5&-4&0&0\\1&-3&5&0\\2&-3&1&7/5\end{pmatrix}\begin{pmatrix}1&2&3&4\\0&1&2&3\\0&0&1&8/5\\0&0&0&1\end{pmatrix}$$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/418588', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/50527/']} | jdg_83444 |
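A general recipe behind the steps in the record above (added as a supplement; this is the standard Crout scheme with a unit diagonal on $U$, which is what the example uses): each new entry of $L$ is the matching entry of $A$ minus the dot product of the already-filled part of its row of $L$ with the column of $U$ above it, and entries of $U$ are the same expression divided by the diagonal entry of $L$: $$\ell_{ij}=a_{ij}-\sum_{k=1}^{j-1}\ell_{ik}u_{kj}\quad(i\ge j),\qquad u_{ij}=\frac{1}{\ell_{ii}}\Bigl(a_{ij}-\sum_{k=1}^{i-1}\ell_{ik}u_{kj}\Bigr)\quad(i<j).$$ For example, the last entry above is $\ell_{44}=2-2\cdot 4-(-3)\cdot 3-1\cdot\tfrac{8}{5}=\tfrac{7}{5}$.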
stackexchange | llm_judgeable_groundtruth_similarity | 15543186 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a ColorStateList programatically using this: ColorStateList stateList = new ColorStateList(states, colors); But I am not sure what are the two parameters. As per the documentation: public ColorStateList (int[][] states, int[] colors) Added in API level 1 Creates a ColorStateList that returns the specified mapping from states to colors. Can somebody please explain me how to create this? What is the meaning of two-dimensional array for states?
Now provide the response and nothing else.
| See http://developer.android.com/reference/android/R.attr.html#state_above_anchor for a list of available states. If you want to set colors for disabled, unfocused, unchecked states etc. just negate the states: int[][] states = new int[][] { new int[] { android.R.attr.state_enabled}, // enabled new int[] {-android.R.attr.state_enabled}, // disabled new int[] {-android.R.attr.state_checked}, // unchecked new int[] { android.R.attr.state_pressed} // pressed};int[] colors = new int[] { Color.BLACK, Color.RED, Color.GREEN, Color.BLUE};ColorStateList myList = new ColorStateList(states, colors); | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/15543186', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1997711/']} | jdg_83445 |
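A brief usage sketch for the record above (illustrative only; the TextView, its id and the surrounding Activity are assumptions, not part of the original answer). Once built, the ColorStateList can be handed to any Android API that accepts one, and the framework resolves the right color for the view's current state:

```java
// Inside an Activity/Fragment, after building myList as in the answer above.
TextView label = (TextView) findViewById(R.id.my_label); // hypothetical view id
label.setTextColor(myList); // color now follows enabled/disabled/pressed state
// From API 21 it can also serve as a tint, e.g.:
// ViewCompat.setBackgroundTintList(label, myList);
```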
stackexchange | llm_judgeable_groundtruth_similarity | 32154387 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was under the impression that mutable references (i.e. &mut T ) are always moved. That makes perfect sense, since they allow exclusive mutable access.In the following piece of code I assign a mutable reference to another mutable reference and the original is moved. As a result I cannot use the original any more: let mut value = 900;let r_original = &mut value;let r_new = r_original;*r_original; // error: use of moved value *r_original If I have a function like this: fn make_move(_: &mut i32) {} and modify my original example to look like this: let mut value = 900;let r_original = &mut value;make_move(r_original);*r_original; // no complain I would expect that the mutable reference r_original is moved when I call the function make_move with it. However that does not happen. I am still able to use the reference after the call. If I use a generic function make_move_gen : fn make_move_gen<T>(_: T) {} and call it like this: let mut value = 900;let r_original = &mut value;make_move_gen(r_original);*r_original; // error: use of moved value *r_original The reference is moved again and therefore the program behaves as I would expect.Why is the reference not moved when calling the function make_move ? Code example
Now provide the response and nothing else.
| There might actually be a good reason for this. &mut T isn't actually a type: all borrows are parametrized by some (potentially inexpressible) lifetime. When one writes fn move_try(val: &mut ()) { { let new = val; } *val}fn main() { move_try(&mut ());} the type inference engine infers typeof new == typeof val , so they share the original lifetime. This means the borrow from new does not end until the borrow from val does. This means it's equivalent to fn move_try<'a>(val: &'a mut ()) { { let new: &'a mut _ = val; } *val}fn main() { move_try(&mut ());} However, when you write fn move_try(val: &mut ()) { { let new: &mut _ = val; } *val}fn main() { move_try(&mut ());} a cast happens - the same kind of thing that lets you cast away pointer mutability. This means that the lifetime is some (seemingly unspecifiable) 'b < 'a . This involves a cast, and thus a reborrow, and so the reborrow is able to fall out of scope. An always-reborrow rule would probably be nicer, but explicit declaration isn't too problematic. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32154387', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/929981/']} | jdg_83446 |
stackexchange | llm_judgeable_groundtruth_similarity | 36593446 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Same problem as Failed to start master for spark in windows 10 which is also not solved. My spark is working well by testing pyspark.cmd and spark-shell.cmd After runing .\sbin\start-master.sh I got: ps: unknown option -- oTry 'ps --help' for more information.starting org.apache.spark.deploy.master.Master, logging to C:\spark-1.6.1-bin-hadoop2.6/logs/spark--org.apache.spark.deploy.master.Master-1-%MY_USER_NAME%-PC.outps: unknown option -- oTry 'ps --help' for more information.failed to launch org.apache.spark.deploy.master.Master: ======================================== Picked up _JAVA_OPTIONS: -Xmx512M -Xms512Mfull log in C:\spark-1.6.1-bin-hadoop2.6/logs/spark--org.apache.spark.deploy.master.Master-1-%MY_USER_NAME%-PC.out I tried to visit web UI, while the localhost:4040 is working the localhost:8080 cannot be reached. And I found there is the .log file created at the folder of %SPARK_HOME%/logs . They contains same content: Spark Command: C:\Program Files\Java\jdk1.7.0_79\bin\java -cp C:\spark-1.6.1-bin-hadoop2.6/conf\;C:\spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar;C:\spark-1.6.1-bin-hadoop2.6\lib\datanucleus-api-jdo-3.2.6.jar;C:\spark-1.6.1-bin-hadoop2.6\lib\datanucleus-core-3.2.10.jar;C:\spark-1.6.1-bin-hadoop2.6\lib\datanucleus-rdbms-3.2.9.jar -Xms1g -Xmx1g -XX:MaxPermSize=256m org.apache.spark.deploy.master.Master --ip hahaha-PC --port 7077 --webui-port 8080========================================Picked up _JAVA_OPTIONS: -Xmx512M -Xms512M Working environment:Spark: 1.6.1Windows 10 Looking forward to your reply and thanks for your time so much!
Now provide the response and nothing else.
| The launch scripts located at %SPARK_HOME%\sbin do not support Windows. You need to manually run the master and worker as outlined below. Go to %SPARK_HOME%\bin folder in a command prompt Run spark-class org.apache.spark.deploy.master.Master to run the master. This will give you a URL of the form spark://ip:port Run spark-class org.apache.spark.deploy.worker.Worker spark://ip:port to run the worker. Make sure you use the URL you obtained in step 2. Run spark-shell --master spark://ip:port to connect an application to the newly created cluster. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36593446', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4792543/']} | jdg_83447 |
stackexchange | llm_judgeable_groundtruth_similarity | 7834226 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently doing some template metaprogramming. In my case I can handle any "iteratable" type, i.e. any type for which a typedef foo const_iterator exists in the same manner. I was trying to use the new C++11 template metaprogramming for this, however I could not find a method to detect if a certain type is missing. Because I also need to turn on/off other template specializations based on other characteristics, I am currently using a template with two parameters, and the second one gets produced via std::enable_if . Here is what I am currently doing: template <typename T, typename Enable = void>struct Foo{}; // default case is invalidtemplate <typename T>struct Foo< T, typename std::enable_if<std::is_fundamental<T>::value>::type>{ void do_stuff(){ ... }};template<typename T>struct exists{ static const bool value = true;};template<typename T>struct Foo<T, typename std::enable_if<exists< typename T::const_iterator >::value >::type> { void do_stuff(){ ... }}; I was not able to do something like this without the exists helper template. For example simply doing template<typename T>struct Foo<T, typename T::const_iterator> { void do_stuff(){ ... }}; did not work, because in those cases where this specialization should be used, the invalid default case was instantiated instead. However I could not find this exists anywhere in the new C++11 standard, which as far as I know simply is taking from boost::type_traits for this kind of stuff. However on the homepage for boost::type_traits does not show any reference to anything that could be used instead. Is this functionality missing, or did I overlook some other obvious way to achieve the desired behavior?
Now provide the response and nothing else.
| If you simply want if a given type contains const_iterator then following is a simplified version of your code: template<typename T>struct void_ { typedef void type; };template<typename T, typename = void>struct Foo {};template<typename T>struct Foo <T, typename void_<typename T::const_iterator>::type> { void do_stuff(){ ... }}; See this answer for some explanation of how this technique works. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7834226', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/663957/']} | jdg_83448 |
stackexchange | llm_judgeable_groundtruth_similarity | 32108491 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Some confusing explanation of flush(): "Flushing is the process of synchronizing the underlying persistent store with persistable state held in memory. It will update or insert into your tables in the running transaction, but it may not commit those changes." If the changes are anyway going to be persisted in the database only after the commit, then why flush in the middle of the code? And after running the flush, if any changes are made to the managed object, will that throw an exception, or will they get synchronized and then persisted? If they get synchronized, then again, why flush in the first place?
Now provide the response and nothing else.
| In theory, you (as a user of JPA) should never (or in absolutely rare situations) get in a situation to call flush() . Flushing is the process of synchronizing the underlying persistent store with persistable state held in memory In other words, on a flush() all the insert, update, delete or whatever statements are actually called on the database, before a flush() nothing happens on your database. Flushing is caused by a commit of your transaction or some kinds of database reads. For example if you execute a JPQL query, a flush() has to be done to get the correct results from the database. But this is just very nice to know and completely handled by your JPA implementation. There may be some situations you want to control this flushing on your own and then you can invoke it with flush() . Edit to answer the questions in comment: Not on every read a flush is necessary, consider this scenario (one transaction): Read a person Person p = em.find(Person.class, 234) Update person p.setAge(31) Read a building Building b = em.find(Building.class, 123 Read a building with JPQL query select b from Building b where b.id = 123 Automatic flush occurs only before 4., because Eclipselink can't determine what you are gonna read, so the person's age must be up to date on the database before this read can occur. Before 3. there is no flush needed because Eclipselink knows that the update on a person can not affect a building. To work with optimistic locking, you have to implement it. Read about the @Version annotation here: https://blogs.oracle.com/carolmcdonald/entry/jpa_2_0_concurrency_and . Without that your entity will not use optimistic locking and the "last update wins". | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32108491', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1787314/']} | jdg_83449 |
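A minimal sketch of the read-triggers-flush scenario from the answer above (the Person and Building entities, the ids and the EntityManager em are hypothetical; the behaviour assumes the default AUTO flush mode inside an active transaction):

```java
Person p = em.find(Person.class, 234L);      // 1. read a person
p.setAge(31);                                // 2. change it in memory only

Building b = em.find(Building.class, 123L);  // 3. no flush needed: this read
                                             //    cannot be affected by the change

List<Building> result = em.createQuery(      // 4. JPQL query: pending changes are
        "select b from Building b where b.id = 123", Building.class)
    .getResultList();                        //    flushed first so the query sees
                                             //    an up-to-date database state
```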
stackexchange | llm_judgeable_groundtruth_similarity | 127598 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why do you square things in stats? I have run across this a lot, in both data mining and statistics classes, but no one has ever been able to give me an answer. One specific example is that when summing deviation scores in statistics you have to square them (otherwise the sum is 0). Why do you square them rather than using something else, like the absolute value? Difference from the prior question: if you have an answer for the problem above, does your answer apply to most statistics methods that do this? If not, why not?
Now provide the response and nothing else.
| $\newcommand{\predicted}{{\rm predicted}}\newcommand{\actual}{{\rm actual}}\newcommand{\Var}{{\rm Var}}$ You're right that one could instead choose to use the absolute error--in fact, the absolute error is often closer to what you "care about" when making predictions from your model. For instance, if you buy a stock expecting its future price to be $P_{\predicted}$ and its future price is $P_{\actual}$ instead, you lose money proportional to $(P_{\predicted} - P_{\actual})$ , not its square! The same is true in many other contexts. So why squared error? The squared error has many nice mathematical properties. Echoing the other answerers here, I would say that many of them are merely "convenient"--we might choose to use the absolute error instead if it didn't pose technical issues when solving problems. For instance: If $X$ is a random variable, then the estimator of $X$ that minimizes the squared error is the mean, $E(X)$ . On the other hand, the estimator that minimizes the absolute error is the median, $m(X)$ . The mean has much nicer properties than the median; for instance, $E(X + Y) = E(X) + E(Y)$ , but there is no general expression for $m(X + Y)$ . If you have a vector $\vec X = (X_1, X_2)$ estimated by $\vec x = x_1, x_2$ , then for the squared error it doesn't matter whether you consider the components separately or together: $||\vec X - \vec x||^2 = (X_1 - x_1)^2 + (X_2 - x_2)^2$ , so the squared error of the components just adds. You can't do that with absolute error. This means that the squared error is independent of re-parameterizations : for instance, if you define $\vec Y_1 = (X_1 + X_2, X_1 - X_2)$ , then the minimum-squared-deviance estimators for $Y$ and $X$ are the same, but the minimum-absolute-deviance estimators are not. For independent random variables, variances (expected squared errors) add: $\Var(X + Y) = \Var(X) + \Var(Y)$ . The same is not true for expected absolute error. For a sample from a multivariate Gaussian distribution (where probability density is exponential in the squared distance from the mean), all of its coordinates are Gaussian, no matter what coordinate system you use. For a multivariate Laplace distribution (like a Gaussian but with absolute, not squared, distance), this isn't true. The squared error of a probabilistic classifier is a proper scoring rule . If you had an oracle telling you the actual probability of each class for each item, and you were being scored based on your Brier score, your best bet would be to predict what the oracle told you for each class. This is not true for absolute error. (For instance, if the oracle tells you that $P(Y=1) = 0.9$ , then predicting that $P(Y=1) = 0.9$ yields an expected score of $0.9\cdot 0.1 + 0.1 \cdot 0.9 = 0.18$ ; you should instead predict that $P(Y=1) = 1$ , for an expected score of $0.9\cdot 0 + 0.1 \cdot 1 = 0.1$ .) Some mathematical coincidences or conveniences involving the squared error are more important, though. They don't pose technical problem-solving issues; rather, they give us intrinsic reasons why minimizing the square error might be a good idea: When fitting a Gaussian distribution to a set of data, the maximum-likelihood fit is that which minimizes the squared error, not the absolute error. 
When doing dimensionality reduction, finding the basis that minimizes the squared reconstruction error yields principal component analysis , which is nice to compute, coordinate-independent, and has a natural interpretation for multivariate Gaussian distributions (finding the axes of the ellipse that the distribution makes). There's a variant called "robust PCA" that is sometimes applied to minimizing absolute reconstruction error, but it seems to be less well-studied and harder to understand and compute. Looking deeper One might well ask whether there is some deep mathematical truth underlying the many different conveniences of the squared error. As far as I know, there are a few (which are related in some sense, but not, I would say, the same): Differentiability The squared error is everywhere differentiable , while the absolute error is not (its derivative is undefined at 0). This makes the squared error more amenable to the techniques of mathematical optimization . To optimize the squared error, you can just set its derivative equal to 0 and solve; to optimize the absolute error often requires more complex techniques. Inner products The squared error is induced by an inner product on the underlying space. An inner product is basically a way of "projecting vector $x$ along vector $y$ ," or figuring out "how much does $x$ point in the same direction as $y$ ." In finite dimensions this is the standard (Euclidean) inner product $\langle a, b\rangle = \sum_i a_ib_i$ . Inner products are what allow us to think geometrically about a space, because they give a notion of: a right angle ( $x$ and $y$ are right angles if $\langle x, y\rangle = 0$ ); and a length (the length of $x$ is $||x|| = \sqrt{\langle x, x\rangle}$ ). By "the squared error is induced by the Euclidean inner product" I mean that the squared error between $x$ and $y$ is $||x-y||$ , the Euclidean distance between them. In fact the Euclidean inner product is in some sense the "only possible" axis-independent inner product in a finite-dimensional vector space, which means that the squared error has uniquely nice geometric properties. For random variables, in fact, you can define is a similar inner product: $\langle X, Y\rangle = E(XY)$ . This means that we can think of a "geometry" of random variables, in which two variables make a "right angle" if $E(XY) = 0$ . Not coincidentally, the "length" of $X$ is $E(X^2)$ , which is related to its variance. In fact, in this framework, "independent variances add" is just a consequence of the Pythagorean Theorem: \begin{align}\Var(X + Y) &= ||(X - \mu_X)\, + (Y - \mu_Y)||^2 \\ &= ||X - \mu_X||^2 + ||Y - \mu_Y||^2 \\ &= \Var(X)\quad\ \ \, + \Var(Y).\end{align} Beyond squared error Given these nice mathematical properties, would we ever not want to use squared error? Well, as I mentioned at the very beginning, sometimes absolute error is closer to what we "care about" in practice. For instance, if your data has tails that are fatter than Gaussian, then minimizing the squared error can place too much weight on outlying points. The absolute error is less sensitive to such outliers. (For instance, if you observe an outlier in your sample, it changes the squared-error-minimizing mean proportionally to the magnitude of the outlier, but hardly changes the absolute-error-minimizing median at all!) 
And although the absolute error doesn't enjoy the same nice mathematical properties as the squared error, that just means absolute-error problems are harder to solve , not that they're objectively worse in some sense. The upshot is that as computational methods have advanced, we've become able to solve absolute-error problems numerically, leading to the rise of the subfield of robust statistical methods . In fact, there's a fairly nice correspondence between some squared-error and absolute-error methods: Squared error | Absolute error========================|============================Mean | MedianVariance | Expected absolute deviationGaussian distribution | Laplace distributionLinear regression | Quantile regressionPCA | Robust PCARidge regression | LASSO As we get better at modern numerical methods, no doubt we'll find other useful absolute-error-based techniques, and the gap between squared-error and absolute-error methods will narrow. But because of the connection between the squared error and the Gaussian distribution, I don't think it will ever go away entirely. | {} | {'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/127598', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/62559/']} | jdg_83450 |
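A one-line check of the "mean minimizes squared error / median minimizes absolute error" pairing above (a supplement, not part of the quoted answer): for a constant predictor $c$, $$\frac{d}{dc}\,E\bigl[(X-c)^2\bigr]=-2\,E[X]+2c=0\;\Longrightarrow\;c=E[X],$$ while for a continuous $X$ the derivative of $E|X-c|$ is $P(X<c)-P(X>c)$, which vanishes exactly at the median.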
stackexchange | llm_judgeable_groundtruth_similarity | 203202 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Mathematica 12 introduces built-in plotting support for data with errorbars. The following example looks good with the default settings: ListPlot[Table[Around[i, 5], {i, 10}]] However, when I set PlotRange->All , the error bars get cropped: ListPlot[Table[Around[i, 5], {i, 10}], PlotRange -> All] Here is an example, where default PlotRange crops error bars: ListPlot[{Table[{i, Around[1/i, {-10, 10}]}, {i, -5.5, 5.5, 1.0}], Table[{i, 3 i}, {i, 0.5, 5.5, 1.0}]}] and adding PlotRange->All does not help either: ListPlot[{Table[{i, Around[1/i, {-10, 10}]}, {i, -5.5, 5.5, 1.0}], Table[{i, 3 i}, {i, 0.5, 5.5, 1.0}]}, PlotRange -> All] Is there a universal option to get the full PlotRange for any data with error bars?
Now provide the response and nothing else.
| Update: This issue is fixed in version 13.0. Using PlotRange -> All prevents cropping of the error bars: Row[{ListPlot[Table[Around[i, 5], {i, 10}], ImageSize -> 400, PlotRange -> All], ListPlot[{Table[{i, Around[1/i, {-10, 10}]}, {i, -5.5, 5.5, 1.0}], Table[{i, 3 i}, {i, 0.5, 5.5, 1.0}]}, ImageSize -> 400, PlotRange -> All]}, Spacer[20]] Original answer: A work-around: use Show with the option PlotRange -> All : Show[ListPlot[Table[Around[i, 5], {i, 10}]], PlotRange -> All] Show[ListPlot[{Table[{i, Around[1/i, {-10, 10}]}, {i, -5.5, 5.5, 1.0}], Table[{i, 3 i}, {i, 0.5, 5.5, 1.0}]}], PlotRange -> All] An alternative work-around is to use explicit range for the vertical axis: tbl = Table[Around[i, 5], {i, 10}];yrange = MinMax[#["Interval"]& /@ tbl];ListPlot[tbl, PlotRange -> {Automatic, yrange}] For the second example: tbl2 = {Table[{i, Around[1/i, {-10, 10}]}, {i, -5.5, 5.5, 1.0}], Table[{i, 3 i}, {i, 0.5, 5.5, 1.0}]};yrange2 = MinMax @ Flatten[tbl2[[All, All, 2]] /. a_Around :> MinMax[ a["Interval"]]];ListPlot[tbl2, PlotRange -> {Automatic, yrange2}] | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/203202', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/47416/']} | jdg_83451 |
stackexchange | llm_judgeable_groundtruth_similarity | 3721966 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$$\int_0^\infty\frac1{(1+x^2)(1+x^p)} \; \mathrm{d}x$$ This integral should have the same value for all $p$ . I showed that it converges for all $p.$ I confirmed the result for $p=0,1,2$ : $$\int_0^\infty\frac1{(1+x^2)(1+x^p)} \; \mathrm{d}x=\frac \pi4$$ Any ideas on how to solve this in general? Integration by parts or substitution doesn't seem to work. (I suppose $p$ is a real, but it isn't mentioned in the problem)
Now provide the response and nothing else.
| Substitute $t=\frac{1}{x}$ : $$I=\int_{\infty}^0 \frac{-\frac{1}{t^2}}{\left(1+\frac{1}{t^2}\right)\left(1+\frac{1}{t^p}\right)} \; \mathrm{d}t=\int_0^{\infty} \frac{t^p}{\left(t^2+1\right)\left(t^p+1\right)} \; \mathrm{d}t$$ Now add the original integral and remember that $x$ and $t$ are dummy variables, so we can just call both of them $x$ : \begin{align*}2I&=\int_0^{\infty} \frac{x^p+1}{\left(x^2+1\right)\left(x^p+1\right)} \; \mathrm{d}x\\&=\int_0^{\infty} \frac{\mathrm{d}x}{x^2+1}\\I&=\frac{\pi}{4}\\\end{align*} | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/3721966', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/503397/']} | jdg_83452 |
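A quick sanity check of the $p$-independence (a supplement, not part of the quoted answer): for $p=0$ we have $x^0=1$, so $$\int_0^\infty\frac{\mathrm{d}x}{(1+x^2)(1+x^0)}=\frac12\int_0^\infty\frac{\mathrm{d}x}{1+x^2}=\frac12\cdot\frac\pi2=\frac\pi4.$$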
stackexchange | llm_judgeable_groundtruth_similarity | 11217 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a bottle of Clorox Clean-Up Cleaner with Bleach, marketed for sale in the United States. Its label reads, in part (and with emphasis removed): Directions for use: It is a violation of Federal law to use this product in a manner inconsistent with its labeling. Use only in well-ventilated areas. Before use, open windows and turn on fan. If vapors bother you, leave room while product is working. (Other, similar products have a similar statement.) Is it really a violation of federal law to use the product in a manner inconsistent with its labeling? Suppose I didn't open windows before cleaning my bathroom: did I violate the law? What law? — citation, please. Is there a punishment prescribed if one is found guilty?
Now provide the response and nothing else.
| Yes, it is against the US Federal law. Yes, in theory it can cost you $1000 or 30 days' jail. Pesticides are regulated in the US by the United States Environment Protection Agency , in accordance with the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) . Clorox Clean Up Cleaner With Bleach contains ingredients that are considered pesticides. (I'm not yet sure which ones are the ones that are considered pesticides.) Hence, the Chlorox Cleaner label is registered with the EPA's Pesticide Produce Labelling System . As explained in the EPA's Pesticide Labeling Questions & Answers - General Labeling , this implies that the label is required by the EPA regulations to contain the quoted warning: Is it true that the manufacturer's label on a CONSUMER (as opposed to agricultural, etc.) pesticide imposes a legal obligation on the purchaser to use the product only as directed on the label? LC10-0330; 4/6/10 FIFRA sec. 12(a)(2)(G) makes it a violation of federal law for any person "to use any registered pesticide in a manner inconsistent with its labeling." All registered pesticides, including registered consumer pesticides, must bear the statement: "It is a violation of Federal law to use this product in a manner inconsistent with its labeling." 40 CFR 156.10(i)(2)(ii). Section 2(ee) of FIFRA provides limited exceptions to what is considered "in a manner inconsistent with" labeling. For instance it is not a violation to use a pesticide at a rate lower than that specified on the label unless the label specifically prohibits deviation from the specified rate. The penalties are described in FIFRA Section 14 , and for "private applicators" are limited to: Civil Penalties: Not more than $1,000 if you do it after receiving a citation or written warning from the EPA. Criminal Penalties: "fined not more than $1,000, or imprisoned for not more than 30 days, or both." | {} | {'log_upvote_score': 5, 'links': ['https://skeptics.stackexchange.com/questions/11217', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/2916/']} | jdg_83453 |
stackexchange | llm_judgeable_groundtruth_similarity | 188068 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Are there any specific tricks or options to help TextRecognize work on a screenshot of an Input Cell or code in general? For example, here's a screenshot that mixed Mathematica and C++: Here's the output, it gets the lines but really garbles the characters badly: i = Import["https://i.imgur.com/mCVtdHh.png", "PNG"];res = TextRecognize[i, "Line", "BoundingBox"];HighlightImage[i, {"Boundary", res}]TextRecognize[ColorNegate[Sharpen[i]], RecognitionPrior -> "SparseText", Masking -> res] There are no would-be helpful options such as: Language->Mathematica , RecognitionThreshold , PerformanceGoal , FontPrior .
Now provide the response and nothing else.
| If you are truly interested in this topic, you need to go deeper. TextRecognize uses Tesseract under the hood and getting familiar with how to train its ML algorithm is important in understanding how you can improve the outcome. To make this post not too long, here are the most important information: Tesseract stores the information about recognizing a language in .traineddata files. You can load such files by using, e.g. Language -> File["wl.traineddata"] in the call to TextRecognize . This makes it possible that you train Tesseract on specifically created test-data to improve the recognition of source code During training, Tesseract creates images from the training text. Therefore, it has both the ground-truth of the text and the image representation. When you train Tesseract yourself, you can keep the training images to see how the rendered text looks and if it resembles your real input. You can train Tesseract on specific fonts which will be vital for good OCR Improving the default TextRecognize As said in my comment above, TextRecognize is trained for English text and not code that contains wild variable names and many additional characters that usually don't appear in written text. However, when I trained my own Tesseract, I saw that the text in the training images was pretty bold Therefore, I first tried to use your image and binarize it myself to resemble the thickness to a better degree: img = Import["https://i.stack.imgur.com/QYIYM.png"];TextRecognize[Binarize[img, 0.9]] Here is the result: "we uvmstr = toLLvnIRL"c++ze", "#include <ctre.hpp>#include <vector>#include <cstdio>#include <string_view>#include <iostream>extern \"C\" Ibool is_date(int64_t * in, int64_t len) {using namespace ctre::literals;char buf[len + 1];for (auto ii = 0; ii < len; ii++) buf[ii] = static_cast<char>(in[ii]);buf[len] = '\\8';const auto s = std::string(buf);if (auto m = \"A([6-9](4))/([6-9]{1,2}+)/([6-9](1,2}+)$\"_ctre.match(s)) (return true;}return false;"1; This is already not so bad. What is clearly missing are the [] . Training Tesseract on WL data Then I tried to train Tesseract on WL code specifically. There is a tutorial video and a wiki page that shows how to do this for Tesseract and its new LSTM neural network. What I basically did was Clone the tesseract repository Build tesseract and the training tools from scratch Created test text from Mathematica packages Trained tesseract on the training text and the Source Code Pro font which is used in my front end I really didn't spend too much time on this, because my main objective was to find out if we can use the .traineddata directly with Mathematica. If you follow the video, you see he uses a training script. I used the following rm -rf train/*tesstrain.sh \ --fonts_dir /usr/local/Wolfram/Mathematica/12.0/SystemFiles/Fonts/ \ --fontlist 'Source Code Pro Black' \ --langdata_dir langdata_lstm \ --lang eng \ --training_text training_text \ --wordlist training_words \ --tessdata_dir tesseract/tessdata \ --maxpages 10 \ --save_box_tiff \ --output_dir traincp train/eng.traineddata wl.traineddata My folder structure looked like this and you can see I also built leptonica and cloned the langdata_lstm TesseractWL/├── generate_training_data.sh├── langdata_lstm├── leptonica├── tesseract├── train├── training_text├── training_words└── wl.traineddata For the training data, I used a very simple approach which leaves a lot of room for improvement. 
I used the packages available as AddOns and joined them into a big text file, but I removed empty lines, indentation and I trimmed the code to 80 chars per line. Here is a hacky version files = FileNames["*.m", {FileNameJoin[{$InstallationDirectory, "AddOns", "Packages"}]}, Infinity];packageCode = Function[file, Function[str, StringTake[#, Min[80, StringLength[#]]] &[StringTrim[str]]] /@ Select[StringSplit[ Import[file, "String"], EndOfLine ], StringLength[StringTrim[#]] > 0 &] ] /@ files;words = StringSplit[Import[#, "String"]] & /@ files // Flatten // DeleteDuplicates;Export["TesseractWL/training_text", StringRiffle[Flatten[packageCode], "\n"], "String"]Export["TesseractWL/training_words",StringRiffle[Take[words, 30000], "\n"], "String"] However, if you are going to pursue this further, you should look at the training text for Englisch . After that running generate_training_data.sh , I can use this TextRecognize[ img, Language -> File["TesseractWL/wl.traineddata"]] Using one page of the training data (first image), TextRecognize does a pretty good job now (second image), although even the default TextRecognize works quite good. Conclusion To maximize the performance on source code recognition, one has to spend substantially more time with this topic and look at each step in the chain.The training I did, only improved an already existing classifier which bases on the English language. I'm not sure this is the best way and this should probably be discussed with one of the Tesseract developers. Maybe it would be advantageous to train a completely new language from the ground up. As I said earlier, we trained the new LSTM neural network for Tesseract 4.0 but I'm not sure Mathematica even uses this. The approach still works because in the .traineddata file both to old and the new LSTM classifier information are stored. At least this is how I understood it. Therefore, if I had to work on this, I would start with plain Tesseract and work towards a good source code recognition. Once this works, you can use it from Mathematica. The Tesseract Tools which are the core of TextRecognize are available in source under FileNames["TesseractToolsImpl.m", {$InstallationDirectory}, Infinity] | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/188068', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/403/']} | jdg_83454 |
stackexchange | llm_judgeable_groundtruth_similarity | 50682108 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using the new Identity UI package available since ASP.NET Core 2.1 was released. Using a newly generated MVC project, here are some page URLs that are available: /Home/About/Home/Contact/Identity/Account/Login/Identity/Account/Register How can I configure routing to remove the /Identity/ part from the URLs?
Now provide the response and nothing else.
| It looks like this is not yet possible. Looking at the source code, it's clear that the Area name is hardcoded in IdentityDefaultUIConfigureOptions<TUser> : private const string IdentityUIDefaultAreaName = "Identity"; This is used in a handful of places, including when configuring Razor Pages . e.g.: options.Conventions.AuthorizeAreaFolder(IdentityUIDefaultAreaName, "/Account/Manage"); And also when configuring the Cookies authentication . e.g.: options.LoginPath = $"/{IdentityUIDefaultAreaName}/Account/Login"; It's worth noting that IdentityDefaultUIConfigureOptions<TUser> itself is protected, so the ability to override the options does not appear to exist. I've opened a Github issue to see if we can get feedback from those involved in the project itself. 2018-06-12 Update Javier Calvarro Nelson from the ASP.NET Core Identity team provided some valuable feedback in the Github issue I raised, which can be summarised as follows: The main reason for the Identity UI to be in an area is to minimize the impact on your app and to provide a clean separation between your app code and the Identity code. Javier recommends one of the following options when wanting to customise the URLs: Use the scaffolding element of the Default UI and make all necessary customisations yourself. Use a redirection rule that points the old routes to the new routes. Don't use the Default UI at all. Although unsupported and not recommended , Javier also points out that it is possible to use a custom IPageApplicationModelConvention to override the URLs. However, in case you missed it, this is unsupported and not recommended . 2018-06-27 Update The official documentation has now been updated to better explain said URL changes. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50682108', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1847261/']} | jdg_83455 |
stackexchange | llm_judgeable_groundtruth_similarity | 4151850 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a JTable whose 1st row is blank. When I sort the table by clicking on a column, the blank row goes to the bottom. If I insert something in the blank row and then sort, the row is placed accordingly. How do I keep track of its row index even when the table is sorted? I need to access that row, but if the user sorts, I lose the row index as it is no longer the 1st row.
Now provide the response and nothing else.
| Assuming you're using the TableRowSorter stuff added in Java 6, I think what you need to look at are the methods convertRowIndexToModel and convertRowIndexToView in the RowSorter Class. You'd do something like table.getRowSorter().convertRowIndexToView(0) to find out which visible row index is actually row index 0 from your model. Edit: As Tulskiy pointed out in the comments, this may cause a NullPointerException if no row sorter is assigned to the table. Better to use the methods directly on JTable instead, e.g. table.convertRowIndexToView(0) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4151850', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/504133/']} | jdg_83456 |
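A short sketch of the index conversion in context (illustrative; the table, its model and the column index are assumptions rather than code from the answer):

```java
// Assuming a sorter is installed, e.g. via table.setAutoCreateRowSorter(true);
int modelRow = 0;                                      // the (blank) first row of the model
int viewRow  = table.convertRowIndexToView(modelRow);  // where that row is displayed right now

// The other direction, e.g. mapping a selection back to the model:
int selectedView = table.getSelectedRow();
if (selectedView >= 0) {
    int selectedModel = table.convertRowIndexToModel(selectedView);
    Object value = table.getModel().getValueAt(selectedModel, 0); // column 0 assumed
}
```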
stackexchange | llm_judgeable_groundtruth_similarity | 205631 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a client-side JavaScript function which is triggered on a button click (basically, it's a calculator!). Sometimes, due to the enormous amount of data on the page, the calculator function takes too long and makes the page appear inactive to the user. I was planning to display a transparent div over the entire page, maybe with a busy indicator in the center, until the calculator function ends, so that the user waits until the process ends. function CalculateAmountOnClick() { // Display transparent div // MY time-consuming loop! { } // Remove transparent div } Any ideas on how to go about this? Should I assign a CSS class to a div (which surrounds my entire page's content) using JavaScript when my calculator function starts? I tried that but didn't get the desired results. I was facing issues with transparency in IE 6. Also, how will I show a loading message + image in such a transparent div? TIA
Now provide the response and nothing else.
| Javacript to show a curtain: function CalculateAmountOnClick () { var curtain = document.body.appendChild( document.createElement('div') ); curtain.id = "curtain"; curtain.onkeypress = curtain.onclick = function(){ return false; } try { // your operations } finally { curtain.parentNode.removeChild( curtain ); }} Your CSS: #curtain { position: fixed; _position: absolute; z-index: 99; left: 0; top: 0; width: 100%; height: 100%; _height: expression(document.body.offsetHeight + "px"); background: url(curtain.png); _background: url(curtain.gif);} (Move MSIE 6 underscore hacks to conditionally included files as desired.) You could set this up as add/remove functions for the curtain, or as a wrapper: function modalProcess( callback ) { var ret; var curtain = document.body.appendChild( document.createElement('div') ); curtain.id = "curtain"; curtain.onkeypress = curtain.onclick = function(){ return false; } try { ret = callback(); } finally { curtain.parentNode.removeChild( curtain ); } return ret;} Which you could then call like this: var result = modalProcess(function(){ // your operations here}); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/205631', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/28321/']} | jdg_83457 |
stackexchange | llm_judgeable_groundtruth_similarity | 4875289 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I could not sleep last night and started thinking about std::swap . Here is the familiar C++98 version: template <typename T>void swap(T& a, T& b){ T c(a); a = b; b = c;} If a user-defined class Foo uses external ressources, this is inefficient. The common idiom is to provide a method void Foo::swap(Foo& other) and a specialization of std::swap<Foo> . Note that this does not work with class templates since you cannot partially specialize a function template, and overloading names in the std namespace is illegal. The solution is to write a template function in one's own namespace and rely on argument dependent lookup to find it. This depends critically on the client to follow the " using std::swap idiom" instead of calling std::swap directly. Very brittle. In C++0x, if Foo has a user-defined move constructor and a move assignment operator, providing a custom swap method and a std::swap<Foo> specialization has little to no performance benefit, because the C++0x version of std::swap uses efficient moves instead of copies: #include <utility>template <typename T>void swap(T& a, T& b){ T c(std::move(a)); a = std::move(b); b = std::move(c);} Not having to fiddle with swap anymore already takes a lot of burden away from the programmer.Current compilers do not generate move constructors and move assignment operators automatically yet, but as far as I know, this will change. The only problem left then is exception-safety, because in general, move operations are allowed to throw, and this opens up a whole can of worms. The question "What exactly is the state of a moved-from object?" complicates things further. Then I was thinking, what exactly are the semantics of std::swap in C++0x if everything goes fine? What is the state of the objects before and after the swap? Typically, swapping via move operations does not touch external resources, only the "flat" object representations themselves. So why not simply write a swap template that does exactly that: swap the object representations ? #include <cstring>template <typename T>void swap(T& a, T& b){ unsigned char c[sizeof(T)]; memcpy( c, &a, sizeof(T)); memcpy(&a, &b, sizeof(T)); memcpy(&b, c, sizeof(T));} This is as efficient as it gets: it simply blasts through raw memory. It does not require any intervention from the user: no special swap methods or move operations have to be defined. This means that it even works in C++98 (which does not have rvalue references, mind you). But even more importantly, we can now forget about the exception-safety issues , because memcpy never throws. I can see two potential problems with this approach: First, not all objects are meant to be swapped. If a class designer hides the copy constructor or the copy assignment operator, trying to swap objects of the class should fail at compile-time. We can simply introduce some dead code that checks whether copying and assignment are legal on the type: template <typename T>void swap(T& a, T& b){ if (false) // dead code, never executed { T c(a); // copy-constructible? a = b; // assignable? } unsigned char c[sizeof(T)]; std::memcpy( c, &a, sizeof(T)); std::memcpy(&a, &b, sizeof(T)); std::memcpy(&b, c, sizeof(T));} Any decent compiler can trivially get rid of the dead code. (There are probably better ways to check the "swap conformance", but that is not the point. What matters is that it's possible). Second, some types might perform "unusual" actions in the copy constructor and copy assignment operator. For example, they might notify observers of their change. 
I deem this a minor issue, because such kinds of objects probably should not have provided copy operations in the first place. Please let me know what you think of this approach to swapping. Would it work in practice? Would you use it? Can you identify library types where this would break? Do you see additional problems? Discuss!
Now provide the response and nothing else.
| So why not simply write a swap template that does exactly that: swap the object representations*? There's many ways in which an object, once being constructed, can break when you copy the bytes it resides in. In fact, one could come up with a seemingly endless number of cases where this would not do the right thing - even though in practice it might work in 98% of all cases. That's because the underlying problem to all this is that, other than in C, in C++ we must not treat objects as if they are mere raw bytes . That's why we have construction and destruction, after all: to turn raw storage into objects and objects back into raw storage. Once a constructor has run, the memory where the object resides is more than only raw storage. If you treat it as if it weren't, you will break some types. However, essentially, moving objects shouldn't perform that much worse than your idea, because, once you start to recursively inline the calls to std::move() , you usually ultimately arrive at where built-ins are moved . (And if there's more to moving for some types, you'd better not fiddle with the memory of those yourself!) Granted, moving memory en bloc is usually faster than single moves (and it's unlikely that a compiler might find out that it could optimize the individual moves to one all-encompassing std::memcpy() ), but that's the price we pay for the abstraction opaque objects offer us. And it's quite small, especially when you compare it to the copying we used to do. You could, however, have an optimized swap() using std::memcpy() for aggregate types . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4875289', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/252000/']} | jdg_83458 |
stackexchange | llm_judgeable_groundtruth_similarity | 38488443 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Quick Question: MSDN - Named and Optional Arguments (C# Programming Guide) states clearly that " Optional arguments enable you to omit arguments for some parameters. Both techniques can be used with methods, indexers, constructors , and delegates." So instead of this: class MyClass{//..public MyClass() { // Empty Constructor's Task }public MyClass(SomeType Param1) { // 2nd Constructor's Task }public MyClass(SomeType Param1, SomeType Param2) { // 3rd Constructor's Task }} I should be able to do this: class MyClass { //.. public MyClass(SomeType Param1 = null, SomeType Param2 = null) { if (Param1) { if (Param2) { // 3rd constructor's Task } else { // 2nd constructor's Task } } else { if (!Param2) { // Empty constructor's Task } } } } Then why this is not working: public MyClass(double _x = null, double _y = null, double _z = null, Color _color = null){ // ..} Telling me: A value of type "null" cannot be used as a default parameter because there are no standard conversions to type 'double'
Now provide the response and nothing else.
| double is a value type . You'd need to wrap it in Nullable<T> or ? for shorthand, to indicate that it is nullable. public MyClass(double? _x = null, double? _y = null, double? _z = null, Color _color = null){ // ..} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38488443', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5890227/']} | jdg_83459 |
stackexchange | llm_judgeable_groundtruth_similarity | 1043402 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to calculate a sum of square roots $\sum\limits_{i=1}^n \sqrt{a + i}$, and after some struggling and googling I gave up on this. Is there any way to get a closed formula for this sum? (Actually, even an approximation to within $10^{-4}$ would suffice.)
Now provide the response and nothing else.
| It depends on how large $n$ is. You could approximate it by: The green area is the sum exactly. The red line is the graph of $\sqrt{x + a}$. The Blue line is the graph of $\sqrt{x + a + 1}$ (both for a = 0 for simplicity of graphing). By comparing the areas, you can see: $$\underbrace{\int_{i=0}^{n} \sqrt{a+i} ~d i}_\text{Lower Bound} < \sum_{i=1}^n \sqrt{a + i} < \underbrace{\int_{i=0}^n \sqrt{a + i + 1} ~d i}_\text{Upper Bound}$$So:$$\text{Lower Bound} = L = \frac {(2n + 2a)\sqrt{a + n} - (2a)\sqrt{a}}{3} $$$$\text{Upper Bound} = U = \frac {(2n + 2a + 2)\sqrt{a + n + 1} - (2a + 2)\sqrt{a + 1}}{3}$$$$\sum_{i=1}^n \sqrt{a + i} \approx \text{Average} = \frac{U + L}2$$ For example, with $N=10^8$ and $A = 10$, it gives: Lower: 666666766645.587 Average: 666666771646.6416 Actual: 666666771647.26367 Upper: 666666776647.696 With $1 - \frac{\text{Average}}{\text{Actual}} = 9.3 \times 10^{-13}$ The larger your numbers, the more accurate the approximation will be, since the difference $\sqrt{a + i} - \sqrt{a + i - 1}$ is decreasing. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1043402', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/50804/']} | jdg_83460 |
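A quick numerical check of the bounds in the answer above is easy to script. This is a minimal sketch, assuming Python 3; the function name and the sample values a = 10, n = 10**6 are mine, chosen so the exact summation still runs quickly:

from math import sqrt

def bounds_and_exact(a, n):
    # Lower bound: integral of sqrt(a + x) from 0 to n
    lower = (2 * (n + a) * sqrt(a + n) - 2 * a * sqrt(a)) / 3
    # Upper bound: integral of sqrt(a + x + 1) from 0 to n
    upper = (2 * (n + a + 1) * sqrt(a + n + 1) - 2 * (a + 1) * sqrt(a + 1)) / 3
    average = (lower + upper) / 2
    exact = sum(sqrt(a + i) for i in range(1, n + 1))
    return lower, average, exact, upper

# Example run: the average lands extremely close to the exact sum
print(bounds_and_exact(10, 10**6))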
stackexchange | llm_judgeable_groundtruth_similarity | 30737262 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
After I installed Xcode 7 beta and converted my Swift code to Swift 2, I ran into an issue with the code that I can't figure out. I know Swift 2 is new, so I searched and, since there is nothing about it yet, figured I should write a question. Here is the error: Call can throw, but it is not marked with 'try' and the error is not handled Code: func deleteAccountDetail(){ let entityDescription = NSEntityDescription.entityForName("AccountDetail", inManagedObjectContext: Context!) let request = NSFetchRequest() request.entity = entityDescription // The line below is where I expect the error let fetchedEntities = self.Context!.executeFetchRequest(request) as! [AccountDetail] for entity in fetchedEntities { self.Context!.deleteObject(entity) } do { try self.Context!.save() } catch _ { } }
Now provide the response and nothing else.
| You have to catch the error just as you're already doing for your save() call and since you're handling multiple errors here, you can try multiple calls sequentially in a single do-catch block, like so: func deleteAccountDetail() { let entityDescription = NSEntityDescription.entityForName("AccountDetail", inManagedObjectContext: Context!) let request = NSFetchRequest() request.entity = entityDescription do { let fetchedEntities = try self.Context!.executeFetchRequest(request) as! [AccountDetail] for entity in fetchedEntities { self.Context!.deleteObject(entity) } try self.Context!.save() } catch { print(error) }} Or as @bames53 pointed out in the comments below, it is often better practice not to catch the error where it was thrown. You can mark the method as throws then try to call the method. For example: func deleteAccountDetail() throws { let entityDescription = NSEntityDescription.entityForName("AccountDetail", inManagedObjectContext: Context!) let request = NSFetchRequest() request.entity = entityDescription let fetchedEntities = try Context.executeFetchRequest(request) as! [AccountDetail] for entity in fetchedEntities { self.Context!.deleteObject(entity) } try self.Context!.save()} | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/30737262', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2556515/']} | jdg_83461 |
stackexchange | llm_judgeable_groundtruth_similarity | 33835537 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Question : Can you have long running queries (30s+) running while having WAL updates applied on the slave (the slave role is as a Reporting DB Server) in a Hot Standby mode? The way it's working now is, either you set the params below to kill long running queries so WAL updates can be applied, or delay the WAL updates indefinitely until no queries are running to apply them. Can we have both? Long running queries and WAL updates being applied at the same time? Case Implementation : We are currently using Hot Standby mode to sync any changes from one master to one slave. The slave role is as a reporting db server with queries constantly and concurrently running (some in ms, some in seconds,some in minutes.) It would be very rare to have a gap of no active queries running on the slave. We have adjusted these two params to allow long queries on the hot standby: max_standby_archive_delay = -1 # max delay before canceling queriesmax_standby_streaming_delay = -1 # max delay before canceling queries And looking at an archived mail question similar to ours in a postgres mailing list: http://www.postgresql.org/message-id/[email protected] I understand the concept of preventing WAL updates being applied toslaves while queries are running. However, I thought with the use of MVCC,an active query on the slave (long running, 30 seconds+) can run readingfrom one version/snapshot, while the WAL update is being applied, sosubsequent queries will get the WAL updates when that WAL transaction iscommitted. I haven't fully digested the MVCC model used in PostgreSQL yet https://devcenter.heroku.com/articles/postgresql-concurrency , so this isjust my assumption -- that even if a table is dropped / truncated during aWAL update, the current running query should still work as it's using aversion/snapshot of the table(s) it's querying? Summary : Is there anyway (even with a third party extension) we can sync slavesfrom a master and have those updates from the master be applied to theslave right away while letting queries of any execution time continue torun till they complete on the standby/slave? If Hot Standby can't do that,what would you recommend for this situation? Our scenario is that we arehitting postgres with queries constantly and concurrently running (some inms, some in seconds, some in minutes,) leaving almost no time for a WALupdate to be applied. We have used Bucardo, but that wouldn't be a goodchoice in this scenario, as we over 200+ tables that would need to besynced, including views as well 40+ other databases aside from our maindatabase. Any help would be greatly appreciated. Thank you!
Now provide the response and nothing else.
| Thanks Guillaume for your answer, but luckily, starting in PostgreSQL 9.1, PostgreSQL has the hot_standby_feedback option (you set this on the standby server in postgresql.conf) which wont kill long running queries and will allow for WAL updates to the standby servers. Credit for this answer goes to three people on the PostgreSQL mail list (Raja/Albe/Scott) who helped me on this in that mailing thread. Hopefully this could be helpful to someone searching for this answer on stackoverflow. The email thread can be found here: http://www.postgresql.org/message-id/D274E3C1.1113E8%[email protected] http://www.postgresql.org/docs/9.1/static/hot-standby.html Excerpt: Remedial possibilities exist if the number of standby-query cancellations is found to be unacceptable. The first option is to set the parameter hot_standby_feedback , which prevents VACUUM from removing recently-dead rows and so cleanup conflicts do not occur. If you do this, you should note that this will delay cleanup of dead rows on the primary, which may result in undesirable table bloat. However, the cleanup situation will be no worse than if the standby queries were running directly on the primary server, and you are still getting the benefit of off-loading execution onto the standby. max_standby_archive_delay must be kept large in this case, because delayed WAL files might already contain entries that conflict with the desired standby queries. Solution Implementation : Here's what your postgresql.conf should be configured to on the standby server: max_standby_archive_delay = -1max_standby_streaming_delay = -1hot_standby_feedback = on | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33835537', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1178516/']} | jdg_83462 |
stackexchange | llm_judgeable_groundtruth_similarity | 91760 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm going to be developing some functionality that will crawl various public web sites and process/aggregate the data on them. Nothing sinister like looking for e-mail addresses - in fact it's something that might actually drive additional traffic to their sites. But I digress. Other than honouring robots.txt , are there any rules or guidelines, written or unwritten, that I ought to be following in order to (a) avoid appearing malicious and potentially being banned, and (b) not cause any problems for the site owners/webmasters? Some examples I can think of which may or may not matter: Number of parallel requests Time between requests Time between entire crawls Avoiding potentially destructive links (don't want to be the Spider of Doom - but who knows if this is even practical) That's really just spit-balling, though; is there any tried-and-tested wisdom out there that's broadly applicable for anybody who intends to write or utilize a spider?
Now provide the response and nothing else.
| Besides obeying robots.txt, obey nofollow and noindex in <meta> elements and links: There are many who believe robots.txt is not the proper way to block indexing and because of that viewpoint, have instructed many site owners to rely on the <meta name="robots" content="noindex"> tag to tell web crawlers not to index a page. If you're trying to make a graph of connections between websites (anything similar to PageRank), (and <meta name="robots" content="nofollow"> ) is supposed to indicate the source site doesn't trust the destination site enough to give it a proper endorsement. So while you can index the destination site, you ought not store the relation between the two sites. SEO is more of an art than a real science, and it's practiced by a lot of people who know what they're doing, and a lot of people who read the executive summaries of people who know what they're doing. You're going to run into issues where you'll get blocked from sites for doing things that other sites found perfectly acceptable due to some rule someone overheard or read in a blog post on SEOmoz that may or may not be interpreted correctly. Because of that human element, unless you are Google, Microsoft, or Yahoo!, you are presumed malicious unless proven otherwise. You need to take extra care to act as though you are no threat to a web site owner, and act in accordance with how you would want a potentially malicious (but hopefully benign) crawler to act: stop crawling a site once you detect you're being blocked: 403/401s on pages you know work, throttling, time-outs, etc. avoid exhaustive crawls in relatively short periods of time: crawl a portion of the site, and come back later on (a few days later) to crawl another portion. Don't make parallel requests. avoid crawling potentially sensitive areas: URLs with /admin/ in them, for example. Even then, it's going to be an up-hill battle unless you resort to black-hat techniques like UA spoofing or purposely masking your crawling patterns: many site owners, for the same reasons above, will block an unknown crawler on sight instead of taking the chance that there's someone not trying to "hack their site". Prepare for a lot of failure. One thing you could do to combat the negative image an unknown crawler is going to have is to make it clear in your user-agent string who you are: Aarobot Crawler 0.9 created by John Doe. See http://example.com/aarobot.html for more information. Where http://example.com/aarobot.html explains what you're trying to accomplish and why you're not a threat. That page should have a few things: Information on how to contact you directly Information about what the crawler collects and why it's collecting it Information on how to opt-out and have any data collected deleted That last one is key: a good opt-out is like a Money Back Guarantee™ and scores an unreasonable amount of goodwill. It should be humane: one simple step (either an email address or, ideally, a form) and comprehensive (there shouldn't be any "gotchas": opt-out means you stop crawling without exception). | {} | {'log_upvote_score': 7, 'links': ['https://softwareengineering.stackexchange.com/questions/91760', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/3249/']} | jdg_83463 |
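To make the "act like a benign crawler" advice above concrete, here is a minimal polite-fetch sketch in Python using only the standard library. The user-agent string, URLs and 5-second delay are placeholders of mine, not values prescribed by the answer:

import time
import urllib.error
import urllib.request
import urllib.robotparser

USER_AGENT = "Aarobot Crawler 0.9 (+http://example.com/aarobot.html)"  # identify yourself clearly
CRAWL_DELAY = 5  # seconds between requests; keep it generous

rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://example.com/robots.txt")
rp.read()

def polite_fetch(url):
    if not rp.can_fetch(USER_AGENT, url):
        return None  # robots.txt says no, so respect it
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = resp.read()
    except urllib.error.HTTPError as e:
        if e.code in (401, 403, 429):
            raise RuntimeError("blocked or throttled; stop crawling this site")
        return None
    time.sleep(CRAWL_DELAY)  # one request at a time, never in parallel
    return body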
stackexchange | llm_judgeable_groundtruth_similarity | 534057 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a circuit which is designed to be powered with a 5VDC source, but has been powered accidentally by a 15VDC source for a couple of seconds. On the PCB I see a transient voltage suppressor (TVS), whose datasheet can be found here (the exact one I have is SP0504BAHTG - SOT23-5.) I suspect this component has been exposed to the 15VDC. Unfortunately, the datasheet doesn't mention anything about absolute maximum ratings for Vin (it does mention that it works up to 5VDC in, but that doesn't imply by itself that the thing would be damaged above 5VDC.) Would you consider it likely that this thing is broken/damaged?
Now provide the response and nothing else.
| Would you consider it likely that this thing is broken/damaged? Yes I would but, there are certain things you can decipher from the data sheet that might help you see why (such as this): - So, with an ESD discharge voltage of +8 kV, the TVS diode would limit the peak voltage to about 12 volts. To get deeper you have to look at what MIL-STD-883 is all about. MIL-STD-883 is the human body model and uses this test circuit: - So, 8 kV is the charged capacitor voltage and, due to the current limiting resistor (1500 Ω), only 12 volts is produced across the device when subject to the ESD surge. This means that there is an initial peak current of: - $$\dfrac{8000-12}{1500} \text{ amps} = \text{5.32 amps}$$ So, if you applied a continuous 15 volts, there has to be more current flowing into the tiny device than 5.32 amps so, ask yourself, how long will a little tiny device last when having 15 volts across its terminals and a current of (maybe) 10 amps flowing. That's a power of 150 watts (or maybe it's 100 watts with a little bit of hand waving). Or maybe your 15 volt supply was current limited to 2 amps but, again, how long would a little itsy-bitsy device in an SOT23-6 package last with maybe 15 or 20 watts applied? My guess - a few milliseconds - how long was the 15 volt supply connected? If longer than a few milliseconds then this device has gone to meet its maker; it's an ex-TVS diode, it's passed-on; it's a stiff; it is bereft of (useful) life; it's shuffled off its mortal coil and joined the choir invisible. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/534057', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/167800/']} | jdg_83464 |
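The arithmetic quoted in the answer above is easy to reproduce in a few lines of Python; note that the 10 A continuous fault current is the answer's guess rather than a datasheet figure:

# Peak current during a MIL-STD-883 human-body-model discharge:
# 8 kV source, 1500 ohm series resistor, ~12 V clamped across the TVS
i_peak = (8000 - 12) / 1500
print(f"ESD peak current ~ {i_peak:.2f} A")      # ~5.32 A, but only for nanoseconds

# Continuous 15 V fault with a guessed 10 A of fault current
p_fault = 15 * 10
print(f"Continuous fault power ~ {p_fault} W")   # far beyond what an SOT23-5 package can survive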
stackexchange | llm_judgeable_groundtruth_similarity | 1333991 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A question goes like this: Find the HCF of $\underbrace{111\ldots 11}_{100\text{ ones}}$ and $\underbrace{111\ldots11}_{60 \text{ ones}}$. The answer is $\underbrace{111\ldots11}_{20 \text{ ones}}$. I'd like to know how this question is solved, along with the logic behind the steps. All help is appreciated. Cheers!
Now provide the response and nothing else.
| 3 steps of Euclid's algorithm will get you there: $$\begin{align} & \gcd(\underbrace{11\cdots11}_{100},\underbrace{11\cdots11}_{60})\\ ={}& \gcd(\underbrace{11\cdots11}_{40},\underbrace{11\cdots11}_{60}) \\ ={}& \gcd(\underbrace{11\cdots11}_{40},\underbrace{11\cdots11}_{20})\\ ={}& \gcd(\underbrace{11\cdots11}_{20},\underbrace{11\cdots11}_{20}) = \underbrace{11\cdots11}_{20} \end{align} $$ First subtract $10^{40}$ times the right number from the left, then $10^{20}$ times the left number from the right, then $10^{20}$ times the right number from the left again. This example generalizes to seeing that$$ \gcd(\underbrace{11\cdots11}_{a\text{ ones}},\underbrace{11\cdots11}_{b\text{ ones}}) = \underbrace{11\cdots11}_{\gcd(a,b)\text{ ones}} $$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1333991', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/249724/']} | jdg_83465 |
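Both the specific result and the general identity are easy to confirm numerically, since Python integers are arbitrary precision. A small sketch, assuming Python 3.5+ for math.gcd:

from math import gcd

def repunit(n):
    # 111...1 with n ones, written as (10**n - 1) / 9
    return (10**n - 1) // 9

assert gcd(repunit(100), repunit(60)) == repunit(20)

# General identity: the gcd of two repunits is the repunit of the gcd of their lengths
for a, b in [(100, 60), (12, 18), (35, 21), (7, 13)]:
    assert gcd(repunit(a), repunit(b)) == repunit(gcd(a, b))
print("all checks passed")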
stackexchange | llm_judgeable_groundtruth_similarity | 185793 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I may be being daft here, but if I want to do something while a process is running, why do I do: while kill -0 $PIDdo #...done instead of until kill -0 $PIDdo #...done ? kill -0 exits with code 1 if the process is not found, and 0 if the process is found: $ kill -0 123444-bash: kill: (123444) - No such process$ echo $?1$ screen -ls | grep rofl 28043.rofl (02/19/2015 02:27:56 PM) (Detached)$ kill -0 28043$ echo $?0 So if the process is running, wouldn't the while loop boil down to: while 0do #...done which would then never execute?
Now provide the response and nothing else.
| When dealing with return codes, "0" is a success and non-zero is a failure. The syntax of a while loop is: while COMMANDS; do ...; done The while statement checks the return code of the last command in the provided list of commands. In your last example of while 0 , bash will attempt to execute a command called "0" and check its return code. A literal 0 is not special to bash outside of an arithmetic context; inside that context, 0 is considered false. For example: while (( 0 )); do ...; done # the body never executes This case is special because the keyword (( is treated as a command, which returns non-zero since its result is 0. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/185793', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/67003/']} | jdg_83466 |
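As a side note on what kill -0 itself does: signal 0 is never delivered, it only asks the kernel whether the process exists and whether you may signal it. The same probe, sketched in Python for a POSIX system (the PID used at the end is just an illustration):

import os

def is_running(pid):
    """Return True if a process with this PID exists (the equivalent of `kill -0 $PID`)."""
    try:
        os.kill(pid, 0)          # signal 0: existence/permission check only, nothing is sent
    except ProcessLookupError:   # no such process -> `kill -0` would exit with status 1
        return False
    except PermissionError:      # process exists but belongs to another user
        return True
    return True

print(is_running(1))  # PID 1 (init/systemd) is always present on Linux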
stackexchange | llm_judgeable_groundtruth_similarity | 226569 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How do we show that equality holds in the triangle inequality $|a+b|=|a|+|b|$ iff both numbers are positive, both are negative, or one is zero? I have already shown that equality holds when one of the three conditions is satisfied.
Now provide the response and nothing else.
| If $a$ and $b$ are positive, then $|a+b|=a+b=|a|+|b|$. If they are negative, then $|a+b|=-a-b=|a|+|b|$. Suppose one of them is $0$. Without loss of generality suppose $a=0$. Then $|a+b|=|b|=|a|+|b|$. If none of the three situations occurs, then between $a$ and $b$ one is positive and one negative. Without loss of generality, suppose $a$ is positive. Suppose $|a+b|=|a|+|b|$. If $a+b\geq 0$, then $a+b=a-b$ so that $b=0$, a contradiction. If $a+b<0$, then $-a-b=a-b$ so that $a=0$, a contradiction. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/226569', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/47610/']} | jdg_83467 |
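An alternative way to get the same trichotomy in one line, assuming $a$ and $b$ are real; this reformulation is my addition rather than part of the answer above. Since both sides are non-negative, squaring is an equivalence, so $$|a+b|=|a|+|b| \iff a^2+2ab+b^2=a^2+2|ab|+b^2 \iff ab=|ab| \iff ab\ge 0,$$ and $ab\ge 0$ holds exactly when both numbers are positive, both are negative, or at least one of them is zero.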
stackexchange | llm_judgeable_groundtruth_similarity | 9535078 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
<phpunit backupGlobals="false" colors="true"> <testsuite name="app1" > <directory>./app1</directory> </testsuite> <testsuite name="app2" > <directory>./app2</directory> </testsuite></phpunit> How can I make the first and second test suites load different bootstrap files?
Now provide the response and nothing else.
| What I did is to have a Listener. phpunit.xml <?xml version="1.0" encoding="UTF-8"?><phpunit bootstrap="./phpunit_bootstrap.php" backupGlobals="false" backupStaticAttributes="false" verbose="true" colors="true" convertErrorsToExceptions="true" convertNoticesToExceptions="true" convertWarningsToExceptions="true" processIsolation="false" stopOnFailure="false" syntaxCheck="true"> <testsuites> <testsuite name="unit"> <directory>./unit/</directory> </testsuite> <testsuite name="integration"> <directory>./integration/</directory> </testsuite> </testsuites> <listeners> <listener class="tests\base\TestListener" file="./base/TestListener.php"></listener> </listeners></phpunit> Then TestListener.php class TestListener extends \PHPUnit_Framework_BaseTestListener{ public function startTestSuite(PHPUnit_Framework_TestSuite $suite) { if (strpos($suite->getName(),"integration") !== false ) { // Bootstrap integration tests } else { // Bootstrap unit tests } }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9535078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/505726/']} | jdg_83468 |
stackexchange | llm_judgeable_groundtruth_similarity | 235482 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If I push or hit an object in space (vacuum and no gravitation) in a direction that does not pass through its centroid, will it rotate or move along a straight line? I expect that on Earth it will depend on which is less difficult for the object (rotation or linear movement), so the object will do some combination of both movements (rotating and also moving along the direction of the impulse or force). But how could an object "decide" what to do in space, where there is no resistance?
Now provide the response and nothing else.
| Any linear force not going through the centre of mass will create torque, which, as I hope you know, is related to how far the line of force is from the centre of mass. So, if you manage to hit the object exactly at its centre of mass, i.e. the line of force passes directly through the centre of mass, then it will show NO ROTATION. It will go straight ahead in a line. But if you fail to do so, i.e. the line of force misses the centre of mass, it will show BOTH kinds of motion, rotational and linear. It will go straight ahead in a line as in the previous case, but it will also rotate. How fast it rotates depends on how badly you missed the centre of mass. But in both cases the total momentum will be (has to be) the same. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/235482', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/106975/']} | jdg_83469 |
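To make the bookkeeping behind "it does both" explicit (my restatement, not the answerer's), write the two standard rigid-body equations, with $\vec r$ drawn from the centre of mass to the point where the force acts: $$\frac{d\vec p}{dt}=\vec F, \qquad \frac{d\vec L}{dt}=\vec r\times\vec F.$$ The first says the centre of mass picks up the same linear momentum no matter where you push; the second says angular momentum builds up whenever the line of action misses the centre of mass, which is why an off-centre hit produces translation plus rotation while the total linear momentum is the same either way.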
stackexchange | llm_judgeable_groundtruth_similarity | 34833653 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using Spark 1.5 and Scala 2.10.6 I'm trying to filter a dataframe via a field "tags" that is an array of strings. Looking for all rows that have the tag 'private'. val report = df.select("*") .where(df("tags").contains("private")) getting: Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve 'Contains(tags, private)' due to data type mismatch: argument 1 requires string type, however, 'tags' is of array type.; Is the filter method better suited? UPDATED: the data is coming from cassandra adapter but a minimal example that shows what I'm trying to do and also gets the above error is: def testData (sc: SparkContext): DataFrame = { val stringRDD = sc.parallelize(Seq(""" { "name": "ed", "tags": ["red", "private"] }""", """{ "name": "fred", "tags": ["public", "blue"] }""") ) val sqlContext = new org.apache.spark.sql.SQLContext(sc) import sqlContext.implicits._ sqlContext.read.json(stringRDD) } def run(sc: SparkContext) { val df1 = testData(sc) df1.show() val report = df1.select("*") .where(df1("tags").contains("private")) report.show() } UPDATED: the tags array can be any length and the 'private' tag can be in any position UPDATED: one solution that works: UDF val filterPriv = udf {(tags: mutable.WrappedArray[String]) => tags.contains("private")}val report = df1.filter(filterPriv(df1("tags")))
Now provide the response and nothing else.
| I think if you use where(array_contains(...)) it will work. Here's my result: scala> import org.apache.spark.SparkContextimport org.apache.spark.SparkContextscala> import org.apache.spark.sql.DataFrameimport org.apache.spark.sql.DataFramescala> def testData (sc: SparkContext): DataFrame = { | val stringRDD = sc.parallelize(Seq | ("""{ "name": "ned", "tags": ["blue", "big", "private"] }""", | """{ "name": "albert", "tags": ["private", "lumpy"] }""", | """{ "name": "zed", "tags": ["big", "private", "square"] }""", | """{ "name": "jed", "tags": ["green", "small", "round"] }""", | """{ "name": "ed", "tags": ["red", "private"] }""", | """{ "name": "fred", "tags": ["public", "blue"] }""")) | val sqlContext = new org.apache.spark.sql.SQLContext(sc) | import sqlContext.implicits._ | sqlContext.read.json(stringRDD) | }testData: (sc: org.apache.spark.SparkContext)org.apache.spark.sql.DataFramescala> | val df = testData (sc)df: org.apache.spark.sql.DataFrame = [name: string, tags: array<string>]scala> val report = df.select ("*").where (array_contains (df("tags"), "private"))report: org.apache.spark.sql.DataFrame = [name: string, tags: array<string>]scala> report.show+------+--------------------+| name| tags|+------+--------------------+| ned|[blue, big, private]||albert| [private, lumpy]|| zed|[big, private, sq...|| ed| [red, private]|+------+--------------------+ Note that it works if you write where(array_contains(df("tags"), "private")) , but if you write where(df("tags").array_contains("private")) (more directly analogous to what you wrote originally) it fails with array_contains is not a member of org.apache.spark.sql.Column . Looking at the source code for Column , I see there's some stuff to handle contains (constructing a Contains instance for that) but not array_contains . Maybe that's an oversight. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34833653', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7223/']} | jdg_83470 |
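For anyone hitting this from the Python API, the same fix carries over to PySpark. This is a small sketch of mine that mirrors the Scala test data above; it uses SparkSession, the 2.x entry point (on the 1.5 release in the question you would build a SQLContext instead), and array_contains from pyspark.sql.functions, which has been available since 1.5:

from pyspark.sql import SparkSession
from pyspark.sql.functions import array_contains, col

spark = SparkSession.builder.appName("tags-filter").getOrCreate()

df = spark.createDataFrame(
    [("ed", ["red", "private"]), ("fred", ["public", "blue"])],
    ["name", "tags"],
)

# where(array_contains(...)) works; df.tags.contains("private") would not,
# because Column.contains expects a string column, not an array column.
report = df.where(array_contains(col("tags"), "private"))
report.show()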
stackexchange | llm_judgeable_groundtruth_similarity | 15272403 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Being completely inexperienced in clojure, and without any functional programming practice since college, I'm trying to interpret some example code to figure out the clojure syntax. I started by coding several versions of Fibonacci ( https://gist.github.com/pcalcao/ea4176719d778ea3ab9e ), but I still can't say I fully understand the more complex forms. For instance, this: (defn fib_map [n] (last (take (+ n 1) (map first (iterate (fn [[a b]] [b (+ a b)]) [0 1]))))) I'm struggling to really understand the innermost part of this code: fn [[a b]] [b (+ a b)] Now, from my understanding, we're creating an anonymous function that receives one parameter, a vector with two values (this is destructuring, right?), and returns another vector. Now, what is the reason we would do this, instead of: fn [a b] [b (+ a b)] Are these equivalent? Or are we simply making our anonymous function receive a single parameter as a "gimmick" to use in iterate ? Sorry if this is totally obvious, but like I said, Lisp-like languages aren't my strong point yet.
Now provide the response and nothing else.
| You already figured it out correctly yourself. Function of the form (fn [[a b]] ...) is using destructuring. It takes a single parameter that should be a vector or another type of object that supports clojure's nth function. Using destructuring, it "pulls" the first two values out of the vector and assigns them to local variables a and b . Function of the form (fn [a b] ...) is a function of two parameters. The two are not equivalent. The reason you have to use the (fn [[a b]] ...) form with iterate is that iterate only works with single-parameter functions. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15272403', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/432806/']} | jdg_83471 |
stackexchange | llm_judgeable_groundtruth_similarity | 158750 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am wondering if there is a way to combine multiple images into a single image using only JavaScript. Is this something that Canvas will be able to do? The effect can be done with positioning, but can you combine the images into a single image for download? Update Oct 1, 2008: Thanks for the advice. I was helping someone work on a JS/CSS-only site with jQuery, and they were looking to have some Mac OS dock-like image effects with multiple images that overlay each other. The solution we came up with was just absolute positioning, applying the effect to a relatively positioned parent <div>. It would have been much easier to combine the images and create the effect on that single image. It then got me thinking about online image editors like Picnik and wondering if there could be a browser-based image editor with Photoshop capabilities written only in JavaScript. I guess that is not a possibility; maybe in the future?
Now provide the response and nothing else.
| I know this is an old question and the OP found a workaround solution, but this will work if the images and canvas are already part of the HTML page. <img id="img1" src="imgfile1.png"><img id="img2" src="imgfile2.png"><canvas id="canvas"></canvas><script type="text/javascript">var img1 = document.getElementById('img1');var img2 = document.getElementById('img2');var canvas = document.getElementById('canvas');var context = canvas.getContext('2d');canvas.width = img1.width;canvas.height = img1.height;context.globalAlpha = 1.0;context.drawImage(img1, 0, 0);context.globalAlpha = 0.5; //Remove if pngs have alphacontext.drawImage(img2, 0, 0);</script> Or, if you want to load the images on the fly: <canvas id="canvas"></canvas><script type="text/javascript">var canvas = document.getElementById('canvas');var context = canvas.getContext('2d');var img1 = new Image();var img2 = new Image();img1.onload = function() { canvas.width = img1.width; canvas.height = img1.height; img2.src = 'imgfile2.png';};img2.onload = function() { context.globalAlpha = 1.0; context.drawImage(img1, 0, 0); context.globalAlpha = 0.5; //Remove if pngs have alpha context.drawImage(img2, 0, 0);}; img1.src = 'imgfile1.png';</script> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/158750', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/415/']} | jdg_83472 |
stackexchange | llm_judgeable_groundtruth_similarity | 593247 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I watched an excellent video about The Surprising Secret of Synchronization There are examples of pendulum clocks or metronomes synchronizing when coupled by a swinging platform. Other examples are flashing fireflies in a tree or an applauding audience in a theatre. What about electronic crystal oscillators? Do they synchronize when powered by the same power source with a high internal resistance and only a small decoupling capacitor? I am thinking of crystal oscillators of the same type and the same nominal frequency but with the usual small deviations of some 10 to 100 ppm. simulate this circuit – Schematic created using CircuitLab P.s. Of course this is nothing that a circuit designer should do. If several 10 MHz clocks are needed, they should be derived from the same master clock.
Now provide the response and nothing else.
| They can do. When you want oscillators to lock to others like this, it's called 'injection locking', and is a useful but little-known technique. You would apply a bit more signal for injection than just running them side by side though. It's perhaps most frequently used with harmonic or sub-harmonic locking to make a very low phase noise multiplier or divider. Commercial packaged crystal oscillators will usually have sufficient internal power supply decoupling so that any interference of one oscillator on another will not cause such a level of pulling that the two would be pulled into synchronisation. However, if you are building a low noise synthesiser, you can guarantee that the effect of one oscillator on another will be visible as frequency modulation. It usually takes some serious effort with decoupling, screening and buffering to get the influence below the -100dB level that separates a high quality synthesiser from a smelly heap of circuits. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/593247', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/115114/']} | jdg_83473 |
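If you just want to play with the locking behaviour itself, stripped of all circuit detail, a toy model of two weakly coupled phase oscillators shows it. This is purely an illustrative sketch of the Adler/Kuramoto picture with made-up numbers, not a simulation of real crystal oscillators:

import math

f1, f2 = 10_000_000.0, 10_000_400.0  # two "10 MHz" oscillators, 40 ppm apart
coupling = 5000.0                     # coupling strength in rad/s (made-up value)
dt, steps = 1e-7, 200_000

phi1 = phi2 = 0.0
for _ in range(steps):
    dphi1 = 2 * math.pi * f1 + coupling * math.sin(phi2 - phi1)
    dphi2 = 2 * math.pi * f2 + coupling * math.sin(phi1 - phi2)
    phi1 += dphi1 * dt
    phi2 += dphi2 * dt

# If the coupling exceeds pi * (f2 - f1), the phase difference settles to a constant
# (injection lock); otherwise it keeps growing and the oscillators slip cycles.
print("final phase difference (rad):", (phi2 - phi1) % (2 * math.pi))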
stackexchange | llm_judgeable_groundtruth_similarity | 4261350 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two scala.xml.Elem objects (actual, expected). I am using JUnit 4, but have also included XMLUnit 1.3. Is there any easy way to compare the two objects for equality, ignoring attribute order and insignificant whitespace in the XML? I tried XMLUnit.assertXMLEqual() , but it complains that the types are scala.xml.Elem . I know that I can use equals or == , but I would like the benefit of having the assertion print the two values when they are not equal. If I use assertTrue(actual.equals(expected)) , and they are not equal, the only output will be "assertion failed".
Now provide the response and nothing else.
| Use the version of assertTrue that allows passing custom messages public static void assertTrue(java.lang.String message, boolean condition) and (for example) diff to produce the string with the descendand nodes that aren't equal scala> val xml1 = <person><name>john</name><lastname>smith</lastname></person>xml1: scala.xml.Elem = <person><name>john</name><lastname>smith</lastname></person>scala> val xml2 = <person><name>jane</name><lastname>smith</lastname></person>xml2: scala.xml.Elem = <person><name>jane</name><lastname>smith</lastname></person>scala> assert(xml1 == xml2, xml1.child diff xml2.child mkString(", "))java.lang.AssertionError: assertion failed: <name>john</name> at scala.Predef$.assert(Predef.scala:91) at .<init>(<console>:8) at .<clinit>(<console>) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4261350', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/96233/']} | jdg_83474 |
stackexchange | llm_judgeable_groundtruth_similarity | 28824505 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I haven't used .NET Routing before.I have a URL: http://myurl.com/Account/Login/?IsIPA=true .I want to be able to hit this URL with the following: http://myurl.com/IPA This is the only custom route I want hit. Can I create a route just for a single URL like this? My code that isn't working is: public static void RegisterRoutes(RouteCollection routes){ routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapMvcAttributeRoutes(); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); routes.MapRoute("IPA", "Account/Login/{IsIPA}", new { controller = "Account", action = "Login", IsIPA = "true" });} I get the error: The constraint entry IsIPA on the route with route template Account/Login/{IsIPA}=True must have a string value or be of a type which implements System.Web.Routing.IRouteConstraint .
Now provide the response and nothing else.
| Route matching is similar to a switch case statement. The url parameter and any default values and constraints are all considered to determine whether or not it is a match with the incoming URL. If the route matches, it will then create a dictionary of route values based on the configuration. If the route does not match, the next route in the collection is tried until a match is found (or not). This means the order that routes are specified is important. The default route matches any URL with 0, 1, 2, or 3 segments. Therefore, in most cases you will need to define your custom route before the default route. public static void RegisterRoutes(RouteCollection routes){ routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapMvcAttributeRoutes(); routes.MapRoute( name: "IPA", url: "IPA", defaults: new { controller = "Account", action = "Login", IsIPA = "true" }); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } );} The above configuration will route http://myurl.com/IPA to the Controller named Account and Action method named Login , and pass the additional route key IsIPA . This same URL will be built for the Controller/Action/IsIPA combination because it is the first one that matches in the list. Note that the original URL http://myurl.com/Account/Login/?IsIPA=true will still work and still route to the same location. This configuration just adds an extra route to that resource. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28824505', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1409147/']} | jdg_83475 |
stackexchange | llm_judgeable_groundtruth_similarity | 1555326 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Having being taught during my C++ days about evils of the C-style cast operator I was pleased at first to find that in Java 5 java.lang.Class had acquired a cast method. I thought that finally we have an OO way of dealing with casting. Turns out Class.cast is not the same as static_cast in C++. It is more like reinterpret_cast . It will not generate a compilation error where it is expected and instead will defer to runtime. Here is a simple test case to demonstrate different behaviors. package test;import static org.junit.Assert.assertTrue;import org.junit.Test;public class TestCast{ static final class Foo { } static class Bar { } static final class BarSubclass extends Bar { } @Test public void test ( ) { final Foo foo = new Foo( ); final Bar bar = new Bar( ); final BarSubclass bar_subclass = new BarSubclass( ); { final Bar bar_ref = bar; } { // Compilation error final Bar bar_ref = foo; } { // Compilation error final Bar bar_ref = (Bar) foo; } try { // !!! Compiles fine, runtime exception Bar.class.cast( foo ); } catch ( final ClassCastException ex ) { assertTrue( true ); } { final Bar bar_ref = bar_subclass; } try { // Compiles fine, runtime exception, equivalent of C++ dynamic_cast final BarSubclass bar_subclass_ref = (BarSubclass) bar; } catch ( final ClassCastException ex ) { assertTrue( true ); } }} So, these are my questions. Should Class.cast() be banished to Generics land? There it has quite a few legitimate uses. Should compilers generate compile errors when Class.cast() is used and illegal conditions can be determined at compile time? Should Java provide a cast operator as a language construct similar to C++?
Now provide the response and nothing else.
| I've only ever used Class.cast(Object) to avoid warnings in "generics land". I often see methods doing things like this: @SuppressWarnings("unchecked")<T> T doSomething() { Object o; // snip return (T) o;} It's often best to replace it by: <T> T doSomething(Class<T> cls) { Object o; // snip return cls.cast(o);} That's the only use case for Class.cast(Object) I've ever come across. Regarding compiler warnings: I suspect that Class.cast(Object) isn't special to the compiler. It could be optimized when used statically (i.e. Foo.class.cast(o) rather than cls.cast(o) ) but I've never seen anybody using it - which makes the effort of building this optimization into the compiler somewhat worthless. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1555326', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/185722/']} | jdg_83476 |
stackexchange | llm_judgeable_groundtruth_similarity | 28849900 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a route in Node.js/Express that reads data from two queries and then increments a count based on that data from the queires. Since Node.js is asynchronous my total is displayed before all the data has been read. I created a simple example that gets to the point of what I am currently doing var express = require('express');var router = express.Router();var total = 0;/* GET home page. */router.get('/', function(req, res, next) { increment(3); increment(2); console.log(total); res.end();});var increment = function(n){ //Wait for n seconds before incrementing total n times setTimeout(function(){ for(i = 0; i < n; i++){ total++; } }, n *1000);};module.exports = router; I'm not sure what I would have to do in order to wait until both functions finish before I print the total. Would I have to create a custom Event Emitter to achieve this?
Now provide the response and nothing else.
| Embrace asynchronicity: var express = require('express');var router = express.Router();var total = 0;/* GET home page. */router.get('/', function(req, res, next) { increment(3, function() { // <=== Use callbacks increment(2, function() { console.log(total); res.end(); }); });});var increment = function(n, callback){ // <=== Accept callback //Wait for n seconds before incrementing total n times setTimeout(function(){ for(i = 0; i < n; i++){ total++; } callback(); // <=== Call callback }, n *1000);};module.exports = router; Or use a promises library, or use events. In the end, they're all asynchronous callback mechanisms with slightly different semantics. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28849900', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2361727/']} | jdg_83477 |
stackexchange | llm_judgeable_groundtruth_similarity | 1511 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Currently, there are five permanent members of the United Nations Security Council . India, along with three other nations, is demanding a permanent seat on the UNSC through the G4 . So, on what basis/points does India claim a permanent seat on the UNSC? Why does India think that it should be a permanent member? What role is the Coffee Club playing in this scenario?
Now provide the response and nothing else.
| India India is not only claiming a permanent seat in the Security Council, but also that there's need for a general reform of the Council. Asoke Kumar Mukerji, India's permanent representative at the UN, recently (April 2013) stated: Mukerji described as "truly bizarre" that while nearly 80 per cent of the work of the Security Council is focused on the continent of Africa, the 15-nation body has never had even one permanent member from any of the 53 states of the African continent. Source: India for expansion of UN Security Council , The Times of India When it comes to claiming a permanent seat, India brings forth several arguments: It's a regular contributor to UN's peacekeeping missions, the third largest one behind Pakistan and Bangladesh 1 It's one of the main financial backers of the UN 2 It's the world's largest democracy It's the world's second most populous country It maintains one of the largest armies in the world It possesses nuclear power It has been elected in the Security Council seven times 3 The numbers seem to support that India has a legitimate claim to a permanent seat, at least when considering the Council's current permanent members. This seems to be the opinion of the US as well, with president Obama pledging in 2010 to support India's claim: We salute India’s long history as a leading contributor to United Nations peacekeeping missions. And we welcome India as it prepares to take its seat on the United Nations Security Council. Source: Remarks by the President to the Joint Session of the Indian Parliament in New Delhi, India The Coffee Club The "Coffee Club's" main counter argument is that granting a permanent seat to any of the G4 does not constitute an actual reform of the Security Council, something that in their opinion is sorely needed. Their main counter arguments are summarized in a statement by Giulio Terzi, permanent representative of Italy to the United Nations: a) Quite frankly we do not think that anyone, not even the Chair of the negotiations, has the right to privilege one key issue over all the others, if this is not accepted by everybody. And is not what the Member States in their various groups have requested; b) The model of enlargement in two categories, as defined in the July 16 letter, does not actually exist as a model. Just as the proposal for an enlargement in only one category does not represent a model. In reality these are very general definitions within which there are numerous views: only in some cases true models. Within the position favorable to an expansion in two categories, for example, there is a broad range of views that are sometime diametrically opposed. ... c) By the same logic, it is clearly a deformation to posit a presumed level of support in favor of a presumed model of expansion in two categories. ... There is no objective possibility to identify five or six Countries that, more than others, can advance superior claims to a permanent seat. The so-called key actors on the international scene are far more numerous than the four celebrated “great pretenders.” The influence of these actors, in today’s world, changes at unprecedented speed, due to both political and economic factors. If those demanding national permanent seats wish to refer to objective criteria, it is clear that the assigning of such seats would be valid exclusively on the basis of two conditions: that they be subject to elections, and that these elections be periodic. 
Only the periodic approval by the membership can assure a constant assessment of respect for these criteria. The very distinguished Representative of Brazil said that we don’t need criteria but just a vote of the “peers” in the General Assembly. So, how does Brazil justify the fact that the UN Charter does set criteria for the election of the non permanent seats? Do we ask less for the permanent, and more for the non permanent members? The idea of creating new national permanent seats, as we know, originally is rooted in economic factors. In the early 1990s, in fact, some Countries argued that the emerging economic powers should enter the Security Council. If economic capacity is a criterion, however, it is in net contrast to the idea of permanence. For a concrete example, look at the list of the top contributors to the UN budget, how it has changed in the past 20 years, and the trends it indicates: there are Countries on the quick rise and Countries on the decline. For these very reasons, it is hardly a coincidence that six out of the top eleven contributors are Countries that closely sympathize with the positions of UfC. Those laying claims to new permanent seats today will find themselves in the space of just a few years flanked by new emerging powers that also aspire to a privileged seat. A reform that opens to new national permanent seats would give life to a Security Council destined to age very rapidly. After a few seasons we would find ourselves facing new pretenders lamenting again the lack of representation in this body. Source: Meeting of the informal plenary of the General Assembly on the question of equitable representation on and increase in the membership of the Security Council and related matters - Statement by H.E. Ambassador Giulio Terzi Permanent Representative of Italy to the United Nations (New York, 2 September 2009) It's worth mentioning that Pakistan, a country with a troublesome relationship with India, sides with the "Coffee Club". Small 5 and ACT In May 2012, the "Small 5" group (Switzerland, Costa Rica, Jordan, Liechtenstein, and Singapore) presented a draft resolution on Improving Working Methods of the Security Council . Its main points were: A greater role for the troop-contributing countries (TCCs) and those that make large financial contributions in the preparation and modification of mandates for peacekeeping missions Standing invitations to the Chairs of country-specific configurations of the Peacebuilding Commission to participate in relevant debates and, when appropriate, informal discussions Better access for interested and directly concerned States to subsidiary organs Establishing a working group on lessons learned in order to analyze reasons for non-implementation or lack of effectiveness to suggest mechanisms aimed at enhancing implementation of decisions Source: ‘Small-5′ Propose GA Resolution on Improving Working Methods of the Security Council , GAPWBLOG However, the "Small 5" withdrew the draft, after facing fierce opposition. The more recent development seems to be a new group with 21 member states, called ACT: On Thursday, 2 May 2013, a new group called ACT (Accountability, Coherence, and Transparency) - comprising 21 Member States - officially launched its initiative for better working methods of the Security Council. This initiative is a follow up to the multi-year efforts from the S5*, in particular in regard to its draft resolution L.42 Rev.2 from May 2012. 
That resolution was withdrawn due to intense pressure from governments, especially the 5 Permanent Members of the UN Security Council. The initial coordinator of this new ACT group is Switzerland and its Permanent Representative Paul Seger made a presentation, and a fact sheet was distributed. Ambassador Seger described the reasons, the goals, the background, the issues and strategies the 21 governments** would be following in the GA and at the UN. ... Note: * S5: Costa Rica, Jordan, Liechtenstein, Singapore, and Switzerland ** Austria, Chile, Costa Rica, Estonia, Finland, Gabon, Hungary, Ireland, Jordan, Liechtenstein, New Zealand, Norway, Papua New Guinea, Peru, Portugal, Saudi Arabia, Slovenia, Sweden, Switzerland, Tanzania (obs) and Uruguay. Source: 21 Member States Launch New Initiative to Improve the Working Methods of the Security Council , Center for U.N. Reform Education ACT's positions on expanding the Security Council and on G4's claims are not yet known. Further reading Reform of the United Nations Security Council , Wikipedia India’s Quest For United Nations Security Council Reform , Europe'sWorld Obama endorses India for U.N. Security Council seat , The Washington Post Countering China, Obama Backs India for U.N. Council , The New York Times 1 UN says peacekeepers overstretched , Al Jazeera 2 2009 Financial Contributions , UN 3 Countries Elected Members of the Security Council , UN | {} | {'log_upvote_score': 6, 'links': ['https://politics.stackexchange.com/questions/1511', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/1710/']} | jdg_83478 |
stackexchange | llm_judgeable_groundtruth_similarity | 68374227 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here, when I push to the stack, why are the objects being destroyed?

    #include <iostream>
    #include <stack>

    class One
    {
    private:
        int i;
    public:
        One(int i) {this->i = i;}
        ~One() {std::cout << "value " << this->i << " is destroyed\n";}
    };

    int main()
    {
        std::stack<One> stack;
        stack.push(One(1));
        stack.push(One(2));
        std::cout << "Now I'll stop\n";
    }

I expected to see no output before Now I'll stop . But I get this:

    value 1 is destroyed
    value 2 is destroyed
    Now I'll stop
    value 1 is destroyed
    value 2 is destroyed

What should I do if I want prevent them from destroying?
Now provide the response and nothing else.
| One(1) and One(2) construct two temporary objects, which are passed to push and then copied (moved) into stack . Temporaries are destroyed after the full expression immediately. If you want to avoid constructing temporaries you can use emplace instead. Pushes a new element on top of the stack. The element is constructed in-place, i.e. no copy or move operations are performed. E.g. stack.emplace(1);stack.emplace(2); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/68374227', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14031636/']} | jdg_83479 |
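To make the difference concrete, here is a small self-contained sketch (mine, not from the answer) contrasting push with emplace; the only assumption is a C++11 or later compiler, since emplace was added in C++11.

    #include <iostream>
    #include <stack>

    struct One {
        int i;
        explicit One(int i) : i(i) {}
        ~One() { std::cout << "value " << i << " is destroyed\n"; }
    };

    int main() {
        std::stack<One> s;
        s.push(One(1));   // builds a temporary, copies/moves it in, then destroys the temporary
        s.emplace(2);     // constructs the element in place inside the stack; no temporary at all
        std::cout << "Now I'll stop\n";
    }                     // the stored elements are destroyed here, when the stack goes out of scope

With push(One(1)) the temporary's destructor runs before "Now I'll stop" is printed; with emplace there is no temporary, so the only destructor calls happen when the stack itself is destroyed at the end of main.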
stackexchange | llm_judgeable_groundtruth_similarity | 2559659 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to access a JET database from Python? I'm on Linux. All I found was a .mdb viewer in the repositories, but it's very faulty. Thanks
Now provide the response and nothing else.
| MDB Tools is a set of open source libraries and utilities to facilitate exporting data from MS Access databases (mdb files) without using the Microsoft DLLs. Thus non Windows OSs can read the data. Or, to put it another way, they are reverse engineering the layout of the MDB file. Jackcess is a pure Java library for reading from and writing to MS Access databases. It is part of the OpenHMS project from Health Market Science, Inc. . It is not an application. There is no GUI. It's a library, intended for other developers to use to build Java applications. ACCESSdb is a JavaScript library used to dynamically connect to and query locally available Microsoft Access database files within Internet Explorer. Both Jackcess and ACCESSdb are much newer than MDB tools, are more active and have write support. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2559659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/244182/']} | jdg_83480 |
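If you go the MDB Tools route from Python, one pragmatic pattern is to shell out to its mdb-export utility (which writes a table as CSV) and parse the output. This is only a sketch, assuming the mdbtools package is installed and on your PATH; the database path and table name are placeholders.

    import csv
    import subprocess

    def read_mdb_table(mdb_path, table_name):
        # mdb-export prints the named table as CSV on stdout
        output = subprocess.check_output(["mdb-export", mdb_path, table_name],
                                         universal_newlines=True)
        return list(csv.DictReader(output.splitlines()))

    rows = read_mdb_table("mydatabase.mdb", "Customers")
    for row in rows:
        print(row)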
stackexchange | llm_judgeable_groundtruth_similarity | 134695 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
TexPad is creating it. I know that it is under some dead key; I just cannot remember its name. The blue character: I just want to mass-remove them from my document. How can you type it?
Now provide the response and nothing else.
| It is known as carriage return. If you're using vim you can enter insert mode and type CTRL - v CTRL - m . That ^M is the keyboard equivalent to \r . Inserting 0x0D in a hex editor will do the task. How do I remove it? You can remove it using the command perl -p -i -e "s/\r//g" filename As the OP suggested in the comments of this answer here , you can even try a ` dos2unix filename and see if that fixes it. As @steeldriver suggests in the comments, after opening the vim editor, press esc key and type :set ff=unix . References https://stackoverflow.com/questions/1585449/insert-the-carriage-return-character-in-vim https://stackoverflow.com/a/7742437/1742825 -ksh: revenue_ext.ksh: not found [No such file or directory] | {} | {'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/134695', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/16920/']} | jdg_83481 |
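Two other common one-liners for stripping the carriage returns, assuming GNU sed and the standard tr utility are available (the filename is a placeholder):

    # strip a trailing CR from every line, in place (GNU sed)
    sed -i 's/\r$//' filename

    # or write a cleaned copy with tr
    tr -d '\r' < filename > filename.unix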
stackexchange | llm_judgeable_groundtruth_similarity | 24650325 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I use the following code it shows me the error "Type 'HttpConnection' does not conform to protocol 'NSObjectProtocol'":

    class HttpConnection : NSURLConnectionDataDelegate {
        var urlConnection : NSURLConnection?
        weak var delegate : HttpConnecting?

        init(delegate:HttpConnecting){
            self.delegate = delegate;
        }

        func startAsynRequestWithUrlString(url:NSString, withMethod:NSString){
        }
    }

If I subclass HttpConnection from NSObject then it works fine. So my question is: when I need to adopt the NSURLConnectionDataDelegate protocol in Swift, why do I need to inherit the class from NSObject?
Now provide the response and nothing else.
| NSURLConnectionDataDelegate itself inherits from NSURLConnectionDelegate which inherits from NSObjectProtocol . That means that apart from implementing all the methods from NSURLConnectionDataDelegate , and NSURLConnectionDelegate , you also have to implement all the methods from NSObjectProtocol (e.g. equality, hash). You didn't implement them, that's your mistake. If you inherit from NSObject , all that NSObjectProtocol methods are already implemented for you. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24650325', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/760827/']} | jdg_83482 |
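A minimal sketch of the fix the answer describes — derive the class from NSObject so the NSObjectProtocol requirements (hash, isEqual, etc.) are inherited rather than hand-written; HttpConnecting and the empty method body are placeholders taken from the question.

    class HttpConnection : NSObject, NSURLConnectionDataDelegate {
        var urlConnection : NSURLConnection?
        weak var delegate : HttpConnecting?

        init(delegate: HttpConnecting) {
            self.delegate = delegate
            super.init()            // NSObject supplies the NSObjectProtocol conformance
        }

        func startAsynRequestWithUrlString(url: NSString, withMethod: NSString) {
        }
    }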
stackexchange | llm_judgeable_groundtruth_similarity | 36201698 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was working on a solution for Project Euler's Question #4: "Find the largest palindrome made from the product of two 3-digit numbers." I could just write a basic script and loop, but I tend to write things within classes. I've been out of python for a while, so I'm using these exercises to stay familiar with the language. While looping through the factors to figure out the answer, I receive this error: File "p4.py", line 35, in is_palindromen = str(p)RuntimeError: maximum recursion depth exceeded while getting the str of an object I'm guessing it's the way I formatted my recursive method, but I can't figure out how to fix it. Could someone explain to me what I'm doing wrong in terms of structuring my recursive method? The code: import mathclass PalindromeCalculator: def __init__(self, min_factor=100, max_factor=999): self.stable_factor = max_factor self.variable_factor = max_factor def find_max_palindrome(self): return self.check_next_product() def check_next_product(self): product = self.stable_factor * self.variable_factor; if self.is_palindrome(product): print("We found a palindrome! %s" % product) return str(product) else: # Reduce one of the factors by 1 if self.variable_factor == 100: self.variable_factor = 999 self.stable_factor -= 1 else: self.variable_factor -= 1 self.check_next_product() def is_palindrome(self, p): # To check palindrom, pop and shift numbers off each side and check if they're equal n = str(p) length = len(n) if length % 2 == 0: iterations = length / 2 else: iterations = (length - 1) / 2 for i in range(0, iterations): first_char = n[i:i+1] last_char = n[-(i+1)] if first_char != last_char: return False return True And to run the function: start = time.time()calculator = PalindromeCalculator();M = calculator.find_max_palindrome()elapsed = (time.time() - start)print "My way: %s found in %s seconds" % (M, elapsed)
Now provide the response and nothing else.
| update Currently there is no automatic way. See also Angular 2 typescript error when using subscribe function on new router (rc 1) See also https://github.com/angular/angular/issues/6595#issuecomment-244232725 class MyAppComponent { constructor(router: Router) { router.events.subscribe(s => { if (s instanceof NavigationEnd) { const tree = router.parseUrl(router.url); if (tree.fragment) { // you can use DomAdapter const element = document.querySelector("#" + tree.fragment); if (element) { element.scrollIntoView(element); } } } }); }} update In the new router V3-beta.2 you can pass a fragment with router links and router navigation <a [routerLink]="..." fragment="top"> it should scroll to it but also adds #top to the URL (not tested myself yet) Update Original There is an open issue covering this https://github.com/angular/angular/issues/6595 A workaround (mentioned in https://github.com/angular/angular/issues/6946 ) Inject the router, subscribe to route changes and invoke the scroll to top: >= RC.x router.changes.subscribe() => { window.scrollTo(0, 0);}); beta router.events.filter(e => e instanceof NavigationEnd).subscribe(() => { window.scrollTo(0, 0);}); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36201698', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/213259/']} | jdg_83483 |
stackexchange | llm_judgeable_groundtruth_similarity | 31368 |
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Unlike isearch , the evil-search module ignores the case-fold-search option. I am enabling evil-search like this: (setq-default evil-search-module 'evil-search) Is it possible to make evil-search case-sensitive?
Now provide the response and nothing else.
| What to do You can customize the variable evil-ex-search-case to tell itwhat you want evil to do when searching. Here's the docstring: evil-ex-search-case is a variable defined in evil-vars.el . Its value is smart Documentation: The case behaviour of the search command. Smart case means that the pattern is case sensitive if and only if it contains an upper case letter, otherwise it is case insensitive. You can customize this variable. If you dig into the source code, this is the variable in question: (defcustom evil-ex-search-case 'smart "The case behaviour of the search command.Smart case means that the pattern is case sensitive if and onlyif it contains an upper case letter, otherwise it is caseinsensitive." :type '(radio (const :tag "Case sensitive." sensitive) (const :tag "Case insensitive." insensitive) (const :tag "Smart case." smart)) :group 'evil) Now: you can either use the customize interface to choose youroption, or you could put the following in your init file: (setq evil-ex-search-case 'sensitive) Here's how I found this information. First, I asked apropos to tell me about various functions and variables related to evil searches ( M-x apropos evil search ). I then searched that list for mentions of "case," and checked out the variable evil-ex-search-case . When that looked like the relevant variable, I followed the link in the help window to the source code to figure out what value I needed to set it to. | {} | {'log_upvote_score': 4, 'links': ['https://emacs.stackexchange.com/questions/31368', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/2418/']} | jdg_83484 |
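Putting the question's setting and the answer's variable together, an init-file sketch might look like this (the 'sensitive value is one of the three options shown in the defcustom above):

    ;; use evil's own search module instead of isearch
    (setq evil-search-module 'evil-search)
    ;; and make / and ? searches case sensitive
    (setq evil-ex-search-case 'sensitive)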
stackexchange | llm_judgeable_groundtruth_similarity | 6263660 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have something like this:

    public void Delete(T entity)
    {
        Context.DeleteObject(entity);
        Context.SaveChanges();
    }

I end up with an exception: "The object cannot be deleted because it was not found in the ObjectStateManager." If I try to add the entity to the ObjectContext with AttachTo() I get: "An object with the same key already exists in the ObjectStateManager. The ObjectStateManager cannot track multiple objects with the same key." What's wrong?
Now provide the response and nothing else.
| You have to fetch the entity you wish to delete from your context first. Best to do this with a comparison of the primary key. It could look like this, but I do not know the object structure of TabMaster and TabMasterViewModel, so the properties may be wrongly named.

    public void Delete(TabMasterViewModel entity)
    {
        TabMaster des = _tabmasterRepository.FirstOrDefault(e => e.Id == entity.ID);
        if (des != null)
        {
            _tabmasterRepository.Delete(des);
        }
    }

You have created a new entity and mapped the values from your view model to that entity. But the context does not know of that entity, so it could not delete it. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6263660', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/735598/']} | jdg_83485
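If the repository only ever receives a detached object or a key rather than a tracked entity, another sketch (mine, not from the answer) is to look the tracked instance up through the ObjectContext by key and delete that. This assumes the entity's key property is literally named "Id", which you would need to adjust.

    public void DeleteById(object id)
    {
        // GetEntityName() is the helper from the question, e.g. "Container.TabMasters"
        var key = new EntityKey(GetEntityName(), "Id", id);

        object attached;
        if (Context.TryGetObjectByKey(key, out attached))
        {
            Context.DeleteObject(attached);
            Context.SaveChanges();
        }
    }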
stackexchange | llm_judgeable_groundtruth_similarity | 15900842 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been trying to build an app that shows up as an optional image source when a user tries to share an image using whatsapp. So far I have managed to get my app to show up in the service picker that whatsapp launches using intent filters but I cannot get the image to return correctly to whatsapp. Im posting my code below : public void returnImage(View v){ //Bitmap img; //Bundle selectedImage = new Bundle(); Uri imageURI; Intent shareIntent = new Intent(); switch(v.getId()){ case R.id.eric1 : imageURI = saveToCache(R.drawable.cartman1); shareIntent.putExtra(Intent.EXTRA_STREAM, imageURI); shareIntent.setType("image/png"); setResult(RESULT_OK, shareIntent); Utils.makeToast("Selected",this); System.out.println("--------------------------------"); System.out.println(imageURI.toString()); finish(); }} private Uri saveToCache(int resID) { // TODO Auto-generated method stub Bitmap image = BitmapFactory.decodeResource(getResources(), resID); File imageFile; Date d = new Date(); String imgName = ((Long.toString(d.getTime())).subSequence(1, 9)).toString(); String state = Environment.getExternalStorageState(); printDebug(state); if (Environment.MEDIA_MOUNTED.equals(state)) { File file = getExternalFilesDir(null); if (file != null) { try { //String root = file.getAbsolutePath(); imageFile = new File(file, imgName+".png"); printDebug(imageFile.getAbsolutePath()); FileOutputStream stream = new FileOutputStream(imageFile); boolean complete = image.compress(Bitmap.CompressFormat.PNG, 100, stream); if (!complete) { Log.d("tag", "image not saved"); } Log.d("tag", "image saved"); // Tell the media scanner about the new file so that it is // immediately available to the user. MediaScannerConnection.scanFile(this, new String[] { imageFile.toString() }, null, new MediaScannerConnection.OnScanCompletedListener() { public void onScanCompleted(String path, Uri uri) { Log.i("ExternalStorage", "Scanned " + path + ":"); Log.i("ExternalStorage", "-> uri=" + uri); } }); return Uri.parse(imageFile.getAbsolutePath()); } catch (IOException e) { Log.d("tag", "Can't save image", e); } } } return null; } The app opens and i select the image but whatsapp reports that the image cannot be shared. LogCat shows no errors or warnings. I read the resource Intent-Filter for Whatsapp -> share image but there is no mention of how or what was returned by the app so I'm at a complete loss here.
Now provide the response and nothing else.
| After searching for days, here is a working solution to return images to all other applications (tested for GMail and WhatsApp). First, you need to set an intent-filter in your AndroidManifest.xml (Inside application > activity ). This will list your application when other apps are calling for this intent (like when requesting an image). Note : WhatsApp is using the action.PICK - intent. Adding all intent-filters below even though will provide great compatibility with other apps. <intent-filter> <action android:name="android.intent.action.PICK" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.OPENABLE" /> <data android:mimeType="image/*" /> </intent-filter> <intent-filter> <action android:name="android.intent.action.SEND" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.OPENABLE" /> <data android:mimeType="image/*" /> </intent-filter> <intent-filter> <action android:name="android.intent.action.GET_CONTENT" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.OPENABLE" /> <data android:mimeType="image/*" /> </intent-filter> The second thing you need to care for is responding to an idling intent.This should consist of two parts: First you should check whether your application has been executed to return an image or if its run all by itself. Intent intent = getIntent(); if (intent!=null && intent.getType()!=null) //check if any application has executed your app { if(intent.getType().indexOf("image/") != -1) isinint=true; //check if the requested type is an image. If true, set a public static boolean, f.e. named isinint to true. Default is false. } Now, when the user has picked an image, set the result as following. Due to memory issues, you should copy the file you want to return onto the sdcard and return the Uri. if(isinint) //check if any app cares for the result { Intent shareIntent = new Intent(android.content.Intent.ACTION_SEND, Uri.fromFile(openprev)); //Create a new intent. First parameter means that you want to send the file. The second parameter is the URI pointing to a file on the sd card. (openprev has the datatype File) ((Activity) context).setResult(Activity.RESULT_OK, shareIntent); //set the file/intent as result ((Activity) context).finish(); //close your application and get back to the requesting application like GMail and WhatsApp return; //do not execute code below, not important } Note! : You can leave out ((Activity) context) when calling the data in OnCreate or similiar void's. As i use this snippet in another void, i need to provide a context in any case that has to be defined as displayed. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15900842', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1248938/']} | jdg_83486 |
stackexchange | llm_judgeable_groundtruth_similarity | 3016683 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently studying COM. I found that a COM DLL is kind of built upon the traditional DLL infrastructure. When we build COM DLLs, we still rely on the traditional DLL export methods to lead us to the internal COM co-classes. If COM is for component reuse at the binary level, I think the traditional DLL can achieve the same thing. They both expose functions, they are both binary, so what's the point of turning to the COM approach? Currently, I have the feeling that the traditional DLL exposes methods in a " flat " manner, while the COM DLL exposes methods in an " OOP " hierarchy manner. And the OOP manner seems to be a better approach. Could this be the reason why COM prevails? Many thanks.
Now provide the response and nothing else.
| No, there's a Big difference. COM has a well defined protocols for creating objects, exposing methods, managing memory, publishing type information, managing threading. There is practically no language left that doesn't support using a COM server, no matter what language it was written in. You will not get that from exposing your own functions directly. That will likely be only usable from a program written in C/C++ (so it can read your header files), compiled with the exact same version of the C++ compiler and no lack of all kinds of interop problems. Something as simple as exposing a C++ class object like std::string is not safe. Neither the memory layout is guaranteed to be compatible, nor is there any kind of memory ownership protocol. It could well be more OOPy, COM doesn't support inheritance because OOP is so hard to get compatible at the binary level. That problem requires runtime support that all code buys into, VMs like .NET and Java. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3016683', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/264052/']} | jdg_83487 |
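To make the contrast concrete, here is a rough sketch (mine, not from the answer) of the two styles; the interface name and method are made up, and a real COM class would additionally publish a CLSID, a type library and a class factory.

    #include <unknwn.h>

    // Plain DLL export: only safely callable from code built with compatible
    // headers, compiler settings and runtime.
    extern "C" __declspec(dllexport) int AddNumbers(int a, int b);

    // COM style: the caller only depends on the binary vtable layout and the
    // IUnknown contract (QueryInterface / AddRef / Release), so any COM-aware
    // language can consume it.
    struct ICalculator : public IUnknown
    {
        virtual HRESULT STDMETHODCALLTYPE Add(int a, int b, int* result) = 0;
    };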
stackexchange | llm_judgeable_groundtruth_similarity | 49403069 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to make a simple service which starts with device boot. Thing is that device return message "Unfortunately, [app_name] has stopped." I am struggling with this problem from few hours, with looking for mistake, but it is too simple.. Hope, you guys can help me with this problem. This is my code: AndroidManifest.xml <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED"/><application android:allowBackup="true" android:label="@string/app_name"> <receiver android:name=".StartReceiver"> <intent-filter> <action android:name="android.intent.action.BOOT_COMPLETED"/> </intent-filter> </receiver> <service android:name=".PService" /></application> StartReceiver.cs [BroadcastReceiver][IntentFilter(new[] { Intent.ActionBootCompleted })]public class StartReceiver : BroadcastReceiver{ public override void OnReceive(Context context, Intent intent) { Intent startIntent = new Intent(context, typeof(PService)); context.StartService(startIntent); }} and lastly PService.cs [Service] public class PService : Service { public override void OnCreate() { base.OnCreate(); } public override IBinder OnBind(Intent intent) { return null; } public override StartCommandResult OnStartCommand(Intent intent, StartCommandFlags flags, int startId) { Toast.MakeText(this, "Start", ToastLength.Short).Show(); return StartCommandResult.Sticky; } public override void OnDestroy() { base.OnDestroy(); Toast.MakeText(this, "Stop", ToastLength.Short).Show(); } } Additional this service application is targetted to API 19 (4.4.2 KitKat) Android version. I think there will be really small mistake, made by me but truly I cant find it out.. Thanks in advance for any help.
Now provide the response and nothing else.
| By adding the receiver in the manifest and via the BroadcastReceiverAttribute you have two receivers in your manifest. Plus the one in your manifest will not work since it is not the MD5-based Java name that Xamarin creates by default. Via Attributes 1) Remove the receiver and boot permission from your manifest 2) Add your boot permissions via an attribute) [assembly: UsesPermission(Manifest.Permission.ReceiveBootCompleted)] 3) Add the manifest entry via attributes: [BroadcastReceiver(Enabled = true)][IntentFilter(new[] { Intent.ActionBootCompleted })] public class BootBroadcastReceiver : BroadcastReceiver Via manifest 1) Add the manifest entry for the boot permission <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED"/> 2) Add the receiver and use a full qualify Java class name: <receiver android:name="com.yourpackagename.app.BootBroadcastReceiver"> <intent-filter> <action android:name="android.intent.action.BOOT_COMPLETED"/> </intent-filter></receiver> 3) Add a Name parameter to the BroadcastReceiverAttribute for the fully qualified Java class name that you used in the manifest [BroadcastReceiver(Name = "com.yourpackagename.app.BootBroadcastReceiver", Enabled = true)][IntentFilter(new[] { Intent.ActionBootCompleted })] public class BootBroadcastReceiver : BroadcastReceiver | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49403069', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6325582/']} | jdg_83488 |
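Putting the attribute route together with the receiver body from the question, a complete receiver might look roughly like this (PService is the service class from the question):

    [BroadcastReceiver(Enabled = true)]
    [IntentFilter(new[] { Intent.ActionBootCompleted })]
    public class BootBroadcastReceiver : BroadcastReceiver
    {
        public override void OnReceive(Context context, Intent intent)
        {
            if (intent?.Action == Intent.ActionBootCompleted)
            {
                context.StartService(new Intent(context, typeof(PService)));
            }
        }
    }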
stackexchange | llm_judgeable_groundtruth_similarity | 27679456 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have the following spec:

    describe Thing do
      it 'can read data' do
        @data = get_data_from_file # [ '42', '36' ]
        expect(@data.count).to eq 2
      end

      it 'can process data' do
        expect(@data[0].to_i).to eq 42 # Fails because @data is nil
      end
    end

All I wanted is to have a variable shared in the given describe or context . I would write a value in one example and read it in another. How do I do that?
Now provide the response and nothing else.
| You should use before(:each) or before(:all) block: describe Thing do before(:each) do @data = get_data_from_file # [ '42', '36' ] end it 'can read data' do expect(@data.count).to eq 2 end it 'can process data' do expect(@data[0].to_i).to eq 42 endend The difference is that before(:each) will be executed for each case separately and before(:all) once before all examples in this describe/context . I would recommend you to prefer before(:each) over before(:all) , because each example will be isolated in this case which is a good practice. There are rare cases when you want to use before(:all) , for example if your get_data_from_file has a long execution time, in this case you can, of course, sacrifice tests isolation in favor of speed. But I want to aware you, that when using before(:all) , modification of your @data variable in one test( it block) will lead to unexpected consequences for other tests in describe/context scope because they will share it. before(:all) example: describe MyClass do before(:all) do @a = [] end it { @a << 1; p @a } it { @a << 2; p @a } it { @a << 3; p @a }end Will output: [1][1, 2][1, 2, 3] UPDATED To answer you question describe MyClass do before(:all) do @a = [] end it { @a = [1]; p @a } it { p @a }end Will output [1][] Because in first it you are locally assigning instance variable @a, so it isn't same with @a in before(:all) block and isn't visible to other it blocks, you can check it, by outputting object_id s. So only modification will do the trick, assignment will cause new object creation. So if you are assigning variable multiple times you should probably end up with one it block and multiple expectation in it. It is acceptable, according to best practices. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27679456', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1597315/']} | jdg_83489 |
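If the file read is cheap, the let helper is another idiomatic way to express this; note that let is memoized within a single example only, so state still does not leak between examples. A sketch, assuming the get_data_from_file helper from the question:

    describe Thing do
      let(:data) { get_data_from_file } # evaluated lazily, once per example

      it 'reads and processes data' do
        expect(data.count).to eq 2
        expect(data[0].to_i).to eq 42
      end
    end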
stackexchange | llm_judgeable_groundtruth_similarity | 3495263 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Which natural deduction rule allows one to derive any consequence from a contradiction in a conditional proof?
Now provide the response and nothing else.
| It is the principle of explosion , also known as ex falso quodlibert : from contradiction, anything follows. In natural deduction, it says that if $\mathcal{D}$ is a derivation with conclusion $\bot$ then, for every formula $\varphi$ , $$\dfrac{\genfrac{}{}{0pt}{}{\ \ \vdots \mathcal{D}}{\bot}}{\varphi}\scriptstyle\text{efq}$$ is a derivation with conclusion $\varphi$ and the same hypotheses as $\mathcal{D}$ . For a reference, see here (p. 3) Note that this rule can be used everywhere (not only in a conditional proof). Moreover, unlike the principle of reductio ad absurdum , ex falso quodlibet is accepted not only in classical logic but also in more constructive logics such as intuitionistic logic . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3495263', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_83490 |
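As a tiny worked example in the same notation, here is how efq is used to get an arbitrary formula (written $q$ below) from two contradictory premises; inside a conditional proof you would then discharge the assumptions with $\to$-introduction:

    $$\dfrac{\dfrac{p \qquad \lnot p}{\bot}\scriptstyle{\,\lnot\text{E}}}{q}\scriptstyle{\;\text{efq}}$$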
stackexchange | llm_judgeable_groundtruth_similarity | 3180881 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was making a diagram of different types of groups: finite/infinite, cyclic/non-cyclic, finitely generated/infinitely generated, but realized that I didn't have any examples of infinite groups that are both finitely generated and non-abelian. Does anyone have any examples? :) I was thinking about creating an example based on matrices and matrix multiplication, but I didn't get very far. I know that since I am looking for a finitely generated group, it must be countable.
Now provide the response and nothing else.
| The group $\langle a, b\rangle$ is finitely generated: obviously, it is generated by $\{a, b\}$ , non-abelian: the elements $ab$ and $ba$ are two distinct elements, infinite: The mapping $\mathbb N\to \langle a, b\rangle$ that maps $n$ to $a^n$ is injective. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3180881', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/521121/']} | jdg_83491 |
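Since the question mentions wanting a matrix example: a standard concrete realization of this free group of rank two (by Sanov's theorem, stated here from memory) is the subgroup of $SL_2(\mathbb{Z})$ generated by

    $$A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, \qquad
      B = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix},$$

which is free of rank $2$, hence finitely generated, non-abelian (one can check directly that $AB \ne BA$) and infinite (it contains all the distinct powers $A^n$).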
stackexchange | llm_judgeable_groundtruth_similarity | 115306 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to get a ±15 V power supply and I am wondering where to find it. Should I get a 30V DC power supply in its place?
Now provide the response and nothing else.
| A ±15V supply will have 3 connections (+15, -15, 0), whereas a 30V supply will only have two (+30, 0). A 30V DC power supply probably isn't what you're looking for. You would need a way to provide a low-impedance mid-point at 15V to substitute for 'ground'-- assuming that it's isolated, or that you're very careful about how you interface it to external parts. The best way to go is to get a ±15V supply, unless you really know what you're doing. Asking the question probably means you should just buy (or build) one! | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/115306', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/45569/']} | jdg_83492 |
stackexchange | llm_judgeable_groundtruth_similarity | 281485 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The Galilean principle of relativity: The laws of classical mechanics apply in all inertial reference systems OR No experiment carried out in an inertial frame of reference can determine the absolute velocity of the frame of reference These two statements written above are equivalent. Maxwell's equations were discovered later. My question is (1) how did Maxwell's equations contradict the Galilean principle of relativity? Furthermore if one studies the two postulates of Einstein's special theory of relativity, they can be simply translated as follows: Postulate 1: Galileo was right. Postulate 2: Maxwell was right. (2) How did the Maxwell equations retain the same form in all inertial frames by obeying the Lorentz transformation?
Now provide the response and nothing else.
| A Galilean set of frames is an obvious, common-sense way of viewing motion if we assume the validity of 3 equally obvious-seeming postulates:

1. All clocks measure time at the same rate, independent of their velocity.
2. Objects have no limit on their potential velocity.
3. Rulers have the same length (difference in position between the endpoints at a common time), independent of their velocity.

When Maxwell formulated/compiled his equations, implying that light speed was invariant in every frame, Einstein was forced to consider the implications of this for Galilean transformations and their "obvious" underlying assumptions. If light speed is invariant in all frames, then something has to give to preserve that invariance, and the 3 assumptions above needed to be abandoned to preserve Maxwell's laws. How did the Maxwell equations retain the same form in all inertial frames by obeying the Lorentz transformation? By the development of the Faraday tensor $F_{\mu \nu}$ based on a vector potential $\vec A$ and a scalar potential $\Phi$. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/281485', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/103515/']} | jdg_83493
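For reference (and up to the sign and unit conventions, which vary by textbook), the form-invariance is usually exhibited by writing Maxwell's equations covariantly in terms of that tensor:

    $$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu, \qquad
      \partial_\mu F^{\mu\nu} = \mu_0 J^\nu, \qquad
      \partial_\lambda F_{\mu\nu} + \partial_\mu F_{\nu\lambda} + \partial_\nu F_{\lambda\mu} = 0,$$

so that a Lorentz transformation $x^\mu \to \Lambda^\mu{}_\nu x^\nu$ carries solutions into solutions, which is exactly the statement that the equations keep the same form in every inertial frame.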
stackexchange | llm_judgeable_groundtruth_similarity | 505523 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using this code to invoke the run method of a List of classes that I loaded dynamically from DLLs:

    for (int i = 0; i < robotList.Count; i++)
    {
        Type t = robotList[i]; //robotList is a List<Type>
        object o = Activator.CreateInstance(t);
        t.InvokeMember("run", BindingFlags.Default | BindingFlags.InvokeMethod, null, o, null);
    }

The InvokeMember call is invoking the run method of each of the classes in the list. Now how can I invoke this run method from InvokeMember in a separate Thread, so that I'll have separate threads running for each of the invoked methods?
Now provide the response and nothing else.
| If you know that all your dynamically loaded types implement Run, could you just require they all implement IRunable and get rid of the reflection part? Type t = robotList[i];IRunable o = Activator.CreateInstance(t) as IRunable;if (o != null){ o.Run(); //do this in another thread of course, see below} If not, this will work: for (int i = 0; i < robotList.Count; i++){ Type t = robotList[i]; object o = Activator.CreateInstance(t); Thread thread = new Thread(delegate() { t.InvokeMember("Run", BindingFlags.Default | BindingFlags.InvokeMethod, null, o, null); }); thread.Start();} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/505523', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/44084/']} | jdg_83494 |
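Combining the two halves of the answer, a sketch of the interface-based version with one thread per robot could look like this (IRunable with a parameterless Run method is the hypothetical interface suggested above):

    foreach (Type t in robotList)
    {
        var runner = Activator.CreateInstance(t) as IRunable;
        if (runner == null)
            continue;

        var thread = new Thread(runner.Run);  // Run matches the ThreadStart signature
        thread.IsBackground = true;           // optional: don't keep the process alive
        thread.Start();
    }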
stackexchange | llm_judgeable_groundtruth_similarity | 19053 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Where is the information in the fields of a session stored? If I, for instance, store in a session something like $_SESSION['foo'] = 'bar' , where is "bar" stored? If I store an object of a class, in which way is it stored? Like $_SESSION['kart'] = new Kart(10) . Could someone get the information stored in that class? How? Could the legitimate owner of that session modify the value of the field "foo"? And could someone change the value of the field "foo" in an already created session of another user?
Now provide the response and nothing else.
| This depends on the webserver used. If we take PHP on Unix as an example it may store the session on the filesystem in the /tmp folder. It creates a file here with the name of the users session ID prefixed with sess_ (Example: /tmp/sess_9gk8f055hd61qll6a8pjpje2n2).The contents of the session can be optionally encrypted before being placed on the browser. For Apache sessions (not necessarily PHP) you may use the mod_session_crypto module ( Read more here ). The owner of the session can not change the session variables as he feels like unless the application allows him to do it. The application logic has to provide means to change the session in order for the user to change the variables. The session object is never transmitted to the client and only a reference to the session (e.g. PHPSESSID) is passed to the client. The session ID should have high entropy and minimum 16 bytes of length in order to be very hard to guess. See OWASP Top 10 - Broken authentication and session management for more information about that. Also see this OWASP Cheat sheet for specific information on how to secure sessions. Attacks against session storage: If there exists a flaw like for example LFI (Local File Inclusion) it may be possible for an attacker to read their own and other users session objects by including the file on the file system. For example if the following example worked I could possibly read my own session data: http://<victim>/?page=../../../../../../../../tmp/sess_<my sessionid> | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/19053', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/9565/']} | jdg_83495 |
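A small sketch that makes the "server-side only" point visible: the client never receives anything except the session ID cookie, and objects placed in $_SESSION are serialized into the sess_<id> file by PHP. The paths shown are typical defaults and differ per installation.

    <?php
    session_start();

    echo session_save_path();   // e.g. /tmp or /var/lib/php/sessions
    echo session_id();          // the only session value the browser ever sees

    $_SESSION['foo']  = 'bar';                      // stored server-side only
    $_SESSION['kart'] = new ArrayObject(array(10)); // objects are serialize()'d into the file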
stackexchange | llm_judgeable_groundtruth_similarity | 3276264 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My attempt Based on the sine rule and the graph of $\sin A = k a$ (where $k$ is a constant) in interval $(0,\pi)$ , increasing $a$ up to $1/k$ will either increase $A$ up to $90^\circ$ . decrease $A$ up to $90^\circ$ . So I cannot conclude that increasing $a$ will increase $A$ . Now I use the cosine rule (it is promising because the cosine is decreasing in the given interval). \begin{align}A &= \cos^{-1}\left(\frac{b^2+c^2-a^2}{2bc}\right)\\B &= \cos^{-1}\left(\frac{a^2+c^2-b^2}{2ac}\right)\\C &= \cos^{-1}\left(\frac{a^2+b^2-c^2}{2ab}\right)\\\end{align} It is hard to show that $0^\circ<A\leq B\leq C<180^\circ$ for any $\triangle ABC$ with $0<a\leq b\leq c$ . Could you show it? It means that I need to show that $$-1<\frac{a^2+b^2-c^2}{2ab}\leq \frac{a^2+c^2-b^2}{2ac} \leq \frac{b^2+c^2-a^2}{2bc}<1$$ for $0<a\leq b\leq c$ .
Now provide the response and nothing else.
| We need $a\ne0$ and $b\ne0$ . If it holds, then we have $(a^3+b^3)^2=c^6=(a^2+b^2)^3$ $a^6+2a^3b^3+b^6=a^6+3a^4b^2+3a^2b^4+b^6$ $2ab=3a^2+3b^2$ $(a-b)^2+2a^2+2b^2=0$ $a=b=0$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3276264', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/3901/']} | jdg_83496 |
stackexchange | llm_judgeable_groundtruth_similarity | 3329990 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was working on the following function: $$f(x) = \frac{\ln x}{\cos x}$$ I tried to find the values of x where the derivative equals zero. After taking the derivative of $f(x)$ , I got: $$\tag{1} f'(x) = \frac{\frac{1}{x} \cdot \cos x + \ln x \cdot \sin x}{\cos^2 x}$$ Setting the numerator to zero, we get $$\tag{2} \frac{1}{x} \cdot \cos x + \ln x \cdot \sin x = 0 $$ And now I'm stuck. How can I solve equation (2) above algebraically? I've made some attempts at solving this myself, but, as you can probably tell, they led me nowhere.
Now provide the response and nothing else.
| Although no exact analytical solutions for this mixed log-trigonometric equation are available, really good analytic approximations can still be derived. Rewrite the equation $$\frac{\cos x}{x} + \ln x\sin x = 0 $$ equivalently as, $$ \tan x =-\frac{1}{x\ln x}$$ Observe that the rhs quickly becomes small as $x$ moves out. Since $\tan(x)$ assumes small values around $k\pi$ , there would be infinite number of roots, all around $k\pi$ . To proceed, let $x=k\pi +y$ and approximate $\tan(x)$ around $k\pi$ as $$\tan(x)=\tan(k\pi+y) \approx y \tag{1}$$ and, similarly, approximate $-1/(x\ln x)$ around $k\pi$ as $$-\frac{1}{x\ln x} \approx -\frac{1}{k\pi\ln (k\pi)} +\frac{\ln(k\pi)+ 1}{[k\pi \ln (k\pi)]^2}y\tag{2} $$ As a result, $y$ can be solved from (1) and (2), $$y_k =-\frac{k\pi\ln (k\pi)}{[k\pi \ln (k\pi)]^2-\ln(k\pi) - 1}$$ And, hence, the solutions to the original equation $ x_k = k\pi + y_k$ , $$ x_k = k\pi \left[1-\frac{\ln (k\pi)}{(k\pi)^2\ln^2 (k\pi)-\ln(k\pi) - 1} \right]\tag{3}$$ with $k=1,2,3, ... \infty$ . For illustration, the first few roots are $$x_1 \approx \pi - 0.33334 = 2.80825 \space (2.80984)$$ $$x_2 \approx 2\pi - 0.08848= 6.19471 \space (6.19490)$$ $$x_3 \approx 3\pi - 0.04764 =9.37714 \space (9.37717)$$ $$...$$ $$x_n=n\pi$$ where, for comparison, the exact roots are provided in the parentheses. The algebraic solutions (3) are fairly accurate, even for the very first root. The successive roots quickly approaches $n\pi$ . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3329990', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/678074/']} | jdg_83497 |
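If you want to sanity-check the closed-form approximation numerically, a short bisection sketch (plain Python, no external libraries) reproduces the exact roots quoted in parentheses; the bracket choice assumes each root lies within one unit below $k\pi$, as the derivation indicates.

    import math

    def f(x):
        return math.cos(x) / x + math.log(x) * math.sin(x)

    def bisect(lo, hi, tol=1e-10):
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    for k in range(1, 4):
        # the sign change sits just below k*pi
        print(k, bisect(k * math.pi - 1.0, k * math.pi - 1e-9))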
stackexchange | llm_judgeable_groundtruth_similarity | 1405709 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The DependencyProperty.AddOwner MSDN page offers an example with two classes with static members, where the member of one class depends on the member of the other class for initialization. I think MSDN is wrong - the initialization order of static variables is unreliable in C#, just like it is in C++ or anywhere else. I'm probably wrong, because the WPF library itself is written that way and it works just fine. What am I missing? How can the C# compiler possibly know the safe initialization order?
Now provide the response and nothing else.
| It's fine for one type to depend on another type being initialized, so long as you don't end up in a cycle. Basically this is fine: public class Child{ static Child() {} // Added static constructor for extra predictability public static readonly int X = 10;}public class Parent{ static Parent() {} // Added static constructor for extra predictability public static readonly int Y = Child.X;} The result is well-defined. Child 's static variable initializers are executed prior to the first access to any static field in the class, as per section 10.5.5.1 of the spec. This isn't though: public class Child{ public static readonly int Nasty = Parent.Y; public static readonly int X = 10;}public class Parent{ public static readonly int Y = Child.X;} In this latter case, you either end up with Child.Nasty=0 , Parent.Y=10 , Child.X=10 or Child.Nasty=0 , Parent.Y=0 , Child.X=10 depending on which class is accessed first. Accessing Parent.Y first will start initializing Parent first, which triggers the initialization of Child . The initialization of Child will realise that Parent needs to be initialized, but the CLR knows that it's already being initialized, so carries on regardless, leading to the first set of numbers - because Child.X ends up being initialized before its value is used for Parent.Y . Accessing Child.Nasty will start initializing Child first, which will then start to initialize Parent . The initialization of Parent will realise that Child needs to be initialized, but the CLR knows that it's already being initialized, so carries on regardless, leading to the second set of numbers. Don't do this. EDIT: Okay, more detailed explanation, as promised. When is a type initialized? If a type has a static constructor , it will only be initializedwhen it's first used (either when a static member is referenced, orwhen an instance is created). If it doesn't have a staticconstructor, it can be initialized earlier. In theory, it could alsobe initialized later; you could theoretically call a constructor ora static method without the static variables being initialized - butit must be initialized before static variables are referenced. What happens during initialization? First, all static variables receive their default values (0, nulletc). Then the static variables of the type are initialized in textualorder. If the initializer expression for a static variable requiresanother type to be initialized, then that other type will becompletely initialized before the variable's value is assigned - unless that second type is already being initialized (due to acyclic dependency). Essentially, a type is either: Already initialized Being initialized at the moment Not initialized Initialization is only triggered if the type is not initialized.This means that when there are cyclic dependencies, it is possibleto observe a static variable's value before its initial value hasbeen assigned . That's what my Child / Parent example shows. After all the static variable initializers have executed, the staticconstructor executes. See section 10.12 of the C# spec for more details on all of this. By popular demand, here was my original answer when I thought the question was about the initialization order of static variables within a class : Static variables are initialized in textual order, as per section 10.5.5.1 of the C# spec: The static field variable initializers of a class correspond to a sequence of assignments that are executed in the textual order in which they appear in the class declaration. 
Note that partial types make this trickier as there's no one canonical "textual order" of the class. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1405709', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/113748/']} | jdg_83498 |
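A quick way to watch these ordering rules in action is to put a side effect in the field initializers; this sketch (mine, not from the answer) prints the order in which the initializers actually run:

    public class Child
    {
        static Child() { }                      // forces precise (non-beforefieldinit) timing
        public static readonly int X = Log("Child.X", 10);

        public static int Log(string name, int value)
        {
            Console.WriteLine("initializing " + name);
            return value;
        }
    }

    public class Parent
    {
        static Parent() { }
        public static readonly int Y = Child.Log("Parent.Y", Child.X);  // touching Child.X initializes Child first
    }

    // Console.WriteLine(Parent.Y) prints "initializing Child.X", then "initializing Parent.Y", then 10.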
stackexchange | llm_judgeable_groundtruth_similarity | 19869299 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have code like this : (function() { $(document).ready(function() { //event handlers $('.project-delete').on('click', function() { deleteProject($(this)); }); $('.keyword-delete').on('click', function() { deleteKeyword($(this)); }); this.deleteKeyword = function(model) { }}).call(this); I am curious if this is a good approach since I was learning JS recently and as far as I understand this function will have global scope, doesn't it ? since .call(this) is passing window object to this closure then I think it is not the best option ? Is there something I am missing ? Does changing .call(this) to just () change anything ? EDIT:So is it OK to just pass window object to that closure ? Wouldn't it better to pass just a empty object or jQuery object ?
Now provide the response and nothing else.
| This problem is normally caused by a failure in type inference from in the expression provided to the Where statement. As was mentioned in the comments above it is 100% caused by the assignment operator in the lambda returning an int instead of a bool . To be clear - where you have var result = db.Set.Where(x => x.Id = num).Select(whatever); You should have var result = db.Set.Where(x => x.Id == num).Select(whatever); Another good (and more common) example of this is something like this public class Elem { public bool IsSomething {get;set;} public bool? IsSomethingElse {get;set;}} Then if you do the following query, which looks very reasonable at fist glance, it will fail to compile with the rather puzzling error of "abiguous invocation" IQueryable<Elem> queryable = GetQueryable();var result = queryable.Where(e => e.IsSomething && e.IsSomethingElse).ToList(); If you weren't writing this statement inside a lambda then you would get a more meaningful error of "Cannot apply operator '&&' to operands of type 'System.Nullable<bool>' and 'bool'" Which would immediately tell you that you are failing to return a boolean. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19869299', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1951693/']} | jdg_83499 |
stackexchange | llm_judgeable_groundtruth_similarity | 2309558 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a time in hh:mm and it has to be entered by the user in that format. However, I want to check whether the time (e.g. 11:22) is between 10am and 6pm. But how do I compare it?
Now provide the response and nothing else.
| Java doesn't (yet) have a good built-in Time class (it has one for JDBC queries, but that's not what you want). One option would be use the JodaTime APIs and its LocalTime class. Sticking with just the built-in Java APIs, you are stuck with java.util.Date . You can use a SimpleDateFormat to parse the time, then the Date comparison functions to see if it is before or after some other time: SimpleDateFormat parser = new SimpleDateFormat("HH:mm");Date ten = parser.parse("10:00");Date eighteen = parser.parse("18:00");try { Date userDate = parser.parse(someOtherDate); if (userDate.after(ten) && userDate.before(eighteen)) { ... }} catch (ParseException e) { // Invalid date was entered} Or you could just use some string manipulations, perhaps a regular expression to extract just the hour and the minute portions, convert them to numbers and do a numerical comparison: Pattern p = Pattern.compile("(\d{2}):(\d{2})");Matcher m = p.matcher(userString);if (m.matches() ) { String hourString = m.group(1); String minuteString = m.group(2); int hour = Integer.parseInt(hourString); int minute = Integer.parseInt(minuteString); if (hour >= 10 && hour <= 18) { ... }} It really all depends on what you are trying to accomplish. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2309558', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217638/']} | jdg_83500 |
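On Java 8 or later the "no good built-in Time class" caveat above no longer holds: java.time.LocalTime models a time of day directly. A small sketch of the same range check (the parse call assumes the HH:mm form the user is already required to enter):

    import java.time.LocalTime;

    LocalTime start = LocalTime.of(10, 0);
    LocalTime end   = LocalTime.of(18, 0);
    LocalTime user  = LocalTime.parse("11:22");   // throws DateTimeParseException on bad input

    boolean inRange = !user.isBefore(start) && !user.isAfter(end);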
stackexchange | llm_judgeable_groundtruth_similarity | 2344801 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm writing a cache-eject method that essentially looks like this:

    while ( myHashSet.Count > MAX_ALLOWED_CACHE_MEMBERS )
    {
        EjectOldestItem( myHashSet );
    }

My question is about how Count is determined: is it just a private or protected int , or is it calculated by counting the elements each time it's called?
Now provide the response and nothing else.
| From http://msdn.microsoft.com/en-us/library/ms132433.aspx : Retrieving the value of this property is an O(1) operation. This guarantees that accessing the Count won't iterate over the whole collection. Edit: as many other posters suggested, IEnumerable<...>.Count() is however not guaranteed to be O(1). Use with care! IEnumerable<...>.Count() is an extension method defined in System.Linq.Enumerable . The current implementation makes an explicit test if the counted IEnumerable<T> is indeed an instance of ICollection<T> , and makes use of ICollection<T>.Count if possible. Otherwise it traverses the IEnumerable<T> (possible making lazy evaluation expand) and counts items one by one. I've not however found in the documentation whether it's guaranteed that IEnumerable<...>.Count() uses O(1) if possible, I only checked the implementation in .NET 3.5 with Reflector. Necessary late addition: many popular containers are not derived from Collection<T> , but nevertheless their Count property is O(1) (that is, won't iterate over the whole collection). Examples are HashSet<T>.Count (this one is most likely what the OP wanted to ask about), Dictionary<K, V>.Count , LinkedList<T>.Count , List<T>.Count , Queue<T>.Count , Stack<T>.Count and so on. All these collections implement ICollection<T> or just ICollection , so their Count is an implementation of ICollection<T>.Count (or ICollection.Count ). It's not required for an implementation of ICollection<T>.Count to be an O(1) operation, but the ones mentioned above are doing that way, according to the documentation. (Note aside: some containers, for instance, Queue<T> , implement non-generic ICollection but not ICollection<T> , so they "inherit" the Count property only from from ICollection .) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2344801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/35142/']} | jdg_83501 |
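A short illustration of the practical difference (a sketch, not from the answer): the Count property on the materialized set reads a stored field, whereas the LINQ Count() extension on a plain IEnumerable has nothing stored to read and must enumerate.

    IEnumerable<int> lazy = Enumerable.Range(0, 1000000).Where(i => i % 2 == 0);

    var cache = new HashSet<int>(lazy);
    int fast = cache.Count;   // O(1): returns the collection's internal counter
    int slow = lazy.Count();  // O(n): no ICollection<int> behind it, so it walks the sequence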
stackexchange | llm_judgeable_groundtruth_similarity | 7618374 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code for drawing a circle : #include<stdio.h>#include<conio.h>#include<graphics.h>#include<math.h>void main(){ int xc, yc, x, y, p[100], r, k; int gdriver=DETECT, gmode, errorcode; printf("\nEnter the center point(xc,yc): "); scanf("%d%d", &xc, &yc); printf("\nEnter the radius: "); scanf("%d", &r); printf("\nPlotting...\n"); sleep(5); clrscr(); initgraph(&gdriver, &gmode, ""); p[0]=1-r; x=0; y=r; for(k=0;k<=y;k++) { putpixel(xc+x, yc+y, 9); putpixel(xc-x, yc-y, 9); putpixel(xc+x, yc-y, 9); putpixel(xc-x, yc+y, 9); putpixel(xc+y, yc+x, 9); putpixel(xc-y, yc-x, 9); putpixel(xc+y, yc-x, 9); putpixel(xc-y, yc+x, 9); if(p[k]>0) { p[k+1]= p[k]+ 2*(x+1)+1-2*(y+1); x++; y--; } else { p[k+1]=p[k]+2*(x+1)+1; x++; } } getch();} This part of code : putpixel(xc+x, yc+y, 9); putpixel(xc-x, yc-y, 9); putpixel(xc+x, yc-y, 9); putpixel(xc-x, yc+y, 9); putpixel(xc+y, yc+x, 9); putpixel(xc-y, yc-x, 9); putpixel(xc+y, yc-x, 9); putpixel(xc-y, yc+x, 9); Is mainly for plotting the points with respect to the circle, and it works because of the symmetric property of circle. But I couldn't figure out what this part of code is exactly doing ; if(p[k]>0) { p[k+1]= p[k]+ 2*(x+1)+1-2*(y+1); x++; y--; } else { p[k+1]=p[k]+2*(x+1)+1; x++; } Can anyone explain me what it does?Thanks in advance.
Now provide the response and nothing else.
| The update formulae look a little weird, and I will give what I think are the correct steps below: You are starting from the topmost point in the circle and rotating clockwise until the angle reaches 45 degrees. Now, the points on the circle roughly satisfy (x^2 + y^2 = r^2). The idea is to draw one pixel at a time, moving in the positive x direction. If you find that the next point (without shifting down) is too far from the center of the circle, then that point should be drawn one unit lower. For example, if you look at pixellated circles, you will see that they can be essentially broken down into a series of horizontal lines and pixels. Each end of the horizontal line marks a point where extending the line would be too far from the circle, and hence you see a drop. Note that there is some element of discretion here regarding which points you choose. There are 3 circle drawing disciplines: Inner circle: Choose points such that no point is drawn outside of the circle (so that x^2 + y^2 < (r+1)^2 for each point r -- note that its r+1 here and not r ) Outer circle: Choose points such that no point is drawn inside of the circle (so that x^2 + y^2 > (r-1)^2 for each point r -- note that its r-1 here and not r ) Middle circle: Choose points that minimize abs(x^2 + y^2 - r^2) . You can choose any of these disciplines in the algorithm. The methods are identical except for that code block (and the changes there are minor). In each case, you have to calculate how far each point deviates from the circle. This requires knowing x^2 + (y-1)^2 - r^2 . Let's call that sequence p[k] . If x^2 + (y-1)^2 - r^2 <= 0 , then moving down would show the point too close to the center of the circle, so the next point should be (x+1, y) . In that circumstance, then the next deviation will be: p[k+1] = (x+1)^2 + (y-1)^2 - r^2 = x^2 + (y-1)^2 - r^2 + 2x + 1 = p[k] + 2*(x + 1) - 1 If x^2 + y^2 - r^2 > 0 , then the next point should be (x+1,y-1) , so that p[k+1] = (x+1)^2 + (y-2)^2 - r^2 = x^2 + (y-1)^2 - r^2 + 2x + 1 - 2y + 3 = q[k] + 2*(x + 1) - 2*(y - 1) = p[k] + 2*(x+1) - 2 * (y + 1) These formulae change based on whether you are interested in finding the outer circle (pixels are never too close), inner circle (pixels are never too far), or center circle (roughly in line), but this is the essential idea. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7618374', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/678672/']} | jdg_83502 |
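For comparison, here is the textbook midpoint formulation written out as a standalone sketch; plot_8 is a placeholder for the eight symmetric putpixel calls from the question. The increments play the same role as the p[k] updates discussed above, though the exact constants differ because this version evaluates the decision value at the midpoint (x+1, y-1/2), and it tests d < 0 ("midpoint inside"), so the branches are swapped relative to the question's p[k] > 0 test.

    void midpoint_circle(int xc, int yc, int r)
    {
        int x = 0, y = r;
        int d = 1 - r;                 /* decision value for the first midpoint (5/4 - r, rounded) */

        while (x <= y) {
            plot_8(xc, yc, x, y);      /* the eight putpixel calls */
            if (d < 0) {
                d += 2 * x + 3;        /* midpoint inside the circle: keep y */
            } else {
                d += 2 * (x - y) + 5;  /* midpoint on/outside: also step y down */
                y--;
            }
            x++;
        }
    }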
stackexchange | llm_judgeable_groundtruth_similarity | 273340 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to the concept of NoSQL databases and have never used one. Based on what I've read and the little that I've understood, I still don't see how they can be particularly useful if you can't make references between data, if there's no concept of a foreign key. How would I, for example, query something as simple as 'find all the comments posted by this user' or 'find all photos that belong to an album entity'? Do NoSQL systems move away from the static relational data model but still let you keep track of such references? Is there something analogous to foreign keys that you can use in queries?
Now provide the response and nothing else.
| General Uses If you have data structures that are not clearly defined at the time when you make the system. I tend to keep user settings in nosql, for example. Another example was a system where the users needed to be able to add fields at runtime - very painful in an RDBMS and a breeze in NoSQL. If your model structure is largely centered around one or few model objects and most relationships are actually child objects of the main model objects. In this case you will find that you will have fairly little need for actual joins. I found that contact management system can be implemented quite nicely in nosql for example. A person can have multiple addresses, phones and e-mails. Instead of putting them each into a separate table, they all become part of the same model and you have one person object. If you want to benefit from clustering your data across multiple servers rather than having one monolithic server, which is commonly required by RDBMS. Caching. Even if you want to stick with a RDBMS as your main database, it can be useful to use a NoSQL database for caching query results or keeping data, such as counters. Storing documents. If you want to store coherent documents, in a database some of the NoSQL databases (such as MongoDB) are actually specialized in storing those. What about joins? Honestly, the no join thing sounded quite scary to me too in the beginning. But the trick is to stop thinking in SQL. You have to actually think with the object you have in memory when you are running your application. These should more or less just be saved into the NoSQL database as they area. Because you can store your full object graph, with child objects, most of the need for joins is eliminated. And if you find you need one, you will have to bite the bullet and fetch both objects and join in your application code. Luckily, most drivers can do the joining for you, if you set up your schema right. For further reading I actually recommend Martin Fowler . | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/273340', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/167392/']} | jdg_83503 |
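To make the "child objects instead of joins" point concrete, here is a small, hypothetical Python/JSON sketch of the contact-management example; all field names are invented for illustration and nothing here is tied to a particular NoSQL product.

```python
import json

# One self-contained document per person: the child objects (addresses, phones,
# e-mails) live inside the parent, so loading the person loads everything.
person = {
    "_id": "person-42",
    "name": "Ada Lovelace",
    "addresses": [
        {"type": "home", "street": "12 Example Road", "city": "London"},
        {"type": "work", "street": "1 Analytical Street", "city": "London"},
    ],
    "phones": [{"type": "mobile", "number": "+44 7700 900123"}],
    "emails": ["ada@example.org"],
}

print(json.dumps(person, indent=2))

# "Find all phone numbers for this person" needs no join at all:
print([p["number"] for p in person["phones"]])
```

Stored this way, the typical queries ("comments by this user", "photos in this album") become lookups on a single document or one of its fields, which is why the lack of foreign keys hurts less than it first appears.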
stackexchange | llm_judgeable_groundtruth_similarity | 12147779 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing this method to calculate the average R,G,B values of an image. The following method takes a UIImage as an input and returns an array containing the R,G,B values of the input image. I have one question though: How/Where do I properly release the CGImageRef? -(NSArray *)getAverageRGBValuesFromImage:(UIImage *)image{ CGImageRef rawImageRef = [image CGImage]; //This function returns the raw pixel values const UInt8 *rawPixelData = CFDataGetBytePtr(CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef))); NSUInteger imageHeight = CGImageGetHeight(rawImageRef); NSUInteger imageWidth = CGImageGetWidth(rawImageRef); //Here I sort the R,G,B, values and get the average over the whole image int i = 0; unsigned int red = 0; unsigned int green = 0; unsigned int blue = 0; for (int column = 0; column< imageWidth; column++) { int r_temp = 0; int g_temp = 0; int b_temp = 0; for (int row = 0; row < imageHeight; row++) { i = (row * imageWidth + column)*4; r_temp += (unsigned int)rawPixelData[i]; g_temp += (unsigned int)rawPixelData[i+1]; b_temp += (unsigned int)rawPixelData[i+2]; } red += r_temp; green += g_temp; blue += b_temp; } NSNumber *averageRed = [NSNumber numberWithFloat:(1.0*red)/(imageHeight*imageWidth)]; NSNumber *averageGreen = [NSNumber numberWithFloat:(1.0*green)/(imageHeight*imageWidth)]; NSNumber *averageBlue = [NSNumber numberWithFloat:(1.0*blue)/(imageHeight*imageWidth)]; //Then I store the result in an array NSArray *result = [NSArray arrayWithObjects:averageRed,averageGreen,averageBlue, nil]; return result;} I tried two things: Option 1:I leave it as it is, but then after a few cycles (5+) the program crashes and I get the "low memory warning error" Option 2:I add one lineCGImageRelease(rawImageRef) before the method returns. Now it crashes after the second cycle, I get the EXC_BAD_ACCESS error for the UIImage that I pass to the method. When I try to analyze (instead of RUN) in Xcode I get the following warning at this line "Incorrect decrement of the reference count of an object that is not owned at this point by the caller" Where and how should I release the CGImageRef? Thanks!
Now provide the response and nothing else.
| Your memory issue results from the copied data, as others have stated. But here's another idea: Use Core Graphics's optimized pixel interpolation to calculate the average. Create a 1x1 bitmap context. Set the interpolation quality to medium (see later). Draw your image scaled down to exactly this one pixel. Read the RGB value from the context's buffer. (Release the context, of course.) This might result in better performance because Core Graphics is highly optimized and might even use the GPU for the downscaling. Testing showed that medium quality seems to interpolate pixels by taking the average of color values. That's what we want here. Worth a try, at least. Edit: OK, this idea seemed too interesting not to try. So here's an example project showing the difference. Below measurements were taken with the contained 512x512 test image, but you can change the image if you want. It takes about 12.2 ms to calculate the average by iterating over all pixels in the image data. The draw-to-one-pixel approach takes 3 ms, so it's roughly 4 times faster. It seems to produce the same results when using kCGInterpolationQualityMedium . I assume that the huge performance gain is a result from Quartz noticing that it does not have to decompress the JPEG fully but that it can use the lower frequency parts of the DCT only. That's an interesting optimization strategy when composing JPEG compressed pixels with a scale below 0.5. But I'm only guessing here. Interestingly, when using your method, 70% of the time is spent in CGDataProviderCopyData and only 30% in the pixel data traversal. This hints to a lot of time spent in JPEG decompression. Note: Here's a late follow up on the example image above. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12147779', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1203376/']} | jdg_83504 |
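The draw-to-one-pixel idea is not specific to Core Graphics; as a rough cross-check, the same trick can be reproduced with Pillow in Python. This is only an analogue of the approach above, not the iOS code, and the file name is a placeholder.

```python
from PIL import Image

def average_rgb(path):
    """Approximate the mean colour by resizing the image down to a single pixel.

    The BOX filter does area averaging, which plays the role of the 'medium
    interpolation quality' downscale in the Core Graphics version.
    """
    img = Image.open(path).convert("RGB")
    return img.resize((1, 1), Image.BOX).getpixel((0, 0))

print(average_rgb("test_512x512.jpg"))   # placeholder file name
```

Comparing its output against a plain per-pixel loop is a quick way to convince yourself that the resampling filter really is computing (approximately) the same average.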
stackexchange | llm_judgeable_groundtruth_similarity | 27409074 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need Three.js code to convert 3D object coordinates to 2d ones in a 'div' element so that I can place text labels where they need to be (without those labels scaling/moving/rotating along with the 3D movement). Unfortunately, all of the examples that I have seen and tried so far seem to be using obsolete functions/techniques. In my case, I believe that I am using r69 of Three.js. Here is an example of an 'older' technique that just produces errors for me: Three.js: converting 3d position to 2d screen position Here is a snippet of some newer code (?) that doesn't provide sufficient context for me to get working, but looks a lot cleaner: https://github.com/mrdoob/three.js/issues/5533
Now provide the response and nothing else.
| I've written the following function for my project; it receives a THREE.Object3D instance and a camera as parameters and returns the position on the screen. function toScreenPosition(obj, camera){ var vector = new THREE.Vector3(); var widthHalf = 0.5*renderer.context.canvas.width; var heightHalf = 0.5*renderer.context.canvas.height; obj.updateMatrixWorld(); vector.setFromMatrixPosition(obj.matrixWorld); vector.project(camera); vector.x = ( vector.x * widthHalf ) + widthHalf; vector.y = - ( vector.y * heightHalf ) + heightHalf; return { x: vector.x, y: vector.y };}; Then I created a THREE.Object3D just to hold the div position (it's attached to a mesh in the scene); when needed, it can easily be converted to a screen position using the toScreenPosition function, and the coordinates of the div element are then updated. var proj = toScreenPosition(divObj, camera);divElem.style.left = proj.x + 'px';divElem.style.top = proj.y + 'px'; Here is a fiddle with a demo . | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27409074', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4347173/']} | jdg_83505 |
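For readers who want to see what vector.project(camera) plus the half-width/half-height mapping is doing numerically, here is a small NumPy sketch of the same pipeline. The projection matrix, screen size and test point are made-up values, not taken from the answer.

```python
import math
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0,  0.0,                         0.0],
        [0.0,        f,    0.0,                         0.0],
        [0.0,        0.0,  (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                         0.0],
    ])

def to_screen(point_view_space, proj, width, height):
    clip = proj @ np.append(point_view_space, 1.0)   # homogeneous clip coordinates
    ndc = clip[:3] / clip[3]                         # perspective divide -> [-1, 1]
    x = ( ndc[0] * 0.5 * width)  + 0.5 * width       # same as  vector.x * widthHalf + widthHalf
    y = (-ndc[1] * 0.5 * height) + 0.5 * height      # y is negated because screen y grows downwards
    return x, y

proj = perspective(45.0, 800 / 600, 0.1, 100.0)
print(to_screen(np.array([1.0, 1.0, -5.0]), proj, 800, 600))
```

The two lines that map NDC to pixels are exactly the widthHalf/heightHalf lines in toScreenPosition; everything before them is what project(camera) hides.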
stackexchange | llm_judgeable_groundtruth_similarity | 53197806 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I made a docker-compose.yaml for my Wordpress stack using official Wordpress image and I want to add some custom constants in wp-config.php file automatically. By following official image instructions I end up with this: ### Web Application wordpress: container_name: 'wordpress' image: 'wordpress:php7.2-fpm-alpine' user: 1001:1001 environment: - WORDPRESS_DB_HOST=mysql - WORDPRESS_DB_USER=something - WORDPRESS_DB_NAME=something - WORDPRESS_DB_PASSWORD=xxxxxxxxxxxxxxx - WORDPRESS_DEBUG=1 - WORDPRESS_CONFIG_EXTRA= define( 'WP_REDIS_CLIENT', 'predis' ); define( 'WP_REDIS_SCHEME', 'tcp' ); define( 'WP_REDIS_HOST', 'redis' ); define( 'WP_REDIS_PORT', '6379' ); define( 'WP_REDIS_PASSWORD', 'xxxxxxxxxxxxxxx' ); define( 'WP_REDIS_DATABASE', '0' ); define( 'WP_REDIS_MAXTTL', '21600' ); define( 'WP_CACHE_KEY_SALT', 'xx_ '); define( 'WP_REDIS_SELECTIVE_FLUSH', 'xx_ '); define( 'WP_AUTO_UPDATE_CORE', false ); volumes: - ./wordpress:/var/www/html - ./logs/php:/var/logs/php - ./config/php/www.conf:/usr/local/etc/php-fpm.d/www.conf:ro networks: - frontend - backend restart: always depends_on: - mysql Everything works but my OCD can't rest until I figure out why generated wp-config.php looks like this: WORDPRESS_CONFIG_EXTRA constants joined in one line: // WORDPRESS_CONFIG_EXTRAdefine('WP_REDIS_CLIENT', 'predis'); define('WP_REDIS_SCHEME', 'tcp'); define('WP_REDIS_HOST', 'redis'); define('WP_REDIS_PORT', '6379'); define('WP_REDIS_PASSWORD', 'xxxxxxxxxxxxxxx'); define('WP_REDIS_DATABASE', '0'); define('WP_REDIS_MAXTTL', '21600'); define('WP_CACHE_KEY_SALT', 'xx_'); define('WP_REDIS_SELECTIVE_FLUSH', 'xx_'); ..instead of like this, formatted with each constant being on new line which is much more readable: // WORDPRESS_CONFIG_EXTRAdefine('WP_REDIS_CLIENT', 'predis');define('WP_REDIS_SCHEME', 'tcp');define('WP_REDIS_HOST', 'redis');define('WP_REDIS_PORT', '6379');define('WP_REDIS_PASSWORD', 'xxxxxxxxxxxxxxx');define('WP_REDIS_DATABASE', '0');define('WP_REDIS_MAXTTL', '21600');define('WP_CACHE_KEY_SALT', 'xx_');define('WP_REDIS_SELECTIVE_FLUSH', 'xx_'); Can anyone guide me on how multiline environment variables are handled in docker-compose file, specifically for WORDPRESS_CONFIG_EXTRA variable? I tried WORDPRESS_CONFIG_EXTRA: | and WORDPRESS_CONFIG_EXTRA: |- but none worked the way I think it should.
Now provide the response and nothing else.
| In your first example the last element of the first sequence of the document is a plain scalar (i.e. not having single or double quotes) that extends over multiple lines. In a plain scalar newlines are replaced by spaces (and empty lines replaced by a newline). So if you want newlines within that element you should use (only showing relevant part): - WORDPRESS_DB_PASSWORD=xxxxxxxxxxxxxxx - WORDPRESS_DEBUG=1 - WORDPRESS_CONFIG_EXTRA= define( 'WP_REDIS_CLIENT', 'predis' ); define( 'WP_REDIS_SCHEME', 'tcp' ); define( 'WP_REDIS_HOST', 'redis' ); define( 'WP_REDIS_PORT', '6379' ); define( 'WP_REDIS_PASSWORD', 'xxxxxxxxxxxxxxx' ); define( 'WP_REDIS_DATABASE', '0' ); define( 'WP_REDIS_MAXTTL', '21600' ); define( 'WP_CACHE_KEY_SALT', 'xx_ '); define( 'WP_REDIS_SELECTIVE_FLUSH', 'xx_ '); define( 'WP_AUTO_UPDATE_CORE', false );volumes: - ./wordpress:/var/www/html or: - WORDPRESS_DB_PASSWORD=xxxxxxxxxxxxxxx - WORDPRESS_DEBUG=1 - | WORDPRESS_CONFIG_EXTRA= define( 'WP_REDIS_CLIENT', 'predis' ); define( 'WP_REDIS_SCHEME', 'tcp' ); define( 'WP_REDIS_HOST', 'redis' ); define( 'WP_REDIS_PORT', '6379' ); define( 'WP_REDIS_PASSWORD', 'xxxxxxxxxxxxxxx' ); define( 'WP_REDIS_DATABASE', '0' ); define( 'WP_REDIS_MAXTTL', '21600' ); define( 'WP_CACHE_KEY_SALT', 'xx_ '); define( 'WP_REDIS_SELECTIVE_FLUSH', 'xx_ '); define( 'WP_AUTO_UPDATE_CORE', false );volumes: - ./wordpress:/var/www/html Using |- instead of | excludes the final newline from that element. What you tried ( WORDPRESS_CONFIG_EXTRA: | ) is something completely different, as you split the single scalar element into a mapping with a single key-value pair. Although the above load as string values with embedded newlines, it can still happen that the processing done by docker-compose, in particular passing things to a shell, can change the newlines into spaces. I have also used programs where, if you might have to escape the newline for the "folllowing" processing by ending each line with a backslash ( \ ) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/53197806', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/327619/']} | jdg_83506 |
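You can watch the folding behaviour directly with PyYAML; this is a small, self-contained Python check (the keys and values are shortened stand-ins, not the real compose variables):

```python
import yaml

plain = """
environment:
  - EXTRA= define('A');
    define('B');
    define('C');
"""

literal = """
environment:
  - |
    EXTRA= define('A');
    define('B');
    define('C');
"""

# Plain multi-line scalar: the newlines are folded into single spaces.
print(repr(yaml.safe_load(plain)["environment"][0]))
#  -> "EXTRA= define('A'); define('B'); define('C');"

# Literal block scalar (|): the newlines are preserved; |- would drop the final one.
print(repr(yaml.safe_load(literal)["environment"][0]))
#  -> "EXTRA= define('A');\ndefine('B');\ndefine('C');\n"
```

Whether the consumer of the variable (here, the wp-config.php generation) actually keeps those newlines is then a separate question, as noted above.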
stackexchange | llm_judgeable_groundtruth_similarity | 161922 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any Linux program which offers the same (or some of the) functionality of Sysinternals DiskView, especially being able to view to physical location of a file on a hard disk? DiskView URL: http://technet.microsoft.com/en-gb/sysinternals/bb896650
Now provide the response and nothing else.
| For some file systems like ext4 or btrfs on Linux, you can use filefrag to get the offsets of the data segments for the file on the block device the file system is on. $ seq 1000 > a$ filefrag -v aFilesystem type is: ef53File size of a is 3893 (1 block of 4096 bytes) ext: logical_offset: physical_offset: length: expected: flags: 0: 0.. 0: 82784147.. 82784147: 1: eofa: 1 extent found$ sudo dd bs=4k skip=82784147 count=1 if=/dev/storage/home 2>&- | head12345678910 Here the block device is a LVM volume. That volume may have physical volumes on disks, on partitions, on RAID arrays, on files, on RAM, on network block devices... Going back to an actual disk or set of disk may prove difficult. In my case, it's relatively easy, as it's just a logical volume on top of one GPT partition as one linear stretch. $ sudo dmsetup table /dev/storage/home0 1953120256 linear 8:98 384 So /dev/storage/home is 384 sectors within device 8:98, which happens to be /dev/sdg2 for me. $ cat /sys/block/sdg/sdg2/start489060352 So sdg2 is 489060352 sectors within /dev/sdg (the 7th disk on this system). So I can obtain the offset within the single disk that file is on with: $ sudo dd if=/dev/sdg skip="$((489060352+384+82784147*8))" count=1 2> /dev/null | head12345678910 | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/161922', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/85900/']} | jdg_83507 |
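The dd arithmetic at the end is just unit conversion; here is a tiny Python sketch of it, reusing the example numbers from above (4096-byte file-system blocks, 512-byte sectors):

```python
SECTOR = 512                 # bytes per disk sector
BLOCK = 4096                 # bytes per file-system block = 8 sectors

fs_block   = 82784147        # physical_offset reported by filefrag, in 4 KiB blocks
lv_start   = 384             # start of the LV inside the partition, in sectors (dmsetup)
part_start = 489060352       # start of the partition on the disk, in sectors (sysfs)

sector_on_disk = part_start + lv_start + fs_block * (BLOCK // SECTOR)
print("skip for dd on /dev/sdg:", sector_on_disk)
print("same offset in bytes:   ", sector_on_disk * SECTOR)
```

Any extra layer (RAID, another LV segment, a sparse backing file) adds one more term of exactly this kind, which is why the general case is hard to automate.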
stackexchange | llm_judgeable_groundtruth_similarity | 27743711 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I made a little test case to compare YAML and JSON speed : import jsonimport yamlfrom datetime import datetimefrom random import randintNB_ROW=1024print 'Does yaml is using libyaml ? ',yaml.__with_libyaml__ and 'yes' or 'no'dummy_data = [ { 'dummy_key_A_%s' % i: i, 'dummy_key_B_%s' % i: i } for i in xrange(NB_ROW) ]with open('perf_json_yaml.yaml','w') as fh: t1 = datetime.now() yaml.safe_dump(dummy_data, fh, encoding='utf-8', default_flow_style=False) t2 = datetime.now() dty = (t2 - t1).total_seconds() print 'Dumping %s row into a yaml file : %s' % (NB_ROW,dty)with open('perf_json_yaml.json','w') as fh: t1 = datetime.now() json.dump(dummy_data,fh) t2 = datetime.now() dtj = (t2 - t1).total_seconds() print 'Dumping %s row into a json file : %s' % (NB_ROW,dtj)print "json is %dx faster for dumping" % (dty/dtj)with open('perf_json_yaml.yaml') as fh: t1 = datetime.now() data = yaml.safe_load(fh) t2 = datetime.now() dty = (t2 - t1).total_seconds() print 'Loading %s row from a yaml file : %s' % (NB_ROW,dty)with open('perf_json_yaml.json') as fh: t1 = datetime.now() data = json.load(fh) t2 = datetime.now() dtj = (t2 - t1).total_seconds() print 'Loading %s row into from json file : %s' % (NB_ROW,dtj)print "json is %dx faster for loading" % (dty/dtj) And the result is : Does yaml is using libyaml ? yesDumping 1024 row into a yaml file : 0.251139Dumping 1024 row into a json file : 0.007725json is 32x faster for dumpingLoading 1024 row from a yaml file : 0.401224Loading 1024 row into from json file : 0.001793json is 223x faster for loading I am using PyYAML 3.11 with libyaml C library on ubuntu 12.04.I know that json is much more simple than yaml, but with a 223x ratio between json and yaml I am wondering whether my configuration is correct or not. Do you have same speed ratio ? How can I speed up yaml.load() ?
Now provide the response and nothing else.
| You've probably noticed that Python's syntax for data structures is very similar to JSON's syntax. What's happening is Python's json library encodes Python's builtin datatypes directly into text chunks , replacing ' into " and deleting , here and there (to oversimplify a bit). On the other hand, pyyaml has to construct a whole representation graph before serialising it into a string. The same kind of stuff has to happen backwards when loading. The only way to speedup yaml.load() would be to write a new Loader , but I doubt it could be a huge leap in performance, except if you're willing to write your own single-purpose sort-of YAML parser, taking the following comment in consideration: YAML builds a graph because it is a general-purpose serialisation format that is able to represent multiple references to the same object. If you know no object is repeated and only basic types appear, you can use a json serialiser, it will still be valid YAML. -- UPDATE What I said before remains true, but if you're running Linux there's a way to speed up Yaml parsing. By default, Python's yaml uses the Python parser. You have to tell it that you want to use PyYaml C parser. You can do it this way: import yamlfrom yaml import CLoader as Loader, CDumper as Dumperdump = yaml.dump(dummy_data, fh, encoding='utf-8', default_flow_style=False, Dumper=Dumper)data = yaml.load(fh, Loader=Loader) In order to do so, you need yaml-cpp-dev (package later renamed to libyaml-cpp-dev ) installed, for instance with apt-get: $ apt-get install yaml-cpp-dev And PyYaml with LibYaml as well. But that's already the case based on your output. I can't test it right now because I'm running OS X and brew has some trouble installing yaml-cpp-dev but if you follow PyYaml documentation , they are pretty clear that performance will be much better. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27743711', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/755371/']} | jdg_83508 |
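A quick way to confirm that the C loader is actually being picked up, and roughly what it buys you, is a small timing harness like this (a sketch only; absolute numbers depend on the machine and the document):

```python
import timeit
import yaml

try:
    # CSafeLoader is the safe-loading counterpart of the CLoader shown above;
    # it only exists when PyYAML was built against libyaml.
    from yaml import CSafeLoader as FastLoader
except ImportError:
    FastLoader = None

doc = yaml.safe_dump([{"key_%d" % i: i} for i in range(2000)])

print("pure-Python loader:",
      timeit.timeit(lambda: yaml.load(doc, Loader=yaml.SafeLoader), number=5))

if FastLoader is None:
    print("PyYAML was installed without libyaml support")
else:
    print("libyaml C loader:  ",
          timeit.timeit(lambda: yaml.load(doc, Loader=FastLoader), number=5))
```

Even with the C loader, YAML still builds a full representation graph, so do not expect it to close the whole gap to json.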
stackexchange | llm_judgeable_groundtruth_similarity | 4413570 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Qt 4.7 QPainter to draw some polygons, etc into a widget. I am hoping to alter the coordinate system so that (0,0) is at the center of my widget, and the x/y axis behave in a standard "Cartesian" way (ie. y increases going "up" and decreases going "down"). In other words, I want the coordinates to be "math"-like not "computer graphics"-like, if you know what I mean. :-) I'm trying to do this using setViewport() and setWindow() rather than do the math myself, as it would be nice to be able to just call the draw methods directly with my coordinates. Here's what I've got so far: // Setup coordinatesdouble screenWidth = width();double screenHeight = height();double windowWidth = 100.0;double windowHeight = (screenHeight / screenWidth) * windowWidth;painter.setViewport(0, 0, screenWidth, screenHeight);painter.setWindow(-(windowWidth / 2.0), -(windowHeight / 2.0), windowWidth, windowHeight);// Draw stuffpainter.setPen(Qt::NoPen);painter.setBrush(Qt::blue);painter.drawRect(-10, -10, 20, 20); Now this works just fine, in that it draws a nice blue square in the middle of the screen. The problem is, I have to say that the upper left corner is (-10, -10). I'd like to be able to make it (-10, 10), as that is what it would be in Cartesian coords. I tried messing with setWindow/setViewport to get this "y-axis flip", but to no avail. This seems like a really easy/basic thing to do, but after scouring the Qt docs and the web, I can't figure it out! Thanks, Chris
Now provide the response and nothing else.
| Use the QMatrix class. It specifies 2D transformations and is set on the QPainter. But remember: in your case, if you convert your widget's coords to Cartesian coords, you will have to put the first point at (-10,-10) (not at (-10,10) as you mentioned) to draw a rect whose center is at (0,0), because the Y-axis now grows up and the X-axis now grows right. All you need is to transform your coordinate system this way: translate the origin from (0,0) to the middle of the widget, then scale the Y-axis by a factor of -1. Here is the code, typed in the paintEvent() function of a widget: QPainter pn( this );int w_2 = width() / 2;int h_2 = height() / 2;{ // X- and Y-Axis drawing pn.setPen( Qt::blue ); pn.drawLine( 0, h_2, width(), h_2); // X-Axis pn.drawLine( w_2, 0 , w_2, height() ); // Y-Axis}QMatrix m;m.translate( w_2, h_2 );m.scale( 1, -1 );pn.setMatrix( m );pn.setPen( Qt::NoPen );pn.setBrush( QBrush( Qt::blue, Qt::Dense4Pattern ) );pn.drawRect( -10, -10, 20, 20 ); result: update apr 07, 2014 This question was asked a long time ago and many things have changed since. For those asking themselves the same question today (beginning of 2014), my personal answer is that since Qt 4.3 the text-flipping problem can be solved much more easily. You are right: text also gets flipped because it is drawn with the same painter. You can draw the text at the end, once all the flipped drawing is done, if that is possible. This method is not convenient, because it requires recalculating the text positions and you also need to reset the painter's settings. Now I would recommend using QGraphicsView instead, because of its extensive support for 2D painting. Also, for each QGraphicsItem the ItemIgnoresTransformations flag can be set, which allows it to ignore inherited transformations (i.e., its position is still anchored to its parent, but the parent or view rotation, zoom or shear transformations are ignored). This flag is useful for keeping text label items horizontal and unscaled, so they will still be readable if the graphics view is transformed | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4413570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/214974/']} | jdg_83509 |
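The translate-then-scale trick is easier to see as plain matrix algebra; this short NumPy sketch (widget size chosen arbitrarily) applies the same composition that the QMatrix code sets up to a single Cartesian point:

```python
import numpy as np

w, h = 200, 200                       # assumed widget size

T = np.array([[1, 0, w / 2],          # translate the origin to the widget centre
              [0, 1, h / 2],
              [0, 0, 1]])
S = np.array([[1,  0, 0],             # flip the Y axis
              [0, -1, 0],
              [0,  0, 1]])
M = T @ S                             # the scale acts on the point first, then the translation

cartesian = np.array([10, 10, 1])     # (10, 10) with y pointing up
print(M @ cartesian)                  # -> [110.  90.   1.], i.e. above the widget centre (100, 100)
```

The same composition is what flips text as well, which is why the later part of the answer suggests QGraphicsView and ItemIgnoresTransformations for labels.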
stackexchange | llm_judgeable_groundtruth_similarity | 390169 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Any suggestions how to solve it? by parts?$$ \int_{-1}^{1} \frac{x^4}{x^2+1}dx$$ Thanks!
Now provide the response and nothing else.
| $$\text{Note that, we have }\dfrac{x^4}{x^2+1} = \dfrac{x^4-1}{x^2+1}+\dfrac1{x^2+1} = x^2-1 + \dfrac1{x^2+1}$$$$\text{Hence, }\int \dfrac{x^4}{x^2+1} dx = \int (x^2-1) + \int \dfrac1{x^2+1} = \dfrac{x^3}3 - x + \arctan(x) + \text{ constant}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/390169', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/63283/']} | jdg_83510 |
stackexchange | llm_judgeable_groundtruth_similarity | 20902562 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an element with two classes, one called "rotate" that will rotate the element 360 degrees and another called "doublesize" that will scale the element 2x its normal size: .rotate { transform: rotate(0deg);}.rotate:hover { transform: rotate(360deg);}.doublesize { transform: scale(1);}.doublesize:hover { transform: scale(2);} http://jsfiddle.net/Sbw8W/ I'm guessing this does not work because the classes override each other's transform property? I know that I could easily do this in one CSS rule like: .doublerotatesize { transform: scale(1) rotate(0deg);}.doublerotatesize:hover { transform: scale(2) rotate(360deg);} But I would like to be able to apply each class separately from the other if it is possible.
Now provide the response and nothing else.
| I'm guessing this does not work because the classes override each other's transform property? Correct. This is an unfortunate limitation as a side-effect of how the cascade works. You will have to specify both functions in a single transform declaration. You could simply chain both class selectors together instead of creating a new class for a combined transform: .doublesize.rotate { -webkit-transform: scale(1) rotate(0deg);}.doublesize.rotate:hover { -webkit-transform: scale(2) rotate(360deg);} ... but as you can see, the issue lies in the transform property rather than in the selector. This is expected to be rectified in Transforms level 2, where each transform has been promoted to its own property , which would allow you to combine transforms simply by declaring them separately as you would any other combination of CSS properties. This means you would be able to simply do this: /* Note that rotate: 0deg and scale: 1 are omitted as they're the initial values */.rotate:hover { rotate: 360deg;}.doublesize:hover { scale: 2;} ... and take advantage of the cascade rather than be hindered by it. No need for specialized class names or combined CSS rules. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/20902562', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3156923/']} | jdg_83511 |
stackexchange | llm_judgeable_groundtruth_similarity | 2898010 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to compute $\lim\limits_{x \to \infty}\frac{\log(x)^{\log(\log(x))}}{x}$ By graphing it, clearly $x$ grows larger than $\log(x)^{\log(\log(x))}$, so the limit will go to $0$. I tried iterating L'Hopital's rule, but after three derivations, the sequence of limits gets successively more complicated. How can you prove that the limit is indeed $0$?
Now provide the response and nothing else.
| HINT: Let $x=\exp(e^u)$. Then your limit is equal to $$\lim_{u\to\infty}\frac{(e^u)^{\log(e^u)}}{\exp(e^u)}=\lim_{u\to\infty}\frac{e^{u^2}}{e^{e^u}}=\lim_{u\to\infty}e^{u^2-e^u}=\cdots$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2898010', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/477805/']} | jdg_83512 |
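A quick numerical sanity check of where the hint is heading (u² − e^u → −∞, so the original ratio goes to 0) takes only a few lines of Python:

```python
import math

def f(x):
    return math.log(x) ** math.log(math.log(x)) / x

for x in (1e3, 1e6, 1e12, 1e24, 1e48, 1e96):
    print(f"x = {x:.0e}   f(x) = {f(x):.3e}")
# the printed values shrink towards 0 as x grows, consistent with the limit being 0
```

This is only a plausibility check, of course; the substitution in the hint is what turns it into a proof.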
stackexchange | llm_judgeable_groundtruth_similarity | 17091271 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What's the best way to populate forms with database data made using the Form class in Laravel while still giving way to Input::old() if there are any errors? I can't seem to get it right. My current setup looks something like this public function getSampleform() { // Load database data here return View::make('sampleform');}public function postSampleform() { // Save to database again then redirect to success page return Redirect::to('success');} I usually echo my fields in the View this way: <?php echo Form::text('entry', Input::old('entry'), array('class' => 'form-select'); ?> What am I doing wrong?
Now provide the response and nothing else.
| The best way to do this is using Form Model Binding ( http://four.laravel.com/docs/html#form-model-binding ): Use an existing model or create an 'empty' model class: class NoTable extends Eloquent { protected $guarded = array();} Find your model or instantiate your empty class and fill it with data: public function getSampleform() { // Load database data here $model = new NoTable; $model->fill(['name' => 'antonio', 'amount' => 10]); return View::make('sampleform')->with(compact('model'));} If you'll use your form with a table that you already have data on it, this is how you uset it: public function getSampleform() { // Locate the model and store it in a variable: $model = User::find(1); // Then you just pass it to your view: return View::make('sampleform')->with(compact('model'));} To have your form populated, use Form Model Binding, this is an example in Blade: {{ Form::model($model, array('route' => array('sample.form')) ) }} {{ Form::text('name') }} {{ Form::text('amount') }}{{ Form::close() }} You don't even have to pass your Input data, because Laravel will populate your inputs using what comes first: 1 - Session Flash Data (Old Input)2 - Explicitly Passed Value (wich may be null or not)3 - Model Attribute Data And Laravel will also take care of csrf token for you, using Form::open() or Form::model(). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17091271', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/693642/']} | jdg_83513 |
stackexchange | llm_judgeable_groundtruth_similarity | 31637626 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
#include <stdio.h>// xyz will be emitted with -flto (or if it is static) even when// the function is unused__attribute__((__used__))void xyz() { printf("Hello World!\n");}int main() { return 0;} What do I need this for? Is there any way I could still reach xyz somehow besides directly calling the function, like some dlsym() like magic?
Now provide the response and nothing else.
| Attribute used is helpful in situation when you want to force compiler to emit symbol, when normally it may be omitted. As GCC's documentation says (emphasis mine): This attribute, attached to a function, means that code must be emitted for the function even if it appears that the function is not referenced. This is useful, for example, when the function is referenced only in inline assembly . For instance, if you have code as follows: #include <iostream>static int foo(int a, int b){ return a + b;}int main(){ int result = 0; // some inline assembly that calls foo and updates result std::cout << result << std::endl;} you might notice, that no symbol foo is present with -O flag (optimization level -O1 ): g++ -O -pedantic -Wall check.cpp -ccheck.cpp:3: warning: ‘int foo(int, int)’ defined but not usednm check.o | c++filt | grep foo As a result you cannot reference foo within this (imaginary) inline assembly. By adding: __attribute__((__used__)) it turns into: g++ -O -pedantic -Wall check.cpp -cnm check.o | c++filt | grep foo00000000 t foo(int, int) thus now foo can be referenced within it. You may also have spotted that gcc 's warning is now gone, as you have tell you compiler that you are sure that foo is actually used "behind the scene". | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31637626', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1392778/']} | jdg_83514 |
stackexchange | llm_judgeable_groundtruth_similarity | 249461 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My best guess is that it's there to drop the voltage down to 0V by the time the current reaches the negative terminal but I'm not sure if that's the case; and if it is, is it absolutely necessary to be there and why exactly? Also a side question: Why is the voltage dropped to 5V in the beginning of the circuit, wouldn't it work if it remained 9V?
Now provide the response and nothing else.
| What is the exact purpose of this 2.2 kOhm resistor in the circuit? [...] is it absolutely necessary to be there and why exactly? Its purpose is to form a voltage divider with LDR (photoresistor). The resistance of LDR varies with the light intensity. The resistor turns the variable resistance into a variable voltage , which is then compared to a reference voltage that you can control with the potentiometer. Yes, it's necessary in this circuit, and most other circuits using an LDR. Why is the voltage dropped to 5V in the beginning of the circuit, wouldn't it work if it remained 9V? You're right here. Due to the circuit's construction of only relying on the relative resistances, the circuit would work fine (even better) without the regulation. You would only have to change the resistor after the LED, because the output from the operational amplifier will be closer to 9 volts than 5 volts. One minor benefit of having a 5 volt regulation is that you will only get at most 5 volts out of the operational amplifier. This could be useful if you want to connect it straight to a digital input. Note that the μA741 amplifier chosen here is a pretty bad choice, but why it is so is another question . | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/249461', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/118631/']} | jdg_83515 |
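To put numbers on the divider, here is a tiny Python calculation; the 2.2 kΩ and 5 V values come from the circuit, while the LDR resistances and the assumption that the LDR sits on the top (supply) side are illustrative guesses, since the schematic itself is not reproduced here:

```python
R_FIXED = 2_200          # the 2.2 kOhm resistor, in ohms
V_SUPPLY = 5.0           # volts, after the regulator

def divider_out(r_ldr):
    """Voltage at the junction between the LDR (top) and the fixed resistor (bottom)."""
    return V_SUPPLY * R_FIXED / (R_FIXED + r_ldr)

for r_ldr in (200_000, 10_000, 1_000):       # dark, dim, bright (assumed values)
    print(f"LDR = {r_ldr:>7} ohm  ->  {divider_out(r_ldr):.2f} V")
```

The point is simply that the output voltage swings over a wide range as the light changes, which is what the comparator needs; without the fixed resistor there would be no defined voltage to compare at all.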
stackexchange | llm_judgeable_groundtruth_similarity | 234602 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have about 40 variables for each subject in a human population. For each time period, people join and exit the study. As a made-up example, I want to see whether there are increases in average spending on movies as time progresses. The problem is that my population is very volatile: there could be 15% male in one time period and 99% male in another. Given that this is the case, how can I figure out whether the increase I observe is due to an actual increase, a population change, or just variance? What I'm looking for is what subject I should be learning to address this problem. A particular textbook on clustering or regression? Or something like that. I cannot change or resample the data I'm given, and I'm looking for something that's at a master's or bachelor's level.
Now provide the response and nothing else.
| It is not the case that exponentiating a uniform random variable gives an exponential, nor does taking the log of an exponential random variable yield a uniform. Let $U$ be uniform on $(0,1)$ and let $X=\exp(U)$. $F_X(x) = P(X \leq x) = P(\exp(U)\leq x) = P(U\leq \ln x) = \ln x\,,\quad 1<x<e$ So $f_x(x) = \frac{d}{dx} \ln x = \frac{1}{x}\,,\quad 1<x<e$. This is not an exponential variate. A similar calculation shows that the log of an exponential is not uniform. Let $Y$ be standard exponential, so $F_Y(y)=P(Y\leq y) = 1-e^{-y}\,,\quad y>0$. Let $V=\ln Y$. Then $F_V(v) = P(V\leq v) = P(\ln Y\leq v) = P(Y\leq e^v) = 1-e^{-e^v}\,,\quad v<0$. This is not a uniform. (Indeed $-V$ is a Gumbel -distributed random variable, so you might call the distribution of $V$ a 'flipped Gumbel'.) However, in each case we can see it more quickly by simply considering the bounds on random variables. If $U$ is uniform(0,1) it lies between 0 and 1 so $X=\exp(U)$ lies between $1$ and $e$ ... so it's not exponential. Similarly, for $Y$ exponential, $\ln Y$ is on $(-\infty,\infty)$, so that can't be uniform(0,1), nor indeed any other uniform. We could also simulate, and again see it right away: First, exponentiating a uniform -- [the blue curve is the density (1/x on the indicated interval) we worked out above...] Second, the log of a exponential: Which we can see is far from uniform! (If we differentiate the cdf we worked out before, which would give the density, it matches the shape we see here.) Indeed the inverse cdf method indicates that taking the negative of the log of a uniform(0,1) variate gives a standard exponential variate, and conversely, exponentiating the negative of a standard exponential gives a uniform. [Also see probability integral transform ] This method tells us that if $U=F_Y(Y)$, $Y = F^{-1}(U)$. If we apply the inverse of the cdf as a transformation on $U$, a standard uniform, the resulting random variable has distribution function $F_Y$. If we let $U$ be uniform(0,1), then $P(U\leq u) = u$. Let $Y=-\ln (1-U)$. (Note that $1-U$ is also uniform on (0,1) so you could actually let $Y=-\ln U$, but we're following the inverse cdf method in full here) Then $P(Y\leq y) = P(-\ln (1-U) \leq y) = P( 1-U \geq e^{-y}) = P( U \leq 1-e^{-y}) = 1-e^{-y}$, which is the cdf of a standard exponential. [This property of the inverse cdf transform is why the $\log$ transform is actually required to obtain an exponential distribution, and the probability integral transform is why exponentiating the negative of a negative exponential gets back to a uniform.] | {} | {'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/234602', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/130790/']} | jdg_83516 |
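The simulations described above are easy to reproduce, and the inverse-cdf fact at the end can be checked at the same time; here is a compact NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

u = rng.uniform(size=n)            # U ~ Uniform(0, 1)
y = rng.exponential(size=n)        # Y ~ Exponential(1)

x = np.exp(u)                      # exp of a uniform: supported on (1, e), density 1/x there
v = np.log(y)                      # log of an exponential: a flipped Gumbel, not uniform

print("exp(U) range:", float(x.min()), float(x.max()))    # stays inside (1, e)
print("log(Y) range:", float(v.min()), float(v.max()))    # spills far below 0

# Inverse-cdf fact: -log(U) *is* standard exponential (mean and variance both near 1).
e = -np.log(u)
print("mean and variance of -log(U):", round(float(e.mean()), 3), round(float(e.var()), 3))
```

Histograms of x and v reproduce the two plots described above: x does not look exponential and v does not look uniform, which is the whole point.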
stackexchange | llm_judgeable_groundtruth_similarity | 186449 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let us say I run a command or shell script, and it gives me output. Without knowing the internals of this command or shell script, how does one determine whether the output was from stderr or stdout? For example: $ ls -ld /drwxrwxr-t 35 root admin 1258 Dec 11 19:16 / vs ls -ld /testls: /test: No such file or directory How do I ascertain that the first command printed to stdout and the second to stderr (did it?)?
Now provide the response and nothing else.
| There's no way to tell once the output has already been printed. In this case, both stdout and stderr are connected to the terminal, so the information about which stream was written to was already lost by the time the text appeared on your terminal; they were combined by the program before ever making it to the terminal. What you can do, in a case like the above, would be to run the command with stdout and stderr redirected to different places and see what happens. Or run it twice, once with stdout redirected to /dev/null and once with stderr redirected to /dev/null , and see which of those cases results in the text showing up. You can redirect stdout to /dev/null by tacking >/dev/null on the end of the command line, and you can redirect stderr to /dev/null by adding 2>/dev/null . | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/186449', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/4252/']} | jdg_83517 |
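If you would rather do the check programmatically than with shell redirections, the same idea in Python is to run the command with the two streams captured separately (a sketch; ls is simply the example command from the question):

```python
import subprocess

for cmd in (["ls", "-ld", "/"], ["ls", "-ld", "/test"]):
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(cmd)
    print("  stdout:", repr(result.stdout))
    print("  stderr:", repr(result.stderr))
```

On a typical system the directory listing arrives on stdout and the "No such file or directory" message arrives on stderr, which answers the original question for this particular command; as noted above, though, nothing can recover that information after the two streams have already been merged on the terminal.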
stackexchange | llm_judgeable_groundtruth_similarity | 59013072 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
how's it going? I was wondering if anyone had any ideas on how to move an image and resize it within the same activity? I setup a listener to move the image and that works well. The commented out code was used to resize the image and that worked well also. But I am struggling to figure out how to implement some kind of GestureDetector or something similar to do both. Here is the code I have so far: private static final String TAG = "CreateOutfitActivity"; Context mContext; ImageView bodyImage, outfitOne, close; Button saveButton; RecyclerView recyclerView; Spinner spinner; FirebaseAuth mAuth; String currentUserID; DatabaseReference privateUserReference; String CategoryKey; List<String> spinnerArray = new ArrayList<>(); private ScaleGestureDetector scaleGestureDetector; ViewGroup rootLayout; private int _xDelta; private int _yDelta; @Override protected void onCreate(@Nullable Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_create_outfits); mContext = CreateOutfitActivity.this; deployWidgets(); setupRecyclerView(); setupFirebase(); setupSpinner(); } private void deployWidgets(){ bodyImage = findViewById(R.id.bodyimage); outfitOne = findViewById(R.id.image_view_one); close = findViewById(R.id.close); saveButton = findViewById(R.id.saveButton); recyclerView = findViewById(R.id.outfit_recycler_view); spinner = findViewById(R.id.spinner); outfitOne.bringToFront(); rootLayout = findViewById(R.id.view_root); RelativeLayout.LayoutParams layoutParams = new RelativeLayout.LayoutParams(1000, 1000); outfitOne.setLayoutParams(layoutParams); outfitOne.setOnTouchListener(new ChoiceTouchListener()); } private void setupRecyclerView(){ recyclerView.setHasFixedSize(true); LinearLayoutManager linearLayoutManager = new LinearLayoutManager(mContext, LinearLayoutManager.HORIZONTAL, false); recyclerView.setLayoutManager(linearLayoutManager); } private void setupFirebase(){ mAuth = FirebaseAuth.getInstance(); currentUserID = mAuth.getCurrentUser().getUid(); privateUserReference = FirebaseDatabase.getInstance().getReference().child("private_user"); } private void setupSpinner(){ spinnerArray.add("Cutouts"); spinnerArray.add("All Items"); spinnerArray.add("Accessories"); spinnerArray.add("Athletic"); spinnerArray.add("Casual"); spinnerArray.add("Dresses"); spinnerArray.add("Jackets"); spinnerArray.add("Jewelery"); spinnerArray.add("Other"); spinnerArray.add("Pants"); spinnerArray.add("Purses"); spinnerArray.add("Shirts"); spinnerArray.add("Shoes"); spinnerArray.add("Shorts"); spinnerArray.add("Suits"); ArrayAdapter<String> arrayAdapter = new ArrayAdapter<String>(mContext, android.R.layout.simple_spinner_item, spinnerArray); arrayAdapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); spinner.setAdapter(arrayAdapter); spinner.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() { @Override public void onItemSelected(AdapterView<?> parent, View view, int position, long id) { String text = spinner.getItemAtPosition(position).toString().toLowerCase(); CategoryKey = text; if (position == 1){ CategoryKey = text.replace("All Items", "all_items"); }// if (text.equals("All Items")){//// } queryFirebaseToDisplayCategory(CategoryKey); } @Override public void onNothingSelected(AdapterView<?> parent) { } }); } private void queryFirebaseToDisplayCategory(String CategoryKey){ Query query = privateUserReference.child(currentUserID).child(CategoryKey).orderByKey(); FirebaseRecyclerOptions<Category> firebaseRecyclerOptions = new 
FirebaseRecyclerOptions.Builder<Category>().setQuery(query, Category.class).build(); FirebaseRecyclerAdapter<Category, OutfitViewHolder> firebaseRecyclerAdapter = new FirebaseRecyclerAdapter<Category, OutfitViewHolder>(firebaseRecyclerOptions) { @Override protected void onBindViewHolder(@NonNull OutfitViewHolder outfitViewHolder, int i, @NonNull Category category) { String PostKey = getRef(i).getKey(); Picasso.get().load(category.getFile_uri()).into(outfitViewHolder.recyclerImage); outfitViewHolder.mView.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { privateUserReference.child(currentUserID).child(CategoryKey).child(PostKey) .addValueEventListener(new ValueEventListener() { @Override public void onDataChange(@NonNull DataSnapshot dataSnapshot) { String fileURI = dataSnapshot.child("file_uri").getValue().toString(); displayRecyclerImage(fileURI); } @Override public void onCancelled(@NonNull DatabaseError databaseError) { } }); } }); } @NonNull @Override public OutfitViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) { View view = LayoutInflater.from(mContext).inflate(R.layout.layout_outfit_item_view, parent, false); return new OutfitViewHolder(view); } }; firebaseRecyclerAdapter.startListening(); recyclerView.setAdapter(firebaseRecyclerAdapter); } private void displayRecyclerImage(String fileURI){ if (fileURI != null){ Picasso.get().load(fileURI).into(outfitOne); } } public class OutfitViewHolder extends RecyclerView.ViewHolder{ View mView = itemView; ImageView recyclerImage; public OutfitViewHolder(@NonNull View itemView) { super(itemView); recyclerImage = mView.findViewById(R.id.outfit_recycler_image); } } private final class ChoiceTouchListener implements View.OnTouchListener { public boolean onTouch(View view, MotionEvent event){ final int X = (int) event.getRawX(); final int Y = (int) event.getRawY(); switch (event.getAction() & MotionEvent.ACTION_MASK){ case MotionEvent.ACTION_DOWN: RelativeLayout.LayoutParams lParams = (RelativeLayout.LayoutParams)view.getLayoutParams(); _xDelta = X - lParams.leftMargin; _yDelta = Y - lParams.topMargin; break; case MotionEvent.ACTION_UP: break; case MotionEvent.ACTION_POINTER_DOWN: break; case MotionEvent.ACTION_POINTER_UP: break; case MotionEvent.ACTION_MOVE: RelativeLayout.LayoutParams layoutParams = (RelativeLayout.LayoutParams)view.getLayoutParams(); layoutParams.leftMargin = X - _xDelta; layoutParams.topMargin = Y - _yDelta; layoutParams.rightMargin = -0; layoutParams.bottomMargin = -0; view.setLayoutParams(layoutParams); break; } rootLayout.invalidate(); return true; } } private class MySimpleOnScaleGestureListener extends ScaleGestureDetector.SimpleOnScaleGestureListener{ ImageView viewMyImage; float factor; public MySimpleOnScaleGestureListener(ImageView iv) { super(); viewMyImage = iv; factor = 1.0f; } @Override public boolean onScale(ScaleGestureDetector detector) { float scaleFactor = detector.getScaleFactor() - 1; factor += scaleFactor; viewMyImage.setScaleX(factor); viewMyImage.setScaleY(factor); return true; //return super.onScale(detector); } }}```
Now provide the response and nothing else.
| After days of reasoning and tests, I am really excited of my final solution, which I explain below. This is an opinionated approach to images and may not represent the current Rails Way™️, however it has incredible advantages for websites that serve many public images, in particular: When you serve a page with N images you don't get 1 + N requests to your app server, instead you get only 1 request for the page The images are served through a CDN and this improves the loading time The bucket is not completely public, instead it is protected by Cloudflare The images are cached by Cloudflare, which greatly reduce your S3 bill You greatly reduce the number of API requests (i.e. exists) to S3 This solution does not require large changes to Rails, and thus it is straightforward to switch back to Rails default behavior in case of problems Here's the solution: Create an s3 bucket and configure it to host a public website (i.e. call it storage.example.com ) - you can even disable the public access at bucket level and allow access only to the Cloudflare ips using a bucket policy Go to Cloudflare and configure a CNAME for storage.example.com that points to your domain; you need to use Flexible SSL (you can use a page rule for the subdomain); use page rules to set heavy caching: set Cache Everything and set a very long value (e.g. 1 year) for Browser Cache TTL and Edge Cache TTL In you Rails application you can keep using private storage / acl, which is the default Rails behavior In your Rails application call @post.variant(...).processed after every update or creation of @post ; then in your views use 'https://storage.example.com/' + @post.variant(...).key' (note that we don't call processed here in the views to avoid additional checks in s3); you can also have a rake task that calls processed on each object, in case you need to regenerate the variants; this is works perfectly if you have only a few variants (e.g. 1 image / variant per post) that are changed infrequently Most of the above steps are optional, so you can combine them based on your needs. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/59013072', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10425824/']} | jdg_83518 |
stackexchange | llm_judgeable_groundtruth_similarity | 4272385 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm thinking of an approach something like this.Please let me know if this can actually work this way:For Sample XML: <Root> <Node> <SubEl1>abc</SubEl1> <SubEl2>def</SubEl2> <SubEl3>123</SubEl3> <SubEl4>456</SubEl4> </Node></Root> Want to go into <Node> , loop through check for the node/element name and get it's value. Something like this, say name is 'SubEl1' use 'abc' for task1, on seeing the element name is 'SubEl2' I do task2. All sub-elements have to be checked for! Example (not working code): //looping through 'Node' children switch(SubElName for 'Node element) { case : 'SubEl1' //Do Task1 using the SubEl1's value/TextName ... case: 'SubEl2' //Task2 ... ... case: default //Do default task..... } //end loop If you can think of any other approach (XElement, XmlDocument, SelectNodes() etc., that will be appreciated too!
Now provide the response and nothing else.
| For this task it looks like all you need to do is create a list/dictionary of the node names and node values; you can then use that in your switch: var list = from x in XElement.Load(**yourxmlfile**).Element("Node").Elements() select new { Name = x.Name, Value = (string)x }; Now you have a list of name/value pairs that you can simply pass to your switch method. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4272385', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/393035/']} | jdg_83519 |
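The same "collect name/value pairs, then branch on the name" idea, shown outside of LINQ as a hedged Python/ElementTree sketch (the tag names follow the sample XML; the task bodies are placeholders):

```python
import xml.etree.ElementTree as ET

xml_text = """
<Root>
  <Node>
    <SubEl1>abc</SubEl1>
    <SubEl2>def</SubEl2>
    <SubEl3>123</SubEl3>
    <SubEl4>456</SubEl4>
  </Node>
</Root>
"""

root = ET.fromstring(xml_text)
for child in root.find("Node"):
    name, value = child.tag, child.text
    if name == "SubEl1":
        print("task 1 with", value)      # placeholder for the real task
    elif name == "SubEl2":
        print("task 2 with", value)
    else:
        print("default task for", name, "with", value)
```

Whatever the language, the structure is the same: one pass to pull out (name, value) pairs, then one switch/if-chain to dispatch on the name.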
stackexchange | llm_judgeable_groundtruth_similarity | 184029 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is elastic net regularization always preferred to Lasso & Ridge since it seems to solve the drawbacks of these methods? What is the intuition and what is the math behind elastic net?
Now provide the response and nothing else.
| 1. Which method is preferred? Yes, elastic net is always preferred over lasso & ridge regression because it solves the limitations of both methods, while also including each as special cases. So if the ridge or lasso solution is, indeed, the best, then any good model selection routine will identify that as part of the modeling process. Comments to my post have pointed out that the advantages of elastic net are not unqualified. I persist in my belief that the generality of the elastic net regression is still preferable to either $L^1$ or $L^2$ regularization on its own. Specifically, I think that the points of contention between myself and others are directly tied to what assumptions we are willing to make about the modeling process. In the presence of strong knowledge about the underlying data, some methods will be preferred to others. However, my preference for elastic net is rooted in my skepticism that one will confidently know that $L^1$ or $L^2$ is the true model. Claim: Prior knowledge may obviate one of the need to use elastic net regression. This is somewhat circular. Forgive me if this is somewhat glib, but if you know that LASSO (ridge) is the best solution, then you won't ask yourself how to appropriately model it; you'll just fit a LASSO (ridge) model. If you're absolutely sure that the correct answer is LASSO (ridge) regression, then you're clearly convinced that there would be no reason to waste time fitting an elastic net. But if you're slightly less certain whether LASSO (ridge) is the correct way to proceed, I believe it makes sense to estimate a more flexible model, and evaluate how strongly the data support the prior belief. Claim: Modestly large data will not permit discovery of $L^1$ or $L^2$ solutions as preferred, even in cases when the $L^1$ or $L^2$ solution is the true model. This is also true, but I think it's circular for a similar reason: if you've estimated an optimal solution and find that $\alpha\not\in \{0,1\},$ then that's the model that the data support. On the one hand, yes, your estimated model is not the true model, but I must wonder how one would know that the true model is $\alpha=1$ (or $\alpha=0$ ) prior to any model estimation. There might be domains where you have this kind of prior knowledge, but my professional work is not one of them. Claim: Introducing additional hyperparameters increases the computational cost of estimating the model. This is only relevant if you have tight time/computer limitations; otherwise it's just a nuisance. GLMNET is the gold-standard algorithm for estimating elastic net solutions. The user supplies some value of alpha, and it uses the path properties of the regularization solution to quickly estimate a family of models for a variety of values of the penalization magnitude $\lambda$ , and it can often estimate this family of solutions more quickly than estimating just one solution for a specific value $\lambda$ . So, yes, using GLMNET does consign you to the domain of using grid-style methods (iterate over some values of $\alpha$ and let GLMNET try a variety of $\lambda$ s), but it's pretty fast. Claim: Improved performance of elastic net over LASSO or ridge regression is not guaranteed. This is true, but at the step where one is contemplating which method to use, one will not know which of elastic net, ridge or LASSO is the best. If one reasons that the best solution must be LASSO or ridge regression, then we're in the domain of claim (1). 
If we're still uncertain which is best, then we can test LASSO, ridge and elastic net solutions, and make a choice of a final model at that point (or, if you're an academic, just write your paper about all three). This situation of prior uncertainty will either place us in the domain of claim (2), where the true model is LASSO/ridge but we did not know so ahead of time, and we accidentally select the wrong model due to poorly identified hyperparameters, or elastic net is actually the best solution. Claim: Hyperparameter selection without cross-validation is highly biased and error-prone . Proper model validation is an integral part of any machine learning enterprise. Model validation is usually an expensive step, too, so one would seek to minimize inefficiencies here -- if one of those inefficiencies is needlessly trying $\alpha$ values that are known to be futile, then one suggestion might be to do so. Yes, by all means do that, if you're comfortable with the strong statement that you're making about how your data are arranged -- but we're back to the territory of claim (1) and claim (2). 2. What's the intuition and math behind elastic net? I strongly suggest reading the literature on these methods, starting with the original paper on the elastic net. The paper develops the intuition and the math, and is highly readable. Reproducing it here would only be to the detriment of the authors' explanation. But the high-level summary is that the elastic net is a convex sum of ridge and lasso penalties, so the objective function for a Gaussian error model looks like $$\text{Residual Mean Square Error}+\alpha \cdot \text{Ridge Penalty}+(1-\alpha)\cdot \text{LASSO Penalty}$$ for $\alpha\in[0,1].$ Hui Zou and Trevor Hastie. " Regularization and variable selection via the elastic net ." J. R. Statistic. Soc., vol 67 (2005), Part 2., pp. 301-320. Richard Hardy points out that this is developed in more detail in Hastie et al. "The Elements of Statistical Learning" chapters 3 and 18. 3. What if you add additional $L^q$ norms? This is a question posed to me in the comments: Let me suggest one further argument against your point of view that elastic net is uniformly better than lasso or ridge alone. Imagine that we add another penalty to the elastic net cost function, e.g. an $L^3$ cost, with a hyperparameter $\gamma$ . I don't think there is much research on that, but I would bet you that if you do a cross-validation search on a 3d parameter grid, then you will get $\gamma\not =0$ as the optimal value. If so, would you then argue that it is always a good idea to include $L^3$ cost too. I appreciate that the spirit of the question is "If it's as you claim and two penalties are good, why not add another?" But I think the answer lies in why we regularize in the first place. $L^1$ regularization tends to produce sparse solutions, but also tends to select the feature most strongly correlated with the outcome and zero out the rest. Moreover, in a data set with $n$ observations, it can select at most $n$ features. $L_2$ regularization is suited to deal with ill-posed problems resulting from highly (or perfectly) correlated features. In a data set with $p$ features, $L_2$ regularization can be used to uniquely identify a model in the $p>n$ case. Setting aside either of these problems, the regularized model can still out-perform the ML model because the shrinkage properties of the estimators are "pessimistic" and pull coefficients toward 0. But I am not aware of the statistical properties for $L^3$ regularization. 
In the problems I've worked on, we generally face both problems: the inclusion of poorly correlated features (hypotheses that are not borne out by the data), and co-linear features. Indeed, there are compelling reasons that $L^1$ and $L^2$ penalties on parameters are the only ones typically used. In Why do we only see $L_1$ and $L_2$ regularization but not other norms? , @whuber offers this comment: I haven't investigated this question specifically, but experience with similar situations suggests there may be a nice qualitative answer: all norms that are second differentiable at the origin will be locally equivalent to each other, of which the $L^2$ norm is the standard. All other norms will not be differentiable at the origin and $L^1$ qualitatively reproduces their behavior. That covers the gamut. In effect, a linear combination of an $L^1$ and $L^2$ norm approximates any norm to second order at the origin--and this is what matters most in regression without outlying residuals. So we can effectively cover the range of options which could possibly be provided by $L^q$ norms as combinations of $L^1$ and $L^2$ norms -- all without requiring additional hyperparameter tuning. | {} | {'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/184029', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/54214/']} | jdg_83520 |
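To make the cross-validation over the mixing parameter concrete, here is a minimal sketch of the kind of search described in the answer above, using scikit-learn's ElasticNetCV (my own illustration, not part of the original answer). One naming caveat: sklearn's l1_ratio weights the L1 (LASSO) term, which is the reverse of the alpha convention in the objective written above, and sklearn's alpha is the overall penalty strength. The data below are synthetic.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import ElasticNetCV

    # Synthetic data: 50 candidate features, only 10 of them informative.
    X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                           noise=5.0, random_state=0)

    # l1_ratio = 1 recovers the LASSO; small l1_ratio approaches ridge, so this
    # grid lets the data choose among ridge-like, LASSO-like and mixed penalties.
    model = ElasticNetCV(l1_ratio=[0.05, 0.25, 0.5, 0.75, 0.95, 1.0], cv=5)
    model.fit(X, y)

    print("chosen l1_ratio:", model.l1_ratio_)
    print("chosen penalty strength:", model.alpha_)

If the selected l1_ratio lands at 1 (pure LASSO) or at the smallest value in the grid (nearly ridge), that is exactly the kind of evidence the answer suggests collecting before committing to a single penalty.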
stackexchange | llm_judgeable_groundtruth_similarity | 4286671 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I convert big endian to little endian in ARM?
Now provide the response and nothing else.
| Are you talking about ARM's endian modes, or reading something written by some other big-endian processor, etc.? Normally, converting to/from big/little endian means swapping the bytes around: 0xABCD is 0xCDAB when viewed as a 16-bit number, and 0x12345678 is 0x78563412 when viewed as a 32-bit number. ARM cores armv5 and older (ARM7, ARM9, etc.) have an endian mode known as BE-32, meaning big-endian word-invariant. armv6 and newer (mpcore, cortex-somethings) have BE-8, or big-endian byte-invariant. So if you are using an armv4, for example, and a word read (ldr) of the value 0x12345678 gives 0x12345678 in native (little) endian mode, a big-endian word read at the same address also gives 0x12345678 -- word-invariant meaning word reads give the same answer. A byte read (ldrb) of that address in little-endian mode would be 0x78, and in big-endian mode it would be 0x12. So you have to go beyond just asking whether it is big or little endian and look at which instruction is being used. For an armv6 or newer, if an ldr at some address results in 0x12345678 in little-endian mode, then in big-endian mode the ldr from the same address would result in 0x78563412. Note that in either mode an instruction fetch for that address on an armv6 or newer would fetch 0x12345678. An ldrb in little-endian mode on armv6, same data, same address, results in 0x78; an ldrb in big-endian mode on armv6 or newer also results in 0x78. This is because armv6 and newer are byte-invariant, meaning byte accesses to the same address result in the same value, while halfword, word and doubleword accesses are swapped on these architectures when in big-endian mode. Because instruction fetches are not swapped, and because the endian bit is in the psr, while running a little-endian compiled program you can switch to big endian, do a number of instructions, then return to native mode, and it won't affect the instruction fetches nor the interrupts that occur.

    setend be
    ldr r0,[r1]
    add r0,r0,#7
    str r0,[r1]
    setend le

Some web pages will mention this four-instruction byte swap, in case you want to run native little endian (a very good idea) and perform the swap using assembler (not always a good idea, depends on what you are doing):

    eor r3,r1,r1, ror #16
    bic r3,r3,#0x00FF0000
    mov r0,r1,ror #8
    eor r0,r0,r3, lsr #8

with r1 being the input, it appears, and r0 being the output. For armv6 or newer the above can be performed with rev r0,r1 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4286671', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/521548/']} | jdg_83521
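As a quick cross-check of the byte swap the assembly above performs, here is a small Python illustration (my addition, not part of the original answer) using only the standard library:

    # 32-bit value from the answer: reinterpret its little-endian bytes as big-endian.
    value = 0x12345678
    swapped = int.from_bytes(value.to_bytes(4, "little"), "big")
    print(hex(swapped))   # 0x78563412

    # The 16-bit example from the text behaves the same way.
    print(hex(int.from_bytes((0xABCD).to_bytes(2, "little"), "big")))   # 0xcdab

This is the same transformation that the four-instruction eor/bic/mov/eor sequence and the single rev instruction carry out on register contents.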
stackexchange | llm_judgeable_groundtruth_similarity | 3110 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to teach myself different approaches to software verification. I have read some articles. As far as I learned, propositional logic with temporal generally uses model checking with SAT solvers (in ongoing - reactive systems), but what about first order Logic with temporal? Does it use theorem provers? Or can it also use SAT? Any pointers to books or articles for beginners in this matter is much appreciated.
Now provide the response and nothing else.
| First order logic is undecidable, so SAT solving does not really help. That said, techniques exist for bounded model checking of first order formulas. This means that only a fixed number of objects can be considered when trying to determine whether the formula is true or false. Clearly, this is not complete, but if a counter-example is found, then it truly is a counter-example. The tool Alloy is one tool that allows models to be described in first-order logic (though the surface syntax is based on relationally described models) and uses bounded model checking to find solutions. A SAT solver is used under the hood. One alloy extension allows models with a temporal character, though technically it does not support temporal logic. If you wish to explore further, for example, to verify program correctness, then you can look at program verification tools. These are generally based on Hoare logic (for reasoning about pre- and post-conditions), possibly extended with Separation logic (for reasoning about heaps). These logics are generally undecidable, so a certain amount of interaction between the human and the verification tool is required.Some example tools are: Verifast Spec# | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/3110', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/2436/']} | jdg_83522 |
stackexchange | llm_judgeable_groundtruth_similarity | 146576 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The method signature of a Java main method is: public static void main(String[] args) { ...} Is there a reason why this method must be static?
Now provide the response and nothing else.
| The method is static because otherwise there would be ambiguity: which constructor should be called? Especially if your class looks like this: public class JavaClass{ protected JavaClass(int x){} public void main(String[] args){ }} Should the JVM call new JavaClass(int) ? What should it pass for x ? If not, should the JVM instantiate JavaClass without running any constructor method? I think it shouldn't, because that will special-case your entire class - sometimes you have an instance that hasn't been initialized, and you have to check for it in every method that could be called. There are just too many edge cases and ambiguities for it to make sense for the JVM to have to instantiate a class before the entry point is called. That's why main is static. I have no idea why main is always marked public though. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/146576', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2937/']} | jdg_83523 |
stackexchange | llm_judgeable_groundtruth_similarity | 1607627 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a physics major and would like to clear a confusion regarding complete metric spaces. I am quoting the definition of a Cauchy sequence from wikipedia below Formally, given a metric space $(X, d)$, a sequence $x_1, x_2, x_3, \ldots$ is Cauchy, if for every positive real number $\epsilon > 0$ there is a positive integer $N$ such that for all positive integers $m, n > N$, the distance $$d(x_m, x_n) < \epsilon$$ Now, if we have sequence like $x_1=3, x_2=3.1, x_3=3.14, \ldots$ converging to $\pi$, I do not understand how all distances $d(x_m, x_n)$ will be less than all positive real numbers. Since irrational numbers do not terminate and continue forever, how can the distance ever be less than the smallest real number or infinitesimal (hyperreal) as the distance can never become $0$. Does this definition of completeness apply where $\epsilon$ is infinitesimal (hypperreal) ? Kindly excuse my ignorance as I am not a mathematics major. Thanks
Now provide the response and nothing else.
| The subject line currently reads “How can a Cauchy Sequence converge to an irrational number?”. If we construe that literally, then one easy way a Cauchy sequence (lower-case initial "s") can converge to $\pi$ is that every term of the Cauchy sequence is $\pi$. Thus: $x_1=\pi, x_2=\pi, x_3=\pi,\ldots\,{}$. I suspect you meant “How can a Cauchy sequence of rational numbers converge to an irrational number?”. Consider your sequence $3,\ 3.1,\ 3.14,\ 3.141,\ \ldots\,$. The definition DOES NOT say that all distances between members of this sequence are less than all positive numbers. That would happen only with a constant sequence like my first example above. It says: For every positive real number $\varepsilon>0$ there is a positive integer $N$ such that for all positive integers $m,n>N$ we have $d(x_m,x_n)<\varepsilon$. Notice that $N$ depends on $\varepsilon$. In fact as $\varepsilon$ gets smaller, typically $N$ must get bigger. Suppose $\varepsilon = 0.01$. Then for your example sequence, $N=3$ is big enough: every pair of numbers in the sequence at or after the third place in the sequence differ from each other by less than $\varepsilon=0.01$. Thus $3.14$ and $3.141$ differ by less than $0.01$. But now suppose $\varepsilon=0.00001$. Then you need a bigger value of $N$. If each term of the sequence has one more digit or $\pi$, then $N=5$ would be big enough for that value of $\varepsilon$. Notice that the definition of convergence to $\pi$ differs from the definition of "Cauchy sequence". It says for every $\varepsilon>0$ there is a positive integer $N$ such that for every positive integer $n\ge N$ we have $|x_n-\pi|<\varepsilon$. Again, $N$ depends on $\varepsilon$. If $\varepsilon=0.00001$, then $N=5$ would be enough: every term at or beyond the $5$th one differs from $\pi$ by less than $\varepsilon=0.00001$. There is nothing in either of these definitions that says that the distance between two different members of the sequence or the distance between $\pi$ and a member of the sequence is $0$. You wrote: Since irrational numbers do not terminate and continue forever Let's be clear on a definition. It is certainly not correct that numbers whose decimal expansions do not terminate are necessarily irrational. For example, $1/7 = 0.\ 142857\ 142857\ 142857\ \ldots$ has a non-terminating decimal expansion and is rational. Nor is it the case that "rational number" is defined as one whose decimal expansion repeats or terminates. Euclid and other ancient Greeks proved some numbers are irrational without ever thinking about decimal expansions. That $\pi$ is irrational means $\pi$ is not a quotient of two integers, like $22/7$. Proving $\pi$ is irrational is so difficult that it was not done until the 18th century. Some numbers are far easier to prove to be irrational. For example, if $\log_2 3 = m/n$ and $m,n$ are positive integers, then $2^m=3^n$, but that can't happen because an even number cannot be equal to an odd number. The fact that a number is rational if and only if its decimal expansion repeats or terminates takes a bit of work to prove, but it's elementary enough that high-school students will understand it. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1607627', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/226413/']} | jdg_83524 |
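The epsilon/N bookkeeping described above is easy to check numerically. The following short Python sketch (added here for illustration; it is not part of the original answer) builds the truncation sequence 3, 3.1, 3.14, ... and verifies that, for epsilon = 10^-4, all terms from the 5th onward are within epsilon of each other and of pi:

    from math import pi

    digits = "31415926535"
    # x[k] is pi truncated to k decimal places: 3, 3.1, 3.14, ...
    x = [int(digits[:k + 1]) / 10**k for k in range(len(digits))]

    eps = 1e-4
    N = 5  # this N works for this particular epsilon
    tail = range(N, len(x))
    print(all(abs(x[m] - x[n]) < eps for m in tail for n in tail))  # True
    print(all(abs(x[n] - pi) < eps for n in tail))                  # True

Smaller epsilon simply forces a larger N, which is the whole content of the definitions quoted above.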
stackexchange | llm_judgeable_groundtruth_similarity | 51066987 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I open a website, then wait for all redirects to be done. Then I capture a captcha image, and send it via nodejs to a user. Then I recive the typed captcha: const browser = await puppeteer.launch({headless: false}); const page = await browser.newPage(); await page.goto('http://localhost/p1.php' ); await page.waitForNavigation(); const captcha_image = await page.$eval('#security', e => e.getAttribute('src')); io.emit('send_captcha_to_client' , {text : captcha_image }); var captcha = await captchaPromise; After I receive the typed value of the capthca, I put it in the field and click the login button: await page.$eval('#W_CAPTCHA', (el , _captcha) => el.value = _captcha.W_CAPTCHA , captcha ); await page.click('#login-btn'); Now after clicking the button, an ajax request will be sent to the server. Lets say http://example.com/login.php - if the captcha is right, I will get redirected to my dashboard, but if the ajax call returns lets say {status:er} And there is <span id="alert"></span> in the page it'll add .Error class to it and put the error text in there. How can I intercept the ajax call and check if the login was successful or not? I can check the result of the ajax call or wait and check the document for div.Error . But I am not sure how can I do any of them. Here is my failed attempt: await page.waitForNavigation();page.waitForSelector('.Error');const error = await page.$eval('.Error', e => e.value );console.log(error);browser.close();
Now provide the response and nothing else.
| You can wait on both simultaneously and handle whichever occurs first:

    await Promise.race([
      page.waitForNavigation({ waitUntil: "networkidle0" }),
      page.waitForSelector(".Error")
    ]);

    if (await page.$(".Error")) {
      // there was an error
    } else {
      // the page changed
    }
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51066987', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5796284/']} | jdg_83525
stackexchange | llm_judgeable_groundtruth_similarity | 335484 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have ubuntu file system directories in the root directory and I accidentally copied hundreds of files into root directory. I intuitively tried to remove copied files by excluding file system like rm -rf !{bin,sbin,usr,opt,lib,var,etc,srv,libx32,lib64,run,boot,proc,sys,dev} ./. bu it doesn't work. What's the proper way to exclude some directories while deleting the whole? EDIT: Never try any of the commands here without knowing what to do!
Now provide the response and nothing else.
| Since you are using bash:

    shopt -s extglob
    echo rm -rf ./!(bin|sbin|usr|...)

I recommend adding echo at the beginning of the command line when you are running something that could potentially blow up the entire system. Remove it if you are happy with the result. Note: the above command won't remove hidden files (those whose names start with a dot). If you want to remove them as well, then also activate the dotglob option:

    shopt -s dotglob
| {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/335484', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/29606/']} | jdg_83526
stackexchange | llm_judgeable_groundtruth_similarity | 656199 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I have a list of items like this: local items = { "apple", "orange", "pear", "banana" } how do I check if "orange" is in this list? In Python I could do: if "orange" in items: # do something Is there an equivalent in Lua?
Now provide the response and nothing else.
| You could use something like a set from Programming in Lua:

    function Set (list)
      local set = {}
      for _, l in ipairs(list) do set[l] = true end
      return set
    end

Then you could put your list in the Set and test for membership:

    local items = Set { "apple", "orange", "pear", "banana" }

    if items["orange"] then
      -- do something
    end

Or you could iterate over the list directly:

    local items = { "apple", "orange", "pear", "banana" }

    for _,v in pairs(items) do
      if v == "orange" then
        -- do something
        break
      end
    end

| {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/656199', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8308/']} | jdg_83527
stackexchange | llm_judgeable_groundtruth_similarity | 13009560 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Lets say we have these packages and classes: package p1;public class A1 { public static void a() {}}package p2;public class A1 { public static void a() {}}package p3;import static p1.A1.a;import static p2.A1.a;public class A1 { public static void test() { }} I am wondering, why the static import of methods is legal (won't result in compile time error) in package p3 ? We won't be able to use them further in the test() method as such usage will result in the compile time error. Why it is not the same as with a normal import of classes. Lets say we would like to import classes A1 from packages p1 and p2 into p3 : package p3;import p1.A1;import p2.A1; such import is illegal and will result in the compile time error.
Now provide the response and nothing else.
| The ambiguity of static imports of methods can be resolved at the point of the method invocation. For example, if you had a static import for two methods that look like this:

    void frobnicate(int i);
    // and
    void frobnicate(boolean b);

then you could import and use both, because the compiler can tell which one to use based on the arguments you pass in (frobnicate(1) calls the first one, frobnicate(true) calls the second one). With classes, that's not possible: Foobar a; alone is not sufficient to tell you which of the two Foobar classes you want. Also note that a single static import can import multiple names. According to the relevant section of the JLS (emphasis mine): A single-static-import declaration imports all accessible static members with a given simple name from a type. For example, if the two frobnicate methods above were located in the same class, a single static import could import them both. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13009560', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1055638/']} | jdg_83528
stackexchange | llm_judgeable_groundtruth_similarity | 221919 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a difference between the two examples below for beginning a Perl script? If so, when would I use one over the other? example 1:

    #!/usr/bin/perl
    use warnings;

example 2:

    #!/usr/bin/perl -w
Now provide the response and nothing else.
| Using the switch will enable all warnings in all modules used by your program. Using the pragma you enable it only in that specific module (or script). Ideally, you use warnings in all your modules, but often that's not the case. Using the switch can get you a lot of warnings when you use a third party module that isn't warnings-safe. So, ideally it doesn't matter, but pragmatically it's often preferable for your end-users not to use the switch but the pragma. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/221919', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6406/']} | jdg_83529 |
stackexchange | llm_judgeable_groundtruth_similarity | 10236240 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I try to compile the following (g++ 4.6.3) class A {};A& operator*=( A& a, const A& b ){ return a;}A operator*( const A& a, const A& b ){ return A( a ) *= b;}int main( int, char*[] ){ A a, b; a = a*b; return 0;} I get the error /tmp/test.cxx: In function ‘A operator*(const A&, const A&)’:/tmp/test.cxx:14:20: error: no match for ‘operator*=’ in ‘(* & a) *= b’/tmp/test.cxx:14:20: note: candidate is:/tmp/test.cxx:6:1: note: A& operator*=(A&, const A&)/tmp/test.cxx:6:1: note: no known conversion for argument 1 from ‘A’ to ‘A&’ This puzzles me - how can a conversion from a class to a reference to that class not be known? Changing the declaration of class A as follows does not have any effect: class A{public: A() {} A( const A& ) {}}; Same error. I would be extremely grateful for hints as to what's going on here.
Now provide the response and nothing else.
| Like Lucian said, you cannot bind a temporary object to a non-const reference. The expectance of the compiler is that the object will cease to exist after the expression so it makes no sense to modify it. To fix your code, remove the temporary (making the argument const& makes no sense in operator *= ): A operator*(A a, const A& b){ return a *= b;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10236240', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1345022/']} | jdg_83530 |
stackexchange | llm_judgeable_groundtruth_similarity | 7775615 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to implement the Tumblr API in an Android app. I'm really getting stuck with authorizing a user so that they can do things such as post and view their dashboard. I don't really understand oAuth, and the Tumblr API documentation sort of skips right over it. I have no idea if I'm supposed to prompt the user for their credentials, or what to do with those once I have them, or anything like that. I added the Signpost library to my project, but I've been scratching my head since then. Anybody familiar with oAuth on Android who would care to fill me in? Thanks!
Now provide the response and nothing else.
| Yes, the documentation is not that good. You should first read about OAuth. Twitter has a good overview. First of all you need a consumer key and secret (you can get those by registering your app in tumblr). After that, you should use the auth URLs that Tumblr provides to get the authorization from the user. Usually you will generate a request URL, from which you can take the user to the browser where he/she will login and authorize your app. This will trigger a callback to your app, and you will be able to get the oAuth token. Save this in your app (SharedPreferences) so that you won't need to ask the user again to authenticate. With this token you will be able to interact with Tumblr's API that requires authentication. Note that you could also implement a webview instead of making the user use the browser. Though, this requires a bit more of work. I have found that the latest signpost library does not work well with Tumblr. You will need a bit older version. Head here and download these files: signpost-core-1.2.jar signpost-commonshttp4-1.2.jar (this is needed specially if you want to target pre-froyo devices) Import both libraries to your project. To use them, basically you need to call the following code: CommonsHttpOAuthConsumer consumer = new CommonsHttpOAuthConsumer(CONSUMER_KEY, CONSUMER_SECRET);CommonsHttpOAuthProvider provider = new CommonsHttpOAuthProvider( REQUEST_TOKEN_URL, ACCESS_TOKEN_URL, AUTH_URL);String authUrl = provider.retrieveRequestToken(consumer, CALLBACK_URL); CALLBACK_URL could be something like this: "tumblrapp://tumblrapp.com/ok". There is no need to set the callback URL on the Tumblr settings. Also, you will need to set an intent filter so your app gets called after authorization. Make sure that your Manifest looks like this: <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> <intent-filter> <action android:name="android.intent.action.VIEW"/> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.BROWSABLE"/> <data android:scheme="tumblrapp"/> </intent-filter> Now after authentication you can get the token like this: Uri uri = this.getIntent().getData();if (uri != null) { String token = uri.getQueryParameter("oauth_token");} I made a quick sample app. You can check it out here . You might want to move the request to a background thread as it will block the UI. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7775615', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/468498/']} | jdg_83531 |
stackexchange | llm_judgeable_groundtruth_similarity | 41362 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How to select all elements above the main diagonal of matrix? I need to create a list of them.
Now provide the response and nothing else.
| The Diagonal command has a second argument that allows listing the elements of the jth superdiagonal. So we can map over all the superdiagonals: n = 4;mat = Array[a, {n, n}];Flatten[Diagonal[mat, #] & /@ Range[n-1]] which gives a list of all the elements above the diagonal: {a[1, 2], a[2, 3], a[3, 4], a[1, 3], a[2, 4], a[1, 4]} | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/41362', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/12048/']} | jdg_83532 |
stackexchange | llm_judgeable_groundtruth_similarity | 637186 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
There seems to be speed benefits when using VMware's templates to create new VMs. However, I'm concerned about possible less obvious security and flexibility implications. For e.g.: First boot generated keys VMware proprietary methods of configuring server settings (IPs, hostname, etc.) In a shop that's almost entirely VMware with a majority of Linux being RHEL-based distros, has anyone come across some pitfalls when provisioning from templates? How about concerns when cloning? BTW: Regardless of the initial provisioning method, it would primarily be used as a bootstrap to Puppet for the rest of the configuration.
Now provide the response and nothing else.
| Depending on the environment and your provisioning process, it may be faster to build anew than using VMware templates and the clone from template feature. I did work in a large Linux-focused VMware environment, where the deployment process wasn't as automated as it should have been. We relied on vSphere templates of RHEL systems, but quite a bit of manual work was needed following the initial clone. Template advantages: Templates are good if you're baking other applications, configuration settings and things that extend beyond the reach of config management into the template. ( e.g. a complex Oracle application stack ) The cloning time is a function of your storage and vSphere infrastructure. I've seen really slow and resource intensive cloning jobs occur. If using Red Hat/CentOS/Debian/Ubuntu, for example, you can take advantage of the sys-unconfig command to "unconfigure" a system ahead of templating. This is the Linux equivalent to Microsoft's Sysprep and removes network interface rules, SSH keys, network settings, etc. VMware's IP assignment tools are okay and haven't been too much of a problem with mainstream Linux operating systems. PXE/kickstart deployment advantages: Usually faster than templating/cloning, assuming you have a local repository and a net boot image. More flexible in that you can make changes to your images/master without the cycle of converting template to VM, modifying, reconverting and cloning. Best when coupled with a configuration management solution. In the end, it depends on your application and how much additional work is needed following the initial OS install. I've taken a hybrid approach, using clones for the most complex instances and new builds via a provisioning system for routine systems. They're not mutually exclusive. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/637186', 'https://serverfault.com', 'https://serverfault.com/users/53736/']} | jdg_83533 |
stackexchange | llm_judgeable_groundtruth_similarity | 26066964 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is my project structure in mvn : As you can notice, I have two classes in src/test/java CardValidtorIT.java (THis is integration test) CardValidatorTest.java (This is Unit-Test) when I run mvn package I notice only the unit-test (CardValidatorTest.java) is run But when I run mvn integration-test I see both unit-test and Integration tests are run. How does mvn know not to execute the CardValidatorIT.java when I run mvn package . That is, why it did not run CardValidatoryIT.java Here is my pom.xml <?xml version="1.0" encoding="UTF-8"?><project xmlns="http://maven.apache.org/POM/4.0.0"xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <artifactId>chapter14</artifactId> <groupId>org.agoncal.book.javaee7</groupId> <version>1.0</version></parent><groupId>org.agoncal.book.javaee7.chapter14</groupId> <artifactId>chapter14-service</artifactId> <version>1.0</version><packaging>war</packaging> <dependencies> <dependency><groupId>org.glassfish.main.extras</groupId> <artifactId>glassfish-embedded-all</artifactId> <version>4.0</version><scope>provided</scope> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.5.1</version> <configuration> <source>1.7</source> <target>1.7</target> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>2.2</version> <configuration> <failOnMissingWebXml>false</failOnMissingWebXml> </configuration> </plugin> <plugin><groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>2.12.4</version><executions> <execution> <id>integration-test</id> <goals><goal>integration-test</goal> <goal>verify</goal> </goals> </execution> </executions> </plugin> </plugins> </build></project> P.S: I do know that I have a integration-test goal in mvn. but I have not tied the goal to which class should be run during integration test Thanks
Now provide the response and nothing else.
| See Maven Surefire. This plugin is responsible for mvn test in Maven. The default configuration is coming into play. This means the class with the word Test will come into play when you run mvn test and in your case mvn package http://maven.apache.org/surefire/maven-surefire-plugin/examples/inclusion-exclusion.html When you run mvn integration-test the failsafe plugin is used. It's default inclusion/exclusion rules differ - by default it looks for the word IT , for example. http://maven.apache.org/surefire/maven-failsafe-plugin/examples/inclusion-exclusion.html Note: It is odd to me that the test class CardValidatorTest is picked up when you run mvn integration-test . Based, on how I read the default inclusion and exclusion rules for the failsafe-plugin, I would not expect this. In fact, when I adapt your pom.xml to my own sample project, I don't see that behavior. Instead all Test classes are picked up with mvn test and mvn package . All IT classes are picked up with mvn integration-test . Are you sure you don't have some code level dependency on the two classes? Other than changed inclusion/exclusion rule, that's the only thing that I can think of that might cause them both to be picked up with mvn test or mvn package . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26066964', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1988876/']} | jdg_83534 |
stackexchange | llm_judgeable_groundtruth_similarity | 49000 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that $L=\{ \langle M \rangle \mid |L(M)| < \infty \}$ is not decidable (by Rice's theorem or using reduction, I followed it from $L$ not being decidable ). But is $L$ recognizable? What I tried is, let $L$ have a machine that recognizes it, let it be called $H$. Then given an input $\langle M\rangle$ I would start enumerating all strings in $L$ by using $H$. As $L$ has infinite many strings at some point the string being enumerated will be equal or larger than $\langle M\rangle$ (in lexicological order), thus using $H$ I am able to decide $L$ which I know is not possible. Is my method correct ? In either case is there a better method for example is there a general way for proving that a language is not recognizable like for undecidability we try to reduce an undecidable problem to the current problem ?
Now provide the response and nothing else.
| If that method worked, semi-decidability would always imply decidability for infinite languages (note how you don't need any property of $L$ to make the proof work), which we know is not true. Since your reasoning is sound, one of your assumptions has to be faulty. What you assume is, paraphrased: Assuming semi-decider $H$ for $L$, I can enumerate $L$ in lexicological order by using $H$. You don't say how you do this, though, and we have already determined that you can not. What you probably had in mind that we always get a recursive enumerator, and even a repetition-free one for infinite languages. Thus, it's the order that has to be impossible to achieve. Intuitively, you can not reject $i$ as next output of the enumerator after only finite time. Proving that $L$ is not semi-decidable works by reduction as well. See here for inspiration. Hint: Reduce from $\overline{K}$, the complement of the Halting language. Details: Define a computable mapping $\langle M \rangle \mapsto \langle M' \rangle$ so that $\langle M \rangle \in \overline{K}$ if and only if $\langle M' \rangle \in L$. For instance, define $M'$ by M'(y) : Simulate M(⟨M⟩) for y steps. if M halts accept y else reject y Note how $L(M') = \emptyset$ if $\langle M \rangle \in \overline{K}$, and $L(M') = \mathbb{N}_{\geq N}$ if $M$ halts after $N$ steps on its own code. | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/49000', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/27566/']} | jdg_83535 |
stackexchange | llm_judgeable_groundtruth_similarity | 872073 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
prove that if $T$ is a self adjoint operator and $a^2$ is less than $4b$ Then $T^2$+$aT$+$bI$ is invertible.Where $a$ and $b$ are scalars and $I$ is the identity operator Not: please dont use determinants because the book i am using didn't define determinants yet
Now provide the response and nothing else.
| This is a classic. The Fibonacci sequence $\pmod{m}$ is periodic for any $m$, since there are only a finite number of elements in $\mathbb{Z}_m\times\mathbb{Z}_m$, so for two distinct integers $a,b$ we must have $\langle F_a,F_{a+1}\rangle\equiv\langle F_b,F_{b+1}\rangle \pmod{m}$ as a consequence of the Dirichlet box principle. However, the last condition implies $F_{a+2}\equiv F_{b+2}\pmod{m}$ and, by induction, $F_{a+k}\equiv F_{b+k}\pmod{m}$. Hence the period of the Fibonacci sequence $\pmod{m}$ is bounded by $m^2$ ($m^2-1$ if we are careful enough to notice that $\langle F_c,F_{c+1}\rangle\equiv\langle0,0\rangle\pmod{m}$ is not possible since two consecutive Fibonacci numbers are always coprime). Now it suffices to take $m=10^{2014}$ and notice that $F_0=0$ to prove that there exists an integer $u\leq 10^{4028}$ such that $F_u\equiv 0\pmod{m}$, i.e. $F_u$ ends with at least $2014$ zeroes. It is also possible to give better estimates for $u$. Since $F_k$ is divisible by $5$ only when $k$ is divisible by $5$ and: $$ F_{5k} = F_k(25 F_k^4 + 25(-1)^k F_k^2 + 5),$$it follows that:$$ \nu_5(F_k) = \nu_5(k), $$so $$u\leq 2^{4028}\cdot 5^{2014}=20^{2014}$$ by the Chinese theorem. I put a proof of the Oleg567 conjecture:$$ k\mid F_n \quad\Longrightarrow\quad k^d\mid F_{k^{d-1}n} $$in a separate question . Since $8=F_6\mid F_{750}$ (because $6\mid 750$) and $\nu_5(750)=3$, we have that $1000|F_{750}$ and through the Oleg567's lemma we get$$ u\leq \frac{3}{4}10^{2014}.$$ | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/872073', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/165268/']} | jdg_83536 |
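The pigeonhole argument above is easy to test computationally. Here is a small Python sketch (my addition, not part of the original answer) that scans consecutive Fibonacci pairs mod m and reports the first index u > 0 with F_u ≡ 0 (mod m):

    def first_zero_index(m):
        """Smallest n >= 1 with F_n divisible by m (the rank of apparition)."""
        a, b, n = 0, 1, 0
        while True:
            a, b = b, (a + b) % m
            n += 1
            if a == 0:
                return n

    print(first_zero_index(10))     # 15 -- F_15 = 610 ends in a zero
    print(first_zero_index(1000))   # 750 -- consistent with the 1000 | F_750 claim above

Running it with m = 10**2014 is hopeless in practice, since the bound on u is astronomically large, but the boundedness argument in the answer guarantees the loop would terminate for every m.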
stackexchange | llm_judgeable_groundtruth_similarity | 12885968 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a method, which simplified looks like this:

    class Foo {
        public function bar($id) {
            // do stuff using $this, error occurs here
        }
    }

Calling it like this works great:

    $foo = new Foo();
    $foo->bar(1);

However, if I call it using call_user_func_array(), like this:

    call_user_func_array(array("Foo", "bar"), array('id' => 1));

which should be equivalent, I get the following error: Fatal error: Using $this when not in object context ($this is undefined). Why is this? Is there something I am missing? How should I do this so I can still use $this in the called method?
Now provide the response and nothing else.
| array("Foo", "bar") is equal to Foo::bar() , i.e. a static method - this makes sense since $foo is nowhere used and thus PHP cannot know which instance to use. What you want is array($foo, "bar") to call the instance method. See http://php.net/manual/en/language.types.callable.php for a list of the various callables. You also need to pass the arguments as an indexed array instead of an associative array, i.e. array(1) instead of array('id' => 1) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12885968', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1113435/']} | jdg_83537 |
stackexchange | llm_judgeable_groundtruth_similarity | 4723249 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Quick question, of which the quickest and easiest answer may well be to rearrange related code, but let's see... So I have an If statement (a piece of code which is part of a full working solution written in C#) rewritten using VB.NET. I am aware the VB.NET IIf(a, b, c) method evaluates both b and c regardless of the truth value of the first argument, but this seems to be the case in my standard construct, too:

    If (example Is Nothing Or example.Item IsNot compare.Item) Then
        'Proceed
    End If

Or, rather, more appropriately:

    If (example Is Nothing Or Not example.Item = compare.Item) Then
        'Proceed
    End If

Here, if example is Nothing (null) then I still get a NullReferenceException - is this my fault, or is it something I just have to endure at the whim of VB.NET?
Now provide the response and nothing else.
| It's your "fault" in that that's how Or is defined, so it's the behaviour you should expect: In a Boolean comparison, the Or operator always evaluates both expressions, which could include making procedure calls. The OrElse Operator (Visual Basic) performs short-circuiting, which means that if expression1 is True, then expression2 is not evaluated. But you don't have to endure it. You can use OrElse to get short-circuiting behaviour. So you probably want:

    If (example Is Nothing OrElse Not example.Item = compare.Item) Then
        'Proceed
    End If

I can't say it reads terribly nicely, but it should work... | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4723249', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/263681/']} | jdg_83538
stackexchange | llm_judgeable_groundtruth_similarity | 18172257 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on a Project Euler problem: the one about the sum of the even Fibonacci numbers. My code:

    def Fibonacci(n):
        if n == 0:
            return 0
        elif n == 1:
            return 1
        else:
            return Fibonacci(n-1) + Fibonacci(n-2)

    list1 = [x for x in range(39)]
    list2 = [i for i in list1 if Fibonacci(i) % 2 == 0]

The problem's solution can be easily found by printing sum(list2). However, it is taking a lot of time to come up with list2, I'm guessing. Is there any way to make this faster? Or is it okay even this way... (the problem: By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.)
Now provide the response and nothing else.
| Yes. The primitive recursive solution takes a lot of time. The reason for this is that for each number calculated, it needs to calculate all the previous numbers more than once. Consider what happens when you calculate Fibonacci(5) with your function: it computes the value of Fibonacci(2) three times, and the value of Fibonacci(1) five times. That just gets worse and worse the higher the number you want to compute. What makes it even worse is that with each fibonacci number you calculate in your list, you don't use the previous numbers you have knowledge of to speed up the computation – you compute each number "from scratch." There are a few options to make this faster:

1. Create a list "from the bottom up"

The easiest way is to just create a list of fibonacci numbers up to the number you want. If you do that, you build "from the bottom up", so to speak, and you can reuse previous numbers to create the next one. If you have a list of the fibonacci numbers [0, 1, 1, 2, 3], you can use the last two numbers in that list to create the next number. This approach would look something like this:

    >>> def fib_to(n):
    ...     fibs = [0, 1]
    ...     for i in range(2, n+1):
    ...         fibs.append(fibs[-1] + fibs[-2])
    ...     return fibs
    ...

Then you can get the first 20 fibonacci numbers by doing

    >>> fib_to(20)
    [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765]

Or you can get the 17th fibonacci number from a list of the first 40 by doing

    >>> fib_to(40)[17]
    1597

2. Memoization (relatively advanced technique)

Another alternative to make it faster exists, but it is a little more complicated as well. Since your problem is that you re-compute values you have already computed, you can instead choose to save the values you have already computed in a dict, and try to get them from that before you recompute them. This is called memoization. It may look something like this:

    >>> def fib(n, computed = {0: 0, 1: 1}):
    ...     if n not in computed:
    ...         computed[n] = fib(n-1, computed) + fib(n-2, computed)
    ...     return computed[n]

This allows you to compute big fibonacci numbers in a breeze:

    >>> fib(400)
    176023680645013966468226945392411250770384383304492191886725992896575345044216019675

This is in fact such a common technique that Python 3 includes a decorator to do this for you. I present to you, automatic memoization!

    import functools

    @functools.lru_cache(None)
    def fib(n):
        if n < 2:
            return n
        return fib(n-1) + fib(n-2)

This does pretty much the same thing as the previous function, but with all the computed stuff handled by the lru_cache decorator.

3. Just count up (a naïve iterative solution)

A third method, as suggested by Mitch, is to just count up without saving the intermediary values in a list. You could imagine doing

    >>> def fib(n):
    ...     a, b = 0, 1
    ...     for _ in range(n):
    ...         a, b = b, a+b
    ...     return a

I don't recommend these last two methods if your goal is to create a list of fibonacci numbers. fib_to(100) is going to be a lot faster than [fib(n) for n in range(101)] because with the latter, you still get the problem of computing each number in the list from scratch. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/18172257', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2085016/']} | jdg_83539
stackexchange | llm_judgeable_groundtruth_similarity | 1142803 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been trying to figure out how you solve this question but I just can't seem to understand how to factor $$56x^4+18x^2-8$$
Now provide the response and nothing else.
| As others have hinted at, start by letting $t=x^2$. Then you get the following:$$56x^4+18x^2-8 = 56t^2+18t-8.$$Now your job is to factor this:\begin{align}56t^2+18t-8 &= 2(4t-1)(7t+4)\\[0.5em] &= 2(4x^2-1)(7x^2+4)\\[0.5em] &= 2(2x-1)(2x+1)(7x^2+4).\end{align}Thus, we see that$$56x^4+18x^2-8 = 2(2x-1)(2x+1)(7x^2+4).$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1142803', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/214549/']} | jdg_83540 |
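For readers who want to double-check the factorisation mechanically, here is a one-line verification with SymPy (my addition; the original answer is the hand factorisation above):

    from sympy import symbols, factor

    x = symbols("x")
    print(factor(56*x**4 + 18*x**2 - 8))
    # expected, up to ordering of the factors: 2*(2*x - 1)*(2*x + 1)*(7*x**2 + 4)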