source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---
stackexchange | llm_judgeable_groundtruth_similarity | 154815 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
On cseligman.com, it is written: "So, we see things falling with an acceleration which we call the acceleration of gravity, and thinking that we live in a straight-line, uniformly moving or stationary inertial frame, we attribute that acceleration to a force, the force of gravity. Whereas in reality, objects falling towards the Earth are moving along geodesic paths with no acceleration and, according to a modified version of the law of inertia, have no force acting on them. They fall simply because the curved space-time near the Earth ..." Now, why do objects falling towards the Earth move along geodesic paths with no acceleration? That means the objects don't have any force acting on them, but why? A body in free fall moves with acceleration $g$, so why is it written like that? Why does the author apply the law of inertia to a freely falling body? The law of inertia can only be applied when no external force acts on the body. So, does a freely falling body accelerate under the force of gravity, or does it move uniformly along geodesic paths, as the author claims?
Now provide the response and nothing else.
| Suppose you and I start on the equator, a kilometre apart, and we both head exactly due North in a straight line, so we head off in exactly parallel directions. Now we know that in Euclidean geometry parallel lines remain the same distance apart. But if you and I measure the distance, $d$, between us we find that $d$ starts off at 1 km but decreases as we head North, and we eventually meet at the North Pole. So we have a paradox: we started out parallel but we moved together. The only explanation is that there is some force pulling us together. But we know there is no force really; it's just that we are moving on a curved surface. This is what happens in general relativity, though as I'm sure you'd expect it's a lot more complicated (principally because time is curved as well). If you see a freely falling body accelerating towards the Earth, you'd say there must be a force acting between the body and the Earth, and you'd call that force gravity. But the general relativist would say the Earth and the object are both moving along geodesics, i.e. in a straight line, and it's just that because spacetime is curved the two straight lines converge, just as we saw for motion on a sphere. There isn't really a force acting even though it looks like a force to us. That's why gravity is sometimes described as a fictitious force. Now, there is an obvious problem with my analogy of moving on a sphere, because you and I could start off stationary. Then we are not moving north, so we would not approach each other. This is where things get hard to visualise, because in GR we are always moving in time even when we are stationary in space. You need to imagine the north direction as moving forwards in time, so it's moving forwards in time that causes the two paths to converge. Actually there is an accelerating object involved in this, and it's you standing on the Earth's surface. How do you know you're accelerating? Well, the Earth is pushing at the soles of your shoes and accelerating you upwards. Where there's a force there's an acceleration, so the conclusion must be that the surface of the Earth is accelerating you outwards while the freely falling object you're watching is not accelerating. If you're interested, twistor59's answer to What is the weight equation through general relativity? explains how to calculate this acceleration, though you may find the maths involved a bit hard going. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/154815', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']} | jdg_84241 |
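(A quantitative aside on the sphere analogy, added for illustration; this is standard textbook material, not part of the original answer.) On a sphere of radius $R$, the separation $\xi(s)$ between two nearby geodesics obeys the Jacobi (geodesic deviation) equation $$\xi''(s) + \frac{1}{R^2}\,\xi(s) = 0 \quad\Longrightarrow\quad \xi(s)=\xi_0\cos(s/R),$$ so the initial 1 km separation shrinks to zero after a quarter circumference $s=\pi R/2$ (the pole) with no force anywhere in sight. In general relativity the same role is played by $$\frac{D^2\xi^\mu}{d\tau^2} = -R^\mu{}_{\nu\alpha\beta}\,u^\nu\xi^\alpha u^\beta,$$ with spacetime curvature standing in for the apparent "force" of gravity.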
stackexchange | llm_judgeable_groundtruth_similarity | 51852960 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to reindex using the following request:

POST _reindex
{
  "source": { "index": "poi_201705" },
  "dest": { "index": "poi_dev_2" }
}

But I am getting the following error in the Kibana console:

{
  "statusCode": 504,
  "error": "Gateway Time-out",
  "message": "Client request timeout"
}

Can anybody tell me what this issue is and how I can get rid of it?
Now provide the response and nothing else.
| 504 simply means that the request is still running but the HTTP connection from Kibana to ES timed out. You can still see the request going on by using the task management API, like this:

GET _tasks?actions=*reindex&detailed

If you want to run the task asynchronously, you can also do it with the following command:

POST _reindex?wait_for_completion=false

This will return a task id whose progress can then be checked with:

GET _tasks/<task-id>

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/51852960', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5346880/']} | jdg_84242 |
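If you are driving the reindex from code rather than Kibana, the same pattern applies. Below is a minimal sketch using the official `elasticsearch` Python client; the endpoint is hypothetical and the index names are taken from the question:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Start the reindex without holding the HTTP connection open.
resp = es.reindex(
    body={"source": {"index": "poi_201705"}, "dest": {"index": "poi_dev_2"}},
    wait_for_completion=False,
)
task_id = resp["task"]

# Poll the task API until the reindex finishes.
status = es.tasks.get(task_id=task_id)
print(status["completed"], status["task"]["status"])
```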
stackexchange | llm_judgeable_groundtruth_similarity | 45555276 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a hash with weeks as keys: {"16 2016"=>6.0, "31 2016"=>7.5, "04 2017"=>8.666666666666666, "14 2017"=>7.5, "15 2017"=>8.4, "17 2017"=>7.333333333333333, "20 2017"=>7.571428571428571, "25 2017"=>6.75, "29 2017"=>6.8, "30 2017"=>6.7272727272727275} I want to display a line chart, but some weeks are missing from my hash. I know how to fill in the empty weeks with the previous week's value, to get something like: {"16 2016"=>6.0, "17 2016"=>6.0, "31"=>7.5} I followed this solution to do it with my month hashes. When I change strptime to parse with "%W, %Y", the view never stops loading.
Now provide the response and nothing else.
| Update: If you do not want to define the legend but want the annotation positioned properly, please use the code below.

import plotly.offline as py_offline
import plotly.graph_objs as go

py_offline.init_notebook_mode()

trace0 = go.Scatter(
    x=[1, 2, 3, 4, 5],
    y=[1, 2, 3, 4, 5],
)
trace1 = go.Scatter(
    x=[1, 2, 3, 4, 5],
    y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
layout = go.Layout(
    annotations=[
        dict(
            x=1.12,
            y=1.05,
            align="right",
            valign="top",
            text='Legend Title',
            showarrow=False,
            xref="paper",
            yref="paper",
            xanchor="center",
            yanchor="top"
        )
    ]
)
fig = go.Figure(data=data, layout=layout)
py_offline.iplot(fig)

Notes: You need to define the x and y positions of the annotation yourself with this method, for varying legends. You can use HTML inside the text attribute (e.g. text='Legend Title<br>kinda lengthy').

Previous attempt: Another approach would be to create the legend and use annotations to add the title to the legend, provided you do not use the graph in editable mode. So in the example below, the legend is set to x=0 and y=1; since I want my legend title to be above my actual legend, I set the annotation location to x=0, y=1.05. xref and yref need to be set to paper. This will give a nice annotation like this:

import plotly.plotly as py
import plotly.graph_objs as go

trace0 = go.Scatter(
    x=[1, 2, 3, 4, 5],
    y=[1, 2, 3, 4, 5],
)
trace1 = go.Scatter(
    x=[1, 2, 3, 4, 5],
    y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
layout = go.Layout(
    legend=dict(
        x=0,
        y=1,
        traceorder='normal',
        font=dict(
            family='sans-serif',
            size=12,
            color='#000'
        ),
        bgcolor='#E2E2E2',
        bordercolor='#FFFFFF',
        borderwidth=2
    ),
    annotations=[
        dict(
            x=0,
            y=1.05,
            xref='paper',
            yref='paper',
            text='Legend Title',
            showarrow=False
        )
    ]
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45555276', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8372392/']} | jdg_84243 |
stackexchange | llm_judgeable_groundtruth_similarity | 51289458 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to call a method from another method, and for this I've used this. But my console shows an error. How do I call a method inside another method in Vue.js?

code:

methods: {
  searchLocations: function() {
    var address = this.search
    var geocoder = new window.google.maps.Geocoder()
    geocoder.geocode({ address: address }, function(results, status) {
      if (status === window.google.maps.GeocoderStatus.OK) {
        this.buscarLojas(results[0].geometry.location)
      }
    })
  },
  buscarLojas: function(center) {
    console.log(center)
  }
}

console: this.buscarLojas is not a function
Now provide the response and nothing else.
| You have an anonymous callback function that overwrites the this keyword. You can either assign this to another variable ref before using it in your anonymous function:

methods: {
  searchLocations: function () {
    var address = this.search
    var geocoder = new window.google.maps.Geocoder()
    var ref = this
    //  ^^^^^^^^^^
    geocoder.geocode({address: address}, function (results, status) {
      if (status === window.google.maps.GeocoderStatus.OK) {
        ref.buscarLojas(results[0].geometry.location)
      //^^^
      } else {
        alert(address + ' not found')
      }
    })
  },
  buscarLojas: function (center) {
    console.log(center)
  }
}

Or use an arrow function:

methods: {
  searchLocations: function () {
    var address = this.search
    var geocoder = new window.google.maps.Geocoder()
    geocoder.geocode({address: address}, (results, status) => {
      //                                                  ^^
      if (status === window.google.maps.GeocoderStatus.OK) {
        this.buscarLojas(results[0].geometry.location)
      } else {
        alert(address + ' not found')
      }
    })
  },
  buscarLojas: function (center) {
    console.log(center)
  }
}

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51289458', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3845038/']} | jdg_84244 |
stackexchange | llm_judgeable_groundtruth_similarity | 33475709 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
bundle install fails with: Make sure that `gem install therubyracer -v '0.12.1'` succeeds before bundling. Running gem install therubyracer -v '0.12.1' fails with errors which appear to relate to compilation. Prior to this, the libv8 gem was having problems, but that is now installed thanks to following SO posts. Guides for fixing therubyracer for Mavericks seem not to work, presumably due to changes in El Capitan? ln: /usr/bin/cpp: Operation not permitted
Now provide the response and nothing else.
| Install Xcode through the App Store. Install 'command line tools' for Xcode ( xcode-select --install ). If using rbenv, after installing ruby you need to run rbenv rehash. Assuming you have run gem install bundler and got errors, remove your mess:

gem uninstall libv8
gem uninstall therubyracer
brew rm v8

Install gcc 4.2:

brew tap homebrew/dupes
brew install apple-gcc42

Install v8:

brew tap homebrew/versions
brew install v8-315
brew link --force v8-315

Install the gems:

gem install libv8 -- --with-system-v8
gem install therubyracer

If you get dyld: lazy symbol binding failed: Symbol not found, @rpbaltazar suggests an alternative:

bundle config --local build.libv8 --with-cxx=/usr/local/bin/g++-4.2
bundle install

| {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/33475709', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/849697/']} | jdg_84245 |
stackexchange | llm_judgeable_groundtruth_similarity | 34486052 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When passing a class or primitive type into a function, any change made in the function to the parameter will be reflected outside of the class. This is basically the same thing an inout parameter is supposed to do. What is a good use case for an inout parameter?
Now provide the response and nothing else.
| inout means that modifying the local variable will also modify the passed-in parameter. Without it, the passed-in parameter will keep its original value. Try to think of a reference type when you are using inout and of a value type when you are not. For example:

import UIKit

var num1: Int = 1
var char1: Character = "a"

func changeNumber(var num: Int) {
    num = 2
    print(num)  // 2
    print(num1) // 1
}
changeNumber(num1)

func changeChar(inout char: Character) {
    char = "b"
    print(char)  // b
    print(char1) // b
}
changeChar(&char1)

A good use case is a swap function that modifies the passed-in parameters. Swift 3+ note: Starting in Swift 3, the inout keyword must come after the colon and before the type. For example, Swift 3+ now requires func changeChar(char: inout Character). | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/34486052', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/40106/']} | jdg_84246 |
stackexchange | llm_judgeable_groundtruth_similarity | 2155656 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm kinda ashamed of asking this since I've been working with MySQL for years, but oh well. I have a table with two fields, a and b. I will be running the following queries on it:

SELECT * FROM ... WHERE A = 1;
SELECT * FROM ... WHERE B = 1;
SELECT * FROM ... WHERE A = 1 AND B = 1;

From the performance point of view, is at least one of the following configurations of indexes slower for at least one query? If yes, please elaborate.

ALTER TABLE ... ADD INDEX (a);
ALTER TABLE ... ADD INDEX (b);
ALTER TABLE ... ADD INDEX (a, b);

ALTER TABLE ... ADD INDEX (a);
ALTER TABLE ... ADD INDEX (b);
ALTER TABLE ... ADD INDEX (a, b);

Thanks (note that we are talking about non-unique indexes)
Now provide the response and nothing else.
| Yes, at least one case is considerably slower. If you only define the following index: ALTER TABLE ... ADD INDEX (a, b); ... then the query SELECT * FROM ... WHERE B = 1; will not use that index. When you create an index with a composite key, the order of the columns of the key is important. It is recommended to try to order the columns in the key to enhance selectivity, with the most selective columns to the left-most of the key. If you don't do this, and put a non-selective column as the first part of the key, you risk not using the index at all. (Source: Tips on Optimizing SQL Server Composite Index ) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2155656', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/95135/']} | jdg_84247 |
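As an illustration of this left-prefix rule, here is a small runnable sketch. It uses SQLite's EXPLAIN QUERY PLAN only because it runs anywhere from Python's standard library; MySQL behaves analogously, which you can confirm with its own EXPLAIN:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INT, b INT)")
con.execute("CREATE INDEX idx_ab ON t (a, b)")  # composite index only

for where in ("a = 1", "b = 1", "a = 1 AND b = 1"):
    plan = con.execute(f"EXPLAIN QUERY PLAN SELECT * FROM t WHERE {where}").fetchall()
    print(where, "->", plan[0][-1])
# The "b = 1" query reports a SCAN (every row examined) rather than a
# SEARCH: without a constraint on the leading column `a`, the composite
# index cannot be used for the lookup.
```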
stackexchange | llm_judgeable_groundtruth_similarity | 6118948 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm thinking that this needs to be changed to a while clause; at the moment it'll wait till all 10000 pings are done. I need it to return when the ping is successful. The program "say" is on OSX; it makes the computer speak.

#!/bin/bash
echo begin ping
if ping -c 100000 8.8.8.8 | grep timeout;
then echo `say timeout`;
else echo `say the internet is back up`;
fi

OK, I don't have rights to answer my own question, so here's my answer for it after playing around: Thanks, yeah I didn't know about $? until now. Anyway, now I've gone and made this. I like that yours doesn't go forever, but in my situation I didn't need it to stop until it's finished.

#!/bin/bash
intertube=0
echo "begin ping"
while [ $intertube -ne 1 ]; do
    ping -c 3 google.com
    if [ $? -eq 0 ]; then
        echo "ping success";
        say success
        intertube=1;
    else
        echo "fail ping"
    fi
done
echo "fin script"
Now provide the response and nothing else.
| You probably shouldn't rely on the textual output of a command to decide this, especially when the ping command gives you a perfectly good return value: The ping utility returns an exit status of zero if at least one response was heard from the specified host; a status of two if the transmission was successful but no responses were received; or another value from <sysexits.h> if an error occurred. In other words, use something like:

((count = 60))                            # Maximum number to try.
while [[ $count -ne 0 ]] ; do
    ping -c 1 8.8.8.8                     # Try once.
    rc=$?
    if [[ $rc -eq 0 ]] ; then
        ((count = 1))                     # If okay, flag loop exit.
    else
        sleep 1                           # Minimise network storm.
    fi
    ((count = count - 1))                 # So we don't go forever.
done
if [[ $rc -eq 0 ]] ; then                 # Make final determination.
    echo `say The internet is back up.`
else
    echo `say Timeout.`
fi

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6118948', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/437449/']} | jdg_84248 |
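For comparison, a rough Python rendering of the same retry loop; it relies on the platform's ping command (-c is the Linux/macOS flag):

```python
import subprocess
import time

def wait_for_net(host="8.8.8.8", tries=60):
    for _ in range(tries):
        # ping exits 0 iff at least one reply was heard, per its man page
        if subprocess.run(["ping", "-c", "1", host],
                          stdout=subprocess.DEVNULL).returncode == 0:
            return True
        time.sleep(1)  # minimise network storm
    return False

print("the internet is back up" if wait_for_net() else "timeout")
```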
stackexchange | llm_judgeable_groundtruth_similarity | 99536 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Shouldn't it be the 4th option, since the carbon attached to the ring is secondary as well as resonance stabilized?
Now provide the response and nothing else.
| I've searched SciFinder for this and related reactions (allylbenzene + H-X). In the results, there are no examples of the halide at the benzylic position in the product, and there are many examples of the halide at the secondary position arising from the more substituted end of the alkene. One such example is below. It is certainly warranted to consider both the direct addition and the product from rearrangement (since a resonance-stabilized, benzylic carbocation would be formed as an intermediate), and if one were performing the experiment in the lab, it would be a good idea to look for both products. It seems that in practice, the addition of halide to the secondary carbocation is faster than the hydride shift. Predictions about kinetics like this are very difficult, and exam questions that require them could be considered unfair unless there is an overwhelming and predictable rationale for one product over another. | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/99536', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/64829/']} | jdg_84249 |
stackexchange | llm_judgeable_groundtruth_similarity | 32275213 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to use the Leaflet package in R to draw a map and connect the markers given the latitude and longitude information in the table below.

| Observation | InitialLat | InitialLong | NewLat    | NewLong   |
|-------------|------------|-------------|-----------|-----------|
| A           | 62.469722  | 6.187194    | 51.4749   | -0.221619 |
| B           | 48.0975    | 16.3108     | 51.4882   | -0.302621 |
| C           | 36.84      | -2.435278   | 50.861822 | -0.083278 |
| D           | 50.834194  | 4.298361    | 54.9756   | -1.62179  |
| E           | 50.834194  | 4.298361    | 54.9756   | -1.62179  |
| F           | 50.834194  | 4.298361    | 51.4882   | -0.302621 |
| G           | 47.460427  | -0.530804   | 51.44     | -2.62021  |
| H           | 51.5549    | -0.108436   | 53.4281   | -1.36172  |
| I           | 51.5549    | -0.108436   | 52.9399   | -1.13258  |
| J           | 51.5549    | -0.108436   | 51.889839 | -0.193608 |
|             | 51.5549    | -0.108436   | 52.0544   | 1.14554   |

I want to draw lines from an initial point given by the coordinates in the InitialLat and InitialLong columns to an end point given by the NewLat and NewLong columns. Here is my current R code, which only draws the markers on the map.

library(leaflet)
map3 = leaflet(data) %>% addTiles()
map3 %>% addMarkers(~InitialLong, ~InitialLat, popup=~Observation)
Now provide the response and nothing else.
| Here is an alternative way using the leaflet package. I just took two data points from your data for the purpose of demonstration.

mydf <- data.frame(Observation = c("A", "B"),
                   InitialLat = c(62.469722, 48.0975),
                   InitialLong = c(6.187194, 16.3108),
                   NewLat = c(51.4749, 51.4882),
                   NewLong = c(-0.221619, -0.302621),
                   stringsAsFactors = FALSE)

I changed the format of mydf and created a new data frame for leaflet. You can reshape your data in various ways.

mydf2 <- data.frame(group = c("A", "B"),
                    lat = c(mydf$InitialLat, mydf$NewLat),
                    long = c(mydf$InitialLong, mydf$NewLong))

#  group      lat      long
#1     A 62.46972  6.187194
#2     B 48.09750 16.310800
#3     A 51.47490 -0.221619
#4     B 51.48820 -0.302621

library(leaflet)
library(magrittr)

leaflet() %>%
  addTiles() %>%
  addPolylines(data = mydf2, lng = ~long, lat = ~lat, group = ~group)

I trimmed the interactive map I got. Please see the map below. Although the two lines appear connected in this image, they are separated. If you run the code and zoom in, you will see that the two lines are separated. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32275213', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5277582/']} | jdg_84250 |
stackexchange | llm_judgeable_groundtruth_similarity | 40961370 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to make a simple modification in a PHP file located here in my parent theme:

wp-content\themes\sailing\inc\widgets\gallery\tpl\base.php

So I created the same folder structure in my child theme and made the modification I need in this file. I also copied/pasted all the PHP files needed to declare this widget:

wp-content\themes\sailing\inc\widgets\widgets.php
wp-content\themes\sailing\inc\widgets\gallery\gallery.php
wp-content\themes\sailing-child\inc\widgets\widgets.php
wp-content\themes\sailing-child\inc\widgets\gallery\gallery.php

What am I missing here?
Now provide the response and nothing else.
| WordPress child themes do not work this way. The only files that you can override by using the same path in your child theme are the "basic" files: index.php, page.php, style.css... mostly the template files. When it comes to overriding functions or classes in a child theme, you have several ways to handle it: re-declaring the functions/classes, or duplicating the functions/classes. It depends on how your theme is built and whether it is "child theme" ready. Let's have a look at your widget issue. If you open your widget declaration file within your parent theme, you'll see something like:

class Widget_Name extends WP_Widget {
    ... CODE OF THE WIDGET ...

See: https://codex.wordpress.org/Widgets_API

The ideal case is that you don't see the above lines first, but:

if (!class_exists('Widget_Name')) {
    class Widget_Name extends WP_Widget {
        ... CODE OF THE WIDGET ...

Which means you can just copy/paste your file and that will work just fine: your widget will override the parent one and no error will be thrown, as the parent widget won't be executed. That's the "child theme ready" theme. Note that it's the same with functions ( if (!function_exists('function_name')) ). Don't forget to call your file from your child-theme/functions.php file, as it won't be called by default. Like:

require_once('path/to/your/widget_class.php');

The other way, if you don't have a class_exists call, is to just duplicate the file and call it with require_once. You would see an error, as you're defining the same class twice; PHP won't let that happen: fatal error. So just rename:

class Widget_Name2 extends WP_Widget {

And somewhere in your file (most of the time at the end), look for register_widget( and edit the class name:

register_widget( 'Widget_Name2' );

That's not the most handy way, as you'll have the same widget twice, but it does work. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40961370', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3046503/']} | jdg_84251 |
stackexchange | llm_judgeable_groundtruth_similarity | 42745 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a strong legal guarantee that the U.S. can give to another country that it won't attack them? I think that because U.S. policy tends to change drastically every time the administration changes, some countries are reluctant to come to any sort of peace agreement with the U.S. I am talking about countries like North Korea and Iran. Is there any legally binding guarantee that could convince a country that the U.S. won't attack it? If not, is there any non-legal guarantee the U.S. can offer?
Now provide the response and nothing else.
| No such guarantee would be meaningful if the US administration and congress change their opinion. Whatever they write in law or sign, they can undo -- if not legally then in practice. The US had ratified a treaty with Panama about who had control over which part of the Canal zone, yet they deposed Noriega. The US had given security guarantees to Ukraine when Ukraine surrendered their nuclear weapons, and then failed to follow them. The US had made a deal with Gaddafi to make him drop his WMD programs and then supported his overthrow. For that matter, the US had made a deal with Iran on nuclear issues and it is generally understood that Iran held that promise, yet the US broke the deal. The only way for another country to be safe would be to join another strong alliance, or to retain sufficient military leverage to make an US attack unlikely. On the other hand, US foreign policy doesn't have to "change drastically" when the administration changes. Johnson and Nixon both fought the Vietnam war, Bush and Obama fought the Iraq counterinsurgency, and so on. | {} | {'log_upvote_score': 8, 'links': ['https://politics.stackexchange.com/questions/42745', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/26532/']} | jdg_84252 |
stackexchange | llm_judgeable_groundtruth_similarity | 22169752 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
String temp_address = "nothing";
try {
    String selectQuery = "SELECT lastchapter FROM Bookdetails INTO" + temp_address + "WHERE bookpath=?";
    db.execSQL(selectQuery, new String[] { fileName });
    System.out.println(temp_address + " result of select Query");
} catch (Exception e) {
    System.out.println(e + " is the error here");
} finally {
    db.close();
}

Logcat: android.database.sqlite.SQLiteException: near "bookpath": syntax error: , while compiling: SELECT lastchapter FROM Bookdetails INTOnothingWHERE bookpath=?

I just want the result of the above query, so that the string stored in lastchapter is available in temp_address. Please help, I am new to the Android SQLite database.
Now provide the response and nothing else.
| There are SQL syntax problems, and you'll need to use a Cursor to retrieve query results, for example with rawQuery():

String selectQuery = "SELECT lastchapter FROM Bookdetails WHERE bookpath=?";
Cursor c = db.rawQuery(selectQuery, new String[] { fileName });
if (c.moveToFirst()) {
    temp_address = c.getString(c.getColumnIndex("lastchapter"));
}
c.close();

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22169752', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3244439/']} | jdg_84253 |
stackexchange | llm_judgeable_groundtruth_similarity | 31963297 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 2 Lists:

List<String> subjectArr = Arrays.asList("aa", "bb", "cc");
List<Long> numArr = Arrays.asList(2L, 6L, 4L);

How do I create a new List and zip the two Lists into it?

List<?> subjectNumArr = zip(subjectArr, numArr);
// subjectNumArr == [{'aa',2},{'bb',6},{'cc',4}]
Now provide the response and nothing else.
| Here's a Java 8 solution using the Pair class (like in @ZhekaKozlov's answer):

public static <A, B> List<Pair<A, B>> zipJava8(List<A> as, List<B> bs) {
    return IntStream.range(0, Math.min(as.size(), bs.size()))
            .mapToObj(i -> new Pair<>(as.get(i), bs.get(i)))
            .collect(Collectors.toList());
}

In Java 9 onwards you can use Map.entry():

public static <A, B> List<Map.Entry<A, B>> zipJava9(List<A> as, List<B> bs) {
    return IntStream.range(0, Math.min(as.size(), bs.size()))
            .mapToObj(i -> Map.entry(as.get(i), bs.get(i)))
            .collect(Collectors.toList());
}

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31963297', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5218943/']} | jdg_84254 |
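For what it's worth, the Java helper above mirrors Python's built-in zip(), which also truncates to the shorter list:

```python
subject_arr = ["aa", "bb", "cc"]
num_arr = [2, 6, 4]

# zip() pairs elements by index and stops at the shorter input.
print(list(zip(subject_arr, num_arr)))  # [('aa', 2), ('bb', 6), ('cc', 4)]
```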
stackexchange | llm_judgeable_groundtruth_similarity | 1653841 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What if we used three axes on a plane, like in the picture below? Then we could define any point on a plane using three numbers: $$P=(a,b,c)$$ Three numbers seem excessive; however, we do not need to use negative numbers that way. $$a,b,c \geq 0$$ Of course, such coordinates in general will not be unique, since: $$0=(0,0,0)=(1,1,1)=(2,2,2)=...$$ We can define a canonical form for each number this way: $$m=\min(a,b,c)$$ $$P=(a,b,c)-(m,m,m)$$ This is not much different from using the '$-$' sign in the usual complex plane. The important point is that negative real numbers have the canonical form: $$(-a,0,0)=(0,a,a)$$ On the left picture you can see how the numbers with integer coordinates form hexagonal cells. However, we can use the easily scalable cells on the right side to define fractional coordinates. The arithmetic is simple: $$(a_1,a_2,a_3) \pm (b_1,b_2,b_3)=(a_1\pm b_1,a_2 \pm b_2,a_3 \pm b_3)$$ We can always get rid of any negatives by transforming the result to canonical form. The multiplication law can only be defined as follows: $$(a_1,a_2,a_3) \cdot (b_1,b_2,b_3)=(a_1 b_1 + a_2 b_3 + a_3 b_2, a_3 b_3+a_1 b_2+a_2 b_1,a_2 b_2 + a_1 b_3+a_3 b_1)$$ If we let $a$ be the real part of the number, then this rule agrees well with the rules for real numbers. It's also commutative, associative and distributive. The conjugate of any number is defined: $$P^*=(a,b,c)^*=(a,c,b)$$ Then we can see that multiplication by its conjugate gives the square of the norm: $$||P||^2=(a,b,c) \cdot (a,c,b)=(a^2 + b^2+c^2, bc+ac+ab,bc + ab+ac)=$$ $$=(a^2 + b^2+c^2-bc-ac-ab,0,0)=a^2 + b^2+c^2-bc-ac-ab$$ The inverse of the number is introduced this way: $$\frac{1}{P}=\frac{1}{(a,b,c)}=\frac{(a,c,b)}{a^2 + b^2+c^2-bc-ac-ab}$$ Finally, let's introduce the units: $$(a,b,c) = (a,0,0)+(0,b,0)+(0,0,c)=a+b \mathbb{j}+c \mathbb{k}$$ Using the multiplication law above we can show that: $$\mathbb{j}^2=\mathbb{k}$$ $$\mathbb{k}^2=\mathbb{j}$$ $$\mathbb{j k}=\mathbb{k j}=1$$ These are just cube roots of unity: $$\mathbb{j}^3=1$$ $$\mathbb{k}^3=1$$ The units are not linearly independent! $$1+\mathbb{j}+\mathbb{k}=(1,1,1)=0$$ The imaginary unit will be: $$\mathbb{i}=\frac{1}{\sqrt{3}}(\mathbb{j}-\mathbb{k})=\frac{1}{\sqrt{3}}(0,1,-1)=\frac{1}{\sqrt{3}}(1,2,0)$$ So this number field works as well as the complex number field, doesn't it? Why is it never introduced in textbooks? Isn't it useful, especially with its relation to the hexagonal coordinate system?
Now provide the response and nothing else.
| It depends on the context in which you want to think of these. If you want to look at $\mathbb{Z}[\zeta_3]$, this is a structure that is studied. It is studied on its own, where it is known as the Eisenstein Integers, and as an example of $\mathbb{Z}[\zeta_p]$, which has a lot of interesting properties as a collection of rings. These are called Cyclotomic Fields. If you want to look at it as $\mathbb{R}[\zeta_3]$, then it is the case that it is the same field as $\mathbb{R}[\zeta_4]=\mathbb{C}$. The expression as $\mathbb{C}$ is usually considered better because $\mathbb{R}[i]$ has an orthogonal basis. In circumstances where a hexagonal grid is relevant, it is used. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1653841', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/269624/']} | jdg_84255 |
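(A quick check, added for illustration; this is standard and easy to verify by hand.) Sending $\mathbb{j}\mapsto\zeta$ and $\mathbb{k}\mapsto\zeta^2$ with $\zeta=e^{2\pi i/3}$, so that $(a,b,c)\mapsto a+b\zeta+c\zeta^2\in\mathbb{C}$, reproduces every relation in the question: $$1+\zeta+\zeta^2=0,\qquad \left|a+b\zeta+c\zeta^2\right|^2=a^2+b^2+c^2-ab-bc-ca,\qquad \frac{\zeta-\zeta^2}{\sqrt{3}}=i.$$ In other words, $\mathbb{R}[\mathbb{j}]$ with these rules is exactly $\mathbb{C}$ written in a hexagonal (non-orthogonal) basis.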
stackexchange | llm_judgeable_groundtruth_similarity | 10299148 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
First let me mention that I've gone through many suggested questions and found no relevent answer. Here is what I'm doing. I'm connected to my Amazon EC2 instance. I can login with MySQL root with this command: mysql -u root -p Then I created a new user bill with host % CREATE USER 'bill'@'%' IDENTIFIED BY 'passpass'; Granted all the privileges to user bill: grant all privileges on *.* to 'bill'@'%' with grant option; Then I exit from root user and try to login with bill: mysql -u bill -p entered the correct password and got this error: ERROR 1045 (28000): Access denied for user 'bill'@'localhost' (using password: YES)
Now provide the response and nothing else.
| You probably have an anonymous user ''@'localhost' or ''@'127.0.0.1'. As per the manual: "When multiple matches are possible, the server must determine which of them to use. It resolves this issue as follows: (...) When a client attempts to connect, the server looks through the rows [of table mysql.user] in sorted order. The server uses the first row that matches the client host name and user name. (...) The server uses sorting rules that order rows with the most-specific Host values first. Literal host names [such as 'localhost'] and IP addresses are the most specific." Therefore such an anonymous user would "mask" any other user like '[any_username]'@'%' when connecting from localhost. 'bill'@'localhost' does match 'bill'@'%', but would match (e.g.) ''@'localhost' beforehand. The recommended solution is to drop this anonymous user (this is usually a good thing to do anyway). The edits below are mostly irrelevant to the main question; they are only meant to answer some questions raised in other comments within this thread.

Edit 1: Authenticating as 'bill'@'%' through a socket.

root@myhost:/home/mysql-5.5.16-linux2.6-x86_64# ./mysql -ubill -ppass --socket=/tmp/mysql-5.5.sock
Welcome to the MySQL monitor (...)

mysql> SELECT user, host FROM mysql.user;
+------+-----------+
| user | host      |
+------+-----------+
| bill | %         |
| root | 127.0.0.1 |
| root | ::1       |
| root | localhost |
+------+-----------+
4 rows in set (0.00 sec)

mysql> SELECT USER(), CURRENT_USER();
+----------------+----------------+
| USER()         | CURRENT_USER() |
+----------------+----------------+
| bill@localhost | bill@%         |
+----------------+----------------+
1 row in set (0.02 sec)

mysql> SHOW VARIABLES LIKE 'skip_networking';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| skip_networking | ON    |
+-----------------+-------+
1 row in set (0.00 sec)

Edit 2: Exact same setup, except I re-activated networking, and I now create an anonymous user ''@'localhost'.

root@myhost:/home/mysql-5.5.16-linux2.6-x86_64# ./mysql
Welcome to the MySQL monitor (...)

mysql> CREATE USER ''@'localhost' IDENTIFIED BY 'anotherpass';
Query OK, 0 rows affected (0.00 sec)

mysql> Bye

root@myhost:/home/mysql-5.5.16-linux2.6-x86_64# ./mysql -ubill -ppass \
    --socket=/tmp/mysql-5.5.sock
ERROR 1045 (28000): Access denied for user 'bill'@'localhost' (using password: YES)

root@myhost:/home/mysql-5.5.16-linux2.6-x86_64# ./mysql -ubill -ppass \
    -h127.0.0.1 --protocol=TCP
ERROR 1045 (28000): Access denied for user 'bill'@'localhost' (using password: YES)

root@myhost:/home/mysql-5.5.16-linux2.6-x86_64# ./mysql -ubill -ppass \
    -hlocalhost --protocol=TCP
ERROR 1045 (28000): Access denied for user 'bill'@'localhost' (using password: YES)

Edit 3: Same situation as in edit 2, now providing the anonymous user's password.

root@myhost:/home/mysql-5.5.16-linux2.6-x86_64# ./mysql -ubill -panotherpass -hlocalhost
Welcome to the MySQL monitor (...)

mysql> SELECT USER(), CURRENT_USER();
+----------------+----------------+
| USER()         | CURRENT_USER() |
+----------------+----------------+
| bill@localhost | @localhost     |
+----------------+----------------+
1 row in set (0.01 sec)

Conclusion 1, from edit 1: One can authenticate as 'bill'@'%' through a socket.

Conclusion 2, from edit 2: Whether one connects through TCP or through a socket has no impact on the authentication process (except that one cannot connect as anyone else but 'something'@'localhost' through a socket, obviously).

Conclusion 3, from edit 3: Although I specified -ubill, I have been granted access as an anonymous user. This is because of the "sorting rules" advised above. Notice that in most default installations, a no-password, anonymous user exists (and should be secured/removed). | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/10299148', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/813739/']} | jdg_84256 |
stackexchange | llm_judgeable_groundtruth_similarity | 25942010 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does this code sometimes produce an ArrayIndexOutOfBoundsException? How is that even possible for String.valueOf(int)?

public static String ipToString(ByteString bs) {
    if (bs == null || bs.isEmpty()) {
        return null;
    } else {
        StringBuilder sb = new StringBuilder();
        boolean started = false;
        for (Byte byt : bs) {
            if (started) {
                sb.append(".");
            }
            sb.append(String.valueOf(byt & 0xFF));
            started = true;
        }
        return sb.toString();
    }
}

java.lang.ArrayIndexOutOfBoundsException: -81914
    at java.lang.Integer.getChars(Integer.java:458)
    at java.lang.Integer.toString(Integer.java:402)
    at java.lang.String.valueOf(String.java:3086)
    at com.mystuff.mypackage.ipToString(MyCode.java:1325)
    ...
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Updates: I don't know the value of the byte when this occurs, but it doesn't seem like it should be possible for any possible value of a byte. Once it happens once, every invocation then errors out with the same exception.

Environment:
java version "1.8.0_20"
Java(TM) SE Runtime Environment (build 1.8.0_20-b26)
Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode)
Now provide the response and nothing else.
| This is a JIT compiler bug that was introduced in JDK 8u20 as a side effect of another fix: JDK-8042786. The problem is related to the auto-boxing elimination optimization. The work-around is to switch the optimization off with the -XX:-EliminateAutoBox JVM flag. It looks like the problem also exists in the most recent JDK 9 source base. I've submitted a bug report: https://bugs.openjdk.java.net/browse/JDK-8058847 with a 100% reproducible minimal test case included. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25942010', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41613/']} | jdg_84257 |
stackexchange | llm_judgeable_groundtruth_similarity | 2888 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Will any issues occur if I have a topology involving 3 subnets and two OSPF areas where one subnet is in AREA 0 and the other two subnets are both in AREA 1? For example: [area 1, subnet 1]---[ABR #1]---[area 0, subnet 2]---[ABR #2]---[area 1, subnet 3]
Now provide the response and nothing else.
| Regarding the question of splitting area 1 across the backbone (area 0): [area 1, subnet 1]---[ABR #1]---[area 0, subnet 2]---[ABR #2]---[area 1, subnet 3] [area 1, subnet 1]---[Router #1]---[area 0, subnet 2, end device #1]---[Router #2]---[area 0, subnet 2, end device #2]---[Router #3]---[area 1, subnet 3] Short answer: There is no problem with your proposal... Long answer: Even Peter's answer, which argues that reusing area numbers is bad design, offers no proof that this is a bad design; if you examine the hyperlinks he used, there is no explanation of undesirable consequences for this design. Furthermore, the argument that you might have problems connecting R1 and R3 falls short, since the R1 to R3 link could legitimately be configured in either Area 0 or Area 1, depending on what traffic you want to transit it. The difficulties he mentions are a false dilemma. In RFC 2328, Section 3.7 OSPF explicitly allows you to use discontiguous non-backbone areas (which are called "area partitions", below): OSPF does not actively attempt to repair area partitions. When an area becomes partitioned, each component simply becomes a separate area. The backbone then performs routing between the new areas. Some destinations reachable via intra-area routing before the partition will now require inter-area routing. ... Also, the backbone itself must not partition. Thus whether you use the proposed discontiguous Area 1 is just a matter of taste... some people find it illogical to use the configuration in your diagram; these people might suggest that you keep OSPF area numbers together... so you'd have to change [area 1, subnet 3] on Router #3 to [area 3, subnet 3]. Other people see no problem with reusing Area 1, since OSPF area numbers are only locally significant to the router originating OSPF hellos. Either way, we should admit that OSPF is a remarkably flexible protocol; regardless of choosing one side or another in this debate. | {} | {'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/2888', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/1453/']} | jdg_84258 |
stackexchange | llm_judgeable_groundtruth_similarity | 3690499 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$I(a,b)$ = $\int_1^e x^a\ln^bx \,dx$, $b > 0$. I need to show that $I(a,1) + I(-a,1)\ge2$. I computed both integrals. For the first one I get: $I(a,1)$ = $\int_1^e x^a\ln x \,dx$ = $\frac1{(a+1)^2}(ae^{a+1} + 1)$. The second one is the same with $-a$ in place of $a$. Is there a more elegant solution?
Now provide the response and nothing else.
| Notice that $I(a,1)+I(-a,1)=\int_1^e (x^a+x^{-a})\ln x dx\ge \int_1^e 2\ln x dx=2$ . This is in fact the best possible since $I(0,1)+I(-0,1)=2$ . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3690499', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/119047/']} | jdg_84259 |
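(Filling in the two steps used above.) By AM-GM, for $x>0$, $$x^a+x^{-a}\ \ge\ 2\sqrt{x^a\cdot x^{-a}}\ =\ 2,$$ and the remaining integral is elementary: $$\int_1^e 2\ln x\,dx \;=\; 2\bigl[x\ln x - x\bigr]_1^e \;=\; 2\bigl((e-e)-(0-1)\bigr) \;=\; 2.$$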
stackexchange | llm_judgeable_groundtruth_similarity | 4074737 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
You make a gaming website where the user can buy gaming credits; the funds are deposited/credited into the user's virtual account to play some game, etc.

1. If you got an accountant to record the transaction, it would be recorded like this (maybe a bit more complex, but you get the point):

TRANSACTION
PK_ID1 Cash - $10 (System)
PK_ID2 Deposit $10 (System)

TRANSACTION
PK_ID3 Bank Account - $10 (John)
PK_ID4 Deposit $10 (John)

2. As a developer, do you really need to waste 2 extra records? Why not just record it like this (then you might store information about where the funds came from, status, etc. in other columns under the same deposit record):

TRANSACTION
PK_ID1 Cash - $10 (system)
PK_ID2 Deposit $10 (John)

Is there any real advantage of option #1 over option #2, and vice versa?

EDIT: modified question, removed CR, DR and replaced with a sign.
Now provide the response and nothing else.
| (Answering your question, but also responding some points raised in paxdiablo's answer.) It is nothing to do with the accountant looking inside your database. With Double entry, errors are easy to trace; it is an Accounting and IRS requirement, so really, you do not have a choice, you need double entry for any system that deals with public funds. (Please do not try to tell me what "double entry" is; I have written double entry systems for banks, to Audit requirements.) Double entry is an accounting method, based on a set of accounts. Every financial transaction is Journal Entry; if all the transactions were re-applied from the beginning, all the accounts would at their exact same balance as they are today. Double Entry means every transaction has a "To" and a "From" account; money never leaves the system or enters the system. Every Credit has a Debit attached to it. Therefore (1) is not the "double entry" version of (2), they cannot be readily compared. The double entry version of John's transaction is (one financial transaction), in logical accounting terms: From: JohnAccount To: SystemAccount Amount: 10.00 (dollars) That may well be two rows in a table, one a credit and the other a debit, the two inserts wrapped in an SQL Transaction. That is it for the Accounting system, which is internal, and deals with money. We are done. But you are additionally marrying the accounting system to a purchase/sale system (without having explicitly declared it). Of course for the ten bucks you took from John, you need to give him whatever he purchased for it, and record that. John bought ten bucks worth of gaming credits, if you are tracking that, then yes, you also need: From: SystemGamingAccount To: JohnGamingAccount Amount: 100 (credits) or,expressed in dollars: From: SystemGamingAccount To: JohnGamingAccount Amount: 10.00 (dollars) That, too, may well be two rows in a table, one a credit and the other a debit, the four inserts wrapped in an SQL Transaction. To be clear, if you were selling widgets instead of gaming credits, the second (widget tracking) transaction would be: From: Warehouse To: PublicSale Amount: 1 (widgets) and since you are tracking Units in the warehouse but not how many widgets John Q Public has in his pocket, that is two inserts plus one update ( UPDATE Part SET QtInStock = QtyInStock - 1 WHERE PartCode = "Widget" ), all wrapped in a SQL transaction. And there IS an Account for each user, right. Virtual, esoteric or physical, it is a Legal Entity, against which transactions are made. So let's not pretend it does not exist because it is virtual. For gaming, one dollar Account plus one gaming (credit) Account. Credit/Debit I would put the CR/DB back in; not CHAR (2), but boolean. It will help you later when the table is large, WHERE IsCredit = 1 is much faster than WHERE Amount >= 0. Note that with ">=" you have to ensure that every code segment is coded the same way, not ">" sometimes. Boolean or char does not have that problem. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4074737', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/103264/']} | jdg_84260 |
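A minimal sketch of the two-row journal entry described above, using SQLite from Python purely for illustration; the table and column names are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE JournalEntry (
    Id INTEGER PRIMARY KEY,
    Account TEXT NOT NULL,
    IsCredit INTEGER NOT NULL,                -- boolean flag, as recommended above
    Amount NUMERIC NOT NULL CHECK (Amount >= 0)
)""")

# One financial transaction = one debit plus one credit, applied atomically.
with con:  # wraps both inserts in a single SQL transaction
    con.execute("INSERT INTO JournalEntry (Account, IsCredit, Amount) "
                "VALUES ('JohnAccount', 0, 10.00)")
    con.execute("INSERT INTO JournalEntry (Account, IsCredit, Amount) "
                "VALUES ('SystemAccount', 1, 10.00)")

# The books balance: signed credits and debits sum to zero.
print(con.execute(
    "SELECT SUM(CASE WHEN IsCredit THEN Amount ELSE -Amount END) "
    "FROM JournalEntry").fetchone())  # -> (0,)
```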
stackexchange | llm_judgeable_groundtruth_similarity | 248113 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My question flows out of this comment on an Andrew Gelman's blog post in which he advocates the use of 50% confidence intervals instead of 95% confidence intervals, although not on the grounds that they are more robustly estimated: I prefer 50% to 95% intervals for 3 reasons: Computational stability, More intuitive evaluation (half the 50% intervals should contain the true value), A sense that in aplications it’s best to get a sense of where the parameters and predicted values will be, not to attempt an unrealistic near-certainty. The commenter's idea seems to be that problems with the assumptions underlying the construction of the confidence interval will have more an impact if it's a 95% CI than if it's a 50% CI. However, he doesn't really explain why. [...] as you go to larger intervals, you become more sensitive in general to details or assumptions of your model. For example, you would never believe that you had correctly identified the 99.9995% interval. Or at least that’s my intuition. If it’s right, it argues that 50-percent should be better estimated than 95-percent. Or maybe “more robustly” estimated, since it is less sensitive to assumptions about the noise, perhaps? Is it true? Why/why not?
Now provide the response and nothing else.
| This answer analyzes the meaning of the quotation and offers the results of a simulation study to illustrate it and help understand what it might be trying to say. The study can easily be extended by anybody (with rudimentary R skills) to explore other confidence interval procedures and other models. Two interesting issues emerged in this work. One concerns how to evaluate the accuracy of a confidence interval procedure. The impression one gets of robustness depends on that. I display two different accuracy measures so you can compare them. The other issue is that although a confidence interval procedure with low confidence may be robust, the corresponding confidence limits might not be robust at all. Intervals tend to work well because the errors they make at one end often counterbalance the errors they make at the other. As a practical matter, you can be pretty sure that around half of your $50\%$ confidence intervals are covering their parameters, but the actual parameter might consistently lie near one particular end of each interval, depending on how reality departs from your model assumptions. Robust has a standard meaning in statistics: Robustness generally implies insensitivity to departures from assumptions surrounding an underlying probabilistic model. (Hoaglin, Mosteller, and Tukey, Understanding Robust and Exploratory Data Analysis . J. Wiley (1983), p. 2.) This is consistent with the quotation in the question. To understand the quotation we still need to know the intended purpose of a confidence interval. To this end, let's review what Gelman wrote. I prefer 50% to 95% intervals for 3 reasons: Computational stability, More intuitive evaluation (half the 50% intervals should contain the true value), A sense that in applications it’s best to get a sense of where the parameters and predicted values will be, not to attempt an unrealistic near-certainty. Since getting a sense of predicted values is not what confidence intervals (CIs) are intended for, I will focus on getting a sense of parameter values, which is what CIs do. Let's call these the "target" values. Whence, by definition, a CI is intended to cover its target with a specified probability (its confidence level). Achieving intended coverage rates is the minimum criterion for evaluating the quality of any CI procedure. (Additionally, we might be interested in typical CI widths. To keep the post to a reasonable length, I will ignore this issue.) These considerations invite us to study how much a confidence interval calculation could mislead us concerning the target parameter value. The quotation could be read as suggesting that lower-confidence CIs might retain their coverage even when the data are generated by a process different than the model. That's something we can test. The procedure is: Adopt a probability model that includes at least one parameter. The classic one is sampling from a Normal distribution of unknown mean and variance. Select a CI procedure for one or more of the model's parameters. An excellent one constructs the CI from the sample mean and sample standard deviation, multiplying the latter by a factor given by a Student t distribution. Apply that procedure to various different models--departing not too much from the adopted one--to assess its coverage over a range of confidence levels. As an example, I have done just that. I have allowed the underlying distribution to vary across a wide range, from almost Bernoulli, to Uniform, to Normal, to Exponential, and all the way to Lognormal. 
These include symmetric distributions (the first three) and strongly skewed ones (the last two). For each distribution I generated 50,000 samples of size 12. For each sample I constructed two-sided CIs of confidence levels between $50\%$ and $99.8\%$, which covers most applications. An interesting issue now arises: How should we measure how well (or how badly) a CI procedure is performing? A common method simply evaluates the difference between the actual coverage and the confidence level. This can look suspiciously good for high confidence levels, though. For instance, if you are trying to achieve 99.9% confidence but you get only 99% coverage, the raw difference is a mere 0.9%. However, that means your procedure fails to cover the target ten times more often than it should! For this reason, a more informative way of comparing coverages ought to use something like odds ratios. I use differences of logits, which are the logarithms of odds ratios. Specifically, when the desired confidence level is $\alpha$ and the actual coverage is $p$, then $$\log\left(\frac{p}{1-p}\right) - \log\left(\frac{\alpha}{1-\alpha}\right)$$ nicely captures the difference. When it is zero, the coverage is exactly the value intended. When it is negative, the coverage is too low--which means the CI is too optimistic and underestimates the uncertainty. The question, then, is how do these error rates vary with confidence level as the underlying model is perturbed? We can answer it by plotting the simulation results. These plots quantify how "unrealistic" the "near-certainty" of a CI might be in this archetypal application. The graphics show the same results, but the one at the left displays the values on logit scales while the one at the right uses raw scales. The Beta distribution is a Beta$(1/30,1/30)$ (which is practically a Bernoulli distribution). The lognormal distribution is the exponential of the standard Normal distribution. The normal distribution is included to verify that this CI procedure really does attain its intended coverage and to reveal how much variation to expect from the finite simulation size. (Indeed, the graphs for the normal distribution are comfortably close to zero, showing no significant deviations.) It is clear that on the logit scale, the coverages grow more divergent as the confidence level increases. There are some interesting exceptions, though. If we are unconcerned with perturbations of the model that introduce skewness or long tails, then we can ignore the exponential and lognormal and focus on the rest. Their behavior is erratic until $\alpha$ exceeds $95\%$ or so (a logit of $3$), at which point the divergence has set in. This little study brings some concreteness to Gelman's claim and illustrates some of the phenomena he might have had in mind. In particular, when we are using a CI procedure with a low confidence level, such as $\alpha=50\%$, then even when the underlying model is strongly perturbed, it looks like the coverage will still be close to $50\%$: our feeling that such a CI will be correct about half the time and incorrect the other half is borne out. That is robust . If instead we are hoping to be right, say, $95\%$ of the time, which means we really want to be wrong only $5\%$ of the time, then we should be prepared for our error rate to be much greater in case the world doesn't work quite the way our model supposes. Incidentally, this property of $50\%$ CIs holds in large part because we are studying symmetric confidence intervals . 
For the skewed distributions, the individual confidence limits can be terrible (and not robust at all), but their errors often cancel out. Typically one tail is short and the other long, leading to over-coverage at one end and under-coverage at the other. I believe that $50\%$ confidence limits will not be anywhere near as robust as the corresponding intervals. This is the R code that produced the plots. It is readily modified to study other distributions, other ranges of confidence, and other CI procedures.

#
# Zero-mean distributions.
#
distributions <- list(Beta=function(n) rbeta(n, 1/30, 1/30) - 1/2,
                      Uniform=function(n) runif(n, -1, 1),
                      Normal=rnorm,
                      #Mixture=function(n) rnorm(n, -2) + rnorm(n, 2),
                      Exponential=function(n) rexp(n) - 1,
                      Lognormal=function(n) exp(rnorm(n, -1/2)) - 1)
n.sample <- 12
n.sim <- 5e4
alpha.logit <- seq(0, 6, length.out=21); alpha <- signif(1 / (1 + exp(-alpha.logit)), 3)
#
# Normal CI.
#
CI <- function(x, Z=outer(c(1,-1), qt((1-alpha)/2, n.sample-1)))
  mean(x) + Z * sd(x) / sqrt(length(x))
#
# The simulation.
#
set.seed(17)
alpha.s <- paste0("alpha=", alpha)
sim <- lapply(distributions, function(dist) {
  x <- matrix(dist(n.sim*n.sample), n.sample)
  x.ci <- array(apply(x, 2, CI), c(2, length(alpha), n.sim),
                dimnames=list(Endpoint=c("Lower", "Upper"), Alpha=alpha.s, NULL))
  covers <- x.ci["Lower",,] * x.ci["Upper",,] <= 0
  rowMeans(covers)
})
(sim)
#
# The plots.
#
logit <- function(p) log(p/(1-p))
colors <- hsv((1:length(sim)-1)/length(sim), 0.8, 0.7)
par(mfrow=c(1,2))
plot(range(alpha.logit), c(-2,1), type="n",
     main="Confidence Interval Accuracies (Logit Scales)", cex.main=0.8,
     xlab="Logit(alpha)", ylab="Logit(coverage) - Logit(alpha)")
abline(h=0, col="Gray", lwd=2)
legend("bottomleft", names(sim), col=colors, lwd=2, bty="n", cex=0.8)
for(i in 1:length(sim)) {
  coverage <- sim[[i]]
  lines(alpha.logit, logit(coverage) - alpha.logit, col=colors[i], lwd=2)
}
plot(range(alpha), c(-0.2, 0.05), type="n",
     main="Raw Confidence Interval Accuracies", cex.main=0.8,
     xlab="alpha", ylab="coverage-alpha")
abline(h=0, col="Gray", lwd=2)
legend("bottomleft", names(sim), col=colors, lwd=2, bty="n", cex=0.8)
for(i in 1:length(sim)) {
  coverage <- sim[[i]]
  lines(alpha, coverage - alpha, col=colors[i], lwd=2)
}

| {} | {'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/248113', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/9162/']} | jdg_84261 |
stackexchange | llm_judgeable_groundtruth_similarity | 319230 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
An application needs to have IIS on the same box as Reporting Services. Today SSRS and SQL Server sit on the same clustered box. I need to evaluate the options, which for me are:

1. Installing IIS on both cluster nodes
2. Installing IIS and Reporting Services on another machine

I need to find out the good and bad points of having IIS along with SQL, and the good and bad points of the other way around. I know the price is high if I add another machine (or VM) just for Reporting Services and IIS, but what are the downsides of having IIS on a SQL machine? PS: All of this is on my intranet, not exposed.
Now provide the response and nothing else.
IIS and SQL Server have no known incompatibilities; they are just two different services which can happily run on the same machine, provided hardware resources are adequate to support both applications at the same time and you secure them properly. Actually, Reporting Services used to run on IIS as a web application: it's only with SQL Server 2008 that it became an application of its own, no longer depending on IIS; before that, Reporting Services required IIS in order to run. Now, about your setup: if you want to run IIS on both cluster nodes, you should be aware that IIS is not designed for clustering, but for load balancing; it can run in an active/passive cluster, but you'll need to take extra steps in order to have it fail over gracefully from one node to another. The exact steps depend on what your application is doing. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/319230', 'https://serverfault.com', 'https://serverfault.com/users/56298/']} | jdg_84262
stackexchange | llm_judgeable_groundtruth_similarity | 24963151 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to figure out if there is any way to pass an index argument into a promise's callback function. For instance:

serviceCall.$promise.then(function(object) {
    $scope.object = object;
});

Now I want to pass in an array index parameter, as in:

serviceCall.$promise.then(function(object, i) {
    $scope.object[i] = something;
});

Can this be done? Please let me know. Here is the code below:

StudyService.studies.get({id: $routeParams.studyIdentifier}).$promise.then(function(study) {
    $scope.study = study;
    for(var i=0;i<study.cases.length;i++){
        StudyService.executionsteps.get({id: $routeParams.studyIdentifier,caseId:study.cases[i].id})
            .$promise.then(function(executionSteps,i){
                $scope.study.cases[i].executionSteps = executionSteps;
            });
    }
});
Now provide the response and nothing else.
you can use a closure for that. for example, in your code, use something like:

function callbackCreator(i) {
    return function(executionSteps) {
        $scope.study.cases[i].executionSteps = executionSteps;
    }
}

StudyService.studies.get({id: $routeParams.studyIdentifier})
    .$promise.then(function(study) {
        $scope.study = study;
        for(var i=0;i<study.cases.length;i++) {
            var callback = callbackCreator(i);
            StudyService.executionsteps.get({id: $routeParams.studyIdentifier,caseId:study.cases[i].id})
                .$promise.then(callback);
        }
}); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24963151', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3799365/']} | jdg_84263
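The loop-variable capture pitfall behind the answer above exists in other languages too, and the fix has the same shape as callbackCreator. A minimal cross-language sketch in Python (my own illustration, not from the original thread):

callbacks = []
for i in range(3):
    # Late binding: every lambda sees the final value of i.
    callbacks.append(lambda: print(i))

for cb in callbacks:
    cb()  # prints 2, 2, 2

def make_callback(i):
    # Like callbackCreator: freeze i in its own scope per iteration.
    return lambda: print(i)

callbacks = [make_callback(i) for i in range(3)]
for cb in callbacks:
    cb()  # prints 0, 1, 2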
stackexchange | llm_judgeable_groundtruth_similarity | 36310264 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given the following classes:

case class AddRequest(x: Int, y: Int)
case class AddResponse(sum: Int)

case class ToUppercaseRequest(str: String)
case class ToUppercaseResponse(upper: String)

How do I define in a typesafe manner some function:

def process(req: ???): ???

Such that the following should hold true:

val r1: AddResponse = process(AddRequest(2, 3))
val r2: ToUppercaseResponse = process(ToUppercaseRequest("aaa"))

Also, the following should not compile:

val r3 = process("somestring")
Now provide the response and nothing else.
This is both entirely possible and a totally reasonable thing to do in Scala. This kind of thing is all over Shapeless, for example, and something similar (but less principled) is the basis of the magnet pattern that shows up in Spray, etc.

Update: note that the following solution assumes that "given the following classes" means you don't want to touch the case classes themselves. If you don't care, see the second part of the answer below.

You'd want a type class that maps input types to output types:

case class AddRequest(x: Int, y: Int)
case class AddResponse(sum: Int)

case class ToUppercaseRequest(str: String)
case class ToUppercaseResponse(upper: String)

trait Processable[In] {
  type Out
  def apply(in: In): Out
}

And then some type class instances:

object Processable {
  type Aux[I, O] = Processable[I] { type Out = O }

  implicit val toUppercase: Aux[ToUppercaseRequest, ToUppercaseResponse] =
    new Processable[ToUppercaseRequest] {
      type Out = ToUppercaseResponse
      def apply(in: ToUppercaseRequest): ToUppercaseResponse =
        ToUppercaseResponse(in.str.toUpperCase)
    }

  implicit val add: Aux[AddRequest, AddResponse] =
    new Processable[AddRequest] {
      type Out = AddResponse
      def apply(in: AddRequest): AddResponse = AddResponse(in.x + in.y)
    }
}

And now you can define process using this type class:

def process[I](in: I)(implicit p: Processable[I]): p.Out = p(in)

Which works as desired (note the appropriate static types):

scala> val res: ToUppercaseResponse = process(ToUppercaseRequest("foo"))
res: ToUppercaseResponse = ToUppercaseResponse(FOO)

scala> val res: AddResponse = process(AddRequest(0, 1))
res: AddResponse = AddResponse(1)

But it doesn't work on arbitrary types:

scala> process("whatever")
<console>:14: error: could not find implicit value for parameter p: Processable[String]
       process("whatever")
              ^

You don't even have to use a path dependent type (you should be able just to have two type parameters on the type class), but it makes using process a little nicer if e.g. you have to provide the type parameter explicitly.

Update: everything above assumes that you don't want to change your case class signatures (which definitely isn't necessary). If you are willing to change them, though, you can do this a little more concisely:

trait Input[Out] {
  def computed: Out
}

case class AddRequest(x: Int, y: Int) extends Input[AddResponse] {
  def computed: AddResponse = AddResponse(x + y)
}
case class AddResponse(sum: Int)

case class ToUppercaseRequest(str: String) extends Input[ToUppercaseResponse] {
  def computed: ToUppercaseResponse = ToUppercaseResponse(str.toUpperCase)
}
case class ToUppercaseResponse(upper: String)

def process[O](in: Input[O]): O = in.computed

And then:

scala> process(AddRequest(0, 1))
res9: AddResponse = AddResponse(1)

scala> process(ToUppercaseRequest("foo"))
res10: ToUppercaseResponse = ToUppercaseResponse(FOO)

Which kind of polymorphism (parametric or ad-hoc) you should prefer is entirely up to you. If you want to be able to describe a mapping between arbitrary types, use a type class. If you don't care, or actively don't want this operation to be available for arbitrary types, use subtyping. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36310264', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/319688/']} | jdg_84264
stackexchange | llm_judgeable_groundtruth_similarity | 780394 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I came across this question and I wasn't sure how to approach it. The question says basically roll a die, what's the probability you get the first 6 on an even throw? I.e., the 2nd, 4th, 6, 8th, etc. die throw. Part of my issue is that obviously you never know when it may come. Here's my guess: The probability of you getting a 6 on your $n$th throw is: $$\left(\frac{5}{6}\right)^{n-1}\cdot\frac{1}{6}$$ Now we want to solve when $n=2k$ where $k$ is a natural number which gives us: $$\left(\frac{5}{6}\right)^{2k-1}\cdot\frac{1}{6}=\left(\frac{5}{6}\right)^{2k}\cdot\left(\frac{5}{6}\right)^{-1}\cdot\frac{1}{6}$$ The obvious guess is $\frac12$, since it seems it would be equally likely to fall on an even vs. odd, though odd should have an edge since the number of odd rolls are always greater than or equal to the even throws, but unsure. $$=\left(\frac{5}{6}\right)^{2k}\cdot\frac{1}{5}=\left(\frac{25}{36}\right)^{k}\cdot\frac{1}{5}$$ This point I have no idea how to proceed, have I been doing this right so far? What's the next step?
Now provide the response and nothing else.
| Your final expression: $$a_k = \frac{1}{5} \cdot\left( \frac{25}{36} \right)^k $$ This correctly gives the probability that your first 6 will occur on the $2k$th roll. So, $a_1$ is the probability that your first 6 is on roll 2; $a_2$ is the probability that your first 6 is on roll 4; etc. In order to find the probability $p$ that your first 6 is on any even roll, we just need to add up all the $a_k$s: $$p = \sum_{k=1}^\infty a_k = \sum_{k=1}^\infty \left[ \frac{1}{5} \cdot \left( \frac{25}{36} \right)^k \right]$$ The formula for the sum of an infinite geometric series with a ratio $r < 1$ (of the form $a_1, a_1 r^1, a_1 r^2, \dots$) is well-known: $$\sum_{k=1}^\infty a_1 r^k = \frac{a_1}{1-r}$$ Comparing the form of an infinite geometric series to the formula we've come up with for $a_k$, we see that $a_1 = (1/5)(25/36)$ and the ratio is $r = (25/36)$. Hence: $$p = \frac{a_1}{1-r} = \frac{25/180}{11/36} = \frac{5}{11}$$ This is close to $1/2$, but not exactly equal to it. One way to intuitively understand why this is is to think of this as a "game" between two players who take turns rolling the die. In each pair of turns, the first player to roll automatically has an advantage, since if they roll a 6, they win, whereas the second player only gets a chance to roll if the first player hasn't already won. This is why it's less likely that you get the first 6 on an even roll than on an odd roll - the even rolls correspond to the "second player", who is at a disadvantage. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/780394', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/147682/']} | jdg_84265 |
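A quick Monte Carlo run corroborates the $5/11 \approx 0.4545$ result derived above; this is a minimal verification sketch in Python (my own addition, not part of the original answer):

import random

random.seed(0)
trials = 1_000_000
even_first_six = 0
for _ in range(trials):
    roll = 1
    while random.randint(1, 6) != 6:
        roll += 1          # keep rolling until the first 6 appears
    if roll % 2 == 0:
        even_first_six += 1

print(even_first_six / trials)  # ~0.4545, i.e. close to 5/11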
stackexchange | llm_judgeable_groundtruth_similarity | 2889782 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have delete links in my Asp.Net Mvc2 application. /{controller}/Delete/{id} It seems using link to delete has a security risk. Don’t use Delete Links because they create Security Holes I found this Implementing RESTful Routes & Controllers in ASP.NET MVC 2.0 but I am not sure how to implement a simple delete functionality using the new HttpDeleteAttribute class. Are there any examples on deleting, the RESTful approach?
Now provide the response and nothing else.
| I see that you want to actually move the expander button into your HeaderTemplate, not just restyle it. This is easily done with FindAncestor: First add a ToggleButton and bind its IsChecked property using FindAncestor, along these lines: <DataTemplate x:Key="MyHeaderTemplate"> <Border ...> <DockPanel> <!-- Expander button --> <ToggleButton IsChecked="{Binding IsExpanded, Mode=TwoWay, RelativeSource={RelativeSource FindAncestor,Header,1}}" Content=... /> <!-- Other content here --> ... </DockPanel> </Border></DataTemplate> This adds an expand button inside the header template but does not hide the original button provided by the Expander. To do this I recommend you replace the Expander's ControlTemplate. Here is a complete copy of Expander's ControlTemplate with the ToggleButton replaced with a simple ContentPresenter: <ControlTemplate x:Key="ExpanderWithoutButton" TargetType="{x:Type Expander}"> <Border BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" Background="{TemplateBinding Background}" CornerRadius="3" SnapsToDevicePixels="true"> <DockPanel> <ContentPresenter Content="{TemplateBinding Header}" ContentTemplate="{TemplateBinding HeaderTemplate}" ContentTemplateSelector="{TemplateBinding HeaderTemplateSelector}" DockPanel.Dock="Top" Margin="1" Focusable="false" /> <ContentPresenter x:Name="ExpandSite" Visibility="Collapsed" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}" Margin="{TemplateBinding Padding}" Focusable="false" /> </DockPanel> </Border> <ControlTemplate.Triggers> <Trigger Property="IsExpanded" Value="true"> <Setter Property="Visibility" Value="Visible" TargetName="ExpandSite"/> </Trigger> <Trigger Property="IsEnabled" Value="false"> <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.GrayTextBrushKey}}"/> </Trigger> </ControlTemplate.Triggers></ControlTemplate> It might be used as follows: <Expander Template="{StaticResource ExpanderWithoutButton}"> <Expander.HeaderTemplate> <DataTemplate ...> <Border ...> <DockPanel> <ToggleButton ... IsChecked="{Binding IsExpanded, Mode=TwoWay, RelativeSource={RelativeSource FindAncestor,Header,1}}" /> ... other header template content here ... A simpler alternative would be to just set a negative margin in yourHeaderTemplate to cover the expander button. Instead of the ControlTemplate shown above, your DataTemplat would just contain something like this: <DataTemplate ...> <Border Margin="-20 0 0 0" ... /> Adjust the negative margin to get the look you want. This solution is simpler but inferior in that if you switch to a different system theme the required margin may change and your expander may no longer look good. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2889782', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/59941/']} | jdg_84266 |
stackexchange | llm_judgeable_groundtruth_similarity | 479739 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When watching a video by Veritasium about the SI units redefinition (5:29) , a claim that the volt and unit of resistance (presumably the ohm) will change by about 1 part in 10 million caught my attention: [ ... ] I should point out that a volt will actually change by about 1 part in 10 million, and resistance will change by a little bit less than that. And that's because back in 1990, the electrical metrologists decided to stop updating their value of, effectively, plancks constant, and just keep the one they had in 1990. And there was a benefit to that: they didn't have to update their definitions or their instruments. [ ... ] Well, now the electrical metrologists will have to change. But, that's a very tiny change for a very tiny number of people. Apparently, the reason is that on 20 May, 2019, redefinitions of SI base units are scheduled to come into force . The kilogram will be redefined using the Planck constant, which, presumably, means that any change in value from the previous definition ( the International Prototype of the Kilogram ) would affect derived units depending on it, including the volt, ohm, farad, henry, siemens, tesla and (formerly) ampere . Will the volt or ohm change, as Veritasium seemingly claims? Are any other electrical units (listed above) affected? If so, exactly how much will they have changed after the redefinition?
Now provide the response and nothing else.
| Late last century electrical standards based on Josephson junctions became common. A Josephson junction together with an atomic clock can give an exquisitely precise voltage standard in terms of the Josephson constant. Unfortunately, the then-current definition of the volt relied on the definition of the SI kilogram, which introduced substantial uncertainty. So we could provide a very precise voltage standard, but because of the imprecise definition of the volt we were not sure how many volts it was. Therefore, in 1990 the community came up with the conventional volt, denoted $V_{90}$ , based on a fixed value of the Josephson constant, $K_{J-90}$ . This conventional unit has served as a more accurate and reproducible standard for voltage since then, however its exact value in terms of SI $V$ was unknown due to the aforementioned lack of precision. https://en.wikipedia.org/wiki/Conventional_electrical_unit With the SI redefinition in a few days $K_J$ will now have an exact value, and that value is slightly different from the exact value assigned to $K_{J-90}$ by the 1990 convention. Therefore, the SI $V$ is also slightly different from the conventional $V_{90}$ . Because both $K_J$ and $K_{J-90}$ are exact, the conversion between SI and conventional volts is also exact and therefore the conventional volt is abrogated. This means that electrical metrologists will need to stop using $V_{90}$ and use $V$ which has a slightly different value but the same precision. In other words, an accurate old 1 $V$ standard was much less precise than an old 1 $V_{90}$ standard, but an accurate new 1 $V$ standard will have the same precision as the abrogated 1 $V_{90}$ standard even though the value is slightly different. So as Veritasium pointed out, it's a very tiny change for a very tiny number of people , although it is not that $V_{90}$ is changing, it is just being abrogated. And the value of $V$ is not changing, it is just gaining precision. Here is a summary of the affected electrical units and the changes being made: Unit Symbol Definition Related to SI SI value (CODATA 2014) SI value (2019) conventional volt V 90 see above $\frac{K_\text{J-90}}{K_J} \text{V}$ 1.000 000 0983(61) V 1.000 000 106 66... V conventional ohm Ω 90 see above $\frac{R_K}{R_\text{K-90}} \text{Ω}$ 1.000 000 017 65(23) Ω 1.000 000 017 79... Ω conventional ampere A 90 V 90 /Ω 90 $\frac{K_\text{J-90}}{K_J} \cdot \frac{R_\text{K-90}}{R_K} \text{A}$ 1.000 000 0806(61) A 1.000 000 088 87... A conventional coulomb C 90 s⋅A 90 = s⋅V 90 /Ω 90 $\frac{K_\text{J-90}}{K_J} \cdot \frac{R_\text{K-90}}{R_K} \text{C}$ 1.000 000 0806(61) C 1.000 000 088 87... C conventional watt W 90 A 90 V 90 = V 90 2 /Ω 90 $\left(\frac{K_\text{J-90}}{KJ}\right)^2 \cdot \frac{R_\text{K-90}}{R_K} \text{W} $ 1.000 000 179(12) W 1.000 000 195 53... W conventional farad F 90 C 90 /V 90 = s/Ω 90 $\frac{R_\text{K-90}}{R_K} \text{F}$ 0.999 999 982 35(23) F 0.999 999 982 20... F conventional henry H 90 s⋅Ω 90 $\frac{R_K}{R_\text{K-90}} \text{H}$ 1.000 000 017 65(23) H 1.000 000 017 79... H From the exact value of $K_{J-90}$ in the link above and the exact value of $e$ and $h$ given here you can calculate that $\frac{K_{J-90}}{K_J} = \frac{ 71207857995393}{71207850400000}$ exactly. For the volt that works out to approximately $1 V_{90} = 1+1.06\times 10^{-7} \; V$ , or ~100PPB. | {} | {'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/479739', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']} | jdg_84267 |
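To make the size of the shift in the answer above concrete, here is the arithmetic behind the ~100 PPB figure, using the exact 2019 SI values $e = 1.602\,176\,634\times10^{-19}\,\mathrm{C}$ and $h = 6.626\,070\,15\times10^{-34}\,\mathrm{J\,s}$ together with the exact conventional value $K_{J\text{-}90} = 483\,597.9\ \mathrm{GHz/V}$:

$$K_J = \frac{2e}{h} = \frac{2 \times 1.602\,176\,634\times10^{-19}}{6.626\,070\,15\times10^{-34}}\ \mathrm{Hz/V} \approx 483\,597.848\,4\ \mathrm{GHz/V}$$

$$\frac{K_{J\text{-}90}}{K_J} \approx \frac{483\,597.9}{483\,597.848\,4} \approx 1 + 1.066\times10^{-7}$$

so $1\ V_{90} \approx (1 + 1.066\times10^{-7})\ V$, consistent with the exact fraction $\frac{71207857995393}{71207850400000}$ quoted in the answer.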
stackexchange | llm_judgeable_groundtruth_similarity | 42318870 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to make a callback between two page.In the page 1, i have this code: DataInfo= [ { Price: 0, ClosePrice: 0, UpdateTime:"", DefaultPrice:0 } ] GetClosePrice(i):number{return DataInfo[i].ClosePrice;} i want to get the value of 'i' from the page 2, How can i load the function GetClosePrice() when the navcontroller return to the page 1 (this.navCtrl.pop())
Now provide the response and nothing else.
SOURCE PAGE CLASS

this.navCtrl.push(Page, {
  data: this.data,
  callback: this.getData
});

getData = data => {
  return new Promise((resolve, reject) => {
    for (let order of orders) {
      this.data = data;
    }
    resolve();
  });
};

TARGET PAGE CLASS

constructor(public navCtrl: NavController, public navParams: NavParams) {
  this.callback = this.navParams.get('callback');
  this.data = this.navParams.get('data') || [];
}

sendData(event: any): void {
  this.callback(this.data).then(() => {
    this.navCtrl.pop()
  });
}

TARGET PAGE TEMPLATE

<button ion-button (click)="sendData($event)"> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42318870', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7554182/']} | jdg_84268
stackexchange | llm_judgeable_groundtruth_similarity | 21133717 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following test program

#include <map>
#include <iostream>

using namespace std;

int main(int argc, char **argv)
{
    map<int,int> a;
    a[1]=a.size();
    for(map<int,int>::const_iterator it=a.begin(); it!=a.end(); ++it)
        cout << "first " << (*it).first << " second " << (*it).second << endl;
}

leads to different output when compiled on g++ 4.8.1 (Ubuntu 12.04 LTS):

g++ xxx.cpp
./a.out
first 1 second 1

and on Visual Studio 2012 (Windows 7) (Standard Win32 Console Application Project):

ConsoleApplication1.exe
first 1 second 0

Which compiler is right? Am I doing something wrong?
Now provide the response and nothing else.
| This is actually a well-formed program that has two equally valid execution paths, so both compilers are right. a[1] = a.size() In this expression, the evaluation of the two operands of = are unsequenced. §1.9/15 [intro.execution] Except where noted, evaluations of operands of individual operators and of subexpressions of individual expressions are unsequenced. However, function calls are not interleaved, so the calls to operator[] and size are actually indeterminately sequenced , rather than unsequenced. §1.9/15 [intro.execution] Every evaluation in the calling function (including other function calls) that is not otherwise specifically sequenced before or after the execution of the body of the called function is indeterminately sequenced with respect to the execution of the called function. This means that the function calls may happen in one of two orders: operator[] then size size then operator[] If a key doesn't exist and you call operator[] with that key, it will be added to the map, thereby changing the size of the map. So in the first case, the key will be added, the size will be retrieved (which is 1 now), and 1 will be assigned to that key. In the second case, the size will be retrieved (which is 0), the key will be added, and 0 will be assigned to that key. Note, this is not a situation that brings about undefined behaviour. Undefined behaviour occurs when two modifications or a modification and a read of the same scalar object are unsequenced. §1.9/15 [intro.execution] If a side effect on a scalar object is unsequenced relative to either another side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined. In this situation, they are not unsequenced but indeterminately sequenced. So what we do have is two equally valid orderings of the execution of the program. Either could happen and both give valid output. This is unspecified behaviour . §1.3.25 [defns.unspecified] unspecified behavior behavior, for a well-formed program construct and correct data, that depends on the implementation So to answer your questions: Which compiler is right? Both of them are. Am I doing something wrong? Probably. It's unlikely that you would want to write code that has two execution paths like this. Unspecified behaviour can be okay, unlike undefined behaviour, because it can be resolved to a single observable output, but it's not worth having in the first place if you can avoid it. Instead, don't write code that has this kind of ambiguity. Depending on what exactly you want correct path to be, you can do either of the following: auto size = a.size();a[1] = size; // value is 0 Or: a[1];a[1] = a.size(); // value is 1 If you want the result to be 1 and you know the key doesn't yet exist, you could of course do the first code but assign size + 1 . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21133717', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3197430/']} | jdg_84269 |
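As a side note to the answer above (my own illustration, not from the original thread): some languages pin this ordering down. In Python, for instance, the right-hand side of an assignment is fully evaluated before the subscript store happens, so the analogous snippet has exactly one outcome:

a = {}
# The RHS len(a) is evaluated first (0), then the key is inserted,
# so this is deterministic, unlike the C++ example above.
a[1] = len(a)
print(a)  # {1: 0}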
stackexchange | llm_judgeable_groundtruth_similarity | 40149119 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Parallel.ForEach loop in my code and I am wondering how to handle exceptions. Should I catch and handle (e.g. write to a log) exceptions inside the loop, or should I catch the aggregate exception outside, i.e. encapsulate the loop in a try/catch?
Now provide the response and nothing else.
| Should I catch and handle exceptions inside the loop or should I catch aggregate exception outside Those two are not functionally equivalent. Both can be done, with different effects. The fundamental question is: when one or more iterations suffer an exception, do you want the remaining items to be processed or not? If yes, then handle them inside the loop, possibly storing them like in the MSDN example . If not then just put a try/catch around the Parallel loop itself. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40149119', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1711499/']} | jdg_84270 |
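The same inside-versus-outside trade-off shows up in other parallel APIs. As a hedged cross-language sketch (C# with Parallel.ForEach is the natural setting, but to keep one language for examples this uses Python's concurrent.futures; the pattern, not the API, is the point):

from concurrent.futures import ThreadPoolExecutor

def work(item):
    if item % 3 == 0:
        raise ValueError(f"failed on {item}")
    return item * item

with ThreadPoolExecutor() as pool:
    futures = {pool.submit(work, i): i for i in range(6)}

# "Handle inside": inspect each result so the remaining items still complete,
# collecting failures much like the MSDN ConcurrentQueue example the answer cites.
errors, results = [], []
for fut, item in futures.items():
    try:
        results.append(fut.result())
    except ValueError as exc:
        errors.append((item, exc))

print(results, errors)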
stackexchange | llm_judgeable_groundtruth_similarity | 8289 |
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
An interesting tidbit floating around the internet these days is that Dr. Duncan MacDougall apparently weighed people shortly before and after death, and found an average of a 21 gram discrepancy, to which he attributed to be the soul departing. As we all know, this certainly can't be the right explanation. Putting aside likely extreme bias and instrumental/experimental error, is there any bodily process at death that could potentially explain a small decrease in observed mass?
Now provide the response and nothing else.
Snopes.com gives a discussion of possible issues with the original experiment: evaporation or bowel or body movements, for instance. I think the biggest complaint about the experiment is that it has not been reproducible and that the original experiment was flawed. MacDougall only took six measurements and he threw two of them out in his original work. The Wikipedia link you give cites a paper where sheep were found to gain weight, at least for a time, after death. Dogs are found to have no weight change. MacDougall and his contemporaries were inclined to interpret their results such that animals simply do not have souls, or have different sorts of souls. But if you look at an average of all experiments cited, it could also be said that the average weight loss upon death averages to zero. 3/4 of an ounce of a human being (or a sheep) over the course of something like a death is not an easy measurement. You can't give them a drink of water, and you worry that they are breathing too much and losing water. It's not an easy experiment, at least for human beings, where you can't control the circumstances of death so well. In any case this really needs to be measured better. In 90 years there doesn't seem to have been a real rush to do it. I just stumbled across a blog post that talks about an effort, popularized in the late 19th century, to measure a change in blood flow to the brain during intense mental activity. The investigator, Mosso, claimed he measured a difference, but the result is disputed today. It could not have been more than a few grams. In any case, it is a description of how difficult this measurement can be. | {} | {'log_upvote_score': 5, 'links': ['https://biology.stackexchange.com/questions/8289', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/307/']} | jdg_84271
stackexchange | llm_judgeable_groundtruth_similarity | 9883113 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I implemented a font system that finds out which letter to use via char switch statements. There are only capital letters in my font image. I need to make it so that, for example, 'a' and 'A' both have the same output. Instead of having 2x the amount of cases, could it be something like the following:

char c;
switch(c){
    case 'a' & 'A': /*get the 'A' image*/; break;
    case 'b' & 'B': /*get the 'B' image*/; break;
    ...
    case 'z' & 'Z': /*get the 'Z' image*/; break;
}

Is this possible in java?
Now provide the response and nothing else.
You can use switch-case fall through by omitting the break; statement.

char c = /* whatever */;
switch(c) {
    case 'a':
    case 'A':
        //get the 'A' image;
        break;
    case 'b':
    case 'B':
        //get the 'B' image;
        break;
    // (...)
    case 'z':
    case 'Z':
        //get the 'Z' image;
        break;
}

...or you could just normalize to lower case or upper case before switching.

char c = Character.toUpperCase(/* whatever */);
switch(c) {
    case 'A':
        //get the 'A' image;
        break;
    case 'B':
        //get the 'B' image;
        break;
    // (...)
    case 'Z':
        //get the 'Z' image;
        break;
} | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/9883113', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1155474/']} | jdg_84272
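For contrast, the same normalize-then-dispatch idea can be written as a table lookup; here is a sketch in Python (my own illustration of the pattern, with a hypothetical per-letter image atlas; the Java Character.toUpperCase route above is the direct answer):

# Hypothetical font atlas: one image per capital letter.
images = {letter: f"{letter}.png" for letter in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"}

def image_for(c: str) -> str:
    # Normalize once instead of duplicating cases for 'a' and 'A'.
    return images[c.upper()]

print(image_for('a'), image_for('A'))  # A.png A.png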
stackexchange | llm_judgeable_groundtruth_similarity | 9983464 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why would you ever use "On Error Goto 0" in a VB6 app? This statement turns the error handler off and would mean that any error would crash the app. Why would this ever be desirable?
Now provide the response and nothing else.
In VB6, you can specify that you want errors to be handled by particular code later in the routine:

Sub Bar()
    On Error Goto MyHandler
    ...
    ...some code that throws an error...
    ...
    Exit Sub
MyHandler:
    ...some error handler code (maybe pops up a dialog)
End Sub

It may be the case, however, that the code that throws the error is localized, and you don't want that same handler for all of the rest of the code in the routine. In that case, you'd use "On Error Goto 0" as follows:

Sub Bar()
    ...
    On Error Goto MyHandler
    ...some code that throws an error...
    On Error Goto 0
    ...
    ...
    Exit Sub
MyHandler:
    ...some error handler code (maybe pops up a dialog)
End Sub

Now you have effectively scoped the error handling to execute only if that particular line of code fails. By calling "On Error Goto 0" you are NOT saying that you want the app to crash immediately. You are simply saying that you want to de-register any error handlers that you may have set up earlier in the routine; errors will be passed up the call stack to calling routines, like normal. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9983464', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/327528/']} | jdg_84273
stackexchange | llm_judgeable_groundtruth_similarity | 58042401 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
According to the docs, one can increase the default async timeout from 5000ms using the Jest object; more specifically, by using jest.setTimeout(timeout). The issue I am facing is that I am running a series of tests against an API that is very slow (5-15 second response times), and configuring this Jest object at the top of each test file is painfully annoying. Is it possible to declare these settings once, before all test files are run?
Now provide the response and nothing else.
OK, putting bits together:

Option "setupTestFrameworkScriptFile" was replaced by configuration "setupFilesAfterEnv", which supports multiple paths:
https://jestjs.io/docs/en/jest-object#jestsettimeouttimeout
https://jestjs.io/docs/en/jest-object#jestdisableautomock

The Jest search box doesn't actually return anything when you search for setupFilesAfterEnv, and the docs talk about setupTestFrameworkScriptFile (which also doesn't return anything in the search :/). Anyway, the docs leave you scratching your head, but this works:

jest.config.js:

module.exports = {
  setupFilesAfterEnv: ['./setup.js'],
};

setup.js:

jest.setTimeout(10000); // in milliseconds

The Jest folks should make it easier to find this information. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/58042401', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_84274
stackexchange | llm_judgeable_groundtruth_similarity | 6677967 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to create a photo library like the existing Photos library on the iPhone. I add an image, chosen from the photo library, to a scroll view. Before adding the image, I resize the selected image and set it on the image view control. But when I compare the added image's quality with the quality in the iPhone Photos library, my control's image is not as good. How do I get the same quality without memory overflow issues?

-(UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
    // Create a bitmap context.
    UIGraphicsBeginImageContext( newSize );
    [image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
    UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Now provide the response and nothing else.
I ran into this issue also. I think you're using an iPhone 4 with Retina Display. Even if you're not, you should account for it. Instead of UIGraphicsBeginImageContext(), use UIGraphicsBeginImageContextWithOptions() and use the scale property of UIScreen for the third argument. All iOS devices have the scale property; on iPhone 4 it's set to 2.0, and on the rest, as I write this, it's set to 1.0. So your code, with those changes, becomes

-(UIImage *)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
    // Create a bitmap context.
    UIGraphicsBeginImageContextWithOptions(newSize, YES, [UIScreen mainScreen].scale);
    [image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
    UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6677967', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/842524/']} | jdg_84275
stackexchange | llm_judgeable_groundtruth_similarity | 1433527 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose that $\sum a_n$ is an absolutely converging complex series and that for all integers $k \ge 1$, $\sum a_n^k =0$. How to prove that the $a_n$ are all equal to $0$? I was looking to extend Newton's identities to infinite number of variables, but I'm not able to write a simple proper proof!
Now provide the response and nothing else.
Assume that the conclusion is wrong, and let $m$ be the smallest index such that $a_m \ne 0$. Without loss of generality we can assume that $m=1$, i.e.

$$ \tag 1 a_1 \ne 0 \, .$$

Let $0 < \varepsilon < \frac 12|a_1|$. $\sum_{n=1}^\infty a_n$ is absolutely convergent, therefore there is an integer $N$ such that

$$ \sum_{n=N+1}^\infty |a_n| < \varepsilon \, ,$$

and in particular $|a_n| < \varepsilon$ for $n > N$. It follows that for all integers $k$,

$$ \sum_{n=N+1}^\infty |a_n|^k \le \sum_{n=N+1}^\infty \varepsilon^{k-1}|a_n| = \varepsilon^{k-1} \sum_{n=N+1}^\infty |a_n| < \varepsilon^k \, .$$

Since $\sum a_n^k = 0$, the $k$-th power sums

$$p_k := p_k(a_1, \ldots, a_N) = \sum_{n=1}^N a_n^k$$

satisfy

$$ \tag 2 |p_k| = \bigl| \sum_{n=N+1}^\infty a_n^k \, \bigr| \le \sum_{n=N+1}^\infty |a_n|^k < \varepsilon^k$$

for $1 \le k \le N$. Now let

$$e_k := e_k(a_1, \ldots, a_N)$$

be the $k$-th elementary symmetric polynomial in the variables $a_1, \ldots, a_N$. Then $e_0 = 1$, and Newton's identities state that

$$\begin{aligned} e_1 &= p_1 \\ 2e_2 &= e_1 p_1 - p_2 \\ 3e_3 &= e_2 p_1 - e_1 p_2 + p_3\end{aligned}$$

and generally

$$ \tag 3 k e_k = \sum_{i=1}^k (-1)^{i-1} e_{k-i} p_i \, \text{ for } k \ge 0 \, .$$

From $(2)$ and $(3)$ it follows easily by induction that the elementary symmetric polynomials satisfy

$$ |e_k| \le \varepsilon^k \text{ for } 0 \le k \le N \, .$$

Now define

$$P(x) = (x-a_1)(x-a_2) \cdots (x-a_N) = x^N - e_1 x^{N-1} + e_2 x^{N-2} \cdots \pm e_N \, .$$

Then $P(a_1) = 0$ and therefore $r := |a_1|$ satisfies

$$ r^N \le |e_1| r^{N-1} + |e_2| r^{N-2} + \cdots + |e_N| \\ \le \varepsilon r^{N-1} + \varepsilon^2 r^{N-2} + \cdots + \varepsilon^N \\ = \varepsilon r^{N-1} \bigl( 1 + \frac{\varepsilon}{r} + \cdots + (\frac{\varepsilon}{r})^{N-1} \bigr)$$

or

$$1 \le \frac{\varepsilon}{r} \bigl( 1 + \frac{\varepsilon}{r} + \cdots + (\frac{\varepsilon}{r})^{N-1} \bigr) \, .$$

$\varepsilon$ was chosen such that $0 < \frac \varepsilon r < \frac 12$, therefore

$$1 < \frac 12 \bigl( 1 + \frac 12 + \cdots + (\frac 12)^{N-1} \bigr) < 1$$

which is a contradiction. So the initial assumption is wrong, and it is proven that all $a_n$ are zero. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1433527', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/187663/']} | jdg_84276
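As a numerical sanity check on the Newton's identities step $(3)$ used above, here is a short Python sketch (my own verification code, not part of the proof):

import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
a = rng.normal(size=5) + 1j * rng.normal(size=5)   # arbitrary complex a_1..a_N

def p(k):   # power sums p_k
    return np.sum(a**k)

def e(k):   # elementary symmetric polynomials e_k
    if k == 0:
        return 1.0 + 0j
    return sum(np.prod(c) for c in combinations(a, k))

for k in range(1, 6):
    lhs = k * e(k)
    rhs = sum((-1)**(i - 1) * e(k - i) * p(i) for i in range(1, k + 1))
    assert abs(lhs - rhs) < 1e-9, (k, lhs, rhs)
print("Newton's identities verified for k = 1..5")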
stackexchange | llm_judgeable_groundtruth_similarity | 11100878 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm creating an application using fabric.js, and experiencing really strange behavior. I'm adding both images and text instances, using the regular fabric.Canvas (not fabric.StaticCanvas), and am unable to move or resize these items after addition. I've only added a couple of each type. The basic functionality is wrapped in a callback tied to a click event on some button, but the core fabric functionality looks something like this (simplified, but almost everything is the same as far as fabric-related code goes):

<canvas id="myCanvas" height=600 width=800></canvas>
<input id="addStuff" type="button" value="Add!" />
....
var canvas = new fabric.Canvas('myCanvas');
canvas.renderOnAddition = true;
$(function(){
    $('#addStuff').on('click', function(e) {
        var text = new fabric.Text("Hello, world", {
            top : 100,
            left : 100,
            fontSize : 12,
            fontFamily : 'Delicious_500'
        });
        text.hasControls = false;
        canvas.add(text);
        fabric.Image.fromURL('./img/pic.png', function(img) {
            var imgX = img.set({
                top: 200,
                left: 100,
            });
            canvas.add(imgX);
        });
        canvas.renderAll();
    });
});

The above code renders the elements well, but they cannot be resized or moved around, and act static. The strange thing is that if I open and close my Chrome dev panel (using F12 or [Ctrl/CMD]+SHIFT+I twice), the canvas regains functionality and everything works again. I have some server-side stuff happening as well (this is only a mock sample of what I'm doing), but I have no idea how to trace this odd behavior down any further. I'm using the very latest code (built from the github repo). Thoughts?
Now provide the response and nothing else.
| It's possible that canvas internal offset is not updated properly. Something on the page could change dimensions/position, making canvas positioned at a different location than the one it was during initialization. Try calling canvas.calcOffset() after renderAll() call, or after adding those objects. The reason calcOffset is not called in renderAll is for performance reasons. renderAll is a redraw of an entire canvas. It happens every time something changes on canvas and canvas needs to be redrawn. As you can imagine, it gets called quite often (sometimes dozens of times per second), and it would be pretty wasteful to calculate new offset every time redraw happens. Consider also that in most cases canvas position does not change throughout page life. It probably happens pretty rarely. Mainly during page load. In highly-dynamic environment — where canvas changes position often — you can solve offset "problem" by explicitly calling calcOffset() . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11100878', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/187907/']} | jdg_84277 |
stackexchange | llm_judgeable_groundtruth_similarity | 307672 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
So here is what happened: the customer gave me a list of things he wanted to have. It is a funding website, so to start a funding project he wanted to have certain fields in the "create/edit funding project" part. He gave me a document on what should get implemented. I estimated around 25 hours for it and made a price tag for that milestone, but am now at 66 hours, and according to him it is not done yet. An example: we changed an existing WordPress theme. He wanted a funding project initiator name for each project, so I estimated 0.5 hours for adding a field 'project initiator name'. For me, that meant adding the field and adding it to the "create/edit funding project" page; the document did not mention adding it anywhere else. Now he has filed a "bug" that this new field does not change the display in "My account" / "My profile". He added other things that were not in the feature list as well. How do you create contracts for milestones that don't give the customer a wrong understanding of what is included, without me having to spell out everything that is not included?
Now provide the response and nothing else.
| Software is never done. That's why it's soft. If you sold this project as something that would be defect and maintenance free you're living in a hell of your own making. A lack of defect tickets is not a milestone. It's a temporary anomaly (that may indicate something is wrong with the ticketing system). You should be working towards the previously agreed milestones. Doing anything beyond that is not only hurting you it's hurting your customer who now thinks they have to hound you over every little thing when what they need is a working web site. You understand this better then your customer does. It's your responsibility to make it happen. Your contract should have excluded maintenance and defect corrections. If the site can be made to work then you've got your milestones. You drive this point home by showing them what maintenance and defect corrections are going to cost. Don't underestimate this just to make work for yourself. This isn't real work. This isn't even charity. You're hurting your customer. When you estimate the work make the scope clear and the price clear. Stick to the scope and the price. If there is any doubt be very conservative. Taking longer than estimated to get it "just right" is unprofessional. Sometimes you get surprised. It happens. But as soon as it happens revise your estimate and seek new approval. Do not work without approval. Do not "try harder". The very moment your estimate is in the slightest doubt revise it. This is behaving professionally. Customers like predictability. So much they will pay you to take an hour to do a 5 minute job. Take 65 minutes to do it though and don't expect them to hire you again. You say this is a fixed price contract. Doesn't matter. Time is money. Also, you are a very valuable resource to your customer so working yourself raw over trivia is also not doing them any favors. Take care of yourself. Take care of your customer. If you understand this and behave like this then when problems crop up that you think will take "0.5 hours" to fix you can answer them with a confident, "That will require a new agreement. I'll allow a day for it and I'll need you to be available just to be sure it's resolved to your satisfaction." Do that every time some little thing comes up and you'll see the little things become fewer and fewer and a bit more important. Many programmers think of their customers as slave drivers cracking a whip demanding, "Faster! Faster!" This is all wrong. Think of them as a little kid in the back seat of an airplane that you are piloting as they squeal, "Faster! Faster!" It's your job to land them safely. You can laugh and dip your wings. But mind your air speed. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/307672', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/29473/']} | jdg_84278 |
stackexchange | llm_judgeable_groundtruth_similarity | 6061996 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using redis-rb in a Rails app, the following doesn't work:

irb> keys = $redis.keys("autocomplete*")
=> ["autocomplete_foo", "autocomplete_bar", "autocomplete_bat"]
irb> $redis.del(keys)
=> 0

This works fine:

irb> $redis.del("autocomplete_foo", "autocomplete_bar")
=> 2

Am I missing something obvious? The source is just:

# Delete a key.
def del(*keys)
  synchronize do
    @client.call [:del, *keys]
  end
end

which looks to me like it should work to pass it an array...?
Now provide the response and nothing else.
| A little coding exploration of the way the splat operator works: def foo(*keys) puts keys.inspectend>> foo("hi", "there")["hi", "there"]>> foo(["hi", "there"])[["hi", "there"]]>> foo(*["hi", "there"])["hi", "there"] So passing in a regular array will cause that array to be evaluated as a single item, so that you get an array inside an array within your method. If you preface the array with * when you call the method: $redis.del(*keys) That lets the method know to unpack it/not to accept any further arguments. So that should solve the problem that you're having! Just for the sake of further clarification, this works: >> foo("hello", *["hi", "there"]) This causes a syntax error: >> foo("hello", *["hi", "there"], "world") | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6061996', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/469544/']} | jdg_84279 |
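The splat mechanics the answer demonstrates in Ruby carry over almost verbatim to Python's *args; here is an equivalent sketch (my own cross-language illustration):

def foo(*keys):
    print(keys)

foo("hi", "there")       # ('hi', 'there')
foo(["hi", "there"])     # (['hi', 'there'],)  -- the list arrives as one item
foo(*["hi", "there"])    # ('hi', 'there')     -- unpacked, like Ruby's del(*keys)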
stackexchange | llm_judgeable_groundtruth_similarity | 12946521 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm plotting the same data in two different formats: log scale and linear scale. Basically I want to have exactly the same plot, but with different scales, one on top of the other. What I have right now is this:

import matplotlib.pyplot as plt

# These are the plot 'settings'
plt.xlabel('Size')
plt.ylabel('Time(s)');
plt.title('Matrix multiplication')
plt.xticks(xl, rotation=30, size='small')
plt.grid(True)

# Settings are ignored when using two subplots
plt.subplot(211)
plt.plot(xl, serial_full, 'r--')
plt.plot(xl, acc, 'bs')
plt.plot(xl, cublas, 'g^')

plt.subplot(212)
plt.yscale('log')
plt.plot(xl, serial_full, 'r--')
plt.plot(xl, acc, 'bs')
plt.plot(xl, cublas, 'g^')

All 'settings' before plt.subplot are ignored. I can get this to work the way I want, but I have to duplicate all the settings after each subplot declaration. Is there a way to configure both subplots at once?
Now provide the response and nothing else.
| The plt.* settings usually apply to matplotlib's current plot; with plt.subplot , you're starting a new plot, hence the settings no longer apply to it. You can share labels, ticks, etc., by going through the Axes objects associated with the plots ( see examples here ), but IMHO this would be overkill here. Instead, I would propose putting the common "styling" into one function and call that per plot: def applyPlotStyle(): plt.xlabel('Size') plt.ylabel('Time(s)'); plt.title('Matrix multiplication') plt.xticks(range(100), rotation=30, size='small') plt.grid(True)plt.subplot(211)applyPlotStyle()plt.plot(xl, serial_full, 'r--')plt.plot(xl, acc, 'bs')plt.plot(xl, cublas, 'g^')plt.subplot(212)applyPlotStyle()plt.yscale('log')plt.plot(xl, serial_full, 'r--')plt.plot(xl, acc, 'bs')plt.plot(xl, cublas, 'g^') On a side note, you could root out more duplication by extracting your plot commands into such a function: def applyPlotStyle(): plt.xlabel('Size') plt.ylabel('Time(s)'); plt.title('Matrix multiplication') plt.xticks(range(100), rotation=30, size='small') plt.grid(True)def plotSeries(): applyPlotStyle() plt.plot(xl, serial_full, 'r--') plt.plot(xl, acc, 'bs') plt.plot(xl, cublas, 'g^')plt.subplot(211)plotSeries()plt.subplot(212)plt.yscale('log')plotSeries() On another side note, it might suffice to put the title at the top of the figure (instead of over each plot), e.g., using suptitle . Similary, it might be sufficient for the xlabel to only appear beneath the second plot: def applyPlotStyle(): plt.ylabel('Time(s)'); plt.xticks(range(100), rotation=30, size='small') plt.grid(True)def plotSeries(): applyPlotStyle() plt.plot(xl, serial_full, 'r--') plt.plot(xl, acc, 'bs') plt.plot(xl, cublas, 'g^')plt.suptitle('Matrix multiplication')plt.subplot(211)plotSeries()plt.subplot(212)plt.yscale('log')plt.xlabel('Size')plotSeries()plt.show() | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12946521', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1030529/']} | jdg_84280 |
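For what it's worth, a more modern alternative to the helper-function approach in the answer above is matplotlib's object-oriented API; a minimal sketch (my own addition, with stand-in data since the question's xl, serial_full, acc and cublas aren't defined here):

import matplotlib.pyplot as plt

xl = range(5)                       # stand-ins for the question's data
serial_full = [1, 2, 4, 8, 16]
acc = [1, 1.5, 2.5, 4, 7]
cublas = [1, 1.2, 1.8, 2.5, 4]

fig, (ax_lin, ax_log) = plt.subplots(2, 1, sharex=True)
fig.suptitle('Matrix multiplication')

for ax in (ax_lin, ax_log):
    # Common styling applied to each Axes object explicitly, once.
    ax.set_ylabel('Time(s)')
    ax.grid(True)
    ax.plot(xl, serial_full, 'r--')
    ax.plot(xl, acc, 'bs')
    ax.plot(xl, cublas, 'g^')

ax_log.set_yscale('log')
ax_log.set_xlabel('Size')
plt.show()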
stackexchange | llm_judgeable_groundtruth_similarity | 3447435 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following session. How are the differences explained? I thought that a += b is a syntactical sugar of (and thus equivalent to) a = a + b. Obviously I'm wrong.

>>> import numpy as np
>>> a = np.arange(24.).reshape(4,6)
>>> print a
[[  0.   1.   2.   3.   4.   5.]
 [  6.   7.   8.   9.  10.  11.]
 [ 12.  13.  14.  15.  16.  17.]
 [ 18.  19.  20.  21.  22.  23.]]
>>> for line in a:
...     line += 100
...
>>> print a  #a has been changed
[[ 100.  101.  102.  103.  104.  105.]
 [ 106.  107.  108.  109.  110.  111.]
 [ 112.  113.  114.  115.  116.  117.]
 [ 118.  119.  120.  121.  122.  123.]]
>>>
>>> for line in a:
...     line = line + 999
...
>>> print a  #a hasn't been changed
[[ 100.  101.  102.  103.  104.  105.]
 [ 106.  107.  108.  109.  110.  111.]
 [ 112.  113.  114.  115.  116.  117.]
 [ 118.  119.  120.  121.  122.  123.]]

Thank you
Now provide the response and nothing else.
| Using the + operator results in a call to the special method __add__ which should create a new object and should not modify the original. On the other hand, using the += operator results in a call to __iadd__ which should modify the object if possible rather than creating a new object. __add__ These methods are called to implement the binary arithmetic operations (+, -, *, //, %, divmod(), pow(), **, <<, >>, &, ^, |). For instance, to evaluate the expression x + y, where x is an instance of a class that has an __add__() method, x.__add__(y) is called. __iadd__ These methods are called to implement the augmented arithmetic assignments (+=, -=, *=, /=, //=, %=, **=, <<=, >>=, &=, ^=, |=). These methods should attempt to do the operation in-place (modifying self) and return the result (which could be, but does not have to be, self). Of course it is possible to implement __add__ and __iadd__ to have some other behaviour if you wanted to, but what you observe is the standard and recommended way. And, yes, it is a little surprising the first time you see it. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3447435', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17523/']} | jdg_84281 |
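A tiny class makes the __add__ versus __iadd__ dispatch described above directly observable; a minimal sketch (my own illustration):

class Box:
    def __init__(self, items):
        self.items = items

    def __add__(self, other):
        # Called for b + x: build and return a new object.
        return Box(self.items + other)

    def __iadd__(self, other):
        # Called for b += x: mutate in place and return self.
        self.items.extend(other)
        return self

b = Box([1, 2])
c = b + [3]      # __add__: b is untouched, c is a new Box
b += [4]         # __iadd__: b itself is modified, no new object
print(b.items, c.items)   # [1, 2, 4] [1, 2, 3]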
stackexchange | llm_judgeable_groundtruth_similarity | 31078447 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Like in the image, the Facebook comment box has no submit button; when you write something and press the Enter key, the comment is posted. I want to submit the comment via JavaScript running in the console, but I tried to trigger the Enter event and the submit event of the DOM and could not make it work.
Now provide the response and nothing else.
The current comment boxes aren't a traditional <textarea> inside of a <form>. They're using the contenteditable attribute on a div. In order to submit in this scenario, you'd want to listen to one of the keyboard events (keydown, keypress, keyup) and look for the Enter key which is keycode 13. Looks like FB is listening to the keydown evt in this case, so when I ran this code I was able to fake submit a comment:

function fireEvent(type, element) {
    var evt;
    if(document.createEvent) {
        evt = document.createEvent("HTMLEvents");
        evt.initEvent(type, true, true);
    } else {
        evt = document.createEventObject();
        evt.eventType = type;
    }
    evt.eventName = type;
    evt.keyCode = 13;
    evt.which = 13;
    if(document.createEvent) {
        element.dispatchEvent(evt);
    } else {
        element.fireEvent("on" + evt.eventType, evt);
    }
}

fireEvent('keydown', document.querySelector('[role="combobox"]._54-z span span'));

A couple of things to note about this. The class ._54-z was a class they just happened to use on my page. Your mileage may vary. Use dev tools to make sure you grab the right element (it should have the aria role "combobox"). Also, if you're looking to support older browsers, you're going to have to tweak the fireEvent function code above. I only tested the above example in the latest Chrome. Finally, to complicate matters on your end, Facebook is using React which creates a virtual DOM representation of the current page. If you're manually typing in the characters into the combobox and then run the code above, it'll work as expected. But you will not be able to set the combobox's innermost <span>'s innerHTML to what you're looking to do and then trigger keydown. You'll likely need to trigger the change event on the combobox to ensure your message persists to the Virtual DOM. That should get you started! Hope that helps! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31078447', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2095705/']} | jdg_84282
stackexchange | llm_judgeable_groundtruth_similarity | 59473690 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a terraform config which creates an AWS IAM user with an access key, and I assign both id and secret to output variables:

...
resource "aws_iam_access_key" "brand_new_user" {
  user = aws_iam_user.brand_new_user.name
}

output "brand_new_user_id" {
  value = aws_iam_access_key.brand_new_user.id
}

output "brand_new_user_secret" {
  value     = aws_iam_access_key.brand_new_user.encrypted_secret
  sensitive = true
}

Here brand_new_user_secret is declared as sensitive, so terraform output obviously does not print it. Is there any way to get its output value without parsing the whole state file? Trying to access it directly (terraform output brand_new_user_secret) does not work (results in an error "The output variable requested could not be found in the state file..."). Terraform version: 0.12.18
Now provide the response and nothing else.
I had some hopes to avoid it, but so far I did not find a better way than parse terraform state:

terraform state pull | jq '.resources[] | select(.type == "aws_iam_access_key") | .instances[0].attributes'

which would result in a structure similar to:

{
  "encrypted_secret": null,
  "id": "....",
  "key_fingerprint": null,
  "pgp_key": null,
  "secret": "....",
  "ses_smtp_password": "....",
  "ses_smtp_password_v4": null,
  "status": "Active",
  "user": "...."
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59473690', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2447082/']} | jdg_84283
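If you'd rather not depend on jq, the same filter can be done in a few lines of Python; a minimal sketch (my own addition, assuming the state layout shown in the answer's jq output and a terraform binary on PATH):

import json
import subprocess

# Pull the raw state exactly as the jq pipeline above does.
state = json.loads(
    subprocess.run(["terraform", "state", "pull"],
                   check=True, capture_output=True).stdout
)

for res in state.get("resources", []):
    if res.get("type") == "aws_iam_access_key":
        attrs = res["instances"][0]["attributes"]
        print(attrs["id"], attrs["secret"])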
stackexchange | llm_judgeable_groundtruth_similarity | 386653 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The notion of trace of a matrix can be generalized to trace of an endomorphism of a dualizable objects in a symmetric monoidal category. (See Ponto & Shulman for a nice description.) Is there a categorification of the notion of determinant as well? If it exists, where can I read about it? If it doesn't exist, what is the conceptual obstruction to it and what is special about the trace that makes it amenable to categorification in such generality?
Now provide the response and nothing else.
| There is a notion of determinant functor; they were introduced for abelian and exact categories by P. Deligne in his paper "Le déterminant de la cohomologie" ( https://publications.ias.edu/sites/default/files/Number58.pdf ). There is an extension to categories of bounded complexes by F. Knudsen and D. Mumford. More recently there are also versions for triangulated categories; cf. the paper by M. Breuning, "Determinant functors on triangulated categories", and also F. Muro, A. Tonks and M. Witte, "On determinant functors and K-theory". | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/386653', 'https://mathoverflow.net', 'https://mathoverflow.net/users/176076/']} | jdg_84284 |
stackexchange | llm_judgeable_groundtruth_similarity | 10170009 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I capture the image using the image picker; at that time the image is in its correct orientation, but after uploading it to the server it becomes horizontal. I have no clue why that is happening. Here is my code:

    NSString *urlString = @"Url";
    NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
    [request setURL:[NSURL URLWithString:urlString]];
    [request setHTTPMethod:@"POST"];

    NSMutableData *body = [NSMutableData data];
    NSString *boundary = [NSString stringWithString:@"---------------------------14737809831466499882746641449"];
    NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@", boundary];
    [request addValue:contentType forHTTPHeaderField:@"Content-Type"];

    // file
    NSData *imageData = UIImageJPEGRepresentation(image.image, 90);
    [body appendData:[[NSString stringWithFormat:@"--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];
    // [body appendData:[[NSString stringWithString:@"Content-Disposition: attachment; name=\"user_photo\"; filename=\"photoes.jpg\"\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
    [body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"image\"; filename=\"%@.jpg\"\r\n",@"ImageNmae"] dataUsingEncoding:NSUTF8StringEncoding]];
    [body appendData:[[NSString stringWithString:@"Content-Type: application/octet-stream\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
    [body appendData:[NSData dataWithData:imageData]];
    [body appendData:[[NSString stringWithString:@"\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];

    [body appendData:[[NSString stringWithFormat:@"--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];
    [body appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"desc\"\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
    [body appendData:[[NSString stringWithString:ImageDesc.text] dataUsingEncoding:NSUTF8StringEncoding]];
    [body appendData:[[NSString stringWithString:@"\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];

    // close form
    [body appendData:[[NSString stringWithFormat:@"--%@--\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];

    // set request body
    [request setHTTPBody:body];

    // return and test
    NSData *returnData = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
    NSString *returnString = [[NSString alloc] initWithData:returnData encoding:NSUTF8StringEncoding];
Now provide the response and nothing else.
| I found the solution; here is the code:

    - (UIImage *)scaleAndRotateImage:(UIImage *)image {
        // here we rotate the image to its original orientation
        int kMaxResolution = 640; // Or whatever

        CGImageRef imgRef = image.CGImage;
        CGFloat width = CGImageGetWidth(imgRef);
        CGFloat height = CGImageGetHeight(imgRef);

        CGAffineTransform transform = CGAffineTransformIdentity;
        CGRect bounds = CGRectMake(0, 0, width, height);
        if (width > kMaxResolution || height > kMaxResolution) {
            CGFloat ratio = width / height;
            if (ratio > 1) {
                bounds.size.width = kMaxResolution;
                bounds.size.height = roundf(bounds.size.width / ratio);
            } else {
                bounds.size.height = kMaxResolution;
                bounds.size.width = roundf(bounds.size.height * ratio);
            }
        }

        CGFloat scaleRatio = bounds.size.width / width;
        CGSize imageSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef));
        CGFloat boundHeight;
        UIImageOrientation orient = image.imageOrientation;
        switch (orient) {
            case UIImageOrientationUp: //EXIF = 1
                transform = CGAffineTransformIdentity;
                break;
            case UIImageOrientationUpMirrored: //EXIF = 2
                transform = CGAffineTransformMakeTranslation(imageSize.width, 0.0);
                transform = CGAffineTransformScale(transform, -1.0, 1.0);
                break;
            case UIImageOrientationDown: //EXIF = 3
                transform = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
                transform = CGAffineTransformRotate(transform, M_PI);
                break;
            case UIImageOrientationDownMirrored: //EXIF = 4
                transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
                transform = CGAffineTransformScale(transform, 1.0, -1.0);
                break;
            case UIImageOrientationLeftMirrored: //EXIF = 5
                boundHeight = bounds.size.height;
                bounds.size.height = bounds.size.width;
                bounds.size.width = boundHeight;
                transform = CGAffineTransformMakeTranslation(imageSize.height, imageSize.width);
                transform = CGAffineTransformScale(transform, -1.0, 1.0);
                transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
                break;
            case UIImageOrientationLeft: //EXIF = 6
                boundHeight = bounds.size.height;
                bounds.size.height = bounds.size.width;
                bounds.size.width = boundHeight;
                transform = CGAffineTransformMakeTranslation(0.0, imageSize.width);
                transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
                break;
            case UIImageOrientationRightMirrored: //EXIF = 7
                boundHeight = bounds.size.height;
                bounds.size.height = bounds.size.width;
                bounds.size.width = boundHeight;
                transform = CGAffineTransformMakeScale(-1.0, 1.0);
                transform = CGAffineTransformRotate(transform, M_PI / 2.0);
                break;
            case UIImageOrientationRight: //EXIF = 8
                boundHeight = bounds.size.height;
                bounds.size.height = bounds.size.width;
                bounds.size.width = boundHeight;
                transform = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
                transform = CGAffineTransformRotate(transform, M_PI / 2.0);
                break;
            default:
                [NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];
        }

        UIGraphicsBeginImageContext(bounds.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        if (orient == UIImageOrientationRight || orient == UIImageOrientationLeft) {
            CGContextScaleCTM(context, -scaleRatio, scaleRatio);
            CGContextTranslateCTM(context, -height, 0);
        } else {
            CGContextScaleCTM(context, scaleRatio, -scaleRatio);
            CGContextTranslateCTM(context, 0, -height);
        }
        CGContextConcatCTM(context, transform);

        CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), imgRef);
        UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return imageCopy;
    }

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10170009',
'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1091020/']} | jdg_84285 |
stackexchange | llm_judgeable_groundtruth_similarity | 561263 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to model the change in rate of an event in a sampled population. The populations are grouped according to some grouping variable. Each individual observation is of one organism, and the sample can return either positive or negative (+/-). My resulting parameters are:

- number of positive samples (+)
- total number tested per grouping variable

Should this be modeled using a Poisson or binomial distribution? My sample parameters are meant to estimate the true rate value in an overall larger population.
Now provide the response and nothing else.
| As in this question, in typical count data situations, the rule of thumb is:

- if the count has a ceiling/maximum value set by the experimental design, model the response as binomial (or some over/underdispersed variant: quasibinomial, beta-binomial, observation-level random effect ...)
- if there is no well-defined limit (e.g. the number of trees in a 1-hectare plot can't be infinite, but we can't typically quantify the number of available "tree sites" that are available), then use a Poisson response (or some variant: quasi-Poisson, negative binomial, generalized Poisson, COM-Poisson ...)
- if the count has a maximum value but the proportion of the maximum is always small (e.g. the number of cancer cases in a county), then a binomial and a Poisson with a log-offset term to scale the maximum value will give nearly identical results, and it's a matter of computational convenience.

If you want a literature reference, I have one for you :-) : Bolker, Benjamin M. Ecological Models and Data in R, Princeton University Press, 2008.

p. 161: You should only use the binomial in fitting data when there is an upper limit to the number of possible successes. When $N$ is large and $p$ is small, so that the probability of getting $N$ successes is small, the binomial approaches the Poisson distribution, which is covered in the next section (Figure 4.17). Examples: number of surviving individuals/nests out of an initial sample; number of infested/infected animals, fruits, etc. in a sample; number of a particular class (haplotype, subspecies, etc.) in a larger population.

p. 163: The Poisson distribution has no upper limit, although values much larger than the mean value are highly improbable. This characteristic provides a rule for choosing between the binomial and Poisson. If you expect to observe a “ceiling” on the number of counts, you should use the binomial; if you expect the number of counts to be effectively unlimited, even if it is theoretically bounded (e.g. there can’t really be an infinite number of plants in your sampling quadrat), use the Poisson. Examples: number of seeds/seedlings falling in a gap; number of offspring produced in a season (although this might be better fit by a binomial if the number of breeding attempts is fixed); number of prey caught per unit time.

PS it turns out I am quite consistent: I wrote this answer without looking at the examples I used in 2008, and came up with almost the same wording and examples ... (A small numerical illustration of the binomial-Poisson convergence is sketched below.) | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/561263', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/267286/']} | jdg_84286 |
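To see the third bullet point in action numerically, here is a tiny R sketch (the numbers are arbitrary illustrations, not from the answer): with a large ceiling $N$ and a small $p$, the binomial and the Poisson with the same mean are practically the same distribution.

    # Binomial(N, p) versus Poisson(N * p) when N is large and p is small
    N <- 10000
    p <- 0.001
    x <- 0:30
    # largest pointwise difference between the two probability mass functions;
    # it comes out tiny (on the order of 1e-5), i.e. near-identical fits
    max(abs(dbinom(x, N, p) - dpois(x, N * p)))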
stackexchange | llm_judgeable_groundtruth_similarity | 18841044 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an issue with the google mock EXPECT_CALL macro. The following code gives a compilation error on the EXPECT_CALL statement:

    error C2660: 'testing::Eq' : function does not take 1 arguments
    \gmock-1.6.0\include\gmock\gmock-matchers.h

Basically I have a base container and a base data object for that container, both abstract, and a cache which has a pointer to the base container and an Add method that takes a reference to the base data object. I have created a basic program to demonstrate the issue. Thanks a lot if anyone can help.

    #include "gtest/gtest.h"
    #include "gmock/gmock.h"

    namespace
    {
    class BaseData
    {
    public:
        virtual void SetValue(const int value) = 0;
    };

    class BaseContainer
    {
    public:
        virtual void Add(const BaseData& data) = 0;
    };

    class MockContainer : public BaseContainer
    {
    public:
        MOCK_METHOD1(Add, void (const BaseData& data));
    };

    class MockData : public BaseData
    {
    public:
        MOCK_METHOD1(SetValue, void (int));
    };

    class Cache
    {
    private:
        BaseContainer* container;

    public:
        Cache(BaseContainer* c)
        {
            container = c;
        }

        ~Cache()
        {
        }

        void AddToContainer(const BaseData& data)
        {
            container->Add(data);
        }
    };

    class CacheTestFixture : public ::testing::Test
    {
    protected:
        CacheTestFixture() {}
        virtual ~CacheTestFixture() {}
        virtual void SetUp() {}
        virtual void TearDown() {}
    };

    TEST_F(CacheTestFixture, TestAdd)
    {
        MockData data;
        MockContainer container;

        EXPECT_CALL(container, Add(data)).WillRepeatedly(::testing::Return());

        Cache c(&container);
        ASSERT_NO_THROW(c.AddToContainer(data));
    }
    }

    int _tmain(int argc, _TCHAR* argv[])
    {
        ::testing::InitGoogleMock(&argc, argv);
        return RUN_ALL_TESTS();
    }
Now provide the response and nothing else.
| Match the argument by reference instead:

    EXPECT_CALL(container, Add(testing::Ref(data))).WillRepeatedly(::testing::Return());

To pass the mock as a base-class reference, testing::Eq would require the == operator to be implemented on the abstract base class, which is not desirable; testing::Ref avoids that by matching on identity (the same object) rather than on value. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18841044', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1778022/']} | jdg_84287 |
stackexchange | llm_judgeable_groundtruth_similarity | 7334595 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
From the question: Is it good programming practice to use setjmp and longjmp in C? Two of the comments left said: "You can't throw an exception in a signal handler, but you can do a longjmp safely -- as long as you know what you are doing. – Dietrich Epp Aug 31 at 19:57 @Dietrich: +1 to your comment. This is a little-known and completely-under-appreciated fact. There are a number of problems that cannot be solved (nasty race conditions) without using longjmp out of signal handlers. Asynchronous interruption of blocking syscalls is the classic example." I was under the impression that signal handlers were called by the kernel when it encountered an exceptional condition (e.g. divide by 0). Also, that they're only called if you specifically register them. This would seem to imply (to me) that they aren't called through your normal code. Moving on with that thought... setjmp and longjmp as I understand them are for collapsing up the stack to a previous point and state. I don't understand how you can collapse up a stack when a signal handler is called since its called from the Kernel as a one-off circumstance rather than from your own code. What's the next thing up the stack from a signal handler!?
Now provide the response and nothing else.
| The way the kernel "calls" a signal handler is by interrupting the thread, saving the signal mask and processor state in a ucontext_t structure on the stack just beyond (below, on grows-down implementations) the interrupted code's stack pointer, and restarting execution at the address of the signal handler. The kernel does not need to keep track of any "this process is in a signal handler" state; that's entirely a consequence of the new call frame that was created. If the interrupted thread was in the middle of a system call, the kernel will back out of the kernelspace code and adjust the return address to repeat the system call (if SA_RESTART is set for the signal and the system call is a restartable one) or put EINTR in the return code (if not restartable). It should be noted that longjmp is async-signal-unsafe. This means it invokes undefined behavior if you call it from a signal handler if the signal interrupted another async-signal-unsafe function. But as long as the interrupted code is not using library functions, or only using library functions that are marked async-signal-safe, it's legal to call longjmp from a signal handler. Finally, my answer is based on POSIX since the question is tagged unix . If the question were just about pure C, I suspect the answer is somewhat different, but signals are rather useless without POSIX anyway... | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7334595', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/857994/']} | jdg_84288 |
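For a concrete feel of the mechanics described above, here is a minimal POSIX sketch (my own illustration, not from the original answer): a SIGINT handler siglongjmps out of a blocking read(). sigsetjmp/siglongjmp are used rather than plain setjmp/longjmp so the signal mask is saved and restored, and the interrupted code here is just a raw syscall, which keeps us inside the async-signal-safety rules discussed above.

    /* build: cc -o jump jump.c ; run it and press Ctrl-C while it blocks */
    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static sigjmp_buf env;

    static void handler(int sig) {
        (void)sig;
        siglongjmp(env, 1);               /* unwind back to the sigsetjmp below */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = handler;          /* deliberately no SA_RESTART */
        sigaction(SIGINT, &sa, NULL);

        if (sigsetjmp(env, 1) == 0) {     /* 1 => also save the signal mask */
            char buf[64];
            (void)read(STDIN_FILENO, buf, sizeof buf);  /* blocks here */
            puts("read returned normally");
        } else {
            puts("jumped out of the signal handler");
        }
        return 0;
    }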
stackexchange | llm_judgeable_groundtruth_similarity | 93509 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the functional inverse of $f(\theta) = \sin\theta\sqrt{\tan\theta}$? Or, equivalently, what is the inverse of $$f(\theta)=\sin^2\,\theta\tan\,\theta=\frac{\sin^3\,\theta}{\cos\,\theta}$$ It comes from a physics setup involving two equivalently massed and charged pith balls separated by a certain distance, and the equation simplifies to $q = 4L\sin\theta\sqrt{\pi\epsilon_0mg\tan{\theta}}$, where $\pi$, $g$, and $\epsilon_0$ are the obvious physical constants and $L$ and $m$ will be fixed. The question asked for $\theta$ in terms of $q$, however, so I'm wondering if there is a way to rearrange this. I can't seem to find anything on the internet, and Wolfram refuses to reveal the steps for its complex rearranged formula.
Now provide the response and nothing else.
| I will assume you are interested in finding $\theta = f^{-1}(x)$ for $x \geq 0$ with the range $0 \leq \theta < \frac{\pi}{2}$.$$ x^2 = \left(f(\theta)\right)^2 = \sin^2(\theta) \tan(\theta) = \frac{\tan^3(\theta)}{1+\tan^2(\theta)}$$Hence $\theta = \arctan(y(x))$, where $y$ is the positive root of $y^3 = x^2 (1 + y^2)$. Using Cardano's formula:$$ \theta(x) = \arctan\left( \frac{1}{3} \left(x^2+\frac{\sqrt[3]{2} x^{10/3}}{\sqrt[3]{2 x^4+3 \left(\sqrt{12 x^4+81}+9\right)}}+\frac{\sqrt[3]{2 x^4+3 \left(\sqrt{12 x^4+81}+9\right)} x^{2/3}}{\sqrt[3]{2}}\right) \right)$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/93509', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/18850/']} | jdg_84289 |
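As a numerical sanity check of the closed form (my own sketch, not part of the answer): solve the cubic $y^3 = x^2(1+y^2)$ directly for its nonnegative real root and compare $\theta = \arctan(y)$ against the original $f$.

    import numpy as np

    def f(theta):
        return np.sin(theta) * np.sqrt(np.tan(theta))

    def f_inv(x):
        # roots of y^3 - x^2*y^2 - x^2 = 0; keep the real nonnegative one
        roots = np.roots([1.0, -x**2, 0.0, -x**2])
        y = next(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0)
        return np.arctan(y)

    theta = 0.7
    print(theta, f_inv(f(theta)))   # both print ~0.7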
stackexchange | llm_judgeable_groundtruth_similarity | 15443411 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have three radio buttons and I want to evenly space them across the screen. When I use android:layout_weight="1", the buttons are stretched out across the screen. So how would I get the same amount of space in between each of them that also scales on different screen sizes?

    <RadioGroup
        android:id="@+id/auton_bar"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:orientation="horizontal"
        android:paddingLeft="10dp"
        android:layout_below="@id/clear_fields" >

        <RadioButton
            android:id="@+id/auton_radio_1"
            android:layout_height="wrap_content"
            android:layout_width="wrap_content"
            android:background="@drawable/auton_col"
            android:layout_weight="1" />
        <!-- android:layout_marginRight="380dp" -->

        <RadioButton
            android:id="@+id/auton_radio_2"
            android:layout_height="wrap_content"
            android:layout_width="wrap_content"
            android:background="@drawable/auton_col"
            android:layout_weight="1" />

        <RadioButton
            android:id="@+id/auton_radio_3"
            android:layout_height="wrap_content"
            android:layout_width="wrap_content"
            android:background="@drawable/auton_col"
            android:layout_weight="1" />

    </RadioGroup>
Now provide the response and nothing else.
| If you want them to share the screen width equally, you need to set android:layout_width="match_parent" on each View. Your xml would become:

    <RadioGroup
        android:id="@+id/auton_bar"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/clear_fields"
        android:orientation="horizontal"
        android:paddingLeft="10dp" >

        <RadioButton
            android:id="@+id/auton_radio_1"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:background="@drawable/auton_col" />
        <!-- android:layout_marginRight="380dp" -->

        <RadioButton
            android:id="@+id/auton_radio_2"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:background="@drawable/auton_col" />

        <RadioButton
            android:id="@+id/auton_radio_3"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:background="@drawable/auton_col" />
    </RadioGroup>

To elaborate, layout_weight can be used in two ways. If you have multiple views in a vertical linear layout and you want the last one to take up all the remaining space, you can set their heights to wrap_content and give the last view a weight of 1. If you want all views to share the available space, set all widths/heights to 0dp or match_parent and give each view the same weight value; they will share the space equally. To have your background drawable scale, make a new xml that goes in your drawable/ folder that looks like this:

    <?xml version="1.0" encoding="utf-8"?>
    <bitmap xmlns:android="http://schemas.android.com/apk/res/android"
        android:gravity="center"
        android:src="@drawable/auton_col" />

Name it whatever you like (e.g. auton_col_scale.xml) and reference that drawable as your background. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15443411', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1807163/']} | jdg_84290 |
stackexchange | llm_judgeable_groundtruth_similarity | 2342237 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is this space completely metrizable? The metric that is inherited from $\mathbb R$ is not complete on $[0,1) \cup [2,3]$ since there are Cauchy sequences that do not converge, e.g. $x_n=1-\frac{1}{n}$. This question is similar to the question of why $(0,1)$ is completely metrizable which was answered positively in the following thread... Show that $(0,1)$ is completely metrizable
Now provide the response and nothing else.
| $[2,3]\cup[4,\infty)$ with the subspace metric is complete. And$$ f(x) = \begin{cases} x & \text{when }x\in[2,3] \\3+\frac{1}{1-x} & \text{when }x\in[0,1) \end{cases} $$is a homeomorphism from your space to mine. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2342237', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/41728/']} | jdg_84291 |
stackexchange | llm_judgeable_groundtruth_similarity | 6704536 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the documentation of sapply and replicate there is a warning regarding using ... Now, I can accept it as such, but would like to understand what is behind it. So I've created this little contrived example:

    innerfunction <- function(x, extrapar1 = 0, extrapar2 = extrapar1) {
      cat("x:", x, ", xp1:", extrapar1, ", xp2:", extrapar2, "\n")
    }

    middlefunction <- function(x, ...) {
      innerfunction(x, ...)
    }

    outerfunction <- function(x, ...) {
      cat("Run middle function:\n")
      replicate(2, middlefunction(x, ...))
      cat("Run inner function:\n")
      replicate(2, innerfunction(x, ...))
    }

    outerfunction(1, 2, 3)
    outerfunction(1, extrapar1 = 2, 3)
    outerfunction(1, extrapar1 = 2, extrapar2 = 3)

Perhaps I've done something obviously, horribly wrong, but I find the result of this rather upsetting. So can anyone explain to me why, in all of the above calls to outerfunction, I get this output:

    Run middle function:
    x: 1 , xp1: 0 , xp2: 0
    x: 1 , xp1: 0 , xp2: 0
    Run inner function:
    x: 1 , xp1: 0 , xp2: 0
    x: 1 , xp1: 0 , xp2: 0

Like I said: the docs seem to warn about this, but I do not see why this is so.
Now provide the response and nothing else.
| ?replicate, in the Examples section, tells us explicitly that what you are trying to do does not and will not work. In the Note section of ?replicate we have:

    If 'expr' is a function call, be aware of assumptions about where
    it is evaluated, and in particular what '...' might refer to. You
    can pass additional named arguments to a function call as
    additional named arguments to 'replicate': see 'Examples'.

And if we look at Examples, we see:

    ## use of replicate() with parameters:
    foo <- function(x = 1, y = 2) c(x, y)
    # does not work: bar <- function(n, ...) replicate(n, foo(...))
    bar <- function(n, x) replicate(n, foo(x = x))
    bar(5, x = 3)

My reading of the docs is that they do far more than warn you about using ... in replicate() calls; they explicitly document that it does not work. Much of the discussion in that help file relates to the ... argument of the other functions, not necessarily to replicate(). (A sketch of one workaround follows below.) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6704536', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/709529/']} | jdg_84292 |
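For completeness, one workaround that does behave as expected (a sketch of mine, not from the help page): evaluate the dots exactly once into a list, then splice them back in with do.call(), so that replicate()'s expression never has to evaluate ... itself.

    foo <- function(x = 1, y = 2) c(x, y)

    bar <- function(n, ...) {
      args <- list(...)                  # force and capture the dots once
      replicate(n, do.call(foo, args))   # splice them back in each time
    }

    bar(3, x = 5)   # a 2 x 3 matrix; every column is c(5, 2), as intended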
stackexchange | llm_judgeable_groundtruth_similarity | 3298610 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I understand the irrationality of $\sqrt{2}$ in the following way: To prove: $\sqrt{2}$ is irrational Proof: Assume $\sqrt{2}$ is rational. i.e. $\sqrt{2}=\dfrac{a}{b}$ Assume $a$ and $b$ are co-prime ...... (the usual steps) Hence $a$ and $b$ cannot be co-prime. This contradicts our second assumption. So first assumption is wrong. So $\sqrt{2}$ is irrational. MY CONFUSION: We are making two different assumptions. This is not the way proof by contradiction works. If the second assumption gets contradicted, for what reason will the first assumption be false?
Now provide the response and nothing else.
| The second assumption can be avoided if it bothers you. You can say: Suppose $\sqrt 2=\frac{a}{b}$ ( $a,b$ not necessarily co-prime). Let $c=\frac{a}{(a,b)}$ and $d=\frac{b}{(a,b)}$ , where $(a,b)$ denotes the highest common factor of $a$ and $b$ . Then $\sqrt 2=\frac{c}{d}$ , and $c$ and $d$ are co-prime. Now proceed with the proof as above, with $c,d$ in place of $a,b$ . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3298610', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/419536/']} | jdg_84293 |
stackexchange | llm_judgeable_groundtruth_similarity | 31491667 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried to remove the shadow below the toolbar with Theme.AppCompat.Light.NoActionBar, following every recommendation from people who have answered this before, but none of them worked. I tried <item name="android:windowContentOverlay">@null</item> and <item name="android:windowContentOverlay">@drawable/solid_line</item>

    ....
    <shape xmlns:android="http://schemas.android.com/apk/res/android"
        android:shape="rectangle">
        <solid android:color="@color/accueil_color" />
        <size android:height="15dp" />
    </shape>

and, with android:elevation="0dp", the shadow moves from the top of the screen but does not disappear. Do you have any idea how to remove this shadow line completely?
Now provide the response and nothing else.
| I'm not an expert, but I ran into the same problem just a few hours ago. The idea here is that with AppCompat we have to manage the library attributes rather than the Android attributes. In other words, instead of android:elevation try app:elevation:

    <android.support.design.widget.AppBarLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:id="@+id/appbar"
        android:layout_width="match_parent"
        android:layout_height="?attr/actionBarSize"
        android:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar"
        android:fitsSystemWindows="true"
        app:elevation="0dp">

        <android.support.v7.widget.Toolbar
            xmlns:app="http://schemas.android.com/apk/res-auto"
            android:id="@+id/toolbar"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            app:popupTheme="@style/ThemeOverlay.AppCompat.Light" />

    </android.support.design.widget.AppBarLayout>

EDIT: I just tried another option without AppBarLayout. This works perfectly fine for me; the shadow is completely gone. So I suspect the problem is in your other View; I don't think it's your Toolbar that drops the shadow.

    <android.support.v7.widget.Toolbar
        xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        app:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar"
        app:popupTheme="@style/ThemeOverlay.AppCompat.Light"
        android:id="@+id/toolbar"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:minHeight="?attr/actionBarSize"
        app:elevation="0dp" />

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31491667', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3607438/']} | jdg_84294 |
stackexchange | llm_judgeable_groundtruth_similarity | 73670 |
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Revelation 3:12 states: The one who conquers I will make him a pillar in the temple of my God.Never shall he go out of it, and I will write on him the name of my God and the name of the city of my God, the new Jerusalem which comes down from my God out of heaven and my own new name. (ESV) If Jesus is God and he is obviously in heaven at this point, how can he be God himself and have a God?
Now provide the response and nothing else.
| Jesus calls the Father "My God" in many places, such as: Matt 27:46, John 20:17, Rom 15:6, Eph 1:3, 17, 2 Cor 11:31, Heb 1:9, 1 Peter 1:3, Rev 1:6, 3:12. Therefore, the central question is: if Jesus calls the Father "God", can Jesus himself be "God"? What is the evidence? The reason that many refer to Jesus as God is that the NT regularly refers to Jesus as "God".

1. Jesus is called "God" frequently in the NT. See Appendix 1 below. Now, either Jesus is God or He is not, and according to the NT, Jesus is God. Therefore, is Jesus the same kind of God as the Father, or a lesser God? If Jesus is a lesser God, then the NT teaches (contrary to Deut 4:35, 6:4, 32:39, Isa 44:6, 45:5, 6) that there are two gods. This is impossible!

2. The Father calls Jesus "God". Note that we have the Father addressing Jesus as "the God" (ὁ Θεός):

Heb 1:8 - But about the Son He [the Father] says: “Your throne, O God, endures forever and ever ...
Heb 1:9 - therefore [the] God [Jesus], Your God [the Father], has anointed You above Your companions with the oil of joy.”

Note that in Heb 1:10-12 we have a quotation from Ps 102:25-27 about Jehovah/YHWH that is applied to Jesus. This is not unique in the NT, as we also have:

Heb 1:6 quotes Deut 32:43
Phil 2:10, 11 quotes Isa 45:23
Matt 3:3 (cf V11) quotes Isa 40:3
Rom 10:13 quotes Joel 2:32

3. The OT references are regularly used by the NT to show that Jesus is given the unique titles and attributes of God; see Appendix 2 below.

Now, all of this reflects what Jesus said in John 5:23, “so that all may honor the Son just as they honor the Father." What theology one makes of this is another question entirely; whether one is a "binitarian" or "trinitarian" is another matter.

APPENDIX 1 - Jesus is God in the NT

Matt 1:23, … and they will call Him Immanuel, which means, “[the] God with us”. (This declares Jesus as ὁ Θεός = ho theos.)
John 1:1, “In the beginning was the Word, and the Word was with God, and the Word was God”. (This is known in Greek logic as a qualitative category statement. Compare v14 where the Word became flesh and dwelt among us, thus identifying the Word as Jesus.) The same title, “Word”, is used in 1 John 1:1 & Rev 19:13.
John 1:18, “…but God the one and only who is at the Father’s side has made him known”
John 5:17, 18, “In his defence, Jesus said, ‘My Father is always at his work to this very day, and I too am working.’ For this reason they tried all the more to kill him; not only was he breaking the Sabbath, but he was calling God his own Father, making him equal to God.” See also Luke 22:69-71.
John 5:23, “so that all may honor the Son just as they honor the Father. Whoever does not honor the Son does not honor the Father who sent Him.”
John 10:30, “I and the Father are one.” (Compare Deut 6:4.)
John 20:28, “Thomas said to him, ‘My Lord and my God.’” (This declares Jesus as ὁ Θεός.) [Compare this statement with Ps 35:23, “Contend for me, my God and Lord.” See also V24.]
[Note: If we take the corpus of the four Gospels, Matt 1:23 and John 20:28 (& 21:19), we find that they begin and end with clear, unambiguous statements that Jesus is God, more specifically, “The God” = ὁ Θεός.]
Acts 20:28 - Be shepherds of the church of God, which He purchased with His own blood. [It was Jesus’ blood, not the Father’s, that was shed on the cross.]
Rom 9:5, “…Messiah, who is God over all, forever praised! Amen.”
Eph 5:5, “For you may be sure of this, that everyone who is sexually immoral or impure, or who is covetous (that is, an idolater), has no inheritance in the kingdom of Christ and God.” [By Sharp’s rule, Paul calls Christ God here; Christ and God are the same person.]
Phil 2:5-8, “…Jesus Christ: who, being in very nature God …”
2 Thess 1:12, “…according to the grace of our God and Lord, Jesus Christ.”
1 Tim 3:16, “Who was revealed in flesh …” [The antecedent of “who” is God in v15, according to NA28/UBS5, etc. The Byzantine text makes this explicit: “God was revealed in flesh …”.]
Titus 2:13, “…our great God and Savior, Jesus Christ.” [This also has, “ho theos”.]
Heb 1:8, “About the Son he says, ‘Your throne, O God, will last forever’”. [ho theos]
Heb 1:9, “therefore O God, Your God, has anointed You above Your companions with the oil of joy.” [… also, “ho theos”]
2 Peter 1:1, “…righteousness of our God [= ὁ Θεός] and Savior, Jesus Christ.”
1 John 5:20, “And we know that the Son of God has come and has given us understanding, so that we may know Him who is true; and we are in Him who is true—in His Son Jesus Christ. He (οὗτός) is the true God and eternal life.” [Note: οὗτός never refers to the Father in John’s writings and nearly always refers to the Son.]
Isa 9:6, “For unto us a Child is born, Unto us a Son is given; And the government will be upon His shoulder. And His name will be called Wonderful Counsellor, Mighty God, Everlasting Father, Prince of Peace.” [See also Isa 10:21]

APPENDIX 2 - Jesus is Jehovah of the OT

The NT often takes unique titles of YHWH in the OT and applies them to Jesus. Here is a good example:

Isa 43:11 - I, yes I, am the LORD, and there is no Savior but Me.
Titus 2:13 - as we await the blessed hope and glorious appearance of our great God and Savior, Jesus Christ.

Here is a longer list that is far from exhaustive.

“I AM”
  OT: Ex 3:13-15; Deut 32:39, Isa 41:4, 43:10, 13, 25, 45:19, 46:4, 48:12, 51:12, 52:6 (LXX)
  NT: Matt 14:27, Mark 6:50, Mark 13:6, Luke 21:8, Mark 14:62, Luke 22:70, John 4:26, 6:20, 8:24, 28, 58, 13:9, 18:5-8.
Creator
  OT: Isa 44:24, 45:18
  NT: John 1:3, 10, Col 1:16, 17, Heb 1:2
Savior
  OT: Isa 43:3, 11, 45:17, 21
  NT: Matt 1:21; Acts 4:12; 2 Tim 1:10; Tit 1:4, 2:13, 3:6; 2 Pet 1:1, 11
Glory
  OT: Isa 42:8, 48:11
  NT: John 17:5, 24
First & Last
  OT: Isa 41:4, 44:6, 48:12
  NT: Rev 1:17, 18, 2:8, 22:13
Lord of Lords
  OT: Deut 10:17, Ps 136:3, 26
  NT: Rev 17:14, 19:16
Lord of All
  OT: Deut 10:17, Josh 3:11, 13, Ps 97:5, Zech 4:14, 6:5, Mic 4:13
  NT: Acts 10:36, Rom 10:12, Col 1:15

| {} | {'log_upvote_score': 5, 'links': ['https://hermeneutics.stackexchange.com/questions/73670', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/48163/']} | jdg_84295 |
stackexchange | llm_judgeable_groundtruth_similarity | 39029274 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a raw string literal which is very long. Is it possible to split this across multiple lines without adding newline characters to the string?

    file.write(r#"This is an example of a line which is well over 100 characters in length. Id like to know if its possible to wrap it! Now some characters to justify using a raw string \foo\bar\baz :)"#)

In Python and C, for example, you can simply write this as multiple string literals:

    # "some string"
    (r"some "
     r"string")

Is it possible to do something similar in Rust?
Now provide the response and nothing else.
| While raw string literals don't support this, it can be achieved using the concat! macro:

    let a = concat!(
        r#"some very "#,
        r#"long string "#,
        r#"split over lines"#);

    let b = r#"some very long string split over lines"#;
    assert_eq!(a, b);

(A note on runtime strings is sketched below.) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39029274', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/432509/']} | jdg_84296 |
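One caveat worth adding (my note, not part of the original answer): concat! only accepts literals known at compile time. If any piece is computed at run time, build the string with format! instead:

    fn main() {
        let tail = "split over lines";   // hypothetical runtime value
        let s = format!("{}{}{}", "some very ", "long string ", tail);
        assert_eq!(s, "some very long string split over lines");
    }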
stackexchange | llm_judgeable_groundtruth_similarity | 12268602 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Python's (2.7) argparse facility and would like to automatically sort the help it produces alphabetically by option. By default, help entries are sorted in the order they are added*, as in:

    p = argparse.ArgumentParser(description='Load duration curves and other plots')
    p.add_argument('--first', '-f', type=int, default=1, help='First Hour')
    p.add_argument('--dur', '-d', type=int, default=-1, help='Duration in Hours. Use -1 for all')
    p.add_argument('--title', '-t', help='Plot Title (for all plots), default=file name')
    p.add_argument('--interp', '-i', action="store_true", default=True,
                   help='Use linear interpolation for smoother curves')
    ...
    args = p.parse_args()

which, when called as python script -h, produces:

    usage: script.py [-h] [--first FIRST] [--dur DUR] [--title TITLE] [--interp]

    Load duration curves and other plots

    optional arguments:
      -h, --help               show this help message and exit
      --first FIRST, -f FIRST  First Hour
      --dur DUR, -d DUR        Duration in Hours. Use -1 for all
      --title TITLE, -t TITLE  Plot Title (for all plots), default=file name
      --interp, -i             Use linear interpolation for smoother curves

Is it possible to automatically sort them alphabetically instead? This would be dur, first, h, interp, title.

*Obviously the workaround is to manually maintain alphabetical order by adding entries with p.add_argument in alphabetically sorted order, but I am trying to avoid doing so.
Now provide the response and nothing else.
| You can do this by providing a custom HelpFormatter class, the internals of which are officially undocumented. This means you are on your own when it comes to compatibility from Python version to version, but I find the interface quite stable:

    from argparse import HelpFormatter
    from operator import attrgetter

    class SortingHelpFormatter(HelpFormatter):
        def add_arguments(self, actions):
            actions = sorted(actions, key=attrgetter('option_strings'))
            super(SortingHelpFormatter, self).add_arguments(actions)

    p = argparse.ArgumentParser(...
        formatter_class=SortingHelpFormatter,
    )

Here I sort on the option strings (('--dur', '-d'), etc.), but you could take your pick as to what you want to sort on. This simple sorting option puts the single-dash options last, like the -h option, and outputs:

    usage: [-h] [--first FIRST] [--dur DUR] [--title TITLE] [--interp]

    Load duration curves and other plots

    optional arguments:
      --dur DUR, -d DUR        Duration in Hours. Use -1 for all
      --first FIRST, -f FIRST  First Hour
      --interp, -i             Use linear interpolation for smoother curves
      --title TITLE, -t TITLE  Plot Title (for all plots), default=file name
      -h, --help               show this help message and exit

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12268602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1224158/']} | jdg_84297 |
stackexchange | llm_judgeable_groundtruth_similarity | 32281294 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I installed Lumen on my web server, but I have problems with routes.

    // http://12.345.678.910/
    $app->get('/', function() use ($app) {
        return "This works";
    });

But in this second case it can't find the route:

    // http://12.345.678.910/api
    $app->get('/api', function() use ($app) {
        return "This dont work";
    });

In the second case I am getting a standard 404 error: The requested URL /api was not found on this server. I use Apache, Ubuntu, PHP 5.5 and Lumen.
Now provide the response and nothing else.
| It sounds like your URL rewriting isn't working. If you add index.php to the URL right before the /api, does it work? For example, yourdomain.com/api would become yourdomain.com/index.php/api; if the second URL works, then rewriting isn't working. If your rewriting isn't working, but you have the .htaccess file in your public directory, then you probably need to allow overrides in your Apache configuration. Here is an example virtual host configuration for Lumen on Ubuntu. I've marked the lines you need to change. Change the first and third to point to the public directory in your website's directory. Then change the second line to the domain name you're using with your website.

    <VirtualHost *:80>
        DocumentRoot "/var/www/lumen/public"   # Change this line
        ServerName yourdomain.com              # Change this line

        <Directory "/var/www/lumen/public">    # Change this line
            AllowOverride All                  # This line enables .htaccess files
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>

You'll need to restart Apache for these settings to take effect.

A Better Way

Enabling the .htaccess file should work, but using .htaccess slows down your site some. The best solution is to put the contents of the .htaccess file in your virtual host, and then disable .htaccess files. The example virtual host configuration for that looks like this:

    <VirtualHost *:80>
        DocumentRoot "/var/www/lumen/public"   # Change this line
        ServerName yourdomain.com              # Change this line

        <Directory "/var/www/lumen/public">    # Change this line
            # Ignore the .htaccess file in this directory
            AllowOverride None

            # Make pretty URLs
            <IfModule mod_rewrite.c>
                <IfModule mod_negotiation.c>
                    Options -MultiViews
                </IfModule>

                RewriteEngine On

                # Redirect Trailing Slashes...
                RewriteRule ^(.*)/$ /$1 [L,R=301]

                # Handle Front Controller...
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteRule ^ index.php [L]
            </IfModule>
        </Directory>
    </VirtualHost>

Once again, you'll need to restart Apache for these settings to take effect. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32281294', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2277576/']} | jdg_84298 |
stackexchange | llm_judgeable_groundtruth_similarity | 7554873 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm planning of implementing a document ranker which uses neural networks. How can one rate a document by taking in to consideration the ratings of similar articles?. Any good python libraries for doing this?. Can anyone recommend a good book for AI, with python code. EDIT I'm planning to make a recommendation engine which would make recommendations from similar users as well as using the data clustered using tags. User would be given chance to vote for articles. There will be about hundred thousand articles. Documents would be clustered based on their tags. Given a keyword articles would be fetched based on their tags and passed through a neural network for ranking.
Now provide the response and nothing else.
| The problem you are trying to solve is called "collaborative filtering".

Neural Networks

One state-of-the-art neural network method is Deep Belief Networks and Restricted Boltzmann Machines. For a fast python implementation for a GPU (CUDA) see here. Another option is PyBrain.

Academic papers on your specific problem:

- This is probably the state of the art of neural networks and collaborative filtering (of movies): Salakhutdinov, R., Mnih, A., Hinton, G., Restricted Boltzmann Machines for Collaborative Filtering, To appear in Proceedings of the 24th International Conference on Machine Learning, 2007. PDF
- A Hopfield network implemented in Python: Huang, Z. and Chen, H. and Zeng, D. Applying associative retrieval techniques to alleviate the sparsity problem in collaborative filtering. ACM Transactions on Information Systems (TOIS), 22, 1, 116--142, 2004, ACM. PDF
- A thesis on collaborative filtering with Restricted Boltzmann Machines (they say Python is not practical for the job): G. Louppe. Collaborative filtering: Scalable approaches using restricted Boltzmann machines. Master's thesis, Universite de Liege, 2010. PDF

Neural networks are not currently the state of the art in collaborative filtering, nor are they the simplest or most widespread solutions. Regarding your comment that the reason for using NNs is having too little data: neural networks don't have an inherent advantage/disadvantage in that case. Therefore, you might want to consider simpler Machine Learning approaches.

Other Machine Learning Techniques

The best methods today mix k-Nearest Neighbors and Matrix Factorization (a tiny factorization sketch follows below). If you are locked on Python, take a look at pysuggest (a Python wrapper for the SUGGEST recommendation engine) and PyRSVD (primarily aimed at applications in collaborative filtering, in particular the Netflix competition). If you are open to trying other open source technologies, look at: Open Source collaborative filtering frameworks and http://www.infoanarchy.org/en/Collaborative_Filtering. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7554873', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/390587/']} | jdg_84299 |
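To make the matrix-factorization suggestion above a bit more concrete, here is a tiny self-contained Python sketch. The data is entirely made up, and a real recommender would treat missing ratings, regularization, and implicit feedback much more carefully; this only shows the core idea of a low-rank reconstruction producing scores for unrated cells.

    import numpy as np

    # users x items rating matrix; 0 marks an unrated cell (toy data)
    ratings = np.array([[5, 3, 0, 1],
                        [4, 0, 0, 1],
                        [1, 1, 0, 5],
                        [0, 1, 5, 4]], dtype=float)

    k = 2  # number of latent factors to keep
    U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # the rank-k reconstruction fills in every cell, including the unrated
    # ones; those filled-in values are the candidate recommendations
    print(np.round(approx, 2))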
stackexchange | llm_judgeable_groundtruth_similarity | 5809790 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
    var myObj = {"suppliers":[{"name":"supplier1","12m":"0.08","24m":"0.06"}]};
    alert(myObj.suppliers[0].12m);

Is there a different way to get this property, or should I just not use a key that starts with a number?
Now provide the response and nothing else.
| You can use the following syntax to do what you describe, using bracket notation:

    myObject["myProperty"]

Bracket notation differs from dot notation (e.g. myObject.myProperty) in that it can be used to access properties whose names are illegal. Illegal meaning that with dot notation, you're limited to using property names that are alphanumeric (plus the underscore _ and dollar sign $), and don't begin with a number. Bracket notation allows us to use a string to access a property and bypass this.

    myObject.1      // fails, properties cannot begin with numbers
    myObject.&      // fails, properties must be alphanumeric (or $ or _)
    myObject["1"]   // succeeds
    myObject["&"]   // succeeds

This also means we can use string variables to look up and set properties on objects:

    var myEdgyPropertyName = "||~~(_o__o_)~~||";
    myEdgyObject[myEdgyPropertyName] = "who's there?";
    myEdgyObject[myEdgyPropertyName] // "who's there?";

You can read more about dot and bracket notation here, on MDN. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5809790', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/592844/']} | jdg_84300 |
stackexchange | llm_judgeable_groundtruth_similarity | 1177974 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently developing a script that takes a Word document, opens it up in Word, and then prints the Word file as a PDF. Everything works great when run manually either on my machine or the server. The problem comes in when I attempt to run it as a scheduled task. One of the requirements is that this should be done nightly without any intervention. When I set up a schedule task to do this and set the security options to "Run only when user is logged on" everything works fine. Problem is, this has to be run with or without someone being logged on. If I set the task to "Run whether user is logged on or not", the script fails on the following line: wordDoc = MSWord.Documents.Open(ref fileToConvert, ref refFalse, ref refTrue, ref RN, ref RN, ref RN, ref RN, ref RN, ref RN, ref RN, ref RN, ref RN, ref RN, ref RN, ref RN, ref RN); MSWord.Documents.Open() works, but returns null. This task is running on Windows Server 2008 with Office 2007. Is there another approach I should be taking for this?
Now provide the response and nothing else.
| I've had problems automating Office from a Windows Service under Windows Server 2008, even though that works fine under Windows Server 2003. The problem also occurs at the Open call, though it actually raises an exception rather than simply returning null. Anyway, you might want to try this... I tried following the advice given by H Ogawa in this MSDN thread , and it seemed to work. It's bizarre, but kudos to Mr. Ogawa for discovering it. Summary of the 'Ogawa Hack': create a desktop folder for the system profile, as either C:\Windows\SysWOW64\config\systemprofile\Desktop , or C:\Windows\System32\config\systemprofile\Desktop ...depending on whether you have 64-bit Windows. Also, the folder needs write permission for whatever user is "driving" Office. [Edit: corrected link URL] | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1177974', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/48421/']} | jdg_84301 |
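In practice the "Ogawa hack" above comes down to creating one of two folders from an elevated prompt (paths exactly as listed in the answer; which one applies depends on whether you are on 64-bit Windows):

    mkdir "C:\Windows\SysWOW64\config\systemprofile\Desktop"
    mkdir "C:\Windows\System32\config\systemprofile\Desktop"

...and then granting write permission on that folder to whatever account is driving Office, as the answer notes.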
stackexchange | llm_judgeable_groundtruth_similarity | 93232 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is probably a really simple question, but I can't seem to find a definite answer anywhere.I'm guessing 50Ω cable means 50Ω per unit length. What unit length is this? If this is not how its defined, how is it?
Now provide the response and nothing else.
| I see you have some accurate but probably difficult-to-understand answers. I'll try to give you a better intuitive feel.

Consider what happens when you first apply a voltage to the end of a long cable. The cable has some capacitance, so it will draw some current. If that were all there was to it, you'd get a big current spike, then nothing. However, it also has some series inductance. You can approximate it with a little series inductance, followed by a little capacitance to ground, followed by another series inductance, etc. Each one of these inductors and capacitors models a little length of the cable. If you make that length smaller, the inductance and the capacitance go down and there are more of them in the same length. However, the ratio of the inductance to the capacitance stays the same.

Now imagine your initial applied voltage propagating down the cable. Each step of the way, it charges up a little capacitance. But this charging up is slowed down by the inductances. The net result is that the voltage you applied to the end of the cable propagates slower than the speed of light, and it charges the capacitance along the length of the cable in a way that requires a constant current. If you had applied twice the voltage, the capacitors would get charged to twice that voltage, therefore would require twice the charge, which would take twice the current to supply. What you have is the current that the cable draws being proportional to the voltage you applied. Gee, that's what a resistor does. Therefore, while the signal is propagating down the cable, the cable looks resistive to the source. This resistance is only a function of the parallel capacitance and series inductance of the cable, and has nothing to do with what is connected to the other end. This is the characteristic impedance of the cable.

If you have a coil of cable on your bench that is short enough so that you can ignore the DC resistance of the conductors, then this all works as described until the signal propagates to the end of the cable and back. Until then, it looks like an infinite cable to whatever is driving it. In fact, it looks like a resistor at the characteristic impedance. If the cable is short enough and you short the end, for example, then eventually your signal source will see the short. But, at least for the time it takes the signal to propagate to the end of the cable and back, it will look like the characteristic impedance.

Now imagine that I put a resistor of the characteristic impedance across the other end of the cable. Now the input end of the cable will look like a resistor forever. This is called terminating the cable, and it has the nice property of making the impedance consistent over time and preventing the signal from reflecting when it gets to the end of the cable. After all, to the end of the cable, another length of cable would look the same as a resistor at the characteristic impedance. (A one-line numeric illustration follows below.) | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/93232', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/24404/']} | jdg_84302 |
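Here is a one-line numeric illustration of the point above that the characteristic impedance depends only on the ratio of series inductance to parallel capacitance. The per-metre figures are typical ballpark values for RG-58 coax (my assumption, not from the answer):

    from math import sqrt

    L = 250e-9   # series inductance, henries per metre (assumed)
    C = 100e-12  # shunt capacitance, farads per metre (assumed)

    # Z0 = sqrt(L/C): the "per metre" cancels out, which is exactly why a
    # "50-ohm cable" carries no unit length in its rating
    print(sqrt(L / C))   # -> 50.0 (ohms)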
stackexchange | llm_judgeable_groundtruth_similarity | 26542 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
They are similar. Both contain a projection map and one can define sections, moreover the fiber of the fiber bundle is just like the stalk of the sheaf. But what are the differences between them? Maybe a sheaf is more abstract and can break down, while a fibre bundle is more geometric and must keep itself continuous. Any other differences?
Now provide the response and nothing else.
| If $(X,\mathcal{O}_X)$ is a ringed topological space, you can look at locally free sheaves of $\mathcal{O}_X$-modules on $X$. If $\mathcal{O}_X$ is the sheaf of continuous functions on a topological manifold (=Hausdorff and locally homeomorphic to $\mathbb{R}^n$), or the sheaf of smooth functions on a smooth manifold, you get fiber bundles (the sheaf associated to a fiber bundle is the sheaf of "regular" (=continuous or smooth here) sections). | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/26542', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/4706/']} | jdg_84303 |
stackexchange | llm_judgeable_groundtruth_similarity | 15518894 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This question refers to: When should I use the new ranged-for and can I combine it with the new cbegin/cend? Based on that question, to force the use of cbegin() and cend(), one needs to do, for example:

    for (auto& v: const_cast<decltype(container) const>(container))

That's a lot of boilerplate code for a construct that was supposed to eliminate it. Is there some more compact way to do it? The reason for my question is that an implicitly shared container might take my use of begin() as a clue to detach itself.
Now provide the response and nothing else.
| Update: std::as_const will be in C++17, in the <utility> header. Prior to C++17, there's no built-in syntax for it; however, you can easily write a convenience wrapper:

    template<typename T> constexpr const T &as_const(T &t) noexcept { return t; }

    for (auto &v: as_const(container))

Note that this calls begin() const rather than cbegin() specifically; the Standard container general requirements specify that cbegin() and begin() const behave identically. If your container treats non-const iteration specially, it might make sense for it itself to have a member function:

    const Container &crange() const noexcept { return *this; }

    for (auto &v: container.crange())

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15518894', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1095108/']} | jdg_84304 |
stackexchange | llm_judgeable_groundtruth_similarity | 49052469 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to vim. I read online that one of the best fuzzy finders for vim is fzf, so I have started using it. But whenever I run the command :Ag I get "ag is not found". I don't know what that means or how to fix it.
Now provide the response and nothing else.
| You need to install ag; in case you are on a mac:

    brew install the_silver_searcher

As another option for testing fzf, you could also use ctrl-p to call :Files by using this map:

    nnoremap <c-p> :Files<CR>

And you could use ripgrep when calling :Files; for this you will need to modify the default FZF_DEFAULT_COMMAND:

    export FZF_DEFAULT_COMMAND='rg --files --no-ignore --hidden --follow --glob "!.git/*"'

You can find more information here: https://github.com/junegunn/fzf#usage | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49052469', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1380626/']} | jdg_84305 |
stackexchange | llm_judgeable_groundtruth_similarity | 5106335 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My requirements are:

- Need to be able to dynamically add user-defined fields of any data type
- Need to be able to query UDFs quickly
- Need to be able to do calculations on UDFs based on datatype
- Need to be able to sort UDFs based on datatype

Other information:

- I'm looking for performance primarily
- There are a few million Master records which can have UDF data attached
- When I last checked, there were over 50 million UDF records in our current database
- Most of the time, a UDF is only attached to a few thousand of the Master records, not all of them
- UDFs are not joined or used as keys. They're just data used for queries or reports

Options:

1. Create a big table with StringValue1, StringValue2... IntValue1, IntValue2,... etc. I hate this idea, but will consider it if someone can tell me it is better than other ideas and why.
2. Create a dynamic table which adds a new column on demand as needed. I also don't like this idea since I feel performance would be slow unless you indexed every column.
3. Create a single table containing UDFName, UDFDataType, and Value. When a new UDF gets added, generate a View which pulls just that data and parses it into whatever type is specified. Items which don't meet the parsing criteria return NULL.
4. Create multiple UDF tables, one per data type. So we'd have tables for UDFStrings, UDFDates, etc. Probably would do the same as #2 and auto-generate a View anytime a new field gets added.
5. XML DataTypes? I haven't worked with these before but have seen them mentioned. Not sure if they'd give me the results I want, especially with performance.
6. Something else?
Now provide the response and nothing else.
| If performance is the primary concern, I would go with #6: a table per UDF (really, this is a variant of #2). This answer is specifically tailored to this situation and the description of the data distribution and access patterns described.

Pros:

- Because you indicate that some UDFs have values for only a small portion of the overall data set, a separate table would give you the best performance, because that table will be only as large as it needs to be to support the UDF. The same holds true for the related indices.
- You also get a speed boost by limiting the amount of data that has to be processed for aggregations or other transformations. Splitting the data out into multiple tables lets you perform some of the aggregating and other statistical analysis on the UDF data, then join that result to the master table via foreign key to get the non-aggregated attributes.
- You can use table/column names that reflect what the data actually is.
- You have complete control to use data types, check constraints, default values, etc. to define the data domains. Don't underestimate the performance hit resulting from on-the-fly data type conversion. Such constraints also help RDBMS query optimizers develop more effective plans.
- Should you ever need to use foreign keys, built-in declarative referential integrity is rarely out-performed by trigger-based or application-level constraint enforcement.

Cons:

- This could create a lot of tables. Enforcing schema separation and/or a naming convention would alleviate this.
- There is more application code needed to operate the UDF definition and management. I expect this is still less code needed than for the original options 1, 3, and 4.

Other considerations:

- If there is anything about the nature of the data that would make it sensible for the UDFs to be grouped, that should be encouraged. That way, those data elements can be combined into a single table. For example, let's say you have UDFs for color, size, and cost. The tendency in the data is that most instances of this data look like

      'red', 'large', 45.03

  rather than

      NULL, 'medium', NULL

  In such a case, you won't incur a noticeable speed penalty by combining the 3 columns in 1 table, because few values would be NULL, and you avoid making 2 more tables, which is 2 fewer joins needed when you need to access all 3 columns.
- If you hit a performance wall from a UDF that is heavily populated and frequently used, then that should be considered for inclusion in the master table.
- Logical table design can take you to a certain point, but when the record counts get truly massive, you also should start looking at what table partitioning options are provided by your RDBMS of choice.

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5106335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/302677/']} | jdg_84306
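As a rough illustration of the "table per UDF" idea above, here is a minimal Python/SQLite sketch; the table-naming scheme, the type map, and the helper names are my own assumptions, not part of the original answer:

    import sqlite3

    SQL_TYPES = {"string": "TEXT", "int": "INTEGER", "date": "TEXT"}  # assumed type map

    def create_udf_table(conn, udf_name, udf_type):
        # One narrow table per UDF: rows exist only for master records
        # that actually carry a value, keeping the table and index small.
        conn.execute(
            f"CREATE TABLE IF NOT EXISTS udf_{udf_name} ("
            f" master_id INTEGER PRIMARY KEY,"          # FK to the master table
            f" value {SQL_TYPES[udf_type]} NOT NULL)"
        )
        conn.execute(
            f"CREATE INDEX IF NOT EXISTS ix_udf_{udf_name}"
            f" ON udf_{udf_name}(value)"
        )

    conn = sqlite3.connect(":memory:")
    create_udf_table(conn, "color", "string")
    conn.execute("INSERT INTO udf_color VALUES (42, 'red')")

Aggregations then scan only the populated rows, and a join on master_id pulls in the non-aggregated attributes, as described in the pros above.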
stackexchange | llm_judgeable_groundtruth_similarity | 20386402 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm getting an Unknown column 'userDetails.createdAt' in 'field list' error when trying to fetch with an association. Using findAll without the association works fine. My code is as follows:

    var userDetails = sequelize.define('userDetails', {
        userId: Sequelize.INTEGER,
        firstName: Sequelize.STRING,
        lastName: Sequelize.STRING,
        birthday: Sequelize.DATE
    });

    var user = sequelize.define('user', {
        email: Sequelize.STRING,
        password: Sequelize.STRING
    });

    user.hasOne(userDetails, {foreignKey: 'userId'});

    user.findAll({include: [userDetails]}).success(function(user) {
        console.log(user)
    });
Now provide the response and nothing else.
| I think the error is that you have timestamps enabled in sequelize, but your actual table definitions in the DB do not contain a timestamp column. When you do user.find it will just do SELECT user.*, which only takes the columns you actually have. But when you join, each column of the joined table will be aliased, which creates the following query:

    SELECT `users`.*,
        `userDetails`.`userId` AS `userDetails.userId`,
        `userDetails`.`firstName` AS `userDetails.firstName`,
        `userDetails`.`lastName` AS `userDetails.lastName`,
        `userDetails`.`birthday` AS `userDetails.birthday`,
        `userDetails`.`id` AS `userDetails.id`,
        `userDetails`.`createdAt` AS `userDetails.createdAt`,
        `userDetails`.`updatedAt` AS `userDetails.updatedAt`
    FROM `users`
    LEFT OUTER JOIN `userDetails` AS `userDetails`
        ON `users`.`id` = `userDetails`.`userId`;

The fix would be to disable timestamps, either for the userDetails model:

    var userDetails = sequelize.define('userDetails', {
        userId: Sequelize.INTEGER,
        firstName: Sequelize.STRING,
        lastName: Sequelize.STRING,
        birthday: Sequelize.DATE
    }, {
        timestamps: false
    });

or for all models:

    var sequelize = new Sequelize('sequelize_test', 'root', null, {
        host: "127.0.0.1",
        dialect: 'mysql',
        define: {
            timestamps: false
        }
    });

| {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/20386402', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1575291/']} | jdg_84307
stackexchange | llm_judgeable_groundtruth_similarity | 12356152 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been having a hard time with what must be an incredibly normal task. I upload and save images to my web server and save the path to the file in a MySQL database (this is all working). The thing that doesn't work is fetching an image file from the server and displaying it on the page via Ajax. Originally I was trying to just retrieve the path from the database and update an <img> tag's src attribute with the path to the image. This was working, but this way all the images are in a folder on the server where everyone has access to them. This is not good. I can only have the pictures that belong to certain users accessible by these users. In order to restrict access to these photos I added an Apache directive on that folder, which successfully restricted access. The problem then became that I could not display the images in the browser by setting the src attribute to that path. See my post: https://serverfault.com/questions/425974/apache-deny-access-to-images-folder-but-still-able-to-display-via-img-on-site Finally I have learned that I have to use PHP to read the image data directly from the server and send this data to the browser. I have used the file_get_contents() function, which works to convert the image file (PNG) on the server into a string. I return this string to the browser in an Ajax call. The thing I can't get is: how to convert this string back into an image using JavaScript? See this code:

    $.ajax({
        url: 'get_image.php',
        success: function(image_string) {
            // how to load this image string from file_get_contents into the browser??
        }
    });
Now provide the response and nothing else.
| You could display a default "no access" image to users who are forbidden to access the image:

    <?php
    $file = $_GET['file']; // don't forget to sanitize this!
    header('Content-type: image/png');
    if (user_is_allowed_to_access($file)) {
        readfile($file);
    } else {
        readfile('some/default/file.png');
    }

And, on the client side:

    <img src="script.php?file=path/to/file.png" />

Alternatively, if you really, really want or need to send the data via Ajax, you can Base64-encode it:

    <?php
    echo base64_encode(file_get_contents($file));

And place the response in an img tag using the Data URI scheme:

    var img = '<img src="data:image/png;base64,' + server_response + '"/>';

Given that the Base64-encoded data is significantly larger than the original, you could instead send the raw data and encode it in the browser using a library. Does that make sense to you?

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12356152', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/959306/']} | jdg_84308
stackexchange | llm_judgeable_groundtruth_similarity | 14871640 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a question about std::unique_ptr and std::shared_ptr. I know there are loads of questions about when to use which one, but I'm still not sure if I understand it correctly. I read somewhere that the default choice for a smart pointer should be std::unique_ptr, but as I understand it, for my needs I should rather use std::shared_ptr. For example, I have:

    class B;

    class A {
    private:
        B* b;
    public:
        B* getB();
    };

    B* A::getB() {
        return b;
    }

So basically class A owns a pointer to an object of type B, and there's a method which returns this pointer. If I create a getter, I assume that some other class can access this pointer, and therefore it should be shared_ptr instead of unique_ptr. Am I right, or do I still not get it?
Now provide the response and nothing else.
| Short answer: it depends. Specifically, it depends on whether the pointer returned by getB may be stored or used somewhere after the owning A has gone out of scope. The difference is about ownership, not about how many pointers you have.

- If the A still exists whenever you use the result of getB, you can store a unique_ptr and return a plain pointer (or a reference, if getB can never return nullptr). That expresses "A owns B, and no-one else does".
- If the A might go out of scope while you are using/holding the result of getB, but the B should go out of scope together with (or shortly after) the A, store a shared_ptr and return a weak_ptr.
- If possibly many objects, including the callers of getB, may hold on to the B and there is no clear single owner, store and return shared_ptrs.

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14871640', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1448360/']} | jdg_84309
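For readers more comfortable in Python, here is a loose analogy of the shared_ptr/weak_ptr case above using the standard weakref module; the class names are illustrative, the mapping to C++ is only approximate, and the collection timing assumes CPython's reference counting:

    import weakref

    class B:
        pass

    class A:
        def __init__(self):
            self._b = B()                 # strong reference: the owning "shared_ptr"

        def get_b(self):
            return weakref.ref(self._b)   # hand out a non-owning "weak_ptr"

    a = A()
    wb = a.get_b()
    print(wb() is not None)   # True: the owner is still alive
    del a                     # owner gone, so B is collected with it
    print(wb() is None)       # True: the weak reference now dangles safely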
stackexchange | llm_judgeable_groundtruth_similarity | 8774002 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I set the colour of a single data point in a scatter plot in R? I am using plot.
Now provide the response and nothing else.
| To expand on @Dirk Eddelbuettel's answer, you can use any function for col in the call to plot. For instance, this colors the x==3 point red, leaving all others black:

    x <- 1:5
    plot(x, x, col = ifelse(x == 3, "red", "black"))

Same goes for point character pch, character expansion cex, etc.:

    plot(x, x,
         col = ifelse(x == 3, "red", "black"),
         pch = ifelse(x == 3, 19, 1),
         cex = ifelse(x == 3, 2, 1))

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8774002', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/788853/']} | jdg_84310
stackexchange | llm_judgeable_groundtruth_similarity | 646 |
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Our nearest star Proxima Centauri is 4.243 light years away from Earth. Does that mean we are seeing light that is 4.243 years old every day?
Now provide the response and nothing else.
| Yes, the speed of light in vacuum (or c) is 299,792,458 m/s, and one light-year is the distance the light travels in one Julian year (365.25 days), which comes out to 9.46073 × 10^15 meters. The exact value is 9,460,730,472,580,800 meters. Since c is the maximum speed at which all energy, matter, and information in the Universe can travel, it is the universal physical constant on which the light-year (ly), as one of the astronomical units of length, is based. That means that visible light, as electromagnetic radiation, cannot travel faster than c, and in one Julian year it can traverse a maximum distance of

    d = t * c

where d is the distance in meters, t the time in seconds, and c the speed of light in vacuum in meters per second. If we calculate this distance for an object 4.243 ly away, that comes out as 4.243 × 365.25 × 86,400 s × 299,792,458 m/s, or exactly 40,141,879,395,160,334.4 meters (roughly 40 trillion kilometers or 25 trillion miles). That is the distance the light traveled since it was last reflected off (or, in our case, emitted from, since Proxima Centauri is a red dwarf star) the surface of a celestial object, to be visible 4.243 Julian years later at our observation point, in this case our planet Earth, from where the distance to Proxima Centauri you quoted was measured. The more powerful the telescope, the further into the past we can see, because the light is that much older! This goes the same regardless of the distance of the object you're observing, but astronomy is particularly neat in this regard: we can observe objects that are so distant that we see them from the time when they were still forming. For further reading on other units used to measure faraway objects, you might be interested in reading this question on the parsec.

| {} | {'log_upvote_score': 6, 'links': ['https://astronomy.stackexchange.com/questions/646', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/239/']} | jdg_84311
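Reproducing the arithmetic from the answer above in a few lines of Python:

    c = 299_792_458                  # speed of light in vacuum, m/s
    julian_year = 365.25 * 86_400    # seconds in a Julian year
    light_year = c * julian_year     # 9,460,730,472,580,800 m exactly
    d = 4.243 * light_year           # distance to Proxima Centauri

    print(f"{light_year:,.0f} m per light-year")
    print(f"{d:,.1f} m to Proxima Centauri")   # ~4.0141879e16 m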
stackexchange | llm_judgeable_groundtruth_similarity | 36605 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am quite a beginner in network engineering, so please be nice. I am not sure I understand how the ARP protocol works in some situations. For example, suppose we have the following path, with an empty ARP cache:

1. host1 (source) --> router1 (default)
2. router1 (decision) --> router2
3. router2 (decision) --> host2 (destination)

Is ARP used in 1, 2, or 3, or in all of them? Is ARP used between routers, or only between hosts and routers? And if some router (like router1) is the default gateway for some host, is ARP used then?
Now provide the response and nothing else.
| Address Resolution Protocol (ARP) is used to resolve an IPv4 address (32-bit logical address) to the physical address (48-bit MAC address). Network applications at the application layer use the IPv4 address to communicate with another device, but at the data-link layer the addressing is the MAC address (48-bit physical address).

How it works:

Step 1: If a source device wants to communicate with another device, the source device checks its Address Resolution Protocol (ARP) cache to find out whether it already has a resolved MAC address for the destination device. If it is there, it will use that MAC address for communication.

Step 2: If the ARP resolution is not in the local cache, the source machine will generate an Address Resolution Protocol (ARP) request message. It puts its own data-link layer address as the Sender Hardware Address and its own IPv4 address as the Sender Protocol Address. It fills in the destination IPv4 address as the Target Protocol Address. The Target Hardware Address is left blank, since that is what the machine is trying to find.

Step 3: The source broadcasts the Address Resolution Protocol (ARP) request message to the local network.

Step 4: The message is received by every device on the LAN, since it is a broadcast. Each device compares the Target Protocol Address (the IPv4 address of the machine the source is trying to reach) with its own protocol address (IPv4 address). Those that do not match drop the packet without any action.

Step 5: When the targeted device checks the Target Protocol Address, it finds a match and generates an Address Resolution Protocol (ARP) reply message. It takes the Sender Hardware Address and the Sender Protocol Address fields from the ARP request message and uses these values for the Target Hardware Address and Target Protocol Address of the reply message.

Step 6: The destination device updates its Address Resolution Protocol (ARP) cache, since it will need to contact the sender machine soon.

Step 7: The destination device sends the Address Resolution Protocol (ARP) reply message, and it is NOT a broadcast but a unicast, in order to save network resources.

Step 8: The source machine processes the Address Resolution Protocol (ARP) reply from the destination; it stores the Sender Hardware Address as the layer-2 address of the destination.

Step 9: The source machine updates its Address Resolution Protocol (ARP) cache with the Sender Hardware Address and Sender Protocol Address it received from the ARP reply message.

Machines on a local network can't communicate if they don't know each other's MAC addresses; the internal IP address alone cannot be used for that. If a router wants to communicate with its client or with another router, then it must know the MAC address of that client or of the other router as well.

| {} | {'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/36605', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/32151/']} | jdg_84312
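A minimal sketch of the decision logic in steps 1-3, in plain illustrative Python (this is not a real network stack; the host table and addresses are made up):

    arp_cache = {}  # ip -> mac

    def resolve(ip):
        if ip in arp_cache:                  # step 1: cache hit
            return arp_cache[ip]
        mac = broadcast_arp_request(ip)      # steps 2-3: "who has ip?" broadcast
        if mac is not None:
            arp_cache[ip] = mac              # step 9: learn for next time
        return mac

    def broadcast_arp_request(ip):
        # Placeholder: on a real LAN this is an Ethernet broadcast frame,
        # answered only by the host whose IP matches (steps 4-7).
        known_hosts = {"192.168.1.1": "aa:bb:cc:dd:ee:ff"}  # assumed data
        return known_hosts.get(ip)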
stackexchange | llm_judgeable_groundtruth_similarity | 13782222 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm getting comfortable with the idea of replacing loops with recursion. I'm fiddling around with a pet project, and I wanted to test some text input functionality, so I wrote up a little command line interface that repeatedly asks for input until it receives a specific quit command. It looks something like this:

    getCommandsFromUser = do
        putStrLn "Enter command: "
        keyboardInput <- getLine
        let command = map toLower keyboardInput
        if command == "quit"
            then putStrLn "Good-bye!"
            else do
                -- stuff
                -- more stuff
                putStrLn ("Sending command: " ++ commandURI)
                simpleHTTP $ getRequest commandURI
                getCommandsFromUser

    main = do
        getCommandsFromUser

This works exactly as expected, but coming from a C/Java background it still tickles the deep, dark, unconscious parts of my brain and makes me want to break out in hives, because I can't shake the thought that every single recursive call of getCommandsFromUser is creating a new stack frame. Now, I don't know anything about IO, monads, state, arrows, etc. yet. I'm still working my way through Real World Haskell, I haven't reached that part yet, and some of this code is pattern-matched off things that I found on Google. In addition, I know that the whole point of GHC is that it's a maddeningly optimizing compiler that is designed to do incredible things such as beautiful unrolling of tail-recursive functions and the like. So can somebody explain whether or not this implementation is "correct", and if so, explain to me what is going on behind the scenes that would prevent this program from blowing up if it were put in the hands of an infinite number of monkeys? I know what tail call optimization is. I'm more concerned about how it works in this case, what with the actions and general functional impurity going on. This question wasn't so much based on the idea that I was confused about how Haskell used the stack and that I was expecting it to work like an imperative language; it was based on the fact that I had no idea how Haskell handled the stack and wanted to know what it was doing differently from traditional C-like languages.
Now provide the response and nothing else.
| Don't worry quite so much about the stack. There is nothing fundamental that says function calls have to be implemented using stack frames; that is merely one possible technique for implementing them. Even when you have "the stack", there's certainly nothing that says the stack has to be limited to a small fraction of available memory. That's essentially a heuristic tuned to imperative programming; where you don't use recursion as a problem-solving technique, very deep call stacks tend to result from infinite-recursion bugs, and limiting the stack size to something quite small means such programs die quickly instead of consuming all available memory and swap and then dying. To a functional programmer, having a program "run out" of memory for more function calls when the computer still has gigabytes of RAM available is a ridiculous flaw in the language design. It would be like C limiting loops to some arbitrary number of iterations. So even if a functional language is implementing function calls by using a stack, there would be a strong motivation to avoid using the standard tiny stack we know from C if possible. In fact, Haskell does have a stack which can overflow, but it's not the call stack you're familiar with from C. It is quite possible to write non-tail-recursive functions which infinitely recurse and will consume all available memory without hitting a limit on call depth. The stack Haskell does have is used to track the "pending" values that need to be evaluated a bit more in order to make a decision (I'll go into this a bit more later). You can read in more detail about this kind of stack overflow here. Let's work through an example to see how your code could be evaluated.¹ I'll use an even simpler example than yours though:

    main = do
        input <- getLine
        if input == "quit"
            then putStrLn "Good-bye!"
            else do
                putStrLn $ "You typed: " ++ input
                main

Haskell's evaluation is lazy.² Simplistically, that means it will only bother to evaluate a term when it needs the value of that term to make a decision. For example, if I compute 1 + 1 and then prepend the result of that to the front of a list, it can be left as a "pending" 1 + 1 in the list.³ But if I use if to test whether the result was equal to 3, then Haskell will need to actually do the work of turning 1 + 1 into 2. But if that's all there was to it, nothing would ever happen. The entire program would just be left as a "pending" value. But there is an outer driver that needs to know what IO action main evaluates to, in order to execute it. Back to the example. main is equal to that do block. For IO, a do block makes a big IO action out of a series of smaller ones, which must be executed in order. So the Haskell runtime sees main evaluating to input <- getLine followed by some unevaluated stuff which it doesn't need yet. That's enough to know to read from the keyboard and call the resulting String input. Say I typed "foo". That leaves Haskell with something like the following as its "next" IO action:

    if "foo" == "quit"
        then putStrLn "Good-bye!"
        else do
            putStrLn $ "You typed: " ++ "foo"
            main

Haskell is only looking at the very outermost thing, so this pretty much looks like "if blah blah blah blah...". if isn't something that the IO-executor can do anything with, so it needs to be evaluated to see what it returns. if just evaluates to either the then or the else branch, but to know which decision to make, Haskell is required to evaluate the condition. So we get:

    if False
        then putStrLn "Good-bye!"
        else do
            putStrLn $ "You typed: " ++ "foo"
            main

Which allows the whole if to be reduced to:

    do
        putStrLn $ "You typed: " ++ "foo"
        main

And again, do gives us an IO action which consists of an ordered sequence of sub-actions. So the next thing the IO-executor has to do is the putStrLn $ "You typed: " ++ "foo". But that's not an IO action either (it's an unevaluated computation that should result in one). So we need to evaluate it. The "outermost" part of putStrLn $ "You typed: " ++ "foo" is actually $. Getting rid of the infix operator syntax so you can see it the same way the Haskell runtime does, it would look like this:

    ($) putStrLn ((++) "You typed: " "foo")

But the $ operator is just defined by ($) f x = f x, so substituting the right-hand side immediately gives us:

    putStrLn ((++) "You typed: " "foo")

Now ordinarily we'd evaluate this by substituting in the definition of putStrLn, but it's a "magic" primitive function that isn't directly expressible in Haskell code. So it doesn't actually get evaluated like this; the outer IO-executor simply knows what to do with it. But it requires that the argument of putStrLn be fully evaluated, so we can't leave it as (++) "You typed: " "foo". There are actually a number of steps to fully evaluate that expression, going through the definition of ++ in terms of list operations, but let's just skip over that and say it evaluates to "You typed: foo". So then the IO-executor can execute the putStrLn (writing the text to the console) and move on to the second part of the do block, which is just:

    main

Which is not something that can be immediately executed as an IO action (it's not built in to Haskell like putStrLn and getLine are), so we evaluate it by using the right-hand side of the definition of main to get:

    do
        input <- getLine
        if input == "quit"
            then putStrLn "Good-bye!"
            else do
                putStrLn $ "You typed: " ++ input
                main

And I'm sure you can see where the rest is going. Note that I haven't said anything about any kind of stack. All of this is just building up a data structure describing the IO action that is main, so the outer driver can execute it. It's not even a particularly special data structure; from the point of view of the evaluation system it's just like any other data structure, and so there are no arbitrary limitations on its size. In this case, lazy evaluation means the generation of this data structure is interleaved with its consumption (and the generation of later parts of it can depend on what happened as a result of consuming earlier parts of it!), and so this program can run in a constant amount of space. But as noted by shachaf's comment on the question, this isn't really an optimization for removing unnecessary stack frames; it's just what happens automatically with lazy evaluation. So I hope that was sufficiently helpful for you to see what's going on. Basically, by the time Haskell gets to evaluating the recursive call to getCommandsFromUser, it's already done with all of the data generated within the previous iteration, and so it gets garbage collected. So you can keep running this program indefinitely without needing more than a fixed amount of memory. This is just a straightforward consequence of lazy evaluation, and isn't substantially different when IO is involved.

¹ I'm going to disclaim up front that I do not know much in detail about the actual current implementation of Haskell. I do however know general techniques for implementing lazy pure languages like Haskell. I'm also going to try to avoid diving too much into the details, and just explain how things work in an intuitive way. So this account may well be incorrect in some of the fine details of what's actually going on inside your computer, but it should show you how these things can work.

² The language spec technically just says that evaluation should be "non-strict". The evaluation I'm going to describe, which is known as "lazy" informally, is really only one possible "non-strict" evaluation strategy, but it's what you get in practice.

³ And the new list can in fact be left as a "pending" result of (1 + 1) : originalList until someone needs to know whether or not it's empty.

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13782222', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/411216/']} | jdg_84313
stackexchange | llm_judgeable_groundtruth_similarity | 47652893 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I was testing with some code snippets to wrap my head around the concept of inheritance, when I came across this - for me - strange phenomenon. So first I was testing this simple code:

    public class Main {
        public static void main(String[] args) {
            Bar bar = new Bar();
            System.out.println("age = " + bar.age);
            bar.test();
        }
    }

    class Foo {
        int age = 2;
        void test() {
            System.out.println("TEST FOO");
        }
    }

    class Bar extends Foo {
        int age = 4;
        void test() {
            System.out.println("TEST BAR");
        }
    }

And the output was as I expected:

    age = 4
    TEST BAR

Then I made a small change to line 3, where I changed the type Bar to Foo like this:

    Foo bar = new Bar();

Now when I run the code, it gives me an output I believe is weird:

    age = 2
    TEST BAR

How does it happen that the code bar.age is now using the age member of the Foo class (which makes sense), while bar.test(); still uses the method of the Bar class (and not from Foo, as that is what the type is)?
Now provide the response and nothing else.
| The age in Bar shadows the age in Foo. Furthermore, fields are not polymorphic (unlike methods). So when you write Foo bar = new Bar();, the static type of bar is used when accessing the field age, which returns 2, while the dynamic type of bar is used when deciding which override of test() to call, and that is Bar. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47652893', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4185106/']} | jdg_84314
stackexchange | llm_judgeable_groundtruth_similarity | 4712655 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to change our build scripts from using SourceSafe to TFS without using MSBuild (yet). One hiccup is that the workspace directory is renamed and archived by the scripts, which makes TFS think it doesn't need to get any files. Even with the /force flag it just gets the directories without getting the source files. I am currently using:

    TF.exe get "Product/Main/Project1" /force /recursive /noprompt

To save me managing workspaces in the scripts or using intermediate directories, does anyone know of a command that can get directories and code without needing a workspace?
Now provide the response and nothing else.
| It's not possible to run a tf get without a workspace. The reason is that the server needs to know the mapping between the server paths and the local paths. If you are working with a large number of files, it is not a good idea to:

- Create and delete a new workspace every time, or
- Create a new workspace (and then never delete it)

The reason for this is that every time you do a Get, the server keeps track of which files, at which versions, were downloaded to which workspace. If you never clean up these workspaces, then the table that stores this information will grow over time. Additionally, if you are creating and deleting a workspace all the time, the server has to write all these rows, then delete them when you are done. This is unnecessary. You really should try to reuse the same workspace each time. If you do, the server is very efficient about only sending you files that have changed since you last downloaded them. Even if your build is moving from one branch to another, you can use tf get /remap, which is sometimes more efficient if the branches share common files. Although it doesn't solve your problem, it is possible to list files and download files without a workspace. To list files:

    tf dir $/Product/Main/Project1 /R

To download a file:

    tf view $/Product/Main/Project1/file.cs

With a creative batch file, you can string these two together with a FOR command. However, I would recommend trying to solve your workspace problem first, since that is the way TFS was intended to be used.

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4712655', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17520/']} | jdg_84315
stackexchange | llm_judgeable_groundtruth_similarity | 48534 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: Does the force of kinetic friction increase with the relative speed of the objects involved? If not, why not? Layman alert...I last did physics in high school, and am just trying to understand something my son is working on. To keep a box moving across the floor at a constant velocity I need to apply a force equal and opposite to that of the kinetic friction force working on it. If I start applying a force greater than the kinetic friction force, the box will speed up. Will it keep speeding up indefinitely, or will the kinetic friction force I need to overcome increase as the box gets faster?
Now provide the response and nothing else.
| No for "dry", yes for "wet". For "dry friction", such as a box on a floor, it is relatively constant. Why is this? Most objects are microscopically rough with "peaks" that move against each-other. As more pressing force is applied, the peaks deform more and the true contact area is increases proportionally. The surfaces adhere forming a bond that will take a certain amount of shear force to break. Since the molecules are moving much faster ~300m/s than the box (due to thermal vibrations) velocity will not affect how many molecules adhere (with the exception of "static friction"). However, static friction is sometimes be higher, in one explanation because the peaks have time to settle and interlock with each-other. Neglecting static friction, force is constant . The simplest case in wet friction is two objects separated by a film of water. In this case there is zero static friction, as the thermal energy is sufficient to disrupt any static, shear-bearing water molecule structure. However, water molecules still push and pull on each-other, transferring momentum from the top to the bottom. The rate of momentum transfer i.e. "friction" grows in proportion to how much momentum is available, which in turn grows with velocity. Thus, force is linear with velocity. However, interesting things happen when the bulk mass of the water gets important. In this case, bumps, etc on the surface push on the water creating currents that can ram into bumps on the other surface. If you double the velocity, your bumps will push twice as much water twice as fast for 4 times the force; force is quadratic to velocity. You can plug in formulas for the linear case (which depends on viscosity) and quadratic case (which depends on density) to see which one "wins" (this is roughly the Reynolds number), if there is no clear winner the answer is complex (see the Moody diagram). Nevertheless these are approximations and the real answer could fail to follow these "rules". | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/48534', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/17573/']} | jdg_84316 |
stackexchange | llm_judgeable_groundtruth_similarity | 649412 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a prime $p$ and an integer $L$ such that $p \gg L \gg 1$, and I need to compute modular inverses of the numbers $1, 2, \ldots, L$ (modulo $p$). Obviously I could apply the extended gcd algorithm to each number in turn; however, I was wondering if there is a faster way to accomplish this task?
Now provide the response and nothing else.
| One useful optimization is to compute multiple inverses at a time e.g. compute $\,d = 1/abc,\,$ then $1/a = bc/abc = bcd.\,$ Similarly, $\,1/b = acd,\,\ 1/c = abd.$ This requires only one inversion, trading off other inversions for simpler multiplications. E.g. to invert $\,9,10,11$ mod $101$ we invert $\,9(10)11 \equiv 9(9) \equiv -20\!:\,\ {-}1/20 \equiv 100/20\equiv \color{#c00}5,\,$ so $$\begin{eqnarray} &&1/9\ \equiv 10(11)\color{#c00}5\equiv\quad\ 9(5)\equiv\ \ \ 45\\ &&1/10\equiv \,9(11)\color{#c00}5\equiv\ \ {-}2(5)\equiv -10\\ &&1/11\equiv\, 9(10)\color{#c00}5\equiv {-}11(5)\equiv -55\end{eqnarray}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/649412', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/66802/']} | jdg_84317 |
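The trick above (often called batch or Montgomery inversion) is straightforward to implement; here is a short Python version that uses pow(x, -1, p) for the single real inversion (available since Python 3.8):

    def batch_inverse(nums, p):
        n = len(nums)
        prefix = [1] * (n + 1)
        for i, x in enumerate(nums):
            prefix[i + 1] = prefix[i] * x % p   # prefix[i+1] = a0 * ... * ai
        inv_total = pow(prefix[n], -1, p)       # the one real inversion
        out = [0] * n
        for i in range(n - 1, -1, -1):
            out[i] = prefix[i] * inv_total % p  # 1/ai = (a0..a(i-1)) / (a0..ai)
            inv_total = inv_total * nums[i] % p # peel ai off the running inverse
        return out

    print(batch_inverse([9, 10, 11], 101))  # [45, 91, 46], i.e. 45, -10, -55 mod 101

This trades n inversions for one inversion plus about 3n multiplications, matching the worked example above.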
stackexchange | llm_judgeable_groundtruth_similarity | 15933617 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What I want to achieve is to have a TextView that takes about 70% of the screen, and the remaining 30% is divided equally between 2 buttons set side by side (horizontally). The only way I achieved this was using the code below; however, the XML editor complains about the nested weights. Although it works, I understand this is not good practice. How should I do it instead?

    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:tools="http://schemas.android.com/tools"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical" >

        <TextView
            android:id="@+id/news"
            android:textSize="14sp"
            android:textStyle="bold"
            android:textColor="#0033CC"
            android:layout_height="0px"
            android:layout_width="fill_parent"
            android:layout_weight="70"
            android:gravity="top"/>

        <LinearLayout
            android:layout_width="fill_parent"
            android:layout_height="0px"
            android:orientation="horizontal"
            android:layout_weight="30">

            <Button
                android:id="@+id/new_order_button"
                android:layout_width="0px"
                android:layout_height="wrap_content"
                android:layout_weight="1"
                android:text="@string/new_order" />

            <Button
                android:id="@+id/previous_orders_button"
                android:layout_width="0px"
                android:layout_height="wrap_content"
                android:layout_weight="1"
                android:text="@string/previous_orders" />
        </LinearLayout>
    </LinearLayout>

EDIT: UPDATED CODE... warning still present. Thanks :)
Now provide the response and nothing else.
| perlop says this If you don't want it to test the right operand until the next evaluation, as in sed, just use three dots ("...") instead of two. In all other regards, "..." behaves just like ".." does. So with three dots your code won't notice the closing </title> if it appears on the same line as the opening tag. However, the problem is that you are testing for <title> in $line and </title> in $_ . What you mean is if ($line =~ /<title>/ .. $line =~ /<\/title>/) { ... } But please don't do that ! You may think it works as it stands but you are clearly running erroneous code already. Regexes are the wrong tool for processing XML: please use XML::Twig or XML::LibXML instead. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15933617', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1206610/']} | jdg_84318 |
stackexchange | llm_judgeable_groundtruth_similarity | 68327 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I created a new Button object but did not specify the command option upon creation. Is there a way in Tkinter to change the command (onclick) function after the object has been created?
Now provide the response and nothing else.
| Though Eli Courtwright's program will work fine¹, what you really seem to want is just a way to reconfigure, after instantiation, any attribute which you could have set when you instantiated². How you do so is by way of the configure() method.

    from Tkinter import Tk, Button

    def goodbye_world():
        print "Goodbye World!\nWait, I changed my mind!"
        button.configure(text="Hello World!", command=hello_world)

    def hello_world():
        print "Hello World!\nWait, I changed my mind!"
        button.configure(text="Goodbye World!", command=goodbye_world)

    root = Tk()
    button = Button(root, text="Hello World!", command=hello_world)
    button.pack()
    root.mainloop()

¹ "fine" if you use only the mouse; if you care about tabbing and using [Space] or [Enter] on buttons, then you will have to implement (duplicating existing code) keypress events too. Setting the command option through .configure is much easier.

² the only attribute that can't change after instantiation is name.

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/68327', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/680/']} | jdg_84319
stackexchange | llm_judgeable_groundtruth_similarity | 7992559 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the case of a single-element tuple, the trailing comma is required:

    a = ('foo',)

What about a tuple with multiple elements? It seems that whether the trailing comma exists or not, they are both valid. Is this correct? Having a trailing comma is easier for editing, in my opinion. Is that a bad coding style?

    a = ('foo1', 'foo2')
    b = ('foo1', 'foo2',)
Now provide the response and nothing else.
| It is only required for single-item tuples, to disambiguate defining a tuple from an expression surrounded by parentheses:

    (1)   # the number 1 (the parentheses are wrapping the expression `1`)
    (1,)  # a 1-tuple holding a number 1

For more than one item, it is no longer necessary, since it is perfectly clear it is a tuple. However, the trailing comma is allowed, to make defining them using multiple lines easier. You could add to the end or rearrange items without breaking the syntax because you left out a comma by accident. e.g.,

    someBigTuple = (
        0,
        1,
        2,
        3,
        4,
        5,
        6,
        7,
        8,
        9,
        10,
        # ...
        10000000000,
    )

Note that this applies to other collections (e.g., lists and dictionaries) too, and not just tuples.

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7992559', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/248430/']} | jdg_84320
stackexchange | llm_judgeable_groundtruth_similarity | 16075997 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using the below code to check for an element on my web page:

    private boolean isElementPresent(By by) {
        try {
            driver.findElement(by);
            return true;
        } catch (NoSuchElementException e) {
            return false;
        } catch (Exception e) {
            return false;
        }
    }

I need to check in my program whether a particular region appears in the result, as below:

    isElementPresent(By.xpath(".//*[@id='header']"));

If the element is present, this function completes quickly, but if it is not present then it runs for a very long time. Could someone please help me resolve this issue so that this check can be performed quickly?
Now provide the response and nothing else.
| Here you are missing something: that is why it waits when there is no element. findElement will wait for an element for the implicitly specified time, so you need to set that time to zero inside this method:

    boolean isElementPresent(WebDriver driver, By by) {
        driver.manage().timeouts().implicitlyWait(0, TimeUnit.SECONDS);
        try {
            driver.findElement(by);
            return true;
        } catch (NoSuchElementException e) {
            return false;
        } finally {
            driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
        }
    }

There are 4 important things going on here. In order:

1. Setting the implicit wait to 0 so that WebDriver does not implicitly wait.
2. Returning true when the element is found.
3. Catching the NoSuchElementException and returning false when we discover that the element is not present, instead of stopping the test with an exception.
4. Setting the implicit wait back to 30 after the action is complete, so that WebDriver will implicitly wait in future.

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16075997', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/419534/']} | jdg_84321
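The same idea carries over to the Python Selenium bindings; here is a sketch, where the 0- and 30-second timeouts simply mirror the Java example above:

    from selenium.common.exceptions import NoSuchElementException

    def is_element_present(driver, by, value):
        driver.implicitly_wait(0)          # don't wait around for missing elements
        try:
            driver.find_element(by, value)
            return True
        except NoSuchElementException:
            return False
        finally:
            driver.implicitly_wait(30)     # restore the normal implicit wait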
stackexchange | llm_judgeable_groundtruth_similarity | 115728 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For instance, it is said that the practical current limit of the TO-220 package is 75 A, but I frequently see MOSFET devices for which the specified maximum continuous drain current is above 75 A. (Examples) Is there any practical difference between two TO-220 devices such that one of them has a maximum continuous current of 75 A and the other has a limit of 150 A, while all other specifications are the same? If there is no difference, why do they bother producing high-current devices in the TO-220 package and advertise them with current ratings above the practical limit?
Now provide the response and nothing else.
| Usually, current rating is correlated inversely with the ON-Resistance of the device. If the internal device has a lower ON-Resistance, then less power will be dissipated within the device at a given current. This will allow the device overall to run cooler. The cooler a device runs the lower the ON-Resistance remains and the less power is wasted in a device. Higher rated devices will likely last longer, run cooler and run more efficiently under the same current conditions even if the package doesn't allow the silicon to be used to the full potential. Another benefit would be the ability to run at higher pulse currents in a higher rated device as the 75 Amps stated is a continuous current rating rather than a pulse current rating. Another reason this may be done is because it is cheaper to make one silicon device and package it in many different cases. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/115728', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/5542/']} | jdg_84322 |
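A back-of-envelope calculation makes the answer's point about conduction loss concrete; the two on-resistance values below are made up for the example:

    I = 50.0                        # drain current, A
    for rds_on in (0.008, 0.004):   # ohms: higher- vs lower-resistance die (assumed)
        p = I**2 * rds_on           # conduction loss, P = I^2 * R
        print(f"R_DS(on) = {rds_on*1000:.0f} mOhm -> {p:.0f} W dissipated")

At the same 50 A, the lower-R_DS(on) part dissipates 10 W instead of 20 W, so it runs cooler in the identical TO-220 package.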
stackexchange | llm_judgeable_groundtruth_similarity | 14657463 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I make it so Windows will always show Helvetica instead of Arial? Is this even possible? I know it automatically substitutes it, but is there any way to do this? I know that I could use Font Squirrel, but apparently that is illegal. Is there another solution?
Now provide the response and nothing else.
| You cannot. Even embedding a font would not force anything, since settings in the user’s system may prevent the use of downloadable fonts. Besides, embedding Helvetica is illegal, unless permitted by its copyright holder. If you declare font-family: Helvetica , then Helvetica will be used only if the user’s system has Helvetica installed, and Windows systems usually don’t. Due to Windows settings, when Helvetica is not installed, Arial will be used instead. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14657463', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/600322/']} | jdg_84323 |
stackexchange | llm_judgeable_groundtruth_similarity | 24626 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I often want to make some quick date calculations, such as:

- What is the difference between these two dates?
- What is the date n weeks after this other date?

I usually open a calendar and count the days, but I think there should be a program/script that I can use to do these kinds of calculations. Any suggestions?
Now provide the response and nothing else.
| The "n weeks after a date" is easy with GNU date(1): $ date -d 'now + 3 weeks'Tue Dec 6 23:58:04 EST 2011$ date -d 'Aug 4 + 3 weeks'Thu Aug 25 00:00:00 EST 2011$ date -d 'Jan 1 1982 + 11 weeks'Fri Mar 19 00:00:00 EST 1982 I don't know of a simple way to calculate the difference between two dates, but you can wrap a little logic around date(1) with a shell function. datediff() { d1=$(date -d "$1" +%s) d2=$(date -d "$2" +%s) echo $(( (d1 - d2) / 86400 )) days}$ datediff '1 Nov' '1 Aug' # Note: answer should be 92 days but in my timezone, DST starts between the dates.91 days Swap d1 and d2 if you want the date calculation the other way, or get a bit fancier to make it not matter. Furthermore, in case there is a non-DST to DST transition in the interval, one of the days will be only 23 hours long; you can compensate by adding ½ day to the sum. echo $(( (((d1-d2) > 0 ? (d1-d2) : (d2-d1)) + 43200) / 86400 )) days | {} | {'log_upvote_score': 8, 'links': ['https://unix.stackexchange.com/questions/24626', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/10788/']} | jdg_84324 |
stackexchange | llm_judgeable_groundtruth_similarity | 1643636 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $f:\mathbb{C} \rightarrow \mathbb{C}$ be an entire function. Suppose that there are $M$, $r>0$ and $n\in \mathbb{N}$ such that $\left|f(z)\right|<M\left|z\right|^n$ for all $z \in \mathbb{C}$ with $\left|z\right|\geq r$. Show that $f$ is a polynomial of degree at most $n$. Remark: I tried to follow the proof of Liouville's Theorem, but I have complications with the condition $\left|z\right|\geq r$.
Now provide the response and nothing else.
| Well, first of all, we know that, as $f$ is an entire function, we can write it as a power series around zero:$$f(z)=\sum\limits_{m\geq 0}a_mz^m,\:\:\:\forall z\in\mathbb{C},$$where, by Taylor, $a_m=f^{(m)}(0)/m!$. Now, if we apply Cauchy's inequality on a circle of radius $R\geq r$ (so that the hypothesis applies), we have that$$|f^{(m)}(0)|\leq m!\frac{\sup\limits_{|z|=R}|f(z)|}{R^m}\leq m!\frac{MR^n}{R^m}=Mm!R^{n-m},\:\:\:\forall m\in\mathbb{N},\: R\geq r.$$Note that if $m > n$, then $0\leq |f^{(m)}(0)|\leq Mm!R^{n-m}\to 0$ when $R\to\infty$. So, we know that $a_m=f^{(m)}(0)/m!=0$ for all $m>n$ and$$f(z)=\sum\limits_{m=0}^na_mz^m,\:\: \forall z\in\mathbb{C}.$$So we have proved that $f$ is a polynomial of degree at most $n$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1643636', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/165639/']} | jdg_84325
stackexchange | llm_judgeable_groundtruth_similarity | 31357558 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I have this kind of code:

    $array = [
        'a' => [
            'b' => [
                'c' => 'some value',
            ],
        ],
    ];

    $array['a']['b']['c'] = 'new value';

Of course this works, but what I want is to update this 'c' key using a variable, something like this:

    $keys = '[a][b][c]';
    $array{$keys} = 'new value';

But the keys are treated as a string, and this is what I get:

    $array['[a][b][c]'] = 'new value';

So I would like some help to show me the right way to make this work without using eval(). By the way, there can be any number of array nests, so something like this is not a good answer:

    $key1 = 'a';
    $key2 = 'b';
    $key3 = 'c';
    $array[$key1][$key2][$key3] = 'new value';
Now provide the response and nothing else.
| It isn't the best way to define your keys, but:

    $array = [];
    $keys = '[a][b][c]';
    $value = 'HELLO WORLD';

    $keys = explode('][', trim($keys, '[]'));
    $reference = &$array;
    foreach ($keys as $key) {
        if (!array_key_exists($key, $reference)) {
            $reference[$key] = [];
        }
        $reference = &$reference[$key];
    }
    $reference = $value;
    unset($reference);

    var_dump($array);

If you have to define a sequence of keys in a string like this, then it's simpler just to use a simple separator that can be exploded rather than needing to trim as well to build an array of individual keys, so something simpler like a.b.c would be easier to work with than [a][b][c]. Demo

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31357558', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3338750/']} | jdg_84326
stackexchange | llm_judgeable_groundtruth_similarity | 986616 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to tell if a thread has exited normally or because of an exception?
Now provide the response and nothing else.
| As mentioned, a wrapper around the Thread class could catch that state. Here's an example:

    >>> from threading import Thread
    >>> class MyThread(Thread):
            def run(self):
                try:
                    Thread.run(self)
                except Exception as err:
                    self.err = err
                    pass  # or raise err
                else:
                    self.err = None

    >>> mt = MyThread(target=divmod, args=(3, 2))
    >>> mt.start()
    >>> mt.join()
    >>> mt.err
    >>> mt = MyThread(target=divmod, args=(3, 0))
    >>> mt.start()
    >>> mt.join()
    >>> mt.err
    ZeroDivisionError('integer division or modulo by zero',)

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/986616', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10289/']} | jdg_84327
stackexchange | llm_judgeable_groundtruth_similarity | 223027 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I work in embedded systems. Right now, my organization has two full-time programmers and two occasional programmers. It's rare that the same project is worked on by two programmers. All code is stored on a network drive. There are folders for the current production code, another folder for the source for every released version throughout history, and a third folder for active work. We have an automated system (Mercurial being abused) that makes backups of every changed code file in Working every fifteen minutes, so we can revert to previous states. My question is this: is it worth the trouble to set up a formal versioning system in an environment like this? Why or why not?
Now provide the response and nothing else.
| As you describe it, you already have some sort of version control, though currently there are some issues with it compared to a typical version control system:

1. An intentional commit in version control indicates that the developer strongly believes that the current state of the system would build successfully. (There are exceptions, as suggested by Jacobm001's comment. Indeed, several approaches are possible, and some teams would prefer not trying to make every commit buildable. One approach is to have nightly builds, given that during the day, the system may receive several commits which don't build.) Since you don't have commits, your system will often end up in a state which doesn't build. This prevents you from setting up Continuous Integration. By the way, a distributed version control system has a benefit here: one can do local commits as much as needed while bringing the system to a state where it cannot build, and then do a public commit when the system is able to build.
2. Version control lets you enforce some rules on commit. For example, for Python files, PEP 8 can be run, preventing the commit if the committed files are not compliant.
3. Blame is extremely hard to do with your approach. Exploring what changes were made, when, and by whom is hard too. Version control logs, the list of changed files, and a diff are an excellent way to find exactly what was done.
4. Any merge would be a pain (or maybe developers wouldn't even see that their colleagues were modifying the files before they save their changes). You stated that: "It's rare that the same project is worked on by two programmers." Rare doesn't mean never, so merges would occur sooner or later.
5. A backup every fifteen minutes means that developers may lose up to fifteen minutes of work. This is always problematic: it's hard to remember exactly what changes were made in the meantime.
6. With source control you can have meaningful commit messages. With backups, all you know is that it was x minutes since the last backup.
7. A real version control system ensures that you can always revert to the previous commit; this is a huge advantage. Reverting a backup using your system would be slightly more difficult than doing the one-click rollback you can do in most version control systems.
8. Also, in your system, branching is impossible.

There's a better way to do version control, and you should certainly consider changing the way you currently do it. Especially since, as Eric Lippert mentions, your current system is probably a lot more painful to maintain than any common version control system is. Having a Git or Mercurial repository on a network drive is pretty easy, for example. Note: even if you switch to a common version control system, you should still have a daily/weekly backup of the repositories. If you're using a distributed system it's less important, though, since then every developer's working copy is also a backup.

| {} | {'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/223027', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/101501/']} | jdg_84328
stackexchange | llm_judgeable_groundtruth_similarity | 68849313 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I get this error when I run anchor deploy:

    Deploying workspace: http://127.0.0.1:8899
    Upgrade authority: /home/<user>/.config/solana/id.json
    Deploying program "faucet"...
    Program path: /home/<user>/Workspace/<project_path>/target/deploy/xxx.so...
    Error: RPC request error: cluster version query failed: error sending request for url (http://127.0.0.1:8899/): error trying to connect: tcp connect error: Connection refused (os error 111)
    There was a problem deploying: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "" }.

Before deploying, I had already run the following to change the cluster config:

    solana config set --url https://api.devnet.solana.com

How can I solve the problem?
Now provide the response and nothing else.
| The error says that while you are trying to deploy to your local network, no local validator is up and running. So what you have to do is simply open a new terminal window and run: solana-keygen new Save the seed phrase and other relevant details somewhere secure, and then run: solana-test-validator Now, in the separate terminal window where you had earlier tried to deploy, run anchor deploy again and it should deploy successfully. If instead you are deploying to another network, for example devnet, you will want to airdrop some SOL into the account generated by solana-keygen new, using the command: solana airdrop 1 <RECIPIENT_ACCOUNT_ADDRESS> --url https://api.devnet.solana.com Then add the cluster flag to your deploy command: anchor deploy --provider.cluster devnet | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/68849313', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/16672541/']} | jdg_84329 |
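Condensing the steps above into one shell session (the validator must stay running in its own terminal, and the recipient address is a placeholder):

# Terminal 1: create a keypair once, then start a local validator
solana-keygen new        # save the seed phrase somewhere secure
solana-test-validator

# Terminal 2: deploy to the local validator
anchor deploy

# Alternatively, for devnet: fund the wallet, then deploy with a cluster flag
solana airdrop 1 <RECIPIENT_ACCOUNT_ADDRESS> --url https://api.devnet.solana.com
anchor deploy --provider.cluster devnet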
stackexchange | llm_judgeable_groundtruth_similarity | 55061 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a small number of employees who use a company computer, but these people aren't very tech-savvy. They use an email client and a messaging client. I'm pretty sure they wouldn't click on a .exe or .zip file in an email without thinking, and I know that's one area of concern. However, I'm thinking about images. In fact, regardless of how capable a person is with technology, I believe that attaching things (code or anything else) to an image can be a security risk. What can be attached to images to harm someone? I believe that images can pose a security risk because they 'automatically execute' or something. There are so many ways that images can be received by a computer (or a phone or tablet, of course):

- email
- iMessage (or any other messaging app)
- someone right-clicking and saving an image from a web page
- just viewing a web page, which of course downloads the image to cache

What precautions do I need to take regarding the above four things? Can someone just attach some code to an image and have it execute? What do I need to do to prevent images being used against my computers? I'm guessing you couldn't just attach code to an image and iMessage it to someone's iPhone. What about Android?
Now provide the response and nothing else.
| The other answers mostly talk about attaching arbitrary code to images via steganographic techniques, but that's not very interesting, since it requires the user to be complicit in extracting and executing it. The user could just execute malicious code directly if that were their goal. What you are really interested in is whether there is a possibility of unexpected, arbitrary code execution when merely viewing an image. And yes, there is such a possibility: an attacker can construct a malicious image (or something that claims to be an image) that targets specific image-viewing implementations with known flaws. For example, if an image viewer allocates a buffer and computes the necessary buffer size from a naive width * height * bytes_per_pixel calculation, a malicious image can report dimensions large enough to make that calculation overflow, causing the viewer to allocate a smaller buffer than expected and allowing a buffer overflow attack when the pixel data is read into it. Specific examples:

- http://technet.microsoft.com/en-us/security/bulletin/ms05-009
- http://technet.microsoft.com/en-us/security/bulletin/ms04-028
- http://www.adobe.com/support/security/bulletins/apsb11-22.html
- https://www.mozilla.org/security/announce/2012/mfsa2012-92.html
- http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-1205
- http://en.wikipedia.org/wiki/Windows_Metafile_vulnerability

In general, these sorts of things are difficult to protect against. Some things you can do:

- Keep your systems and applications updated.
- Enable DEP.
- Enable ASLR if possible.
- Avoid running programs with administrative privileges.
- On Windows, Microsoft's EMET can also provide some protection. | {} | {'log_upvote_score': 7, 'links': ['https://security.stackexchange.com/questions/55061', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/32062/']} | jdg_84330 |
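To make the width * height * bytes_per_pixel failure mode concrete, here is a small hedged sketch in C++ of a decoder-side guard. The Header struct and the 64 MiB cap are invented for illustration; real image formats and real decoders differ:

#include <cstdint>
#include <vector>

// Hypothetical image header; real formats lay this out differently.
struct Header { std::uint32_t width, height, bytes_per_pixel; };

// A naive decoder computes width * height * bytes_per_pixel in 32 bits;
// a crafted header makes that product wrap around, the buffer comes out
// far too small, and copying the pixel data then overflows it.
std::vector<unsigned char> allocPixels(const Header& h) {
    if (h.bytes_per_pixel == 0 || h.bytes_per_pixel > 8) return {};
    const std::uint64_t need =
        std::uint64_t{h.width} * h.height * h.bytes_per_pixel;  // 64-bit math
    constexpr std::uint64_t kMaxPixelBytes = 64ull << 20;       // 64 MiB sanity cap
    if (need == 0 || need > kMaxPixelBytes) return {};
    return std::vector<unsigned char>(static_cast<std::size_t>(need));
}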
stackexchange | llm_judgeable_groundtruth_similarity | 27832984 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Kubernetes has master and minion nodes. Will (can) Kubernetes run specified Docker containers on the master node(s)? I guess another way of saying it is: can a master also be a minion? Thanks for any assistance.
Now provide the response and nothing else.
| Update 2015-08-06: As of PR #12349 (available in 1.0.3, and in 1.1 when it ships), the master node is now one of the available nodes in the cluster, and you can schedule pods onto it just like any other node. A Docker container can only be scheduled onto a Kubernetes node running a kubelet (what you refer to as a minion). There is nothing preventing you from creating a cluster where the same machine (physical or virtual) runs both the Kubernetes master software and a kubelet, but the current cluster provisioning scripts separate the master onto a distinct machine. This is going to change significantly when Issue #6087 is implemented. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27832984', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1935088/']} | jdg_84331 |
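On current clusters the control-plane node is typically excluded from regular scheduling by a taint rather than by the old master/minion split. A hedged one-liner to lift that restriction (the taint key has changed across Kubernetes releases, so check kubectl describe node on your cluster first):

# Remove the control-plane NoSchedule taint so ordinary pods can land there.
# (Older clusters use the key node-role.kubernetes.io/master instead.)
kubectl taint nodes <master-node-name> node-role.kubernetes.io/control-plane:NoSchedule-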
stackexchange | llm_judgeable_groundtruth_similarity | 19450222 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How would one transform Future[Option[X]] into Option[Future[X]]?

val futOpt: Future[Option[Int]] = future(Some(1))
val optFut: Option[Future[Int]] = ?

Update: This is a follow-up to this question. I suppose I'm trying to get a grasp on elegantly transforming nested futures. I'm trying to achieve with Options what can be done with Sequences, where you turn a Future[Seq[Future[Seq[X]]]] into Future[Future[Seq[Seq[X]]]] and then flatMap the double layers. As Ionut has clarified, I phrased the question in flipped order; it was supposed to be Option[Future[X]] -> Future[Option[X]].
Now provide the response and nothing else.
| Unfortunately, this isn't possible without losing the non-blocking properties of the computation. It's pretty simple if you think about it: you don't know whether the result of the computation is None or Some until that Future has completed, so you have to Await on it. At that point, it makes no sense to have a Future anymore; you can simply return the Option[X], as the Future has already completed. Take a look at the code below. It always returns Future.successful, which does no computation and just wraps o in a Future for no good reason.

def transform[A](f: Future[Option[A]]): Option[Future[A]] =
  Await.result(f, 2.seconds).map(o => Future.successful(o))

So, if blocking makes sense in your context, you're better off using this:

def transform[A](f: Future[Option[A]]): Option[A] =
  Await.result(f, 2.seconds)

Response to comments:

def transform[A](o: Option[Future[A]]): Future[Option[A]] =
  o.map(f => f.map(Option(_))).getOrElse(Future.successful(None)) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19450222', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1714997/']} | jdg_84332 |
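For completeness, a small runnable sketch of the non-blocking direction (the 'Response to comments' snippet) in use; the Await calls here are only to print the demo's results, mirroring the answer's 2-second timeout:

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object TransformDemo extends App {
  def transform[A](o: Option[Future[A]]): Future[Option[A]] =
    o.map(_.map(Option(_))).getOrElse(Future.successful(None))

  val some: Option[Future[Int]] = Some(Future(21 * 2))
  val none: Option[Future[Int]] = None

  // Blocking only to observe the demo's output.
  println(Await.result(transform(some), 2.seconds)) // Some(42)
  println(Await.result(transform(none), 2.seconds)) // None
}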
stackexchange | llm_judgeable_groundtruth_similarity | 2898638 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Disclaimer: It is very possible that the notation I will be using is not formally correct. I am not a mathematician, and I'm just trying to write down this idea in a reasonable way. I wasn't able to find literature addressing this topic. As an example, take the family of univariate normal distributions with the sum operation. Let's see if it satisfies the four defining properties of a group:

- Closure: Given $X \sim \mathcal{N}(\mu_1,\,\sigma_1^{2})$ and $Y \sim \mathcal{N}(\mu_2,\,\sigma_2^{2})$, $Z = X+Y$ is also normally distributed, with $Z \sim \mathcal{N}(\mu_1+\mu_2,\,\sigma_1^{2}+\sigma_2^{2})$.
- Associativity: It is clear that $(X + Y) + Z = X + (Y + Z)$ in distribution, with $X, Y, Z$ each independently and normally distributed with given means and variances.
- Identity element: The distribution $\mathcal{N}(0,\,0)$ is the identity element, because any $I \sim \mathcal{N}(0,\,0)$ satisfies the property that $Y = X + I = I + X$ is distributed as $X$, with $X \sim \mathcal{N}(\mu_1,\,\sigma_1^{2})$.
- Inverse element: In this case, for any $X \sim \mathcal{N}(\mu_1,\,\sigma_1^{2})$, there exists $-X$ such that $X-X = -X+X = I$, with $I \sim \mathcal{N}(0,\,0)$.

The four properties seem to be satisfied; therefore, couldn't we say that the family of univariate normal distributions with the sum operation constitutes a group? And the same with several other families of distributions and different operations such as sum, product, ratio, etc. Edit: As pointed out below, I should have added that $X$ and $Y$ have to be independent in the first point; otherwise the expression for the variance of the sum is not correct.
Now provide the response and nothing else.
| > Given $X \sim \mathcal{N}(\mu_1,\,\sigma_1^2)$ and $Y \sim \mathcal{N}(\mu_2,\,\sigma_2^2)$, $Z = X+Y$ is also normally distributed, with $Z \sim \mathcal{N}(\mu_1+\mu_2,\,\sigma_1^2+\sigma_2^2)$

You neglected independence. As it stands, this is not true; it is true if $X, Y$ are independent. Thus your group operation corresponds to addition of independent random variables. There is no inverse element, because there is no probability distribution such that, if $Y$ has that distribution, $X$ is as above, and $X, Y$ are independent, then $X+Y \sim \mathcal N(0,0)$. In particular, $X$ and $-X$ are not independent. So this is a semigroup but not a group. Variances are always nonnegative, and the variance of the sum of two independent random variables is at least as big as either of their variances, so you won't get back to $0$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2898638', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/360787/']} | jdg_84333 |
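The variance obstruction in one displayed line (assuming, as above, that $X$ and $Y$ are independent):

$$\operatorname{Var}(X+Y) \;=\; \sigma_1^{2} + \sigma_2^{2} \;\ge\; \sigma_1^{2} \;>\; 0 \quad \text{whenever } \sigma_1^{2} > 0,$$

so no independent summand can shrink the variance back to $0$, which is exactly why $\mathcal{N}(0,\,0)$ is unreachable and the structure is a semigroup with identity (a monoid) rather than a group.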
stackexchange | llm_judgeable_groundtruth_similarity | 134633 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As I understand it, top-down design works by refining an abstract high-level concept into smaller, concrete, comprehensible parts until the smallest building block is defined. Bottom-up, on the other hand, defines low-level parts and then gradually builds up higher-level blocks until the whole system is formed. In practice, it is said to be best to combine the two methods: start with a high-level specification to fully capture the domain knowledge, its relationships and its constraints; once the problem is well understood, create the smallest building blocks to build up the system. The process of:

1. Creating a requirement spec
2. Creating a design spec (with diagrams)
3. Implementing
4. Delivering
5. Repeating

(in iterative development, rather than doing a whole chunk in each phase, we do a little bit of each repeatedly, with daily meetings to adapt to the customer's changing requirements) looks perfectly normal to me (with specs as plans). It has its flaws, but that's why we have iterative development: instead of spending all the time on one phase, say requirement analysis, studying every possible piece of domain knowledge that is subject to change (possibly daily), we do a little bit of analysis, a little bit of design, and then implement it. Alternatively, each iteration is a mini-waterfall, where analysis is done in a few days (or a week), the same applies to design, and the rest of the time is spent on implementation. Is there something inherently wrong with the top-down approach in combination with iterative development? In his essay Programming Bottom Up, Paul Graham seems to encourage building from the bottom up completely, or at least programming bottom-up, but not in the requirement analysis/design phase: Experienced Lisp programmers divide up their programs differently. As well as top-down design, they follow a principle which could be called bottom-up design-- changing the language to suit the problem. As far as I can tell, what he means is that Lispers still perform top-down design but program bottom-up. Is that true? Another point he wrote: It's worth emphasizing that bottom-up design doesn't mean just writing the same program in a different order. When you work bottom-up, you usually end up with a different program. Instead of a single, monolithic program, you will get a larger language with more abstract operators, and a smaller program written in it. Instead of a lintel, you'll get an arch. Does this mean that over the course of writing a program in Lisp, you end up with a generic tool?
Now provide the response and nothing else.
| Top-down is a great way to describe things you know, or to re-build things that you've already built. Its biggest problem is that quite often there simply is no "top": you will change your mind about what the system should do while developing it and while exploring the domain. How can your starting point be something you don't know (i.e. what you want the system to do)? A "local" top-down is a good thing: some thinking ahead of coding is clearly good. But thinking and planning too much is not, because what you are envisioning is not the real scenario (unless you've already been there before, i.e. unless you are not building but re-building). Global top-down when building new things is just nonsense. Bottom-up should be the global approach, unless you know 100% of the problem, you just need the known solution to be coded, and you don't care about looking for possible alternative solutions. The Lisp approach is distilled bottom-up: you not only build bottom-up, you can also shape the bricks the way you need them to be. Nothing is fixed; freedom is total. Of course freedom takes responsibility, and you can make horrible things by misusing this power. But horrible code can be written in any language, even in languages shaped as cages for the mind, designed with the hope that with those languages even monkeys could get good programs up and running (an idea so wrong on so many levels that it hurts even just thinking about it). Your example is about a web server. In 2012 this is a well-defined problem; you have specs to follow. A web server is just an implementation problem. Especially if you are aiming at writing a web server substantially identical to the gajillion other web servers out there, nothing is really unclear except some minutiae. Even your comment about RSA is still talking about a clearly defined problem, with formal specifications. With a well-defined problem, formal specifications, and already known solutions, coding is just connecting the dots. Top-down is okay for that. This is the project manager's heaven. In many cases, however, there is no proven, well-known approach for connecting the dots. Very often it is hard to say even what the dots are. Suppose, for example, you are asked to instruct an automatic cutting machine to align the parts to be cut with a printed material that does not perfectly conform to the theoretical repeating logo. You are given the parts and pictures of the material as taken by the machine. What is an alignment rule? You decide. What is a pattern, and how do you represent it? You decide. How do you align the parts? You decide. Can parts be "bent"? It depends; some cannot and some can, but of course not too much. What do you do if the material is just too deformed for a part to be cut acceptably? You decide. Are all the material rolls identical? Of course not, but you cannot bug the user to adapt the alignment rules for every roll... that would be impractical. What pictures do the cameras see? The material, whatever that may mean... it can be colour, it can be black on black where only the light reflex makes the pattern evident. What does it mean to recognize a pattern? You decide. Now try to design the general structure of a solution for this problem and give a quote, in money and time. My bet is that even your system architecture... (yes, the architecture) will be wrong. Cost and time estimates will be random numbers.
We implemented it, and it is now a working system, but we changed our minds about the very shape of the system many times. We added entire subsystems that now cannot even be reached from the menus. We switched master/slave roles in protocols more than once. By now we probably have enough knowledge to attempt re-building it better. Other companies, of course, did solve the same problem... but unless you are in one of those companies, your detailed top-down project will most probably be a joke. We can design it top-down; you cannot, because you never did it before. You can probably solve the same problem too, but working bottom-up: starting with what you know, learning what you don't, and adding up. New complex software systems are grown, not designed. Every now and then someone starts designing a big, new, complex, ill-specified software system from scratch (note that with a big complex software project there are only three possibilities: a] the specification is fuzzy, b] the specification is wrong and self-contradictory, or c] both... and most often [c] is the case). These are the typical huge-company projects with thousands and thousands of hours thrown into PowerPoint slides and UML diagrams alone. They invariably fail completely after burning embarrassing amounts of resources... or, in some very exceptional case, they finally deliver an overpriced piece of software that implements only a tiny part of the initial specs. And that software is invariably deeply hated by users... not the kind of software you would buy, but the kind of software you use because you're forced to. Does this mean that I think you should think only about code? Of course not. But in my opinion the construction should start from the bottom (bricks, concrete code) and go up... and your focus and attention to detail should in a sense "fade" as you get farther from what you have. Top-down is often presented as if you should apply the same level of detail to the whole system at once: just keep splitting every node until everything is obvious... in reality, modules and subsystems are "grown" from subroutines. If you do not have previous experience with the specific problem, your top-down design of a subsystem, module or library will be horrible. You can design a good library once you know what functions to put in it, not the other way around. Many Lisp ideas are getting more popular (first-class functions, closures, dynamic typing by default, garbage collection, metaprogramming, interactive development), but Lisp is still today (among the languages I know) quite unique in how easy it is to shape the code to what you need. Keyword parameters, for example, are already present; but if they were not, they could be added. I did it (including keyword verification at compile time) for a toy Lisp compiler I am experimenting with, and it doesn't take much code. With C++, instead, the most you can get is a bunch of C++ experts telling you that keyword parameters are not that useful, or an incredibly complex, broken, half-baked template implementation that indeed is not that useful. Are C++ classes first-class objects? No, and there's nothing you can do about it. Can you have introspection at runtime or at compile time? No, and there's nothing you can do about it. This language flexibility of Lisp is what makes it great for bottom-up building. You can build not only subroutines, but also the syntax and the semantics of the language. And in a sense, Lisp itself is bottom-up.
| {} | {'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/134633', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/18721/']} | jdg_84334 |
stackexchange | llm_judgeable_groundtruth_similarity | 14513480 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have matrix data with spatial coordinates and one variable. The spatial resolution is 1000 meters.

> str(dat1)
List of 3
 $ x: num [1:710] 302340 303340 304340 305340 306340 ...
 $ y: num [1:1241] 5431470 5432470 5433470 5434470 5435470 ...
 $ z: num [1:710, 1:1241] 225 225 225 225 225 ...

I want to convert it into raster format.

> dat1$x[1:10]
[1] 302339.6 303339.6 304339.6 305339.6 306339.6 307339.6 308339.6 309339.6 310339.6 311339.6
> dat1$y[1:10]
[1] 5431470 5432470 5433470 5434470 5435470 5436470 5437470 5438470 5439470 5440470

I used the following code to do it, but the resolution I get is not the same as that of my data. Is there a better way to get the same resolution as my real data?

> r <- raster(dat1$z,
              xmn=range(dat1$x)[1], xmx=range(dat1$x)[2],
              ymn=range(dat1$y)[1], ymx=range(dat1$y)[2],
              crs=CRS("+proj=utm +zone=11 +datum=NAD83"))
> r
class       : RasterLayer
dimensions  : 710, 1241, 881110 (nrow, ncol, ncell)
resolution  : 571.3135, 1746.479 (x, y)
extent      : 302339.6, 1011340, 5431470, 6671470 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=utm +zone=11 +datum=NAD83
data source : in memory
names       : layer
values      : 13.65059, 248.6229 (min, max)
Now provide the response and nothing else.
| Try reading the help for raster. When creating a raster from a matrix, the sense of rows and columns isn't what you think it is. You were feeding it a 1241x710 matrix but taking the max and min from the wrong vectors. Try the following:

> # small version of your test set
> dat1=list()
> dat1$x=seq(302339.6,by=1000,len=71)
> dat1$y=seq(5431470,by=1000,len=124)
> dat1$z=matrix(runif(71*124),71,124)
> str(dat1)
List of 3
 $ x: num [1:71] 302340 303340 304340 305340 306340 ...
 $ y: num [1:124] 5431470 5432470 5433470 5434470 5435470 ...
 $ z: num [1:71, 1:124] 0.765 0.79 0.185 0.461 0.421 ...
> image(dat1,asp=1)

Nice square pixels. Now create your raster:

r <- raster(dat1$z,
            xmn=range(dat1$x)[1], xmx=range(dat1$x)[2],
            ymn=range(dat1$y)[1], ymx=range(dat1$y)[2],
            crs=CRS("+proj=utm +zone=11 +datum=NAD83"))
plot(r)

Totally NON-square pixels. And if you look carefully, the matrix is rotated 90 degrees from the image plot. Or transposed or something. Solution: just create the raster from the x,y,z list:

> r=raster(dat1); plot(r)

Square pixels, the same way round as the image plot, and the resolution is now what you expect:

> r
class       : RasterLayer
dimensions  : 124, 71, 8804 (nrow, ncol, ncell)
resolution  : 1000, 1000 (x, y)
extent      : 301839.6, 372839.6, 5430970, 5554970 (xmin, xmax, ymin, ymax)
coord. ref. : NA
data source : in memory
names       : layer
values      : 7.738103e-05, 0.9995497 (min, max) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14513480', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1867585/']} | jdg_84335 |
stackexchange | llm_judgeable_groundtruth_similarity | 55157 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently I received a new debit card which has both a magnetic stripe and a chip (as did my previous card). I know the magnetic stripe can be easily copied, whereas the chip system hasn't been compromised (yet?). I suppose the stripe is still there for legacy purposes, even though in recent years I've not seen nor heard of a terminal or ATM that didn't accept chip payment. As far as I am concerned, that's like buying a fancy new lock for your front door while leaving the back door unlocked. Is that analogy right, i.e. is a card with a stripe AND chip no more secure than a card with just a stripe? And if so, why would my bank allow such a security risk? Why not get rid of the stripe altogether? PS: If it matters, I live in the Netherlands. EDIT: For clarity, I'm mostly concerned about the following scenario: a thief skims an ATM (which probably (?) uses chip payment), but can still read the stripe because the card goes into the ATM. Suppose he reads my PIN too. He then fabricates a new card and empties my bank account.
Now provide the response and nothing else.
| Details depend on the bank, the card type and the country, so they vary quite a lot, but the generic model is the following:

- The magnetic stripe contains, mostly, a computer-readable copy of the information embossed on the card: account number, holder name, expiration date.
- The chip contains a secret key which is used to "sign" transactions (not necessarily a true signature; often a MAC). The chip knows the PIN code and refuses to work until the PIN code has been presented; it also locks itself if too many wrong PIN codes are presented.

When a payment terminal uses the magnetic stripe, it must talk to the bank, establish a secure tunnel with the bank, send the PIN code entered by the user, and verify that the owner's account has enough money in it. On the other hand, when a payment terminal uses the chip, the PIN code is sent to the chip only, and there is little need to talk to the bank at all; the whole transaction can be conducted offline. Of course, for big amounts it is still a good idea to talk to the bank to know whether that much money exists in the buyer's account, but small transactions can be done efficiently with no network at all. Thus, the magnetic stripe and the chip are used in two different ways, and having both does not mean that the security is lowered to that of the weaker of the two. From the bank's point of view, chips are better, because they are more efficient (no need to handle a network call) and harder to clone (statistics show a fraud rate divided by about 10). This often translates into financial advantages granted to merchants who switch to chip-aware terminals. There can be variants in all of the above. For instance, some cards include in the magnetic stripe an encrypted version of the PIN code, but it won't be verified in the payment terminal; instead, the terminal has to talk to a regional bunkerized server that knows the decryption key and can do the verification. For some other card types, it is pretty clear that the magnetic stripe does not know anything about the PIN code, e.g. the chip-less American Express cards (from a few years ago) where you could change your PIN code by phoning your bank. In any case, all the security features of a debit or credit card are not meant to protect you. They protect the bank. From the point of view of the bank, you are the enemy (regardless of what they claim in their ads). | {} | {'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/55157', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/43739/']} | jdg_84336 |
stackexchange | llm_judgeable_groundtruth_similarity | 6852522 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can you give me a quick run-through of what these 4 keywords are used for, and why? I understand the basics that Google would tell you about register and volatile, but would like to know a little more (just a practical overview). extern and explicit confuse me a little, as I've never found a reason to use them myself despite writing fairly low-level embedded-systems code. Again, I can Google this, but I'd prefer a quick, practical summary from an expert so it sticks in my mind.
Now provide the response and nothing else.
| extern

extern is overloaded for several uses. For global variables, it means that you are declaring the variable, not defining it. This is useful for putting global variables in headers. If you put this in a header:

int someInteger;

each .cpp file that includes that header would try to have its own someInteger. That will cause a linker error. By declaring it with extern, all you're saying is that there will be a someInteger somewhere in the code:

extern int someInteger;

Now, in one .cpp file, you can define int someInteger, so that there will be exactly one copy of it. There is also extern "C", which is used for specifying that certain functions use C linkage rules rather than C++. This is useful for interfacing with libraries and code compiled as C. In C++0x, there will also be extern template declarations. These are the opposite of explicit template instantiation. When you do this:

template class std::vector<int>;

you're telling the compiler to instantiate this template right now. Normally, instantiation is delayed until the first use of the template. In C++0x, you can say:

extern template class std::vector<int>;

This tells the compiler not to instantiate this template in this .cpp file, ever. That way, you can control where templates are instantiated. Judicious use of this can substantially improve compile times.

explicit

This is used to prevent automatic conversions of types. If you have a class ClassName with the following constructor:

ClassName(int someInteger);

it means that if you have a function that takes a ClassName, the user can call it with an int, and the conversion will be done automatically:

void SomeFunc(const ClassName &className);
SomeFunc(3);

That's legal, because ClassName has a conversion constructor that takes an integer. This is how functions that take std::string can also take a char*; std::string has a constructor that takes a char*. However, most of the time you don't want implicit conversions like this; you usually want conversions to be explicit. Yes, implicit conversion is sometimes useful, as with std::string, but you need a way to turn it off where it is inappropriate. Enter explicit:

explicit ClassName(int someInteger);

This will prevent implicit conversions. You can still use SomeFunc(ClassName(3)), but SomeFunc(3) will no longer work. BTW: if explicit is rare for you, then you're not using it nearly enough. You should use it at all times, unless you specifically want conversion. Which is not that often.

volatile

This prevents certain useful optimizations. Normally, if you have a variable, C/C++ will assume that its contents only change if your code explicitly changes them. So if you declare an int someInteger; as a global variable, compilers can cache the value locally rather than access it every time you use it. Sometimes you want to stop this; in those cases, you use volatile, which prevents those optimizations.

register

This is just a hint. It tells the compiler to try to put the variable's data in a register. It's essentially unnecessary; compilers are better than you are at deciding what should and should not be a register. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6852522', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/857994/']} | jdg_84337 |
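A compact, self-contained illustration of the explicit behaviour described above (all class and function names are invented for the example):

#include <string>

struct Implicit { Implicit(int) {} };            // converting constructor
struct Explicit { explicit Explicit(int) {} };   // conversion must be spelled out

void takeImplicit(const Implicit&) {}
void takeExplicit(const Explicit&) {}

int main() {
    takeImplicit(3);             // OK: int converts implicitly to Implicit
    // takeExplicit(3);          // error: explicit constructor blocks this
    takeExplicit(Explicit(3));   // OK: conversion requested explicitly
    std::string s = "chars";     // works because string's char* ctor is implicit
    return 0;
}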
stackexchange | llm_judgeable_groundtruth_similarity | 5697612 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am just coding something and wanted to know how I can include a function call in a conditional construct.

if (!formatPhone($phone) && !empty($phone)) {
    $e = 1;
    $e_message .= '<li>The phone number is invalid, please use the format (000) 000-0000.</li>';
} else {
    $phone = formatPhone($phone);
}

Can I assign $phone to the formatPhone() output directly in the conditional statement, but still check for a return false;?
Now provide the response and nothing else.
| This is completely possible. Use JavaScript. I use jQuery to select the input element, and I have it set up with an onChange event.

var approvedHTML = ""; // holds the approved file names; must exist before the handler runs

$("#aFile_upload").on("change", function (e) {
    var count = 1;
    var files = e.currentTarget.files; // puts all files into an array-like list
    // call them as such; files[0].size will get you the file size of the 0th file
    for (var x in files) {
        var filesize = ((files[x].size / 1024) / 1024).toFixed(4); // MB
        if (files[x].name != "item" && typeof files[x].name != "undefined" && filesize <= 10) {
            if (count > 1) {
                approvedHTML += ", " + files[x].name;
            } else {
                approvedHTML += files[x].name;
            }
            count++;
        }
    }
    $("#approvedFiles").val(approvedHTML);
});

The code above saves all the file names that I deem worthy of persisting to the submission page before the submission actually happens. I add the "approved" files to an input element's val using jQuery, so a form submit will send the names of the files I want to save. All the files will be submitted; however, we now have to filter them out on the server side. I haven't written any code for that yet, but use your imagination. I assume one can accomplish this with a for loop, matching the names sent over from the input field against the $_FILES variable (a PHP superglobal; sorry, I don't know the Ruby equivalent). My point is that you can do checks for files before submission. I do this and then output it to the user before he/she submits the form, to let them know what they are uploading to my site. Anything that doesn't meet the criteria does not get displayed back to the user, and therefore they should know that the files that are too large won't be saved. This should work on all browsers because I'm not using the FormData object. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/5697612', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/490239/']} | jdg_84338 |
stackexchange | llm_judgeable_groundtruth_similarity | 5841681 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Before I dive really deep into MongoDB for days, I thought I'd ask a pretty basic question as to whether I should dive into it at all or not. I have basically no experience with NoSQL. I did read a little about some of the benefits of document databases, and I think for this new application they will be really great. It is always a hassle to do favourites, comments, etc. for many types of objects (lots of m-to-m relationships) and subclasses; it's kind of a pain to deal with. I also have a structure that would be a pain to define in SQL because it's extremely nested and translates to a document a lot better than to 15 different tables. But I am confused about a few things:

1. Is it still desirable to keep your database normalized? I really don't want to be updating multiple records. Is that still how people approach the design of the database in MongoDB?
2. What happens when a user favourites a book and this selection is stored in a user document, but then the book is deleted? How does the relationship get detached without foreign keys? Am I manually responsible for deleting all of the links myself?
3. What happens if a user favourited a book that no longer exists and I query it (some kind of join)? Do I have to do any fault tolerance here?
Now provide the response and nothing else.
| MongoDB doesn't support server-side foreign-key relationships, and normalization is also discouraged. You should embed your child objects within parent objects where possible; this will increase performance and make foreign keys totally unnecessary. That said, it is not always possible, so there is a special construct called DBRef which allows referencing objects in a different collection. This may then be slower, because the DB has to make additional queries to read the objects, but it allows for a kind of foreign-key reference. You will still have to handle your references manually: only when you look up a DBRef will you see whether its target still exists; the DB will not go through all the documents looking for references and removing them when the target of the reference no longer exists. But removing all the references after deleting the book should require only a single query per collection, no more, so it is not that difficult really. If your schema is more complex, then you should probably choose a relational database rather than NoSQL. There is also a book about designing MongoDB databases: Document Design for MongoDB. UPDATE: The book above is not available anymore, yet because of MongoDB's popularity there are quite a lot of others. I won't link them all, since such links are likely to change; a simple search on Amazon shows multiple pages, so it shouldn't be a problem to find some. See the MongoDB manual pages for 'Manual references' and DBRefs for further specifics and examples. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5841681', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/331439/']} | jdg_84339 |
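A minimal mongosh sketch of the manual cleanup the answer describes: deleting a book and detaching it from every user's favourites with one extra query. Collection and field names are invented for the example:

// Assumed shapes: books { _id, title, ... } and users { _id, favorites: [bookId, ...] }
const bookId = db.books.findOne({ title: "Some Title" })._id;

db.books.deleteOne({ _id: bookId });   // remove the book itself
db.users.updateMany(                   // one query per referencing collection
  { favorites: bookId },
  { $pull: { favorites: bookId } }     // drop the dangling reference
);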
stackexchange | llm_judgeable_groundtruth_similarity | 1052976 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the scope of $1 through $9 in Perl? For instance, in this code:

sub bla {
    my $x = shift;
    $x =~ s/(\d*)/$1 $1/;
    return $x;
}

my $y;
# some code that manipulates $y
$y =~ /(\w*)\s+(\w*)/;
my $z = &bla($2);
my $w = $1;
print "$1 $2\n";

What will $1 be? Will it be the first \w* from $x or the first \d* from the second \w* in $x?
Now provide the response and nothing else.
| From perldoc perlre:

> The numbered match variables ($1, $2, $3, etc.) and the related punctuation set ($+, $&, $`, $', and $^N) are all dynamically scoped until the end of the enclosing block or until the next successful match, whichever comes first. (See "Compound Statements" in perlsyn.)

This means that the first time you run a regex or substitution in a scope, a new localized copy is created. The original value is restored (à la local) when the scope ends. So, $1 will be 10 up until the regex is run, 20 after the regex, and 10 again when the subroutine is finished. But I don't use regex variables outside of substitutions. I find it much clearer to say things like

#!/usr/bin/perl

use strict;
use warnings;

sub bla {
    my $x = shift;
    $x =~ s/(\d*)/$1 $1/;
    return $x;
}

my $y = "10 20";
my ($first, $second) = $y =~ /(\w*)\s+(\w*)/;
my $z = &bla($second);
my $w = $first;
print "$first $second\n";

where $first and $second have better names that describe their contents. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1052976', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1084/']} | jdg_84340 |