source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 74956 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This may seem like a very simple question, but I've searched all over the place and haven't found an answer. When jumping a car, we connect the + end of the charged battery to the + end of the dead battery, and the - end of the charged battery to the chassis or other metal part of the car. I always thought that you need a closed circuit for current to flow. But this circuit appears to be open: we are connecting the - end of the charged battery to the ground! Thus, how can any circuit connected to ground have a current? I believe another way to ask this question is: will jump starting a car still work if I connect the - end of the charged battery to a third (powered-off) car, instead of to the chassis of the car with the dead battery? If so, why? (I've heard people say that jump starting a car only works because the chassis is connected to the electrical components of the vehicle, thus providing a closed circuit since the battery is also connected to the electrical components of the vehicle).
Now provide the response and nothing else.
| "Ground" is just a code word which, in this case, refers to the "current return common" circuit node. There is a complete circuit because everything electrical in the car, such as the starter motor, also connects to ground in order to return current to the minus terminal of the battery through the ground. The car's chassis is used for this return network, and so the entire chassis is an extension of the minus terminal of the battery. During jump-starting, we connect the boosting battery to ground rather than to the dead battery's - terminal for the simple reason that this provides a more direct return path to the good battery which is powering the dead car: the return current does not have to travel through the dead battery's minus terminal hookup cable and then to the jumper cable, but can go directly from the chassis ground to the jumper cable. A more direct return path allows for better current flow and less voltage drop, like plugging a big appliance directly into an outlet, rather than via an extension cord. In case you're also wondering why the plus jumper connections are made first, then the minuses. This is because there is no harm done if you leave the minus jumper dangling in the chassis of the car. Anything it accidentally touches is likely to be ground. If you connect both alligator clips on one end before connecting the other end, the other end is now live and you can accidentally touch the clips together to create a short circuit. If you connect the minuses/grounds first and then go to connect one of the pluses, you can create a short circuit, because the opposite side plus is probably dangling and touching something that is grounded. | {} | {'log_upvote_score': 7, 'links': ['https://electronics.stackexchange.com/questions/74956', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/26037/']} | jdg_81341 |
stackexchange | llm_judgeable_groundtruth_similarity | 38130705 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My goal is to set a style (height and width in this case) from a component variable using the "styles" attribute. I would think there is a simple data binding method but that may be wishful thinking... For example, if I were using the HTML mustache binding it might look like this:

@Component({
  selector: '[sidebar]',
  templateUrl: 'app/Nav/sidebar.comp.html',
  styles: [`
    .sidebar-nav {
      overflow: scroll;
      height: {{ height }};
    }
    .sidebar {
      height: {{ 0.9 * height }};
      width: {{ 0.21 * width }};
    }
  `]
})
export class SidebarComp {
  width: number;
  height: number;
  constructor() {
    this.height = window.innerHeight;
    this.width = window.innerWidth;
  }
}

Obviously this is all wrong, but I've tried some more likely permutations and had no luck finding solutions on the Angular site, Stack Overflow, or Google. I may be reduced to using ngStyle inline but that's not ideal in this case.
Now provide the response and nothing else.
| You can style the host element like

@Component({
  selector: '[sidebar]',
  templateUrl: 'app/Nav/sidebar.comp.html',
  host: {
    '[style.height.px]': '0.9 * height',
    '[style.width.px]': '0.21 * width'
  }
})
export class SidebarComp {
  width: number;
  height: number;
  constructor() {
    this.height = window.innerHeight;
    this.width = window.innerWidth;
  }
}

and the content (app/Nav/sidebar.comp.html) like

<div class="sidebar-nav" [style.overflow]="'scroll'" [style.height.px]="height">

or

<div class="sidebar-nav" [ngStyle]="{overflow: 'scroll', height: height + 'px'}">

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38130705', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3393520/']} | jdg_81342 |
stackexchange | llm_judgeable_groundtruth_similarity | 2506355 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I read the JVM specification for the strictfp modifier but still don't fully understand what it means. Can anyone enlighten me?
Now provide the response and nothing else.
| Basically, it mandates that calculations involving the affected float and double variables have to follow the IEEE 754 spec to the letter, including for intermediate results. This has the effect of: Ensuring that the same input will always generate exactly the same result on all systems The CPU may have to do some extra work, making it slightly slower The results will be, in some cases, less accurate (much less, in extreme cases) Edit: More specifically, many modern CPUs use 80 bit floating point arithmetic ( "extended precision" ) internally. Thus, they can internally represent some numbers in denormalized form that would cause arithmetic overflow or underflow (yielding Infinity or zero, respectively) in 32 or 64bit floats; in borderline cases, 80 bit just allows to retain more precision. When such numbers occur as intermediate results in a calculation, but with an end result inside the range of numbers that can represented by 32/64bit floats, then you get a "more correct" result on machines that use 80bit float arithmetic than on machines that don't - or in code that uses strictfp . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2506355', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/24545/']} | jdg_81343 |
stackexchange | llm_judgeable_groundtruth_similarity | 281883 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $X$ be a finite dimensional Euclidean space with the inner product $\langle...,...\rangle$, and let $k$ be an integer. Consider the polylinear form $X^k\times X^k\to{\mathbb R}$$$\big\langle x_1,...,x_k\; |\; y_1,...,y_k \big\rangle=\det\begin{pmatrix} \langle x_1,y_1\rangle & \dots & \langle x_1,y_k\rangle \\ \dots & \dots & \dots \\ \langle x_k,y_1\rangle & \dots & \langle x_k,y_k\rangle\end{pmatrix},\quad x_i,y_i\in X.$$It is extended to a bilinear form $\langle...,...\rangle$ on the space $V_k(X)$ of polyvectors such that $$\big\langle x_1\vee...\vee x_k\; |\; y_1\vee...\vee y_k \big\rangle=\det\begin{pmatrix} \langle x_1,y_1\rangle & \dots & \langle x_1,y_k\rangle \\ \dots & \dots & \dots \\ \langle x_k,y_1\rangle & \dots & \langle x_k,y_k\rangle\end{pmatrix},\quad x_i,y_i\in X.$$Why is this bilinear form $\langle...,...\rangle$ on $V_k(X)$ positive definite? I mean, from the properties of the Gram determinant it immediately follows that $\langle P|P\rangle>0$ for each non-zero elementary polyvector $P=x_1\vee...\vee x_k$. But why is the same true for all non-zero polyvectors, not necessarily elementary?
Now provide the response and nothing else.
| I write the Fourier transform as $$\hat{f}(v) = \int_{-\infty}^{\infty} dx \: \frac{\sin{x}}{x} e^{-i 2 \pi v x} $$ Consider, rather, the integral $$ \frac{1}{i 2} \int_{-\infty}^{\infty} dx \: \frac{e^{i x}-e^{-i x}}{x} e^{-i 2 \pi v x} $$ $$ = \frac{1}{i 2} \int_{-\infty}^{\infty} dx \: \frac{e^{i (1-2 \pi v) x}}{x} - \frac{1}{i 2} \int_{-\infty}^{\infty} dx \: \frac{e^{-i (1+2 \pi v) x}}{x} $$ Consider the following integral corresponding to the first integral: $$\oint_C dz \: \frac{e^{i (1-2 \pi v) z}}{z} $$ where $C$ is the contour defined in the illustration below: This integral is zero because there are no poles contained within the contour. Write the integral over the various pieces of the contour: $$\int_{C_R} dz \: \frac{e^{i (1- 2 \pi v)z}}{z} + \int_{C_r} dz \: \frac{e^{i (1- 2 \pi v) z}}{z} + \int_{-R}^{-r} dx \: \frac{e^{i (1- 2 \pi v) x}}{x} + \int_{r}^{R} dx \: \frac{e^{i (1- 2 \pi v) x}}{x} $$ Consider the first part of this integral about $C_R$, the large semicircle of radius $R$: $$\int_{C_R} dz \: \frac{e^{i (1- 2 \pi v)z}}{z} = i \int_0^{\pi} d \theta e^{i (1-2 \pi v) R (\cos{\theta} + i \sin{\theta})} $$ $$ = i \int_0^{\pi} d \theta e^{i (1-2 \pi v) R \cos{\theta}} e^{-(1- 2 \pi v) R \sin{\theta}} $$ By Jordan's lemma , this integral vanishes as $R \rightarrow \infty$ when $1-2 \pi v > 0$. On the other hand, $$ \int_{C_r} dz \: \frac{e^{i (1-2 \pi v) z}}{z} = i \int_{\pi}^0 d \phi \: e^{i (1-2 \pi v) r e^{i \phi}} $$ This integral takes the value $-i \pi$ as $r \rightarrow 0$. We may then say that $$\begin{align} & \int_{-\infty}^{\infty} dx \: \frac{e^{i (1-2 \pi v) x}}{x} = i \pi & 1-2 \pi v > 0\\ \end{align}$$ When $1-2 \pi v < 0$, Jordan's lemma does not apply, and we need to use another contour. A contour for which Jordan's lemma does apply is one flipped about the $\Re{z}=x$ axis. 
By using similar steps as above, it is straightforward to show that $$\begin{align} & \int_{-\infty}^{\infty} dx \: \frac{e^{i (1-2 \pi v) x}}{x} = -i \pi & 1-2 \pi v < 0\\ \end{align}$$ Using a similar analysis as above, we find that $$\int_{-\infty}^{\infty} dx \: \frac{e^{-i (1+2 \pi v) x}}{x} = \begin{cases} -i \pi & 1+2 \pi v < 0 \\ i \pi & 1+2 \pi v >0 \\ \end{cases} $$ We may now say that $$\hat{f}(v) = \int_{-\infty}^{\infty} dx \: \frac{\sin{x}}{x} e^{-i 2 \pi v x} = \begin{cases} \pi & |v| < \frac{1}{2 \pi} \\ 0 & |v| > \frac{1}{2 \pi} \\ \end{cases} $$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/281883', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/58864/']} | jdg_81344 |
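The piecewise result above can be cross-checked without any contour integration. Using the product-to-sum identity sin(x)e^{-i2πvx} under the real part, the transform reduces to a sum of Dirichlet integrals, since the integral of sin(ax)/x over the whole line is π·sign(a). A quick Python sketch of that closed form (the function names here are my own, not from the answer):

```python
import math

def sign(t):
    """Sign function: -1, 0, or +1."""
    return (t > 0) - (t < 0)

def sinc_ft(v):
    """Fourier transform of sin(x)/x at frequency v, via the Dirichlet
    integral: integral of sin(a x)/x dx over R equals pi * sign(a).
    Splitting sin(x) e^{-i 2 pi v x} into sines gives
    hat{f}(v) = (pi/2) * (sign(1 - 2*pi*v) + sign(1 + 2*pi*v))."""
    return (math.pi / 2) * (sign(1 - 2 * math.pi * v)
                            + sign(1 + 2 * math.pi * v))

# Inside the band |v| < 1/(2*pi) the transform is pi; outside it is 0.
print(sinc_ft(0.0))  # 3.141592653589793
print(sinc_ft(1.0))  # 0.0
```

Note that at the band edges |v| = 1/(2π) this formula returns π/2, the usual midpoint value for the boxcar.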
stackexchange | llm_judgeable_groundtruth_similarity | 19179599 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I change the border color of the segmented controller in iOS7 without changing the text color? It would be ideal if I could keep the line between the segments as is (i.e. same color as the text), but if a border color change implies a change of this line it would also be ok. Note also that the text (and the lines between segments) have the color which is set with [segmCtrl setTintColor:choosenTintColor]
Now provide the response and nothing else.
| The linked answer does indeed answer your question, but you have to read between the lines. Here's a clearer example to change all segmented control styles within the app:

// Sets the tint color which typically sets the color of the segment images, text, dividers,
// borders, and selected segment. A translucent version of this color is also used to tint a
// segment when it is pressed and transitioning to being selected, as shown on the first
// segment below.
[[UISegmentedControl appearance] setTintColor:[UIColor blackColor]];

// The attributes dictionary can specify the font, text color, text shadow color, and text
// shadow offset for the title in the text attributes dictionary
[[UISegmentedControl appearance] setTitleTextAttributes:@{NSForegroundColorAttributeName:[UIColor redColor]} forState:UIControlStateNormal];

For one control within the app:

// Sets the tint color which typically sets the color of the segment images, text, dividers,
// borders, and selected segment. A translucent version of this color is also used to tint a
// segment when it is pressed and transitioning to being selected, as shown on the first
// segment below.
self.segControl.tintColor = [UIColor blackColor];

// The attributes dictionary can specify the font, text color, text shadow color, and text
// shadow offset for the title in the text attributes dictionary
[self.segControl setTitleTextAttributes:@{NSForegroundColorAttributeName:[UIColor redColor]} forState:UIControlStateNormal];

More info here: https://developer.apple.com/library/content/documentation/UserExperience/Conceptual/UIKitUICatalog/UISegmentedControl.html

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19179599', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2600810/']} | jdg_81345 |
stackexchange | llm_judgeable_groundtruth_similarity | 424848 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a $198$ -sample dataset containing miRNA types (numerical features) and one categorical feature "Type" with values "Tumor" or "Healthy". Index miRNA1 miRNA2 miRNA3 Type 1 48421.52 24242.14 23842.1518 Tumor 2 2757.96 28965.2 7339.57 Healthy 3 4300.34 52565.07 6981.41 Healthy ... ... ... 198 23854.73 24722.28 7611.53 Tumor Since there are 1584 of these features in total, I need to select the ones that are most influential towards developing a Tumor. My approach is described below. Is it correct? The distributions of features are mostly log-normal. I've transformed each feature with a Box-Cox transformation to get approximately normal distributions. I scaled the values with Min-Max scaler to put them in range $[0,1]$ . miRNA1 has $100$ Healthy samples and $98$ Tumor samples. I should make a null hypothesis that Tumor samples have the same values as Healthy samples. I calculate mean and standard deviation for Tumor samples and Healthy samples, calculate the t-score and calculate the p-value, using significance level of $0.05$ and DF in this case is $97$ . This is a two-tails test so it is $p$ -value $\times 2$ . If it's lower than $0.05$ I reject the null hypothesis and consider miRNA1 as a feature that impacts Tumor development, right?
Now provide the response and nothing else.
| First, what you have is high-dimensional data. This alone poses some problems, and you should use a method that was designed for it. Second, a regular t-test is a bad idea in this case: it is a univariate test, meaning it does not consider multiple variables together and their possible interactions. Also, p-values are not meant to be used for feature selection. Nonetheless, if you are fixed on a t-test, it would be better to use a permutation test to test for significance, as you have many variables, which will lead to some serious corrections when you adjust your p-values for multiple testing (and you will adjust them, right?). Finally, I would personally use LASSO regression to solve this, which is a better and simpler option: LASSO automatically performs feature selection, and it considers all the variables together rather than one by one. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/424848', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/257948/']} | jdg_81346 |
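For intuition on why LASSO performs feature selection at all: under an orthonormal design, each LASSO coefficient is just the least-squares coefficient passed through the soft-thresholding operator, which sets small coefficients exactly to zero. A minimal pure-Python sketch of that operator (the names and example numbers are my own; for real miRNA data you would use an established implementation such as scikit-learn's Lasso with cross-validated regularization):

```python
def soft_threshold(z, lam):
    """Soft-thresholding: the closed-form LASSO solution for a single
    coefficient under an orthonormal design. Coefficients with
    |z| <= lam are set exactly to zero, which is what performs the
    feature selection."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Hypothetical least-squares coefficients for four features:
betas = [2.5, -0.25, 0.125, -1.75]
selected = [soft_threshold(b, lam=0.5) for b in betas]
print(selected)  # [2.0, 0.0, 0.0, -1.25]
```

The two middle features are dropped entirely, while the strong ones survive (shrunk toward zero); increasing lam drops more features.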
stackexchange | llm_judgeable_groundtruth_similarity | 15650187 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
UserController.php ------------- editAction (method) ---------- $UserInfo = array( 'hdn_uid'=>$UserResult['user_id'], 'username'=>$UserResult['user_name'], 'firstname'=>$UserResult['first_name'], ); $form->populate($UserInfo); $this->view->form = $form; Forms/userEdit.php ------------------ $elementDecoration = array( 'ViewHelper', 'Description', 'Errors', array(array('data' => 'HtmlTag'), array('tag' => 'td')), array('Label', array('tag' => 'td', 'placement' => 'prepend')), array(array('row' => 'HtmlTag'), array('tag' => 'tr')), ); $hdn_id = new Zend_Form_Element_Hidden('hdn_uid'); $hdn_id->addFilter('Int') ->removeDecorator('label') ->removeDecorator('HtmlTag'); $this->setName('login'); $this->setDecorators($formDecoration); $username = new Zend_Form_Element_Text('username'); //Note this username and in conroller $UserInfo arr 'username' matched so in the text fields existing username is populated from table. $username->setLabel('Username') ->setDecorators($elementDecoration) ->setRequired(true) ->addFilter('StripTags') ->addFilter('StringTrim');$submit = new Zend_Form_Element_Submit('submit');$submit->setDecorators($buttonDecoration);$this->addElements(array($hdn_id,$username,$submit)); Problem Server side validation not working, due to some mistake in the above snippet Detailed Info Server side validation not working in the above code, when i clear the username and if i submited the button then program does not validated the field, instead it updated the empty value into table. Note This same code working for user add form. but it fails to work for edit form.
Now provide the response and nothing else.
| The answer from robrich has a good hint: You can just remove every remote-tracking branch (or even the remote), and then use git fetch to grab only those you want now from scratch. If you do try to remove the remote all together, you may want to backup your .git/config file, so that when you add the remote back later, you can pick up the per-remote setting from the backup. However, removing remote does not remove the remote-tracking branches for me. Maybe my local repo is bad. For any one who has the same problem, what I ended up doing is: # This deletes all remote tracking branches for all remotes. So be careful if you have multiple remotes.git branch -r | xargs -L 1 git branch -rD Also, I have a lot of tags from the remote, which slow things down. I did this too: # Be careful! This deletes EVERY tag!git tag | xargs -L 1 git tag -d You may want to configure git fetch to not fetch all those tags back next time, which is beyond the scope of this question. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15650187', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/246963/']} | jdg_81347 |
stackexchange | llm_judgeable_groundtruth_similarity | 7348737 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It used to work just fine... But recently I changed my WordPress theme, and now the Google map isn't displayed anymore. And I can't figure out why... Here is the link for the search form: http://www.propertycrunch.co.uk/properties-search/ Enter an address and click find. It will list the properties, but the map won't appear... I can't figure out why :-( Any ideas are more than welcome... Thanks a lot.
Now provide the response and nothing else.
| Thanks to Miroshko's and yojimbo87's answers I was able to catch the 'close' event, but I had to make some additional tweaks. The reason why just catching 'close' event wasn't fixing my problem, is that when client sends the request to the node.js server, the server itself can't get information if the connection is still open until he sends something back to the client (as far as I understood - this is because of the HTTP protocol). So, the additional tweak was to write something to the response from time to time. One more thing that was preventing this to work, is that I had 'Content-type' as 'application/json'. Changing it to 'text/javascript' helped to stream 'white spaces' from time to time without closing the connection. In the end, I had something like this: var server = http.createServer(function(req,res){ res.writeHead(200, {'Content-type': 'text/javascript'}); req.connection.on('close',function(){ // code to handle connection abort }); /** * Here goes some long polling handler * that performs res.write(' '); from time to time */ // some another code...});server.listen(NODE_PORT, NODE_LISTEN_HOST); My original code is much bigger, so I had to cut it a lot just to show the sensitive parts. I'd like to know if there are better solutions, but this is working for me at the moment. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7348737', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/568269/']} | jdg_81348 |
stackexchange | llm_judgeable_groundtruth_similarity | 24616057 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hi, I am trying to fix my GridView in ASP so the columns are sized according to the cell data. Here is what it looks like: http://s18.postimg.org/hjw6dwrmh/Capture.png Here is the code I am using for this GridView:

<asp:GridView ID="GridView1" runat="server" CellPadding="7" OnSelectedIndexChanged="GridView1_SelectedIndexChanged" Width="100%" BackColor="White" BorderColor="#CCCCCC" BorderWidth="1px" AllowSorting="True">
    <FooterStyle BackColor="White" ForeColor="#000066" />
    <HeaderStyle BackColor="#006699" Font-Bold="True" ForeColor="White" />
    <PagerStyle BackColor="White" ForeColor="#000066" HorizontalAlign="Left" />
    <RowStyle ForeColor="#000066" />
    <SelectedRowStyle BackColor="#669999" Font-Bold="True" ForeColor="White" />
    <SortedAscendingCellStyle BackColor="#F1F1F1" />
    <SortedAscendingHeaderStyle BackColor="#007DBB" />
    <SortedDescendingCellStyle BackColor="#CAC9C9" />
    <SortedDescendingHeaderStyle BackColor="#00547E" />
</asp:GridView>

Thank you.
Now provide the response and nothing else.
| You should be able to read the values as a single int and then use bitwise math to pull out the values. // Read the value as little-endian since the least significant bytes are first.var val = buf.readUInt32LE(0);// Mask the last 2 bits out of the 32-bit value.var nanoseconds = val & 0x3FFFFFFF;// Mark just the final bits and convert to a boolean.var bit6Set = !!(val & 0x40000000);var bit7Set = !!(val & 0x80000000); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24616057', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3784679/']} | jdg_81349 |
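The same unpacking can be sanity-checked outside Node, for example with Python's struct module (this cross-check is my own addition, not part of the original answer): read the 4 bytes as a little-endian uint32 and apply the same masks.

```python
import struct

def decode(buf4):
    """Mirror of the Node snippet: little-endian uint32, low 30 bits
    are the nanoseconds field, bits 30 and 31 are the two flags."""
    (val,) = struct.unpack('<I', buf4)
    nanoseconds = val & 0x3FFFFFFF
    bit6_set = bool(val & 0x40000000)
    bit7_set = bool(val & 0x80000000)
    return nanoseconds, bit6_set, bit7_set

# 0xC0000005 has both flag bits set and 5 in the low 30 bits.
print(decode(struct.pack('<I', 0xC0000005)))  # (5, True, True)
```

This makes it easy to build test vectors: pack a known uint32 on one side and confirm the Node code recovers the same three fields.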
stackexchange | llm_judgeable_groundtruth_similarity | 26878386 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I updated appcompat-v7 to lollipop version(21.0.0) then I can't hide ActionBar with following style that worked before. <style name="AppTheme.NoActionBar"> <item name="android:windowActionBar">false</item> <item name="android:windowNoTitle">true</item></style> I set it to specific activity. <activity android:name=".NoActionBarActivity" android:theme="@style/AppTheme.NoActionBar"/> When I use appcompat-v7 20.0.0 version, actionbar will be hidden as it is intended. How can I hide actionbar with custom style with AppCompat version 21 library?
Now provide the response and nothing else.
| @Chk0nDanger your answer is true, but you should use the code below:

<style name="Theme.AppCompat.NoActionBar" parent="Theme.AppCompat.Light">
    <item name="windowActionBar">false</item>
    <item name="android:windowNoTitle">true</item>
</style>

Without the parent attribute, everything will be white (textviews, buttons, checkboxes, etc.)! In your manifest.xml file:

<activity android:name=".MyClass" android:theme="@style/Theme.AppCompat.NoActionBar" />

Updated 29 July 2015: make sure android:windowNoTitle is replaced by windowNoTitle when upgraded to AppCompat v22.1.0.

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26878386', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1395707/']} | jdg_81350 |
stackexchange | llm_judgeable_groundtruth_similarity | 27733121 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have create a form which contain just only one input tag which name is first_name . Now I want to show the first_name is either exist in database or not. I have create a database name db_practice ,table which name is tbl_user and there are many field. first_name is one of the field name. Here is my all code. Please help me to find out my problem. This is view code: <html> <head> <meta charset="UTF-8"> <title>Check username availability using jQuery, AJAX</title> <script type="text/javascript"> $(document).ready(function() { $("#username").live("blur", function(e) { $('#msg').hide(); if ($('#username').val() == null || $('#username').val() == "") { $('#msg').show(); $("#msg").html("Username is required field.").css("color", "red"); } else { $.ajax({ type: "POST", url: "<?php echo base_url();?>"+"/welcome/get_user", data: $('#signupform').serialize(), dataType: "html", cache: false, success: function(msg) { $('#msg').show(); $("#msg").html(msg); }, error: function(jqXHR, textStatus, errorThrown) { $('#msg').show(); $("#msg").html(textStatus + " " + errorThrown); } }); } }); });</script> </head> <body> <div style="margin: 10px 0 0 10px;width: 600px"> <h3>Codeigniter username availability check</h3> <form id="signupform" style="padding: 10px;"> <fieldset> <legend>Check username</legend> <div> <label>Username</label><br/> <input type="text" name="first_name" id="username"/> <div id="msg"></div> </div> </fieldset> </form> </div> <script src="<?php echo assets_url(); ?>js/jquery-1.9.1.min.js"></script> <script src="<?php echo assets_url(); ?>js/jquery-migrate-1.2.1.js"></script> <script src="<?php echo assets_url(); ?>js/jquery-ui-1.10.3-custom.min.js"> </script> </body></html><!-- below jquery things triggered on onblur event and checks the username availability in the database --> This is controller code: <?php if ( ! 
defined('BASEPATH')) exit('No direct script access allowed');class Welcome extends CI_Controller { public function __construct() { parent::__construct(); $this->load->model('my_model'); } public function index() { $this->load->view('welcome_message'); } public function get_user() { if (isset($_POST['first_name'])) { $username = $_POST['first_name']; $results = $this->my_model->get_user($username); if ($results === TRUE) { echo '<span style="color:red;">Username unavailable</span>'; } else { echo '<span style="color:green;">Username available</span>'; } } else { echo '<span style="color:red;">Username is required field.</span>'; } }}/* End of file welcome.php *//* Location: ./application/controllers/welcome.php */ This is model code: <?php/* * To change this template, choose Tools | Templates * and open the template in the editor. *//** * Description of my_model * * @author Masud */ class My_Model extends CI_Model { //put your code here public function get_user($username) { $this->db->select('*'); $this->db->from('tbl_user'); $this->db->where('first_name',$username); $query = $this->db->get(); $result = $query->row(); if ($result) { return TRUE; } else { return FALSE; } } }?>
Now provide the response and nothing else.
| The number 919971692474 is normally displayed as 9.19972E+11 in Excel. To force it to display the full number you have to set the number format to 0 (right click, format cell, choose custom type '0'). And when you do that, the full number is displayed. If you don't set a format in excel, the xlsx module uses the "General" format and that number format displays the phone number as an exponential. If the file is incorrect, you can override the CSV formatting by deleting the w key and adding a z key corresponding to the desired number format. For example, to change cell A2: var sheet = workbook.Sheets[workbook.SheetNames[0]];delete sheet.A2.w;sheet.A2.z = '0'; If you want to do this for all number cells, just loop: Object.keys(sheet).forEach(function(s) { if(sheet[s].w) { delete sheet[s].w; sheet[s].z = '0'; }}); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27733121', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4286201/']} | jdg_81351 |
stackexchange | llm_judgeable_groundtruth_similarity | 13509752 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently building a site from a PSD. Most of the fonts have a letter-tracking of -25 ( <- AV-> : I'm guessing that is the symbol for letter spacing?). How would I get the same effect in CSS? I know the property is letter-spacing: X but it doesn't take percentages and -25px or pts would be a huge figure!
Now provide the response and nothing else.
| In Photoshop letter-spacing is called letter-tracking and is specifically the space between each letter of text. The problem is that Photoshop Letter Tracking doesn’t convert 1:1 to Letter Spacing in CSS. It's very easy to calculate the conversion from Photoshop to CSS though. Formulas to convert Photoshop Letter Tracking to CSS Letter-spacing em Formula X / 1000 = YX is the value of the letter-tracking in PhotoshopY is the value in "em" to use in CSS Example Consider the following example: Photoshop has a letter tracking value of 200.. 200 / 1000 = 0.2 The result is 0.2em in CSS. px Formula If you prefer to use "px" values the formula is X * S / 1000 = PX is equal to the letter-tracking value in PhotoshopS is the font-size in pixelsP is the resulted value in "px" to use in CSS Example Consider the following example: Photoshop has a letter tracking value of 200 and a font-size value of 10. 200 * 10 / 1000 = 2 The result is 2px in CSS. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13509752', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1160747/']} | jdg_81352 |
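The two formulas above are easy to wrap in code. A small Python sketch of the conversions (function names are my own):

```python
def tracking_to_em(tracking):
    """Photoshop letter-tracking -> CSS letter-spacing in em:
    divide the tracking value by 1000."""
    return tracking / 1000

def tracking_to_px(tracking, font_size_px):
    """Photoshop letter-tracking -> CSS letter-spacing in px:
    multiply by the font size in pixels, then divide by 1000."""
    return tracking * font_size_px / 1000

print(tracking_to_em(200))      # 0.2
print(tracking_to_px(200, 10))  # 2.0
print(tracking_to_em(-25))      # -0.025 (the value from the question)
```

So the -25 tracking from the PSD becomes letter-spacing: -0.025em in CSS, not a huge pixel value.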
stackexchange | llm_judgeable_groundtruth_similarity | 1793532 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been studying about k-means clustering , and one thing that's not clear is how you choose the value of k. Is it just a matter of trial and error, or is there more to it?
Now provide the response and nothing else.
| You can maximize the Bayesian Information Criterion (BIC): BIC(C | X) = L(X | C) - (p / 2) * log n where L(X | C) is the log-likelihood of the dataset X according to model C , p is the number of parameters in the model C , and n is the number of points in the dataset.See "X-means: extending K -means with efficient estimation of the number of clusters" by Dan Pelleg and Andrew Moore in ICML 2000. Another approach is to start with a large value for k and keep removing centroids (reducing k) until it no longer reduces the description length. See "MDL principle for robust vector quantisation" by Horst Bischof, Ales Leonardis, and Alexander Selb in Pattern Analysis and Applications vol. 2, p. 59-72, 1999. Finally, you can start with one cluster, then keep splitting clusters until the points assigned to each cluster have a Gaussian distribution. In "Learning the k in k -means" (NIPS 2003), Greg Hamerly and Charles Elkan show some evidence that this works better than BIC, and that BIC does not penalize the model's complexity strongly enough. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1793532', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2147/']} | jdg_81353 |
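The BIC criterion above is straightforward to compute once you have a log-likelihood and a parameter count for each candidate k: fit a model for each k in a range and keep the k that maximizes BIC. A minimal Python sketch of just the scoring step (the function names and the example numbers are my own; the log-likelihoods would come from your clustering fit, e.g. a Gaussian mixture):

```python
import math

def bic(log_likelihood, num_params, num_points):
    """BIC(C | X) = L(X | C) - (p / 2) * log n, as in Pelleg & Moore.
    Larger is better under this sign convention."""
    return log_likelihood - (num_params / 2) * math.log(num_points)

def best_k(scores_by_k, num_points):
    """scores_by_k: {k: (log_likelihood, num_params)}.
    Returns the k with the highest BIC."""
    return max(scores_by_k,
               key=lambda k: bic(*scores_by_k[k], num_points))

# Hypothetical fits for k = 1..4 on a 100-point dataset: the jump in
# likelihood from k=1 to k=2 is large, later gains are small.
candidates = {1: (-500.0, 2), 2: (-420.0, 5),
              3: (-415.0, 8), 4: (-414.0, 11)}
print(best_k(candidates, 100))  # 2
```

The penalty term (p/2)·log n is what stops k from growing without bound: past k=2 the tiny likelihood gains no longer pay for the extra parameters.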
stackexchange | llm_judgeable_groundtruth_similarity | 38352779 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As we know, the Java '+' operator is used for both arithmetic addition and string concatenation. I need to know the exact expected behavior and the rule applied when I use both together. When I try the following Java code:

System.out.println("3" + 3 + 3); // prints 333 (string concatenation only)
System.out.println(3 + "3" + 3); // prints 333 (string concatenation only)
System.out.println(3 + 3 + "3"); // prints 63 (arithmetic add, then string concatenation)
Now provide the response and nothing else.
| This is basic operator precedence, combined with string concatenation vs. numerical addition. Quoting the language specification:

If only one operand expression is of type String, then string conversion (§5.1.11) is performed on the other operand to produce a string at run time. The result of string concatenation is a reference to a String object that is the concatenation of the two operand strings. The characters of the left-hand operand precede the characters of the right-hand operand in the newly created string. The String object is newly created (§12.5) unless the expression is a constant expression (§15.28). An implementation may choose to perform conversion and concatenation in one step to avoid creating and then discarding an intermediate String object. To increase the performance of repeated string concatenation, a Java compiler may use the StringBuffer class or a similar technique to reduce the number of intermediate String objects that are created by evaluation of an expression. For primitive types, an implementation may also optimize away the creation of a wrapper object by converting directly from a primitive type to a string.

See the language specification here.

TL;DR:

The + operator is evaluated from left to right.
If either operand in a binary + is a String, the result is a String (concatenation).
If both operands are numbers, the result is a number (addition).
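A small program makes the left-to-right evaluation visible; parentheses are the only way to force concatenation before the arithmetic:

```java
public class PlusDemo {
    public static void main(String[] args) {
        System.out.println("3" + 3 + 3);   // ("3" + 3) -> "33", then "33" + 3 -> "333"
        System.out.println(3 + "3" + 3);   // (3 + "3") -> "33", then "33" + 3 -> "333"
        System.out.println(3 + 3 + "3");   // (3 + 3)   -> 6,    then 6 + "3"  -> "63"
        System.out.println(3 + (3 + "3")); // parentheses first: "33", then 3 + "33" -> "333"
    }
}
```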
stackexchange | llm_judgeable_groundtruth_similarity | 9120294 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm doing some scripting and I need to get a list of all the parent POMs for any given POM. The dependency plugin seems to be interested only in the dependencies listed in the dependencies section of the POM; there doesn't seem to be a way to show the parent POMs, which are also required dependencies for Maven to work. Am I missing something basic?
Now provide the response and nothing else.
| There is no simple Maven command that will show you the chain of parent POMs for a pom.xml. The reason for this is that it is not a common question one would typically ask (more on that below). For your script, you'll just have to parse the pom.xml file, get the parent artifact coordinates, get a hold of the artifact's pom.xml file and then parse it's pom.xml file (and repeat). Sorry, but there is no short cut I know of, but other folks have solved similar problems . You are right that technically the parent pom is a dependency of your project, but it is not a literal Maven Dependency and is handled completely differently. The chain of parent poms, along with active profiles, your settings.xml file, and the Maven super pom from the installation directory are all combined together to create your project's effective pom . The effective POM is what Maven really uses to do its work. So basically, the parent pom inheritance chain is already resolved and combined before the dependency plugin (or any other plugin) is even activated. The questions most people typically ask is 'what does my REAL pom.xml really look like when Maven is done combining everything?' or 'What is the result my inheritance chain of parent poms?' or 'How are my pom.xml properties affected by an active profile?' The effective pom will tell you all of this. I know you didn't ask, but for others reading this, if you want to see your parent pom.xml, simply open up the pom.xml in the M2Eclipse POM editor and click on the parent artifact link on the overview tab. In this way you can quickly move up the chain of pom.xml files with just a single click per pom. It would be a strange project that had more than 3 or 4 parent poms of inheritance. If you want to see your effective pom, you can run the command mvn help:effective-pom . Alternatively, you can click on the Effective POM tab in M2Eclipse's POM editor. 
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9120294', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/953006/']} | jdg_81355 |
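For the scripting side of the question above, walking the chain means parsing each pom.xml's &lt;parent&gt; element in turn. A rough sketch of one step using Python's standard library (the sample POM and its coordinates are invented; a real POM carries the `http://maven.apache.org/POM/4.0.0` namespace, which the code strips before matching tags):

```python
import xml.etree.ElementTree as ET

def parent_coordinates(pom_xml):
    """Return (groupId, artifactId, version) of the <parent> element, or None."""
    root = ET.fromstring(pom_xml)
    # Strip namespace prefixes like {http://maven.apache.org/POM/4.0.0}project
    for el in root.iter():
        el.tag = el.tag.split('}')[-1]
    parent = root.find('parent')
    if parent is None:
        return None
    return tuple(parent.findtext(tag) for tag in ('groupId', 'artifactId', 'version'))

sample = """<project>
  <parent>
    <groupId>org.example</groupId>
    <artifactId>example-parent</artifactId>
    <version>1.2.3</version>
  </parent>
  <artifactId>example-child</artifactId>
</project>"""

print(parent_coordinates(sample))  # ('org.example', 'example-parent', '1.2.3')
```

A script would repeat this: resolve the parent's pom.xml (from the local repository or a remote one), parse it, and stop when `parent_coordinates` returns None.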
stackexchange | llm_judgeable_groundtruth_similarity | 686980 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a script like this:

find path -type f -exec md5sum {} +

It produces this output:

/tmp❯ find $pwd -type f -exec md5sum {} +
a7c8252355166214d1f6cd47db917226  ./guess.bash
e1c06d85ae7b8b032bef47e42e4c08f9  ./qprint.bash
8d672b7885d649cb76c17142ee219181  ./uniq.bash
2d547f5b610ad3307fd6f466a74a03d4  ./qpe
523166a51f0afbc89c5615ae78b3d9b0  ./Makefile
57a01f2032cef6492fc77d140b320a32  ./my.c
c5c7b1345f1bcb57f6cf646b3ad0869e  ./my.h
6014bc12ebc66fcac6460d634ec2a508  ./my.exe
0ff50f0e65b0d0a5e1a9b68075b297b8  ./levik/2.txt
5f0650b247a646355dfec2d2610a960c  ./levik/1.txt
5f0650b247a646355dfec2d2610a960c  ./levik/3.txt

I need output like this instead, i.e. only the files whose checksums occur more than once:

5f0650b247a646355dfec2d2610a960c  ./levik/1.txt
5f0650b247a646355dfec2d2610a960c  ./levik/3.txt
Now provide the response and nothing else.
| If you’ve got GNU uniq , you can ask it to show all lines duplicating the first 32 characters¹: find path -type f -exec md5sum {} + | sort | uniq -D -w32 The list needs to be sorted since uniq only spots consecutive duplicates. This also assumes that none of the file paths contain a newline character; to handle that, assuming GNU implementations of all the tools, use: find . -type f -exec md5sum -z {} + | sort -z | uniq -z -D -w32 | tr '\0' '\n' (GNU md5sum has its own way of handling special characters in file names , but this produces output which isn’t usable with uniq in the way shown above.) ¹ Technically, in current versions of GNU uniq , it's the first 32 bytes that are considered, for instance UTF-8 encoded á and é characters would be considered identical by uniq -w1 as their encoding both start with the 0xc3 byte. In the case of 0-9a-f characters found in hex-encoded MD5 sums though, that makes no difference as those characters are always encoded on one byte. | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/686980', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/498234/']} | jdg_81356 |
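If a shell pipeline is awkward to embed, or the filenames are hostile, the same grouping can be done in a few lines of Python with hashlib. This is an alternative sketch, not the answer's pipeline; MD5 is used only to match md5sum's output:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def duplicate_groups(root):
    """Group files under `root` by MD5 digest; keep only groups with > 1 file."""
    by_digest = defaultdict(list)
    for path in Path(root).rglob('*'):
        if path.is_file():
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            by_digest[digest].append(path)
    return {d: files for d, files in by_digest.items() if len(files) > 1}
```

Printing each group then mirrors the `uniq -D -w32` output, with no newline or sorting caveats.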
stackexchange | llm_judgeable_groundtruth_similarity | 41300209 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Most of my Celery tasks have an ETA longer than the maximum visibility timeout defined by Amazon SQS. The Celery documentation says:

This causes problems with ETA/countdown/retry tasks where the time to execute exceeds the visibility timeout; in fact if that happens it will be executed again, and again in a loop. So you have to increase the visibility timeout to match the time of the longest ETA you're planning to use.

At the same time it also says:

The maximum visibility timeout supported by AWS as of this writing is 12 hours (43200 seconds):

What should I do to avoid multiple executions of tasks in my workers if I am using SQS?
Now provide the response and nothing else.
| Generally it's not a good idea to have tasks with very long ETAs.

First of all, there is the "visibility_timeout" issue. You probably don't want a very big visibility timeout, because if the worker crashes one minute before the task is about to run, the queue will still wait for the visibility_timeout to elapse before sending the task to another worker, and I guess you don't want that to be another month. From the Celery docs:

Note that Celery will redeliver messages at worker shutdown, so having a long visibility timeout will only delay the redelivery of 'lost' tasks in the event of a power failure or forcefully terminated workers.

Also, SQS allows only so many tasks to be waiting to be acknowledged; SQS calls these "inflight messages". From http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html:

A message is considered to be in flight after it's received from a queue by a consumer, but not yet deleted from the queue. For standard queues, there can be a maximum of 120,000 inflight messages per queue. If you reach this limit, Amazon SQS returns the OverLimit error message. To avoid reaching the limit, you should delete messages from the queue after they're processed. You can also increase the number of queues you use to process your messages. For FIFO queues, there can be a maximum of 20,000 inflight messages per queue. If you reach this limit, Amazon SQS returns no error messages.

I see two possible solutions: you can either use RabbitMQ instead, which doesn't rely on visibility timeouts (there are "RabbitMQ as a service" offerings if you don't want to manage your own), or change your code to use really small ETAs (best practice).

These are my 2 cents; maybe @asksol can provide some extra insights. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41300209', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/709897/']} | jdg_81357 |
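One common way to keep ETAs small is to hold far-future jobs in your own storage and only hand them to the broker once they are close to due. This is purely an illustrative sketch of that idea; the job tuples, the one-hour window, and the function names are invented for the example and are not Celery API:

```python
from datetime import datetime, timedelta

ENQUEUE_WINDOW = timedelta(hours=1)  # well under SQS's 12-hour maximum

def split_due(scheduled_jobs, now):
    """Split stored (eta, job) pairs into those a periodic sweep should enqueue
    now (ETA within the window) and those that stay in storage until later."""
    ready, later = [], []
    for eta, job in scheduled_jobs:
        (ready if eta - now <= ENQUEUE_WINDOW else later).append((eta, job))
    return ready, later

now = datetime(2017, 1, 1, 12, 0)
jobs = [(now + timedelta(minutes=30), 'send-reminder'),
        (now + timedelta(days=30), 'monthly-report')]
ready, later = split_due(jobs, now)
print([name for _, name in ready])  # ['send-reminder']
```

A cron job or celery beat entry would run the sweep periodically, enqueueing the `ready` jobs with short ETAs.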
stackexchange | llm_judgeable_groundtruth_similarity | 531359 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm setting up my first Gluster 3.4 install and all is good up until I want to create a distributed replicated volume. I have 4 servers: 192.168.0.11, 192.168.0.12, 192.168.0.13 and 192.168.0.14.

From 192.168.0.11 I ran:

gluster peer probe 192.168.0.12
gluster peer probe 192.168.0.13
gluster peer probe 192.168.0.14

On each server I have a mounted storage volume at /export/brick1. I then ran on 192.168.0.11:

gluster volume create gv0 replica 2 192.168.0.11:/export/brick1 192.168.0.12:/export/brick1 192.168.0.13:/export/brick1 192.168.0.14:/export/brick1

But I get the error:

volume create: gv0: failed: Host 192.168.0.11 is not in 'Peer in Cluster' state

Sure enough, if you run gluster peer status it shows 3 peers with the other connected hosts, i.e.:

Number of Peers: 3

Hostname: 192.168.0.12
Port: 24007
Uuid: bcea6044-f841-4465-88e4-f76a0c8d5198
State: Peer in Cluster (Connected)

Hostname: 192.168.0.13
Port: 24007
Uuid: 3b5c188e-9be8-4d0f-a7bd-b738a88f2199
State: Peer in Cluster (Connected)

Hostname: 192.168.0.14
Port: 24007
Uuid: f6f326eb-0181-4f99-8072-f27652dab064
State: Peer in Cluster (Connected)

But, from 192.168.0.12, the same command also shows 3 hosts, and 192.168.0.11 is part of it, i.e.:

Number of Peers: 3

Hostname: 192.168.0.11
Port: 24007
Uuid: 09a3bacb-558d-4257-8a85-ca8b56e219f2
State: Peer in Cluster (Connected)

Hostname: 192.168.0.13
Uuid: 3b5c188e-9be8-4d0f-a7bd-b738a88f2199
State: Peer in Cluster (Connected)

Hostname: 192.168.0.14
Uuid: f6f326eb-0181-4f99-8072-f27652dab064
State: Peer in Cluster (Connected)

So 192.168.0.11 is definitely part of the cluster. The question is, why am I not able to create the volume when running the gluster command on the first Gluster server? Is this normal behaviour or some sort of bug?
Now provide the response and nothing else.
| I was seeing an obscure error message about an unconnected socket with peer 127.0.0.1:

[2013-08-16 00:36:56.765755] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1022)

It turns out the problem I was having was due to NAT. I was trying to create gluster servers that were behind a NAT device and use the public IP to resolve the names. This is just not going to work properly for the local machine.

What I had was something like the following on each node. A hosts file containing:

192.168.0.11 gluster1
192.168.0.12 gluster2
192.168.0.13 gluster3
192.168.0.14 gluster4

The fix was to remove the trusted peers first:

sudo gluster peer detach gluster2
sudo gluster peer detach gluster3
sudo gluster peer detach gluster4

Then change the hosts file on each machine to be:

# Gluster1
127.0.0.1    gluster1
192.168.0.12 gluster2
192.168.0.13 gluster3
192.168.0.14 gluster4

# Gluster2
192.168.0.11 gluster1
127.0.0.1    gluster2
192.168.0.13 gluster3
192.168.0.14 gluster4

etc.

Then peer probe, and finally create the volume, which was then successful. I doubt that using IP addresses (the public ones) will work in this case. It should work if you use the private addresses behind your NAT. In my case, each server was behind a NAT in the AWS cloud. | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/531359', 'https://serverfault.com', 'https://serverfault.com/users/14631/']} | jdg_81358 |
stackexchange | llm_judgeable_groundtruth_similarity | 637119 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been using Get-ADUser and Get-ADComputer a few times to help troubleshoot problems in the system. I have seen a few ways of identifying which user is logged on a specific machine, through either the registry, PsLoggedon.exe, or other scripts, but since I'm not really a network administrator, I often get access denied. Which is fine, because I'm not really interested in digging that deep. All I need is to somehow identify which users have an account (profile) on a machine (or which machines have a user account) so I can contact them and help them whenever an exception is coming from their IP address. Is there absolutely no relationship between these two objects out of the box?
Now provide the response and nothing else.
| There is absolutely no relationship between these objects "out of the box". Windows 8 / Windows Server 2012 introduced a concept of a "primary computer" Active Directory schema attribute but I highly doubt you're going to find that being used. Getting the logged-on user on a remote machine is one of those things that sounds like it should be really easy but, in practice, isn't. I think you're going to have to get some cooperation from your network administration staff to reliably get what you're looking for. Remotely querying logged-on user information through "normal means" ( psloggedon , WMI queries, remote registry access) is going to require that you either have local Administrator rights on the remote machine, or that changes are made to defaults to grant your non-Administrator context that right. To get into talking about "hacks": I could imagine a scenario where your exception handler attempts to redirect the user's browser to a page that requires NTLM authentication and, if the clients are configured to automatically attempt authentication with the logged-on user's credential, you could "harvest" the credential that way. I could also see how that could be badly misinterpreted by the network administration staff as being an attack on users, so I'd strongly recommend against doing that. Presumably you're administering a webapp but not part of the network administration staff. If you could get network administration to buy into joining your web server to the domain you could enable authentication, at which point you'd know the username of the remote user and wouldn't have to muck about with any of this IP address stuff. If your clients are properly configured that authentication can happen transparently, too. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/637119', 'https://serverfault.com', 'https://serverfault.com/users/232594/']} | jdg_81359 |
stackexchange | llm_judgeable_groundtruth_similarity | 1168746 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I count the number of objects of a class type within a method of that class? For that matter, how can I do it outside of a class without adding the objects to a list? (I should have thought of that! Thanks! I'm going to leave it unanswered for a little while to see if there is a better way, because I agree.) I'm just sort of wrapping my head around OO. If you don't mind, let me explain a little more, and maybe there is a better way in general: I have an object class that I want to add 3 pieces of information to, but first I want to cycle through and make sure there are no other objects with any of the three pieces the same, and if there are, do something different for each case.
Now provide the response and nothing else.
| The only way to accomplish what you're looking for is to keep a static list of these objects in the class itself. If you just want to see if there is an instance somewhere that hasn't been garbage collected, then you'll want to use the WeakReference class. For example...

public class MyClass
{
    private static List<WeakReference> instances = new List<WeakReference>();

    public MyClass()
    {
        instances.Add(new WeakReference(this));
    }

    public static IList<MyClass> GetInstances()
    {
        List<MyClass> realInstances = new List<MyClass>();
        List<WeakReference> toDelete = new List<WeakReference>();

        foreach (WeakReference reference in instances)
        {
            if (reference.IsAlive)
            {
                realInstances.Add((MyClass)reference.Target);
            }
            else
            {
                toDelete.Add(reference);
            }
        }

        foreach (WeakReference reference in toDelete)
            instances.Remove(reference);

        return realInstances;
    }
}

Since you're new to OO/.NET, don't let the WeakReference use scare you. The way garbage collection works is by tracking whether objects are still reachable. As long as some piece of code or an object has access to a particular instance (meaning it's within scope as, or as part of, a local, instance, or static variable) then that object is considered alive. Once that variable falls OUT of scope, at some point after that the garbage collector can/will collect it. However, if you were to maintain a list of all references, they would never fall out of scope since they would exist as references in that list. The WeakReference is a special class that allows you to maintain a reference to an object that the garbage collector will ignore. The IsAlive property indicates whether or not the WeakReference is pointing to a valid object that still exists. So what we do here is keep this list of WeakReferences that point to every instance of MyClass that's been created. When you want to obtain a list of them, we iterate through our WeakReferences and snatch out all of them that are alive.
Any we find that are no longer alive are placed into another temporary list so that we can delete them from our outer list (so that the WeakReference class itself can be collected and our list doesn't grow huge without reason). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1168746', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/135056/']} | jdg_81360 |
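The same live-instance bookkeeping exists in other runtimes too. For comparison only (this is not part of the C# answer), here is the analogous pattern in Python using the standard weakref module, where `weakref.ref` plays the role of WeakReference:

```python
import gc
import weakref

class Tracked:
    _refs = []  # class-level list of weak references, like the static List<WeakReference>

    def __init__(self):
        Tracked._refs.append(weakref.ref(self))

    @classmethod
    def instances(cls):
        """Return the instances still alive, pruning dead references as we go."""
        alive = [r() for r in cls._refs]          # None for collected objects
        cls._refs = [r for r, obj in zip(cls._refs, alive) if obj is not None]
        return [obj for obj in alive if obj is not None]

a, b = Tracked(), Tracked()
print(len(Tracked.instances()))  # 2
del a
gc.collect()
print(len(Tracked.instances()))  # 1: only b remains reachable
```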
stackexchange | llm_judgeable_groundtruth_similarity | 6817107 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
abstract class CustomControl : UserControl
{
    protected abstract int DoStuff();
}

class DetailControl : CustomControl
{
    protected override int DoStuff()
    {
        // do stuff
        return result;
    }
}

I dropped a DetailControl in a form. It renders correctly at runtime, but the designer displays an error and won't open, because the base user control is abstract. For the moment, I'm contemplating the following patch, which seems pretty wrong to me, as I want the child classes to be forced to implement the method.

class CustomControl : UserControl
{
    protected virtual int DoStuff()
    {
        throw new InvalidOperationException("This method must be overridden.");
    }
}

class DetailControl : CustomControl
{
    protected override int DoStuff()
    {
        // do stuff
        return result;
    }
}

Does anyone have a better idea on how to work my way around this problem?
Now provide the response and nothing else.
| What we want

First, let's define the final class and the base abstract class.

public class MyControl : AbstractControl
...
public abstract class AbstractControl : UserControl // Also works for Form
...

Now all we need is a description provider.

public class AbstractControlDescriptionProvider<TAbstract, TBase> : TypeDescriptionProvider
{
    public AbstractControlDescriptionProvider()
        : base(TypeDescriptor.GetProvider(typeof(TAbstract)))
    {
    }

    public override Type GetReflectionType(Type objectType, object instance)
    {
        if (objectType == typeof(TAbstract))
            return typeof(TBase);

        return base.GetReflectionType(objectType, instance);
    }

    public override object CreateInstance(IServiceProvider provider, Type objectType, Type[] argTypes, object[] args)
    {
        if (objectType == typeof(TAbstract))
            objectType = typeof(TBase);

        return base.CreateInstance(provider, objectType, argTypes, args);
    }
}

Finally we just apply a TypeDescriptionProvider attribute to the abstract control.

[TypeDescriptionProvider(typeof(AbstractControlDescriptionProvider<AbstractControl, UserControl>))]
public abstract class AbstractControl : UserControl
...

And that's it. No middle control required. And the provider class can be applied to as many abstract bases as we want in the same solution. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6817107', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_81361 |
stackexchange | llm_judgeable_groundtruth_similarity | 13585 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a sorted array of integers, I want to find the number of pairs that sum to $0$. For example, given $\{-3,-2,0,2,3,4\}$, the number of pairs that sum to zero is $2$. Let $N$ be the number of elements in the input array. If I use binary search to find the additive inverse of an element in the array, the cost is $O(\log N)$. If I do that for every element in the array, the total is $O(N\log N)$. How can I find an algorithm of order $O(N)$?
Now provide the response and nothing else.
| Let $A$ be the sorted input array. Keep two pointers $l$ and $r$ that go through the elements in $A$. The pointer $l$ will go through the "left part" of $A$, that is the negative integers. The pointer $r$ does the same for the "right part", the positive integers. Below, I will outline a pseudocode solution and assume that $0 \notin A$ for minor simplicity. Omitted are also the checks for the cases where there are only positive or only negative integers in $A$.

COUNT-PAIRS(A[1..N]):
    l = index of the last negative integer in A
    r = index of the first positive integer in A
    count = 0

    while (l >= 1 and r <= N)
        if (A[l] + A[r] == 0)
            ++count; ++r; --l; continue

        if (A[r] > -1 * A[l])
            --l
        else
            ++r

It is obvious the algorithm takes $O(N)$ time. | {} | {'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/13585', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/8036/']} | jdg_81362 |
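The same idea runs as-is in Python, using the slightly simpler formulation with one pointer at each end of the array. This sketch assumes the input is sorted and contains distinct values (so each matching pair is counted once):

```python
def count_zero_sum_pairs(a):
    """Count pairs (a[l], a[r]) with a[l] + a[r] == 0 in a sorted list, in O(n)."""
    l, r = 0, len(a) - 1
    count = 0
    while l < r:
        s = a[l] + a[r]
        if s == 0:
            count += 1
            l += 1
            r -= 1
        elif s < 0:
            l += 1   # sum too small: need a larger value on the left
        else:
            r -= 1   # sum too large: need a smaller value on the right
    return count

print(count_zero_sum_pairs([-3, -2, 0, 2, 3, 4]))  # 2
```

Note that a single $0$ cannot pair with itself here, since the loop stops when the pointers meet.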
stackexchange | llm_judgeable_groundtruth_similarity | 50437054 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So, I'm trying to set an initial state for an edit component that gets data from the server and now should be editable in the component state. But when I try to do this:

<Query query={POST_QUERY} variables={{ id: this.props.match.params.id }}>
  {({ data, loading, error }) => {
    this.setState({ title: data.title })

I get stuck in an infinite loop, since this is in render. Should I not use the component state with the Query component? And if not, what is the alternative?
Now provide the response and nothing else.
| Whatever component needs this data as state should be rendered inside the Query component, and then have the data passed down to it as a prop. For example:

class MyComponent extends React.Component {
  constructor (props) {
    super(props)
    this.state = { title: props.post.title }
  }
}

<Query query={POST_QUERY} variables={{ id: this.props.match.params.id }}>
  {({ data, loading, error }) => (
    <MyComponent post={data.post}/>
  )}
</Query> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50437054', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1227922/']} | jdg_81363 |
stackexchange | llm_judgeable_groundtruth_similarity | 4843 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The "standard" way to compute Kappa for a predictive classification model (Witten and Frank page 163) is to construct the random confusion matrix in such a way that the number of predictions for each class is the same as the model predicted. For a visual, see (right side is the random): Does anyone know why this is the case, instead of truly creating a random confusion matrix where the prior probabilities drive the number of predictions for each class. That seems the more accurate comparison against "a null model". For example, in this case, the number of actual and predicted classes would coincide (in the image uploaded, this would mean that the columns of the random confusion matrix would be 100, 60 and 40 respectively). Thanks! BMiner
Now provide the response and nothing else.
| It might be useful to consider Cohen's $\kappa$ in the context of inter-rater agreement. Suppose you have two raters individually assigning the same set of objects to the same categories. You can then ask for overall agreement by dividing the sum of the diagonal of the confusion matrix by the total sum. But this does not take into account that the two raters will also, to some extent, agree by chance. $\kappa$ is supposed to be a chance-corrected measure conditional on the baseline frequencies with which the raters use the categories (marginal sums). The expected frequency of each cell under the assumption of independence given the marginal sums is then calculated just like in the $\chi^2$ test - this is equivalent to Witten & Frank's description (see mbq's answer). For chance-agreement, we only need the diagonal cells.

In R:

# generate the given data
> lvls <- factor(1:3, labels=letters[1:3])
> rtr1 <- rep(lvls, c(100, 60, 40))
> rtr2 <- rep(rep(lvls, nlevels(lvls)), c(88,10,2, 14,40,6, 18,10,12))
> cTab <- table(rtr1, rtr2)
> addmargins(cTab)
      rtr2
rtr1    a   b   c Sum
  a    88  10   2 100
  b    14  40   6  60
  c    18  10  12  40
  Sum 120  60  20 200

> library(irr)  # for kappa2()
> kappa2(cbind(rtr1, rtr2))
 Cohen's Kappa for 2 Raters (Weights: unweighted)
 Subjects = 200
   Raters = 2
    Kappa = 0.492
        z = 9.46
  p-value = 0

# observed frequency of agreement (diagonal cells)
> fObs <- sum(diag(cTab)) / sum(cTab)
# frequency of agreement expected by chance (like chi^2)
> fExp <- sum(rowSums(cTab) * colSums(cTab)) / sum(cTab)^2
> (fObs-fExp) / (1-fExp)  # Cohen's kappa
[1] 0.4915254

Note that $\kappa$ is not universally accepted at doing a good job, see, e.g., here, or here, or the literature cited in the Wikipedia article. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/4843', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/2040/']} | jdg_81364 |
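The same computation is only a few lines in any language. Here it is in Python on the confusion matrix from the answer, using just the standard library so the arithmetic is easy to check by hand:

```python
def cohens_kappa(table):
    """Unweighted Cohen's kappa from a square confusion matrix (list of rows)."""
    n = sum(sum(row) for row in table)
    # observed frequency of agreement: diagonal cells
    f_obs = sum(table[i][i] for i in range(len(table))) / n
    # chance agreement expected from the marginal sums
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    f_exp = sum(r * c for r, c in zip(row_sums, col_sums)) / n**2
    return (f_obs - f_exp) / (1 - f_exp)

cTab = [[88, 10, 2],
        [14, 40, 6],
        [18, 10, 12]]
print(round(cohens_kappa(cTab), 7))  # 0.4915254, matching the R result
```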
stackexchange | llm_judgeable_groundtruth_similarity | 17736571 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a really simple question, but I was wondering if someone could explain what the 4th line is actually doing? So the first line gives an event to the handler. I don't really know in what circumstances handler will return null or what the last line does. When you pass the handler your object and which property changed, what does it do with them?

PropertyChangedEventHandler handler = PropertyChanged; // PropertyChanged is the event
if (handler != null)
{
    handler(this, new PropertyChangedEventArgs(name));
}

I assume I used this to get this code, but I would like to understand what it is doing fully.
Now provide the response and nothing else.
| If you just did:

PropertyChanged(this, new PropertyChangedEventArgs(name))

you would get a NullReferenceException if no one was subscribed to the event PropertyChanged. To counteract this you add a null check:

if (PropertyChanged != null)
{
    PropertyChanged(this, new PropertyChangedEventArgs(name))
}

Now, if you are using multi-threading, someone could unsubscribe between the null check and the calling of the event, so you could still get a NullReferenceException. To handle that, we copy the event handler to a temporary variable:

PropertyChangedEventHandler handler = PropertyChanged;

if (handler != null)
{
    handler(this, new PropertyChangedEventArgs(name));
}

Now if someone unsubscribes from the event, our temporary variable handler will still point to the old function, and this code has no way of throwing a NullReferenceException. Most often you will see people use the keyword var instead; this saves typing the full type of the temporary variable, and it is the form you will see most often in code:

var handler = PropertyChanged;

if (handler != null)
{
    handler(this, new PropertyChangedEventArgs(name));
}

(As an aside: since C# 6, the null-conditional operator expresses the same thread-safe pattern in one line: PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17736571', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2324576/']} | jdg_81365 |
stackexchange | llm_judgeable_groundtruth_similarity | 15961099 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What I want to get is the same behaviour that this scroll view has: I know that this is using HTML and not the native API, but I'm trying to implement it as a UIKit component. Now, to the behaviour I'm looking for: Notice that it's a paged scroll view, but the "page size" is less than the view's width. When you scroll it from left to right each page "snap" to the left-most item. When you scroll it from the right end to the left it "snaps" to the right-most item. The same page but now right-to-left: What I've tried: I've tried making the scroll view smaller than it's super view and overriding hitTest, and that got me that left-to-right behaviour. I've tried implementing scrollViewWillEndDragging:withVelocity:targetContentOffset: and setting the targetContentOffset I want but since I can't change the velocity it just scrolls too slowly or too fast. I've tried implementing scrollViewDidEndDecelerating: and then animating to the correct offset but the scroll view first stops then moves, it doesn't look natural. I've tried implementing scrollViewDidEndDragging:willDecelerate: and then animating to the correct offset but the scroll view "jumps" and does not animate correctly. I'm out of ideas. Thanks! Update: I ended up using Rob Mayoff's method, it looks clean. I changed it so it would work when the velocity is 0, for example when a user drags, stops and releases the finger. 
- (void)scrollViewWillEndDragging:(UIScrollView *)scrollView withVelocity:(CGPoint)velocity targetContentOffset:(CGPoint *)targetContentOffset {
    CGFloat maxOffset = scrollView.contentSize.width - scrollView.bounds.size.width;
    CGFloat minOffset = 0;
    if (velocity.x == 0) {
        CGFloat targetX = MAX(minOffset, MIN(maxOffset, targetContentOffset->x));
        CGFloat diff = targetX - baseOffset;
        if (ABS(diff) > offsetStep/2) {
            if (diff > 0) {
                // going left
                baseOffset = MIN(maxOffset, baseOffset + offsetStep);
            } else {
                // going right
                baseOffset = MAX(minOffset, baseOffset - offsetStep);
            }
        }
    } else {
        if (velocity.x > 0) {
            baseOffset = MIN(maxOffset, baseOffset + offsetStep);
        } else {
            baseOffset = MAX(minOffset, baseOffset - offsetStep);
        }
    }
    targetContentOffset->x = baseOffset;
}
Now provide the response and nothing else.
| Setting scrollView.decelerationRate = UIScrollViewDecelerationRateFast , combined with implementing scrollViewWillEndDragging:withVelocity:targetContentOffset: , seems to work for me using a collection view. First, I give myself some instance variables:

@implementation ViewController {
    NSString *cellClassName;
    CGFloat baseOffset;
    CGFloat offsetStep;
}

In viewDidLoad , I set the view's decelerationRate :

- (void)viewDidLoad {
    [super viewDidLoad];
    cellClassName = NSStringFromClass([MyCell class]);
    [self.collectionView registerNib:[UINib nibWithNibName:cellClassName bundle:nil] forCellWithReuseIdentifier:cellClassName];
    self.collectionView.decelerationRate = UIScrollViewDecelerationRateFast;
}

I need offsetStep to be the size of an integral number of items that fit in the view's on-screen bounds. I compute it in viewDidLayoutSubviews :

- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    UICollectionViewFlowLayout *layout = (UICollectionViewFlowLayout *)self.collectionView.collectionViewLayout;
    CGFloat stepUnit = layout.itemSize.width + layout.minimumLineSpacing;
    offsetStep = stepUnit * floorf(self.collectionView.bounds.size.width / stepUnit);
}

I need baseOffset to be the X offset of the view before scrolling starts. I initialize it in viewDidAppear: :

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    baseOffset = self.collectionView.contentOffset.x;
}

Then I need to force the view to scroll in steps of offsetStep . I do that in scrollViewWillEndDragging:withVelocity:targetContentOffset: . Depending on the velocity , I increase or decrease baseOffset by offsetStep . But I clamp baseOffset to a minimum of 0 and a maximum of contentSize.width - bounds.size.width .

- (void)scrollViewWillEndDragging:(UIScrollView *)scrollView withVelocity:(CGPoint)velocity targetContentOffset:(inout CGPoint *)targetContentOffset {
    if (velocity.x < 0) {
        baseOffset = MAX(0, baseOffset - offsetStep);
    } else if (velocity.x > 0) {
        baseOffset = MIN(scrollView.contentSize.width - scrollView.bounds.size.width, baseOffset + offsetStep);
    }
    targetContentOffset->x = baseOffset;
}

Note that I don't care what targetContentOffset->x comes in as. This has the effect of aligning to the left edge of the leftmost visible item, until the user scrolls all the way to the last item. At that point it aligns to the right edge of the rightmost visible item, until the user scrolls all the way back to the left. This seems to match the behavior of the App Store app. If that doesn't work for you, you can try replacing the last line ( targetContentOffset->x = baseOffset ) with this:

dispatch_async(dispatch_get_main_queue(), ^{
    [scrollView setContentOffset:CGPointMake(baseOffset, 0) animated:YES];
});

That also works for me. You can find my test app in this git repository .
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15961099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/366492/']} | jdg_81366 |
stackexchange | llm_judgeable_groundtruth_similarity | 16186083 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to get started with ASP.NET MVC Ajax calls. Controller:

public class AjaxTestController : Controller
{
    //
    // GET: /AjaxTest/
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult FirstAjax()
    {
        return Json("chamara", JsonRequestBehavior.AllowGet);
    }
}

View:

<head runat="server">
    <title>FirstAjax</title>
    <script src="http://code.jquery.com/jquery-1.9.1.js"></script>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min.js"></script>
    <script type="text/javascript">
        $(document).ready(function () {
            var serviceURL = '/AjaxTest/FirstAjax';
            $.ajax({
                type: "POST",
                url: serviceURL,
                data: param = "",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: successFunc,
                error: errorFunc
            });

            function successFunc(data, status) {
                alert(data);
            }

            function errorFunc() {
                alert('error');
            }
        });
    </script>
</head>

I just need to print an alert with the data returned from the controller method. The above code just prints "chamara" on my view; the alert is not firing. UPDATE: I modified my controller as below and it started working. I don't have a clear idea why it works now; could someone please explain? The parameter "a" is unrelated; I added it because I cannot add two methods with the same name and parameters. I think this might not be the real solution, but it works:

public class AjaxTestController : Controller
{
    //
    // GET: /AjaxTest/
    [HttpGet]
    public ActionResult FirstAjax()
    {
        return View();
    }

    [HttpPost]
    public ActionResult FirstAjax(string a)
    {
        return Json("chamara", JsonRequestBehavior.AllowGet);
    }
}
Now provide the response and nothing else.
| Remove the data attribute as you are not POSTING anything to the server (your controller does not expect any parameters). And in your AJAX method you can use Razor and use @Url.Action rather than a static string:

$.ajax({
    url: '@Url.Action("FirstAjax", "AjaxTest")',
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    success: successFunc,
    error: errorFunc
});

From your update:

$.ajax({
    type: "POST",
    url: '@Url.Action("FirstAjax", "AjaxTest")',
    contentType: "application/json; charset=utf-8",
    data: { a: "testing" },
    dataType: "json",
    success: function() { alert('Success'); },
    error: errorFunc
});
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16186083', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/367562/']} | jdg_81367 |
stackexchange | llm_judgeable_groundtruth_similarity | 20117 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I found the comparison function below (slightly modified) from a crypto library I was using. I was curious about the potential vulnerability to side channel attacks. Specifically, the character comparison is only done if the character being compared is within the bounds of the two strings. I suspected this might allow an attacker to determine string length. Perhaps this difference is simply too small to be subject to a timing attack, but I played with an attempt below. I basically create strings of increasing lengths and compare to a given initial string. I was expecting to perhaps see linear growth in comparison time both before and after the point where the second string becomes longer, but perhaps with a different slope since the operations performed are different. Instead, I see the data below (note the string being compared is 27 characters in length). Any explanation as to why I have no clue what I'm talking about would be greatly appreciated :) A small note, I did try with -O0 in case some strange optimization was at fault. The only thing I can think to do from here is start digging into the generated assembly.

#include <string.h>
#include <sys/time.h>
#include <stdio.h>

int CompareStrings(const char* s1, const char* s2) {
    int eq = 1;
    int s1_len = strlen(s1);
    int s2_len = strlen(s2);
    if (s1_len != s2_len) {
        eq = 0;
    }
    const int max_len = (s2_len < s1_len) ? s1_len : s2_len;

    // to prevent timing attacks, should check entire string
    // don't exit after found to be false
    int i;
    for (i = 0; i < max_len; ++i) {
        if (s1_len >= i && s2_len >= i && s1[i] != s2[i]) {
            eq = 1;
        }
    }
    return eq;
}

double time_diff(struct timeval x, struct timeval y) {
    double x_ms, y_ms, diff;
    x_ms = (double)x.tv_sec*1000000 + (double)x.tv_usec;
    y_ms = (double)y.tv_sec*1000000 + (double)y.tv_usec;
    diff = (double)y_ms - (double)x_ms;
    return diff;
}

void test_with_length(char* str1, int n) {
    char str2[n + 1];
    struct timeval tp1;
    struct timeval tp2;
    int i;
    for (i = 0; i < n; i++) {
        str2[i] = 'a';
    }
    str2[n] = '\0';
    gettimeofday(&tp1, NULL);
    for (i = 0; i < 20000000; i++) {
        CompareStrings(str1, str2);
    }
    gettimeofday(&tp2, NULL);
    printf("%d %.01f\n", n, time_diff(tp1, tp2));
}

int main() {
    char *str1 = "XXXXXXXXXXXXXXXXXXXXXXXXXXX";
    int i = 0;
    for (i = 1; i <= 100; i++) {
        test_with_length(str1, i);
    }
}
Now provide the response and nothing else.
| First, some comments on the code:

- Your calls to strlen are a black box - you cannot possibly know how they react to timing attacks.
- Your compare loop exits after you exhaust the shortest string, leaking the shorter length.
- You did not remove bias generated by the loop in test_with_length .
- Array accesses are not O(1) time complexity, especially if each element is smaller than the word width of your CPU.

A better way to compensate for such attacks:

bool CompareStr(char* strA, char* strB)
{
    bool result = true;
    int lenA = 0, lenB = 0;
    while (strA[lenA] != 0) lenA++;
    while (strB[lenB] != 0) lenB++;
    int maxLen = (lenA > lenB) ? lenA : lenB;

    // compensate for array access to some extent
    int diff = (lenA > lenB) ? lenA - lenB : lenB - lenA;
    char dummy = '\0';
    for (int i = 0; i < diff; i++)
    {
        dummy = strA[0];
    }

    // better way to avoid timing attacks
    for (int i = 0; i < maxLen; i++)
    {
        if ( ((i >= lenA) ? strA[0] : strA[i]) != ((i >= lenB) ? strB[0] : strB[i]) )
            result = false;
    }
    return result;
}

// compute loop overhead
time_pre_loop = getTime();
for (int i = 0; i < LOOP_COUNT; i++) { }
time_post_loop = getTime();
double diff = time_diff(time_post_loop, time_pre_loop);

time_pre_test = getTime();
for (int i = 0; i < LOOP_COUNT; i++)
    CompareStr(strA, strB);
time_post_test = getTime();
double result = time_diff(time_post_test, time_pre_test) - diff;

Anyway, here's where the fun part starts. Consider the following assembly code that could be used to compute whether two strings are equal, with early-out optimisation removed as a basic timing attack defense:

        xor ecx, ecx                        ; clear counter to 0
        mov dword ptr ds:[0040AAAA], 1      ; set result to true
loopStart:
        mov ah, byte ptr ds:[ecx+00401234]  ; read char from string A
        mov bh, byte ptr ds:[ecx+00402468]  ; read char from string B
        cmp ah, bh                          ; compare the chars
        jz match                            ; are they equal?
        mov dword ptr ds:[0040AAAA], 0      ; strings don't match! set result to false
match:
        cmp ah, 0                           ; are we at the end of string A?
        jz done
        cmp bh, 0                           ; are we at the end of string B?
        jz done
        inc ecx                             ; increment counter and continue loop
        jmp loopStart
done:
        ret

There are actually a few timing attacks here. The first two instructions are O(1) . However, the two single-byte reads that follow are not. Whilst memory reads are usually O(1) , in practice they will not be. In fact, they don't fit into the big-O model at all. On a 32-bit system, every 4th byte read will be faster than the others. On a 64-bit system, every 4th byte will be faster, and every 8th byte will be faster still. This is due to non-aligned memory reads. The CPU is great at fetching blocks of data that are aligned to its bus size, but not so great at fetching random individual bytes. The CPU will attempt to optimise this by pipelining instructions and loading the block into its L2 cache, which makes this issue more subtle, but it's still there.

The next trick is for strings that are larger than the CPU's cache. If you have a string that exceeds this size (usually 128KB or higher for the L2 cache) you'll see slight delays in memory fetches every time a read exceeds that boundary. This is usually only a small delay, since the L3 cache will be used to store the next block, but we can see an even bigger difference for larger strings (8MB+) as the memory fetches have to be done from system memory. If we consider a block memory copy as O(n), where n is the memory length, this makes some sense, but does not properly depict the small variances in timing caused by implementation idiosyncrasies.

Finally, we can see that the mov dword ptr ds:[0040AAAA], 0 instruction is executed whenever one string does not match another. This leaks information in two ways:

- Allows us to estimate the relative length of the shorter string by identifying the extra time used when repeatedly setting the result.
- If we can measure accurately, allows us to identify which characters are different.

As we can see, it's quite difficult to prevent these issues. Here's my attempt at a better system, assuming fixed-length strings padded to alignment with null bytes:

int r, d = 0;
int* bufferA = (int*)stringA;
int* bufferB = (int*)stringB;
for (int i = 0; i < BUFFER_SIZE; i += 4)
{
    d = bufferA[i] ^ bufferB[i]; // if equal, d = 0
    r |= d;                      // if all cases are 0, r will be 0
}
return r; // result is 0 if equal, non-zero if not equal

This is almost perfectly O(n) for the following reasons:

- Bitwise xor of aligned ints is O(1)
- Bitwise or of aligned ints is O(1)
- Memory reads are aligned.
- Neither length is leaked, due to BUFFER_LENGTH padding.
- No branches, excluding the loop.
| {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/20117', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/12961/']} | jdg_81368 |
stackexchange | llm_judgeable_groundtruth_similarity | 347494 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a question regarding differential forms. Let $\omega = dx_1\wedge dx_2$. What would $d\omega$ equal? Would it be 0?
Now provide the response and nothing else.
| Yes. The same holds true for any differential form whose coefficients are constant functions. For example, if $\omega = 3(dx\land dy) + 5(dx\land dz) + 7 (dy\land dz)$, then $d\omega = 0$. Edit: In general, the exterior derivative is defined by$$d\bigl(f\, dx_{i_1}\land\cdots\land dx_{i_n}\bigr) \;=\; df\land dx_{i_1}\land\cdots\land dx_{i_n}$$where$$df \;=\; \frac{\partial f}{\partial x_1}dx_1 + \cdots + \frac{\partial f}{\partial x_n}dx_n$$For example, in three dimensions$$\begin{align*}d\bigl(x^3y^2 z^4 dy\bigr) \;&=\; (3x^2y^2z^4 dx + 2x^3yz^4 dy + 4x^3y^2z^3 dz)\land dy \\&=\; 3x^2y^2z^4 dx\land dy \,-\, 4x^3y^2z^3 dy\land dz\end{align*}$$Note that the $2x^3yz^4$ term goes away since $dy\land dy = 0$. Also, the exterior derivative of a sum of forms is the sum of the exterior derivatives of the forms, i.e.$$d(z\,dx + x^2\,dy) \;=\; 2x\, dx\land dy \,-\, dx\land dz$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/347494', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/68072/']} | jdg_81369 |
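Spelling out the computation for the original question's form, as a direct application of the definition given in this answer (the coefficient of $\omega = dx_1\wedge dx_2$ is the constant function $f = 1$):
$$d\omega \;=\; d(1)\land dx_1\land dx_2 \;=\; \left(\frac{\partial 1}{\partial x_1}dx_1 + \cdots + \frac{\partial 1}{\partial x_n}dx_n\right)\land dx_1\land dx_2 \;=\; 0,$$
because every partial derivative of a constant vanishes.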
stackexchange | llm_judgeable_groundtruth_similarity | 5987371 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a function in Python that checks if the returned value is None and if it is, allows you to set it to another value like the IFNULL function in MySQL?
Now provide the response and nothing else.
| Not really, since you can't rebind arguments.

if foo is None:
    foo = 42

or

def ifnull(var, val):
    if var is None:
        return val
    return var

foo = ifnull(foo, 42)
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5987371', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/131238/']} | jdg_81370 |
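A runnable sketch of the ifnull idea from this answer (the one-line conditional-expression form is my own variation, not from the original; it behaves the same while avoiding the common `value or default` shortcut, which wrongly replaces falsy values such as 0 or ""):

```python
def ifnull(var, val):
    """Return val when var is None, otherwise var (like MySQL's IFNULL)."""
    return val if var is None else var

print(ifnull(None, 42))  # 42
print(ifnull(0, 42))     # 0 -- note: `0 or 42` would give 42 instead
```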
stackexchange | llm_judgeable_groundtruth_similarity | 40939037 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a view controller with a GMSMapView and have loaded a number of markers onto the map. I can change which marker is selected with mapView.selectedMarker = ... but how do I change the color of the selected marker?
Now provide the response and nothing else.
| You can use GMSMarker.markerImage(with: <UIColor?>) to reset a marker's icon. Docs: Google Maps iOS SDK GMSMarker Class Reference

import GoogleMaps

// view controller
class MapViewController: UIViewController {

    // outlets
    @IBOutlet weak var mapView: GMSMapView!

    // view did load method
    override func viewDidLoad() {
        super.viewDidLoad()

        // set map view delegate
        mapView.delegate = self
    }
}

// extension for GMSMapViewDelegate
extension MapViewController: GMSMapViewDelegate {

    // tap map marker
    func mapView(_ mapView: GMSMapView, didTap marker: GMSMarker) -> Bool {
        print("didTap marker \(marker.title)")

        // remove color from currently selected marker
        if let selectedMarker = mapView.selectedMarker {
            selectedMarker.icon = GMSMarker.markerImage(with: nil)
        }

        // select new marker and make green
        mapView.selectedMarker = marker
        marker.icon = GMSMarker.markerImage(with: UIColor.green)

        // tap event handled by delegate
        return true
    }
}
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40939037', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4754881/']} | jdg_81371 |
stackexchange | llm_judgeable_groundtruth_similarity | 37232408 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Simple question. The IMDU command imdu /b file.imd file.raw converts file.imd to file.raw. I have a lot of .imd files, so I need a batch script. I have tried:

for %%x in (*.imd) do imdu /b %%x %%~nx.raw

But it doesn't work and creates a file called %%.raw. I need a batch script which will remove the extension imd and replace it with raw. How can I do that?
Now provide the response and nothing else.
| Open a command prompt window, run for /? and read the output help carefully and completely. There it is explained that %~xI is only the file extension of %I , %~nI is only the file name of %I , and %~nxI is the file name with extension of %I . Please note that it does not matter whether the string assigned to the case-sensitive loop variable I is really a file or a folder which really exists or does not exist at all. In fact it can be any string. The file name referenced with %~nI is the string after the last backslash up to the last dot or end of string. It can be an empty string if the string assigned to the loop variable ends with a backslash, i.e. is a folder path, or the file name starts with a dot and has no real file extension, like hidden files on UNIX/MAC are often named. The file extension referenced with %~xI is everything from the last dot after the last backslash to the end of the string assigned to the loop variable. So for a file with a name like .htaccess it is %~xI that references the file name, and not %~nI , which is in this special case an empty string. A file named .htaccess is on UNIX/MAC a file with real name htaccess ; the . at the beginning of the file name makes this file hidden on UNIX/MAC file systems.

Please note that loop variables are case-sensitive while the modifiers like n , x , f , ... are not case-sensitive. So %~NXI is identical to %~nxI . It is in general more readable to use a loop variable in upper case and the modifiers in lower case. It can be confusing for readers, and in some special cases also for cmd.exe, what is meant when using as loop variable a character which is also a modifier, for example on running in a cmd window the command line:

for %f in ("1. " "2. " "3. ") do @echo %~ffile

I is not a modifier, and so the wrong output of the command line above can be avoided using %I instead of %f , as can be seen on running in a Windows command prompt window:

for %I in ("1. " "2. " "3. ") do @echo %~Ifile

Now it is clear that %~ff was interpreted as a reference to the fully qualified file name of the string assigned to loop variable f , while %~I is interpreted as referencing the string assigned to loop variable I with double quotes removed. So I recommend avoiding the characters ADFNPSTXZadfnpstxz as loop variables, or at least being careful when using them. For all these reasons it is better to use, in a batch file in which the percent sign must be doubled, the following command line:

for %%I in (*.imd) do imdu.exe /b "%%I" "%%~nI.raw"

For such simple loops it is also possible, and good practice, to use an ASCII character that is not a letter or digit and has no special meaning for the Windows command processor, like:

for %%# in (*.imd) do imdu.exe /b "%%#" "%%~n#.raw"

It is easier to search in a batch file for all occurrences of # than for all occurrences of a letter that also exists many times in other strings. The character $ is also very good as a loop variable because it also has no special meaning and does usually not exist in other strings in a batch file. Don't forget the double quotes around the file names, as file names could contain spaces or round brackets or ampersands, which require double quotes around the file names. Command FOR holds in this use case a file name always without surrounding double quotes in the loop variable.

One more hint: the usage of FOR to process files or folders matching a wildcard pattern which are renamed, moved or deleted by the executed command line(s) during FOR iterations is problematic on FAT32 and exFAT drives, because the list of directory entries changes while FOR accesses this list during loop iterations. For example, the command line below in a batch file with the current directory being on a FAT32 or exFAT drive can result in a temporary file being renamed more than once.

for %%# in (*.tmp) do ren "%%#" "%%~n#_1.tmp"

In such cases it is better to use in the batch file the command DIR to get a list of file names captured by FOR , which then processes a list of file names not being modified by the command line(s) executed by FOR on each file name, as shown below.

for /F "eol=| delims=" %%# in ('dir *.tmp /A-D /B /ON 2^>nul') do ren "%%#" "%%~n#_1.tmp"
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37232408', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3970561/']} | jdg_81372 |
stackexchange | llm_judgeable_groundtruth_similarity | 59481105 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to make an asynchronous web scraper using beautifulsoup and aiohttp. This is my initial code to start things. I'm getting a [TypeError: An asyncio.Future, a coroutine or an awaitable is required] and having a hard time figuring out what is wrong with my code. I am new to python and would appreciate any help regarding this.

import bs4
import asyncio
import aiohttp

async def parse(page):
    soup = bs4.BeautifulSoup(page, 'html.parser')
    soup.prettify()
    print(soup.title)

async def request():
    async with aiohttp.ClientSession() as session:
        async with session.get("https://google.com") as resp:
            await parse(resp)

loop = asyncio.get_event_loop()
loop.run_until_complete(request)

Traceback:

Traceback (most recent call last):
  File "C:\Users\User\Desktop\Bot\aio-req\parser.py", line 21, in <module>
    loop.run_until_complete(request)
  File "C:\Users\User\AppData\Local\Programs\Python\Python38-32\lib\asyncio\base_events.py", line 591, in run_until_complete
    future = tasks.ensure_future(future, loop=self)
  File "C:\Users\User\AppData\Local\Programs\Python\Python38-32\lib\asyncio\tasks.py", line 673, in ensure_future
    raise TypeError('An asyncio.Future, a coroutine or an awaitable is '
TypeError: An asyncio.Future, a coroutine or an awaitable is required
Now provide the response and nothing else.
| One issue is that loop.run_until_complete(request) should be loop.run_until_complete(request()) - You actually have to call it for it to return a coroutine. There are further problems - like you are passing an aiohttp.ClientResponse object to parse and treating it as text/html. I got it to work with the following but don't know if it fits your needs because parse is no longer a coroutine.

def parse(page):
    soup = bs4.BeautifulSoup(page, 'html.parser')
    soup.prettify()
    return soup.title

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def request():
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, "https://google.com")
        print(parse(html))

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(request())

This also works:

def parse(page):
    soup = bs4.BeautifulSoup(page, 'html.parser')
    soup.prettify()
    print(soup.title)

async def request():
    async with aiohttp.ClientSession() as session:
        async with session.get("https://google.com") as resp:
            parse(await resp.text())

And finally, your original code, passing an awaitable response object to parse and then awaiting page.text() :

async def parse(page):
    soup = bs4.BeautifulSoup(await page.text(), 'html.parser')
    soup.prettify()
    print(soup.title)

async def request():
    async with aiohttp.ClientSession() as session:
        async with session.get("https://google.com") as resp:
            await parse(resp)
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59481105', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7657046/']} | jdg_81373 |
stackexchange | llm_judgeable_groundtruth_similarity | 546930 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dataset containing some trees and the estimated upper and lower bounds of their ages. For example, tree #1 is 10~15 years old, and tree #2 is 13~20 years old, etc. I want to visualize the data. I know that I can calculate the mean values of the bounds and plot a histogram. However, the plot would be more useful if the upper and lower bounds were also presented, since they show the uncertainty of our estimates. How should I do that? Or is this even possible or reasonable? My original thought is that the x-axis should be the ages and the y-axis should be the number of trees, the same as for the histogram, but I'm not sure whether using these axes is possible or practical. The following are some samples from the dataset (unrelated columns are omitted):

Tree ID    Estimated age, lower bound (years)    Estimated age, upper bound (years)
1          10                                    15
2          13                                    20
3          1                                     3
4          7                                     16
5          13                                    15
6          12                                    18
Now provide the response and nothing else.
| You could plot each tree's interval but sort on interval midpoint at least. The "data" here are fake; I didn't have any sight of your data when I did this. This is in effect a quantile plot with added information on uncertainty. Adding points at the midpoints and/or connecting them with a line would emphasize the overall distribution more. Many other variations are possible. For example, if age in the data is always positive, then a transformed scale might help. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/546930', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/286165/']} | jdg_81374 |
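A minimal sketch of the sort-by-midpoint idea in Python (the function name and the plain-tuple representation are my own choices; the plotting call is only shown in a comment so the sketch stays dependency-free):

```python
def interval_plot_data(intervals):
    """Sort (lower, upper) age intervals by midpoint and return
    (y_position, lower, upper) triples ready for a horizontal-line plot."""
    ordered = sorted(intervals, key=lambda lu: (lu[0] + lu[1]) / 2)
    return [(y, lo, hi) for y, (lo, hi) in enumerate(ordered)]

# the sample rows from the question
trees = [(10, 15), (13, 20), (1, 3), (7, 16), (13, 15), (12, 18)]
for y, lo, hi in interval_plot_data(trees):
    print(y, lo, hi)
# With matplotlib, each row could then be drawn as e.g.
# plt.hlines(y, lo, hi) to reproduce the interval plot.
```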
stackexchange | llm_judgeable_groundtruth_similarity | 32537532 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Java String array which contains 45 strings, which are basically column names:

String[] fieldNames = {"colname1","colname2",...};

Currently I am storing the above array of Strings in the Spark driver in a static field. My job is running slow, so I am trying to refactor the code. I am using the above String array while creating a DataFrame:

DataFrame dfWithColNames = sourceFrame.toDF(fieldNames);

I want to do the above using a broadcast variable so that it doesn't ship the huge string array to every executor. I believe we can do something like the following to create the broadcast:

String[] brArray = sc.broadcast(fieldNames,String[].class); //gives compilation error
DataFrame df = sourceFrame.toDF(???); //how do I use the above broadcast? can I use it as is, by passing brArray?

I am new to Spark.
Now provide the response and nothing else.
| The return variable of sc.broadcast is of type Broadcast<String[]> and not String[] . When you want to access the value, you simply call value() on the variable. From your example it would be like:

Broadcast<String[]> broadcastedFieldNames = sc.broadcast(fieldNames);
DataFrame df = sourceFrame.toDF(broadcastedFieldNames.value());

Note that if you are writing this in Java, you probably want to wrap the SparkContext within the JavaSparkContext. It makes everything easier and you can then avoid having to pass a ClassTag to the broadcast function. You can read more on broadcasting variables on http://spark.apache.org/docs/latest/programming-guide.html#broadcast-variables
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32537532', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/449355/']} | jdg_81375 |
stackexchange | llm_judgeable_groundtruth_similarity | 24889783 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can I get a list of all registered modules at run time? For example:

// Some code somewhere in some .js file
var module1 = angular.module('module1', []);

// Some code in some other .js file
var module2 = angular.module('module2', []);

// Main .js file
var arrayWithNamesOfAllRegisteredModules = .....
// (result would be: ['module1', 'module2'])
Now provide the response and nothing else.
| Angular does not provide a way to retrieve the list of registered modules (at least I was not able to find a way in the source code). You can, however, decorate the angular.module method to store the names in an array. Something like this:

(function(orig) {
  angular.modules = [];
  angular.module = function() {
    if (arguments.length > 1) {
      angular.modules.push(arguments[0]);
    }
    return orig.apply(null, arguments);
  }
})(angular.module);

Now you can check the angular.modules array. Demo: http://plnkr.co/edit/bNUP39cbFqNLbXyRqMex?p=preview
| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24889783', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1147862/']} | jdg_81376 |
stackexchange | llm_judgeable_groundtruth_similarity | 11589320 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have the following entity:

public class User
{
    public int Id { get; set; }
    public string Username { get; set; }
    public Guid UserGuid { get; set; }
    public Guid ConfirmationGuid { get; set; }
}

And the following interface method:

void CreateUser(string username);

Part of the implementation should create two new GUIDs: one for UserGuid , and another for ConfirmationGuid . They should do this by setting the values to Guid.NewGuid() . I already have abstracted Guid.NewGuid() using an interface:

public interface IGuidService
{
    Guid NewGuid();
}

So I can easily mock this when only one new GUID is needed. But I'm not sure how to mock two different calls to the same method, from within one method, such that they return different values.
Now provide the response and nothing else.
| If you are using Moq, you can use:

mockGuidService.SetupSequence(gs => gs.NewGuid())
    .Returns( ...some value here... )
    .Returns( ...another value here... );

I suppose you could also do the following:

mockGuidService.Setup(gs => gs.NewGuid())
    .Returns(() => ...compute a value here... );

Still, unless you are just supplying a random value within the return function, knowledge of order still seems to be important.
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11589320', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/213902/']} | jdg_81377 |
stackexchange | llm_judgeable_groundtruth_similarity | 7947849 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I read that on Unix systems, malloc can return a non-NULL pointer even if the memory is not actually available, and trying to use the memory later on will trigger an error. Since I cannot catch such an error by checking for NULL, I wonder how useful it is to check for NULL at all? On a related note, Herb Sutter says that handling C++ memory errors is futile, because the system will go into spasms of paging long before an exception will actually occur. Does this apply to malloc as well?
Now provide the response and nothing else.
| Quoting Linux manuals : By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non- NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer. In case Linux is employed under circumstances where it would be less desirable to suddenly lose some randomly picked processes, and moreover the kernel version is sufficiently recent, one can switch off this overcommitting behavior using a command like: # echo 2 > /proc/sys/vm/overcommit_memory You ought to check for NULL return, especially on 32-bit systems, as the process address space could be exhausted far before the RAM: on 32-bit Linux for example, user processes might have usable address space of 2G - 3G as opposed to over 4G of total RAM. On 64-bit systems it might be useless to check the malloc return code, but might be considered good practice anyway, and it does make your program more portable. And, remember, dereferencing the null pointer kills your process certainly; some swapping might not hurt much compared to that. If malloc happens to return NULL when one tries to allocate only a small amount of memory, then one must be cautious when trying to recover from the error condition as any subsequent malloc can fail too, until enough memory is available. The default C++ operator new is often a wrapper over the same allocation mechanisms employed by malloc() . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7947849', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/252000/']} | jdg_81378 |
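For completeness, a small sketch of the defensive pattern discussed in this answer (checked_malloc is a hypothetical helper name, not from the original): it checks the return value and lets the caller handle failure. Per the overcommit caveat quoted above, on a default-configured Linux a non-NULL result still cannot guarantee the pages are really available when first touched.

```c
#include <stdio.h>
#include <stdlib.h>

/* Return the buffer or NULL; the caller decides how to recover.
   Note: on Linux with default overcommit, a non-NULL result does not
   guarantee the memory is really available when first written to. */
void *checked_malloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL && n != 0)
        fprintf(stderr, "allocation of %zu bytes failed\n", n);
    return p;
}

/* Typical call site:
       char *buf = checked_malloc(64);
       if (buf == NULL) { ... fail gracefully ... }
       free(buf);
   A request near SIZE_MAX fails immediately even with overcommit,
   because it cannot fit in the process address space. */
```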
stackexchange | llm_judgeable_groundtruth_similarity | 2901 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it true that the whole galaxy is actually revolving, and powered by a black hole? Has it been proven, and if it is true, how can our solar system actually keep up the momentum to withstand the pull?
Now provide the response and nothing else.
| I was giving a talk about the galactic black hole at the center, Sagittarius A*, back in 1998. At that time, it was already clear to enlightened people that it had to be a black hole. An analysis of a two-temperature plasma helped to bring some new evidence that the object had a real event horizon. The black hole is huge but it is not "galactically" huge. Its mass is 4.2 million solar masses or so. This is of course large, in comparison with any star, but it is negligible if compared to - thousands of times smaller than - the mass of the Milky Way. So it would be unreasonable to say that the black hole has a tremendous impact on the gravitational forces across the Milky Way. It is just a heavy single object but if one looks at the size of 5% of the Galaxy's diameter, the total amount of stars in such a region is already vastly larger than the mass of the black hole. Already in such small regions, the black hole is just a small droplet. Black holes, just like any other heavy objects, are unable to "power" galaxies. Galaxies are composed of stars that move according to the laws of mechanics (or general relativity) - inertia modified by the gravitational force. (Today, we believe that most of the gravitational force is exerted by the dark matter that represents a majority of the galactic masses.) The dependence of the gravitational force on the distance from the center of the Galaxy determines the orbital velocity of the stars at every distance. For every distribution of matter, we get some dependence of the gravitational force on the distance, and we can write down the velocities as a function of the distance for which the orbits remain circular. (And if the orbits are a bit elliptic, there is no problem with that, either.) Whatever the radial attractive force is, there always exists a velocity such that the gravitational attractive force exactly cancels against the centrifugal force. (More precisely, the gravitational force is the centripetal force.) 
So for any pull, there is a velocity such that one can withstand the pull, and it makes absolutely no difference whether a black hole contributes to the pull. So while the object is interesting - and probably generic for most galaxies - it doesn't have any "systemic" importance for the functioning of the Galaxy. The radius of the object is millions of kilometers - something like 10 times the distance to the Moon. The matter around the black hole is being heated and "cooked" by the gravitational field and there's a high temperature. But if one managed to cross the event horizon, he could live for a few more seconds before he would be squeezed by the singularity at the black hole center. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/2901', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']} | jdg_81379 |
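The answer's point — that the central black hole's pull is negligible at the Sun's distance — can be checked numerically with the circular-orbit condition v = sqrt(GM/r). The enclosed-mass figure of roughly 1e11 solar masses inside the Sun's orbit is an illustrative assumption, not a number from the answer.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
KPC = 3.086e19         # metres in one kiloparsec

def circular_velocity(mass_kg, radius_m):
    """Speed at which gravity exactly supplies the centripetal force."""
    return math.sqrt(G * mass_kg / radius_m)

r_sun = 8 * KPC                                       # Sun's galactocentric distance
v_galaxy = circular_velocity(1e11 * M_SUN, r_sun)     # all enclosed mass (rough)
v_bh     = circular_velocity(4.2e6 * M_SUN, r_sun)    # Sgr A* alone
```

With these inputs v_galaxy comes out near the observed ~230 km/s, while the black hole alone would support an orbit of only a couple of km/s — the "small droplet" the answer describes.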
stackexchange | llm_judgeable_groundtruth_similarity | 9993669 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using storyboard, static cells, in cellForRowAtIndexPath: the line UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; always returns nil . I have checked the following: Identifier of the cell is correctly set in IB/Storyboard and I use the same identifier in code. I verified this many times. I have not instantiated the view controller elsewhere (which was the problem in this stackoverflow question ). My view controller is a subclass of UITableViewController , of course, wrapped into an ad hoc navigation controller in storyboard. Suspecting that my view controller somehow does not know about the cell identifiers defined in storyboard because it might be another instance, here is the code the "instantiates" it. In prepareForSegue: , I use CustomViewController *vc = [[[segue destinationViewController] viewControllers] objectAtIndex:0]; Other customizations of the view controller done here (setting properties etc.) works fine. I am using static cells because the number of sections and rows does not change, and each cell contains static text (and other controls or text fields to be edited). It seems to me this is a very common task (customize static cells from storyboard in the view controller's datasource methods). What am I doing wrong?
Now provide the response and nothing else.
| With static content in a table view, you do not implement any of the datasource methods (including tableView:cellForRowAtIndexPath: ), so you would never dequeue the cells. There is no dequeuing for static content (that you can get involved in, anyway). If you want to get a pointer to a particular cell: get it from the table view using cellForRowAtIndexPath: : UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath]; have an outlet to the specific cell and customise it directly. Iterate through the cells and check the reuseIdentifier property to get the cell you are interested in. Any of these things can be done in viewWillAppear or similar. If you want to have completely different content in your cells to that found on the storyboard then static cells probably aren't the right choice. You should use dynamic prototypes (note you can have multiple prototypes in the storyboard) with the traditional data source methods instead. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9993669', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/427083/']} | jdg_81380 |
stackexchange | llm_judgeable_groundtruth_similarity | 33738467 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Every time I run my app that uses Flask-SQLAlchemy I get the following warning that the SQLALCHEMY_TRACK_MODIFICATIONS option will be disabled. /home/david/.virtualenvs/flask-sqlalchemy/lib/python3.5/site-packages/flask_sqlalchemy/__init__.py:800: UserWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning. warnings.warn('SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning.') I tried to find out what this option does, but the Flask-SQLAlchemy documentation isn't clear about what uses this tracking. SQLALCHEMY_TRACK_MODIFICATIONS If set to True (the default) Flask-SQLAlchemy will track modifications of objects and emit signals. This requires extra memory and can be disabled if not needed. How do I find out if my project requires SQLALCHEMY_TRACK_MODIFICATIONS = True or if I can safely disable this feature and save memory on my server?
Now provide the response and nothing else.
| Most likely your application doesn't use the Flask-SQLAlchemy event system, so you're probably safe to turn off. You'll need to audit the code to verify--you're looking for anything that hooks into models_committed or before_models_committed . If you do find that you're using the Flask-SQLAlchemy event system, you probably should update the code to use SQLAlchemy's built-in event system instead. The default value as of Flask-SQLAlchemy 2.1 is None , which is a falsy value, so the event system is disabled . In older versions, the default value was True , so you'll need to explicitly disable it. However, in both cases, the warning won't be silenced until this is explicitly set to False . To do that, add: SQLALCHEMY_TRACK_MODIFICATIONS = False to your app config. Background--here's what the warning is telling you: Flask-SQLAlchemy has its own event notification system that gets layered on top of SQLAlchemy. To do this, it tracks modifications to the SQLAlchemy session. This takes extra resources, so the option SQLALCHEMY_TRACK_MODIFICATIONS allows you to disable the modification tracking system. The rationale for the change is three-fold: Not many people use Flask-SQLAlchemy's event system, but most people don't realize they can save system resources by disabling it. So a saner default is to disable it and those who want it can turn it on. The event system in Flask-SQLAlchemy has been rather buggy (see issues linked to in the pull request mentioned below), requiring additional maintenance for a feature that few people use. In v0.7, SQLAlchemy itself added a powerful event system including the ability to create custom events. Ideally, the Flask-SQLAlchemy event system should do nothing more than create a few custom SQLAlchemy event hooks and listeners, and then let SQLAlchemy itself manage the event trigger. You can see more in the discussion around the pull request that started triggering this warning . 
| {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/33738467', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/150978/']} | jdg_81381 |
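In practice the fix the answer describes is a one-line config entry; a minimal sketch using the standard Flask config-module convention (the file name config.py is illustrative):

```python
# config.py — explicitly disable the Flask-SQLAlchemy event system
# and silence the deprecation warning
SQLALCHEMY_TRACK_MODIFICATIONS = False

# or, equivalently, directly on the app object:
#   app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
```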
stackexchange | llm_judgeable_groundtruth_similarity | 566422 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $D \subseteq \mathbb{C}$ be open and $f : D \rightarrow \mathbb{C}$ meromorphic with a pole of order $\ge 2$ at $a \in D$. Then $f$ is not injective. Is there an easy proof of this? This is not homework; it comes from user8268's answer in entire 1-1 function .
Now provide the response and nothing else.
| If $f$ has a pole of order $m$ at $a$, then (after removing the removable singularity) $g = 1/f$ has a zero of order $m$ there. Let $C$ be a small circle (oriented positively) around $a$. For $\alpha \notin g(C)$, the number of zeros (counted by multiplicity) of $g - \alpha$ inside $C$ is $\dfrac{1}{2\pi i} \oint_C \dfrac{g'(z)}{g(z)-\alpha}\ dz$, and this is continuous (and therefore constant) in a neighbourhood of $\alpha = 0$, with value $m$ at $\alpha = 0$. But the zeros of $g'$ are isolated, so the $m$ zeros of $g-\alpha$ are all distinct if $C$ is small enough. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/566422', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/108717/']} | jdg_81382 |
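For reference, the zero-counting step is the argument principle applied to $g - \alpha$; spelled out:

```latex
N(\alpha) \;=\; \frac{1}{2\pi i} \oint_C \frac{g'(z)}{g(z)-\alpha}\, dz ,
\qquad \alpha \notin g(C).
```

Here $N$ is continuous and integer-valued near $\alpha = 0$, hence constant with $N(0) = m$. For small $\alpha \neq 0$ the solutions of $g(z) = \alpha$ stay away from $a$, the only nearby zero of $g'$, so they are simple: $m \ge 2$ distinct points where $f$ takes the value $1/\alpha$, hence $f$ is not injective.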
stackexchange | llm_judgeable_groundtruth_similarity | 213100 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In our work, we have several different .net applications that share a lot of base functionality. We've built these applications using a clean n-tier architecture, but we've hit that moment where we realize that we've re-implemented the same functions several different times. Obviously this violates DRY, and we would like to correct that. We're already using Nuget to some success for common glue code (wiring up IoC, logging, settings), but we also would like to share our data and business layers between all our applications. The idea is that the UI would only deal with the parts of the business layer it actually needs. This seems like a straight-forward problem at first, but ongoing development could provide some pitfalls and we're not sure how to proceed. Let's say we make our One Business Layer to Rule Them All. For brevity, I'll call it "Foundation." We port our applications to use the Foundation, and everything is running great. The Foundation is distributed to light UI layers via nuget, and we're looking good. But then we start adding features to our applications, and we run into trouble. Let's say we're working on Project A and we add a new feature that requires changes to Foundation. We make the changes to foundation (Foundation-A) and push them out to the nuget feed as an unstable package. Project A gets the latest nuget package, and all is good. Meanwhile, another developer is working on Project B. He gets the latest Foundation from source control, but takes it from a stable branch, so that it doesn't have Project A changes in it. He makes changes and creates Foundation-B. And all is good. But then we discover that Foundation-A and Foundation-B implement functionality that could actually share code, so we combine them. Meanwhile Foundation-C is floating out there with its own changes. Eventually, Foundation-B is ready for production, so we push it out. 
But then we need to update Production A, B, & C with the new foundation, so we update the nuget packages and deploy (as long as nothing breaks). This seems like it could work, but we're worried about working with different database schemas and keeping everything synchronized between the various branches of the Foundation repository as well as the Project A, B, and C repositories. It seems like it will probably take a lot of manual work, which opens up the possibility for errors. I would like this as automated as possible. Here's the stack we're using: C#, TFS with Continuous Integration, Nuget. Our applications are all various types of ASP.NET applications. We're willing to look at different SCM's if it will make things easier. I'm looking for ways to keep Nuget sane with our different source code branches. We don't want to accidentally push development code into production because we reference the wrong Nuget Package.
Now provide the response and nothing else.
| We make the changes to foundation (Foundation-A) and push them out to the nuget feed as an unstable package. Here's where your problem begins... Don't do that. Any changes to Foundation v1.0 should inherently be valuable to all consumers of Foundation, otherwise it doesn't belong in Foundation. So, when creating the nuget package, do it as an official, stable version of Foundation (i.e. v1.1), or don't do it at all. Project B should build its Foundation enhancements as it normally would, but (in good source management fashion) should merge in the trunk changes (v1.1) before pushing a stable Foundation (v1.2) to nuget. Other projects which can use the Foundation enhancements can upgrade their nuget references when appropriate, or stick with the older versions if they need to. I agree with @Giedrius ; this seems to me to be more of a source control/branching issue in the sense that if the branching/merging of Foundation is handled properly, the package management issues become moot. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/213100', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/91652/']} | jdg_81383 |
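On the packaging side, the "stable, official versions only" rule the answer proposes amounts to giving every push an explicit semantic version; a hypothetical .nuspec fragment (the package id and numbers are invented for illustration, and a real .nuspec also needs authors/description):

```xml
<!-- Foundation.nuspec — every push to the feed is an official, stable version -->
<package>
  <metadata>
    <id>Foundation</id>
    <!-- v1.1 (Project A's changes) was merged in before B's work shipped as v1.2 -->
    <version>1.2.0</version>
  </metadata>
</package>
```

Consumers then upgrade their Foundation reference deliberately when a new stable version is useful to them, instead of tracking an unstable feed.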
stackexchange | llm_judgeable_groundtruth_similarity | 56069 |
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Malachi 3:1-5 (NRSV): See, I am sending my messenger to prepare the way before me , and the Lord whom you seek will suddenly come to his temple . The messenger of the covenant in whom you delight —indeed, he is coming , says the LORD of hosts . 2 But who can endure the day of his coming , and who can stand when he appears ? For he is like a refiner’s fire and like fullers’ soap; 3 he will sit as a refiner and purifier of silver, and he will purify the descendants of Levi and refine them like gold and silver, until they present offerings to the LORD in righteousness. 4 Then the offering of Judah and Jerusalem will be pleasing to the LORD as in the days of old and as in former years. 5 Then I will draw near to you for judgment; I will be swift to bear witness against the sorcerers, against the adulterers, against those who swear falsely, against those who oppress the hired workers in their wages, the widow and the orphan, against those who thrust aside the alien, and do not fear me, says the LORD of hosts . In verse 1 we see someone speaking in the first person ( "See, I am sending [...]" ). That's one person. We also read "my messenger" , which is evidently another person, so we have at least 2 persons so far. And then we get to "the Lord whom you seek" and "his temple" . And here is where ambiguities begin to puzzle my mind: is the Lord whom you seek the same person who was speaking in first person at the beginning of the verse ( I am sending [...] )? Or is it the messenger? Or is it someone else (a third person)? And what about his temple ? Whose temple is it? And to make matters even more complicated, then we have "the messenger of the covenant ", who I've got no clue whether is the same person as the first messenger , or another messenger, or the same person as the Lord whom you seek . And so on and so forth. I highlighted in bold several words and phrases of the passage where there is a reference to a person. 
(Of course, I omitted mentions such as sorcerers, workers, widow, orphan , etc. which are irrelevant to the question.) So, how many distinct relevant persons are present in this passage of Malachi? Who are they? Can we identify (and justify) prophetic links between them and persons from the New Testament?
Now provide the response and nothing else.
| Jesus quotes this passage in Matthew 11:10 : ιδου εγω αποστελλω τον αγγελον μου προ προσωπου σου ος κατασκευασει την οδον σου εμπροσθεν σου [TR undisputed] Behold, I send my messenger before thy face, which shall prepare thy way before thee . [KJV] Jesus changes ὁδὸν πρὸ προσώπου μου (assuming either he is quoting the Septuagint , or offering the translation himself) 'the way before the face of me ' and he changes it to την οδον σου εμπροσθεν σου 'the way before the face of thee '. Jesus alters one letter of the Septuagint : μ to σ. Mark follows this in Mark 1:2 την οδον σου εμπροσθεν σου [TR - dispute removes εμπροσθεν σου but, N.B., does not remove την οδον σου] thy way before thee [KJV] the way of thee before thee [literal, with addition) Thus Jesus leads by altering 'my' to 'thy' and Mark follows Jesus' alteration. Malachi prophesies that one shall prepare a way, and another shall go upon that prepared way. That way will be already laid down before the face of the other. Thus far, two persons are in view. The Messenger of Preparation, sent by the Lord, and the Messenger of the Covenant, who is the Lord, himself. Malachi, by prophetic vision, says, 'shall prepare the way before me'. This is the speech of the Lord Jehovah, given to the prophet, the seer, to express to all Israel, and beyond.It is the Lord himself who says, of himself, 'before me'. But when this actually occurred, and many other events and many other prophecies point towards the occurrence in such a way that it is indisputable when it happened, he who walks upon the way prepared is Jesus of Nazareth. Who says 'I and the Father are one,' John 10:30. Some have attempted to ridicule this statement by saying 'one what ?' To them I say, that Jesus has already answered their question in John 4:24. 'I and the Father are one (Spirit).' This could not be revealed until the Son, himself, was revealed. Who, in turn, revealed the Father. 
Once revealed, then, in retrospect, many passages in the Hebrew scriptures turn from mysterious and obscure prophecies to plain statements of undoubtable revelation. What could not (yet) be fully revealed was, nevertheless, foretold, but in couched terms and dark sayings and puzzling allusions. Thus in Malachi 3:1-5. Jesus' alteration of one letter of the Septuagint, and Mark's astute introduction to his gospel account (which introduction then tells us that he is focusing upon the 'Messenger of the Covenant', throughout) open up the prophecy of Malachi and open up abundantly more in Moses and the prophets and the psalms. So, to answer the question precisely, there are three persons in the prophecy. The Messenger of Preparation, the Messenger of the Covenant, (who is the Lord himself) and the One sending both Messengers, who speaks, in prophetic vision to the prophet, the Lord Jehovah. But until Jesus Christ came in the flesh, the Son of God, only One is seen, yet this One, in Spirit, is revealed as the Father who sends the Son. Yet it is remarkable that persons such as David were enlightened to say, despite that much had yet to be revealed, 'The LORD said unto my Lord ...'. David was conscious of a LORD and a Lord. And conscious also of the Holy Spirit, 'take not thy Spirit from me'. To those who experience what David experienced, this is a wonderful and glorious revelation. | {} | {'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/56069', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/-1/']} | jdg_81384 |
stackexchange | llm_judgeable_groundtruth_similarity | 59392671 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I wanted to be able to easily edit HTML tags together. It seemed to me like this is such an easy thing to implement nowadays, so I've installed this auto-rename-tag extension , and it turned out to be not useful for me: instead of helping me in html/jsx code editing, it breaks everything and I have to undo what it does and try to edit the thing to avoid triggering it. The logical thing here would obviously be to remove this extension from my system. The only problem here, however, is that I can't seem to do that - I've uninstalled it, even reinstalled VS Code, tried VS Code Insiders, but here it is, still enabled: Is it built-in now? How do I disable it? VSCode version: 1.41.0, commit 9579eda04fdb3a9bba2750f15193e5fafe16b959
Now provide the response and nothing else.
| It has become a built-in feature, separate from the Auto Rename Tag extension. If you don't like it, you can disable it depending on which version you've got installed. Update for 1.52+ The setting has now been renamed to Linked Editing : The On Type Rename feature for editing a tag when its matchingclosing tag is modified is now called linked editing . The commandto enable linked editing is Start Linked Editing (⇧⌘F2) and Escapedisables linked editing mode. The setting to enable/disable it is now: "editor.linkedEditing": true Or from the UI: Update for 1.44+ They changed the mirror cursor feature and are now calling it Synced Regions . (copied from the VS Code 1.44 release notes) When activated on HTML tags, if both start and end tags become "synced", then changing one also changes the other. The feature is disabled by default though, and either you explicitly execute the On Type Rename Symbol command or set the editor.renameOnType setting to true. With this change, the html.mirrorCursorOnMatchingTag introduced in 1.41 will now show up as deprecated (or grayed-out). Update for 1.42+ The html.mirrorCursorOnMatchingTag is now disabled by default starting in 1.42 . See this other answer for the quotation from the release notes. Original Answer for 1.41 It's now a built-in feature starting 1.41. HTML mirror cursor https://code.visualstudio.com/updates/v1_41#_html-mirror-cursor VS Code now adds a "mirror cursor" when you are editing HTML tags.This behavior is controlled by the setting html.mirrorCursorOnMatchingTag , which is on by default. This feature works by adding a multi-cursor to the matching tag whenyour cursor moves into an HTML tag name range. Just like inmulti-cursor mode, you can use word-wise deletion or word-wiseselection. The mirrored cursor is removed when you move your cursoroutside the tag name range. Basically, selecting either the tag start or the tag end puts 2 cursors on each, so editing one also edits the other. It is enabled by default. 
You can explicitly disable it from your settings. Via UI: Via settings.json: "html.mirrorCursorOnMatchingTag": false, | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/59392671', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5704664/']} | jdg_81385 |
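Collected in one place, the settings.json keys from the three eras described above — use only the one matching your VS Code version; the values shown disable the feature, as the question asks:

```jsonc
{
  // 1.52+ — current name
  "editor.linkedEditing": false,

  // 1.44 – 1.51 — superseded by editor.linkedEditing
  "editor.renameOnType": false,

  // 1.41 – 1.43 — the original mirror-cursor flag
  "html.mirrorCursorOnMatchingTag": false
}
```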
stackexchange | llm_judgeable_groundtruth_similarity | 26021965 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to implement a simple query that should look like this: select * from property join entity_area on property.id=entity_area.entity_id and entity_area.area_id=1 where property.price>300000 limit 12 Pretty straightforward: I want to get the joined result and then limit it to 12. In Sequelize I'm using the following function: return models.property.findAll({ where: ["price>=?", 300000], include: [ { model:models.entity_area, where: { area_id:1 } } ], limit:12}) But this code generates the following sql: select property.*, entity_area.* from (select * from property where property.price>300000 limit 12) join entity_area on property.id=entity_area.entity_id and entity_area.area_id=1 Which has totally different logic from what I'm trying to do, because in the generated sql it first gets any 12 results and then tries to join with entity_area, and of course the random 12 results don't necessarily match the entity_area, so I'm getting no results back. Please suggest a proper way of doing it. The property table is very massive, and I have to use the "limit" rather than getting all the results and slicing them in javascript. Also I wouldn't like to start using raw queries.
Now provide the response and nothing else.
| Actually I found a solution myself. I think this is a bug in the Sequelize framework. In the node_modules/sequelize/lib/dialect/abstract/query_generator.js there is a "selectQuery" function which has the following line: subQuery = limit && (options.hasIncludeWhere || options.hasIncludeRequired || options.hasMultiAssociation) && options.subQuery !== false First of all, there is an option subQuery that could be passed as false to remove the subquery generation. The Sequelize documentation does not have a word about it. But moreover, if you pass subQuery:false in the findAll object it's not going to work, because for some reason it arrives as undefined in the selectQuery function. I tried something like: return models.property.findAll({ where: ["price>=?", 300000], include: [ { model:models.entity_area, where: { area_id:1 } } ], limit:12, subQuery:false}) and still got options.subQuery=undefined. So I had to change the function in query_generator.js to be something like: subQuery = limit && (options.hasIncludeWhere || options.hasIncludeRequired || options.hasMultiAssociation) && options.subQuery !== false && options.doSubQuery===true So now by default it's not doing this ugly subquery unless I specify doSubQuery:true explicitly. And finally I got the proper query, without a subquery, with the limit applied. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26021965', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1652614/']} | jdg_81386 |
stackexchange | llm_judgeable_groundtruth_similarity | 2526568 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it true that projective objects in the category of finitely generated abelian groups are precisely direct sums of integers? What about injective objects in the category of finitely generated abelian groups? I think they are exactly $\mathbb{Z}_p$ for prime $p$'s. I am not sure, however.
Now provide the response and nothing else.
| Indeed, $S$ may be an uncountable subset. However, the notation in any case means that you consider finite sums (not countable sums). Regardless of the cardinality of $S$, the linear span is the set of all elements that you can write as a sum of finitely many terms of the form $\lambda s$ with $\lambda \in \mathbb{R}$ and $s \in S$. You can use each element of $S$ but you can, for any given sum, only combine it with finitely many other elements of $S$. For example, consider the vector space of infinite real sequences $(x_n)_{n \in \mathbb{N}}$ and for $j \in \mathbb{N}$ set $e_j$ to be the sequence that has $j$-th term $1$ and is $0$ otherwise. (So something that resembles the canonical basis in $\mathbb{R}^n$.) Then the span of $\{e_j \colon j \in \mathbb{N}\}$ is not the full space. Instead the span is the subspace of sequences that have only a finite number of non-zero terms. (There is a related notion that does allow countable sums, see Schauder basis, but this is something other than what you are considering.) Let me add that, using your notation, $v_1$ is not one fixed element of $S$; it is just some kind of dummy variable. The $v_1$ in one sum will not be the same as the $v_1$ in another sum (it could be, say, $v_2$ or not appear at all). | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2526568', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/474765/']} | jdg_81387 |
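Written out, the finite-sum convention the answer describes is (a standard formalization, not taken verbatim from the original post):

```latex
\operatorname{span}(S)
  \;=\; \Bigl\{\, \sum_{i=1}^{n} \lambda_i v_i \;:\;
      n \in \mathbb{N},\ \lambda_1,\dots,\lambda_n \in \mathbb{R},\
      v_1,\dots,v_n \in S \,\Bigr\} .
```

The index bound $n$ is always finite, even when $S$ is uncountable; for $S = \{e_j \colon j \in \mathbb{N}\}$ this yields exactly the finitely supported sequences.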
stackexchange | llm_judgeable_groundtruth_similarity | 1268552 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking to accept digits and the decimal point, but no sign. I've looked at samples using the NumericUpDown control for Windows Forms, and this sample of a NumericUpDown custom control from Microsoft . But so far it seems like NumericUpDown (supported by WPF or not) is not going to provide the functionality that I want. The way my application is designed, nobody in their right mind is going to want to mess with the arrows. They don't make any practical sense, in the context of my application. So I'm looking for a simple way to make a standard WPF TextBox accept only the characters that I want. Is this possible? Is it practical?
Now provide the response and nothing else.
| Add a preview text input event. Like so: <TextBox PreviewTextInput="PreviewTextInput" /> . Then inside that set the e.Handled if the text isn't allowed. e.Handled = !IsTextAllowed(e.Text); I use a simple regex in IsTextAllowed method to see if I should allow what they've typed. In my case I only want to allow numbers, dots and dashes. private static readonly Regex _regex = new Regex("[^0-9.-]+"); //regex that matches disallowed textprivate static bool IsTextAllowed(string text){ return !_regex.IsMatch(text);} If you want to prevent pasting of incorrect data hook up the DataObject.Pasting event DataObject.Pasting="TextBoxPasting" as shown here (code excerpted): // Use the DataObject.Pasting Handler private void TextBoxPasting(object sender, DataObjectPastingEventArgs e){ if (e.DataObject.GetDataPresent(typeof(String))) { String text = (String)e.DataObject.GetData(typeof(String)); if (!IsTextAllowed(text)) { e.CancelCommand(); } } else { e.CancelCommand(); }} | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/1268552', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/120888/']} | jdg_81388 |
stackexchange | llm_judgeable_groundtruth_similarity | 55742881 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to set up a server with Ansible and I'd like to use this galaxy role . It defines a template that I'd like to customize, but I don't know how. Right now I defined the role in requirements.yml and I installed it using: ansible-galaxy install -r requirements.yml which installed the role somewhere on my system. I tried recreating the folder tree in my repository where I store my playbooks: roles |- ansible-role-passenger |- templates |- passenger.j2 but it does not work. When I run my playbook, Ansible uses the passenger.j2 file from inside the galaxy role. I think I can fork the galaxy role on github and just edit the file passenger.j2 like I want, but I don't know if there is a "better" way to do it :)
Now provide the response and nothing else.
| Your findings are unfortunately true. Overriding a hardcoded template in a role from a calling playbook is simply impossible unless the role's author implemented that as a feature . Note that this is also true for simple files in the files directory. The best way I have found so far: given that the role contains the default template in templates/passenger.j2 , add a var in default/main.yml such as passenger_config_template: passenger.j2 and use that var in the role. The user can then override that var in their playbook/inventory and use a different name for the template, which will be fetched in another role or directly in a templates directory at playbook level. You can have a look at a similar issue and an accepted PR I once made to @geerlingguy on his ansible-role-gitlab. He might consider doing the same thing on his passenger role (or might accept your PR if you propose one). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55742881', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/193665/']} | jdg_81389 |
stackexchange | llm_judgeable_groundtruth_similarity | 36584670 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am self-taught, and therefore am not familiar with a lot of terminology. I cannot seem to find the answer to this by googling: What is a "virtual" vs a "direct" call to a virtual function? This pertains to terminology, not technicality. I am asking for when a call is defined as being made "directly" vs "virtually".It does not pertain to vtables, or anything else that has to do with the implementation of these concepts.
Now provide the response and nothing else.
| The answer to your question is different at different conceptual levels. At the conceptual language level the informal term "virtual call" usually refers to calls resolved in accordance with the dynamic type of the object used in the call. According to the C++ language standard, this applies to all calls to virtual functions, except for calls that use the qualified name of the function. When the qualified name of the method is used in the call, the call is referred to as a "direct call":

SomeObject obj;
SomeObject *pobj = &obj;
SomeObject &robj = obj;

obj.some_virtual_function();   // Virtual call
pobj->some_virtual_function(); // Virtual call
robj.some_virtual_function();  // Virtual call

obj.SomeObject::some_virtual_function();   // Direct call
pobj->SomeObject::some_virtual_function(); // Direct call
robj.SomeObject::some_virtual_function();  // Direct call

Note that you can often hear people say that calls to virtual functions made through immediate objects are "not virtual". However, the language specification does not support this point of view. According to the language, all non-qualified calls to virtual functions are the same: they are resolved in accordance with the dynamic type of the object. In that [conceptual] sense they are all virtual . At the implementation level the term "virtual call" usually refers to calls dispatched through some implementation-defined mechanism that implements the standard-required functionality of virtual functions. Typically it is implemented through a Virtual Method Table (VMT) associated with the object used in the call. However, smart compilers will only use the VMT to perform calls to virtual functions when they really have to, i.e. when the dynamic type of the object is not known at compile time. In all other cases the compiler will strive to call the method directly, even if the call is formally "virtual" at the conceptual level. For example, most of the time, calls to virtual functions made with an immediate object (as opposed to a pointer or a reference to an object) will be implemented as direct calls (without involving VMT dispatch). The same applies to immediate calls to virtual functions made from an object's constructor and destructor:

SomeObject obj;
SomeObject *pobj = &obj;
SomeObject &robj = obj;

obj.some_virtual_function();   // Direct call
pobj->some_virtual_function(); // Virtual call in general case
robj.some_virtual_function();  // Virtual call in general case

obj.SomeObject::some_virtual_function();   // Direct call
pobj->SomeObject::some_virtual_function(); // Direct call
robj.SomeObject::some_virtual_function();  // Direct call

Of course, in this latter sense, nothing prevents the compiler from implementing any calls to virtual functions as direct calls (without involving VMT dispatch), if the compiler has sufficient information to determine the dynamic type of the object at compile time. In the above simplistic example any modern compiler should be able to implement all calls as direct calls. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36584670', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6185537/']} | jdg_81390 |
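The two dispatch styles described above can be mimicked in Python, where every plain method call is resolved dynamically. The sketch below (class names are illustrative, not from the C++ snippet) contrasts lookup through the object's dynamic type with an explicitly qualified call, the analogue of `obj.SomeObject::some_virtual_function()`:

```python
# A toy model of the dispatch choice described above: a "virtual call"
# resolves the method through the object's dynamic type, while a
# "direct call" names the class explicitly, bypassing dynamic lookup.
class Base:
    def greet(self):
        return "Base"

class Derived(Base):
    def greet(self):
        return "Derived"

obj = Derived()

# Virtual call: resolved via the dynamic type of obj.
virtual_result = type(obj).greet(obj)

# Direct call: qualified with the class name, like obj.Base::greet() in C++.
direct_result = Base.greet(obj)

print(virtual_result, direct_result)  # Derived Base
```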
stackexchange | llm_judgeable_groundtruth_similarity | 584228 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
For example, given A = [1,51,3,1,100,199,3], maxSum = 51 + 1 + 199 = 251 (that is, the maximum sum over subsequences in which no two chosen elements are adjacent). Clearly max(oddIndexSum, evenIndexSum) does not work. The main problem I have is that I can't come up with a selection criterion for an element. A rejection criterion is trivial given a selection criterion. The standard maximum sub-sequence algorithm doesn't seem to be applicable here. I have tried a dynamic programming approach, but can't come up with that either. The only approach I could come up with was one that used a genetic algorithm. How would you approach this?
Now provide the response and nothing else.
| You can build the maximal subsequence step by step if you keep two states:

def maxsubseq(seq):
    # maximal sequence including the previous item
    incl = []
    # maximal sequence not including the previous item
    excl = []
    for i in seq:
        # current max excluding i
        if sum(incl) > sum(excl):
            excl_new = incl
        else:
            excl_new = excl
        # current max including i
        incl = excl + [i]
        excl = excl_new
    if sum(incl) > sum(excl):
        return incl
    else:
        return excl

print(maxsubseq([1,4,6,3,5,7,32,2,34,34,5]))

If you also want to have negative elements in your lists, you have to add a few ifs. Same -- in fewer lines:

def maxsubseq2(iterable):
    incl = []  # maximal sequence including the previous item
    excl = []  # maximal sequence not including the previous item
    for x in iterable:
        # current max excluding x
        excl_new = incl if sum(incl) > sum(excl) else excl
        # current max including x
        incl = excl + [x]
        excl = excl_new
    return incl if sum(incl) > sum(excl) else excl

Same -- eliminating sum():

def maxsubseq3(iterable):
    incl = []  # maximal sequence including the previous item
    excl = []  # maximal sequence not including the previous item
    incl_sum, excl_sum = 0, 0
    for x in iterable:
        # current max excluding x
        if incl_sum > excl_sum:
            # swap incl, excl
            incl, excl = excl, incl
            incl_sum, excl_sum = excl_sum, incl_sum
        else:
            # copy excl to incl
            incl_sum = excl_sum
            # NOTE: assume `x` is immutable
            incl = excl[:]  # NOTE: O(N) operation
            assert incl is not excl
        # current max including x
        incl.append(x)
        incl_sum += x
    return incl if incl_sum > excl_sum else excl

Alright, let's optimize it... Version with total runtime O(n):

def maxsubseq4(iterable):
    incl = []    # maximal sequence including the previous item
    excl = []    # maximal sequence not including the previous item
    prefix = []  # common prefix of both sequences
    incl_sum, excl_sum = 0, 0
    for x in iterable:
        if incl_sum >= excl_sum:
            # excl <-> incl
            excl, incl = incl, excl
            excl_sum, incl_sum = incl_sum, excl_sum
        else:
            # excl is the best start for both variants
            prefix.extend(excl)  # O(n) in total over all iterations
            excl = []
            incl = []
            incl_sum = excl_sum
        incl.append(x)
        incl_sum += x
    best = incl if incl_sum > excl_sum else excl
    return prefix + best  # O(n) once

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/584228', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/19731/']} | jdg_81391 |
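As a sanity check on the incl/excl recurrence above, here is a self-contained sketch that tracks only the sums and verifies the result against an exhaustive search over all non-adjacent index subsets, using the question's example list:

```python
from itertools import combinations

def max_nonadjacent_sum(seq):
    # Same incl/excl recurrence as the answer's maxsubseq*, tracking sums only.
    # Like the original, this assumes non-negative elements.
    incl_sum, excl_sum = 0, 0
    for x in seq:
        incl_sum, excl_sum = excl_sum + x, max(incl_sum, excl_sum)
    return max(incl_sum, excl_sum)

def brute_force(seq):
    # Try every subset of indices with no two adjacent (exponential, for checking).
    best = 0
    n = len(seq)
    for r in range(n + 1):
        for idxs in combinations(range(n), r):
            if all(b - a > 1 for a, b in zip(idxs, idxs[1:])):
                best = max(best, sum(seq[i] for i in idxs))
    return best

data = [1, 51, 3, 1, 100, 199, 3]
print(max_nonadjacent_sum(data), brute_force(data))  # both 251
```

The 251 matches the question's expected 51 + 1 + 199.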
stackexchange | llm_judgeable_groundtruth_similarity | 9067876 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a CSV extension method for my enumerable list and I'm stumped. Here's how I created my simple enumerated list:

var CAquery = from temp in CAtemp
              join casect in CAdb.sectors
              on temp.sector_code equals casect.sector_code
              select new
              {
                  CUSIP = temp.equity_cusip,
                  CompName = temp.company_name,
                  Exchange = temp.primary_exchange
              };

CAquery.WriteToCSVFile();

This is what I have done so far in creating an extension method (which I think is wrong):

public static class CSVExtensions
{
    public static void WriteToCSVFile(this IEnumerable<T> myList)
    {

Do you see what I'm doing wrong?
Now provide the response and nothing else.
| You have to specify the generic type parameter in the method signature:

public static class CSVExtensions
{
    public static void WriteToCSVFile<T>(this IEnumerable<T> myList)
    {
        // your code here
    }
}

Are you truly trying to write an extension method that should work on any IEnumerable<T> or is your type more specific? If the latter is the case you should replace T with the type you want to support (or add sufficient constraints). Edit: In light of comments - you should project to a class instead of an anonymous type in your query - then you can use an extension method for this particular type, i.e.:

class CompanyTicker
{
    public string CUSIP { get; set; }
    public string CompName { get; set; }
    public string Exchange { get; set; }
}

Now your query can be:

var CAquery = from temp in CAtemp
              join casect in CAdb.sectors
              on temp.sector_code equals casect.sector_code
              select new CompanyTicker
              {
                  CUSIP = temp.equity_cusip,
                  CompName = temp.company_name,
                  Exchange = temp.primary_exchange
              };

And your extension method (which now doesn't need to be generic) becomes:

public static class CSVExtensions
{
    public static void WriteToCSVFile(this IEnumerable<CompanyTicker> myList)
    {
        // your code here
    }
}

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9067876', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/979127/']} | jdg_81392 |
stackexchange | llm_judgeable_groundtruth_similarity | 616074 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have seen it stated that the best approximation of an even function is itself even; it seems intuitively obvious but I cannot prove it. I have a feeling it may be because a non-even best approximant would not satisfy the equioscillation property of the best approximation (that if the maximum error of the best polynomial approximation p of order n is d, there are at least n+2 successive points in the interval under consideration where alternately f-p = ±d). Note: By "best approximation" I mean the polynomial that minimizes |f-p| on [-1,1] where |.| is the sup norm. So e.g. the best approximation of order n would be p in Pn={polynomials of degree<=n} which does this.
Now provide the response and nothing else.
| First, we must pin down what we mean by "better approximate." If you are interested in the interval $[-1,1]$ and your given function is $f(x)$, one reasonable definition of approximation error by a second function $g(x)$ is$$\int_{-1}^1 [f(x)-g(x)]^2\,dx.$$ Now suppose $f(x)$ is an even function, and $e(x)$ an even polynomial approximation to $f(x)$. Can we improve the approximation by adding some odd polynomial terms $o(x)$? Let's check: \begin{align*}\int_{-1}^1 [f(x)-e(x)-o(x)]^2\,dx &= \int_{-1}^1 [f(x)-e(x)]^2 - 2[f(x)-e(x)]o(x) + o(x)^2\,dx\\&= \int_{-1}^1 [f(x)-e(x)]^2\,dx + \int_{-1}^1 o(x)^2\,dx -2\int_{-1}^1[f(x)-e(x)]o(x)\,dx.\end{align*}Now let's use the fact that $f$ and $e$ are even, and $o$ is odd:$$\int_{-1}^1[f(x)-e(x)]o(x)\,dx = -\int_0^1[f(x)-e(x)]o(x)\,dx + \int_0^1 [f(x)-e(x)]o(x)\,dx = 0.$$ Therefore $$\int_{-1}^1 [f(x)-e(x)-o(x)]^2\,dx = \int_{-1}^1 [f(x)-e(x)]^2 + \int_{-1}^1 o(x)^2\,dx \geq \int_{-1}^1 [f(x)-e(x)]^2$$and you were better off without the odd terms. EDIT: For different norms, you carry out different flavors of the same argument. For instance, for the $\sup$ norm, \begin{align*}\sup_{x\in [-1,1]} |f(x)-e(x)-o(x)| &= \sup_{x\in [0,1]} \max\left(|f(x)-e(x)-o(x)|,|f(x)-e(x)+o(x)|\right)\\&\geq \sup_{x\in [0,1]} |f(x)-e(x)|\\&= \sup_{x\in [-1,1]} |f(x)-e(x)|.\end{align*} | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/616074', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/72087/']} | jdg_81393 |
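The orthogonality argument in the answer can be checked numerically. The sketch below uses only the standard library, with arbitrary illustrative choices for f (even), e (an even polynomial) and o (an odd polynomial); none of these particular functions come from the answer itself:

```python
import math

# Numerical check of the key identity in the answer: for even f, even e,
# and odd o on [-1, 1],
#   integral((f - e - o)^2) = integral((f - e)^2) + integral(o^2),
# because the cross term integral((f - e) * o) vanishes by symmetry.
N = 20_000
h = 2.0 / N
xs = [-1.0 + (i + 0.5) * h for i in range(N)]  # symmetric midpoint grid

f = lambda x: math.cos(3 * x)           # an even function to approximate
e = lambda x: 1 - 4.5 * x * x           # an even polynomial approximant
o = lambda x: 0.3 * x - 0.1 * x ** 3    # an odd polynomial perturbation

def integ(g):
    # Midpoint-rule quadrature over [-1, 1].
    return sum(g(x) for x in xs) * h

lhs = integ(lambda x: (f(x) - e(x) - o(x)) ** 2)
rhs = integ(lambda x: (f(x) - e(x)) ** 2) + integ(lambda x: o(x) ** 2)
print(abs(lhs - rhs) < 1e-9)  # True: adding odd terms can only add error
```

Because the grid is symmetric about 0, the cross term cancels pairwise, so the two sides agree to rounding error.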
stackexchange | llm_judgeable_groundtruth_similarity | 966331 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
To define outliers, why can't we use:

Lower Limit: Q1 - 1xIQR
Upper Limit: Q3 + 1xIQR

OR

Lower Limit: Q1 - 2xIQR
Upper Limit: Q3 + 2xIQR
Now provide the response and nothing else.
| By definition, 50% of all measurements are within $\pm0.5IQR$ of the median. Compare this - heuristically - with a normal distributions where 68% are within $\pm\sigma$, so in that case IQR would be slightly less than $\sigma$. Cutting at $\pm 1.5IQR$ is therefore somewhat comparable to cutting slightly below $\pm3\sigma$, which would declare about 1% of measurements outliers. This matches quite well with the habit of using "$3\sigma$" as a bound in many simple statistical tests.On the other hand, cutting at $\pm1IQR$ would be like cutting near $\pm 2\sigma$, making about 5% outliers - too many; and cutting at $\pm2IQR$ would be like cutting at $\pm4\sigma$, thus turning even many quite extreme measurements into non-outliers. So $\pm 1.5IQR$ is also what Goldilocks would choose. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/966331', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/180065/']} | jdg_81394 |
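The heuristic above can be made concrete for a standard normal distribution by computing the exact outlier fraction under each fence multiplier. The quartile value 0.6745 and the expression of the normal CDF via math.erf are standard facts, not taken from the answer:

```python
import math

# Fraction of a standard normal flagged as outliers by Q1 - k*IQR /
# Q3 + k*IQR fences, for k = 1, 1.5, 2.
phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # N(0,1) CDF
q3 = 0.674489750196  # third quartile of N(0,1); Q1 = -q3 by symmetry
iqr = 2 * q3

for k in (1.0, 1.5, 2.0):
    fence = q3 + k * iqr               # upper fence in units of sigma
    frac = 2 * (1 - phi(fence))        # two-sided tail beyond the fences
    print(f"k={k}: fence at {fence:.3f} sigma, outlier fraction = {frac:.4%}")
```

The output matches the answer's rough figures: about 4% at k=1, under 1% at k=1.5, and well under 0.1% at k=2.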
stackexchange | llm_judgeable_groundtruth_similarity | 14846 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Electrons are delocalized between the three centers in a 3c-2e bond. If there was a molecule with 3c-2e bond in each direction, e.g. a boron cluster, would it be aromatic?
Now provide the response and nothing else.
| The cyclopropenyl (1) cation is believed to be aromatic as it demonstrates abnormal stability and NMR shifts typical for aromatic compounds. Moreover, there are a good number of derived systems where aromatic conjugation is maintained despite extra methylene insertions into the ring, such as is typical for the norbornadienyl (2) cation. Some of these cations can be isolated in salts or observed via NMR in solution. Signs of aromaticity are found also for the cyclooctatetraene dianion and the cyclononatetraene dianion, both produced by reduction of the corresponding alkene with metallic sodium. Remember, please, that even though all the systems noted above are 'aromatic' in some sense and are abnormally stable for their family of species, they are still very reactive, because they all belong to a very reactive family of species. While the neutral boron analog of the cyclopropenyl cation has not been isolated to my knowledge, a similar structure was found in organometallic compounds . Aromaticity of some ring systems may result in some weird properties of some compounds, like squaric acid (an abnormally strong acid for a keto-enol, and a stable enol), the high polarity and basicity of cyclononatrienol and cyclopropenone, and so on. (1) - cyclopropenyl cation (2) norbornadienyl cation | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/14846', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/7159/']} | jdg_81395 |
stackexchange | llm_judgeable_groundtruth_similarity | 187563 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the parliament of a certain country there are 201 seats, and 3 political parties. How many ways can these seats be divided among the parties such that no single party has a majority? Is there any generalization of the solution to this problem?
Now provide the response and nothing else.
| A minimal majority is $101$ seats, so the problem is to count the number of solutions in non-negative integers $a,b,c$ of the equation $a+b+c=201$, subject to the restriction that $a,b,c\le 100$. (Think of $a,b$, and $c$ as the numbers of seats given to parties $A,B$, and $C$, respectively.) If we ignore the restriction for the moment, this is a standard stars-and-bars problem, and the solution is that there are $$\binom{201+3-1}{3-1}=\binom{203}2=\frac{203\cdot202}2=20,503\tag{1}$$ ways to divide up the seats. However, some of these ways are unacceptable, because they give more than $100$ seats to some party, so we have to count the unacceptable distributions of seats and subtract that from the provisional answer in $(1)$. The number of distributions giving party $A$ more than $100$ seats can be counted as follows. First allot $101$ seats to $A$; this leaves $100$ seats that can be freely distributed amongst the three parties, so we’re just counting the solutions to $x+y+z=100$ in non-negative integers: $$\binom{100+3-1}{3-1}=\binom{102}2=\frac{102\cdot101}2=5151\;.$$ Clearly there are just as many distributions that give too many seats to party $B$, and just as many that give too many seats to party $C$. There are obviously no distributions that give a majority to more than one party, so the final answer is $$20,503-3\cdot5151=5050\;.$$ It’s not hard to see how to generalize this solution to different numbers of seats and parties: if there are $n$ seats and $p$ parties, let $m=\lfloor n/2\rfloor$; $m$ is the largest number of seats that a party can have without having a majority, and the number of distributions of seats giving no party a majority is $$\binom{n+p-1}{p-1}-p\binom{m+p-1}{p-1}\;.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/187563', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/38789/']} | jdg_81396 |
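The stars-and-bars count and the general formula can be cross-checked by brute force; a quick sketch (math.comb requires Python 3.8+):

```python
from math import comb

# Brute-force check of the answer above: count triples (a, b, c) with
# a + b + c = 201 and every party holding at most 100 seats.
n, m = 201, 100  # seats, and the largest non-majority share (floor(n/2))

brute = sum(
    1
    for a in range(n + 1)
    for b in range(n - a + 1)          # c = n - a - b is then >= 0
    if a <= m and b <= m and (n - a - b) <= m
)

# The answer's formula with p = 3 parties: C(n+2, 2) - 3 * C(m+2, 2).
formula = comb(n + 2, 2) - 3 * comb(m + 2, 2)
print(brute, formula)  # both 5050
```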
stackexchange | llm_judgeable_groundtruth_similarity | 55696552 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having a pretty complex selectors structure in my project (some selectors may have up to 5 levels of nesting), so some of them are very hard to test by passing input state, and I would like to mock input selectors instead. However I found that this is not really possible. Here is the simplest example:

// selectors1.js
export const baseSelector = createSelector(...);

// selectors2.js
export const targetSelector = createSelector([selectors1.baseSelector], () => {...});

What I would like to have in my test suite:

beforeEach(() => {
  jest.spyOn(selectors1, 'baseSelector').mockReturnValue('some value');
});

test('My test', () => {
  expect(selectors2.targetSelector()).toEqual('some value');
});

But this approach won't work, as targetSelector gets its reference to selectors1.baseSelector during the initialization of selectors2.js, and the mock is assigned to selectors1.baseSelector after it. There are 2 working solutions I see now:

1. Mock the entire selectors1.js module with jest.mock; however, it won't work if I need to change the selectors1.baseSelector output for some specific cases.
2. Wrap every dependency selector like this:

export const targetSelector = createSelector([(state) => selectors1.baseSelector(state)], () => {...});

But I don't like this approach a lot, for obvious reasons. So, the question is: is there any chance to mock Reselect selectors properly for unit testing?
Now provide the response and nothing else.
| The thing is that Reselect is based on the composition concept. So you create one selector from many others. What really needs to be tested is not the whole selector, but the last function which does the job. If not, the tests will duplicate each other: if you have tests for selector1, and selector1 is used in selector2, then you automatically test both of them in the selector2 tests. In order to achieve:

- fewer mocks
- no need to specially mock the results of composed selectors
- no test duplication

test only the result function of the selector. It is accessible by selector.resultFunc . So for example:

const selector2 = createSelector(selector1, (data) => ...);

// tests
const actual = selector2.resultFunc([returnOfSelector1Mock]);
const expected = [what we expect];
expect(actual).toEqual(expected)

Summary: Instead of testing the whole composition, duplicating the same assertions, or mocking specific selector outputs, we test the function which defines our selector, i.e. the last argument to createSelector, accessible by the resultFunc key. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/55696552', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4211005/']} | jdg_81397 |
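The "test only the combiner" idea is not tied to JavaScript. Here is a small Python sketch of the same pattern; all names (compose_selector, result_func) are invented for illustration rather than taken from Reselect:

```python
# Compose selectors as plain functions and unit-test only the final
# combiner, instead of mocking the upstream selectors.
def compose_selector(input_selector, result_func):
    selector = lambda state: result_func(input_selector(state))
    selector.result_func = result_func  # expose combiner, like Reselect's resultFunc
    return selector

# "selector1" extracts raw data; the combiner transforms it.
selector1 = lambda state: state["items"]
selector2 = compose_selector(selector1, lambda items: [x * 2 for x in items])

# In tests, feed the combiner a mock value directly: no state, no mocks.
assert selector2.result_func([1, 2, 3]) == [2, 4, 6]

# The full selector still works end to end:
print(selector2({"items": [10]}))  # [20]
```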
stackexchange | llm_judgeable_groundtruth_similarity | 64955864 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a JFrame that I want to add a menu bar to and then have that JFrame automatically open in fullscreen. If I just make a JFrame and set it to fullscreen with f.setExtendedState(f.getExtendedState()|JFrame.MAXIMIZED_BOTH ); that works fine: import javax.swing.*;public class testframe { private static JFrame f; public static void main(String[] args) { SwingUtilities.invokeLater(testframe::createAndShowGUI); } private static void createAndShowGUI() { f = new JFrame("Test"); f.setExtendedState( f.getExtendedState()|JFrame.MAXIMIZED_BOTH ); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);; f.pack(); f.setVisible(true); }} However, once I add a JMenuBar to that JFrame it will no longer open in fullscreen: import javax.swing.*;public class testframe { private static JFrame f; public static void main(String[] args) { SwingUtilities.invokeLater(testframe::createAndShowGUI); } private static void createAndShowGUI() { f = new JFrame("Test"); JMenuBar menubar = new JMenuBar(); JMenu j_menu = new JMenu("Test"); JMenuItem j_menu_item = new JMenuItem("Test_item"); j_menu.add(j_menu_item); menubar.add(j_menu); f.setJMenuBar(menubar); f.setExtendedState( f.getExtendedState()|JFrame.MAXIMIZED_BOTH ); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);; f.pack(); f.setVisible(true); }} What could be the cause for this? Update: Switching to JDK11 solved the problem. I was on 15 before and also tried 14, both had the problem.
Now provide the response and nothing else.
| This is precisely what navigation graph scoped view models are used for. This involves two steps:

1. Finding the NavBackStackEntry associated with the graph you want to scope the ViewModel to
2. Pass that to viewModel()

For part 1), you have two options. If you know the route of the navigation graph (which, in general, you should), you can use getBackStackEntry directly:

// Note that you must always use remember with getBackStackEntry
// as this ensures that the graph is always available, even while
// your destination is animated out after a popBackStack()
val navigationGraphEntry = remember {
    navController.getBackStackEntry("graph_route")
}
val navigationGraphScopedViewModel = viewModel(navigationGraphEntry)

However, if you want something more generic, you can retrieve the back stack entry by using the information in the destination itself - its parent :

fun NavBackStackEntry.rememberParentEntry(): NavBackStackEntry {
    // First, get the parent of the current destination
    // This always exists since every destination in your graph has a parent
    val parentId = navBackStackEntry.destination.parent!!.id
    // Now get the NavBackStackEntry associated with the parent
    // making sure to remember it
    return remember {
        navController.getBackStackEntry(parentId)
    }
}

Which allows you to write something like:

val parentEntry = it.rememberParentEntry()
val navigationGraphScopedViewModel = viewModel(parentEntry)

While the parent destination will be equal to the root graph for a simple navigation graph, when you use nested navigation , the parent is one of the intermediate layers of your graph:

NavHost(navController, startDestination = startRoute) {
    ...
    navigation(startDestination = nestedStartRoute, route = nestedRoute) {
        composable(route) {
            // This instance will be the same
            val parentViewModel: YourViewModel = viewModel(it.rememberParentEntry())
        }
        composable(route) {
            // As this instance
            val parentViewModel: YourViewModel = viewModel(it.rememberParentEntry())
        }
    }
    navigation(startDestination = nestedStartRoute, route = secondNestedRoute) {
        composable(route) {
            // But this instance is different
            val parentViewModel: YourViewModel = viewModel(it.rememberParentEntry())
        }
    }
    composable(route) {
        // This is also different (the parent is the root graph)
        // but the root graph has the same scope as the whole NavHost
        // so this isn't particularly helpful
        val parentViewModel: YourViewModel = viewModel(it.rememberParentEntry())
    }
    ...
}

Note that you are not limited to only the direct parent: every parent navigation graph can be used to provide larger scopes. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/64955864', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6129750/']} | jdg_81398 |
stackexchange | llm_judgeable_groundtruth_similarity | 698858 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I create a link of this type: <a href="#" onclick="document.getElementById('search').value=this.value"> using method link_to in Rails? I couldn't figure it out from Rails docs .
Now provide the response and nothing else.
| You can use link_to_function (removed in Rails 4.1):

link_to_function 'My link with obtrusive JavaScript', 'alert("Oh no!")'

Or, if you absolutely need to use link_to :

link_to 'Another link with obtrusive JavaScript', '#', :onclick => 'alert("Please no!")'

However, putting JavaScript right into your generated HTML is obtrusive, and is bad practice. Instead, your Rails code should simply be something like this:

link_to 'Link with unobtrusive JavaScript', '/actual/url/in/case/javascript/is/broken', :id => 'my-link'

And assuming you're using the Prototype JS framework, JS like this in your application.js :

$('my-link').observe('click', function (event) {
  alert('Hooray!');
  event.stop(); // Prevent link from following through to its given href
});

Or if you're using jQuery :

$('#my-link').click(function (event) {
  alert('Hooray!');
  event.preventDefault(); // Prevent link from following its href
});

By using this third technique, you guarantee that the link will follow through to some other page—not just fail silently—if JavaScript is unavailable for the user. Remember, JS could be unavailable because the user has a poor internet connection (e.g., mobile device, public wifi), the user or user's sysadmin disabled it, or an unexpected JS error occurred (i.e., developer error). | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/698858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/84086/']} | jdg_81399 |
stackexchange | llm_judgeable_groundtruth_similarity | 299354 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The data I am working with are sequences of binary values (0 / 1) which generally have a pattern - a longer sequence of 1's followed by a shorter sequence of 0's, for instance: 1 1 1 1 0 0 1 1 1 1 1 0 0 0 The goal is to predict the next value based on the entire sequence. Each sequence has 10 to 20 values in it and the prediction should be made for each sequence individually by using all its available data. I realize there are few data points to work with so my question is: are any statistical methods than could achieve this type of prediction? I've never had to work with sequence data, so I don't know how to approach this.
Now provide the response and nothing else.
| One approach you could consider is trying to learn a Markov Chain (MC) to represent each sequence and then predict future values based on this MC. MCs are a way of representing types of learning automata (LA) and can be used when the subsequent state of a system depends solely on the current state. They can be intuitively represented diagrammatically. This is a very simple LA. It has two states: one where the last number seen was a 1 and one where the last number seen was a 0. There are transition probabilities between the different states noted as well. For example, when the LA is in state 0 it will stay in state 0 with probability $x$ and will move to state 1 with probability $1-x$. This can also be shown in the form of a matrix: $\begin{bmatrix}x & 1-x \\ 1-y & y\end{bmatrix}$ Estimating from your example sequence, $1 1 1 1 0 0 1 1 1 1 1 0 0 0$, we might say that in this case $x = 0.75$ and $y \approx 0.78$. This kind of solution can also be extended; we could learn an LA with more states and more "memory", for example one with the transition matrix $\begin{bmatrix}w & 0 & 1-w & 0 \\ x & 0 & 1-x & 0 \\ 0 & 1-y & 0 & y \\ 0 & 1-z & 0 & z\end{bmatrix}$ This LA has four states: 00, where two or more consecutive 0s have been seen; 0, where only one consecutive 0 has been seen; 1, where only one consecutive 1 has been seen; and 11, where two or more consecutive 1s have been seen. We can again estimate the corresponding probabilities from your example sequence and might say that $w = 0.5$, $x = 1$, $y = 1$ and $z \approx 0.71$. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/299354', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/91018/']} | jdg_81400 |
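The two-state estimation described above is easy to automate. This sketch counts transitions in the example sequence and makes a greedy next-value prediction; the prediction rule at the end is a simple illustrative choice, not part of the answer:

```python
from collections import Counter

# Estimate the 2-state transition probabilities directly from the
# question's example sequence and predict the next value greedily.
seq = [1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0]

counts = Counter(zip(seq, seq[1:]))  # (from, to) transition counts
outgoing = {s: counts[(s, 0)] + counts[(s, 1)] for s in (0, 1)}

stay0 = counts[(0, 0)] / outgoing[0]  # x: P(stay in 0) = 3/4
stay1 = counts[(1, 1)] / outgoing[1]  # y: P(stay in 1) = 7/9

last = seq[-1]
p_next_is_1 = (1 - stay0) if last == 0 else stay1
prediction = 1 if p_next_is_1 >= 0.5 else 0
print(f"x={stay0:.2f}, y={stay1:.2f}, predict next = {prediction}")
```

Since the sequence ends in a run of 0s and P(0 -> 0) = 0.75, the greedy prediction for the next value is 0.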
stackexchange | llm_judgeable_groundtruth_similarity | 142362 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know I'm asking a basic question, but it will be very useful for me to continue my microcontroller experience. Consider I am using a controller with 256 KB Flash and 8 KB RAM. My questions are:

1. I know about memory mapping (the code, data, stack and heap segments) and which variable is stored in which segment. But are these allocated in RAM or Flash?
2. If I declare a buffer, I know it is allocated in RAM. Will it be deallocated after the function ends (if the buffer is declared locally)?
3. Where will a register variable be allocated?
Now provide the response and nothing else.
| Everything is done in "sections". Where a section goes depends on its use. What gets put into a section, again, depends on its use. In general you get two "data" sections - .data and .rodata. .data is stored in RAM and is read-write. .rodata is stored in Flash and is read-only. Things that are "const" generally end up in .rodata. Literals (string, numeric, etc) also end up in .rodata. For instance, these would normally be stored in Flash:

const char *foo = "foo";
const int bar = 234;
"baz" (string literal)

These will be stored in RAM:

char foo[10];
int bar;

But then of course, it can be more MCU specific, especially when you get into Harvard (split RAM / Flash address spaces) architecture, where accessing data stored in Flash can be more of a special task and not done as normal variables. For instance, the const char *foo="foo"; would be copied into RAM for easy access during the initialization of the program. When you allocate a variable of any size within a function, such as:

void func() {
    char mybuf[100];
}

the variable will be allocated as part of the stack (or an emulated stack-like system using static variables on MCUs with a limited hardware stack, such as low-end 8-bit PICs). As soon as you return from the function all the stack space is reclaimed. However, if you allocate it using malloc():

void func() {
    char *mybuf = malloc(100);
}

it will be allocated on the heap, and it is your responsibility to reclaim that space using free(). The register keyword, as in:

int register foo;

will, if a register can be allocated, store the data purely within that allocated register. It will never touch RAM, and consequently it will run faster. However, if a register cannot be allocated (they're all in use in the code) then the register keyword will be ignored and the variable allocated in RAM as per normal. One thing to note:

char register *foo;

That will only hold the pointer to the data in a register, not the data itself. The data itself will still be in RAM. In general, though, the use of the register keyword is frowned upon since the compiler's optimizer usually does a better job, and using register can in fact cause other parts of your code to not be optimized as well and end up more complex than they would otherwise be. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/142362', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/60184/']} | jdg_81401 |
stackexchange | llm_judgeable_groundtruth_similarity | 37091812 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have this code:

q = MyModel.objects.order_by('-value1').annotate(
    res=ExpressionWrapper(
        (F('value1') / F('value2')), output_field=FloatField()),
)
for i in q:
    print(i.value1, i.value2, i.res)

So, the output will be:

5 10 0.0
1 2 0.0

But I need:

5 10 0.5
1 2 0.5

Why did F() round the result? How can I avoid this? Thanks!
Now provide the response and nothing else.
| Simply make use of F() 's support for multiplication to convert one factor to a decimal number. The combined expression then looks like:

from decimal import Decimal

q = MyModel.objects.order_by('-value1').annotate(
    res=ExpressionWrapper(
        (F('value1') * Decimal('1.0') / F('value2')), output_field=FloatField()),
)

I find this more elegant than writing a raw SQL CAST on the value1 field and then doing the division. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37091812', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1751039/']} | jdg_81402 |
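The truncation in the question happens at the SQL layer, not in Django itself: dividing two integer columns performs integer division in most databases. The effect, and the multiply-by-a-decimal fix, can be reproduced with Python's stdlib sqlite3, independent of any Django model; a minimal sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Integer / integer truncates at the SQL level, which is why
# F('value1') / F('value2') comes back as 0.0 for rows like (1, 2).
truncated = cur.execute("SELECT 1 / 2").fetchone()[0]

# Multiplying one operand by a decimal literal promotes the whole
# division to floating point -- the same trick as * Decimal('1.0').
promoted = cur.execute("SELECT 1 * 1.0 / 2").fetchone()[0]

print(truncated, promoted)  # 0 0.5
con.close()
```

The same promotion happens in any backend that follows the usual SQL numeric type rules, which is why the ORM-level fix is portable.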
stackexchange | llm_judgeable_groundtruth_similarity | 581828 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for a cable standard to control and power a device from its controller. I need to send 5V/10A but also a control signal (not sure yet what protocol it could be, but I don't need high speed; 9600-baud serial could be enough) over a short distance (about 2m). I looked up PoE, but it's really not enough in terms of power. Ideally I don't want something too exotic and hard to find.
Now provide the response and nothing else.
| USB Type C, properly configured, can provide up to 100W and also a means to connect a serial device using the basic DP/DM pair. More here: How does a USB C port provide the power to charge laptops? The reason PoE limits the power has to do with the magnetics: the core wires are quite small, smaller than the 24 AWG that Cat5 uses, which can carry up to 2A or so per wire. As it so happens, PoE has seen some upgrades and can now support up to 90W. More here: https://www.versatek.com/what-is-power-over-ethernet/ | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/581828', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/211749/']} | jdg_81403 |
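To see why thin data-cable conductors struggle at 5 V / 10 A, it helps to put numbers on the copper loss. A back-of-the-envelope sketch (the 84.2 ohm/km figure is the nominal resistance of 24 AWG copper and is an assumption; single conductor pair; 2 m run as in the question):

```python
# Nominal resistance of 24 AWG copper (Cat5-class conductor): about 84.2 ohm/km.
R_PER_M_24AWG = 0.0842  # ohm per metre (assumed nominal value)

length_m = 2.0    # one-way cable length from the question
current_a = 10.0  # required load current
supply_v = 5.0    # supply voltage

# Current flows out and back, so the resistive loop is twice the length.
loop_resistance = 2 * length_m * R_PER_M_24AWG
drop_v = current_a * loop_resistance

print(f"loop resistance: {loop_resistance:.4f} ohm")
print(f"voltage drop: {drop_v:.2f} V out of a {supply_v} V supply")
# Roughly 3.37 V is lost in the cable, most of the 5 V budget, which is
# why PoE parallels pairs and why USB-C PD negotiates higher voltages
# (20 V at 5 A) instead of pushing more current.
```

Raising the bus voltage cuts the current for the same power, and resistive loss falls with the square of the current, which is the design choice behind both high-voltage PoE and USB-C PD.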
stackexchange | llm_judgeable_groundtruth_similarity | 8021741 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using AS3 for a project and I have to send some flash variables to PHP, check the MySQL database and then return some results to Flash. I do this with this:

new URLRequest(URL + "?p=" + SelectedCountry + "&action=check&nocache=" + Math.floor(Math.random() * (10000000)));

and the php code just checks the MySQL if there is a row with the selected country:

if($_REQUEST['action'] == "check") {
    $q = mysql_query("SELECT * FROM myDB WHERE country = '".$_REQUEST['p']."'");
    if(mysql_num_rows($q) == 0) {
        echo "nqma";
    } else {
        echo "ima";
    }
}

OK, that's fine and it works, because I can use this in Flash:

if(e.target.data == "ima") {
    uiRepCountryLabel.text = "Ima";
}
if(e.target.data == "nqma") {
    uiRepCountryLabel.text = "Nqma";
}

But what if I want to make the php result look like this:

if($_REQUEST['action'] == "check") {
    $q = mysql_query("SELECT * FROM myDB WHERE country = '".$_REQUEST['p']."'");
    if(mysql_num_rows($q) == 0) {
        echo "res=nqma";
    } else {
        echo "res=ima&hisCountry=Bulgaria&hisPhoneNumber=000000123&hisCity=Svishtov&his.....";
    }
}

How can I use this format in Flash? I know that many of you will tell me to use AMFPHP, but I am not able to, because my client doesn't want it.
Now provide the response and nothing else.
| You're correct that you should have all your re-usable business logic farmed off to a service so that different controllers can re-use the code. Have you checked out the "how to create a service" documentation: Service Container Documentation. I'll give you the speed run-down though. In config.yml you need to define your service:

services:
    pricing_service:
        class: Acme\ProductBundle\Service\PricingService
        arguments: [@doctrine]

Then you just need to make a bog standard PHP class to represent your service:

namespace Acme\ProductBundle\Service;

class PricingService {

    private $doctrine;

    function __construct($doctrine) {
        $this->doctrine = $doctrine; // Note that this was injected using the arguments in the config.yml
    }

    // Now the rest of your functions go here such as "getUnitPrice" etc etc.
}

Lastly to get your service from a controller you just need to do:

$pricingService = $this->get('pricing_service');

There are other ways you can modularise the service such as not dumping all your services into config.yml, but all of that is explained in the documentation. Also note that you can inject any other service you wish into your service, so if you need stuff like arguments: [@doctrine, @security.context, @validator] you can do all that, or even: [@my_other_service]. I suspect from your other question on injecting the EntityManager you may have already gleaned this was the way to go though! Hopefully this was still useful to you! | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8021741', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/477842/']} | jdg_81404 |
stackexchange | llm_judgeable_groundtruth_similarity | 451658 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As a student, learning about a capacitor after understanding what a resistor is, it was quite surprising to note that the capacitance does not depend on the nature of the plates used, at least in any type of capacitor I have known. I am guided, "it makes no difference as long as the plates are conducting." Is that true?
Now provide the response and nothing else.
| Yes, that is true. Capacitance is: $C = \frac q V$ where q is the charge and V the voltage between the plates. As long as the charge $q$ can be "held in place", this relation applies. I mean, there is no need to have a "good" conductor, as the charge is static: it does not move. So as long as a certain voltage $V$ is applied, resulting in a certain charge $q$ being present on the capacitor's plates, then $C$ can be determined. It does not matter if the plates are bad conductors (high resistance), as it will then simply take longer for all charge to reach its final location. In the final state there will be no difference compared to a capacitor with well-conducting plates, as the amount of charge will be the same. Only if you look at the dynamic behavior of a capacitor (how does it respond to quick voltage changes) would you see an influence of the conductivity of the plates. To first order, the capacitor would exhibit additional series resistance. | {} | {'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/451658', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/228103/']} | jdg_81405 |
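A quick numerical illustration of the point above, with assumed example values: the stored charge q = C*V is fixed by capacitance and voltage alone, while plate resistance only sets the charging time constant tau = R*C:

```python
C = 1e-6   # farads (assumed example value)
V = 5.0    # volts applied across the plates

# The final charge depends only on C and V, never on plate resistance.
q = C * V
print(f"final charge: {q:.1e} C")  # 5.0e-06 C either way

# Plate conductivity only shows up in the dynamics: tau = R * C.
for R in (0.1, 100.0):  # a good conductor vs. a poor one, in ohms
    tau = R * C
    print(f"R = {R:>5} ohm -> tau = {tau:.1e} s (time to ~63% of final charge)")
```

Both runs end with the same 5.0e-06 coulombs on the plates; the poor conductor simply takes a thousand times longer to get there, which is exactly the "additional series resistance" the answer describes.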
stackexchange | llm_judgeable_groundtruth_similarity | 10789042 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to figure out multi-threading programming in python. Here's the simple task with which I want to compare serial and parallel speeds.

import threading
import Queue
import time
import math

def sinFunc(offset, n):
    result = []
    for i in range(n):
        result.append(math.sin(offset + i * i))
    return result

def timeSerial(k, n):
    t1 = time.time()
    answers = []
    for i in range(k):
        answers.append(sinFunc(i, n))
    t2 = time.time()
    print "Serial time elapsed: %f" % (t2-t1)

class Worker(threading.Thread):
    def __init__(self, queue, name):
        self.__queue = queue
        threading.Thread.__init__(self)
        self.name = name

    def process(self, item):
        offset, n = item
        self.__queue.put(sinFunc(offset, n))
        self.__queue.task_done()
        self.__queue.task_done()

    def run(self):
        while 1:
            item = self.__queue.get()
            if item is None:
                self.__queue.task_done()
                break
            self.process(item)

def timeParallel(k, n, numThreads):
    t1 = time.time()
    queue = Queue.Queue(0)
    for i in range(k):
        queue.put((i, n))
    for i in range(numThreads):
        queue.put(None)
    for i in range(numThreads):
        Worker(queue, i).start()
    queue.join()
    t2 = time.time()
    print "Serial time elapsed: %f" % (t2-t1)

if __name__ == '__main__':
    n = 100000
    k = 100
    numThreads = 10
    timeSerial(k, n)
    timeParallel(k, n, numThreads)

#Serial time elapsed: 2.350883
#Serial time elapsed: 2.843030

Can someone explain to me what's going on? I'm used to C++, and a similar version of this using the module sees the speed-up we would expect.
Now provide the response and nothing else.
| Other answers have referred to the issue of the GIL being the problem in cpython. But I felt there was a bit of missing information. This will cause you performance issues in situations where the code you are running in threads is CPU bound. In your case here, yes, doing many calculations in threads is most likely going to result in dramatically degraded performance. But, if you were doing something that was more IO bound, such as reading from many sockets in a network application, or calling out to subprocess, you can get performance increases from threads. A simple example for your code above would be to add a stupidly simple call out to the shell:

import os

def sinFunc(offset, n):
    result = []
    for i in xrange(n):
        result.append(math.sin(offset + i * i))
    os.system("echo 'could be a database query' >> /dev/null; sleep .1")
    return result

That call might have been something real like waiting on the filesystem. But you can see that in this example, threading will start to prove beneficial, as the GIL can be released when the thread is waiting on IO and other threads will continue to process. Even so, there is still a sweet spot for when more threads start to become negated by the overhead of creating them and synchronizing them. For CPU-bound code, you would make use of multiprocessing. From this article: http://www.informit.com/articles/article.aspx?p=1850445&seqNum=9

...threading is more appropriate for I/O-bound applications (I/O releases the GIL, allowing for more concurrency)...

Similar question references about threads vs processes:

https://stackoverflow.com/a/1227204/496445
https://stackoverflow.com/a/990436/496445 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10789042', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/964046/']} | jdg_81406 |
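As a modern footnote, the same benchmark translates to Python 3's concurrent.futures, which also makes the eventual switch to processes a one-line change. A sketch (timings vary by machine, but on CPython the threaded CPU-bound run should not beat the serial one):

```python
import math
import time
from concurrent.futures import ThreadPoolExecutor

def sin_chunk(offset, n=10_000):
    return [math.sin(offset + i * i) for i in range(n)]

k = 20

t0 = time.perf_counter()
serial = [sin_chunk(i) for i in range(k)]
t_serial = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(sin_chunk, range(k)))
t_threaded = time.perf_counter() - t0

# Identical results either way; the GIL simply serializes the CPU-bound work,
# so the threaded run is usually no faster (often slower) on CPython.
print(f"serial: {t_serial:.3f}s  threaded: {t_threaded:.3f}s")
# Swapping ThreadPoolExecutor for ProcessPoolExecutor is the one-line change
# that actually buys a speed-up for this kind of workload.
```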
stackexchange | llm_judgeable_groundtruth_similarity | 140231 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Ok, so I am about to take the Accuplacer college-level math test for a college, but I do not understand this problem in the practice packet. Honestly I am blank on this problem and need step-by-step help on how to solve it. Please help me understand it. Thanks in advance.

If a ≠ b and 1/x + 1/a = 1/b, then x =

A. 1/b – 1/a
B. b – a
C. 1/ab
D. a – b/ab
E. ab/a – b
Now provide the response and nothing else.
| It is asking you to solve for $x$, given that $$\frac{1}{x} +\frac{1}{a}=\frac{1}{b}.$$ To solve for $x$, first isolate $x$ by itself on one side; for example, move that $\frac{1}{a}$ to the right. That will give you an equation of the form $$\frac{1}{x} = \text{stuff}.$$ Do the operation on the right, and then take reciprocals (or cross-multiply) to get an expression for $x$ in terms of $a$ and $b$. Then figure out which of the five options given is that expression for $x$. You can post the work as an edit to your question and we can tell you if you are doing it right or not; that will help you learn better than me doing it for you. (For extra points, figure out exactly on which step you need to assume $a\neq b$...) | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/140231', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/30511/']} | jdg_81407 |
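If, after working through the algebra, you want to verify a candidate without being handed the answer, exact rational arithmetic makes a neat checker. A sketch using the stdlib fractions module (it assumes the options carry the usual implied grouping, e.g. reading option E as ab/(a − b)):

```python
from fractions import Fraction

def satisfies(x_of_ab):
    """True if x = x_of_ab(a, b) makes 1/x + 1/a == 1/b for several a != b."""
    for a, b in [(3, 2), (7, 5), (10, 1)]:
        a, b = Fraction(a), Fraction(b)
        x = x_of_ab(a, b)
        if x == 0 or 1 / x + 1 / a != 1 / b:
            return False
    return True

# Plug in your own candidate; e.g. option B, x = b - a, fails:
print(satisfies(lambda a, b: b - a))  # False
```

Because Fraction keeps everything exact, a candidate that passes all three (a, b) pairs is the one the algebra should have produced.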
stackexchange | llm_judgeable_groundtruth_similarity | 48026 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was reading through this question on time and big bang, and @John Rennie's answer surprised me. In the immediate environment of a black hole, where does time stop ticking if one were to follow a 'watch' falling into a black hole? At the event horizon? In the central singularity? If time stops at the event horizon, does the watch get stuck there, or does it keep falling in all the way to the singularity. Guess I know less than I thought.
Now provide the response and nothing else.
| If you're sitting outside the event horizon watching a clock fall in, you will never see the clock reach the event horizon. You will see the clock slow as it approaches the horizon and you'll see it running slower and slower. However there is no sense in which time stops at the event horizon. You can wait as long as you want, and you'll see the clock creep closer and closer, but time will continue for both you and the clock. Now suppose you're holding the clock. Assuming you can survive the tidal forces you'll cross the point where the external observer thinks the event horizon is (you would see no horizon there) and you would hit the singularity in a finite time. The problem is that at the singularity the spacetime curvature becomes infinite and there is no way to calculate your path in spacetime past this point. This is known as geodesic incompleteness (annoyingly Wikipedia has no good article on this but Google "geodesic incompleteness" for lots of info on the subject). It's because there is no way to calculate your trajectory past the singularity that it is said (but not by me!) that spacetime stops there. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/48026', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/12584/']} | jdg_81408 |
stackexchange | llm_judgeable_groundtruth_similarity | 19710968 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm following a beginners tutorial to sqlite3. The first step is creating a new database. So I enter a name (movies.db). I'm expecting to get another sqlite> prompt on the next line, and continue with the tutorial, but instead I get a lame ...> after which I can type any gibberish I want. Clearly, this is not good. What my command prompt looks like:

SQLite version 3.8.1 2013-10-17 12:57:35
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> $ sqlite3 movies.db
   ...> gibberish
   ...> dsds
   ...> sdada
   ...> gfgys
   ...> a
   ...> Aaaaarrrgh!
   ...>

How do I get sqlite3 to work normally for me? Pardon my newbie-ness. I hope I've phrased this question in a way that might help other newbs too.
Now provide the response and nothing else.
| Sqlite is working normally. However, the sqlite3 movies.db command should be issued from your system command line – not from the Sqlite interactive shell. Start by exiting the Sqlite interactive shell (.exit), then issuing the database creation command. According to the quickstart documentation:

At a shell or DOS prompt, enter: "sqlite3 test.db". This will create a new database named "test.db". (You can use a different name if you like.) Enter SQL commands at the prompt to create and populate the new database.

Once the sqlite3 movies.db command is properly executed from your system command line, you'll automatically be placed in the Sqlite interactive shell, which will be awaiting commands.

sqlite> create table tbl1(one varchar(10), two smallint);

The ...> shell prompt indicates a continuance from the preceding line. As indicated in the message, you'll need to terminate each database command with a ; semicolon.

sqlite> CREATE TABLE tbl2 (
   ...>   f1 varchar(30) primary key,
   ...>   f2 text,
   ...>   f3 real
   ...> );
sqlite>

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19710968', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1675976/']} | jdg_81409 |
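Everything the interactive shell does can also be scripted, which is a handy way to sanity-check what the tutorial expects. A sketch using Python's stdlib sqlite3 module (the table and column names here are made up for illustration):

```python
import sqlite3

# Connecting creates the file if it doesn't exist, just like running
# `sqlite3 movies.db` at the OS prompt. ":memory:" keeps this demo throwaway.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# Statements are terminated for you; no dangling ...> continuation prompt.
cur.execute("CREATE TABLE movies (title TEXT, year INTEGER)")
cur.execute("INSERT INTO movies VALUES (?, ?)", ("Alien", 1979))
con.commit()

rows = cur.execute("SELECT title, year FROM movies").fetchall()
print(rows)  # [('Alien', 1979)]
con.close()
```

Swapping ":memory:" for "movies.db" produces a real file on disk that the command-line shell can then open.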
stackexchange | llm_judgeable_groundtruth_similarity | 33027 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
On my Mac OS 10.5 machine, I would like to set up a subfolder of ~/Documents like ~/Documents/foo/html to be http://localhost/foo. The first thing I thought of doing is using Alias as follows:

Alias /foo /Users/someone/Documents/foo/html
<Directory "/Users/someone/Documents/foo/html">
    Options Indexes FollowSymLinks MultiViews
    Order allow,deny
    Allow from all
</Directory>

This got me 403 Forbidden. In the error_log I got:

[error] [client ::1] (13)Permission denied: access to /foo denied

The subfolder in question has chmod 755 access. I've tried specifying links like http://localhost/foo/test.php, but that didn't work either. Next, I tried the symlink route. Went into /Library/WebServer/Documents and made a symlink to ~/Documents/foo/html. The document root has

Options Indexes FollowSymLinks MultiViews

This still got me 403 Forbidden:

Symbolic link not allowed or link target not accessible: /Library/WebServer/Documents/foo

What else do I need to set this up?

Solution: $ chmod 755 ~/Documents

In general, the folder to be shared and all of its ancestor folders need to be viewable by the www service user.
Now provide the response and nothing else.
| I'll bet that some directory above the one you want to access doesn't have permissions to allow Apache to traverse it. Become the user that Apache is running as (sudo -i -u apache or whatever), then try to change into the directory of interest and ls it. If you can't (as expected), then try getting into the directories above it, one by one, until one lets you in. The subdirectory of that is the one that needs to have o+x set. Lather, rinse, repeat as required. | {} | {'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/33027', 'https://serverfault.com', 'https://serverfault.com/users/1154/']} | jdg_81410 |
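The manual walk-up can be automated by checking the other-execute (traverse) bit on every ancestor of the directory you are trying to serve. A small stdlib-only sketch (the function name is mine; point it at whatever path your setup uses):

```python
import os
import stat

def first_blocking_ancestor(path):
    """Return the first directory from / down to `path` that others
    cannot traverse (missing the o+x bit), or None if the chain is fine."""
    path = os.path.abspath(path)
    current = os.sep
    for part in path.split(os.sep)[1:]:
        current = os.path.join(current, part)
        if not os.stat(current).st_mode & stat.S_IXOTH:
            return current
    return None

# e.g. first_blocking_ancestor("/Users/someone/Documents/foo/html") would
# have pointed straight at ~/Documents in the question's case.
```

This only inspects mode bits, so it works even when run as root; note it ignores ACLs and group membership, which can also grant the web server access.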
stackexchange | llm_judgeable_groundtruth_similarity | 5114102 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
typedef solution_type (*algorithm_ptr_type) (
    problem_type problem,
    void (*post_evaluation_callback)(void *move, int score)/* = NULL*/
);

Please help me! Thank you
Now provide the response and nothing else.
| This means algorithm_ptr_type is a pointer to a function returning solution_type, whose parameters are:

- problem of type problem_type
- post_evaluation_callback, which is again a function pointer taking two parameters (void* and int) and returning void.

The same can be written as (easier and more readable syntax):

typedef void (*callback_type)(void *move, int score);
typedef solution_type (*algorithm_type)(problem_type, callback_type);

Note: the names of the parameters are optional, so I removed them to make the typedefs short and cute! In C++11, this can be simplified further as follows:

using algorithm_ptr_type = solution_type (*)(problem_type, void (*)(void *, int));

That is much better, as now it's clear what is being defined and in terms of what. In C++11, you can even define a utility to create function-pointer types:

// first define a utility to make a function pointer type
template<typename Return, typename... Parameters>
using make_fn = Return (*)(Parameters...);

then use it as:

using callback_type = make_fn<void, void*, int>;
using algorithm_type = make_fn<solution_type, problem_type, callback_type>;

Here the first argument to make_fn is the return type, and the rest are the parameters — easy to decipher each one! Usage:

solution_type SomeFunction(problem_type problem, callback_type post_evaluation)
{
    //implementation

    //call the callback function
    post_evaluation(arg1, arg2);
    //..
}

algorithm_type function = SomeFunction;

//call the function
function(arg, someOtherFunction);

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5114102', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/435645/']} | jdg_81411 |
stackexchange | llm_judgeable_groundtruth_similarity | 15600906 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been reading some info about Web Services for Android using SOAP or REST but I've never created a web service so I have many questions about this. I'm developing an Android App where the users answer some questions and for each user an XML file is generated and saved in the device. Here are my queries: What I would like to do is to send these XML files from the devices to the Web Service and also sometimes to retrieve all the files to any device. Do I need to convert these files into Byte or just send the XML? Are these actions possible with the Web Service or will you use Java Sockets? In case you prefer the Web Service, what would you use: REST, SOAP. I would appreciate some links to tutorials and piece of advise.
Now provide the response and nothing else.
| Use SQL Server to manage the data on your desktop and create a web-service in .NET on Visual Studio. Then connect to the web-service in your application and set/get data from the DB, using web-services. You can use either XML or JSON to transfer your data between the phone and the server. There is no need for the use of Java Socket API for this. Links which might be useful : How to make a web-service in .NET (does not include the implementation in Android) : http://srikanthtechnologies.com/blog/dotnet/wsdaljava.aspx How to connect your service with Android : http://seesharpgears.blogspot.in/2010/11/basic-ksoap-android-tutorial.html http://www.codeproject.com/Articles/304302/Calling-Asp-Net-Webservice-ASMX-From-an-Android-Ap http://adrianandroid.blogspot.in/2012/05/access-c-net-web-service-in.html Note: I have never worked on RESTful services. My work has always been on SOAP and hence it remains my preference. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15600906', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2008828/']} | jdg_81412 |
stackexchange | llm_judgeable_groundtruth_similarity | 28626575 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I cancel my async method with the following content by calling the Cancel() method of my CancellationTokenSource, it will stop eventually. However, since the line Console.WriteLine(await reader.ReadLineAsync()); takes quite a bit to complete, I tried to pass my CancellationToken to ReadLineAsync() as well (expecting it to return an empty string) in order to make the method more responsive to my Cancel() call. However, I could not pass a CancellationToken to ReadLineAsync(). Can I cancel a call to Console.WriteLine() or StreamReader.ReadLineAsync(), and if so, how do I do it? Why is ReadLineAsync() not accepting a CancellationToken? I thought it was good practice to give async methods an optional CancellationToken parameter even if the method still completes after being canceled.

StreamReader reader = new StreamReader(dataStream);
while (!reader.EndOfStream)
{
    if (ct.IsCancellationRequested)
    {
        ct.ThrowIfCancellationRequested();
        break;
    }
    else
    {
        Console.WriteLine(await reader.ReadLineAsync());
    }
}

Update: Like stated in the comments below, the Console.WriteLine() call alone was already taking up several seconds due to a poorly formatted input string of 40,000 characters per line. Breaking this down solves my response-time issues, but I am still interested in any suggestions or workarounds on how to cancel this long-running statement if for some reason writing 40,000 characters into one line was intended (for example when dumping the whole string into a file).
Now provide the response and nothing else.
| You can't cancel the operation unless it's cancellable. You can use the WithCancellation extension method to have your code flow behave as if it was cancelled, but the underlying operation would still run:

public static Task<T> WithCancellation<T>(this Task<T> task, CancellationToken cancellationToken)
{
    return task.IsCompleted // fast-path optimization
        ? task
        : task.ContinueWith(
            completedTask => completedTask.GetAwaiter().GetResult(),
            cancellationToken,
            TaskContinuationOptions.ExecuteSynchronously,
            TaskScheduler.Default);
}

Usage:

await task.WithCancellation(cancellationToken);

You can't cancel Console.WriteLine and you don't need to. It's instantaneous if you have a reasonably sized string. About the guideline: if your implementation doesn't actually support cancellation you shouldn't be accepting a token, since it sends a mixed message. If you do have a huge string to write to the console you shouldn't use Console.WriteLine. You can write the string one character at a time and have that method be cancellable:

public void DumpHugeString(string line, CancellationToken token)
{
    foreach (var character in line)
    {
        token.ThrowIfCancellationRequested();
        Console.Write(character);
    }
    Console.WriteLine();
}

An even better solution would be to write in batches instead of single characters. Here's an implementation using MoreLinq's Batch:

public void DumpHugeString(string line, CancellationToken token)
{
    foreach (var characterBatch in line.Batch(100))
    {
        token.ThrowIfCancellationRequested();
        Console.Write(characterBatch.ToArray());
    }
    Console.WriteLine();
}

So, in conclusion:

var reader = new StreamReader(dataStream);
while (!reader.EndOfStream)
{
    DumpHugeString(await reader.ReadLineAsync().WithCancellation(token), token);
}

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28626575', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3967289/']} | jdg_81413 |
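The batching idea is language-agnostic: turn the long write into a loop over chunks and poll a cancellation flag between chunks. A Python sketch of the same DumpHugeString shape, using a threading.Event as the token (names are illustrative, not from the answer):

```python
import io
import threading

class OperationCancelled(RuntimeError):
    pass

def dump_huge_string(line, out, cancel, chunk=100):
    """Write `line` to `out` in chunks, stopping promptly if `cancel` is set."""
    for start in range(0, len(line), chunk):
        if cancel.is_set():
            raise OperationCancelled()
        out.write(line[start:start + chunk])
    out.write("\n")

# Normal run: the whole 40,000-character line goes out.
cancel = threading.Event()
buf = io.StringIO()
dump_huge_string("x" * 40_000, buf, cancel)
print(len(buf.getvalue()))  # 40001 (line plus newline)

# Cancelled run: the writer gives up at the next chunk boundary.
cancel.set()
try:
    dump_huge_string("x" * 40_000, io.StringIO(), cancel)
except OperationCancelled:
    print("cancelled")
```

As in the C# version, responsiveness is bounded by the chunk size: a smaller chunk reacts faster to cancellation but pays more per-write overhead.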
stackexchange | llm_judgeable_groundtruth_similarity | 244314 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to swap parts of the list to make a new list as follows. The code works, but I'm wondering if there is neater code, or a more elegant way to do this.

list = {1, -1, -1, 1, 0, 0, d, -1, 1};
newlist = {Sequence @@ list[[4 ;; 6]], Sequence @@ list[[1 ;; 3]], Sequence @@ list[[7 ;; 9]]}
Now provide the response and nothing else.
| Permute exists for reordering lists.

Permute[list, Cycles[{{1, 4}, {2, 5}, {3, 6}}]]

This swaps entries $1 \leftrightarrow 4$, $2 \leftrightarrow 5$ and $3 \leftrightarrow 6$. The necessary permutation can be found using FindPermutation:

FindPermutation[Range@9, {4, 5, 6, 1, 2, 3, 7, 8, 9}]

Cycles[{{1, 4}, {2, 5}, {3, 6}}] | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/244314', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/18805/']} | jdg_81414 |
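For comparison outside Mathematica, the same block swap in zero-based indexing is just indexing with a permutation list; a sketch in Python (the permutation list is the Cycles spec above rewritten zero-based, with the symbol d stood in by a string):

```python
lst = [1, -1, -1, 1, 0, 0, "d", -1, 1]

# Zero-based analogue of Cycles[{{1,4},{2,5},{3,6}}]:
# positions 0-2 and 3-5 swap, the tail stays put.
perm = [3, 4, 5, 0, 1, 2, 6, 7, 8]
newlist = [lst[i] for i in perm]

print(newlist)  # [1, 0, 0, 1, -1, -1, 'd', -1, 1]
```

Reading it as "newlist[k] = lst[perm[k]]" matches how Permute applies a permutation to positions rather than to values.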
stackexchange | llm_judgeable_groundtruth_similarity | 14138751 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am referring to this source code. The code snippets provided here are from lines (114-138) in the code. This is using the ffmpeg library. Can anyone explain why the following code is required in the program?

// Determine required buffer size and allocate buffer
numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

In a sense I understand that the following function is associating the destination frame to the buffer. But what is the necessity?

avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);

PS: I tried removing the buffer and compiling the program. It got compiled. But it is showing the following run time error.

[swscaler @ 0xa06d0a0] bad dst image pointers
Segmentation fault (core dumped)
Now provide the response and nothing else.
| I think that what puzzles you is that there seem to be two allocations for AVFrame. The first, done with avcodec_alloc_frame(), allocates the space for a generic frame and its metadata. At this point the memory required to hold the frame proper is still unknown. You then populate that frame from another source, and it is then that you specify how much memory you need by passing width, height and color depth:

numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);

At this point the frame and its content are two separate objects (an AVFrame and its buffer). You put them together with this code, which is not actually a conversion at all:

avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);

What the code above does is to "tell" pFrameRGB: "you are a RGB-24 frame, this wide, this tall, and the memory you need is in 'buffer'". Then and only then you can do whatever you want with pFrameRGB. Otherwise, you try to paint on a frame without the canvas, and the paint splashes down -- you get a core dump. Once you have the frame (AVFrame) and the canvas (the buffer), you can use it:

// Read frames and save first five frames to disk
i=0;
while(av_read_frame(pFormatCtx, &packet)>=0) {
    // Is this a packet from the video stream?
    if(packet.stream_index==videoStream) {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

The above code extracts a video frame and decodes it into pFrame (which is native format). We could save pFrame to disk at this stage. We would not need buffer, and we could then not use pFrameRGB. Instead we convert the frame to RGB-24 using sws_scale(). To convert a frame into another format, we copy the source to a different destination.
This is both because the destination frame could be bigger than what can be accommodated by the source frame, and because some conversion algorithms need to operate on larger areas of the untransformed source, so it would be awkward to transmogrify the source in-place. Also, the source frame is handled by the library and might conceivably not be safe to write to.

Update (comments)

What does the data[] of pFrame/pFrameRGB point to: initially, nothing. They are NULL, and that is why using an uninitialized AVFrame results in a core dump. You initialize them (and linesize[] etc.) using avpicture_fill (that fits in an empty buffer, plus image format and size information) or one of the decode functions (which do the same).

Why does pFrame not require memory allocation: good question. The answer is in the used function's prototype and layout, where the picture parameter is described thus:

The AVFrame in which the decoded video frame will be stored. Use avcodec_alloc_frame to get an AVFrame, the codec will allocate memory for the actual bitmap.

- with default get/release_buffer(), the decoder frees/reuses the bitmap as it sees fit.
- with overridden get/release_buffer() (needs CODEC_CAP_DR1) the user decides into what buffer the decoder decodes, and the decoder tells the user once it does not need the data anymore; the user app can at this point free/reuse/keep the memory as it sees fit.

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14138751', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2666601/']} | jdg_81415 |
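For intuition about the numBytes value: for a packed format like PIX_FMT_RGB24 the required buffer is essentially pixel-count arithmetic at 3 bytes per pixel. A sketch of that math (it assumes no per-row padding; the real avpicture_get_size may add alignment, so treat this as a lower bound):

```python
def rgb24_buffer_size(width, height, bytes_per_pixel=3):
    # PIX_FMT_RGB24: one byte each for R, G, B; rows assumed unpadded.
    return width * height * bytes_per_pixel

print(rgb24_buffer_size(640, 480))  # 921600 bytes for a VGA frame
```

Seeing the size grow as width times height makes it clear why the buffer must be sized from the codec context's dimensions before avpicture_fill can hand it to the frame.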
stackexchange | llm_judgeable_groundtruth_similarity | 25719620 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Although I have a very small image in the drawable folder, I am getting this error from users. And I am not using any bitmap function in code. At least intentionally :) java.lang.OutOfMemoryError at android.graphics.BitmapFactory.nativeDecodeAsset(Native Method) at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:683) at android.graphics.BitmapFactory.decodeResourceStream(BitmapFactory.java:513) at android.graphics.drawable.Drawable.createFromResourceStream(Drawable.java:889) at android.content.res.Resources.loadDrawable(Resources.java:3436) at android.content.res.Resources.getDrawable(Resources.java:1909) at android.view.View.setBackgroundResource(View.java:16251) at com.autkusoytas.bilbakalim.SoruEkrani.cevapSecimi(SoruEkrani.java:666) at com.autkusoytas.bilbakalim.SoruEkrani$9$1.run(SoruEkrani.java:862) at android.os.Handler.handleCallback(Handler.java:733) at android.os.Handler.dispatchMessage(Handler.java:95) at android.os.Looper.loop(Looper.java:146) at android.app.ActivityThread.main(ActivityThread.java:5602) at java.lang.reflect.Method.invokeNative(Native Method) at java.lang.reflect.Method.invoke(Method.java:515) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1283) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1099) at dalvik.system.NativeStart.main(Native Method) According to this stack trace I'm getting this error at this line ('tv' is a TextView): tv.setBackgroundResource(R.drawable.yanlis); What is the problem? If you need some other information about the code, I can add it. Thanks!
Now provide the response and nothing else.
| You can't increase the heap size dynamically, but you can request more by using android:largeHeap="true" in the manifest. You can add these lines to your manifest; it works in some situations: <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:largeHeap="true" android:supportsRtl="true" android:theme="@style/AppTheme"> Whether your application's processes should be created with a large Dalvik heap. This applies to all processes created for the application. It only applies to the first application loaded into a process; if you're using a shared user ID to allow multiple applications to use a process, they all must use this option consistently or they will have unpredictable results. Most apps should not need this and should instead focus on reducing their overall memory usage for improved performance. Enabling this also does not guarantee a fixed increase in available memory, because some devices are constrained by their total available memory. To query the available memory size at runtime, use the methods getMemoryClass() or getLargeMemoryClass() . If you are still facing the problem, then this should also work: BitmapFactory.Options options = new BitmapFactory.Options(); options.inSampleSize = 8; mBitmapInsurance = BitmapFactory.decodeFile(mCurrentPhotoPath,options); If set to a value > 1, requests the decoder to subsample the original image, returning a smaller image to save memory. This is the optimal use of BitmapFactory.Options.inSampleSize with regard to the speed of displaying the image. The documentation mentions using values that are a power of 2, so I am working with 2, 4, 8, 16, etc. Let's dig deeper into image sampling: For example, it's not worth loading a 1024x768 pixel image into memory if it will eventually be displayed in a 128x128 pixel thumbnail in an ImageView .
To tell the decoder to subsample the image, loading a smaller version into memory, set inSampleSize in your BitmapFactory.Options object. For example, an image with resolution 2100 x 1500 pixels that is decoded with an inSampleSize of 4 produces a bitmap of approximately 512x384. Loading this into memory uses 0.75MB rather than 12MB for the full image (assuming a bitmap configuration of ARGB_8888 ). Here's a method to calculate a sample size value that is a power of two based on a target width and height: public static int calculateInSampleSize( BitmapFactory.Options options, int reqWidth, int reqHeight) { // Raw height and width of image final int height = options.outHeight; final int width = options.outWidth; int inSampleSize = 1; if (height > reqHeight || width > reqWidth) { final int halfHeight = height / 2; final int halfWidth = width / 2; // Calculate the largest inSampleSize value that is a power of 2 and keeps both // height and width larger than the requested height and width. while ((halfHeight / inSampleSize) > reqHeight && (halfWidth / inSampleSize) > reqWidth) { inSampleSize *= 2; } } return inSampleSize;} Note : A power-of-two value is calculated because the decoder uses a final value by rounding down to the nearest power of two, as per the inSampleSize documentation.
To use this method, first decode with inJustDecodeBounds set to true , pass the options through and then decode again using the new inSampleSize value and inJustDecodeBounds set to false : public static Bitmap decodeSampledBitmapFromResource(Resources res, int resId, int reqWidth, int reqHeight) { // First decode with inJustDecodeBounds=true to check dimensions final BitmapFactory.Options options = new BitmapFactory.Options(); options.inJustDecodeBounds = true; BitmapFactory.decodeResource(res, resId, options); // Calculate inSampleSize options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight); // Decode bitmap with inSampleSize set options.inJustDecodeBounds = false; return BitmapFactory.decodeResource(res, resId, options);} This method makes it easy to load a bitmap of arbitrarily large size into an ImageView that displays a 100x100 pixel thumbnail, as shown in the following example code: mImageView.setImageBitmap(decodeSampledBitmapFromResource(getResources(), R.id.myimage, 100, 100)); You can follow a similar process to decode bitmaps from other sources, by substituting the appropriate BitmapFactory.decode* method as needed. 
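The sampling arithmetic above is easy to experiment with outside Android. Here is a direct Python port of the Java calculateInSampleSize shown above (Java's integer division / becomes // in Python); the function name and the sample dimensions are just for illustration:

```python
def calculate_in_sample_size(width, height, req_width, req_height):
    """Python port of the Java calculateInSampleSize above (power-of-two subsampling)."""
    in_sample_size = 1
    if height > req_height or width > req_width:
        half_height = height // 2  # Java's int division maps to //
        half_width = width // 2
        # Largest power of 2 that keeps both dimensions larger than the request.
        while (half_height // in_sample_size) > req_height and \
              (half_width // in_sample_size) > req_width:
            in_sample_size *= 2
    return in_sample_size

# A 2100x1500 source shown as a 100x100 thumbnail can be subsampled 8x.
assert calculate_in_sample_size(2100, 1500, 100, 100) == 8
# No subsampling when the source is already smaller than the request.
assert calculate_in_sample_size(100, 100, 200, 200) == 1
```

Note that starting the loop from the half dimensions is what guarantees the decoded image never ends up smaller than the requested size.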
I found this code also interesting: private Bitmap getBitmap(String path) {Uri uri = getImageUri(path);InputStream in = null;try { final int IMAGE_MAX_SIZE = 1200000; // 1.2MP in = mContentResolver.openInputStream(uri); // Decode image size BitmapFactory.Options o = new BitmapFactory.Options(); o.inJustDecodeBounds = true; BitmapFactory.decodeStream(in, null, o); in.close(); int scale = 1; while ((o.outWidth * o.outHeight) * (1 / Math.pow(scale, 2)) > IMAGE_MAX_SIZE) { scale++; } Log.d(TAG, "scale = " + scale + ", orig-width: " + o.outWidth + ", orig-height: " + o.outHeight); Bitmap bitmap = null; in = mContentResolver.openInputStream(uri); if (scale > 1) { scale--; // scale to max possible inSampleSize that still yields an image // larger than target o = new BitmapFactory.Options(); o.inSampleSize = scale; bitmap = BitmapFactory.decodeStream(in, null, o); // resize to desired dimensions int height = bitmap.getHeight(); int width = bitmap.getWidth(); Log.d(TAG, "1st scale operation dimensions - width: " + width + ", height: " + height); double y = Math.sqrt(IMAGE_MAX_SIZE / (((double) width) / height)); double x = (y / height) * width; Bitmap scaledBitmap = Bitmap.createScaledBitmap(bitmap, (int) x, (int) y, true); bitmap.recycle(); bitmap = scaledBitmap; System.gc(); } else { bitmap = BitmapFactory.decodeStream(in); } in.close(); Log.d(TAG, "bitmap size - width: " +bitmap.getWidth() + ", height: " + bitmap.getHeight()); return bitmap;} catch (IOException e) { Log.e(TAG, e.getMessage(),e); return null;}} How to Manage Your App's Memory: link It's not a good idea to use android:largeHeap="true" ; here's the extract from Google that explains it: However, the ability to request a large heap is intended only for a small set of apps that can justify the need to consume more RAM (such as a large photo editing app).
Never request a large heap simply because you've run out of memory and you need a quick fix—you should use it only when you know exactly where all your memory is being allocated and why it must be retained. Yet, even when you're confident your app can justify the large heap, you should avoid requesting it to whatever extent possible. Using the extra memory will increasingly be to the detriment of the overall user experience because garbage collection will take longer and system performance may be slower when task switching or performing other common operations. After working excruciatingly with out-of-memory errors, I would say adding this to the manifest to avoid the OOM issue is not a sin. Verifying App Behavior on the Android Runtime (ART): The Android runtime (ART) is the default runtime for devices running Android 5.0 (API level 21) and higher. This runtime offers a number of features that improve performance and smoothness of the Android platform and apps. You can find more information about ART's new features in Introducing ART . However, some techniques that work on Dalvik do not work on ART. This document lets you know about things to watch for when migrating an existing app to be compatible with ART. Most apps should just work when running with ART. Addressing Garbage Collection (GC) Issues Under Dalvik, apps frequently find it useful to explicitly call System.gc() to prompt garbage collection (GC). This should be far less necessary with ART, particularly if you're invoking garbage collection to prevent GC_FOR_ALLOC-type occurrences or to reduce fragmentation. You can verify which runtime is in use by calling System.getProperty("java.vm.version"). If ART is in use, the property's value is "2.0.0" or higher. Furthermore, a compacting garbage collector is under development in the Android Open-Source Project (AOSP) to improve memory management.
Because of this, you should avoid using techniques that are incompatible with compacting GC (such as saving pointers to object instance data). This is particularly important for apps that make use of the Java Native Interface (JNI). For more information, see Preventing JNI Issues. Preventing JNI Issues ART's JNI is somewhat stricter than Dalvik's. It is an especially good idea to use CheckJNI mode to catch common problems. If your app makes use of C/C++ code, you should review the following article: Also, you can use native memory ( NDK & JNI ), so you actually bypass the heap size limitation. Here are some posts made about it: How to cache bitmaps into native memory https://stackoverflow.com/a/9428660/1761003 JNI bitmap operations , for helping to avoid OOM when using large images and here's a library made for it: https://github.com/AndroidDeveloperLB/AndroidJniBitmapOperations | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/25719620', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1806914/']} | jdg_81416 |
stackexchange | llm_judgeable_groundtruth_similarity | 1391131 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Cartesian coordinates, the Laplacian is $$\nabla^2 = \frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}\qquad(1)$$ If it's converted to spherical coordinates, we get $$\nabla^2=\frac{1}{r^2}\frac{\partial}{\partial r}\left( r^2 \frac{\partial}{\partial r}\right)+\frac{1}{r^2 \sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta \frac{\partial}{\partial \theta}\right)+\frac{1}{r^2 \sin^2\theta}\frac{\partial^2}{\partial \phi^2}\qquad(2)$$ I am following the derivation (i.e. the method of conversion from Cartesian to spherical) in "Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles" by Eisberg and Resnick (it's in Appendix M). Their method is to first consider a function of only $r$, $\psi(r)$, then calculate $\frac{\partial^2 \psi}{\partial x^2}$, $\frac{\partial^2 \psi}{\partial y^2}$, $\frac{\partial^2 \psi}{\partial z^2}$. After plugging into (1) and simplifying we end up with $\nabla^2=\frac{1}{r^2}\frac{\partial}{\partial r}\left( r^2 \frac{\partial}{\partial r}\right)$, which is, indeed, the first term in (2). For the second (and third) term they don't show the derivation; they just say "the second and third terms can be obtained by taking $\psi = \psi(\theta)$ and $\psi = \psi(\phi)$". I am trying to complete the derivation for the other terms, but I don't know how to proceed.
Here is what I have so far: $$\frac{\partial \psi}{\partial x} = \frac{\partial \psi}{\partial \theta} \frac{\partial \theta}{\partial x}\qquad (3)$$ $$\frac{\partial\theta}{\partial x} = \frac{xz}{(x^2 + y^2 +z^2)^{3/2}\sqrt{1-\frac{z^2}{x^2+y^2+z^2}}}\qquad (4)$$ putting (4) into (3) and substituting for $r$: $$\frac{\partial\psi}{\partial x} =\frac{\partial \psi}{\partial\theta} \frac{xz}{r^{3}\sqrt{1-\frac{z^2}{r^2}}}$$ Now, making the conversions $x=r\sin\theta\cos\phi$ and $z=r\cos\theta$ and simplifying I get $$\frac{\partial\psi}{\partial x}= \frac{\partial\psi}{\partial\theta} \frac{\cos\theta \cos\phi}{r} $$ My question is: how do I proceed?
Now provide the response and nothing else.
| You're on the right track. Now do what you did for $\partial\psi/\partial y$ and $\partial\psi/\partial z$, then compute the second derivatives and add them up. Note that when computing the second derivatives, you will be able to reuse the results from computing the first derivatives. For example, $$\frac{\partial^2\psi}{\partial x^2} = \frac{\partial}{\partial x}\frac{\partial\psi}{\partial x} = \frac{\partial}{\partial x} \Big( \frac{\cos\theta\cos\phi}{r}\,\frac{\partial\psi}{\partial\theta} \Big)$$ To continue from the above, use the chain rule,$$\frac{\partial^2\psi}{\partial x^2} = \frac{\partial}{\partial x} (\cdot) = \frac{\partial r}{\partial x} \frac{\partial}{\partial r} (\cdot) +\frac{\partial \theta}{\partial x} \frac{\partial}{\partial \theta} (\cdot) +\frac{\partial \phi}{\partial x} \frac{\partial}{\partial \phi} (\cdot)$$ where the $\cdot$ is the term within the parentheses in the first equation above. Note that, in addition to the mixed-coordinate derivatives ($\partial r/\partial x$, etc), you'll need to compute the derivative of a product of functions. For example, $$\frac{\partial}{\partial r}\Big( \frac{\cos\theta\cos\phi}{r}\,\frac{\partial\psi}{\partial\theta} \Big) =-\frac{\cos\theta\cos\phi}{r^2}\,\frac{\partial\psi}{\partial\theta} +\frac{\cos\theta\cos\phi}{r}\,\frac{\partial^2\psi}{\partial r\,\partial\theta} $$ Of course, because (in this case) you chose $\psi$ to be a function of only $\theta$, the last term above is zero (since $\psi$ does not depend on $r$). This problem is not difficult per se but it requires a lot of derivative computations and good organisation. It's a great exercise to improve your computational and organisational skills but you'll learn in the future other methods to find the Laplacian in another coordinate system that are far more efficient and economical. 
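As a quick numerical sanity check on the chain-rule ingredients used here (the questioner's $\partial\theta/\partial x = \frac{\cos\theta\cos\phi}{r}$ and the handy $\partial r/\partial x = x/r$ identity), here is a small finite-difference comparison in Python; the test point is arbitrary, and $\theta$ is the polar angle measured from the z-axis:

```python
import math

def spherical(xv, yv, zv):
    """Cartesian -> (r, theta, phi) with theta the polar angle from the z-axis."""
    r = math.sqrt(xv * xv + yv * yv + zv * zv)
    theta = math.acos(zv / r)
    phi = math.atan2(yv, xv)
    return r, theta, phi

xv, yv, zv, h = 1.0, 2.0, 3.0, 1e-6   # arbitrary point away from the z-axis
r, theta, phi = spherical(xv, yv, zv)

# Central difference for d(theta)/dx vs. the claimed cos(theta)*cos(phi)/r
dtheta_dx = (spherical(xv + h, yv, zv)[1] - spherical(xv - h, yv, zv)[1]) / (2 * h)
assert abs(dtheta_dx - math.cos(theta) * math.cos(phi) / r) < 1e-8

# Central difference for dr/dx vs. dr/dx = x/r
dr_dx = (spherical(xv + h, yv, zv)[0] - spherical(xv - h, yv, zv)[0]) / (2 * h)
assert abs(dr_dx - xv / r) < 1e-8
```

The same finite-difference trick is a cheap way to check each intermediate derivative as you grind through the second-order terms.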
Oh, and here's a trick to avoid having to deal with that pesky square root in $r$ as a function of $x, y, z$: $$\frac{\partial r^2}{\partial x} = 2x$$ but also $$\frac{\partial r^2}{\partial x} = 2r\,\frac{\partial r}{\partial x}$$ Thus, combining the two, you get $$\frac{\partial r}{\partial x} = \frac{x}{r}$$ Finally, kudos for wanting to do the extra work on your own. Also, the book you mentioned is excellent. If you have your own copy, you might want to keep it. I still have mine (though not with me here), even after 30 years! | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1391131', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/177987/']} | jdg_81417 |
stackexchange | llm_judgeable_groundtruth_similarity | 4538322 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Use algebraic substitutions and reduce the sextic equation to the cubic equation, where $a$ is a real number: $$ax^6-x^5+x^4+x^3-2x^2+1=0$$ My attempts. First, I tried to use the rational root theorem: when $a$ is an integer, the candidates $x=\pm 1$ both force $a=0$, which is not true in general. Then I realized that $$x^4-2x^2+1=(x^2-1)^2$$ is a perfect square. So, dividing by $x^2$, I tried to write the original equation as $$ax^4-x^3+x+\bigg(x-\frac 1x\bigg)^2=0$$ $$x^2\bigg(ax^2-x+\frac 1x\bigg)+\bigg(x-\frac 1x\bigg)^2=0$$ But I failed again. I couldn't spot the palindromic property.
Now provide the response and nothing else.
| Define $t,u=\frac{1\pm\sqrt{1-4a}}{2a}$ as the roots of $ax^2-x+1$ . Then the sextic splits into two cubics over $\mathbb Q(t)$ : $$ax^6-x^5+x^4+x^3-2x^2+1=a(x^3-tx^2+t)(x^3-ux^2+u)$$ This can be verified by re-expanding. Here the substitution is $a=\frac{t-1}{t^2}$ and not in $x$ . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4538322', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1094359/']} | jdg_81418 |
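The re-expansion suggested in this answer can also be checked mechanically. A small Python spot check (the particular value $a=0.2$ is arbitrary; any real $a$ with $1-4a>0$ and $a\neq 0$ works the same way):

```python
import math

a = 0.2                               # arbitrary real a with 1 - 4a > 0, a != 0
disc = math.sqrt(1 - 4 * a)
t = (1 + disc) / (2 * a)              # the two roots of a*x**2 - x + 1
u = (1 - disc) / (2 * a)

def sextic(x):
    return a * x**6 - x**5 + x**4 + x**3 - 2 * x**2 + 1

def factored(x):
    return a * (x**3 - t * x**2 + t) * (x**3 - u * x**2 + u)

# The two sides agree at several sample points, so the polynomials match.
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(sextic(x) - factored(x)) < 1e-9

# The substitution mentioned at the end: a = (t - 1) / t**2, since t solves a*t**2 - t + 1 = 0.
assert abs(a - (t - 1) / t**2) < 1e-12
```

Agreement at more than six points is enough to conclude two degree-6 polynomials are identical.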
stackexchange | llm_judgeable_groundtruth_similarity | 22248043 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have my Java enum such as: FOO("foo") , BAR("bar") ...and I have a getValue() method that returns the values "foo" and "bar" of the enum, and this has to be in Java. On the other hand, I have to match this in Scala: result match { case "foo" => I am trying to do: result match { case Enum.FOO.getValue() => I get this error: method getValue is not a case class constructor, nor does it have an unapply/unapplySeq method I'm not quite sure what is happening here, since my getValue() method returns a String, so why can't I use it for pattern matching? Thanks
Now provide the response and nothing else.
| You can pattern match on Java enums, but you can't call methods in the destructuring part of the match. So this works: j match { case Jenum.FOO => "yay"; case _ => "boo" } if j is an instance of your Java enum (cleverly labeled Jenum ). You can however do something like this: "foo" match { case s if s == Jenum.FOO.getValue => "yay" case _ => "boo"} Or you can convert your string to the enum first: Jenum.values.find(_.getValue == "foo") match { case Some(Jenum.FOO) => "yay" case _ => "boo"} (you might also want to unwrap the option first to avoid repeating Some(...) so many times). For reference, this is the test enum I used (Jenum.java): public enum Jenum { FOO("foo"), BAR("bar"); private final String value; Jenum(String value) { this.value = value; } public String getValue() { return value; }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22248043', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1060224/']} | jdg_81419 |
stackexchange | llm_judgeable_groundtruth_similarity | 14339309 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hi, I am loading several html pages on an index page depending on what link was clicked. I have a problem with the links that have associated click event handlers. It seems that after jQuery loads the html file, the links in that html do not have their click event handlers bound to them. Here is my code: This is the html for one of the links that does not work: <a href="#" id="NavContactPage" data-sectionId="Skills">skills</a> This is the jQuery code associated with it: $("nav a , a.first_anchor , a#NavContactPage").click(function(){ var id = $(this).attr("data-sectionId"); var link = "load/" + id + ".html"; $("div#Content").load(link , function(){ $("section#" + id).fadeIn(); if(id === "Portfolio"){ $('div.slide').cycle('fade'); } if(id === "Home"){ $("#jcycleslider").cycle({ fx: mtransition, easing: easing, before: onBefore, after: onAfter, speed: mpace, timeout: mnext, sync: msync, randomizeEffects:0, pager:'#jslider_nav' }) $("#jslider_nav a").text(""); } }) return false;}) How can I solve this problem?
Now provide the response and nothing else.
| $("selector").click(...) only registers the callbacks for the click event on the elements that were visible to jQuery at the time you did the query. So for newly added elements that match this selector, the callback will not be applied. You either need to register the callbacks on the newly added elements again, or you need to use: $(document).on('click',"nav a , a.first_anchor , a#NavContactPage", ... ); Instead of using document as the root, it is recommended to use e.g. the element into which you are loading the content as the root, e.g. $("div#Content").on( .... ); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14339309', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/985482/']} | jdg_81420 |
stackexchange | llm_judgeable_groundtruth_similarity | 918341 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to test an Order entity method called AddItem and I'm trying to make sure that duplicate items cannot be added. Here is some example code: [Test]public void ItemCannotBeAddedTwiceToOrder(){ Order o = new Order(); Item i = new Item("Bike"); o.AddItem(i); o.AddItem(i); Assert.AreEqual(o.ItemCount, 1, "A duplicate item was added.");}public void AddItem(Item newItem){ if(!CheckForDuplicateItem(newItem)) _items.Add(newItem);}public bool CheckForDuplicateItem(Item newItem){ foreach(Item i in _items) { if(i.Id == newItem.Id) return true; } return false;} So here is my problem: how do I set the new Item's Id (which has a private setter) in the test method so the CheckForDuplicateItem method will work? I don't want to make that member public, for good coding practices, I guess. Am I just being stupid, and do I need to make the entity Item have a public Id setter? Or do I need to use reflection? Thanks. Note - I'm using NHibernate for persistence
Now provide the response and nothing else.
| I usually use reflection for this purpose. Something like this will work: typeof(Item).GetProperty(nameof(Item.Id)).SetValue(i, 1, null); where 1 is the id that you want to set for the newItem instance. In my experience, you'll rarely need to set the Id, so it's better just to leave the setter private. In the few cases that you do need to set the Id for testing purposes, simply use Reflection. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/918341', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/113342/']} | jdg_81421 |
stackexchange | llm_judgeable_groundtruth_similarity | 1609899 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for a Java equivalent to this PHP call: hash_hmac('sha1', "test", "secret") I tried this, using javax.crypto.Mac , but the two do not agree: String mykey = "secret";String test = "test";try { Mac mac = Mac.getInstance("HmacSHA1"); SecretKeySpec secret = new SecretKeySpec(mykey.getBytes(),"HmacSHA1"); mac.init(secret); byte[] digest = mac.doFinal(test.getBytes()); String enc = new String(digest); System.out.println(enc); } catch (Exception e) { System.out.println(e.getMessage());} The outputs with key = "secret" and test = "test" do not seem to match.
Now provide the response and nothing else.
| In fact they do agree. As Hans Doggen already noted, PHP outputs the message digest using hexadecimal notation unless you set the raw output parameter to true. If you want to use the same notation in Java, you can use something like for (byte b : digest) { System.out.format("%02x", b);}System.out.println(); to format the output accordingly. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1609899', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/58164/']} | jdg_81422 |
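The hex-vs-raw point generalizes across languages. As a third-party cross-check, Python's standard library also defaults to the 40-character lowercase-hex form that PHP's hash_hmac prints; this sketch asserts only the shape of the output rather than hard-coding a digest value:

```python
import hmac
import hashlib

def hmac_sha1_hex(key: bytes, msg: bytes) -> str:
    """Same operation as PHP's hash_hmac('sha1', msg, key) with default (hex) output."""
    return hmac.new(key, msg, hashlib.sha1).hexdigest()

d = hmac_sha1_hex(b"secret", b"test")
# SHA-1 is 160 bits -> 20 bytes -> 40 lowercase hex characters.
assert len(d) == 40
assert all(c in "0123456789abcdef" for c in d)
# The raw 20-byte digest is what the original Java code was printing directly.
assert len(hmac.new(b"secret", b"test", hashlib.sha1).digest()) == 20
```

Printing the raw 20 bytes as if they were text is exactly why the Java output looked like garbage next to PHP's hex string.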
stackexchange | llm_judgeable_groundtruth_similarity | 70219 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm building a model that predicts a rather rare event. The data set consists of about 240,000 examples, of which only 1410 are positive. One of the variables from the data set, let's call it $X_{1}$, is a binary variable with value 1 in only about 1% of examples. From knowledge about the event I know that it is impossible for the event to occur when $X_{1} = 1$. When I include $X_{1}$ in the logistic regression model, it has a highly negative coefficient, but it is also highly non-significant. The coefficient is -12 and the standard error is 122. Question : Is it valid to keep the variable $X_{1}$ despite the low significance?
Now provide the response and nothing else.
| In principle it is perfectly OK to include non-significant variables. You are interested in the effect of a variable, and the fact that that effect is not significant is valuable information. The logical way to convey that information is to have it be part of your model and show that it is not significant. However, in your case I would be worried about perfect separation . If that is the case, the non-significance is not really informative, and I would not include that variable or would use one of the alternatives proposed in the link. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/70219', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/1643/']} | jdg_81423 |
stackexchange | llm_judgeable_groundtruth_similarity | 60310642 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an ASP.NET Core 3.1 project. Typically, I register any dependency using the ConfigureServices() method in the Startup.cs class. But I find myself having to register lots of dependencies, and ConfigureServices() looks huge! I know I can probably create an extension method or a static method and call it from ConfigureServices(), but I'm wondering if there is a better way. Is there a way to register dependencies in the IoC container without having to define them one at a time like this? services.AddScoped<Interface, Class>();.... 200 lines later services.AddScoped<ISettings, Settings>()
Now provide the response and nothing else.
| Grouping related dependencies into custom extension methods is a very common way to do this. ASP.NET Core already does this for many of the internal services, and you can easily expand on top of that and set them up the way you need for your application. For example, to set up authentication and authorization: public IServiceCollection AddSecurity(this IServiceCollection services){ services.AddAuthentication() .AddCookie(); services.AddAuthorization(options => { options.DefaultPolicy = …; }); return services;} You can do the same for your application-specific services and group them logically in separate extension methods. If you have a lot of service registrations that are very similar, you can also employ a convention-based registration e.g. using Scrutor . For example, this registers all services within a certain namespace as transient for their respective interface: services.Scan(scan => scan .FromAssemblyOf<Startup>() .AddClasses(c => c.InNamespaces("MyApp.Services")) .AsImplementedInterfaces() .WithTransientLifetime()); Scrutor allows for very complex rules to scan for services, so if your services do follow some pattern, you will likely be able to come up with a rule for that. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60310642', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_81424 |
stackexchange | llm_judgeable_groundtruth_similarity | 52852278 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Have a look at this example TypeScript code: function printLabel(labelledObj: { label: string }) { console.log(labelledObj.label);}printLabel({ size: 10, label: 'hello' }); The above code fails to compile with the following error: 1.ts:6:14 - error TS2345: Argument of type '{ size: number; label: string; }' is not assignable to parameter of type '{ label: string; }'. Object literal may only specify known properties, and 'size' does not exist in type '{ label: string; }'. In short, size is an excess property not conforming to the type { label: string }, resulting in the compiler yelling. Let's alter the above code snippet a little: function printLabel(labelledObj: { label: string }) { console.log(labelledObj.label);}const obj = { size: 10, label: 'hello' }printLabel(obj); Now we extracted the object literal which was passed to printLabel in the earlier example into an intermediary reference named obj ; the weird part is that now it does not complain and works perfectly. Why does TypeScript behave so?
Now provide the response and nothing else.
| It's by design. In short, TypeScript's creators made it this way because they know JavaScript is a very dynamic language with many such use cases. You should read this carefully: https://www.typescriptlang.org/docs/handbook/interfaces.html#excess-property-checks (however, I bet the question arose from reading it). Object literals get special treatment. Their logic might be like this: if you have a variable, then it may come from some third party and there is not much you can do with it. On the other hand, if you pass an object literal, then you are responsible for its correct type. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52852278', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8515210/']} | jdg_81425 |
stackexchange | llm_judgeable_groundtruth_similarity | 222419 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there some sort of rule to determine how large a hash should be to ensure the security (meaning exactly one message maps to a given hash) of a message? Something that can be applied to any message, such as a 32-bit number or an 8-letter ASCII password.
Now provide the response and nothing else.
| "exactly one message maps to a given hash" This is not possible due to the pigeonhole principle. As long as the input message to the hash function can be larger than the hash itself, it is guaranteed that some messages collide with each other and map to the same hash. This is normal and is not a problem for the security of hashes by itself. You only need to ensure that the hash digest is so large that intentionally finding such collisions (a collision attack) is computationally infeasible. A hash digest of n bits has a collision resistance of n / 2 bits. To achieve 128-bit security against a collision attack, it's thus necessary to have a hash digest of 256 bits. This is, of course, assuming the hash is cryptographically secure ( like SHA-256 ) in order to avoid there being attacks that take a shortcut and can find collisions more easily than by brute force. | {} | {'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/222419', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/222980/']} | jdg_81426 |
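The pigeonhole argument above is easy to demonstrate with a deliberately tiny hash. Truncating SHA-256 to a single byte leaves only 256 possible digests, so 257 distinct messages must contain a collision; the n/2 birthday bound additionally says one typically appears after only about 2^(8/2) = 16 messages. A short Python illustration:

```python
import hashlib

def tiny_hash(msg: bytes) -> bytes:
    """SHA-256 truncated to 8 bits: only 256 possible digests."""
    return hashlib.sha256(msg).digest()[:1]

seen = {}
collision = None
for i in range(257):                  # 257 distinct messages, 256 possible digests
    m = str(i).encode()
    d = tiny_hash(m)
    if d in seen:
        collision = (seen[d], m)      # two different messages, same tiny digest
        break
    seen[d] = m

assert collision is not None          # guaranteed by the pigeonhole principle
assert collision[0] != collision[1]
assert tiny_hash(collision[0]) == tiny_hash(collision[1])
```

The same reasoning scales up: a full 256-bit digest has 2^256 holes, so collisions still exist, but finding one intentionally costs on the order of 2^128 work.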
stackexchange | llm_judgeable_groundtruth_similarity | 3995708 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
"Math lotto" is played as follows: a player marks six squares on a 6x6square. Then six "losing squares" are drawn. A player wins if none of the losing squaresare marked on his lottery ticket. 1)Prove that one can complete nine lottery tickets in such a way that at least one ofthem wins. 2)Prove that this is not possible with only eight tickets. My attempt is as follows; First I divided the square into 6 rectangles (figure 1). If one rectangle doesn't contain a cross then some ticket (ticket 1 to ticket 6) would win the game. Now we consider the case where each rectangle has one cross each. Now take the two rectangles on the top left of the square (figure 2). These have a total of two crosses. The first two columns together contains one cross and the third and fourth columns together contains one cross. There are four cases and we need at least four tickets (ticket 7 to ticket 10) to ensure win. I am only getting a minimum of ten tickets. How do I prove only nine tickets is required and for eight tickets it is not possible? Reference: Combinatorics by Stephan Wagner, Page 42, Problem 49. https://math.sun.ac.za/swagner/Combinatorics.pdf
Now provide the response and nothing else.
| Just to provide a concrete choice of 9 tickets that will have at least one winning ticket: Suppose to the contrary that all of these are losing tickets. The yellow ones forces 3 losing squares on the top half. Then 3 remaining losing squares will fill the bottom half. By pigeonhole, at least 2 of these losing squares will be in the bottom left 3x3 or the bottom right 3x3. Say there are at least two losing squares in the bottom left 3x3, which means at most 1 losing square is in the bottom right 3x3. In this case one of the red tickets will not have a losing square, as the losing square is in one of the 3 columns. And vice versa, if at least two losing squares are in the bottom right 3x3, then one of the purple tickets will not have a losing square. Edit. A reference for this problem can also be found here https://www.cut-the-knot.org/pigeonhole/MathLotto.shtml | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3995708', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/742113/']} | jdg_81427 |
stackexchange | llm_judgeable_groundtruth_similarity | 32322137 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to accomplish something simple using gulp. I want to write a generic method to move files to an output directory from a particular source directory. Pretend we have something like so:

var args = require('yargs');

function transform-move-jsx-Development() {
    gulp.src(config.sourceJSX)
        .pipe(react, browserify, etc....)
        .pipe(gulp.dest(config.output_development));
};

function transform-move-jsx-Production() {
    gulp.src(config.sourceJSX)
        .pipe(react, browserify, etc....)
        .pipe(gulp.dest(config.output_production));
};

gulp.task('transform-move-jsx-Development', transform-move-jsx-Development);
gulp.task('transform-move-jsx-Production', transform-move-jsx-Production);
gulp.task('prod', [transform-move-jsx-Production]);
gulp.task('dev', ['transform-move-jsx-Development']);

The two tasks transform-move-jsx-Production and transform-move-jsx-Development are identical except for the output directory. I want to make it more DRY (Don't Repeat Yourself). I should be able to make a single method that can use a yargs parameter or something, right? In this next example I pretend I can pass the path as an arg, so I try something like this using yargs:

var args = require('yargs');

function transform-move-jsx() {
    return gulp.src(config.sourceJSX)
        .pipe(gulp.dest(args.outputDirectory));
};

gulp.task('dev', ['transform-move-jsx']);

However this now requires me to add arguments to the gulp call at the command line: gulp dev --"path to output, etc." That is obviously less maintainable as we call more and more gulp tasks from inside of the dev gulp task. And it would be messy anyway, as we shouldn't need to know an implementation detail like the output directory structure when we run gulp dev. I could instead do something like this:

function transform-move-jsx(destination) {
    return gulp.src(config.sourceJSX)
        .pipe(gulp.dest(destination));
};

function transform-move-jsx-Development() {
    transform-move-jsx("./output/development/");
};

function transform-move-jsx-Production() {
    transform-move-jsx("./output/production/");
};

gulp.task('transform-move-jsx-Development', transform-move-jsx-Development);
gulp.task('transform-move-jsx-Production', transform-move-jsx-Production);
gulp.task('prod', transform-move-jsx-Production);
gulp.task('dev', transform-move-jsx-Development);

This seems better in that it is more flexible, however now my gulpfile is littered with several unnecessary functions. Is there a better way?
Now provide the response and nothing else.
| You were on the right track with your second try; you just needed to utilize a bit of DRY and closures:

function createTransformTaskClosure(destination) {
    return function () {
        return gulp.src(config.sourceJSX)
            .pipe(gulp.dest(destination));
    };
}

gulp.task('dev', createTransformTaskClosure(config.output_development));
gulp.task('prod', createTransformTaskClosure(config.output_production)); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32322137', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/613483/']} | jdg_81428 |
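The factory pattern in the answer is language-agnostic. A minimal Python sketch of the same idea (names are my own; no gulp involved): the factory binds the destination once and returns the task function.

```python
def make_move_task(destination):
    """Factory: bind the destination once, return the actual task function."""
    def task(files):
        # stand-in for gulp.src(...).pipe(gulp.dest(destination))
        return [destination + name for name in files]
    return task

# register two "tasks" that differ only in the bound destination
dev = make_move_task("./output/development/")
prod = make_move_task("./output/production/")
```

Each call to the factory produces an independent closure, so the two registered tasks share code but not state.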
stackexchange | llm_judgeable_groundtruth_similarity | 6257647 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
import hashlib

string1 = u'test'
hashstring = hashlib.md5()
hashstring.update(string1)
string2 = hashstring.digest()
unicode(string2)

UnicodeDecodeError: 'ascii' codec can't decode byte 0x8f in position 1: ordinal not in range(128)

The string HAS to be unicode for it to be any use to me; can this be done? Using Python 2.7, if that helps...
Now provide the response and nothing else.
| The result of .digest() is a bytestring¹, so converting it to Unicode is pointless. Use .hexdigest() if you want a readable representation. ¹ Some bytestrings can be converted to Unicode, but the bytestrings returned by .digest() do not contain textual data. They can contain any byte including the null byte: they're usually not printable without using escape sequences. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6257647', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/786467/']} | jdg_81429 |
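A Python 3 sketch of the distinction (in Python 3, .digest() returns bytes and .hexdigest() returns str; the 0x8f byte below is exactly the one the traceback in the question complains about):

```python
import hashlib

h = hashlib.md5(b'test')
raw = h.digest()       # 16 arbitrary bytes, not text
hexed = h.hexdigest()  # printable hexadecimal representation

# The second byte of md5('test') is 0x8f, which is not valid ASCII;
# that is why decoding the raw digest as text fails.
try:
    raw.decode('ascii')
    decoded_ok = True
except UnicodeDecodeError:
    decoded_ok = False
```

So the raw digest is only useful as opaque binary data, while the hex digest is safe to treat as text.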
stackexchange | llm_judgeable_groundtruth_similarity | 2203213 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What function can I put as FOO here to yield true at the end? I played with hash-set (only correct for the first 2 values), conj, and concat, but I know I'm not handling the single-element vs. set condition properly with any of those.

(defn mergeMatches [propertyMapList]
  "Take a list of maps and merge them, combining values into a set"
  (reduce #(merge-with FOO %1 %2) {} propertyMapList))

(def in (list {:a 1} {:a 2} {:a 3} {:b 4} {:b 5} {:b 6}))
(def out {:a #{1 2 3} :b #{4 5 6}})

; this should return true
(= (mergeMatches in) out)

What is the most idiomatic way to handle this?
Now provide the response and nothing else.
| This'll do:

(let [set #(if (set? %) % #{%})]
  #(clojure.set/union (set %) (set %2)))

Rewritten more directly for the example (Alex):

(defn to-set [s]
  (if (set? s) s #{s}))

(defn set-union [s1 s2]
  (clojure.set/union (to-set s1) (to-set s2)))

(defn mergeMatches [propertyMapList]
  (reduce #(merge-with set-union %1 %2) {} propertyMapList)) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2203213', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7671/']} | jdg_81430 |
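For readers outside Clojure, the same merge-with-union idea sketched in Python (helper names are my own). Note that, like Clojure's merge-with, the combining step only runs on key collisions, so a key seen exactly once keeps its bare value:

```python
def to_set(v):
    """Lift a bare value to a one-element set; leave sets alone."""
    return v if isinstance(v, set) else {v}

def merge_matches(maps):
    """Merge a list of dicts, combining values of repeated keys into a set."""
    out = {}
    for m in maps:
        for k, v in m.items():
            # union only on collision, mirroring merge-with semantics
            out[k] = to_set(out[k]) | to_set(v) if k in out else v
    return out
```

With the question's input this yields {'a': {1, 2, 3}, 'b': {4, 5, 6}}, matching the expected output.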
stackexchange | llm_judgeable_groundtruth_similarity | 15524475 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The standard STL vector container has a "reserve" function to reserve uninitialized memory that can be used later to prevent reallocations. Why doesn't the deque container have one?
Now provide the response and nothing else.
| Increasing the size of a std::vector can be costly. When a vector outgrows its reserved space, the entire contents of the vector must be copied (or moved) to a larger reserve. It is specifically because std::vector resizing can be costly that vector::reserve() exists. reserve() can prepare a std::vector to anticipate reaching a certain size without exceeding its capacity. Conversely, a deque can always add more memory without needing to relocate the existing elements. If a std::deque could reserve() memory, there would be little to no noticeable benefit. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15524475', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1494037/']} | jdg_81431 |
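A toy Python model (hypothetical classes, not the STL) of the two growth strategies the answer contrasts: a contiguous array must copy every existing element when it outgrows its capacity, while a chunked deque-like structure simply allocates a fresh block.

```python
class ContiguousArray:
    """Doubling vector: relocation copies all existing elements."""
    def __init__(self):
        self.cap, self.n, self.copies = 1, 0, 0
        self.data = [None]
    def append(self, x):
        if self.n == self.cap:          # out of reserve: reallocate
            self.cap *= 2
            new = [None] * self.cap
            for i in range(self.n):     # move every element
                new[i] = self.data[i]
                self.copies += 1
            self.data = new
        self.data[self.n] = x
        self.n += 1

class ChunkedDeque:
    """Deque of fixed-size blocks: growth never relocates elements."""
    BLOCK = 4
    def __init__(self):
        self.blocks, self.copies = [[]], 0
    def append(self, x):
        if len(self.blocks[-1]) == self.BLOCK:  # just add a new block
            self.blocks.append([])
        self.blocks[-1].append(x)

vec, dq = ContiguousArray(), ChunkedDeque()
for i in range(100):
    vec.append(i)
    dq.append(i)
```

After 100 appends, vec.copies is 127 (1 + 2 + 4 + ... + 64) while dq.copies stays 0, which is why reserve() pays off for a vector but would buy a deque almost nothing.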
stackexchange | llm_judgeable_groundtruth_similarity | 2972846 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have just started learning the pthreads API and I am following the tutorial here. However, in an example program for pthread_create, the sample program creates a long variable and passes its value, typecast as void*. In the thread entry function, it casts it back to a long. Is this legit? I understand that if I pass the address of the variable t, every thread would be acting on the same variable and not on a copy of it. Can we do this because it's a void* and the compiler has no idea what type we are sending?

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h> /* added: for exit() */

#define NUM_THREADS 5

void *PrintHello(void *threadid)
{
    long tid;
    tid = (long)threadid;
    printf("Hello World! It's me, thread #%ld!\n", tid);
    pthread_exit(NULL);
}

int main(int argc, char *argv[])
{
    pthread_t threads[NUM_THREADS];
    int rc;
    long t;
    for (t = 0; t < NUM_THREADS; t++) {
        printf("In main: creating thread %ld\n", t);
        rc = pthread_create(&threads[t], NULL, PrintHello, (void *)t);
        if (rc) {
            printf("ERROR; return code from pthread_create() is %d\n", rc);
            exit(-1);
        }
    }
    pthread_exit(NULL);
}
Now provide the response and nothing else.
| MMS is just an HTTP POST request. You should perform the request using the extra network feature:

final ConnectivityManager connMgr = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
final int result = connMgr.startUsingNetworkFeature(ConnectivityManager.TYPE_MOBILE, Phone.FEATURE_ENABLE_MMS);

If you get a result with the Phone.APN_REQUEST_STARTED value, you have to wait for the proper state. Register a BroadcastReceiver and wait until Phone.APN_ALREADY_ACTIVE appears:

final IntentFilter filter = new IntentFilter();
filter.addAction(ConnectivityManager.CONNECTIVITY_ACTION);
context.registerReceiver(reciver, filter);

If the connection background is ready, build the content and perform the request. If you want to do that using Android's internal code, please use this:

final SendReq sendRequest = new SendReq();
final EncodedStringValue[] sub = EncodedStringValue.extract(subject);
if (sub != null && sub.length > 0) {
    sendRequest.setSubject(sub[0]);
}
final EncodedStringValue[] phoneNumbers = EncodedStringValue.extract(recipient);
if (phoneNumbers != null && phoneNumbers.length > 0) {
    sendRequest.addTo(phoneNumbers[0]);
}
final PduBody pduBody = new PduBody();
if (parts != null) {
    for (MMSPart part : parts) {
        final PduPart partPdu = new PduPart();
        partPdu.setName(part.Name.getBytes());
        partPdu.setContentType(part.MimeType.getBytes());
        partPdu.setData(part.Data);
        pduBody.addPart(partPdu);
    }
}
sendRequest.setBody(pduBody);
final PduComposer composer = new PduComposer(this.context, sendRequest);
final byte[] bytesToSend = composer.make();
HttpUtils.httpConnection(context, 4444L, MMSCenterUrl, bytesToSend, HttpUtils.HTTP_POST_METHOD, !TextUtils.isEmpty(MMSProxy), MMSProxy, port);

MMSCenterUrl: url from MMS-APNs, MMSProxy: proxy from MMS-APNs, port: port from MMS-APNs. Note that some classes are from internal packages, so downloading them from the Android git repository is required.
The request should be done with url from user's apn-space...code..: public class APNHelper {public class APN { public String MMSCenterUrl = ""; public String MMSPort = ""; public String MMSProxy = ""; }public APNHelper(final Context context) { this.context = context;} public List<APN> getMMSApns() { final Cursor apnCursor = this.context.getContentResolver().query(Uri.withAppendedPath(Telephony.Carriers.CONTENT_URI, "current"), null, null, null, null);if ( apnCursor == null ) { return Collections.EMPTY_LIST; } else { final List<APN> results = new ArrayList<APN>(); if ( apnCursor.moveToFirst() ) { do { final String type = apnCursor.getString(apnCursor.getColumnIndex(Telephony.Carriers.TYPE)); if ( !TextUtils.isEmpty(type) && ( type.equalsIgnoreCase(Phone.APN_TYPE_ALL) || type.equalsIgnoreCase(Phone.APN_TYPE_MMS) ) ) { final String mmsc = apnCursor.getString(apnCursor.getColumnIndex(Telephony.Carriers.MMSC)); final String mmsProxy = apnCursor.getString(apnCursor.getColumnIndex(Telephony.Carriers.MMSPROXY)); final String port = apnCursor.getString(apnCursor.getColumnIndex(Telephony.Carriers.MMSPORT)); final APN apn = new APN(); apn.MMSCenterUrl = mmsc; apn.MMSProxy = mmsProxy; apn.MMSPort = port; results.add(apn); } } while ( apnCursor.moveToNext() ); } apnCursor.close(); return results; }}private Context context;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2972846', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/95321/']} | jdg_81432 |
stackexchange | llm_judgeable_groundtruth_similarity | 324031 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $n>1$ be a positive integer and let $A$ be an abelian variety over $\mathbb{C}$ . Then the symmetric product $S^n(A)$ is a normal projective variety over $\mathbb{C}$ with Kodaira dimension zero (see for instance https://arxiv.org/pdf/math/0006107.pdf ). Let $A(n)\to S^n(A)$ be a resolution of singularities. Then, up to finite etale cover, $A(n)$ is a product of hyperkaehler varieties, an abelian variety, and simply connected strict Calabi-Yau varieties. (This should follow from the Beauville-Bogomolov decomposition theorem. Or does this require an additional hypothesis on $A(n)$?) I am wondering what the decomposition of $A(n)$ looks like as $n$ grows. Is it always a strict Calabi-Yau variety? Could it be that $A(n)$ is in fact an abelian variety? I am looking for examples and would appreciate any comments.
Now provide the response and nothing else.
| When $\dim A = 1$ , $S^nA$ is a $\mathbb{P}^{n-1}$ -bundle over $A$ , so its Kodaira dimension is $-\infty$ . When $\dim A = 2$ , the minimal resolution of $S^nA$ is given by the Hilbert scheme $A^{[n]}$ , there is a natural map $$A^{[n]} \to A$$ (summation of points), which is smooth with fiber $K_{n-1}A$ , so-called higher Kummer variety, which is hyperkahler. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/324031', 'https://mathoverflow.net', 'https://mathoverflow.net/users/135215/']} | jdg_81433 |
stackexchange | llm_judgeable_groundtruth_similarity | 1786750 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to have some function that would return "Base" if a variable declared as class Base was passed to it, "Derived" if it was declared as Derived, etc., regardless of the runtime type of the value it was assigned.
Now provide the response and nothing else.
| See the code below for an example. The key is to use generics; the extension method is used just for nice syntax.

using System;
using System.Collections;          // added: needed for ICollection, IEnumerable
using System.Collections.Generic;  // added: needed for IList<T>, List<T>

static class Program
{
    public static Type GetDeclaredType<T>(this T obj)
    {
        return typeof(T);
    }

    // Demonstrate how GetDeclaredType works
    static void Main(string[] args)
    {
        ICollection iCollection = new List<string>();
        IEnumerable iEnumerable = new List<string>();
        IList<string> iList = new List<string>();
        List<string> list = null;

        Type[] types = new Type[] {
            iCollection.GetDeclaredType(),
            iEnumerable.GetDeclaredType(),
            iList.GetDeclaredType(),
            list.GetDeclaredType()
        };

        foreach (Type t in types)
            Console.WriteLine(t.Name);
    }
}

Result:

ICollection
IEnumerable
IList`1
List`1

EDIT: You may also avoid using an extension method here, as it would cause GetDeclaredType to appear on every IntelliSense drop-down list. See another example:

using System;
using System.Collections;
using System.Collections.Generic;  // added: needed for List<T>

static class Program
{
    public static Type GetDeclaredType<T>(T obj)
    {
        return typeof(T);
    }

    static void Main(string[] args)
    {
        ICollection iCollection = new List<string>();
        IEnumerable iEnumerable = new List<string>();

        Type[] types = new Type[] {
            GetDeclaredType(iCollection),
            GetDeclaredType(iEnumerable)
        };

        foreach (Type t in types)
            Console.WriteLine(t.Name);
    }
}

This also produces correct results. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1786750', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217434/']} | jdg_81434 |
stackexchange | llm_judgeable_groundtruth_similarity | 1386142 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been reading through Beej's Guide to Network Programming to get a handle on TCP connections. In one of the samples, the client code for a simple TCP stream client looks like:

if ((numbytes = recv(sockfd, buf, MAXDATASIZE-1, 0)) == -1) {
    perror("recv");
    exit(1);
}
buf[numbytes] = '\0';
printf("Client: received '%s'\n", buf);
close(sockfd);

I've set the buffer to be smaller than the total number of bytes that I'm sending. I'm not quite sure how I can get the other bytes. Do I have to loop over recv() until I receive '\0'?

*Note: on the server side I'm also implementing his sendall() function, so it should actually be sending everything to the client. See also 6.1. A Simple Stream Server in the guide.
Now provide the response and nothing else.
| Yes, you will need multiple recv() calls, until you have all data. To know when that is, using the return status from recv() is no good - it only tells you how many bytes you have received, not how many bytes are available, as some may still be in transit. It is better if the data you receive somehow encodes the length of the total data. Read as many data until you know what the length is, then read until you have received length data. To do that, various approaches are possible; the common one is to make a buffer large enough to hold all data once you know what the length is. Another approach is to use fixed-size buffers, and always try to receive min(missing, bufsize) , decreasing missing after each recv() . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1386142', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/70342/']} | jdg_81435 |
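The loop the answer describes, sketched in Python (function names are my own): read until exactly n bytes have arrived, and use a length prefix so the receiver knows what n is.

```python
import socket
import struct

def recv_exact(sock, n):
    """Keep calling recv() until exactly n bytes have been collected."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:  # peer closed mid-message
            raise ConnectionError("socket closed with %d/%d bytes read" % (len(buf), n))
        buf += chunk
    return bytes(buf)

def recv_message(sock):
    """Length-prefixed framing: 4-byte big-endian length, then the payload."""
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# demo over a local socket pair
a, b = socket.socketpair()
payload = b"hello, world"
b.sendall(struct.pack("!I", len(payload)) + payload)
received = recv_message(a)
a.close(); b.close()
```

The same pattern carries over directly to C: wrap recv() in a loop that decrements the remaining byte count, and never assume one send() maps to one recv().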
stackexchange | llm_judgeable_groundtruth_similarity | 11153524 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I request a number of nodes (not processors) when submitting a job in SGE? For example, in TORQUE we can specify qsub -l nodes=3. And how do I request nodes by their names in SGE? For example, in TORQUE we can do this with qsub -l nodes=abc+xyz+pqr, where abc, xyz and pqr are hostnames. For a single hostname, qsub -l hostname=abc works. But how do I delimit multiple hostnames in SGE?
Now provide the response and nothing else.
| Requesting a number of nodes with Grid Engine is done indirectly. When you want to submit a parallel job, you have to request a parallel environment (man sge_pe) together with the number of slots (processors etc.), like qsub -pe mytestpe 12 ... Depending on the allocation_rule defined in the parallel environment (qconf -sp mytestpe), the slots are distributed over one or more nodes. If you have a so-called fixed allocation rule, where you just set a certain number such as 4 as the allocation rule (4 slots per host), it is easy: if you want one host, just submit with -pe mytestpe 4; if you want 10 nodes, just submit with -pe mytestpe 40. A node name can be requested with -l h=abc. Since node names are RESTRINGs (regular expression strings) in Grid Engine, you can create a regular expression for host filtering: qsub -l h="abc|xyz". You can also create host groups (qconf -ahgrp) and request so-called queue domains (qsub -q all.q@@mygroup). Daniel http://www.gridengine.eu | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11153524', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1220318/']} | jdg_81436 |
stackexchange | llm_judgeable_groundtruth_similarity | 8918 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The state of our knowledge about general arithmetic circuits seems to be similar to the state of our knowledge about Boolean circuits, i.e. we don't have good lower-bounds. On the other hand we have exponential size lower-bounds for monotone Boolean circuits . What do we know about monotone arithmetic circuits? Do we have similar good lower-bounds for them? If not, what is the essential difference that doesn't allow us to get similar lower-bounds for monotone arithmetic circuits? The question is inspired by comments on this question .
Now provide the response and nothing else.
| Lower bounds for monotone arithmetic circuits come easier because they forbid cancellations. On the other hand, we can prove exponential lower bounds for circuits computing boolean functions even if any monotone real-valued functions $g:R\times R\to R$ are allowed as gates (see e.g. Sect. 9.6 in the book). Even though monotone arithmetic circuits are weaker than monotone boolean circuits (in the latter we have cancellations $a\land a=a$ and $a\lor (a\land b)=a$), these circuits are interesting because of their relation to dynamic programming (DP) algorithms. Most such algorithms can be simulated by circuits over the semirings $(+,\min)$ or $(+,\max)$; gates then correspond to subproblems used by the algorithm. What Jerrum and Snir (in the paper by V. Vinay) actually prove is that any DP algorithm for Min Weight Perfect Matching (as well as for the TSP problem) must produce exponentially many subproblems. But the Perfect Matching problem is not of "DP flavor" (it does not satisfy Bellman's Principle of Optimality); linear programming (not DP) is much better suited for this problem. So what about optimization problems that can be solved by reasonably small DP algorithms - can we prove lower bounds for them as well? Very interesting in this respect is an old result of Kerr (Theorem 6.1 in his PhD thesis). It implies that the classical Floyd-Warshall DP algorithm for the All-Pairs Shortest Paths problem (APSP) is optimal: $\Omega(n^3)$ subproblems are necessary. Even more interesting is that Kerr's argument is very simple (much simpler than the one Jerrum and Snir used): it just uses the distributivity axiom $a+\min(b,c)=\min(a+b,a+c)$, and the possibility to "kill" min-gates by setting one of their arguments to $0$. This way he proves that $n^3$ plus-gates are necessary to multiply two $n\times n$ matrices over the semiring $(+,\min)$. In Sect. 5.9 of the book by Aho, Hopcroft and Ullman it is shown that this problem is equivalent to the APSP problem.
A next question could be: what about the Single-Source Shortest Paths (SSSP) problem? The Bellman-Ford DP algorithm for this (seemingly "simpler") problem also uses $O(n^3)$ gates. Is this optimal? So far, no separation between these two versions of the shortest path problem is known; see an interesting paper of Virginia and Ryan Williams along these lines. So, an $\Omega(n^3)$ lower bound in $(+,\min)$-circuits for SSSP would be a great result. A further question could be: what about lower bounds for Knapsack? In this draft, lower bounds for Knapsack are proved in the weaker model of $(+,\max)$ circuits where the usage of $+$-gates is restricted; in the Appendix, Kerr's proof is reproduced. | {} | {'log_upvote_score': 6, 'links': ['https://cstheory.stackexchange.com/questions/8918', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/186/']} | jdg_81437 |
stackexchange | llm_judgeable_groundtruth_similarity | 456966 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I found this article: midi-in-through-out (credits to Zynthian). I like that it has notification LEDs for MIDI in/out/thru. However, I also notice there are optocouplers for MIDI Out and Thru, while the MIDI electrical spec, page 2/3, only defines an optocoupler for MIDI In. Is this overprotection, or are the additional optocouplers useful?
Now provide the response and nothing else.
| Optocouplers are used for galvanic isolation. In most cases, this protects against dangerous voltage differences, but in MIDI inputs, it just prevents ground loops. However, the optocouplers used for the MIDI outputs in the linked schematic do not provide any isolation whatsoever, because the grounds and +5V power supplies are connected together (they are the same). Those optocouplers could be replaced with a simple transistor, or (because no amplification is needed) with a piece of wire. That schematic is not the work of a competent designer. Ignore it. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/456966', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/140343/']} | jdg_81438 |
stackexchange | llm_judgeable_groundtruth_similarity | 2046790 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I used NetBeans to generate Web Service client code from a WSDL URL, but I can't change the endpoint address at run time using code. Please help me solve this problem!
Now provide the response and nothing else.
| You can do it in two ways: 1) Cast the port to BindingProvider and set the BindingProvider.ENDPOINT_ADDRESS_PROPERTY property:

MyService service = new MyService();
MyPort port = service....
BindingProvider bindingProvider = (BindingProvider) port;
bindingProvider.getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, "http://New.Endpoint/service");

2) Call the generated service constructor which takes the WSDL URL as a parameter:

QName qname = new QName("http://serviceuri/", "service");
String wsdl = "http://New.Endpoint/service?wsdl";
MyService service = new MyService(new URL(wsdl), qname);
MyPort port = check...; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2046790', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/231099/']} | jdg_81439 |
stackexchange | llm_judgeable_groundtruth_similarity | 58383041 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I found out how you can test an exception or error: https://stackoverflow.com/a/54241438/6509751 But how do I test that the following assert works correctly?

void cannotBeNull(dynamic param) {
  assert(param != null);
}

I tried the following, but it does not work. The assertion is simply printed out and the test fails:

void main() {
  test('cannoBeNull assertion', () {
    expect(cannotBeNull(null), throwsA(const TypeMatcher<AssertionError>()));
  });
}
Now provide the response and nothing else.
| There are two key aspects to this: Pass a callback to expect. When you do that, you can never do something wrong, even if you just instantiate an object. This was already shown in the linked answer. Use throwsAssertionError. Example:

expect(() {
  assert(false);
}, throwsAssertionError);

Applied to the code from the question:

void main() {
  test('cannoBeNull assertion', () {
    expect(() => cannotBeNull(null), throwsAssertionError);
  });
}

Why do we need to pass a callback? If you have a function without parameters, you can also pass a reference to it. If there were no callback, the assertion would be evaluated before expect executes, and there would be no way for expect to catch the error. By passing a callback, we allow expect to call that callback itself, which lets it catch and handle the AssertionError. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/58383041', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6509751/']} | jdg_81440 |
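The principle is the same in any language: hand the test framework a callable instead of an already-evaluated expression. A minimal Python sketch (the helper name is my own):

```python
def cannot_be_none(param):
    assert param is not None

def expect_raises(exc_type, fn):
    """Call fn() and report whether it raised exc_type; the helper can
    only catch the error because it performs the call itself."""
    try:
        fn()
    except exc_type:
        return True
    return False

# Passing a callback works: the assertion fires inside expect_raises.
caught = expect_raises(AssertionError, lambda: cannot_be_none(None))
```

Calling expect_raises(AssertionError, cannot_be_none(None)) instead would evaluate the assertion before the helper ever runs, which is exactly the bug in the question.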