source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 395686 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to build an OpAmp without using an integrated circuit or a transistor or a vacuum tube? I wanted to see how it would look on a larger scale before miniaturization on a chip. I was hoping for a high view of what that would entail, not only that it is logically possible. @HarrySvensson Thanks. That is what I wanted. I didn't mean to cause a ruckus. I only wanted to see what was going on outside of all the YouTube videos and all the rest. I figured if I could see it with big chunky non-IC parts, I could understand it. Apparently I miscalculated there. What I should have asked for was an equivalent circuit. Now I know the keywords to search and have found many.
Now provide the response and nothing else.
| Opamps have power gain, so one way or another you need an external power source and active components. You have ruled out the usual active components used for amplification in electronics, which are transistors and vacuum tubes. You therefore need to get clever and find other ways to amplify. That means you need to be able to control a large amount of power by variations of a small amount of power. Some possibilities: A motor driving a rheostat. LEDs controlling LDRs (light dependent resistors). Pressure-controlled water flow valves. Pressure-controlled pneumatic valves. A gasoline engine with the control being the throttle. A mechanically controlled adjustable transformer, like a variac. An electrical generator where the power input is the mechanical rotation of the shaft, and the control is done by changing the field winding current. Once you have something that can amplify, you still have to use it, or several of them in the right configuration, to make an opamp. This is just like a transistor isn't an opamp, but multiple transistors arranged the right way (with some passive parts) can be an opamp. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/395686', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/59603/']} | jdg_79341 |
stackexchange | llm_judgeable_groundtruth_similarity | 4247472 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Note: In my question by repetition I mean count of extra appearances of any digit in the number. I found this answer for permutations with global limited repetition . The problem with that answer (as far as I understand) is that the solution in that answer requires knowing repetitions specific to each digit, which makes it not useful if we need to know the permutations with global limited repetitions without the need for providing repetitions for each digit in each number, or maybe this formula does cover permutations with global limited repetition, but I didn't learn the correct way of using it. What I need is an example using the linked answer (if it could provide my request) to learn how to get the result without the need for providing repetitions for each digit. The example is: all the permutations of 10-digit numbers coming from digits 0 to 9 with global repetition of 3 (no matter from which digit; meaning repetition of all the digits during all the permutations will be covered) in each number, like: 9576545100 --> 5 appeared three times which in my algorithm one time considered valid, but three time considered two time repetition and 0 appeared one extra time (2+1=3times) 9901286787 --> 9 repeated one time and 8 repeated one time and 7 repeated one time (1+1+1=3times) 7890000234 --> 0 repeated three times (3times) The example requests how many numbers from 0000000000 to 9999999999 exist that have 3 global repetitions in them and not specific to each digit but global between digits? In the solution from the linked answer which you could see below, we need to provide k which is a number of elements (in my case digits) that has been repeated and l is the sum of all repetitions, no matter from which digit. So this means this solution is not for providing permutations with global limited repetition, but a solution to provide permutations with global limited repetitions with knowing each digit has been repeated how many times as a necessity. 
$$\sum _{k=0}^m\binom{m}{k}\binom{m-k}{n-(l+k)}\sum _{x_1+\cdots +x_k=l}\binom{n}{\underbrace{1,\cdots ,1}_{n-(l+k)},x_1+1,\cdots ,x_k+1}$$
Now provide the response and nothing else.
| In step 1, you're fixing $n_0$ which has the property $|a_{n_0} - a| \geq \epsilon$ , but in step 3, you're using that $|a_{m_0} - a| \geq \epsilon$ for your $m$ large enough. You can't first fix $n_0$ and then replace it. That's already assuming that your sequence $a_n$ converges to $a$ , which is what you're trying to prove. The standard argument constructs (using Bolzano-Weierstrass) a convergent subsequence of $a_n$ . It can then be shown that any Cauchy sequence with a convergent subsequence is convergent. A quicker way to see that your argument doesn't work is the following: it doesn't rely on the "completeness" of $\mathbb{R}$ . You will learn that - in more general settings - convergent sequences are always Cauchy, but not all Cauchy sequences converge. For this you really need the completeness (which is used in the proof of Bolzano Weierstrass). | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4247472', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/166912/']} | jdg_79342 |
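The "global repetition" count the question describes works out to string length minus the number of distinct digits (e.g. `9576545100` has 7 distinct digits, so 10 − 7 = 3 repetitions). A brute-force sketch checking this on short strings (helper names invented for illustration):

```python
from itertools import product

def global_repetitions(digits):
    # total extra appearances summed over all digits:
    # "055" -> 1, "000" -> 2, "012" -> 0
    return len(digits) - len(set(digits))

def count_with_repetition(length, reps, alphabet="0123456789"):
    # exhaustively count strings of the given length whose
    # global repetition count equals `reps`
    return sum(1 for s in product(alphabet, repeat=length)
               if global_repetitions(s) == reps)

# length-3 strings with exactly one repeated appearance must use
# exactly 2 distinct digits: C(10,2) * (2^3 - 2) = 45 * 6 = 270
print(count_with_repetition(3, 1))  # -> 270
```

The same exhaustive check at length 10 is infeasible, but the identity "repetitions = length − distinct digits" reduces the original question to counting strings with exactly 7 distinct digits.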
stackexchange | llm_judgeable_groundtruth_similarity | 34897388 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have already defined a custom admin action for my model which works perfectly as expected. I also looked at multiple ways of adding a button to the admin change form page here on SO. The only step that I am missing is how do I make a button in the change form page execute my custom admin action with current object? The goal is to allow admin to inspect every object individually and perform an action on them without needing to go back to list view, selecting the inspected object, and executing the action from the list. My custom admin action looks like this: def admin_apply_change(modeladmin, request, queryset): # loop over objects in query set and perform action I am assuming there is a simple and clean way of calling this action in admin change form, where the queryset would only contain the currently opened object the admin is looking at. NOTE: It would be preferable if the button is at the bottom of the change form, next to Save button instead of being at top with History which is not very visible. Solution See the answer below by Remi for the solution. In order to make it work the following corrections are needed: In the override of response_change initialization of some variables is missing: opts = self.model._metapk_value = obj._get_pk_val()preserved_filters = self.get_preserved_filters(request) New inclusion tag custom_submit_row should be placed in templatetags and not in admin (see docs for custom templatetags ) This is the oversight you could lose some time on. In change_form.html you not only have to change the suggested line: {% if save_on_top %}{% block submit_buttons_top %}{% submit_row %}{% endblock %}{% endif %} but also the more important line at the bottom where submit_row appears: {% block submit_buttons_bottom %}{% submit_row %}{% endblock %} It is located just above the javascript block in change_form.html .
Now provide the response and nothing else.
| You could take a look at the change_form_template and set it to a custom template of yours and override the response_change method: class MyModelAdmin(admin.ModelAdmin): # A template for a customized change view: change_form_template = 'path/to/your/custom_change_form.html' def response_change(self, request, obj): opts = self.model._meta pk_value = obj._get_pk_val() preserved_filters = self.get_preserved_filters(request) if "_customaction" in request.POST: # handle the action on your obj redirect_url = reverse('admin:%s_%s_change' % (opts.app_label, opts.model_name), args=(pk_value,), current_app=self.admin_site.name) redirect_url = add_preserved_filters({'preserved_filters': preserved_filters, 'opts': opts}, redirect_url) return HttpResponseRedirect(redirect_url) else: return super(MyModelAdmin, self).response_change(request, obj) Copy the change_form.html from your site-packages/django/contrib/admin/templates/change_form.html and edit the line 40 {% if save_on_top %}{% block submit_buttons_top %}{% submit_row %}{% endblock %}{% endif %} to {% if save_on_top %}{% block submit_buttons_top %}{% custom_submit_row %}{% endblock %}{% endif %} Also check the line: {% block submit_buttons_bottom %}{% submit_row %}{% endblock %} just above the javascript block. Then you can register a new inclusion tag somewhere in your admin.py or add it to templatetags: @register.inclusion_tag('path/to/your/custom_submit_line.html', takes_context=True)def custom_submit_row(context): """ Displays the row of buttons for delete and save. 
""" opts = context['opts'] change = context['change'] is_popup = context['is_popup'] save_as = context['save_as'] ctx = { 'opts': opts, 'show_delete_link': ( not is_popup and context['has_delete_permission'] and change and context.get('show_delete', True) ), 'show_save_as_new': not is_popup and change and save_as, 'show_save_and_add_another': ( context['has_add_permission'] and not is_popup and (not save_as or context['add']) ), 'show_save_and_continue': not is_popup and context['has_change_permission'], 'is_popup': is_popup, 'show_save': True, 'preserved_filters': context.get('preserved_filters'), } if context.get('original') is not None: ctx['original'] = context['original'] return ctx The contents of your custom_submit_line.html : {% load i18n admin_urls %}<div class="submit-row">{% if show_save %}<input type="submit" value="{% trans 'Save' %}" class="default" name="_save" />{% endif %}{% if show_delete_link %} {% url opts|admin_urlname:'delete' original.pk|admin_urlquote as delete_url %} <p class="deletelink-box"><a href="{% add_preserved_filters delete_url %}" class="deletelink">{% trans "Delete" %}</a></p>{% endif %}{% if show_save_as_new %}<input type="submit" value="{% trans 'Save as new' %}" name="_saveasnew" />{% endif %}{% if show_save_and_add_another %}<input type="submit" value="{% trans 'Save and add another' %}" name="_addanother" />{% endif %}{% if show_save_and_continue %}<input type="submit" value="{% trans 'Save and continue editing' %}" name="_continue" />{% endif %}<input type="submit" value="{% trans 'Custom Action' %}" name="_customaction" /></div> It is a lot of code, but mostly copy/paste. Hope that helps. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34897388', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4958717/']} | jdg_79343 |
stackexchange | llm_judgeable_groundtruth_similarity | 10427133 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a user control that has a Grid with two rows.the first row for a title and the second one for a content that will be defined outside the user control such as a Button in our example. Somehow I didn't get it to work. UserControl1 xaml: <Grid Background="LightBlue"> <Grid.RowDefinitions> <RowDefinition Height="50" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <TextBlock Text="Title" FontSize="30" Margin="10,0,0,0"/></Grid> MainWindow xaml: <Grid> <local:UserControl1> <Button>Click me</Button> </local:UserControl1></Grid> The picture below should explain what's my problem:
Now provide the response and nothing else.
| The following code <local:UserControl1> <Button>Click me</Button></local:UserControl1> Means that you set UserControl1 's Content property to be that button. This button simply replaces that UserControls1 's markup. So all the things that you have in UserControl1.xaml are not there any more. EDIT If you want your UserControl to host some markup that will be set somewhere outside of it, you can add a DependencyProperty to it, for example: /// <summary> /// Gets or sets additional content for the UserControl /// </summary> public object AdditionalContent { get { return (object)GetValue(AdditionalContentProperty); } set { SetValue(AdditionalContentProperty, value); } } public static readonly DependencyProperty AdditionalContentProperty = DependencyProperty.Register("AdditionalContent", typeof(object), typeof(UserControl1), new PropertyMetadata(null)); And add some element to it's markup to host that additional content. Here's an example extending the markup you provided: <UserControl ... Name="userControl"> <Grid Background="LightBlue"> <Grid.RowDefinitions> <RowDefinition Height="50" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <TextBlock Text="Title" FontSize="30" Margin="10,0,0,0"/> <ContentPresenter Content="{Binding AdditionalContent, ElementName=userControl}" /> </Grid></UserControl> Now you can use it as following: <local:UserControl1> <local:UserControl1.AdditionalContent> <Button>Click me</Button> </local:UserControl1.AdditionalContent></local:UserControl1> | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10427133', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/138627/']} | jdg_79344 |
stackexchange | llm_judgeable_groundtruth_similarity | 2285150 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Im using a LinearLayout to put two buttons horizontally side-by-side, but I want to each button to size itself to use 50% of the horizontal space. I thought layout_weight of "1" for each button would do the trick, but maybe my layout_width needs to be changed?
Now provide the response and nothing else.
| The layout_weight attribute controls how much of the left over space each of your buttons is given. If your buttons take up different amounts of space to start with, then each of them will first be given the space they ask for, and then any remaining space will be divided up between them, meaning that you won't have an exact 50/50 split. You can get around this behaviour by first setting layout_width="0px" (keeping your layout_weight s as they are), and relying on 50% being enough space to display each button. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2285150', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/246793/']} | jdg_79345 |
stackexchange | llm_judgeable_groundtruth_similarity | 15113514 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
1st statement: IEnumerable<char> query = "Not what you might expect";query = query.Where (c => c != 'a');query = query.Where (c => c != 'e');query = query.Where (c => c != 'i');query = query.Where (c => c != 'o');query = query.Where (c => c != 'u'); Output of String.Join("", query) : "Nt wht y mght xpct" 2nd statement: query = "Not what you might expect";foreach (char vowel in "aeiou") query = query.Where (c => c != vowel); Output of String.Join("", query) : "Not what yo might expect" The outputs from these statements are different.Can any one explain why?
Now provide the response and nothing else.
| If you're using a C# version lower than 5.0 (where this was fixed), this is the reason: The lambda in your query captures the loop variable vowel . Because Linq likes to use deferred execution, the value of this reference is not read until the query gets executed (by iterating over it), which is after the foreach loop has finished. At that point, the most recent value of vowel is u , which is why you get the unexpected output. You can get around this by copying the value to another temporary variable (or by upgrading to C# 5.0). Try this: query = "Probably what you might expect";foreach (char vowel in "aeiou") { char currentVowel = vowel; query = query.Where (c => c != currentVowel );} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15113514', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2115561/']} | jdg_79346 |
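Python has the same late-binding pitfall, which may make the mechanism easier to see outside of LINQ: each lambda below closes over the *variable* `vowel`, not its value at creation time, and the filters are only evaluated after the loop finishes.

```python
query = "Not what you might expect"

# every lambda sees the final value of `vowel`, i.e. 'u'
filters = [lambda c: c != vowel for vowel in "aeiou"]
late = "".join(c for c in query if all(f(c) for f in filters))

# capturing the current value via a default argument restores the intent
filters = [lambda c, v=vowel: c != v for vowel in "aeiou"]
fixed = "".join(c for c in query if all(f(c) for f in filters))

print(late)   # -> Not what yo might expect
print(fixed)  # -> Nt wht y mght xpct
```

The default-argument trick plays the same role as the `currentVowel` temporary in the C# answer: it snapshots the loop variable's value at each iteration.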
stackexchange | llm_judgeable_groundtruth_similarity | 15942571 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to make a webpage where a set of DIVs cycles from top to bottom (see fiddle). And then whenever you hover a certain part of the div (in this case, the STOP link), the animation stops then plays again on mouseout. What I'm lacking right now is a way to stop the animation whenever that STOP link has been clicked. I've added a stop function on the link but it won't work. There might have been a conflict or some sort with the hover function I've made. Thanks in advance for the help and sorry for the noobish question. Link to the fiddle: http://jsfiddle.net/Psp9R/ Jquery: $(document).ready(function() {$(".row").last().addClass("last");$(".mholder").each(function() { var i = 0; $(this).find(".row").each(function() { var $this = $(this); $this.css("bottom", i); i += $this.height(); });});// PLAY AND STOP$('#start').click(function() { $("#overlay").hide(); var countScrolls = $('.mholder .row').length; for (var i=0; i < countScrolls; i++) { doScroll($('.mholder .row:nth-child(' + i + ')')); }});$('.stop').click(function() { var countScrolls = $('.mholder .row').length; $("#overlay").show(); for (var i=0; i < countScrolls; i++) { $('.mholder .row:nth-child(' + i + ')').stop(); }});//PAUSE ON HOVER$(".stop").hover(function () { var countScrolls = $('.mholder .row').length; for (var i=0; i < countScrolls; i++) { $('.mholder .row:nth-child(' + i + ')').stop(); }}, function () { var countScrolls = $('.mholder .row').length; for (var i=0; i < countScrolls; i++) { doScroll($('.mholder .row:nth-child(' + i + ')')); }});});function doScroll($ele) {var bottom = parseInt($ele.css("bottom")); if (bottom < -60) { //bit arbitrary! var $lastEle = $ele.closest('.mholder').find(".last"); $lastEle.removeClass("last"); $ele.addClass("last"); var bottom = (parseInt($lastEle.css("bottom")) + $lastEle.height()); $ele.css("bottom", bottom); } $ele.animate({ bottom: (parseInt(bottom) - 80) }, 2200, 'linear', function() { doScroll($(this)) });}
Now provide the response and nothing else.
| You can override the route function in your router class to intercept the route calls : var Router = Backbone.Router.extend({ routes: { "" : "homepage", "catalog/:id" : "catalogPage" }, route: function(route, name, callback) { var router = this; if (!callback) callback = this[name]; var f = function() { console.log('route before', route); callback.apply(router, arguments); console.log('route after', route); }; return Backbone.Router.prototype.route.call(this, route, name, f); }, homepage: function() { console.log("homepage"); }, catalogPage: function(id) { console.log("catalogPage "+id); }});var r = new Router();Backbone.history.start(); And a demo http://jsfiddle.net/nikoshr/EdLzh/ | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15942571', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2190072/']} | jdg_79347 |
stackexchange | llm_judgeable_groundtruth_similarity | 52107555 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have unbalanced training dataset, thats why I built custom weighted categorical cross entropy loss function. But the problem is my validation set is balanced one and I want to use the regular categorical cross entropy loss. So can I pass different loss function for validation set within Keras? I mean the wighted one for training and regular one for validation set? def weighted_loss(y_pred, y_ture): ' ' 'return lossmodel.compile(loss= weighted_loss, metric='accuracy')
Now provide the response and nothing else.
| You can try the backend function K.in_train_phase() , which is used by the Dropout and BatchNormalization layers to implement different behaviors in training and validation. def custom_loss(y_true, y_pred): weighted_loss = ... # your implementation of weighted crossentropy loss unweighted_loss = K.sparse_categorical_crossentropy(y_true, y_pred) return K.in_train_phase(weighted_loss, unweighted_loss) The first argument of K.in_train_phase() is the tensor used in training phase, and the second is the one used in test phase. For example, if we set weighted_loss to 0 (just to verify the effect of K.in_train_phase() function): def custom_loss(y_true, y_pred): weighted_loss = 0 * K.sparse_categorical_crossentropy(y_true, y_pred) unweighted_loss = K.sparse_categorical_crossentropy(y_true, y_pred) return K.in_train_phase(weighted_loss, unweighted_loss)model = Sequential([Dense(100, activation='relu', input_shape=(100,)), Dense(1000, activation='softmax')])model.compile(optimizer='adam', loss=custom_loss)model.outputs[0]._uses_learning_phase = True # required if no dropout or batch norm in the modelX = np.random.rand(1000, 100)y = np.random.randint(1000, size=1000)model.fit(X, y, validation_split=0.1)Epoch 1/10900/900 [==============================] - 1s 868us/step - loss: 0.0000e+00 - val_loss: 6.9438 As you can see, the loss in training phase is indeed the one multiplied by 0. Note that if there's no dropout or batch norm in your model, you'll need to manually "turn on" the _uses_learning_phase boolean switch, otherwise K.in_train_phase() will have no effect by default. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52107555', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8262057/']} | jdg_79348 |
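To see concretely what the weighting changes, here is a dependency-free sketch of weighted vs. unweighted cross entropy on a single prediction (the class weights are invented for illustration; real Keras losses operate on batched tensors):

```python
import math

def cross_entropy(y_true_idx, probs, weights=None):
    # -w_c * log(p_c) for the true class c; weight defaults to 1.0,
    # which recovers the ordinary (unweighted) loss
    w = 1.0 if weights is None else weights[y_true_idx]
    return -w * math.log(probs[y_true_idx])

probs = [0.7, 0.2, 0.1]    # model output for one sample
weights = [1.0, 3.0, 3.0]  # up-weight the rare classes

print(cross_entropy(0, probs))           # plain CE for class 0
print(cross_entropy(1, probs, weights))  # rare class 1 penalized 3x
```

The `K.in_train_phase(...)` trick in the answer then simply selects the weighted variant during training and the plain variant during validation.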
stackexchange | llm_judgeable_groundtruth_similarity | 20364236 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Java 8, Class appears to have gained methods to get an AnnotatedType view of its superclasses and its superinterfaces. How can you convert a Class to its own AnnotatedType ? Does that question even make sense? From what I can tell, an AnnotatedType has-a Type , not is-a Type . It's an AnnotatedElement , though; it's all very messy. I have searched through the Javadocs to no avail, so far.
Now provide the response and nothing else.
| So I finally got an acceptable understanding of the AnnotatedType interface. Here's a working Java 8 example to illustrate one of its uses public static void main(String[] args) { Class<?> fooClass = Foo.class; AnnotatedType type = fooClass.getAnnotatedSuperclass(); System.out.println(type); System.out.println(Bar.class == type.getType()); System.out.println(Arrays.toString(type.getAnnotations())); System.out.println(Arrays.toString(type.getDeclaredAnnotations()));}public static class Bar {}public static class Foo extends @Custom Bar {}// So that annotation metadata is available at run time@Retention(RetentionPolicy.RUNTIME)// TYPE_USE being the important one@Target(value = {ANNOTATION_TYPE, CONSTRUCTOR, FIELD, LOCAL_VARIABLE, METHOD, PACKAGE, PARAMETER, TYPE, TYPE_PARAMETER, TYPE_USE}) public @interface Custom {} This prints sun.reflect.annotation.AnnotatedTypeFactory$AnnotatedTypeBaseImpl@1d44bcfatrue[@com.testing.Test$Custom()][@com.testing.Test$Custom()] The AnnotatedType interface states AnnotatedType represents the potentially annotated use of a type in the program currently running in this VM. and Class#getAnnotatedSuperclass() javadoc states Returns an AnnotatedType object that represents the use of a type to specify the superclass of the entity represented by this Class object. I've made potentially bold in the AnnotatedType javadoc because it makes it clear that the type usage doesn't have to be annotated. If you had public static class Bar {}...Bar.class.getAnnotatedSuperclass(); // returns Class instance for java.lang.Object This is a use case that wasn't possible in Java 7 and lower because you could not annotate type usages ( see some examples here ). In Java 8, however, you can do public static class Foo extends @Custom Bar { where the type Bar is used as a super class and its usage is annotated with @Custom . It is therefore an AnnotatedType . Therefore, Foo.class.getAnnotatedSuperClass() will return an AnnotatedType instance for that usage. 
How can you convert a Class to its own AnnotatedType ? Does that question even make sense? The question doesn't make sense. This is because a Class object holds self-contained metadata about a class. By self-contained, I mean everything that can be deduced from the class' .class file (or actual declaration). You cannot deduce any usages of the type anywhere else and therefore it cannot be converted to any AnnotatedType for itself. You can have public static class Foo extends @Custom Bar {}public static class Zoom extends @Custom Bar {}public static class Doing extends @Custom Bar {} There is an AnnotatedType instance for each of the above uses of Bar , but which one would you chose to convert a [Bar] Class to its own AnnotatedType ? | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20364236', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/869736/']} | jdg_79349 |
stackexchange | llm_judgeable_groundtruth_similarity | 2100884 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This might be an unusual question but I was wondering why the 2nd derivative of this function is a straight line? I kind of have the feeling this is not that easy to answer. But it kind of struck me that it is exactly linear. Here's a picture: I mean, yes mathematically you can say that it just is as it is, but is there also an intuitive answer to it? Thank you for answering! Regards!
Now provide the response and nothing else.
| Suppose the second derivative of a function is a straight line: $$f''(x)=ax+b$$ It follows that $$f'(x)=\frac12ax^2+bx+c$$ $$f(x)=\frac16ax^3+\frac12bx^2+cx+d$$ So the only functions that have their second derivatives as straight lines are polynomials degree $3$ or lower. Now, for some insight, let's just worry about the first derivative of $x^2$, which is obviously $2x$, but I wish to make it more intuitive: Rather than letting $h\to0$, we fix $h=1$ and just take finite differences. It is easy enough to see then that $(x+1)^2-x^2=2x+1=1+\frac d{dx}x^2$. Similarly, if we apply a finite difference on $x^3$ 2 times, which is analogous to the second derivative, we get $$(x+1)^3-x^3=3x^2+3x+1$$ $$3(x+1)^2+3(x+1)+1-3x^2-3x-1=6x+6=6+\frac{d^2}{dx^2}x^3$$ Indeed, you should be able to prove by induction that the $n$th derivative of a polynomial degree $n+1$ is linear. And as a last comment, usually, when we have $h\to0$, all the left-over constants and such go to $0$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2100884', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/405216/']} | jdg_79350 |
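The finite-difference argument in the answer can be checked numerically: applying the second difference to a cubic gives values that are exactly linear in x, matching the claim that only polynomials of degree 3 or lower have straight-line second derivatives.

```python
def second_difference(f, x, h=1):
    # discrete analogue of f''(x) (up to a factor of h^2)
    return f(x + 2*h) - 2*f(x + h) + f(x)

cube = lambda x: x**3
# for f(x) = x^3 with h = 1: (x+2)^3 - 2(x+1)^3 + x^3 = 6x + 6
values = [second_difference(cube, x) for x in range(5)]
print(values)  # -> [6, 12, 18, 24, 30]
```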
stackexchange | llm_judgeable_groundtruth_similarity | 4194485 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My question is about using fulltext.As I know like queries which begin with % never use index : SELECT * from customer where name like %username% If I use fulltext for this query can ı take better performance? Can SQL Server use fulltext index advantages for queries like %username%?
Now provide the response and nothing else.
| Short answer There is no efficient way to perform infix searches in SQL Server, neither using LIKE on an indexed column, or with a fulltext index. Long answer In the general case, there is no fulltext equivalent to the LIKE operator. While LIKE works on a string of characters and can perform arbitrary wildcard matches against anything inside the target, by design fulltext operates upon whole words/terms only. (This is a slight simplification but it will do for the purpose of this answer.) SQL Server fulltext does support a subset of LIKE with the prefix term operator. From the docs ( http://msdn.microsoft.com/en-us/library/ms187787.aspx ): SELECT NameFROM Production.ProductWHERE CONTAINS(Name, ' "Chain*" '); would return products named chainsaw, chainmail, etc. Functionally, this doesn't gain you anything over the standard LIKE operator ( LIKE 'Chain%' ), and as long as the column is indexed, using LIKE for a prefixed search should give acceptable performance. The LIKE operator allows you to put the wildcard anywhere, for instance LIKE '%chain' , and as you mentioned this prevents an index from being used. But with fulltext, the asterisk can only appear at the end of a query term, so this is of no help to you. Using LIKE, it is possible to perform efficient postfix searches by creating a new column, setting its value to the reverse your target column, and indexing it. You can then query as follows: SELECT NameFROM Production.ProductWHERE Name_Reversed LIKE 'niahc%'; /* "chain" backwards */ which returns products with their names ending with "chain". I suppose you could then combine the prefix and reversed postfix hack: SELECT NameFROM Production.ProductWHERE Name LIKE 'chain%'AND Name_Reversed LIKE 'niahc%'; which implements a (potentially) indexed infix search, but it's not particularly pretty (and I've never tested this to see if the query optimizer would even use both indexes in its plan). 
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4194485', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/509546/']} | jdg_79351 |
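The reversed-column trick from the answer is easy to demonstrate with SQLite from Python (table and column names invented; whether the engine actually uses the index for a prefixed `LIKE` depends on the database — SQL Server does, as the answer describes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (name TEXT, name_reversed TEXT)")
conn.execute("CREATE INDEX ix_rev ON product (name_reversed)")

for name in ["chainsaw", "chainmail", "keychain", "anchor"]:
    # store the reversal alongside the original so a postfix search
    # on `name` becomes a prefix search on `name_reversed`
    conn.execute("INSERT INTO product VALUES (?, ?)", (name, name[::-1]))

rows = conn.execute(
    "SELECT name FROM product WHERE name_reversed LIKE 'niahc%'"
).fetchall()
print(rows)  # -> [('keychain',)]
```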
stackexchange | llm_judgeable_groundtruth_similarity | 1057 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This came up in relation to this answer Is it possible to set a timeout for LinkWrite ? LinkWrite seems to block until the sent data is read on the other end of the link. Can it block indefinitely or does it have a long timeout? If it doesn't have a default timeout, is it possible to set one? An example Let's create two new kernels in the front end (this might overwrite your existing kernels, but it will be reversed after restarting the front end): SetOptions[$FrontEndSession, EvaluatorNames -> {"Local" -> {"AutoStartOnLaunch" -> True}, "K1" -> {"AutoStartOnLaunch" -> False}, "K2" -> {"AutoStartOnLaunch" -> False}}] Then open two new notebooks, and in the first one evaluate SetOptions[EvaluationNotebook[], Evaluator -> "K2"]link = LinkCreate["alink"]LinkWrite[link, "boo"] In the second one evaluate SetOptions[EvaluationNotebook[], Evaluator -> "K1"]link = LinkConnect["alink"]LinkRead[link] LinkWrite in the first one will only return after LinkRead has finished evaluating in the second one. If the answer is that it's not possible to set a timeout, that can be reasonable too. I was wondering about this: suppose we're running a long parallel calculation, and we interrupt the main kernel to enter a subsession and inspect the state ( Evaluation -> Interrupt -> Enter subsession ). Is it possible that one of the parallel kernels will time out while trying to send the result, and the calculation won't be able to resume after returning from the subsession?
Now provide the response and nothing else.
| Setting up MathLink connections between kernels acting as peers (as opposed to in a master-slave arrangement) is sparsely documented, and the critical function you need to make this work, i.e. LinkActivate , is undocumented altogether (although, if you clear its ReadProtected attribute, you will see that it is merely a synonym for LinkConnect , which itself is a version of LinkOpen ). In fact, LinkRead and LinkWrite both work with message queues and are not inherently blocking operations, but the behaviour you see is the result of the MathLink connection not having been initialized properly before writing. To initialize the connection correctly, modify your code as follows:
SetOptions[EvaluationNotebook[], Evaluator -> "K2"]
link = LinkCreate["alink"]
(* Evaluate only after calling LinkConnect/LinkActivate from K1 *)
LinkActivate[link]
(* No longer blocks *)
LinkWrite[link, "boo"]
and
SetOptions[EvaluationNotebook[], Evaluator -> "K1"]
(* Evaluate immediately after calling LinkCreate from K2. *)
link = LinkConnect["alink"];
LinkActivate[link] (* this call is blocking! *)
LinkRead[link]
Why this is undocumented I do not know; to my knowledge the only place where this is described is the (rather specialist) book, MathLink: Network Programming with Mathematica by Chikara Miyaji and Paul Abbott. I discovered it when I was curious as to whether it was possible to write an MPI-style message-passing implementation in pure Mathematica . (The answer is yes; I posted some code on MathGroup here if you are interested.) | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/1057', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/12/']} | jdg_79352 |
stackexchange | llm_judgeable_groundtruth_similarity | 29336422 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The other day, I gave an answer to this question but then another user solved that problem with a sum + case conditional statement to add one edge condition to the result. So, the question came to my mind: how does the statement sum(case when jobname = 'Analyst' then 1 else 0 end) in the below query work
select d.*
from (select deptno,
             sum(case when jobname = 'Analyst' then 1 else 0 end) as numAnalysts
      from employees
      group by deptno
      order by numAnalysts asc
     ) d
where rownum = 1;
and return the number of such employees per department? Also, I would like to understand the performance of this query. Before posting this question, I read this , this and this but still didn't get how this works.
Now provide the response and nothing else.
| Presumably, this is the part that you are struggling to understand:
select deptno,
       sum(case when jobname = 'Analyst' then 1 else 0 end) as numAnalysts
from employees
group by deptno
This is a simple aggregation query, really. What the query is doing is:
Look at each row in employees .
If jobname is 'Analyst' , assign the value 1 (this is the case expression); otherwise, assign the value 0 .
Aggregate by department, summing the value just calculated. This has the effect of counting the number of analysts.
case is an expression that returns a value. The sum() is simply adding up that value for each group. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29336422', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1401695/']} | jdg_79353 |
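The conditional-aggregation pattern described above — derive a 1 or 0 per row, then sum within each group — can be sketched in plain Python (sample rows invented for illustration):

```python
from collections import defaultdict

# Invented sample rows standing in for the employees table.
employees = [
    {"deptno": 10, "jobname": "Analyst"},
    {"deptno": 10, "jobname": "Clerk"},
    {"deptno": 20, "jobname": "Analyst"},
    {"deptno": 20, "jobname": "Analyst"},
    {"deptno": 30, "jobname": "Manager"},
]

num_analysts = defaultdict(int)
for row in employees:
    # CASE WHEN jobname = 'Analyst' THEN 1 ELSE 0 END
    flag = 1 if row["jobname"] == "Analyst" else 0
    # SUM(...) ... GROUP BY deptno
    num_analysts[row["deptno"]] += flag

print(dict(num_analysts))  # {10: 1, 20: 2, 30: 0}
```

Each row contributes either 1 or 0 to its department's running total, which is exactly what SUM over the CASE expression does per group.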
stackexchange | llm_judgeable_groundtruth_similarity | 1169786 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't understand when an output parameter should be used, I personally wrap the result in a new type if I need to return more than one type, I find that a lot easier to work with than out. I have seen method like this, public void Do(int arg1, int arg2, out int result) are there any cases where that actually makes sense? how about TryParse , why not return a ParseResult type? or in the newer framework return a null-able type?
Now provide the response and nothing else.
| Out is good when you have a TryNNN function and it's clear that the out-parameter will always be set even if the function does not succeed. This allows you rely on the fact that the local variable you declare will be set rather than having to place checks later in your code against null. (A comment below indicates that the parameter could be set to null , so you may want to verify the documentation for the function you're calling to be sure if this is the case or not.) It makes the code a little clearer and easier to read. Another case is when you need to return some data and a status on the condition of the method like: public bool DoSomething(int arg1, out string result); In this case the return can indicate if the function succeeded and the result is stored in the out parameter. Admittedly, this example is contrived because you can design a way where the function simply returns a string , but you get the idea. A disadvantage is that you have to declare a local variable to use them: string result;if (DoSomething(5, out result)) UpdateWithResult(result); Instead of: UpdateWithResult(DoSomething(5)); However, that may not even be a disadvantage, it depends on the design you're going for. In the case of DateTime, both means (Parse and TryParse) are provided. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1169786', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/98970/']} | jdg_79354 |
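In languages without out parameters, the same TryNNN shape is often expressed by returning a (success, value) pair instead. A small Python sketch (function name invented, shown only as an analogy to the C# pattern above):

```python
def try_parse_int(text):
    """TryParse-style: return (succeeded, result) instead of using an out parameter."""
    try:
        return True, int(text)
    except ValueError:
        # Mirror the convention that the "out" value is still set on failure,
        # here to a default of 0.
        return False, 0

ok, value = try_parse_int("42")
print(ok, value)   # True 42

ok, value = try_parse_int("oops")
print(ok, value)   # False 0
```

The caller still gets both the status and the value, without having to declare a local variable up front.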
stackexchange | llm_judgeable_groundtruth_similarity | 15791 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Looking at the radius and mass of stars on Wikipedia, I see that the Sun is the densest of all, often many times denser than other stars. Is that because only non-dense starts are easily seen from a distance? Are there any stars of comparable luminosity to the sun that can be seen with the naked eye, and do they have a similar density as the sun? If needed I can copy-paste the mass and radius of other stars here for reference.
Now provide the response and nothing else.
| The answer lies in the selection bias towards brighter stars. There are two reasons this makes the Sun look relatively dense. The first is in Martin's answer. Looking at a list of brightest stars , many (e.g. Betelgeuse, Aldebaran, Antares) are red giants. These are stars that have finished burning hydrogen into helium in their cores and are much larger in size than main-sequence stars like the Sun. As a result, their mean densities are small. The second effect is that the more massive a main-sequence star is, the smaller its mean density but the greater its luminosity. So again, more massive stars on the main-sequence (e.g. Rigel) are easier to see but also have lower mean densities. If you compare the Sun to stars from a list of Sun-like stars , you'll find it isn't unusual. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/15791', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/4877/']} | jdg_79355 |
stackexchange | llm_judgeable_groundtruth_similarity | 5236052 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Java, I want to get the current time in GMT. I tried various options like this: Date date = new Date();Calendar calendar = Calendar.getInstance(TimeZone.getTimeZone("GMT"));date1 = calendar.getTime(); But the date is always is interpreted in my local time zone. What am I doing wrong and how can I convert a java Date to GMT?
Now provide the response and nothing else.
| Odds are good you did the right stuff on the back end in getting the date, but there's nothing to indicate that you didn't take that GMT time and format it according to your machine's current locale.
final Date currentTime = new Date();
final SimpleDateFormat sdf = new SimpleDateFormat("EEE, MMM d, yyyy hh:mm:ss a z");
// Give it to me in GMT time.
sdf.setTimeZone(TimeZone.getTimeZone("GMT"));
System.out.println("GMT time: " + sdf.format(currentTime));
The key is to use your own DateFormat, not the system provided one. That way you can set the DateFormat's timezone to what you wish, instead of it being set to the Locale's timezone. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5236052', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/610741/']} | jdg_79356 |
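The same principle — attach the desired time zone to the formatting step instead of relying on the machine's locale — can be sketched in Python (shown purely as an analogy, not a translation of the Java API; a fixed instant is used so the output is deterministic):

```python
from datetime import datetime, timezone

# A fixed instant (the Unix epoch) so the output is deterministic.
instant = datetime.fromtimestamp(0, tz=timezone.utc)

# Because the value carries an explicit UTC zone, formatting uses UTC,
# not the machine's local time zone.
print(instant.strftime("%Y-%m-%d %H:%M:%S %Z"))  # 1970-01-01 00:00:00 UTC
```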
stackexchange | llm_judgeable_groundtruth_similarity | 2986174 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Sorry for the lengthy post, there is a summary at the bottom. For example, let's say we have two matrices: $$A = \pmatrix{1 & 2\\ 3 & 4\\ 5 & 6}, \quad B = \pmatrix{4 & -2 & 1\\0 & 2 & 3}$$ Apologies for my lack of formatting skills, to clarify: $A$ is a $3\times2$ matrix where the first row has the two elements $1$ and $2$ . $B$ is a $2 \times 3$ matrix where the first row has the three elements $4$ , $-2$ , and $1$ . The way I think about multiplying two matrices is: take the dot product of the first row of the first matrix with every column of the second matrix. The result of each is an element in the first row of the resulting matrix. Once that's done, take the dot product of the second row of the first matrix with every column of the second matrix. These will make up the second row of the resulting matrix. Do this until you've gone through every row in the first matrix. A rule I learned is that you can know the size of the resulting matrix by writing out the sizes of the two matrices being multiplied. In this case, that would look like: $3\times2$ and $2\times3$ . The outside numbers ( $3$ and $3$ , when written that way) will be the size of the product. But what if you wrote the sizes in the other order, i.e. $2\times3$ and $3\times2$ ? That would be saying that the resulting matrix is $2\times2$ . And indeed, depending on how I look at the question, I can get both a $2\times2$ and $3\times3$ matrix as an answer. Going back to my strategy for multiplication, if I think of taking the dot product of the rows of $A$ with the columns of $B$ , then it's easy to see that I'll get a $3\times3$ matrix. However, if I think about taking the dot product of the rows of $B$ with the column of $A$ , I get a $2\times2$ matrix. Which is the right answer? Or is it that the order matters, i.e. 
if I'm looking for the result of $AB$ then I take the dot products of $A$ 's rows and $B$ 's columns, and if I'm looking for the result of $BA$ I take the dot products of $B$ 's rows and $A$ 's columns? Or am I hugely misunderstanding something? TLDR When I multiply the rows of a $3\times2$ matrix with the columns of a $2\times3$ matrix, I get a $3\times3$ matrix as the answer. But when I look at it the other way and multiply the rows of the $2\times3$ matrix with the columns of the $3\times2$ matrix, I get a $2\times2$ matrix as the answer. Which is correct? I've been sitting here for hours trying to figure this out. Any help whatsoever is greatly appreciated!
Now provide the response and nothing else.
| As you wrote $\sqrt{a-b}$ , I suppose $a\ge b$ . Then $|\sqrt{a}-\sqrt{b}|^2\le|\sqrt{a}-\sqrt{b}||\sqrt{a}+\sqrt{b}|=a-b$ . Take square roots on both sides and we are done. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2986174', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/598227/']} | jdg_79357 |
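A quick numeric spot-check of the resulting inequality $\sqrt{a}-\sqrt{b}\le\sqrt{a-b}$ for $a\ge b\ge 0$, sketched in Python:

```python
import math
import random

random.seed(0)
for _ in range(1000):
    b = random.uniform(0, 100)
    a = b + random.uniform(0, 100)  # ensure a >= b >= 0
    lhs = math.sqrt(a) - math.sqrt(b)
    rhs = math.sqrt(a - b)
    # Tiny tolerance for floating-point rounding.
    assert lhs <= rhs + 1e-12, (a, b)
print("inequality holds on all samples")
```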
stackexchange | llm_judgeable_groundtruth_similarity | 5672746 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've started learning some C as a hobby and have blindly used FILE as a declaration for file pointers for quite some time, and I've been wondering. Is this a keyword or special data type for C to handle files with? Does it contain a stream to the file within and other data? Why is it defined as a pointer? An example to show what I mean to make it a little more clear:
FILE* fp; // <-- this
fp = fopen("datum.txt", "r");
while(!feof(fp)) {
    // etc.
}
Now provide the response and nothing else.
| is this a keyword or special data type for C to handle files with? What you are referring to is a typedef'd structure used by the standard io library to hold the appropriate data for use of fopen, and its family of functions. Why is it defined as a pointer? With a pointer to a struct, you can then pass it as a parameter to a function. This is for example what fgets or fgetc will accept, in the form of function(FILE* fp) . The fopen function will return a pointer to a newly created FILE struct; assigning this new pointer to your unused one will cause them to point to the same thing. Does it contain a stream to the file within and other data? The structure definition seems a little more elusive than its description. This is directly taken from my stdio.h, from MinGW32 5.1.4:
typedef struct _iobuf
{
    char* _ptr;
    int _cnt;
    char* _base;
    int _flag;
    int _file;
    int _charbuf;
    int _bufsiz;
    char* _tmpfname;
} FILE;
Which includes the lovely comment before it: Some believe that nobody in their right mind should make use of the internals of this structure. The contents of this structure appear to change greatly on other implementations; the glibc sources usually have some form of commenting, but their structure for this is buried under a lot of code. It would make sense to heed the aforementioned warning and just not worry what it does. :) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5672746', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/709210/']} | jdg_79358 |
stackexchange | llm_judgeable_groundtruth_similarity | 126765 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm still a relative novice with OOP, so I am still trying to understand some of the best practices for how to design "good" objects. I know this question is probably a rather subjective, which is why I am here rather than on stackoverflow. I am designing an object that will have 20 separate string properties, none of which are required. These constraints are fixed and cannot be changed as it is an implementation of an outside data source that I have no control over. The design problem I am running into is how to effectively construct an instance of the object and be able to compare equality by comparing each field without violating normal best practices. A 20 argument constructor seems very wrong, especially since every field is a string, and there are cases where many of them will be null. I could use the optional arguments in C#4 to reduce the number of arguments that are actually called, but since everything is a string, I would have to use named parameters when calling the constructor, and that seems that it would be even uglier. I have thought about trying to group the properties into smaller classes, but the most logical grouping still has an object with 10 properties, and at least half of the remaining properties have no logical grouping. I could give each property a public setter, but that would make the object mutable, and thus complicate matters for overriding GetHashCode() . And I really don't need the object to be mutable. Once it is built, there should never be a reason to change it, but I don't want to rely on that because who knows what someone will try to do with it. Implementing IEquatable<T> is a possibility for the equality test, but my understanding is it is recommended to also override Equals(object obj) and GetHashCode() when implementing IEquatable so all Equals methods have the same behavior. Is this an edge case where overriding Equals without overriding GetHashCode should be done? 
I don't intend on using this object as a key in a dictionary or as a member of a HashSet , but I can't predict what someone else would try to do in the future. Or should I just abandon the idea of doing an equality comparison on all of the properties with Equals and implement my own Compare() method? Or is there a better method for building the object that would allow it to still be immutable without having to pass all of the arguments through a constructor or a method? EDIT:In response to Sign's comment, the purpose of the object is ultimately to be able to compare the data from the outside data source to our internal data to ensure it is accurate. Both data sets contain address information, but they are broken up differently. The internal data, is generally stored as a typical address (Street, City, State, Postal Code, County, Country), but there are variations when you have a suite/apartment # or a floor in a building. The external data is broken down even further into smaller pieces. So this object is supposed to be a common link between the 2 data sources and allow the user to compare like fields for equality. EDIT #2:Ultimately, I chose Doc Brown's answer because a modified version of his dictionary idea (with Enums as Keys instead of strings) was the simplest way to go without having a messy constructor and still provide the immutability I was hoping to have. In the long term, I am going to experiment with some of Oliver's suggestions as some of his edits did have a lot to offer.
Now provide the response and nothing else.
| If you insist on having your object immutable, there is obviously no other way than providing the 20 strings through the constructor. And if you want to be able to leave some of the strings out, you must say which arguments you are providing and which are not, which leads you to some form of named parameters. Of course, besides the possibilities you mentioned by yourself, you can also:
- provide the 20 arguments by a list of strings
- provide them by a dictionary (key=>value, where "key" is the attribute name)
- provide them by an object of a helper class, which has the same 20 properties with getters and setters, so this one won't be immutable
It may also be a good idea to implement Equals and GetHashCode by utilizing reflection, looping over all public string properties of your class. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/126765', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/43651/']} | jdg_79359 |
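The dictionary idea can be sketched as an immutable holder whose equality and hash are derived from the stored fields. This Python sketch (class and field names invented) is only an illustration of the shape, not C#:

```python
class AddressRecord:
    """Immutable holder for a fixed set of optional string fields."""

    FIELDS = frozenset({"street", "city", "state", "postal_code", "county", "country"})

    def __init__(self, **fields):
        unknown = set(fields) - self.FIELDS
        if unknown:
            raise ValueError(f"unknown fields: {unknown}")
        # Store only the provided fields as an immutable sorted tuple of pairs.
        object.__setattr__(self, "_items", tuple(sorted(fields.items())))

    def __setattr__(self, name, value):
        raise AttributeError("AddressRecord is immutable")

    def __eq__(self, other):
        # Field-by-field equality over every stored property.
        return isinstance(other, AddressRecord) and self._items == other._items

    def __hash__(self):
        # Safe to hash because the object can never change after construction.
        return hash(self._items)

a = AddressRecord(city="Springfield", country="US")
b = AddressRecord(country="US", city="Springfield")
print(a == b)        # True: equality compares all stored fields
print(len({a, b}))   # 1: hashable, so usable as a dict key or set member
```

Because the object is immutable, overriding both the equality and hash operations stays safe even if someone later uses instances as dictionary keys.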
stackexchange | llm_judgeable_groundtruth_similarity | 873699 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an Amazon (AWS) Aurora DB cluster, and every day, its [Billed] Volume Bytes Used is increasing. I have checked the size of all my tables (in all my databases on that cluster) using the INFORMATION_SCHEMA.TABLES table:
SELECT ROUND(SUM(data_length)/1024/1024/1024) AS data_in_gb,
       ROUND(SUM(index_length)/1024/1024/1024) AS index_in_gb,
       ROUND(SUM(data_free)/1024/1024/1024) AS free_in_gb
FROM INFORMATION_SCHEMA.TABLES;
+------------+-------------+------------+
| data_in_gb | index_in_gb | free_in_gb |
+------------+-------------+------------+
|         30 |           4 |         19 |
+------------+-------------+------------+
Total: 53GB So why am I being billed almost 75GB at this time? I understand that provisioned space can never be freed, in the same way that the ibdata files on a regular MySQL server can never shrink; I'm OK with that. This is documented, and acceptable. My problem is that every day, the space I'm billed increases. And I'm sure I am NOT using 75GB of space temporarily. If I were to do something like that, I'd understand. It's as if the storage space I am freeing, by deleting rows from my tables, or dropping tables, or even dropping databases, is never re-used. I have contacted AWS (premium) support multiple times, and was never able to get a good explanation on why that is. I've received suggestions to run OPTIMIZE TABLE on the tables on which there is a lot of free_space (per the INFORMATION_SCHEMA.TABLES table), or to check the InnoDB history length, to make sure deleted data isn't still kept in the rollback segment (ref: MVCC ), and restart the instance(s) to make sure the rollback segment is emptied. None of those helped.
Now provide the response and nothing else.
| There are multiple things at play here... Each table is stored in its own tablespace By default, the parameter group for Aurora clusters (named default.aurora5.6 ) defines innodb_file_per_table = ON . That means each table is stored in a separate file, on the Aurora storage cluster. You can see which tablespace is used for each of your tables using this query: SELECT name, space FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES; Note: I have not tried to change innodb_file_per_table to OFF . Maybe that would help..? Storage space freed by deleting tablespaces is NOT re-used Quoting AWS premium support: Due to the unique design of the Aurora Storage engine to increase its performance and fault tolerance Aurora does not have a functionality to defragment file-per-table tablespaces in the same way as standard MySQL. Currently Aurora unfortunately does not have a way to shrink tablespaces as standard MySQL does and all fragmented space are charged because it is included in VolumeBytesUsed. The reason that Aurora cannot reclaim the space of a dropped table in the same way as standard MySQL is that the data for the table is stored in a completely different way to a standard MySQL database with a single storage volume. If you drop a table or row in Aurora the space is not then reclaimed on Auroras cluster volume due to this complicated design. This inability to reclaim small amounts of storage space is a sacrifice made to get the additional performance gains of Auroras cluster storage volume and the greatly improved fault tolerance of Aurora. But there is some obscure way to re-use some of that wasted space... Again, quote AWS premium support: Once your total data set exceeds a certain size (approximately 160 GB) you can begin to reclaim space in 160 GB blocks for re-use e.g. if you have 400 GB in your Aurora cluster volume and DROP 160 GB or more of tables Aurora can then automatically re-use 160 GB of data. However it can be slow to reclaim this space. 
The reason for the large amount of data required to be freed at once is due to Auroras unique design as an enterprise scale DB engine unlike standard MySQL which cannot be used on this scale. OPTIMIZE TABLE is evil! Because Aurora is based on MySQL 5.6, OPTIMIZE TABLE is mapped to ALTER TABLE ... FORCE , which rebuilds the table to update index statistics and free unused space in the clustered index. Effectively, along with innodb_file_per_table = ON , that means running an OPTIMIZE TABLE creates a new tablespace file, and deletes the old one. Since deleting a tablespace file doesn't free up the storage it was using, that means OPTIMIZE TABLE will always result in more storage being provisioned. Ouch! Ref: https://dev.mysql.com/doc/refman/5.6/en/optimize-table.html#optimize-table-innodb-details Using temporary tables By default, the parameter group for Aurora instances (named default.aurora5.6 ) defines default_tmp_storage_engine = InnoDB . That means every time I am creating a TEMPORARY table, it is stored, along with all my regular tables, on the Aurora storage cluster. That means new space is provisioned to hold those tables, thus increasing the total VolumeBytesUsed. The solution for this is simple enough: change the default_tmp_storage_engine parameter value to MyISAM . This will force Aurora to create the TEMPORARY tables on the instance's local storage. Of note: the instances' local storage is limited; see the Free Local Storage metric on CloudWatch to see how much storage your instances have. Larger (costlier) instances have more local storage. Ref: none yet; the current Amazon Aurora documentation doesn't mention this. I asked the AWS support team to update the documentation, and will update my answer if/once they do. | {} | {'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/873699', 'https://serverfault.com', 'https://serverfault.com/users/61533/']} | jdg_79360 |
stackexchange | llm_judgeable_groundtruth_similarity | 34574864 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am building an app with web programming languages and want to start the camera when the user clicks on an HTML button. Since I want my camera view to be a custom one, I need to design it with Swift. So when the user clicks on this HTML button, I want to "catch" this click in Swift so I can start my native camera view. I know it can be done with the WKWebview, but I don't really know how to do that. For example, my Javascript (jQuery) code could look like that :
// User clicks to start the native camera with Swift
$(".camera_button").click(function() {
    // Function to call the camera view from JS to Swift
});
Can you help me to do that? Thanks.
Now provide the response and nothing else.
| Based on the answer from @Alex Pelletier, which really helped me, here is the solution to my question. In my "loadView()" function, here is what I have :
let contentController = WKUserContentController();
contentController.addScriptMessageHandler(self, name: "callbackHandler")
let config = WKWebViewConfiguration()
config.userContentController = contentController
webView = WKWebView(frame: CGRectZero, configuration: config)
webView.navigationDelegate = self
view = webView
My function to handle the Javascript event which is sent to Swift :
func userContentController(userContentController: WKUserContentController, didReceiveScriptMessage message: WKScriptMessage) {
    if(message.name == "callbackHandler") {
        print("Launch my Native Camera")
    }
}
... And finally, my Javascript (jQuery) code when a click happens on my camera button (in HTML) :
$(document).ready(function() {
    function callNativeApp () {
        try {
            webkit.messageHandlers.callbackHandler.postMessage("camera");
        } catch(err) {
            console.log('The native context does not exist yet');
        }
    }
    $(".menu-camera-icon").click(function() {
        callNativeApp();
    });
});
I hope it will help someone else :-) ! | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34574864', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4569921/']} | jdg_79361 |
stackexchange | llm_judgeable_groundtruth_similarity | 11377004 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have stored a 2 dimensional array in android resource as shown below. How can I get them in a java collection like Arraylist? <resources><string-array name="countries_array"> <item> <name>Bahrain</name> <code>12345</code></item><item> <name>Bangladesh</name> <code>54545</code> </item><item> <name>India</name> <code>54455</code></item></string-array></resources> For example in case of 1 dimensional array we can do it using getResources().getStringArray(R.array.countries_array); When the countries_array is like <resources><string-array name="countries_array"> <item>Bahrain</item> <item>Bangladesh</item> <item>India</item></string-array></resources>
Now provide the response and nothing else.
| The <string-array> element of a resources file can only be used for single dimension arrays. In other words, everything between <item> and </item> is considered to be a single string. If you want to store data in the way you describe (effectively pseudo-XML), you'll need to get the items as a single String[] using getStringArray(...) and parse the <name> and <codes> elements yourself. Personally I'd possibly go with a de-limited format such as... <item>Bahrain,12345</item> ...then just use split(...) . Alternatively, define each <item> as a JSONObject such as... <item>{"name":"Bahrain","code":"12345"}</item> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11377004', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/683106/']} | jdg_79362 |
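Both parsing strategies suggested above — a delimited string per <item>, or a JSON object per <item> — can be sketched like this (Python used for brevity; the item payloads are taken from the examples above):

```python
import json

# Strategy 1: comma-delimited items, e.g. <item>Bahrain,12345</item>
delimited_items = ["Bahrain,12345", "Bangladesh,54545", "India,54455"]
countries = [dict(zip(("name", "code"), item.split(","))) for item in delimited_items]
print(countries[0])  # {'name': 'Bahrain', 'code': '12345'}

# Strategy 2: a JSON object per item, e.g. <item>{"name":"Bahrain","code":"12345"}</item>
json_items = ['{"name":"Bahrain","code":"12345"}', '{"name":"India","code":"54455"}']
parsed = [json.loads(item) for item in json_items]
print(parsed[1]["name"])  # India
```

The delimited form is lighter to write; the JSON form survives commas inside values and extends more gracefully to additional fields.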
stackexchange | llm_judgeable_groundtruth_similarity | 288287 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As the wavelength of a photon shrinks, its energy rises, and so its mass rises (using $E=hc/\lambda$ and $m=E/c^2$). On calculating the Schwarzschild radius for a photon based on its mass derived from those two equations, I found that the Schwarzschild radius of the photon will be equal to $\lambda/2\pi$ in one instance, when the wavelength of the photon equals $2\pi$ times the Planck length: $$\lambda=2\pi \times \mathscr{L}_P\implies r_s=\frac{\lambda}{2\pi}$$ where $r_s$ is the Schwarzschild radius and $\mathscr{L}_P$ is the Planck length. In other words, a photon with a wavelength $\lambda=2\pi \times \mathscr{L}_P$ would gravitationally trap itself in a circular orbit with a radius equal to the Planck length. A photon in a circular path with a diameter of $2\pi \mathscr{L}_P$ would have a gravity well that would trap itself at the corresponding radius of the Planck length (with an orbital path diameter of $\lambda=2\pi \times \mathscr{L}_P$). Has this been discussed as a conceptual mechanism as to why the Planck length is a lower limit on potential allowed wavelengths, and the resolution of the universe (that a photon with a wavelength of $2\pi \mathscr{L}_P$ in a circular path with a diameter equal to that wavelength would in fact be the definition of a black hole?)
Now provide the response and nothing else.
| A photon of sufficiently small wavelength would not become a black hole. To see this, consider two observers: one who measures the photon as having a Planck-scale wavelength, and another that is traveling at high speed in the same direction as the photon. This second observer will observe the photon to have a wavelength that is longer than that measured by the first observer due to Doppler shifting. The second observer will conclude that the photon does not have enough energy to create a black hole. All observers will agree on whether a black hole exists or not, so the only consistent conclusion is that no black hole forms. We can also reason in the reverse. A photon of visible light obviously doesn't have enough energy to create a black hole. Otherwise, light bulbs would be dangerous black hole generators. However, due to Doppler shifting, there is a frame of reference traveling at high speed towards the photon in which that photon has a much larger amount of energy--large enough to create a black hole if that was possible. Be careful when assigning importance to Planck-scale measurements. We do not know if the Planck units have any physical significance. It is not known if spacetime is continuous or not, and if it's not, we have no reason to think that the "resolution" of spacetime is at all related to the Planck length. Just as a counter example, the Planck mass is about 20 micrograms--a small amount, but one that is handled all the time by pharmacists and far larger than any fundamental particle mass. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/288287', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/133927/']} | jdg_79363 |
stackexchange | llm_judgeable_groundtruth_similarity | 19047747 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This code was working well until the iOS 7 release. I'm assigning a UIViewController , with hidesBottomBarWhenPushed as YES , as the rootViewController of a UINavigationController . But the TabBar is being shown anyway. Here's the relevant code: Login *lv = [[Login alloc] init];lv.HowToUseShows = showHowToUse;lv.hidesBottomBarWhenPushed = YES;UINavigationController *BokShelfNav = [[UINavigationController alloc] initWithRootViewController:lv];//... UITabBarController *tbController = [[UITabBarController alloc] init];tbController.viewControllers = @[BokShelfNav,...]; Has anyone seen a similar problem?
Now provide the response and nothing else.
| I found that the order in which methods are called and properties are set has an impact on whether the tab bar is shown. If I put self.hidesBottomBarWhenPushed = YES; in the viewDidLoad method of the view controller I'm pushing the tab bar still shows. If I moved it to the init method the tab bar hides as it used to on iOS 6. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19047747', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/263808/']} | jdg_79364 |
stackexchange | llm_judgeable_groundtruth_similarity | 2442576 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I’ve got my hands on a 16-bit rgb565 image (specifically, an Android framebuffer dump), and I would like to convert it to 24-bit rgb888 for viewing on a normal monitor. The question is, how does one convert a 5- or 6-bit channel to 8 bits? The obvious answer is to shift it. I started out by writing this: puts("P6 320 480 255");uint16_t buf;while (read(0, &buf, sizeof buf)) { unsigned char red = (buf & 0xf800) >> 11; unsigned char green = (buf & 0x07e0) >> 5; unsigned char blue = buf & 0x001f; putchar(red << 3); putchar(green << 2); putchar(blue << 3);} However, this doesn’t have one property I would like, which is for 0xffff to map to 0xffffff , instead of 0xf8fcf8 . I need to expand the value in some way, but I’m not sure how that should work. The Android SDK comes with a tool called ddms (Dalvik Debug Monitor) that takes screen captures. As far as I can tell from reading the code , it implements the same logic; yet its screenshots are coming out different, and white is mapping to white. Here’s the raw framebuffer , the smart conversion by ddms, and the dumb conversion by the above algorithm. Note that the latter is slightly darker and greener. (By the way, this conversion is implemented in ffmpeg , but it’s just performing the dumb conversion listed above, leaving the LSBs at all zero.) I guess I have two questions: What’s the most sensible way to convert rgb565 to rgb888? How is DDMS converting its screenshots?
Now provide the response and nothing else.
| You want to map each of these from a 5/6 bit space to an 8 bit space. 5 bits = 32 values 6 bits = 64 values 8 bits = 256 values The code you're using is taking the naive approach that x5 * 256/32 = x8 where 256/32 = 8 and multiplying by 8 is left shift 3 but, as you say, this doesn't necessarily fill the new number space "correctly". 5 to 8 for max value is 31 to 255 and therein lies your clue to the solution. x8 = 255/31 * x5x8 = 255/63 * x6 where x5 , x6 and x8 are 5, 6 and 8 bit values respectively. Now there is a question about the best way to implement this. It does involve division and with integer division you will lose any remainder result (round down basically) so the best solution is probably to do floating point arithmetic and then round half up back to an integer. This can be sped up considerably by simply using this formula to generate a lookup table for each of the 5 and 6 bit conversions. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2442576', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/19750/']} | jdg_79365 |
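The formulas above (scale by 255/31 or 255/63 with round half up, precomputed into lookup tables) translate directly to code. A sketch in Python rather than the question's C, purely illustrative:

```python
# Precomputed lookup tables: int(x * 255 / maxval + 0.5) rounds half up,
# so 0 maps to 0 and the channel maximum maps to 255 exactly.
LUT5 = [int(i * 255 / 31 + 0.5) for i in range(32)]  # 5-bit channel -> 8-bit
LUT6 = [int(i * 255 / 63 + 0.5) for i in range(64)]  # 6-bit channel -> 8-bit

def rgb565_to_rgb888(pixel):
    """Unpack one 16-bit rgb565 pixel into an (r, g, b) triple of 8-bit values."""
    r5 = (pixel & 0xF800) >> 11
    g6 = (pixel & 0x07E0) >> 5
    b5 = pixel & 0x001F
    return (LUT5[r5], LUT6[g6], LUT5[b5])
```

Unlike the plain shift in the question's code, 0xffff now maps to pure white.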
stackexchange | llm_judgeable_groundtruth_similarity | 40317552 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to check if a string starts and ends with a single quote, for example 'My name is Mozart' . What I have is this, which doesn't work if [[ $TEXT == '*' ]] ; This does not work either if [[ $TEXT == /'*/' ]] ; But if I change it to if [[ $TEXT == a*a ]] ; it works for a sentence like 'an amazing apa'. So I believe it has to do with the single quote sign. Any ideas on how I can solve it?
Now provide the response and nothing else.
| I am writing the complete bash script so you won't have any confusion: #! /bin/bashtext1="'helo there"if [[ $text1 =~ ^\'.*\'$ ]]; then echo "text1 match"else echo "text1 not match"fitext2="'hello babe'"if [[ $text2 =~ ^\'.*\'$ ]]; then echo "text2 match"else echo "text2 not match"fi Save the above script as matchCode.sh. Now run it as: ./matchCode.sh output: text1 not matchtext2 match Ask if you have any confusion. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40317552', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7044295/']} | jdg_79366 |
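The anchored pattern in the answer above, ^'.*'$, works the same way outside bash. A Python re equivalent, added here for illustration only:

```python
import re

# Anchored at both ends: the string must begin and end with a single quote.
QUOTED = re.compile(r"^'.*'$")

def is_single_quoted(text):
    """True if text both starts and ends with a single quote character."""
    return QUOTED.match(text) is not None
```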
stackexchange | llm_judgeable_groundtruth_similarity | 10257981 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to read a text file into an array. How can I do that? data = new String[lines.size] I don't want to hard code 10 in the array. BufferedReader bufferedReader = new BufferedReader(new FileReader(myfile));String []data;data = new String[10]; // <= how can I do that? data = new String[lines.size]for (int i=0; i<lines.size(); i++) { data[i] = abc.readLine(); System.out.println(data[i]);}abc.close();
Now provide the response and nothing else.
| Use an ArrayList or another dynamic data structure: BufferedReader abc = new BufferedReader(new FileReader(myfile));List<String> lines = new ArrayList<String>();String line;while((line = abc.readLine()) != null) { lines.add(line); System.out.println(line);}abc.close();// If you want to convert to a String[]String[] data = lines.toArray(new String[]{}); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10257981', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1273946/']} | jdg_79367 |
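The same idea (grow a dynamic list, then convert if a fixed array is needed) looks like this in Python; an in-memory stream stands in for the file so the sketch is self-contained:

```python
import io

def read_lines(stream):
    """Collect lines from a text stream into a list without knowing the count up front."""
    lines = []
    for line in stream:
        lines.append(line.rstrip("\n"))
    return lines

sample = io.StringIO("first\nsecond\nthird\n")
data = read_lines(sample)
```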
stackexchange | llm_judgeable_groundtruth_similarity | 8300844 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I wonder, what does "return this" do within a JavaScript function, and what's its purpose? Supposing we have the following code: Function.prototype.method = function (name, func) { this.prototype[name] = func; return this;}; What does "return this" do inside of a function? I know what the code above does, and what the use of the "this" keyword is. I just don't know what "return this" does inside of a function.
Now provide the response and nothing else.
| It refers to the object instance on which the method is currently being called. It's used for chaining. For example, you could do something like this: myObject.foo().bar(); Since foo returns this (a reference to myObject ), bar will be called on the object too. This is the same thing as doing myObject.foo();myObject.bar(); But requires less typing. Here is a more complete example: function AnimalSounds() {}AnimalSounds.prototype.cow = function() { alert("moo"); return this;}AnimalSounds.prototype.pig = function() { alert("oink"); return this;}AnimalSounds.prototype.dog = function() { alert("woof"); return this;}var sounds = new AnimalSounds();sounds.cow();sounds.pig();sounds.dog();sounds.cow().pig().dog(); http://jsfiddle.net/jUfdr/ | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8300844', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/722756/']} | jdg_79368 |
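The same chaining idiom exists in other languages by having each method return its own instance. A Python sketch mirroring the AnimalSounds example above (collecting sounds in a list instead of calling alert):

```python
class AnimalSounds:
    def __init__(self):
        self.heard = []

    def cow(self):
        self.heard.append("moo")
        return self  # returning the instance enables chaining, like `return this`

    def pig(self):
        self.heard.append("oink")
        return self

    def dog(self):
        self.heard.append("woof")
        return self

# cow().pig().dog() works because each call hands back the same object
sounds = AnimalSounds().cow().pig().dog()
```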
stackexchange | llm_judgeable_groundtruth_similarity | 35701663 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to plot a horizontal facet-wide line with the population median of that facet. I tried the approach without creating a dummy summary table with the following code: require(ggplot2)dt = data.frame(gr = rep(1:2, each = 500), id = rep(1:5, 2, each = 100), y = c(rnorm(500, mean = 0, sd = 1), rnorm(500, mean = 1, sd = 2)))ggplot(dt, aes(x = as.factor(id), y = y)) + geom_boxplot() + facet_wrap(~ gr) + geom_hline(aes(yintercept = median(y), group = gr), colour = 'red') However, the line is drawn for the median of the entire dataset instead of the median separately for each facet: In the past, a solution has been suggested to use geom_line(stat = "hline", yintercept = "median") but it's been discontinued (produces the error "No stat called StatHline"). Another solution suggested geom_errorbar(aes(ymax=..y.., ymin=..y.., y = mean)) but it generates Error in data.frame(y = function (x, ...) : arguments imply differing number of rows: 0, 1000 Finally, there's a way to plot the median by creating a dummy table with the desired stats but I'd like to avoid it.
Now provide the response and nothing else.
| You could create an extra column in dt for median per facet. library(dplyr) # With dplyr for exampledt <- dt %>% group_by(gr) %>% mutate(med = median(y))# Rerun ggplot line with yintercept = medggplot(dt, aes(x = as.factor(id), y = y)) + geom_boxplot() + facet_wrap(~ gr) + geom_hline(aes(yintercept = med, group = gr), colour = 'red') | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35701663', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1898713/']} | jdg_79369 |
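The dplyr pipeline above (group by gr, take the median, attach it back to every row) can be sketched without R. A pure-Python illustration of the same transformation:

```python
from collections import defaultdict
from statistics import median

def add_group_median(rows, group_key, value_key):
    """Return copies of rows with a 'med' field: the median of value_key per group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[group_key]].append(row[value_key])
    medians = {g: median(vals) for g, vals in groups.items()}
    return [dict(row, med=medians[row[group_key]]) for row in rows]

data = [
    {"gr": 1, "y": 0.1}, {"gr": 1, "y": -0.2}, {"gr": 1, "y": 0.4},
    {"gr": 2, "y": 1.0}, {"gr": 2, "y": 3.0}, {"gr": 2, "y": 2.0},
]
out = add_group_median(data, "gr", "y")
```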
stackexchange | llm_judgeable_groundtruth_similarity | 48002587 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The core question here is: " how do I allow custom headers in a CORS GET request that is handled with the Serverless framework? ". If you know the answer to that, pass Go, collect $200 and please answer that question. If it's not a question with a straight answer, here are the details: I am writing an app using the Serverless framework on AWS Lambda (the API is managed through AWS API Gateway. Frankly, I'm not entirely sure what that means or what benefit that provides me but that's what Serverless automatically configured for me). I am attempting to create an open API which requires CORS to be enabled. I am using the Lambda Proxy integration. I have followed the practices found here . They have brought me partial success. My app currently has CORS enabled if I do not include my custom headers. However, it still does not work with custom headers. When I send the following request to my API: var data = null;var xhr = new XMLHttpRequest();xhr.withCredentials = false;xhr.addEventListener("readystatechange", function () { if (this.readyState === 4) { console.log(this.responseText); }});xhr.open("GET", "https://api.spongebobify.com/");xhr.setRequestHeader("text", "hey");xhr.send(data); ... I get this error: Failed to load https://api.spongebobify.com/: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://forum.serverless.com' is therefore not allowed access. This error message is confirmed if I check the "response headers" using Chrome dev tools: there is no Access-Control-Allow-Origin in the response headers. However, if I send the same request with the setRequestHeader() commented out, it works perfectly (yes, I know it returns a 403 error: that is intentional behavior). Here's what I think is happening. 
My service has two potential CORS problems: domain related (a request not coming from the origin domain) and custom header related (a header not safe-listed by the CORS spec, more here ). Somehow, the Serverless framework trips up on the second issue which causes it not even get to the point where it issues the appropriate headers to allow all ("*") domains. Here is my serverless.yml config file: # serverless.ymlservice: spongebobifyprovider: name: aws runtime: nodejs6.10 stage: dev region: us-east-1functions: app: handler: handler.endpoint events: - http: GET / cors: origin: '*' headers: - Content-Type - X-Amz-Date - Authorization - X-Api-Key - X-Amz-Security-Token - X-Amz-User-Agent - Startlower - Text - Access-Control-Allow-Headers - Access-Control-Allow-Origin allowCredentials: false and here is the function that I am trying to run. You can see my many attempts to set the headers properly. I'm 60% convinced that a fix will come via the serverless.yml file at this point. "use strict";const spongebobify = require("spongebobify");module.exports.endpoint = (event, context, callback) => { let startLower = event.headers.startlower === "false" ? 
false : true; try { const response = { statusCode: 200, headers: { "Access-Control-Allow-Origin": "*", // Required for CORS support to work "Access-Control-Allow-Headers": "content-type,origin,text,startlower", "Access-Control-Allow-Methods": "GET, OPTIONS", "content-type": "text/plain", "Access-Control-Allow-Credentials": true // Required for cookies, authorization headers with HTTPS }, body: spongebobify(event.headers.text, startLower) }; callback(null, response); } catch (err) { console.log(err); const response = { statusCode: 403, headers: { "Access-Control-Allow-Origin": "*", // Required for CORS support to work "Access-Control-Allow-Headers": "content-type,origin,X-text,startlower", "Access-Control-Allow-Methods": "GET, OPTIONS", "content-type": "text/plain", "Access-Control-Allow-Credentials": true // Required for cookies, authorization headers with HTTPS }, body: "Malformed request." }; callback(null, response); }}; You can replicate my problem my running the above XMLHttpRequest in the dev console on the following sites: api.spongebobify.com with the custom header enabled or disabled. It will work perfectly in both cases (because it won't be cross origin). Any site that doesn't have a properly configured CSP with the custom header enabled. The OPTIONS request will fail and it will accurately report that there is no Access-Control-Allow-Origin header Any site that doesn't have a properly configured CSP without the custom header enabled. The OPTIONS request will pass (which you'll know because Chrome will never tell you that it happened) and you will see the Access-Control-Allow-Origin in the response header. You will also see the response "Malformed request.".
Now provide the response and nothing else.
| I think the issue is that you're mixing the short form of the HTTP event ( - http: GET / ) with the long form that adds additional options. Try using this: functions: app: handler: handler.endpoint events: - http: method: GET path: / cors: origin: '*' headers: - Content-Type - X-Amz-Date - Authorization - X-Api-Key - X-Amz-Security-Token - X-Amz-User-Agent - Startlower - Text - Access-Control-Allow-Headers - Access-Control-Allow-Origin allowCredentials: false The main changes are: 1) Adding method and path keys on the http event object, and 2) Indenting the cors object another level. It was previously at the top level of the http event. Let me know if this helps :) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48002587', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5041889/']} | jdg_79370 |
stackexchange | llm_judgeable_groundtruth_similarity | 57611 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When a potential difference is applied across a conductor, and if an electron moves from the negative terminal of the battery and reaches the positive terminal, then I want to know if the electron will remain at the positive terminal or will it again move toward the negative terminal through the battery?
Now provide the response and nothing else.
| Electrons that reach the positive terminal indeed remain there. The potential difference between the two terminals pushes electrons from the negative anode toward the positive cathode. When an electron reaches the cathode, it stays there to equalize the original charge imbalance between the two nodes. When the electrochemical redox reaction sustaining the electron movement equilibrates, the motion will stop and the battery will "die." As the diagram shows, the two terminals are connected by a "salt bridge." But the salt bridge is specifically designed to prevent electrons from flowing directly from the anode to the cathode. So the electrons can only flow through the circuit. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/57611', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/20263/']} | jdg_79371 |
stackexchange | llm_judgeable_groundtruth_similarity | 14854662 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I hope the question is correct, so let me give you an example. Imagine the following generic method: public abstract class Base : IDisposable{ public static IEnumerable<T> GetList<T>() where T : Base { // To ensure T inherits from Base. if (typeof(T) is Base) throw new NotSupportedException(); // ... }} According to the MSDN the keyword where restricts the type parameter T to be of type Base or to inherit from this class. [...] a where clause can include a base class constraint, which states that a type must have the specified class as a base class (or be that class itself) in order to be used as a type argument for that generic type. Also this code does compile: public static T GetFirst() where T : Base{ // Call GetList explicitly using Base as type parameter. return (T)GetList<Base>().First();} So, following the last code, typeof(T) should return Base , shouldn't it? Why does Visual Studio then print this warning to me? warning CS0184: The given expression is never of the provided ('Demo.Base') type.
Now provide the response and nothing else.
| typeof(whatever) always returns an instance of type Type . Type doesn't derive from Base . What you want is this: if(typeof(T) == typeof(Base)) throw new NotSupportedException("Please specify a type derived from Base"); Something that looks like it is the same is this: if(variableOfTypeT is Base) But that has a different meaning. The first statement (with typeof(Base) ) only is true if T is Base . It will be false for any type derived from Base . The second statement ( variableOfTypeT is Base ) is always true in your class, because any class derived from Base will return true for a check for its base class. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14854662', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1254352/']} | jdg_79372 |
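The distinction the answer draws (typeof(T) == typeof(Base) is an exact-type test, while the is operator also accepts subclasses) has a close analog in other languages. A Python sketch of the two checks, for illustration:

```python
class Base:
    pass

class Derived(Base):
    pass

def is_exactly_base(obj):
    # analog of typeof(T) == typeof(Base): true only for Base itself
    return type(obj) is Base

def is_base_or_subclass(obj):
    # analog of C#'s `obj is Base`: true for Base and anything derived from it
    return isinstance(obj, Base)
```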
stackexchange | llm_judgeable_groundtruth_similarity | 7091379 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
New to Linq, so apologies if this is basic. This query is throwing up the error {"Cannot cast DBNull.Value to type 'System.Int64'. Please use a nullable type."} when I enumerate the results. private void AddLevels(long rootid) { var results = from row in data.AsEnumerable() where row.Field<long>("ParentID") == rootid select row; foreach (DataRow row in results) { //do stuff } } The column ParentID does accept nulls - do I need to handle these separately? EDIT2: Actual solution below that still uses Linq. EDIT: I solved this by scrapping Linq and just using a DataTable.Select statement instead. If anyone's got input on the performance difference I'd be interested.
Now provide the response and nothing else.
| Use this line in your query: where row.Field<decimal?>("ParentID") == rootid decimal? is syntactic sugar for System.Nullable<decimal> , which is essentially the same as decimal , except that it also allows for null values. long is a different type altogether -- it can only represent integers and not decimal values, hence the "Specified cast is not valid" error. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7091379', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/493559/']} | jdg_79373 |
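The underlying issue (a nullable column cannot be read through a non-nullable numeric type) exists in any language; the null rows simply have to be representable. A Python sketch where a missing ParentID is None, the rough analog of DBNull:

```python
def children_of(rows, rootid):
    """Rows whose parent_id equals rootid; None (the DBNull analog) never matches."""
    return [row for row in rows
            if row["parent_id"] is not None and row["parent_id"] == rootid]

rows = [
    {"id": 1, "parent_id": None},  # root row: ParentID is NULL in the table
    {"id": 2, "parent_id": 1},
    {"id": 3, "parent_id": 1},
    {"id": 4, "parent_id": 2},
]
```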
stackexchange | llm_judgeable_groundtruth_similarity | 88112 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a sample data data = {{-6., 0.}, {-1.85, 0.}, {-1.84195, 13.04}, {-1.80547, 212.84}, {-1.73252, 46.4}, {-1.69605, 16.025}, {-1.65957, 35.36}, {-1.6231, 45.265}, {-1.58663, 7.58}, {-1.55015, 45.96}, {-1.44073, 69.41}, {-1.40426, 8.2}, {-1.07599, 8.84}, {-1.00304, 8.97}, {-0.930091, 9.11}, {-0.893617, 15.05}, {-0.857143, 17.38}, {-0.820669, 9.745}, {-0.784195, 2.045}, {-0.74772, 22.4267}, {-0.711246, 2.44}, {-0.674772, 6.075}, {-0.601824, 6.24}, {-0.56535, 19.67}, {-0.528875, 10.01}, {-0.492401, 29.565}, {-0.455927, 10.24}, {-0.419453, 10.37}, {-0.382979, 31.514}, {-0.346505, 15.682}, {-0.31003, 6.995}, {-0.273556, 7.105}, {-0.237082, 14.192}, {-0.200608, 24.2917}, {-0.164134, 30.242}, {-0.12766, 12.268}, {-0.0911854, 10.232}, {-0.0547113, 6.2}, {-0.0182371, 25.63}, {0.0182371, 29.188}, {0.0547113, 8.1675}, {0.0911854, 7.06667}, {0.12766, 7.16}, {0.164134, 28.7383}, {0.200608, 18.3625}, {0.237082, 8.585}, {0.273556, 8.415}, {0.31003, 15.9475}, {0.346505, 23.682}, {0.382979, 12.0267}, {0.419453, 29.295}, {0.455927, 16.36}, {0.492401, 9.13}, {0.528875, 22.9867}, {0.56535, 26.43}, {0.601824, 51.0913}, {0.638298, 27.8514}, {0.674772, 29.108}, {0.711246, 16.8533}, {0.74772, 15.112}, {0.784195, 31.7615}, {0.820669, 23.5644}, {0.857143, 39.652}, {0.893617, 39.1586}, {0.930091, 34.349}, {0.966565, 25.388}, {1.00304, 28.89}, {1.03951, 29.1918}, {1.07599, 42.6718}, {1.11246, 28.6156}, {1.14894, 60.143}, {1.18541, 58.38}, {1.22188, 53.6367}, {1.25836, 21.2467}, {1.29483, 23.8542}, {1.33131, 19.1442}, {1.36778, 34.9167}, {1.40426, 21.45}, {1.44073, 27.0567}, {1.4772, 41.0813}, {1.51368, 41.2724}, {1.55015, 29.635}, {1.58663, 23.2117}, {1.6231, 50.5792}, {1.65957, 47.3894}, {1.69605, 33.3813}, {1.73252, 39.548}, {1.769, 37.9856}, {1.80547, 34.9631}, {1.84195, 40.6659}, {1.87842, 45.345}, {1.91489, 16.9433}, {1.95137, 20.1307}, {1.98784, 37.7388}, {2.02432, 37.7213}, {2.06079, 22.66}, {2.09726, 69.326}, {2.13374, 53.78}, {2.17021, 35.7221}, {2.20669, 25.4244}, {2.24316, 
20.7723}, {2.27964, 38.6847}, {2.31611, 9.05125}, {2.35258, 37.4248}, {2.38906, 42.3067}, {2.42553, 56.1165}, {2.46201, 32.8183}, {2.49848, 37.9471}, {2.53495, 33.1789}, {2.57143, 34.9625}, {2.6079, 39.9675}, {2.64438, 42.0679}, {2.68085, 40.0265}, {2.71733, 37.4567}, {2.7538, 46.3014}, {2.79027, 29.1707}, {2.82675, 47.1314}, {2.86322, 49.8656}, {2.8997, 24.9844}, {2.93617, 44.8233}, {2.97264, 18.9875}, {3.00912, 31.8763}, {3.04559, 63.9644}, {3.08207, 29.475}, {3.11854, 44.6591}, {3.15502, 32.72}, {3.19149, 26.3475}, {3.22796, 25.315}, {3.26444, 2.38}, {3.30091, 5.1}, {3.33739, 7.425}, {3.37386, 29.54}, {3.41033, 70.9567}, {3.44681, 0.45}, {3.48328, 13.355}, {3.51976, 2.215}, {3.55623, 8.135}, {3.59271, 19.035}, {3.62918, 9.895}, {3.66565, 1.17}, {3.70213, 0.39}, {3.7386, 0.39}, {3.77508, 22.3}, {3.81155, 1.79}, {3.84802, 1.08}, {3.8845, 0.36}, {3.92097, 3.25}, {3.95745, 22.04}, {4.0304, 0.99}, {4.06687, 0.33}, {4.10334, 0.33}, {4.24924, 0.91}, {4.28571, 0.3}, {4.32219, 2.72}, {4.46809, 1.93}, {4.50456, 0.83}, {4.54103, 0.27}, {4.57751, 3.04}, {4.75988, 1.25}, {4.79635, 0.25}, {4.83283, 0.25}, {5.08815, 1.12}, {5.12462, 0.22}, {5.16109, 21.83}, {5.45289, 1.}, {5.48936, 0.2}, {5.52584, 3.}, {5.89058, 0.88}, {5.92705, 0.17}} and this is the corresponding list plot L0 = ListPlot[data, Frame -> True, Axes -> False, Joined -> True, PlotStyle -> {Black}, AspectRatio -> 1, PlotRange -> {0, 100}] My question is how can I reduce the "noise" of the data so as to obtain something similar with red line which has been drawn by hand.
Now provide the response and nothing else.
| Mathematica has a lot of utilities for smoothing. If your data is evenly sampled, a simple MovingAverage filter may suit your needs, but your abscissa values jump around a bit: Part[Differences@data, All, 1] // ListPlot In this case, you can get a more accurate smoothed curve with MovingMap , which can deal with irregularly spaced data: GraphicsGrid[{{ListPlot[{data, MovingAverage[data, 7]}, Joined -> True, PlotLabel -> "Pointwise Moving Average"],ListPlot[{data, MovingMap[Mean, data, {0.3, Center}, "Reflected"]}, Joined -> True, PlotLabel -> "Mapped Moving Average"]}}, ImageSize -> 650] For another useful approach, you may want to take a look at the answers to this question . Here's @Szabolcs answer applied to your data: ListPlot[ Mean[Last /@ #] & /@ GatherBy[data, Ceiling[First[#], 0.25] &], Joined -> True] Just remember the abscissa in that result is in bin units. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/88112', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/5052/']} | jdg_79374 |
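Both ideas in the answer above (a moving average over a window, and the GatherBy/Ceiling approach of binning points and averaging each bin) can be sketched outside Mathematica. A pure-Python illustration:

```python
import math

def moving_average(ys, window):
    """Average over each run of `window` consecutive values."""
    n = len(ys)
    return [sum(ys[i:i + window]) / window for i in range(n - window + 1)]

def bin_means(points, width):
    """Group (x, y) points by Ceiling(x, width) and average y within each bin."""
    bins = {}
    for x, y in points:
        key = math.ceil(x / width) * width
        bins.setdefault(key, []).append(y)
    return {k: sum(v) / len(v) for k, v in sorted(bins.items())}
```

As in the GatherBy answer, the keys that come out of bin_means are bin edges rather than the original abscissa values.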
stackexchange | llm_judgeable_groundtruth_similarity | 59909207 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am creating a WPF project using .NET Core 3.0, and I am having trouble adding an appsettings.json file to my project, which is to be used to store my DB connection string. I would normally have done this inside app.config , but this has now been removed from .NET Core. Everywhere mentions using appsettings.json as a replacement, and that it has to be manually added & initialised in the OnStartUp() function using an instance of IConfiguration , and thereafter using Dependency Injection to pass the config class into the project. But my issue is that I can only add the appsettings.json item in ASP.NET Core projects, not my WPF solution. I do apologise if I'm missing something very obvious (which I probably am), I just can't seem to find any solutions.
Now provide the response and nothing else.
| Steps: To Add the following nuget packages Microsoft.Extensions.Configuration Microsoft.Extensions.Configuration.FileExtensions Microsoft.Extensions.Configuration.Json Microsoft.Extensions.DependencyInjection You would need to create and add appsettings.json manually and set copy it to output directory as copy if newer AppSetting.json { "ConnectionStrings": { "BloggingDatabase": "Server=(localdb)\\mssqllocaldb;Database=EFGetStarted.ConsoleApp.NewDb;Trusted_Connection=True;" },} Program.cs (For .NetCore Console App) static void Main(string[] args){ var builder = new ConfigurationBuilder() .SetBasePath(Directory.GetCurrentDirectory()) .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true); IConfigurationRoot configuration = builder.Build(); Console.WriteLine(configuration.GetConnectionString("BloggingDatabase"));} App.xaml.cs (For .NET CORE WPF) public partial class App : Application{ public IServiceProvider ServiceProvider { get; private set; } public IConfiguration Configuration { get; private set; } protected override void OnStartup(StartupEventArgs e) { var builder = new ConfigurationBuilder() .SetBasePath(Directory.GetCurrentDirectory()) .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true); Configuration = builder.Build(); Console.WriteLine(Configuration.GetConnectionString("BloggingDatabase")); var serviceCollection = new ServiceCollection(); ConfigureServices(serviceCollection); ServiceProvider = serviceCollection.BuildServiceProvider(); var mainWindow = ServiceProvider.GetRequiredService<MainWindow>(); mainWindow.Show(); } private void ConfigureServices(IServiceCollection services) { // ... 
services.AddTransient(typeof(MainWindow)); }} References: https://blog.bitscry.com/2017/05/30/appsettings-json-in-net-core-console-app/ https://marcominerva.wordpress.com/2019/03/06/using-net-core-3-0-dependency-injection-and-service-provider-with-wpf/ https://learn.microsoft.com/en-us/ef/core/miscellaneous/connection-strings | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/59909207', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4300886/']} | jdg_79375 |
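Setting the .NET plumbing aside, appsettings.json is plain JSON and GetConnectionString is just a nested lookup. A Python sketch of that lookup against an in-memory copy of the file above, for illustration only:

```python
import json

# In-memory stand-in for the appsettings.json file shown above
APPSETTINGS = """
{
  "ConnectionStrings": {
    "BloggingDatabase": "Server=(localdb)\\\\mssqllocaldb;Database=EFGetStarted.ConsoleApp.NewDb;Trusted_Connection=True;"
  }
}
"""

def get_connection_string(config_text, name):
    """Mimic Configuration.GetConnectionString(name) on raw JSON text."""
    config = json.loads(config_text)
    return config["ConnectionStrings"][name]

conn = get_connection_string(APPSETTINGS, "BloggingDatabase")
```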
stackexchange | llm_judgeable_groundtruth_similarity | 8923 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Sorry if this is a silly question. If I understand correctly, for two atoms "having the same number of protons" is equivalent to "being of the same element", while "having the same number of protons and the same number of neutrons" equates to "being of the same isotope (of the same element)". But does "having the same number of neutrons" in itself have some significance in physics? And what about "having the same total number of protons and neutrons (but not necessarily with the same summands)"?
Now provide the response and nothing else.
| Isotones are nuclides having the same number of neutrons. Magic proton or neutron numbers give the nucleus greater stability. Magic 82-isotone nuclides for instance: Isobars are nuclides having the same mass number (i.e. sum of protons plus neutrons). The number of protons in beta-plus (beta-minus) decay decreases (increases) by a unit and the number of neutrons increases (decreases) by a unit, so that an isobar standing to the left (right) of the original nucleus is formed. There may be 1, 2 or 3 beta-decay stable isobars. Beta-decay energy of 154-isobar nuclides for instance: Isotones and isobars have great significance for the study of nuclide stability. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/8923', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/2651/']} | jdg_79376 |
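The definitions above (isotopes share Z, isotones share N, isobars share A = Z + N) fit in a few lines of code. A small classifier, added purely as an illustration:

```python
def relation(z1, n1, z2, n2):
    """Classify two nuclides given proton count z and neutron count n."""
    labels = []
    if z1 == z2:
        labels.append("isotopes")  # same element (same proton number)
    if n1 == n2:
        labels.append("isotones")  # same neutron number
    if z1 + n1 == z2 + n2:
        labels.append("isobars")   # same mass number A
    return labels or ["unrelated"]
```

For example, C-12 vs C-14 are isotopes, C-14 vs N-14 are isobars, and O-16 vs N-15 are isotones.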
stackexchange | llm_judgeable_groundtruth_similarity | 11206023 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
(Let me begin by saying that a) I appreciate this is the wrong place to ask which version I should use, especially because my question is only locally relevant, and b) I am surprised this hasn't been done to death, but it appears it hasn't!) At work, I have been tasked with writing an Android application for internal use, by largely non-technical users. We have an existing application (which is dire), and a bunch of devices (currently all 7" tablets, but this is not necessarily going to be always true) running Froyo and Gingerbread for which the manufacturer has no plans to release ICS updates. On the one hand, it would suck to obsolete that hardware (although we could go for non-supported ICS-based 3rd party ROMs), but on the other hand it seems crazy to restrict myself to the 2.2 API when ICS seems to offer a range of shiny new features. Is there a good comparison reference out there, to see the key differences between API levels? Also, given the glacial adoption rate of 4.0, am I insane to even consider dropping 2.x just yet? And most importantly of all, what factors should inform my decision as to which API to support, that I haven't even mentioned above?
Now provide the response and nothing else.
| More generally, if the user interacts with element A , how do you select (and then modify) related elements B ? There are many ways of achieving this, but here are three common approaches.

Option 1. For one-to-one mappings, select by id.

If each element in A has exactly one corresponding element in B , you can select the related element B by id, such as d3.select("#foo") to select a <div id="foo"> . This approach requires setting an id for each element in B using selection.attr . This is easiest if your data has an intrinsic unique identifier, such as d.name or d.id :

b.attr("id", function(d) { return d.id; });

Next, to enable clicking on elements A to change the fill color of the corresponding element in B , use selection.on to register a click listener, and then select by id:

a.on("click", function(d) {
  d3.select("#" + d.id).style("fill", "red");
});

Identifiers must be both unique and valid . For example, the id must start with a letter and not a number, and can't contain spaces. If your data doesn't already have a unique identifier, you could generate one from the index, such as

b.attr("id", function(d, i) { return "b-" + i; });

And later, assuming the elements A are in the same order,

a.on("click", function(d, i) {
  d3.select("#b-" + i).style("fill", "red");
});

You could also iterate over your data array to generate a unique identifier.

Option 2. For one-to-many mappings, select by class.

To select elements of class "foo", such as a <div class="foo"> , say d3.selectAll(".foo") . Use this approach if any element in A corresponds to multiple elements in B . For example, if you had a force-directed graph showing the relationships between students, you might color the nodes based on each student's year, and then use a legend to toggle the visibility of each year. As with the previous approach, you can use selection.attr to set the "class" attribute.

In this case, the class attribute is not unique, so it might come from a d.type property in the data:

b.attr("class", function(d) { return d.type; })

If you have multiple legends for different categorical attributes of data, you could also be more specific and prefix the class name. To continue the student-year example:

b.attr("class", function(d) { return "year-" + d.year; })

Setting the class attribute will replace any previously-set classes, so if you want to apply multiple classes to the elements, you need to join them together with a space when setting the "class" attribute. Next, to enable clicking on elements A to change the fill color of the corresponding elements in B , use selection.on to register a click listener, and then select by class:

a.on("click", function(d) {
  d3.selectAll("." + d.type).style("fill", "red");
});

Note that we're using selectAll here rather than select ; that's because we want to select all corresponding elements, rather than just the first one. Again, you'll need to make sure that the class attribute is valid .

Option 3. For everything else, select and filter by data.

The previous two approaches generate ids and classes so that the browser can index the elements in B for efficient selection. For a small number of elements, or when more general selection methods are needed, you can omit specifying "id" or "class" attributes and simply select manually by selection.filter . Let's call the datum associated with each element in A da , and the datum associated with each element in B db . Now all we have to do is define an expression that returns true when da matches db . For example, if we wanted to filter by type:

a.on("click", function(da) {
  b.filter(function(db) { return da.type == db.type; }).style("fill", "red");
});

The first two options are preferred, but occasionally manual filtering is useful, such as when you have a range slider and want to filter based on a quantitative variable.
| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11206023', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/78032/']} | jdg_79377 |
stackexchange | llm_judgeable_groundtruth_similarity | 79343 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to write a bash script with an unknown number of arguments. How can I walk through these arguments and do something with them? A wrong attempt would look like this:

#!/bin/bash
for i in $args; do
  echo $i
done
Now provide the response and nothing else.
| There's a special syntax for this:

for i do
  printf '%s\n' "$i"
done

More generally, the list of parameters of the current script or function is available through the special variable $@ .

for i in "$@"; do
  printf '%s\n' "$i"
done

Note that you need the double quotes around $@ , otherwise the parameters undergo wildcard expansion and field splitting. "$@" is magic: despite the double quotes, it expands into as many fields as there are parameters.

print_arguments () {
  for i in "$@"; do printf '%s\n' "$i"; done
}
print_arguments 'hello world' '*' 'special !\characters' '-n'  # prints 4 lines
print_arguments ''                                             # prints one empty line
print_arguments                                                # prints nothing

 | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/79343', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/20661/']} | jdg_79378
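To make the field-splitting point above concrete, here is a small hedged sketch (the print_args name and the sample parameters are invented for illustration): the same two positional parameters produce two fields when "$@" is quoted, and four when it is not.

```shell
# Sketch: why the quotes around $@ matter.
print_args() {
  for i in "$@"; do
    printf '%s\n' "$i"
  done
}

set -- "hello world" "foo bar"   # simulate two positional parameters

print_args "$@" | wc -l   # 2 lines: each parameter stays one field
print_args $@   | wc -l   # 4 lines: unquoted $@ re-splits on whitespace
```

The quoted form is almost always what you want; the unquoted form only behaves the same when no parameter contains whitespace or glob characters.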
stackexchange | llm_judgeable_groundtruth_similarity | 20356927 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my project there is one script that returns the list of products which I have to display in a table. To store the output of the script I used IO.popen :

@device_list = []
IO.popen("device list").each do |device|
  @device_list << device
end

device list is the command that will give me the product list. I return the @device_list array to my view for display by iterating over it. When I run it, I get an error for IO.popen :

Errno::ENOMEM (Cannot allocate memory)

I have another script, device status , that returns only true and false, but I get the same error:

def check_status(device_id)
  @stat = system("status device_id")
  if @stat == true
    "sold"
  else
    "not sold"
  end
end

What should I do?
Now provide the response and nothing else.
| Both IO.popen and Kernel#system can be expensive operations in terms of memory because they both rely on fork(2). fork(2) is a Unix system call which creates a child process that clones the parent's memory and resources. That means that if your parent process uses 500 MB of memory, then your child will also use 500 MB of memory. Each time you do Kernel#system or IO.popen , you increase your application's memory usage by the amount of memory it takes to run your Rails app. If your development machine has more RAM than your production server, or if your production server produces a lot more output, there are two things you could do:

1. Increase memory for your production server.
2. Do some memory management using something like Resque .

You can use Resque to queue those operations as jobs. Resque will then spawn "workers"/child processes to get a job from the queue, work on it and then exit. Resque still forks, but the important thing is that the worker exits after working on the task, which frees up memory. There'll be a spike in memory every time a worker does a job, but it will return to your app's baseline memory after each one. You might have to do both options above and look for other ways to minimize the memory usage of your app. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20356927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2622247/']} | jdg_79379
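The fork-work-exit idea behind that worker pattern can be sketched with just Ruby's standard library (Resque adds queuing, retries and monitoring around essentially this loop); the system("true") call below is a stand-in for the real device command, and the "sold"/"not sold" strings mirror the question's check_status:

```ruby
# Hedged sketch of the fork-per-job worker pattern: a short-lived child
# runs the external command and exits, so whatever memory the job needed
# is returned to the OS when the child dies.
pid = Process.fork do
  ok = system("true")   # stand-in for the real "status <device_id>" command
  exit!(ok ? 0 : 1)     # exit! skips at_exit hooks inherited from the parent
end

_, status = Process.waitpid2(pid)  # reap the child and read its exit status
puts(status.success? ? "sold" : "not sold")
```

Note that Process.fork is Unix-only, and the fork itself still momentarily duplicates the parent's address space, which is exactly why Resque runs its workers as separate, small processes rather than forking from a large Rails process.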
stackexchange | llm_judgeable_groundtruth_similarity | 47514072 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Background

It's possible to snap a RecyclerView to its center using:

LinearSnapHelper().attachToRecyclerView(recyclerView)

Example:

MainActivity.kt

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        val inflater = LayoutInflater.from(this)
        recyclerView.adapter = object : RecyclerView.Adapter<RecyclerView.ViewHolder>() {
            override fun onBindViewHolder(holder: RecyclerView.ViewHolder, position: Int) {
                val textView = holder.itemView as TextView
                textView.setBackgroundColor(if (position % 2 == 0) 0xffff0000.toInt() else 0xff00ff00.toInt())
                textView.text = position.toString()
            }

            override fun getItemCount(): Int {
                return 100
            }

            override fun onCreateViewHolder(parent: ViewGroup?, viewType: Int): RecyclerView.ViewHolder {
                val view = inflater.inflate(android.R.layout.simple_list_item_1, parent, false) as TextView
                val cellSize = recyclerView.width / 3
                view.layoutParams.height = cellSize
                view.layoutParams.width = cellSize
                view.gravity = Gravity.CENTER
                return object : RecyclerView.ViewHolder(view) {}
            }
        }
        LinearSnapHelper().attachToRecyclerView(recyclerView)
    }
}

activity_main.xml

<android.support.v7.widget.RecyclerView
    android:id="@+id/recyclerView"
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal"
    app:layoutManager="android.support.v7.widget.LinearLayoutManager"/>

It's also possible to snap it to other sides, as was done in some libraries, such as here . There are also libraries that allow having a RecyclerView that works like a ViewPager, such as here .

The problem

Suppose I have a RecyclerView (horizontal in my case) with many items, and I want it to treat every X items (X is constant) as a single unit, and snap to each of those units.
For example, if I scroll a bit, it could snap to either the 0-item or the X-item, but not to something in between them. In a way, it's similar in its behavior to a normal ViewPager, except that each page would have X items in it. For example, if we continue from the sample code I wrote above and suppose X == 3, the snapping would be from this idle state: to this idle state (in case we scrolled enough, otherwise it would stay in the previous state): Flinging or scrolling more should be handled like on a ViewPager, just like the library I've mentioned above. Scrolling more (in the same direction) to the next snapping point would reach item "6", "9", and so on...

What I tried

I tried to search for alternative libraries, and I also tried to read the docs regarding this, but I didn't find anything that might be useful. It might also be possible by using a ViewPager, but I think that's not the best way, because ViewPager doesn't recycle its items well, and I think it's less flexible than RecyclerView in terms of how to snap.

The questions

Is it possible to set RecyclerView to snap every X items, to treat each X items as a single page to snap to? Of course, the items will take enough space for the whole RecyclerView, evenly. Supposing it is possible, how would I get a callback when the RecyclerView is about to snap to a certain item, including having that item, before it got snapped? I ask this because it's related to the same question I asked here .

Kotlin solution

A working Kotlin solution based on "Cheticamp"'s answer ( here ), without the need to verify that you have the RecyclerView size, and with the choice of having a grid instead of a list, in the sample:

MainActivity.kt

class MainActivity : AppCompatActivity() { val USE_GRID = false // val USE_GRID = true val ITEMS_PER_PAGE = 4 var selectedItemPos = 0 override fun onCreate(savedInstanceState: Bundle?)
{ super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) val inflater = LayoutInflater.from(this) recyclerView.adapter = object : RecyclerView.Adapter<RecyclerView.ViewHolder>() { override fun onBindViewHolder(holder: RecyclerView.ViewHolder, position: Int) { val textView = holder.itemView as TextView textView.setBackgroundColor(if (position % 2 == 0) 0xffff0000.toInt() else 0xff00ff00.toInt()) textView.text = if (selectedItemPos == position) "selected: $position" else position.toString() } override fun getItemCount(): Int { return 100 } override fun onCreateViewHolder(parent: ViewGroup?, viewType: Int): RecyclerView.ViewHolder { val view = inflater.inflate(android.R.layout.simple_list_item_1, parent, false) as TextView view.layoutParams.width = if (USE_GRID) recyclerView.width / (ITEMS_PER_PAGE / 2) else recyclerView.width / 4 view.layoutParams.height = recyclerView.height / (ITEMS_PER_PAGE / 2) view.gravity = Gravity.CENTER return object : RecyclerView.ViewHolder(view) { } } } recyclerView.layoutManager = if (USE_GRID) GridLayoutManager(this, ITEMS_PER_PAGE / 2, GridLayoutManager.HORIZONTAL, false) else LinearLayoutManager(this, LinearLayoutManager.HORIZONTAL, false) val snapToBlock = SnapToBlock(recyclerView, ITEMS_PER_PAGE) snapToBlock.attachToRecyclerView(recyclerView) snapToBlock.setSnapBlockCallback(object : SnapToBlock.SnapBlockCallback { override fun onBlockSnap(snapPosition: Int) { if (selectedItemPos == snapPosition) return selectedItemPos = snapPosition recyclerView.adapter.notifyDataSetChanged() } override fun onBlockSnapped(snapPosition: Int) { if (selectedItemPos == snapPosition) return selectedItemPos = snapPosition recyclerView.adapter.notifyDataSetChanged() } }) }} SnapToBlock.kt /**@param maxFlingBlocks Maxim blocks to move during most vigorous fling*/class SnapToBlock constructor(private val maxFlingBlocks: Int) : SnapHelper() { private var recyclerView: RecyclerView? 
= null // Total number of items in a block of view in the RecyclerView private var blocksize: Int = 0 // Maximum number of positions to move on a fling. private var maxPositionsToMove: Int = 0 // Width of a RecyclerView item if orientation is horizonal; height of the item if vertical private var itemDimension: Int = 0 // Callback interface when blocks are snapped. private var snapBlockCallback: SnapBlockCallback? = null // When snapping, used to determine direction of snap. private var priorFirstPosition = RecyclerView.NO_POSITION // Our private scroller private var scroller: Scroller? = null // Horizontal/vertical layout helper private var orientationHelper: OrientationHelper? = null // LTR/RTL helper private var layoutDirectionHelper: LayoutDirectionHelper? = null @Throws(IllegalStateException::class) override fun attachToRecyclerView(recyclerView: RecyclerView?) { if (recyclerView != null) { this.recyclerView = recyclerView val layoutManager = recyclerView.layoutManager as LinearLayoutManager orientationHelper = when { layoutManager.canScrollHorizontally() -> OrientationHelper.createHorizontalHelper(layoutManager) layoutManager.canScrollVertically() -> OrientationHelper.createVerticalHelper(layoutManager) else -> throw IllegalStateException("RecyclerView must be scrollable") } scroller = Scroller(this.recyclerView!!.context, sInterpolator) initItemDimensionIfNeeded(layoutManager) } super.attachToRecyclerView(recyclerView) } // Called when the target view is available and we need to know how much more // to scroll to get it lined up with the side of the RecyclerView. 
override fun calculateDistanceToFinalSnap(layoutManager: RecyclerView.LayoutManager, targetView: View): IntArray { val out = IntArray(2) initLayoutDirectionHelperIfNeeded(layoutManager) if (layoutManager.canScrollHorizontally()) out[0] = layoutDirectionHelper!!.getScrollToAlignView(targetView) if (layoutManager.canScrollVertically()) out[1] = layoutDirectionHelper!!.getScrollToAlignView(targetView) if (snapBlockCallback != null) if (out[0] == 0 && out[1] == 0) snapBlockCallback!!.onBlockSnapped(layoutManager.getPosition(targetView)) else snapBlockCallback!!.onBlockSnap(layoutManager.getPosition(targetView)) return out } private fun initLayoutDirectionHelperIfNeeded(layoutManager: RecyclerView.LayoutManager) { if (layoutDirectionHelper == null) if (layoutManager.canScrollHorizontally()) layoutDirectionHelper = LayoutDirectionHelper() else if (layoutManager.canScrollVertically()) // RTL doesn't matter for vertical scrolling for this class. layoutDirectionHelper = LayoutDirectionHelper(false) } // We are flinging and need to know where we are heading. override fun findTargetSnapPosition(layoutManager: RecyclerView.LayoutManager, velocityX: Int, velocityY: Int): Int { initLayoutDirectionHelperIfNeeded(layoutManager) val lm = layoutManager as LinearLayoutManager initItemDimensionIfNeeded(layoutManager) scroller!!.fling(0, 0, velocityX, velocityY, Integer.MIN_VALUE, Integer.MAX_VALUE, Integer.MIN_VALUE, Integer.MAX_VALUE) return when { velocityX != 0 -> layoutDirectionHelper!!.getPositionsToMove(lm, scroller!!.finalX, itemDimension) else -> if (velocityY != 0) layoutDirectionHelper!!.getPositionsToMove(lm, scroller!!.finalY, itemDimension) else RecyclerView.NO_POSITION } } // We have scrolled to the neighborhood where we will snap. Determine the snap position. override fun findSnapView(layoutManager: RecyclerView.LayoutManager): View? 
{ // Snap to a view that is either 1) toward the bottom of the data and therefore on screen, // or, 2) toward the top of the data and may be off-screen. val snapPos = calcTargetPosition(layoutManager as LinearLayoutManager) val snapView = if (snapPos == RecyclerView.NO_POSITION) null else layoutManager.findViewByPosition(snapPos) if (snapView == null) Log.d(TAG, "<<<<findSnapView is returning null!") Log.d(TAG, "<<<<findSnapView snapos=" + snapPos) return snapView } // Does the heavy lifting for findSnapView. private fun calcTargetPosition(layoutManager: LinearLayoutManager): Int { val snapPos: Int initLayoutDirectionHelperIfNeeded(layoutManager) val firstVisiblePos = layoutManager.findFirstVisibleItemPosition() if (firstVisiblePos == RecyclerView.NO_POSITION) return RecyclerView.NO_POSITION initItemDimensionIfNeeded(layoutManager) if (firstVisiblePos >= priorFirstPosition) { // Scrolling toward bottom of data val firstCompletePosition = layoutManager.findFirstCompletelyVisibleItemPosition() snapPos = if (firstCompletePosition != RecyclerView.NO_POSITION && firstCompletePosition % blocksize == 0) firstCompletePosition else roundDownToBlockSize(firstVisiblePos + blocksize) } else { // Scrolling toward top of data snapPos = roundDownToBlockSize(firstVisiblePos) // Check to see if target view exists. If it doesn't, force a smooth scroll. // SnapHelper only snaps to existing views and will not scroll to a non-existant one. // If limiting fling to single block, then the following is not needed since the // views are likely to be in the RecyclerView pool. 
if (layoutManager.findViewByPosition(snapPos) == null) { val toScroll = layoutDirectionHelper!!.calculateDistanceToScroll(layoutManager, snapPos) recyclerView!!.smoothScrollBy(toScroll[0], toScroll[1], sInterpolator) } } priorFirstPosition = firstVisiblePos return snapPos } private fun initItemDimensionIfNeeded(layoutManager: RecyclerView.LayoutManager) { if (itemDimension != 0) return val child = layoutManager.getChildAt(0) ?: return if (layoutManager.canScrollHorizontally()) { itemDimension = child.width blocksize = getSpanCount(layoutManager) * (recyclerView!!.width / itemDimension) } else if (layoutManager.canScrollVertically()) { itemDimension = child.height blocksize = getSpanCount(layoutManager) * (recyclerView!!.height / itemDimension) } maxPositionsToMove = blocksize * maxFlingBlocks } private fun getSpanCount(layoutManager: RecyclerView.LayoutManager): Int = (layoutManager as? GridLayoutManager)?.spanCount ?: 1 private fun roundDownToBlockSize(trialPosition: Int): Int = trialPosition - trialPosition % blocksize private fun roundUpToBlockSize(trialPosition: Int): Int = roundDownToBlockSize(trialPosition + blocksize - 1) override fun createScroller(layoutManager: RecyclerView.LayoutManager): LinearSmoothScroller? { return if (layoutManager !is RecyclerView.SmoothScroller.ScrollVectorProvider) null else object : LinearSmoothScroller(recyclerView!!.context) { override fun onTargetFound(targetView: View, state: RecyclerView.State?, action: RecyclerView.SmoothScroller.Action) { val snapDistances = calculateDistanceToFinalSnap(recyclerView!!.layoutManager, targetView) val dx = snapDistances[0] val dy = snapDistances[1] val time = calculateTimeForDeceleration(Math.max(Math.abs(dx), Math.abs(dy))) if (time > 0) action.update(dx, dy, time, sInterpolator) } override fun calculateSpeedPerPixel(displayMetrics: DisplayMetrics): Float = MILLISECONDS_PER_INCH / displayMetrics.densityDpi } } fun setSnapBlockCallback(callback: SnapBlockCallback?) 
{ snapBlockCallback = callback } /* Helper class that handles calculations for LTR and RTL layouts. */ private inner class LayoutDirectionHelper { // Is the layout an RTL one? private val mIsRTL: Boolean constructor() { mIsRTL = ViewCompat.getLayoutDirection(recyclerView) == ViewCompat.LAYOUT_DIRECTION_RTL } constructor(isRTL: Boolean) { mIsRTL = isRTL } /* Calculate the amount of scroll needed to align the target view with the layout edge. */ fun getScrollToAlignView(targetView: View): Int = if (mIsRTL) orientationHelper!!.getDecoratedEnd(targetView) - recyclerView!!.width else orientationHelper!!.getDecoratedStart(targetView) /** * Calculate the distance to final snap position when the view corresponding to the snap * position is not currently available. * * @param layoutManager LinearLayoutManager or descendent class * @param targetPos - Adapter position to snap to * @return int[2] {x-distance in pixels, y-distance in pixels} */ fun calculateDistanceToScroll(layoutManager: LinearLayoutManager, targetPos: Int): IntArray { val out = IntArray(2) val firstVisiblePos = layoutManager.findFirstVisibleItemPosition() if (layoutManager.canScrollHorizontally()) { if (targetPos <= firstVisiblePos) // scrolling toward top of data if (mIsRTL) { val lastView = layoutManager.findViewByPosition(layoutManager.findLastVisibleItemPosition()) out[0] = orientationHelper!!.getDecoratedEnd(lastView) + (firstVisiblePos - targetPos) * itemDimension } else { val firstView = layoutManager.findViewByPosition(firstVisiblePos) out[0] = orientationHelper!!.getDecoratedStart(firstView) - (firstVisiblePos - targetPos) * itemDimension } } if (layoutManager.canScrollVertically() && targetPos <= firstVisiblePos) { // scrolling toward top of data val firstView = layoutManager.findViewByPosition(firstVisiblePos) out[1] = firstView.top - (firstVisiblePos - targetPos) * itemDimension } return out } /* Calculate the number of positions to move in the RecyclerView given a scroll amount and the size of 
the items to be scrolled. Return integral multiple of mBlockSize not equal to zero. */ fun getPositionsToMove(llm: LinearLayoutManager, scroll: Int, itemSize: Int): Int { var positionsToMove: Int positionsToMove = roundUpToBlockSize(Math.abs(scroll) / itemSize) if (positionsToMove < blocksize) // Must move at least one block positionsToMove = blocksize else if (positionsToMove > maxPositionsToMove) // Clamp number of positions to move so we don't get wild flinging. positionsToMove = maxPositionsToMove if (scroll < 0) positionsToMove *= -1 if (mIsRTL) positionsToMove *= -1 return if (layoutDirectionHelper!!.isDirectionToBottom(scroll < 0)) { // Scrolling toward the bottom of data. roundDownToBlockSize(llm.findFirstVisibleItemPosition()) + positionsToMove } else roundDownToBlockSize(llm.findLastVisibleItemPosition()) + positionsToMove // Scrolling toward the top of the data. } fun isDirectionToBottom(velocityNegative: Boolean): Boolean = if (mIsRTL) velocityNegative else !velocityNegative } interface SnapBlockCallback { fun onBlockSnap(snapPosition: Int) fun onBlockSnapped(snapPosition: Int) } companion object { // Borrowed from ViewPager.java private val sInterpolator = Interpolator { input -> var t = input // _o(t) = t * t * ((tension + 1) * t + tension) // o(t) = _o(t - 1) + 1 t -= 1.0f t * t * t + 1.0f } private val MILLISECONDS_PER_INCH = 100f private val TAG = "SnapToBlock" }} Update Even though I've marked an answer as accepted, as it works fine, I've noticed it has serious issues: Smooth scrolling doesn't seem to work fine (doesn't scroll to correct place). Only scrolling that work is as such (but with the "smearing" effect) : (recyclerView.layoutManager as LinearLayoutManager).scrollToPositionWithOffset(targetPos,0) When switching to RTL (Right to left) locale such as Hebrew ("עברית"), it doesn't let me scroll at all. I've noticed that onCreateViewHolder is called a lot. 
In fact, it is called every time I scroll, even at times when it should have recycled the ViewHolders. This means there is an excessive creation of views, and it might also mean there is a memory leak. I've tried to fix those myself, but failed so far. If anyone here knows how to fix it, I will grant the extra, new bounty.

Update: as we got a fix for RTL/LTR, I've updated the Kotlin solution within this post.

Update: about point #3, this seems to be because there is a pool of views for the RecyclerView, which gets filled too soon. To handle this, we can simply enlarge the pool size, by using recyclerView.getRecycledViewPool().setMaxRecycledViews(viewType, Integer.MAX_VALUE) for each view type we have in it. It's weird that this is really needed. I've posted about it to Google ( here and here ), but my suggestion that the pool should be unlimited by default was rejected. In the end, I decided to at least request a more convenient function to do it for all view types ( here ).
Now provide the response and nothing else.
| SnapHelper supplies the necessary framework for what you are attempting, but it needs to be extended to handle blocks of views. The class SnapToBlock below extends SnapHelper to snap to blocks of views. In the example, I have used four views to a block, but it can be more or less.

Update: The code has been changed to accommodate GridLayoutManager as well as LinearLayoutManager . Flinging is now inhibited so the snapping works more like a ViewPager . Horizontal and vertical scrolling is now supported, as well as LTR and RTL layouts.

Update: Changed the smooth scroll interpolator to be more like ViewPager .

Update: Added callbacks for pre/post snapping.

Update: Added support for RTL layouts.

Here is a quick video of the sample app:

Set up the layout manager as follows:

// For LinearLayoutManager, horizontal orientation
recyclerView.setLayoutManager(new LinearLayoutManager(this, RecyclerView.HORIZONTAL, false));

// For GridLayoutManager, vertical orientation
recyclerView.setLayoutManager(new GridLayoutManager(this, SPAN_COUNT, RecyclerView.VERTICAL, false));

Add the following to attach the SnapToBlock to the RecyclerView .

SnapToBlock snapToBlock = new SnapToBlock(mMaxFlingPages);
snapToBlock.attachToRecyclerView(recyclerView);

mMaxFlingPages is the maximum number of blocks (rowsCols * spans) to allow to be flung at one time. For callbacks when a snap is about to be made and has been completed, add the following:

snapToBlock.setSnapBlockCallback(new SnapToBlock.SnapBlockCallback() {
    @Override
    public void onBlockSnap(int snapPosition) {
        ...
    }

    @Override
    public void onBlockSnapped(int snapPosition) {
        ...
    }
});

SnapToBlock.java

/* The number of items in the RecyclerView should be a multiple of block size; otherwise,
   the extra item views will not be positioned on a block boundary when the end of the data
   is reached. Pad out with empty item views if needed.
   Updated to accommodate RTL layouts.
*/public class SnapToBlock extends SnapHelper { private RecyclerView mRecyclerView; // Total number of items in a block of view in the RecyclerView private int mBlocksize; // Maximum number of positions to move on a fling. private int mMaxPositionsToMove; // Width of a RecyclerView item if orientation is horizonal; height of the item if vertical private int mItemDimension; // Maxim blocks to move during most vigorous fling. private final int mMaxFlingBlocks; // Callback interface when blocks are snapped. private SnapBlockCallback mSnapBlockCallback; // When snapping, used to determine direction of snap. private int mPriorFirstPosition = RecyclerView.NO_POSITION; // Our private scroller private Scroller mScroller; // Horizontal/vertical layout helper private OrientationHelper mOrientationHelper; // LTR/RTL helper private LayoutDirectionHelper mLayoutDirectionHelper; // Borrowed from ViewPager.java private static final Interpolator sInterpolator = new Interpolator() { public float getInterpolation(float t) { // _o(t) = t * t * ((tension + 1) * t + tension) // o(t) = _o(t - 1) + 1 t -= 1.0f; return t * t * t + 1.0f; } }; SnapToBlock(int maxFlingBlocks) { super(); mMaxFlingBlocks = maxFlingBlocks; } @Override public void attachToRecyclerView(@Nullable final RecyclerView recyclerView) throws IllegalStateException { if (recyclerView != null) { mRecyclerView = recyclerView; final LinearLayoutManager layoutManager = (LinearLayoutManager) recyclerView.getLayoutManager(); if (layoutManager.canScrollHorizontally()) { mOrientationHelper = OrientationHelper.createHorizontalHelper(layoutManager); mLayoutDirectionHelper = new LayoutDirectionHelper(ViewCompat.getLayoutDirection(mRecyclerView)); } else if (layoutManager.canScrollVertically()) { mOrientationHelper = OrientationHelper.createVerticalHelper(layoutManager); // RTL doesn't matter for vertical scrolling for this class. 
mLayoutDirectionHelper = new LayoutDirectionHelper(RecyclerView.LAYOUT_DIRECTION_LTR); } else { throw new IllegalStateException("RecyclerView must be scrollable"); } mScroller = new Scroller(mRecyclerView.getContext(), sInterpolator); initItemDimensionIfNeeded(layoutManager); } super.attachToRecyclerView(recyclerView); } // Called when the target view is available and we need to know how much more // to scroll to get it lined up with the side of the RecyclerView. @NonNull @Override public int[] calculateDistanceToFinalSnap(@NonNull RecyclerView.LayoutManager layoutManager, @NonNull View targetView) { int[] out = new int[2]; if (layoutManager.canScrollHorizontally()) { out[0] = mLayoutDirectionHelper.getScrollToAlignView(targetView); } if (layoutManager.canScrollVertically()) { out[1] = mLayoutDirectionHelper.getScrollToAlignView(targetView); } if (mSnapBlockCallback != null) { if (out[0] == 0 && out[1] == 0) { mSnapBlockCallback.onBlockSnapped(layoutManager.getPosition(targetView)); } else { mSnapBlockCallback.onBlockSnap(layoutManager.getPosition(targetView)); } } return out; } // We are flinging and need to know where we are heading. @Override public int findTargetSnapPosition(RecyclerView.LayoutManager layoutManager, int velocityX, int velocityY) { LinearLayoutManager lm = (LinearLayoutManager) layoutManager; initItemDimensionIfNeeded(layoutManager); mScroller.fling(0, 0, velocityX, velocityY, Integer.MIN_VALUE, Integer.MAX_VALUE, Integer.MIN_VALUE, Integer.MAX_VALUE); if (velocityX != 0) { return mLayoutDirectionHelper .getPositionsToMove(lm, mScroller.getFinalX(), mItemDimension); } if (velocityY != 0) { return mLayoutDirectionHelper .getPositionsToMove(lm, mScroller.getFinalY(), mItemDimension); } return RecyclerView.NO_POSITION; } // We have scrolled to the neighborhood where we will snap. Determine the snap position. 
@Override
public View findSnapView(RecyclerView.LayoutManager layoutManager) {
    // Snap to a view that is either 1) toward the bottom of the data and therefore on screen,
    // or, 2) toward the top of the data and may be off-screen.
    int snapPos = calcTargetPosition((LinearLayoutManager) layoutManager);
    View snapView = (snapPos == RecyclerView.NO_POSITION)
        ? null : layoutManager.findViewByPosition(snapPos);

    if (snapView == null) {
        Log.d(TAG, "<<<<findSnapView is returning null!");
    }
    Log.d(TAG, "<<<<findSnapView snapos=" + snapPos);
    return snapView;
}

// Does the heavy lifting for findSnapView.
private int calcTargetPosition(LinearLayoutManager layoutManager) {
    int snapPos;
    int firstVisiblePos = layoutManager.findFirstVisibleItemPosition();
    if (firstVisiblePos == RecyclerView.NO_POSITION) {
        return RecyclerView.NO_POSITION;
    }
    initItemDimensionIfNeeded(layoutManager);
    if (firstVisiblePos >= mPriorFirstPosition) {
        // Scrolling toward bottom of data
        int firstCompletePosition = layoutManager.findFirstCompletelyVisibleItemPosition();
        if (firstCompletePosition != RecyclerView.NO_POSITION
                && firstCompletePosition % mBlocksize == 0) {
            snapPos = firstCompletePosition;
        } else {
            snapPos = roundDownToBlockSize(firstVisiblePos + mBlocksize);
        }
    } else {
        // Scrolling toward top of data
        snapPos = roundDownToBlockSize(firstVisiblePos);
        // Check to see if target view exists. If it doesn't, force a smooth scroll.
        // SnapHelper only snaps to existing views and will not scroll to a non-existent one.
        // If limiting fling to single block, then the following is not needed since the
        // views are likely to be in the RecyclerView pool.
        if (layoutManager.findViewByPosition(snapPos) == null) {
            int[] toScroll = mLayoutDirectionHelper.calculateDistanceToScroll(layoutManager, snapPos);
            mRecyclerView.smoothScrollBy(toScroll[0], toScroll[1], sInterpolator);
        }
    }
    mPriorFirstPosition = firstVisiblePos;
    return snapPos;
}

private void initItemDimensionIfNeeded(final RecyclerView.LayoutManager layoutManager) {
    if (mItemDimension != 0) {
        return;
    }
    View child;
    if ((child = layoutManager.getChildAt(0)) == null) {
        return;
    }
    if (layoutManager.canScrollHorizontally()) {
        mItemDimension = child.getWidth();
        mBlocksize = getSpanCount(layoutManager) * (mRecyclerView.getWidth() / mItemDimension);
    } else if (layoutManager.canScrollVertically()) {
        mItemDimension = child.getHeight();
        mBlocksize = getSpanCount(layoutManager) * (mRecyclerView.getHeight() / mItemDimension);
    }
    mMaxPositionsToMove = mBlocksize * mMaxFlingBlocks;
}

private int getSpanCount(RecyclerView.LayoutManager layoutManager) {
    return (layoutManager instanceof GridLayoutManager)
        ? ((GridLayoutManager) layoutManager).getSpanCount() : 1;
}

private int roundDownToBlockSize(int trialPosition) {
    return trialPosition - trialPosition % mBlocksize;
}

private int roundUpToBlockSize(int trialPosition) {
    return roundDownToBlockSize(trialPosition + mBlocksize - 1);
}

@Nullable
protected LinearSmoothScroller createScroller(RecyclerView.LayoutManager layoutManager) {
    if (!(layoutManager instanceof RecyclerView.SmoothScroller.ScrollVectorProvider)) {
        return null;
    }
    return new LinearSmoothScroller(mRecyclerView.getContext()) {
        @Override
        protected void onTargetFound(View targetView, RecyclerView.State state, Action action) {
            int[] snapDistances = calculateDistanceToFinalSnap(mRecyclerView.getLayoutManager(), targetView);
            final int dx = snapDistances[0];
            final int dy = snapDistances[1];
            final int time = calculateTimeForDeceleration(Math.max(Math.abs(dx), Math.abs(dy)));
            if (time > 0) {
                action.update(dx, dy, time, sInterpolator);
            }
        }

        @Override
        protected float calculateSpeedPerPixel(DisplayMetrics displayMetrics) {
            return MILLISECONDS_PER_INCH / displayMetrics.densityDpi;
        }
    };
}

public void setSnapBlockCallback(@Nullable SnapBlockCallback callback) {
    mSnapBlockCallback = callback;
}

/* Helper class that handles calculations for LTR and RTL layouts. */
private class LayoutDirectionHelper {
    // Is the layout an RTL one?
    private final boolean mIsRTL;

    @TargetApi(Build.VERSION_CODES.JELLY_BEAN_MR1)
    LayoutDirectionHelper(int direction) {
        mIsRTL = direction == View.LAYOUT_DIRECTION_RTL;
    }

    /* Calculate the amount of scroll needed to align the target view with the layout edge. */
    int getScrollToAlignView(View targetView) {
        return (mIsRTL)
            ? mOrientationHelper.getDecoratedEnd(targetView) - mRecyclerView.getWidth()
            : mOrientationHelper.getDecoratedStart(targetView);
    }

    /**
     * Calculate the distance to final snap position when the view corresponding to the snap
     * position is not currently available.
     *
     * @param layoutManager LinearLayoutManager or descendant class
     * @param targetPos     - Adapter position to snap to
     * @return int[2] {x-distance in pixels, y-distance in pixels}
     */
    int[] calculateDistanceToScroll(LinearLayoutManager layoutManager, int targetPos) {
        int[] out = new int[2];
        int firstVisiblePos;
        firstVisiblePos = layoutManager.findFirstVisibleItemPosition();
        if (layoutManager.canScrollHorizontally()) {
            if (targetPos <= firstVisiblePos) { // scrolling toward top of data
                if (mIsRTL) {
                    View lastView = layoutManager.findViewByPosition(layoutManager.findLastVisibleItemPosition());
                    out[0] = mOrientationHelper.getDecoratedEnd(lastView)
                        + (firstVisiblePos - targetPos) * mItemDimension;
                } else {
                    View firstView = layoutManager.findViewByPosition(firstVisiblePos);
                    out[0] = mOrientationHelper.getDecoratedStart(firstView)
                        - (firstVisiblePos - targetPos) * mItemDimension;
                }
            }
        }
        if (layoutManager.canScrollVertically()) {
            if (targetPos <= firstVisiblePos) { // scrolling toward top of data
                View firstView = layoutManager.findViewByPosition(firstVisiblePos);
                out[1] = firstView.getTop() - (firstVisiblePos - targetPos) * mItemDimension;
            }
        }
        return out;
    }

    /* Calculate the number of positions to move in the RecyclerView given a scroll amount
       and the size of the items to be scrolled. Return integral multiple of mBlockSize
       not equal to zero. */
    int getPositionsToMove(LinearLayoutManager llm, int scroll, int itemSize) {
        int positionsToMove;
        positionsToMove = roundUpToBlockSize(Math.abs(scroll) / itemSize);
        if (positionsToMove < mBlocksize) {
            // Must move at least one block
            positionsToMove = mBlocksize;
        } else if (positionsToMove > mMaxPositionsToMove) {
            // Clamp number of positions to move so we don't get wild flinging.
            positionsToMove = mMaxPositionsToMove;
        }
        if (scroll < 0) {
            positionsToMove *= -1;
        }
        if (mIsRTL) {
            positionsToMove *= -1;
        }
        if (mLayoutDirectionHelper.isDirectionToBottom(scroll < 0)) {
            // Scrolling toward the bottom of data.
            return roundDownToBlockSize(llm.findFirstVisibleItemPosition()) + positionsToMove;
        }
        // Scrolling toward the top of the data.
        return roundDownToBlockSize(llm.findLastVisibleItemPosition()) + positionsToMove;
    }

    boolean isDirectionToBottom(boolean velocityNegative) {
        //noinspection SimplifiableConditionalExpression
        return mIsRTL ? velocityNegative : !velocityNegative;
    }
}

public interface SnapBlockCallback {
    void onBlockSnap(int snapPosition);
    void onBlockSnapped(int snapPosition);
}

private static final float MILLISECONDS_PER_INCH = 100f;

@SuppressWarnings("unused")
private static final String TAG = "SnapToBlock";
}

The SnapBlockCallback interface defined above can be used to report the adapter position of the view at the start of the block to be snapped. The view associated with that position may not be instantiated when the call is made if the view is off screen. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/47514072', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/878126/']} | jdg_79380
stackexchange | llm_judgeable_groundtruth_similarity | 51414610 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a task to export 100,000 rows of records from my database to CSV format. What are the best approaches to doing this? I am totally clueless about what to search for and study. I want the download to be delivered in chunks (like other sites that serve a .PART file while the download is not yet finished) and at the same time not exhaust my server. How can I do that? Thanks!
Now provide the response and nothing else.
 | You can use the StreamedResponse from the Symfony HttpFoundation component combined with Doctrine iterate . Something like this:

$response = new StreamedResponse(function () {
    $data = $this->myQuery()->iterate();
    $csv = fopen('php://output', 'w+');
    while (false !== ($line = $data->next())) {
        fputcsv($csv, [$line[0]->column1], ';');
    }
    fclose($csv);
});
$response->headers->set('Content-Type', 'text/csv; charset=utf-8');
$response->headers->set('Content-Disposition', 'attachment; filename="file.csv"');
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51414610', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3483607/']} | jdg_79381
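The same constant-memory idea can be sketched outside Symfony. Here is a minimal Python version of streaming a CSV in chunks; the function name, chunk size, and sample rows are mine, purely illustrative (in practice `rows` would be a lazily iterating database cursor):

```python
import csv
import io

def stream_csv(rows, chunk_size=1000):
    """Yield CSV text in chunks so the full export never sits in memory."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=';')
    for i, row in enumerate(rows, 1):
        writer.writerow(row)
        if i % chunk_size == 0:   # flush a chunk to the client
            yield buf.getvalue()
            buf.seek(0)
            buf.truncate(0)
    if buf.tell():                # flush whatever is left over
        yield buf.getvalue()

# Tiny demonstration with 5 synthetic rows, 2 rows per chunk.
chunks = list(stream_csv(([n, n * n] for n in range(5)), chunk_size=2))
```

Each yielded chunk can be written straight to the response stream, so memory use stays bounded by `chunk_size` regardless of how many rows the query returns.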
stackexchange | llm_judgeable_groundtruth_similarity | 21439756 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to get SignalR working in an MVC5 project with individual accounts. The MVC project has by default Owin 2.0.0 and all of the Owin.* components are also 2.0.0. So I used NuGet to get all the SignalR packages, it automatically resolved dependancies and downloaded v 2.0.2. The project throws an error on startup with the following message: Could not load file or assembly 'Microsoft.Owin, Version=2.0.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' I also tried upgrading Owin to 2.1.0 but that didn't help either. Has anyone faced the same problem and what was the solution?
Now provide the response and nothing else.
 | You can update these references to the latest version I found (now 2.1.0):

Install-Package Microsoft.Owin -Version 2.1.0
Install-Package Microsoft.Owin.Security -Version 2.1.0

And make sure your Web.config has these binding redirects for version 2.1.0:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.Owin" publicKeyToken="31bf3856ad364e35" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-2.1.0.0" newVersion="2.1.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.Owin.Security" publicKeyToken="31bf3856ad364e35" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-2.1.0.0" newVersion="2.1.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

Or you can update these references to version 2.0.1:

Install-Package Microsoft.Owin -Version 2.0.1
Install-Package Microsoft.Owin.Security -Version 2.0.1

And make sure your Web.config has these binding redirects for version 2.0.1:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.Owin" publicKeyToken="31bf3856ad364e35" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-2.0.1.0" newVersion="2.0.1.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.Owin.Security" publicKeyToken="31bf3856ad364e35" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-2.0.1.0" newVersion="2.0.1.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21439756', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2304769/']} | jdg_79382
stackexchange | llm_judgeable_groundtruth_similarity | 3458930 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Continuing from this question . I'm having trouble deserializing the following json array (Sorry for the size): "geometry": { "type": "Polygon", "coordinates": [ [ [ 771230.894373, 4422896.962001 ], [ 804804.852796, 4451159.130080 ], [ 876828.563339, 4417873.954498 ], [ 959794.979827, 4430944.287708 ], [ 910992.515063, 4372980.866944 ], [ 932488.308736, 4357684.778349 ], [ 918573.372386, 4115663.286966 ], [ 834059.614976, 4013708.358795 ], [ 929360.231044, 3833522.241529 ], [ 1008029.715188, 3776446.653183 ], [ 1061663.445852, 3533717.758754 ], [ 1035703.740599, 3519308.069656 ], [ 1095348.723766, 3396028.487184 ], [ 1108462.159782, 3230455.268230 ], [ 1083571.121640, 3163122.508021 ], [ 1103953.720405, 3082716.041755 ], [ 1045722.494771, 3020215.642212 ], [ 1117367.719045, 2915275.458735 ], [ 1141268.013718, 2827405.304519 ], [ 1286729.192338, 2790314.754276 ], [ 1334329.406601, 2695307.513404 ], [ 829417.592210, 2374337.277646 ], [ 647042.870444, 2207530.090128 ], [ 370914.873531, 2152159.656850 ], [ 346669.488436, 2173360.227237 ], [ 359905.375891, 2251757.174668 ], [ 199905.871774, 2309591.361246 ], [ 129963.835709, 2361036.252651 ], [ 130208.738589, 2404106.913263 ], [ -964785.432600, 3159802.671416 ], [ -964829.960396, 3338713.127631 ], [ -851005.781060, 3424742.002477 ], [ -616522.405653, 3491025.523892 ], [ -547749.224241, 3569019.334331 ], [ -403724.067052, 3628920.873754 ], [ -423973.082428, 3724062.779415 ], [ -333893.350478, 3741450.793542 ], [ -317696.364567, 3774909.265404 ], [ -131414.328674, 3777826.527844 ], [ -112467.751341, 3830221.719769 ], [ -185682.580436, 3930014.456814 ], [ -194499.084106, 4129581.855629 ], [ -245950.952751, 4175549.526399 ], [ -42303.076294, 4287174.981681 ], [ -11222.674464, 4271148.905617 ], [ 131633.628071, 4371332.547494 ], [ 433220.392528, 4427574.250017 ], [ 593119.709103, 4389089.571176 ], [ 719645.442339, 4451856.882422 ], [ 771230.894373, 4422896.962001 ] ] ] } If I paste it into a json-viewer, I get this 
structure:

[geometry]...[coordinates] => Array
(
    [0] => Array
    (
        [0] => Array
        (
            [0] => 771230.894373
            [1] => 4422896.962001
        )
        [1] => Array
        (
            [0] => 804804.852796
            [1] => 4451159.13008
        )
        ...
        [n] => Array
    )
    [n] => Array
)

Now, the array containing the arrays with the coordinates has a variable size. So I figured that in java, this whole object whould be an array, containing a collection of arrays, with each array containing a Collection<double[]> . Something like Collection<double[]>[][]. But gson does not accept this. I get the following error message: Exception in thread "main" com.google.gson.JsonParseException: Expecting object but found array: 2.963610 Which seems weird, as 2.963610 doesn't look like an array to me. But it might have confused me to the point where I'm lost, more or less...
Now provide the response and nothing else.
 | I think I know where your problem comes from, reading the Gson API : If the object that you are serializing/deserializing is a ParameterizedType (i.e. contains at least one type parameter and may be an array) then you must use the toJson(Object, Type) or fromJson(String, Type) method. Here is an example for serializing and deserializing a ParameterizedType:

Type listType = new TypeToken<LinkedList>() {}.getType();
List target = new LinkedList();
target.add("blah");

Gson gson = new Gson();
String json = gson.toJson(target, listType);
List target2 = gson.fromJson(json, listType);

Knowing that:

Type typeOfCollectionOfFoo = new TypeToken<Collection<Foo>>(){}.getType()

Hope this helps. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3458930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/266541/']} | jdg_79383
stackexchange | llm_judgeable_groundtruth_similarity | 24126708 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
One of our application screens requires us to place a UICollectionView inside of a UITableViewCell . This UICollectionView will have a dynamic number of items, resulting in a height which must be calculated dynamically as well. However, I am running into problems trying to calculate the height of the embedded UICollectionView . Our overarching UIViewController was created in Storyboards and does make use of auto layout. But, I don't know how to dynamically increase the height of the UITableViewCell based on the height of the UICollectionView . Can anyone give some tips or advice on how to accomplish this?
Now provide the response and nothing else.
 | The right answer is YES, you CAN do this. I came across this problem some weeks ago. It is actually easier than you may think. Put your cells into NIBs (or storyboards) and pin them to let auto layout do all the work. Given the following structure:

TableView
    TableViewCell
        CollectionView
            CollectionViewCell
            CollectionViewCell
            CollectionViewCell
            [...variable number of cells or different cell sizes]

The solution is to tell auto layout to compute first the collectionViewCell sizes, then the collection view contentSize, and use it as the size of your cell. This is the UIView method that "does the magic":

-(void)systemLayoutSizeFittingSize:(CGSize)targetSize
     withHorizontalFittingPriority:(UILayoutPriority)horizontalFittingPriority
           verticalFittingPriority:(UILayoutPriority)verticalFittingPriority

You have to set here the size of the TableViewCell, which in your case is the CollectionView's contentSize.

CollectionViewCell

At the CollectionViewCell you have to tell the cell to lay out each time you change the model (e.g.: you set a UILabel with a text, then the cell has to be laid out again).

- (void)bindWithModel:(id)model {
    // Do whatever you may need to bind with your data and
    // tell the collection view cell's contentView to resize
    [self.contentView setNeedsLayout];
}
// Other stuff here...

TableViewCell

The TableViewCell does the magic. It has an outlet to your collectionView, and enables auto layout for collectionView cells using estimatedItemSize of the UICollectionViewFlowLayout. Then, the trick is to set your tableView cell's size in the systemLayoutSizeFittingSize... method. (NOTE: iOS 8 or later)

NOTE: I tried to use the tableView's cell-height delegate method -(CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath, but it's too late for the auto layout system to compute the CollectionView contentSize, and sometimes you may find wrongly resized cells.

@implementation TableCell

- (void)awakeFromNib {
    [super awakeFromNib];
    UICollectionViewFlowLayout *flow = (UICollectionViewFlowLayout *)self.collectionView.collectionViewLayout;
    // Configure the collectionView
    flow.minimumInteritemSpacing = ...;

    // This enables the magic of auto layout.
    // Setting estimatedItemSize different to CGSizeZero
    // on flow Layout enables auto layout for collectionView cells.
    // https://developer.apple.com/videos/play/wwdc2014-226/
    flow.estimatedItemSize = CGSizeMake(1, 1);

    // Disable the scroll on your collection view
    // to avoid running into multiple scroll issues.
    [self.collectionView setScrollEnabled:NO];
}

- (void)bindWithModel:(id)model {
    // Do your stuff here to configure the tableViewCell
    // Tell the cell to redraw its contentView
    [self.contentView layoutIfNeeded];
}

// THIS IS THE MOST IMPORTANT METHOD
//
// This method tells the auto layout
// You cannot calculate the collectionView content size in any other place,
// because you run into race condition issues.
// NOTE: Works for iOS 8 or later
- (CGSize)systemLayoutSizeFittingSize:(CGSize)targetSize withHorizontalFittingPriority:(UILayoutPriority)horizontalFittingPriority verticalFittingPriority:(UILayoutPriority)verticalFittingPriority {
    // With autolayout enabled on collection view's cells we need to force a
    // collection view relayout with the shown size (width)
    self.collectionView.frame = CGRectMake(0, 0, targetSize.width, MAXFLOAT);
    [self.collectionView layoutIfNeeded];

    // If the cell's size has to be exactly the content
    // size of the collection view, just return the
    // collectionViewLayout's collectionViewContentSize.
    return [self.collectionView.collectionViewLayout collectionViewContentSize];
}

// Other stuff here...

@end

TableViewController

Remember to enable the auto layout system for the tableView cells at your TableViewController:

- (void)viewDidLoad {
    [super viewDidLoad];
    // Enable automatic row auto layout calculations
    self.tableView.rowHeight = UITableViewAutomaticDimension;
    // Set the estimatedRowHeight to a non-0 value to enable auto layout.
    self.tableView.estimatedRowHeight = 10;
}

CREDIT: @rbarbera helped to sort this out | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24126708', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/331747/']} | jdg_79384
stackexchange | llm_judgeable_groundtruth_similarity | 1090142 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I wonder, can we, for instance, say that $\pi$ is more irrational than $e$? Or that $e$ is more irrational than $\sqrt{2}$? What kind of irrationality measurement can we use to say that $A$ is (much) more irrational than $B$? (in a particularly rigorous sense)
Now provide the response and nothing else.
 | Well, there is a sort of extension of the idea of irrationality called "transcendental". You can think of "irrational" as meaning "There is no way to relate this number to $1$ by thinking only about addition and subtraction". That is, a number $q$ is rational if we can write something like:$$q+q+q-1-1-1-1=0$$or some other similar equation involving only $q,\,+,$ and $1$. The above represents the number $\frac{4}3$ and more generally, you can, for $q=\frac{a}b$ write:$$\underbrace{q+\ldots+q}_{b \text{ repetitions}}-\underbrace{1-\ldots-1}_{a \text{ repetitions}}=0.$$or appropriately change signs for negative $a$. The next logical step would be to consider the idea of allowing multiplication too - and this extends the rational numbers to algebraic numbers, which are (potentially) irrational, but can be related to the rationals via multiplication and addition. For instance, $x=\sqrt{2}$ satisfies$$x^2-2=0$$and we can come up with similar expressions (the set of such equations would be polynomials to be precise) for any expression built with radicals and addition and such - and we could say that the degree of an algebraic number (i.e. how many multiplications we need) represents how irrational it is - that is $\sqrt{2}$ can be related to integers by squaring it, but $\sqrt[3]{2}$ requires cubing to get to an integer - and numbers like $\sqrt{2}+\sqrt{3}$ require fourth powers. We might say that the degree represents a sort of "distance" from the rational numbers. However, this only extends to a narrow class of numbers - $\pi$ and $e$ are both transcendental, meaning "not algebraic". We have much less understanding of these, since we can't relate them to the rationals through arithmetic - so we could be justified in saying that they are less well behaved than algebraic numbers.
One way we can talk about this is via their irrationality measure, which essentially tells us how far from the rational numbers a given irrational is, relating the growth of the denominators of the best rational approximations to how close they are - this represents how these numbers are comprehended by looking at sequences of rational approximations, rather than algebraic properties. However, this isn't terribly useful for comparing actual numbers, because we hardly know how to calculate any. We do know that $e$ acts very similarly to an algebraic number (looking at its series definition) and its irrationality measure is $2$. We don't know the irrationality measure of $\pi$, though we might suspect that it is greater than $2$ - but it's hard to compare numbers this way, given our limited knowledge thereof, and all it means to have a small irrationality measure is that the number is "far" from any rationals with small denominators. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1090142', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/204141/']} | jdg_79385
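For reference, the irrationality measure mentioned above has a standard definition; this formalization is standard background rather than part of the original answer:

```latex
% mu(x) is the infimum of exponents s for which x admits only finitely many
% unusually good rational approximations p/q:
\mu(x) \;=\; \inf\left\{\, s > 0 \;:\; 0 < \left| x - \frac{p}{q} \right| < \frac{1}{q^{s}}
\ \text{holds for only finitely many}\ \frac{p}{q} \in \mathbb{Q} \,\right\}.
```

With this normalization, rational numbers have measure $1$; Roth's theorem says every algebraic irrational (such as $\sqrt{2}$) has measure exactly $2$; $e$ also has measure $2$; and Liouville numbers have infinite measure — which is the precise sense in which a larger measure means "more irrational".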
stackexchange | llm_judgeable_groundtruth_similarity | 36781372 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Docker kind of always had a USER command to run a process as a specific user, but in general a lot of things had to run as ROOT. I have seen a lot of images that use an ENTRYPOINT with gosu to de-elevate the process to run. I'm still a bit confused about the need for gosu . Shouldn't USER be enough? I know quite a bit has changed in terms of security with Docker 1.10, but I'm still not clear about the recommended way to run a process in a docker container. Can someone explain when I would use gosu vs. USER ? Thanks EDIT: The Docker best practice guide is not very clear: It says if the process can run without priviledges, use USER , if you need sudo, you might want to use gosu . That is confusing because one can install all sorts of things as ROOT in the Dockerfile , then create a user and give it proper privileges, then finally switch to that user and run the CMD as that user.So why would we need sudo or gosu then?
Now provide the response and nothing else.
 | Dockerfiles are for creating images. I see gosu as more useful as part of container initialization, when you can no longer change users between run commands in your Dockerfile. After the image is created, something like gosu allows you to drop root permissions at the end of your entrypoint inside a container. You may initially need root access to do some initialization steps (fixing uids, host-mounted volume permissions, etc.). Then, once initialized, you run the final service without root privileges and as pid 1 to handle signals cleanly. Edit: Here's a simple example of using gosu in an image for docker and jenkins: https://github.com/bmitch3020/jenkins-docker The entrypoint.sh looks up the gid of the /var/lib/docker.sock file and updates the gid of the docker user inside the container to match. This allows the image to be ported to other docker hosts where the gid on the host may differ. Changing the group requires root access inside the container. Had I used USER jenkins in the Dockerfile, I would be stuck with the gid of the docker group as defined in the image, which wouldn't work if it doesn't match that of the docker host it's running on. But root access can be dropped when running the app, which is where gosu comes in. At the end of the script, the exec call prevents the shell from forking gosu as a child process; instead, gosu replaces the shell as pid 1. Gosu in turn does the same, switching the uid and then exec'ing the jenkins process so that it takes over as pid 1. This allows signals to be handled correctly, which would otherwise be ignored by a shell running as pid 1. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/36781372', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2494262/']} | jdg_79386
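To make the drop-privileges-then-exec pattern concrete, here is a rough Python sketch of the sequence a tool like gosu performs; the function and argument names are mine, purely illustrative, and real tools also handle supplementary groups and error cases:

```python
import os
import pwd

def drop_privileges_and_exec(user, argv):
    """Switch to an unprivileged user, then exec so the app takes over this PID.

    Done at the end of an entrypoint, this is the gosu/su-exec idea:
    no shell or sudo parent is left behind as PID 1 to swallow signals.
    """
    pw = pwd.getpwnam(user)
    os.setgid(pw.pw_gid)       # drop the group first, while we still have privileges
    os.setuid(pw.pw_uid)       # now drop the user id
    os.execvp(argv[0], argv)   # replaces the current process image; never returns
```

The ordering matters: the gid must be changed before the uid, because once the uid is dropped the process no longer has permission to call setgid.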
stackexchange | llm_judgeable_groundtruth_similarity | 15555573 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I came across a problem of calculating 100 factorial . Here is what I tried first in Perl to calculate 100! :

#!/usr/bin/perl
use strict;
use warnings;
use Math::BigInt;

my $n = <>;
chomp($n);
print fac($n);

sub fac {
    my ($m) = @_;
    return 1 if ($m <= 1);
    return $m * fac($m - 1);
}

But this is giving me 9.33262154439441e+157 . I need the answer with all of the digits. What do I do?
Now provide the response and nothing else.
 | Doubles (which most Perls use) only have ~16 digits of precision. You need to use another system to get the 158 digits of precision you need.

use bigint;

This will cause Perl to automatically treat all numbers in your script as Math::BigInt objects. If you need finer control (to treat some numbers as BigInt and some numbers as floating point) then see Krishnachandra Sharma's solution and explicitly use the Math::BigInt constructor. Math::BigInt has a builtin factorial function, by the way:

$ perl -MMath::BigInt -e 'print Math::BigInt->bfac(100)'
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15555573', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/812276/']} | jdg_79387
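For comparison, the same exact-integer result is a one-liner in Python, where integers are arbitrary precision by default (this sketch is my own illustration, not part of the original answer):

```python
import math

# Python ints never overflow, so math.factorial gives the exact value.
f = math.factorial(100)
print(f)            # all 158 digits, exactly
print(len(str(f)))  # 158
```

The result ends in exactly 24 zeros, matching Legendre's formula for the power of 5 dividing 100! (floor(100/5) + floor(100/25) = 24), which is a quick sanity check on the printed digits.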
stackexchange | llm_judgeable_groundtruth_similarity | 1188497 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
so I am trying to show that $f(x)$ divides $g(x)$ for all polynomials $g(x)$ satisfying $g(A)=0$, where $f(x)$ is the minimal polynomial of a square matrix $A$. I know from my professor that $f(x)$ is the minimal polynomial of $A$, so $f(A)=0$, and also $g(A)=0$. Therefore, $f(x)$ divides $g(x)$, i.e. $g(x)=f(x)h(x)$ where $h(x)$ is some other polynomial. My prof told us that to solve this question we need to let $b(x)=\gcd(f(x),g(x))$ and, by using Bezout's Identity, show that $b(A)=0$. I am just having a little trouble trying to prove this. I know that this identity says: For two polynomials $f_1(x)$ and $f_2(x)$, there exist polynomials $g_1(x)$ and $g_2(x)$ such that $g_1(x)f_1(x) + g_2(x)f_2(x) = \gcd(f_1(x), f_2(x))$. And the uniqueness of minimal polynomial theorem is: For a square matrix $A$, if $f_1(A) = f_2(A) = 0$ for two polynomials $f_1(x)$ and $f_2(x)$, then $g(A) = 0$ for $g(x) = \gcd(f_1(x), f_2(x))$. So the minimal polynomial of $A$ is unique up to a scalar. So since $f(x)$ is already the minimal polynomial, I know that $\deg(b(x))=\deg(f(x))$. I know how to prove the Bezout identity using the Euclidean algorithm, I'm just not exactly sure how I would show all of this using the notation he gave us of $b(x)=\gcd(f(x),g(x))$. Any help at all would be appreciated.
Now provide the response and nothing else.
| To me it's unclear what you are actually asking, but if you have defined $f$ to be the monic polynomial of minimal degree such that $f(A) = 0$, then you simply argue like this. Take any $g$ such that $g(A) = 0$. Divide $g$ by $f$, so$$g = q f + r, \qquad\text{where either $r = 0$, or $r \ne 0$ has degree smaller than that of $f$}.$$Evaluate in $A$ to get$$0 = g(A) = q(A) f(A) + r(A) = q(A) \cdot 0 + r(A) = r(A).$$Now if $r \ne 0$, you get a contradiction to the fact that $f$ has minimal degree among the polynomials vanishing on $A$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1188497', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/88134/']} | jdg_79388 |
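The division-with-remainder step that drives this proof is easy to experiment with numerically. Below is a small self-contained sketch; the helper function and the example polynomials are mine, chosen purely for illustration (g = (x−1)², the characteristic polynomial of the 2×2 identity matrix, and f = x−1, its minimal polynomial):

```python
def poly_div(g, f):
    """Divide polynomial g by f (coefficient lists, highest degree first).

    Returns (quotient, remainder) with deg(remainder) < deg(f),
    i.e. the unique q, r with g = q*f + r from the division algorithm.
    """
    g = list(g)
    q = []
    while len(g) >= len(f):
        coef = g[0] / f[0]
        q.append(coef)
        for i in range(len(f)):
            g[i] -= coef * f[i]
        g.pop(0)   # leading term has been cancelled
    return q, g

# g(x) = (x-1)^2 = x^2 - 2x + 1 annihilates A = I_2 (Cayley–Hamilton),
# while f(x) = x - 1 is the minimal polynomial of I_2 (since A - I = 0).
q, r = poly_div([1, -2, 1], [1, -1])
# The remainder is zero, mirroring the proof: r(A) = 0 with deg r < deg f
# would contradict minimality, so r must be the zero polynomial.
```

Running it with a g that f does not divide (say g = x² − 2) produces a nonzero remainder, which is exactly the situation the minimality of f rules out.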
stackexchange | llm_judgeable_groundtruth_similarity | 48482778 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently, I installed r-essentials using the conda command: conda install -c r r-essentials as described in this url: https://anaconda.org/r/r-essentials . However, when I try to run a new R kernel, it fails with this error: ...Anaconda3\R/bin/x64/Rterm.exe' is not recognized as an internal or external command, operable program or batch file. I want to remove the R folder that was created after installation, but I cannot find a way to remove that folder. I tried: conda uninstall r-essentials Then: conda remove R The last one according to this answer on reddit: https://www.reddit.com/r/rstats/comments/57zh19/help_removing_anaconda_r_and_using_system_r_with/ None of those removed the R folder. Is there a specific command to remove it?
Now provide the response and nothing else.
| r-essentials is a metapackage, and therefore cannot be uninstalled this way. Try running conda uninstall r-base to uninstall Anaconda R, and then install R regularly. Then, run which R to make sure that it works. You should now see the path to the system R, instead of the Anaconda R. I recommend then following the instructions here - this installation worked for me: http://irkernel.github.io/ . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48482778', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7618579/']} | jdg_79389 |
stackexchange | llm_judgeable_groundtruth_similarity | 49319333 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the default Flutter application code, I tried to change the following code from: primarySwatch: Colors.blueGrey to: primarySwatch: Colors.blueGrey[500] but this throws an error:

══╡ EXCEPTION CAUGHT BY WIDGETS LIBRARY ╞═══════════════════════════════════════════════════════════
I/flutter ( 4512): The following assertion was thrown building MyApp(dirty):
I/flutter ( 4512): type 'Color' is not a subtype of type 'MaterialColor' of 'primarySwatch' where
I/flutter ( 4512): Color is from dart:ui
I/flutter ( 4512): MaterialColor is from package:flutter/src/material/colors.dart
I/flutter ( 4512): int is from dart:core
I/flutter ( 4512):
I/flutter ( 4512): Either the assertion indicates an error in the framework itself, or we should provide substantially
I/flutter ( 4512): more information in this error message to help you determine and fix the underlying cause.
I/flutter ( 4512): In either case, please report this assertion by filing a bug on GitHub:
I/flutter ( 4512): https://github.com/flutter/flutter/issues/new
I/flutter ( 4512):
I/flutter ( 4512): When the exception was thrown, this was the stack:
I/flutter ( 4512): #0 new ThemeData (package:flutter/src/material/theme_data.dart:78:19)
I/flutter ( 4512): #1 MyApp.build (/data/user/0/com.hackathon.gunbanana/cache/gun_bananaEMVSSI/gun_banana/lib/main.dart:11:18)
I/flutter ( 4512): #2 StatelessElement.build (package:flutter/src/widgets/framework.dart:3678:28)
I/flutter ( 4512): #3 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3625:15)
I/flutter ( 4512): #4 Element.rebuild (package:flutter/src/widgets/framework.dart:3478:5)
I/flutter ( 4512): #5 ComponentElement._firstBuild (package:flutter/src/widgets/framework.dart:3605:5)
I/flutter ( 4512): #6 ComponentElement.mount (package:flutter/src/widgets/framework.dart:3600:5)
I/flutter ( 4512): #7 Element.inflateWidget (package:flutter/src/widgets/framework.dart:2890:14)
I/flutter ( 4512): #8 Element.updateChild (package:flutter/src/widgets/framework.dart:2693:12)
I/flutter ( 4512): #9 RenderObjectToWidgetElement._rebuild (package:flutter/src/widgets/binding.dart:852:16)
I/flutter ( 4512): #10 RenderObjectToWidgetElement.mount (package:flutter/src/widgets/binding.dart:823:5)
I/flutter ( 4512): #11 RenderObjectToWidgetAdapter.attachToRenderTree.<anonymous closure> (package:flutter/src/widgets/binding.dart:769:17)
I/flutter ( 4512): #12 BuildOwner.buildScope (package:flutter/src/widgets/framework.dart:2205:19)
I/flutter ( 4512): #13 RenderObjectToWidgetAdapter.attachToRenderTree (package:flutter/src/widgets/binding.dart:768:13)
I/flutter ( 4512): #14 BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&RendererBinding&WidgetsBinding.attachRootWidget (package:flutter/src/widgets/binding.dart:657:7)
I/flutter ( 4512): #15 runApp (package:flutter/src/widgets/binding.dart:699:7)
I/flutter ( 4512): #16 main (/data/user/0/com.hackathon.gunbanana/cache/gun_bananaEMVSSI/gun_banana/lib/main.dart:3:16)
I/flutter ( 4512): #17 _startIsolate.<anonymous closure> (dart:isolate-patch/dart:isolate/isolate_patch.dart:279)
I/flutter ( 4512): #18 _RawReceivePortImpl._handleMessage (dart:isolate-patch/dart:isolate/isolate_patch.dart:165)
I/flutter ( 4512): ════════════════════════════════════════════════════════════════════════════════════════════════════

How do I use shades?
Now provide the response and nothing else.
| TLDR Do ThemeData(primarySwatch: Colors.lime), Don't ThemeData(primarySwatch: Colors.lime.shade700), primarySwatch is not one color. It's all the possible material shades. If you look into ThemeData's doc it says: The primary color palette (the [primarySwatch]), chosen from one of the swatches defined by the material design spec. This should be one of the maps from the [Colors] class that do not have "accent" in their name. This implies that when needed the material component will use primary[500] but may also use other shades! In fact, primarySwatch is a shortcut to set a bunch of different colors: primaryColor primaryColorLight/Dark accentColor ... But you can override them separately depending on your needs, with a Color (and not the MaterialColor that primarySwatch requires) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49319333', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8096299/']} | jdg_79390 |
stackexchange | llm_judgeable_groundtruth_similarity | 1744858 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $$A=53\cdot 83\cdot109+40\cdot66\cdot96$$ Is this number prime or composite? I'm sure it's a composite number. But I do not know how to prove it.
Now provide the response and nothing else.
| Note that $53+96=83+66=109+40=149$, so with $a=53$, $b=83$, $c=109$ the second product is $(149-a)(149-b)(149-c)$. The identity $$abc+(149-c)(149-b)(149-a)=149\,(ab+bc+ca-149a-149b-149c+22201)$$ then shows that $A$ is a multiple of $149$, hence composite. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1744858', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/309360/']} | jdg_79391 |
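The factorization can also be checked numerically; a quick sketch (variable names are mine):

```python
# Verify that A = 53*83*109 + 40*66*96 is divisible by 149.
# Key observation: 53 + 96 = 83 + 66 = 109 + 40 = 149, so the second
# product is (149-a)(149-b)(149-c) with a, b, c = 53, 83, 109.
a, b, c = 53, 83, 109
A = a * b * c + (149 - a) * (149 - b) * (149 - c)

assert A == 53 * 83 * 109 + 40 * 66 * 96   # same number as in the question
assert A % 149 == 0                         # divisible by 149
print(A, "=", 149, "*", A // 149)           # 732931 = 149 * 4919
```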
stackexchange | llm_judgeable_groundtruth_similarity | 11125727 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a <select> and an <input> element, both specified to be 200px wide: <!doctype html><body><select style="width: 200px"></select><br/><input style="width: 200px" type="text"></body><html> One ends up wider 1,2,3, 4 than the other: What is the reason for this? If someone can give the reason , perhaps the solution would be obvious, and not a hack&pray . Layout The applied layout is perfectly reasonable: Update 1 : While i was writing this question Chrome updated itself from 17 to 19 . Update 2: Changing padding in the <input> from 1 to zero : <!doctype html><body><select style="width: 200px"></select><br/><input style="width: 200px; padding: 0" type="text"></body><html> doesn't make the <input> 200px wide (i.e. doesn't fix it). Update 3: Applying a CSS reset : <!doctype html><head><style type="text/css"> * { padding: 0; margin: 0; }</style><body><select style="width: 200px"></select><br/><input style="width: 200px; padding: 0" type="text"></body><html> Does not solve the problem: Also, i am less interested in a solution than an explanation. Footnotes 1,2,3 Chrome 17 19, Firefox, Internet Explorer 9 4 on Windows 7 64-bit Bonus Reading How to make <option> wider than <select> in IE6? (i don't want the option to be wider than the select, i'm not using IE6) How to show extended option in select list? (width of dropdown matches width of control) HTML input element wider than Containing Div (no containing <div> here) How to line up HTML input elements?
Now provide the response and nothing else.
| Your <input> isn't too wide; your <select> is too narrow! The real issue is that the <select> element doesn't behave like most elements do. It uses a box-sizing: border-box; where width is the width of the element after padding and borders are applied; behaving as though it alone were in "quirks" mode. This runs counter to every other standards mode html element, which uses box-sizing: content-box; To fix it, change the <select> to use the same box model as the rest of html: select { box-sizing: content-box; } OR change the <input> to use the same box model as the select: input { box-sizing: border-box; } The input element behaves like most elements do, using a content-box model, where the width is the width of the element before padding and borders are applied. There are default padding and borders set by your browser, so it is larger than you might want and/or expect. I always use a "CSS reset" at the top of my stylesheets, like so: * { padding: 0; margin: 0;} That will ensure there are no default padding or margins on any element. The select element is a different case though, where it behaves more like an element with box-sizing: border-box enabled, taking borders and padding into account in its width specification. If you add box-sizing: border-box to your input element, it will behave exactly as you expect/want. EDIT: Bolded the part that may be relevant to you. An alternate solution is reducing the specified width of the input element by a few pixels, so that it matches the width of the select box. Fiddle demonstrating both solutions: http://jsfiddle.net/n4yT2/2/ | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11125727', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12597/']} | jdg_79392 |
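The width difference is pure arithmetic between the two box models; a small sketch (the 2px padding and 1px border per side are illustrative assumptions, not the exact browser defaults):

```python
def rendered_width(specified, padding, border, box_sizing):
    """Total on-screen width of an element under each CSS box model."""
    if box_sizing == "content-box":   # default for most elements, e.g. <input>
        return specified + 2 * padding + 2 * border
    if box_sizing == "border-box":    # how <select> effectively behaves
        return specified              # padding and border fit inside
    raise ValueError(box_sizing)

# With width: 200px, hypothetical 2px padding and 1px border per side:
print(rendered_width(200, 2, 1, "content-box"))  # 206 -> the <input>
print(rendered_width(200, 2, 1, "border-box"))   # 200 -> the <select>
```

This is why zeroing the padding (as tried in Update 2) shrinks but does not fully equalize the widths: the border still adds to a content-box element.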
stackexchange | llm_judgeable_groundtruth_similarity | 1584370 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two JavaScript arrays: var array1 = ["Vijendra","Singh"];var array2 = ["Singh", "Shakya"]; I want the output to be: var array3 = ["Vijendra","Singh","Shakya"]; The output array should have repeated words removed. How do I merge two arrays in JavaScript so that I get only the unique items from each array in the same order they were inserted into the original arrays?
Now provide the response and nothing else.
| To just merge the arrays (without removing duplicates) ES5 version use Array.concat : var array1 = ["Vijendra", "Singh"];var array2 = ["Singh", "Shakya"];array1 = array1.concat(array2);console.log(array1); ES6 version use destructuring const array1 = ["Vijendra","Singh"];const array2 = ["Singh", "Shakya"];const array3 = [...array1, ...array2]; Since there is no 'built in' way to remove duplicates ( ECMA-262 actually has Array.forEach which would be great for this), we have to do it manually: Array.prototype.unique = function() { var a = this.concat(); for(var i=0; i<a.length; ++i) { for(var j=i+1; j<a.length; ++j) { if(a[i] === a[j]) a.splice(j--, 1); } } return a;}; Then, to use it: var array1 = ["Vijendra","Singh"];var array2 = ["Singh", "Shakya"];// Merges both arrays and gets unique itemsvar array3 = array1.concat(array2).unique(); This will also preserve the order of the arrays (i.e, no sorting needed). Since many people are annoyed about prototype augmentation of Array.prototype and for in loops, here is a less invasive way to use it: function arrayUnique(array) { var a = array.concat(); for(var i=0; i<a.length; ++i) { for(var j=i+1; j<a.length; ++j) { if(a[i] === a[j]) a.splice(j--, 1); } } return a;}var array1 = ["Vijendra","Singh"];var array2 = ["Singh", "Shakya"]; // Merges both arrays and gets unique itemsvar array3 = arrayUnique(array1.concat(array2)); For those who are fortunate enough to work with browsers where ES5 is available, you can use Object.defineProperty like this: Object.defineProperty(Array.prototype, 'unique', { enumerable: false, configurable: false, writable: false, value: function() { var a = this.concat(); for(var i=0; i<a.length; ++i) { for(var j=i+1; j<a.length; ++j) { if(a[i] === a[j]) a.splice(j--, 1); } } return a; }}); | {} | {'log_upvote_score': 12, 'links': ['https://Stackoverflow.com/questions/1584370', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/165873/']} | jdg_79393 |
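The same merge-and-deduplicate idea, keeping first-seen order, can be sketched outside JavaScript as well; here is a Python analog that uses an insertion-ordered dict (Python 3.7+) for the bookkeeping instead of the O(n²) nested loops above:

```python
def merge_unique(*arrays):
    """Concatenate arrays and drop duplicates, keeping first-seen order."""
    seen = {}                      # dicts preserve insertion order
    for array in arrays:
        for item in array:
            seen.setdefault(item, None)
    return list(seen)

array1 = ["Vijendra", "Singh"]
array2 = ["Singh", "Shakya"]
print(merge_unique(array1, array2))   # ['Vijendra', 'Singh', 'Shakya']
```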
stackexchange | llm_judgeable_groundtruth_similarity | 15487151 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a data set that looks something like this: id1 id2 size1 5400 5505 72 5033 5458 13 5452 2873 244 5452 5213 25 5452 4242 266 4823 4823 47 5505 5400 11 Where id1 and id2 are unique nodes in a graph, and size is a value assigned to the directed edge connecting them from id1 to id2 . This data set is fairly large (a little over 2 million rows). What I would like to do is sum the size column, grouped by unordered node pairs of id1 and id2 . For example, in the first row, we have id1=5400 and id2=5505 . There exists another row in the data frame where id1=5505 and id2=5400 . In the grouped data, the sum of the size columns for these two rows would be added to a single row. So in other words I want to summarize the data where I'm grouping on an (unordered) set of (id1,id2). I've found a way to do this using apply with a custom function that checks for the reversed column pair in the full data set, but this works excruciatingly slow. Does anyone know of a way to do this another way, perhaps with plyr or with something in the base packages that would be more efficient?
Now provide the response and nothing else.
| One way is to create extra columns with pmax and pmin of id1 and id2 as follows. I'll use a data.table solution here. require(data.table)DT <- data.table(DF)# Following mnel's suggestion, g1, g2 could be used directly in by# and it could be even shortened by using `id1` and `id2` as their namesDT.OUT <- DT[, list(size=sum(size)), by=list(id1 = pmin(id1, id2), id2 = pmax(id1, id2))]# id1 id2 size# 1: 5400 5505 18# 2: 5033 5458 1# 3: 5452 2873 24# 4: 5452 5213 2# 5: 5452 4242 26# 6: 4823 4823 4 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15487151', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1585874/']} | jdg_79394 |
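The pmin/pmax trick is language-agnostic: normalize each edge to a canonical (smaller, larger) key, then sum. A plain-Python sketch of the same grouping, using the sample data from the question:

```python
from collections import defaultdict

rows = [
    (5400, 5505, 7), (5033, 5458, 1), (5452, 2873, 24),
    (5452, 5213, 2), (5452, 4242, 26), (4823, 4823, 4), (5505, 5400, 11),
]

totals = defaultdict(int)
for id1, id2, size in rows:
    key = (min(id1, id2), max(id1, id2))   # unordered pair -> canonical order
    totals[key] += size

for (a, b), size in totals.items():
    print(a, b, size)   # the (5400, 5505) pair sums to 7 + 11 = 18
```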
stackexchange | llm_judgeable_groundtruth_similarity | 61868021 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to make WebClient return an Optional.empty() when I get a 404, not found, from the server. But instead I get a Optional with a User object with all properties set to null. What am I missing? @Overridepublic Optional<User> getUser(Username username) { return webClient .get() .uri(buildUrl(username)) .retrieve() .onStatus(HttpStatus.NOT_FOUND::equals, response -> Mono.empty()) .onStatus(HttpStatus::is4xxClientError, response -> createError(response, CLIENTERROR)) .onStatus(HttpStatus::is5xxServerError, response -> createError(response, SERVRERROR)) .bodyToMono(User.class) .blockOptional();}
Now provide the response and nothing else.
| You can make use of onError* functions from Mono to handle these cases. onErrorResume to create a empty/error Mono on exception and onErrorMap to transform exception to a different exception type. For example: @Overridepublic Optional<User> getUser(Username username) { return webClient .get() .uri(buildUrl(username)) .retrieve() .onStatus(httpStatus -> httpStatus.is4xxClientError() && httpStatus != HttpStatus.NOT_FOUND, response -> createError(response, CLIENTERROR)) .onStatus(HttpStatus::is5xxServerError, response -> createError(response, SERVRERROR)) .bodyToMono(User.class) .onErrorResume(WebClientResponseException.NotFound.class, notFound -> Mono.empty()) .blockOptional();} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61868021', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/655021/']} | jdg_79395 |
stackexchange | llm_judgeable_groundtruth_similarity | 65672356 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using several libraries built through vcpkg (such as civet-web and prometheus-cpp), against my Visual C++ projects. When building x86 all is perfect, in x64 I get a bunch of linker errors: error LNK2001: unresolved external symbol __CxxFrameHandler4 Searching online all references to this symbol/error are about specific projects, I cannot find what __CxxFrameHandler4 is and what problem this error is highlighting. I don't know if it's a problem with the way vcpkg is building the library, or a problem in my project or how to start looking for a solution. I did find this blog article but it is in reference to a preview of VS2019, I cannot find any settings related to it: https://devblogs.microsoft.com/cppblog/making-cpp-exception-handling-smaller-x64/ If anyone can explain what this is all about it would be a big help.
Now provide the response and nothing else.
| I faced the same issue when trying to install and use cpr with vcpkg: I wanted to use the cpr library in a VS2015 project. Reason: I had VS2019 installed, and vcpkg uses the latest Visual Studio toolset it finds. Resolution: add your own triplet, or change the existing ones so that your specified toolset is used. Adding a new one did not work in my case, so I changed the existing triplet files in vcpkg's triplets folder. I wanted vcpkg to use the toolset that comes with VS2015 (v140). Content of the x86-windows.cmake file: set(VCPKG_TARGET_ARCHITECTURE x86)set(VCPKG_CRT_LINKAGE dynamic)set(VCPKG_LIBRARY_LINKAGE dynamic)set(VCPKG_PLATFORM_TOOLSET "v140")set(VCPKG_DEP_INFO_OVERRIDE_VARS "v140") Content of the x64-windows.cmake file: set(VCPKG_TARGET_ARCHITECTURE x64)set(VCPKG_CRT_LINKAGE dynamic)set(VCPKG_LIBRARY_LINKAGE dynamic)set(VCPKG_PLATFORM_TOOLSET "v140")set(VCPKG_DEP_INFO_OVERRIDE_VARS "v140") | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/65672356', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/197229/']} | jdg_79396 |
stackexchange | llm_judgeable_groundtruth_similarity | 1565374 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following code works: class Foo(tuple): def __init__(self, b): super(Foo, self).__init__(tuple(b))if __name__ == '__main__': print Foo([3, 4])$ python play.py Result: play.py:4: DeprecationWarning: object.__init__() takes no parameters super(Foo, self).__init__(tuple(b))(3, 4) But not the following: class Foo(tuple): def __init__(self, a, b): super(Foo, self).__init__(tuple(b))if __name__ == '__main__': print Foo(None, [3, 4])$ python play.py Result: Traceback (most recent call last): File "play.py", line 7, in <module> print Foo(None, [3, 4])TypeError: tuple() takes at most 1 argument (2 given) Why?
Now provide the response and nothing else.
| Because tuples are immutable, you have to override __new__ instead: python docs object.__new__(cls[, ...]) Called to create a new instance of class cls . __new__() is a static method (special-cased so you need not declare it as such) that takes the class of which an instance was requested as its first argument. The remaining arguments are those passed to the object constructor expression (the call to the class). The return value of __new__() should be the new object instance (usually an instance of cls ). Typical implementations create a new instance of the class by invoking the superclass’s __new__() method using super(currentclass, cls).__new__(cls[, ...]) with appropriate arguments and then modifying the newly-created instance as necessary before returning it. If __new__() returns an instance of cls , then the new instance’s __init__() method will be invoked like __init__(self[, ...]) , where self is the new instance and the remaining arguments are the same as were passed to __new__() . If __new__() does not return an instance of cls , then the new instance’s __init__() method will not be invoked. __new__() is intended mainly to allow subclasses of immutable types (like int , str , or tuple ) to customize instance creation. It is also commonly overridden in custom metaclasses in order to customize class creation. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1565374', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/55246/']} | jdg_79397 |
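A minimal sketch of the fix for the two-argument case from the question, overriding __new__ as the quoted docs describe (the attribute name a is my choice for the extra state):

```python
class Foo(tuple):
    # tuple is immutable, so its contents must be fixed in __new__;
    # by the time __init__ runs, the tuple is already built.
    def __new__(cls, a, b):
        self = super().__new__(cls, b)   # build the tuple contents from b
        self.a = a                       # extra, non-tuple state
        return self

foo = Foo(None, [3, 4])
print(foo)      # (3, 4)
print(foo.a)    # None
```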
stackexchange | llm_judgeable_groundtruth_similarity | 23700598 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So this is a generic pattern question but one I have been going back and forth with for some time. Should a model have a save method in MV*? I often jump back and forth between Knockout, Ember, and sometimes even Angular but one of the persistent questions I always have is should the model have a save method on it's class or prototype that knows how to save changes to reduce dependencies around the application on services (aka the model has a service for saving that all of the other view models / controllers inherit by knowing about the model) or should there be a service that each of the view models / controllers depend on that has a specific method for saving changes to the object? Example JavaScript pseudo-code var person = new Model.Person();person.name = 'Bill';person.save(); vs var personService = require('services/person.service');var person = new Model.Person();person.name = 'Bill';personService.save(person); Both accomplish the same purpose of saving the person but in Example 1 the view model / controller doesn't know about the service or how it is implemented only that if you want to change a person, you save it. In example two obviously we have to know about not only the way to save it but how the save is implemented. Please read this before you answer - I realize that this is an opinion-based question but if you can back up your opinion with facts it will be factual, so please have references to back up any claims so that this is not closed as 'Primarily opinion based'
Now provide the response and nothing else.
| It depends on the pattern (and not at all opinion based imo). Your first example: A domain object having a .save method is called an ActiveRecord (also see here ). Your second example: A mapper between the data mapping and domain layers is called a Repository (also see here ) Active Record pattern Quoting Fowler: An object that wraps a row in a database table or view, encapsulates the database access, and adds domain logic on that data. The ActiveRecord pattern generally excels at prototyping, and is a good idea sometimes in very small applications where there exists a 1-1 mapping between objects and DB rows. Generally, you want to separate the logic of persisting an object and the logic of an actual domain object, since they are inherently different responsibilities. This is one of the simplest ways to logically handle the persistence of data. For example, this is what Backbone models and collections do with their sync() method, which causes them to persist to the server. It's often the reason you see larger Backbone applications not use sync() altogether in favor of implementing their own adapters. After all, in the Backbone world it forces a 1-1 mapping between your REST API and your domain objects, effectively making your domain objects and data transfer objects the same which can grow hard to maintain as your application grows. Repository Pattern Quoting Fowler again: Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects. A repository is generally a better design pattern for larger applications since it removes the persistence logic from your domain object, so it gives better separation of concerns. Implementation-wise, a repository sits on top of the data-mapping layer, but to its users it looks like a plain in-memory collection of domain objects. As any abstraction, another object for a responsibility has some overhead; however, as an app grows it starts to pay off. 
If you create a $resource with Angular and wrap it in a service that maps these objects from the db query to your domain objects (your data mapper) and then query that service like a collection - that's a repository for you. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23700598', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1200803/']} | jdg_79398 |
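A language-neutral sketch of the two shapes described above, in Python for brevity (the dict stands in for a real database; all names are illustrative):

```python
DB = {}  # stand-in for a database table: id -> name

# Active Record: the domain object knows how to persist itself.
class PersonRecord:
    def __init__(self, id, name):
        self.id, self.name = id, name

    def save(self):                     # persistence logic lives on the model
        DB[self.id] = self.name

# Repository: persistence lives in a separate, collection-like object.
class Person:
    def __init__(self, id, name):       # pure domain object, no DB knowledge
        self.id, self.name = id, name

class PersonRepository:
    def __init__(self, db):
        self._db = db

    def save(self, person):             # mediates between domain and storage
        self._db[person.id] = person.name

    def find(self, id):
        return Person(id, self._db[id])

PersonRecord(1, "Bill").save()
repo = PersonRepository(DB)
repo.save(Person(2, "Jill"))
print(DB)   # {1: 'Bill', 2: 'Jill'}
```

Swapping the dict for a real data source only touches the repository, which is the separation-of-concerns point made above.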
stackexchange | llm_judgeable_groundtruth_similarity | 18835190 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to rewrite some of my JavaScript code in TypeScript. Some of this code has references to an extension I added to the string object prototype. String.prototype.format = function () { var formatted = this; for (var i = 0; i < arguments.length; i++) { formatted = formatted.replace( RegExp("\\{" + i + "\\}", 'g'), arguments[i].toString()); } return formatted;}; However adding this with type script has been quite challenging. I have seen examples where you declare an extension of a basic interface then assign an function to the prototype to match the interface and provide your functionality. Like so... interface String { showString: () => string;}String.prototype.showString = (): string { return this;}; Except this errors because "_this is not defined..." The other things I have tried is to create a new class to extend string... export class MoreString extends string {} However this also does not work because you can only extend classes and string/String are not classes but built in types. What is the simplest way to extend String and access my extension method?
Now provide the response and nothing else.
| I ended up running into another issue later in the day that made me see what was happening here. From the top, here it is... TypeScript is built on top of JavaScript, so like @Nypan says JavaScript is valid TypeScript. Because of this the differences are very easy to overlook. A JavaScript function like this references the scope that the function executes in with "this". var f = function (postFix) {return this + postFix}; To add TypeScript syntax you would define types var f = function (postFix: string): string {return this + postFix}; In both of these cases this refers to the scope of the function just like classic JavaScript. However, things change when we do this... var f = (postFix: string): string {return this + postFix};//or more correctlyvar f = (postFix: string): string => {return this + postFix}; When you remove the function from in front of the parameters then it is no longer a classic function. It becomes a "Fat Arrow" function, apparently even without using the "=>" syntax. In the examples above "this" now refers to the class that the function exists in like in C#. In my attempt to assign a function to the prototype of string I omitted the function keyword so it was interpreted as a "Fat Arrow" function and tried to bind this to the scope of the class. However the function does not exist in a class and causes the error "_this is not defined". When I add the "function" keyword, the function is interpreted as I intended and works correctly. interface String { format: () => string;}String.prototype.format = function () : string { var formatted = this; for (var i = 0; i < arguments.length; i++) { formatted = formatted.replace( RegExp("\\{" + i + "\\}", 'g'), arguments[i].toString()); } return formatted;}; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18835190', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/46810/']} | jdg_79399 |
stackexchange | llm_judgeable_groundtruth_similarity | 266497 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Assume that there is an smooth structure of the matrix algebra $M_{n}(\mathbb{R})$ on fibers of the tangent bundle of a $n^2$ dimensional manifold. Is there a Riemannian metric on $M$ such that all operator of parallel transports would be an algebra isomorphism?
Now provide the response and nothing else.
| It is a classic theorem in linear algebra that any ($\mathbb{R}$-linear) automorphism $\phi$ of the ring $M_n(\mathbb{R})$ is inner, i.e., of the form $\phi(x) = axa^{-1}$ for some invertible $a\in M_n(\mathbb{R})$. In particular, the group of automorphisms of the algebra is $\mathrm{PGL}(n,\mathbb{R})$, a simple group of dimension $n^2{-}1$, whose maximal compact subgroups are all conjugate to $\mathrm{PO}(n) = \mathrm{O}(n)/\{\pm I_n\}$. To avoid trivial cases, assume that $n>1$ and, at least to start with, assume also that $M$ is simply connected. If $M^{n^2}$ carried a Riemannian metric $g$ whose Levi-Civita connection $\nabla$ preserved an $M_n(\mathbb{R})$ algebra structure on $TM$, then, because its holonomy would be compact, there would be an underlying $\nabla$-parallel $\mathrm{PO}(n)$-structure on $M$. Now, the representation of $\mathrm{PO}(n)$ on $M_n(\mathbb{R})$ is reducible, being the sum of three terms: $$M_n(\mathbb{R}) = \mathbb{R}{\cdot}I_n\ \oplus\ A_n(\mathbb{R})\ \oplus\ S_n(\mathbb{R}), $$where $A_n(\mathbb{R})\subset M_n(\mathbb{R})$ consists of the skew-symmetric $n$-by-$n$ matrices, while $S_n(\mathbb{R})\subset M_n(\mathbb{R})$ consists of the traceless symmetric $n$-by-$n$ matrices. When $n>2$, the representations of $\mathrm{PO}(n)$ on both $A_n(\mathbb{R})$ and $S_n(\mathbb{R})$ are irreducible and (almost) faithful. By the Bianchi identities, the two corresponding subbundles of $TM$ will be $\nabla$-parallel and hence, by the reducibility of the holonomy, the metric on $M$ will locally split as a product into three factors. The (almost) faithfulness of two of the representations guarantees (by the second Bianchi identity) that the curvature of $\nabla$ must vanish identically. Hence, the only case that occurs is the locally flat structure. One can, of course, do things with open sets and discrete quotients, but, locally, the flat case is the only case. When $n=2$, things are different. 
The group $\mathrm{PO}(2)$ acts on $A_2(\mathbb{R})\simeq \mathbb{R}$ as $\{\pm1\}\simeq \mathbb{Z}_2$, and it acts on $S_2(\mathbb{R})\simeq \mathbb{R}^2$ as $\mathrm{O}(2)$. Since $M$ is simply connected, it follows that, at least locally, $(M,g)$ can be written as a metric product$$M = \mathbb{R}\times \mathbb{R}\times \Sigma,$$ where $(\Sigma,h)$ is an oriented Riemannian surface, which, hence, has an associated orthogonal complex structure $J$ and area form $\omega$. Then one can define an $M_2(\mathbb{R})$ structure on $T_pM$ = $T_x\mathbb{R}\times T_y\mathbb{R}\times T_z\Sigma = \mathbb{R}\oplus\mathbb{R}\oplus T_z\Sigma$ by the rule$$(a,b,v)(a',b',v') = (aa'{-}bb'{+}h(v,v'),\ ab'{+}a'b{+}\omega(v,v'),\ av'{+}a'v{+}b\,Jv' {+} b'\,Jv).$$Thus, the general solution essentially depends on a Riemannian metric in dimension $2$, which is one function of two variables, up to diffeomorphism. If one is willing to consider pseudo-Riemannian metrics instead of only Riemannian ones, there are other solutions. For example, when $n=2$, $\mathrm{PGL}(2,\mathbb{R})$ acts preserving the irreducible splitting$$M_2(\mathbb{R}) = \mathbb{R}{\cdot}I_2\ \oplus Z_2(\mathbb{R})$$where $Z_2(\mathbb{R})\simeq \mathbb{R}^3$ is the space of $2$-by-$2$ traceless matrices. The action of $\mathrm{PGL}(2,\mathbb{R})$ on $Z_2(\mathbb{R})$ preserves a Lorentzian inner product $\langle,\rangle$ that satisfies $\langle v,v\rangle = \det(v)$, so that $v^2 = -\langle v,v\rangle I_2$ and the 'outer' or 'skew' product $[v,w] = vw-wv$. 
Consequently, if $(M^4,g)$ is a simply-connected Lorentzian $4$-manifold whose holonomy preserves an $M_2(\mathbb{R})$ algebra structure on $TM$, then $M$ can be written locally as a metric product $$M^4 = \mathbb{R} \times \Sigma^3$$where $(\Sigma^3,h)$ is an oriented Lorentzian $3$-manifold, and the $g$-parallel algebra structure on $TM= T\mathbb{R}\oplus T\Sigma$ can be written in the form$$(a,v)(a',v') = (aa'-h(v,v'),\ av' + a'v + v{\times}v')$$where $v{\times}v' = \ast_h(v\wedge v')$ and where $\ast_h:\Lambda^2(T\Sigma)\to T\Sigma$ is the Hodge star associated to the Lorentzian metric $h$ and the orientation. This gives a family of (Lorentzian) solutions depending on $3$ functions of $3$ variables locally. Finally, when $n>2$, there is the natural bi-invariant $M_n(\mathbb{R})$ structure on $M=\mathrm{GL}(n,\mathbb{R})$ itself, regarded as a pseudo-Riemannian symmetric space, together with its dual symmetric space $M^* = \mathrm{GL}(n,\mathbb{C})/\mathrm{GL}(n,\mathbb{R})$. By Berger's classification of non-symmetric holonomies (or by simply computing the space of curvatures of this particular $\mathrm{PGL}(n,\mathbb{R})$-structure and seeing that the only possibility is the locally symmetric one), these are, locally, the only examples when $n>2$. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/266497', 'https://mathoverflow.net', 'https://mathoverflow.net/users/36688/']} | jdg_79400 |
stackexchange | llm_judgeable_groundtruth_similarity | 2373791 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got a really big number: 5799218898. And want to shift it right to 13 bits. So, windows-calculator or python gives me: 5799218898 >> 13 | 100010100100001110011111100001 >> 1370791 | 10001010010000111 As expected. But Javascript: 5799218898 >> 13 | 100010100100001110011111100001 >> 13183624 | 101100110101001000 I think it because of internal integer representation in javascript, but cannot find anything about that.
Now provide the response and nothing else.
| In ECMAScript (JavaScript) bitwise operations always work on 32-bit integers. Therefore 5799218898 is truncated to 32 bits, which becomes 1504251602. This integer >> 13 gives 183624. In Python integers are arbitrary-precision, so there's no problem. (And the numbers in the Windows calculator are 64-bit, enough to fit 5799218898.) (And the correct answer should be 707912.) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2373791', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/421010/']} | jdg_79401 |
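The 32-bit truncation (ECMAScript's ToInt32 step) can be reproduced in Python to see where both results come from; a sketch:

```python
def to_int32(n):
    """Mimic ECMAScript's ToInt32: wrap to a signed 32-bit integer."""
    n &= 0xFFFFFFFF                                  # keep the low 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n # reinterpret as signed

x = 5799218898
print(to_int32(x))        # 1504251602 -- the value JavaScript actually shifts
print(to_int32(x) >> 13)  # 183624     -- the JavaScript result
print(x >> 13)            # 707912     -- the arbitrary-precision result
```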
stackexchange | llm_judgeable_groundtruth_similarity | 21297150 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I did setup a very simple route in Apache Camel where a query is sent to a JDBC component to execute. I got the Camel project up and running.What I'm trying to accomplish is to send dataSource1's database connection parameters in a RabbitMQ message's header. By connection parameters I mean driverClassName, url, username, password. The client of my app would enter all those parameters to decide what database to connect to. I will probably use routing slip depending on what driverClassName user has specified but that's a different thing. Please note that in this example here I put the SQL statement in a file to make it simpler. How can I accomplish that? Here's my Camel Context: <?xml version="1.0" encoding="UTF-8"?><beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:camel="http://camel.apache.org/schema/spring" xmlns:jdbc="http://www.springframework.org/schema/jdbc" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc.xsd"> <camel:camelContext xmlns="http://camel.apache.org/schema/spring"> <camel:route> <camel:from uri="file:src/data?noop=true"/> <camel:to uri="jdbc:dataSource1"/> <camel:convertBodyTo type="java.lang.String"/> <camel:log message="${body}"/> </camel:route> </camel:camelContext> <bean id="dataSource1" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="com.mysql.jdbc.Driver"/> <property name="url" value="jdbc:mysql://localhost:3306/employees"/> <property name="username" value="root"/> <property name="password" value=""/> </bean></beans> And Maven project: <?xml version="1.0" encoding="UTF-8"?><project xmlns="http://maven.apache.org/POM/4.0.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>group1</groupId> <artifactId>com.mycompany</artifactId> <packaging>jar</packaging> <version>1.0-SNAPSHOT</version> <name>A Camel Spring Route</name> <url>http://www.myorganization.org</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> </properties> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-core</artifactId> <version>2.12.2</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring</artifactId> <version>2.12.2</version> </dependency> <!-- logging --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.7.5</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.7.5</version> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.17</version> </dependency> <!-- testing --> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-test-spring</artifactId> <version>2.12.2</version> <scope>test</scope> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.26</version> </dependency> <!-- Jdbc --> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jdbc</artifactId> <version>2.12.2</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-jdbc</artifactId> <version>4.0.0.RELEASE</version> </dependency> </dependencies> <build> <defaultGoal>install</defaultGoal> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.5.1</version> <configuration> <source>1.6</source> 
<target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-resources-plugin</artifactId> <version>2.6</version> <configuration> <encoding>UTF-8</encoding> </configuration> </plugin> <!-- allows the route to be ran via 'mvn camel:run' --> <plugin> <groupId>org.apache.camel</groupId> <artifactId>camel-maven-plugin</artifactId> <version>2.12.2</version> </plugin> </plugins> </build></project>
Now provide the response and nothing else.
| 1. Using the x509 moduleopenssl x509 ......2 Using the ca moduleopenssl ca ...... You are missing the prelude to those commands. This is a two-step process. First you set up your CA, and then you sign an end entity certificate (a.k.a server or user). Both of the two commands elide the two steps into one. And both assume you have a an OpenSSL configuration file already setup for both CAs and Server (end entity) certificates. First, create a basic configuration file : $ touch openssl-ca.cnf Then, add the following to it: HOME = .RANDFILE = $ENV::HOME/.rnd####################################################################[ ca ]default_ca = CA_default # The default ca section[ CA_default ]default_days = 365 # How long to certify fordefault_crl_days = 30 # How long before next CRLdefault_md = sha256 # Use public key default MDpreserve = no # Keep passed DN orderingx509_extensions = ca_extensions # The extensions to add to the certemail_in_dn = no # Don't concat the email in the DNcopy_extensions = copy # Required to copy SANs from CSR to cert####################################################################[ req ]default_bits = 4096default_keyfile = cakey.pemdistinguished_name = ca_distinguished_namex509_extensions = ca_extensionsstring_mask = utf8only####################################################################[ ca_distinguished_name ]countryName = Country Name (2 letter code)countryName_default = USstateOrProvinceName = State or Province Name (full name)stateOrProvinceName_default = MarylandlocalityName = Locality Name (eg, city)localityName_default = BaltimoreorganizationName = Organization Name (eg, company)organizationName_default = Test CA, LimitedorganizationalUnitName = Organizational Unit (eg, division)organizationalUnitName_default = Server Research DepartmentcommonName = Common Name (e.g. 
server FQDN or YOUR name)commonName_default = Test CAemailAddress = Email AddressemailAddress_default = [email protected]####################################################################[ ca_extensions ]subjectKeyIdentifier = hashauthorityKeyIdentifier = keyid:always, issuerbasicConstraints = critical, CA:truekeyUsage = keyCertSign, cRLSign The fields above are taken from a more complex openssl.cnf (you can find it in /usr/lib/openssl.cnf ), but I think they are the essentials for creating the CA certificate and private key. Tweak the fields above to suit your taste. The defaults save you the time from entering the same information while experimenting with configuration file and command options. I omitted the CRL-relevant stuff, but your CA operations should have them. See openssl.cnf and the related crl_ext section. Then, execute the following. The -nodes omits the password or passphrase so you can examine the certificate. It's a really bad idea to omit the password or passphrase. $ openssl req -x509 -config openssl-ca.cnf -newkey rsa:4096 -sha256 -nodes -out cacert.pem -outform PEM After the command executes, cacert.pem will be your certificate for CA operations, and cakey.pem will be the private key. Recall the private key does not have a password or passphrase. You can dump the certificate with the following. $ openssl x509 -in cacert.pem -text -nooutCertificate: Data: Version: 3 (0x2) Serial Number: 11485830970703032316 (0x9f65de69ceef2ffc) Signature Algorithm: sha256WithRSAEncryption Issuer: C=US, ST=MD, L=Baltimore, CN=Test CA/[email protected] Validity Not Before: Jan 24 14:24:11 2014 GMT Not After : Feb 23 14:24:11 2014 GMT Subject: C=US, ST=MD, L=Baltimore, CN=Test CA/[email protected] Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (4096 bit) Modulus: 00:b1:7f:29:be:78:02:b8:56:54:2d:2c:ec:ff:6d: ... 
39:f9:1e:52:cb:8e:bf:8b:9e:a6:93:e1:22:09:8b: 59:05:9f Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: 4A:9A:F3:10:9E:D7:CF:54:79:DE:46:75:7A:B0:D0:C1:0F:CF:C1:8A X509v3 Authority Key Identifier: keyid:4A:9A:F3:10:9E:D7:CF:54:79:DE:46:75:7A:B0:D0:C1:0F:CF:C1:8A X509v3 Basic Constraints: critical CA:TRUE X509v3 Key Usage: Certificate Sign, CRL Sign Signature Algorithm: sha256WithRSAEncryption 4a:6f:1f:ac:fd:fb:1e:a4:6d:08:eb:f5:af:f6:1e:48:a5:c7: ... cd:c6:ac:30:f9:15:83:41:c1:d1:20:fa:85:e7:4f:35:8f:b5: 38:ff:fd:55:68:2c:3e:37 And test its purpose with the following (don't worry about the Any Purpose: Yes ; see "critical,CA:FALSE" but "Any Purpose CA : Yes" ). $ openssl x509 -purpose -in cacert.pem -inform PEMCertificate purposes:SSL client : NoSSL client CA : YesSSL server : NoSSL server CA : YesNetscape SSL server : NoNetscape SSL server CA : YesS/MIME signing : NoS/MIME signing CA : YesS/MIME encryption : NoS/MIME encryption CA : YesCRL signing : YesCRL signing CA : YesAny Purpose : YesAny Purpose CA : YesOCSP helper : YesOCSP helper CA : YesTime Stamp signing : NoTime Stamp signing CA : Yes-----BEGIN CERTIFICATE-----MIIFpTCCA42gAwIBAgIJAJ9l3mnO7y/8MA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV...aQUtFrV4hpmJUaQZ7ySr/RjCb4KYkQpTkOtKJOU1Ic3GrDD5FYNBwdEg+oXnTzWPtTj//VVoLD43-----END CERTIFICATE----- For part two, I'm going to create another configuration file that's easily digestible. First, touch the openssl-server.cnf (you can make one of these for user certificates also). $ touch openssl-server.cnf Then open it, and add the following. 
HOME = .RANDFILE = $ENV::HOME/.rnd####################################################################[ req ]default_bits = 2048default_keyfile = serverkey.pemdistinguished_name = server_distinguished_namereq_extensions = server_req_extensionsstring_mask = utf8only####################################################################[ server_distinguished_name ]countryName = Country Name (2 letter code)countryName_default = USstateOrProvinceName = State or Province Name (full name)stateOrProvinceName_default = MDlocalityName = Locality Name (eg, city)localityName_default = BaltimoreorganizationName = Organization Name (eg, company)organizationName_default = Test Server, LimitedcommonName = Common Name (e.g. server FQDN or YOUR name)commonName_default = Test ServeremailAddress = Email AddressemailAddress_default = [email protected]####################################################################[ server_req_extensions ]subjectKeyIdentifier = hashbasicConstraints = CA:FALSEkeyUsage = digitalSignature, keyEnciphermentsubjectAltName = @alternate_namesnsComment = "OpenSSL Generated Certificate"####################################################################[ alternate_names ]DNS.1 = example.comDNS.2 = www.example.comDNS.3 = mail.example.comDNS.4 = ftp.example.com If you are developing and need to use your workstation as a server, then you may need to do the following for Chrome. Otherwise Chrome may complain a Common Name is invalid ( ERR_CERT_COMMON_NAME_INVALID ) . I'm not sure what the relationship is between an IP address in the SAN and a CN in this instance. # IPv4 localhostIP.1 = 127.0.0.1# IPv6 localhostIP.2 = ::1 Then, create the server certificate request. Be sure to omit -x509 *. Adding -x509 will create a certificate, and not a request. $ openssl req -config openssl-server.cnf -newkey rsa:2048 -sha256 -nodes -out servercert.csr -outform PEM After this command executes, you will have a request in servercert.csr and a private key in serverkey.pem . 
And you can inspect it again. $ openssl req -text -noout -verify -in servercert.csrCertificate: verify OK Certificate Request: Version: 0 (0x0) Subject: C=US, ST=MD, L=Baltimore, CN=Test Server/[email protected] Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:ce:3d:58:7f:a0:59:92:aa:7c:a0:82:dc:c9:6d: ... f9:5e:0c:ba:84:eb:27:0d:d9:e7:22:5d:fe:e5:51: 86:e1 Exponent: 65537 (0x10001) Attributes: Requested Extensions: X509v3 Subject Key Identifier: 1F:09:EF:79:9A:73:36:C1:80:52:60:2D:03:53:C7:B6:BD:63:3B:61 X509v3 Basic Constraints: CA:FALSE X509v3 Key Usage: Digital Signature, Key Encipherment X509v3 Subject Alternative Name: DNS:example.com, DNS:www.example.com, DNS:mail.example.com, DNS:ftp.example.com Netscape Comment: OpenSSL Generated Certificate Signature Algorithm: sha256WithRSAEncryption 6d:e8:d3:85:b3:88:d4:1a:80:9e:67:0d:37:46:db:4d:9a:81: ... 76:6a:22:0a:41:45:1f:e2:d6:e4:8f:a1:ca:de:e5:69:98:88: a9:63:d0:a7 Next, you have to sign it with your CA. You are almost ready to sign the server's certificate by your CA. The CA's openssl-ca.cnf needs two more sections before issuing the command. First, open openssl-ca.cnf and add the following two sections. ####################################################################[ signing_policy ]countryName = optionalstateOrProvinceName = optionallocalityName = optionalorganizationName = optionalorganizationalUnitName = optionalcommonName = suppliedemailAddress = optional####################################################################[ signing_req ]subjectKeyIdentifier = hashauthorityKeyIdentifier = keyid,issuerbasicConstraints = CA:FALSEkeyUsage = digitalSignature, keyEncipherment Second, add the following to the [ CA_default ] section of openssl-ca.cnf . I left them out earlier, because they can complicate things (they were unused at the time). Now you'll see how they are used, so hopefully they will make sense. 
base_dir = .certificate = $base_dir/cacert.pem # The CA certifcateprivate_key = $base_dir/cakey.pem # The CA private keynew_certs_dir = $base_dir # Location for new certs after signingdatabase = $base_dir/index.txt # Database index fileserial = $base_dir/serial.txt # The current serial numberunique_subject = no # Set to 'no' to allow creation of # several certificates with same subject. Third, touch index.txt and serial.txt : $ touch index.txt$ echo '01' > serial.txt Then, perform the following: $ openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out servercert.pem -infiles servercert.csr You should see similar to the following: Using configuration from openssl-ca.cnfCheck that the request matches the signatureSignature okThe Subject's Distinguished Name is as followscountryName :PRINTABLE:'US'stateOrProvinceName :ASN.1 12:'MD'localityName :ASN.1 12:'Baltimore'commonName :ASN.1 12:'Test CA'emailAddress :IA5STRING:'[email protected]'Certificate is to be certified until Oct 20 16:12:39 2016 GMT (1000 days)Sign the certificate? [y/n]:Y1 out of 1 certificate requests certified, commit? [y/n]YWrite out database with 1 new entriesData Base Updated After the command executes, you will have a freshly minted server certificate in servercert.pem . The private key was created earlier and is available in serverkey.pem . Finally, you can inspect your freshly minted certificate with the following: $ openssl x509 -in servercert.pem -text -nooutCertificate: Data: Version: 3 (0x2) Serial Number: 9 (0x9) Signature Algorithm: sha256WithRSAEncryption Issuer: C=US, ST=MD, L=Baltimore, CN=Test CA/[email protected] Validity Not Before: Jan 24 19:07:36 2014 GMT Not After : Oct 20 19:07:36 2016 GMT Subject: C=US, ST=MD, L=Baltimore, CN=Test Server Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:ce:3d:58:7f:a0:59:92:aa:7c:a0:82:dc:c9:6d: ... 
f9:5e:0c:ba:84:eb:27:0d:d9:e7:22:5d:fe:e5:51: 86:e1 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: 1F:09:EF:79:9A:73:36:C1:80:52:60:2D:03:53:C7:B6:BD:63:3B:61 X509v3 Authority Key Identifier: keyid:42:15:F2:CA:9C:B1:BB:F5:4C:2C:66:27:DA:6D:2E:5F:BA:0F:C5:9E X509v3 Basic Constraints: CA:FALSE X509v3 Key Usage: Digital Signature, Key Encipherment X509v3 Subject Alternative Name: DNS:example.com, DNS:www.example.com, DNS:mail.example.com, DNS:ftp.example.com Netscape Comment: OpenSSL Generated Certificate Signature Algorithm: sha256WithRSAEncryption b1:40:f6:34:f4:38:c8:57:d4:b6:08:f7:e2:71:12:6b:0e:4a: ... 45:71:06:a9:86:b6:0f:6d:8d:e1:c5:97:8d:fd:59:43:e9:3c: 56:a5:eb:c8:7e:9f:6b:7a Earlier, you added the following to CA_default : copy_extensions = copy . This copies extension provided by the person making the request. If you omit copy_extensions = copy , then your server certificate will lack the Subject Alternate Names (SANs) like www.example.com and mail.example.com . If you use copy_extensions = copy , but don't look over the request, then the requester might be able to trick you into signing something like a subordinate root (rather than a server or user certificate). Which means he/she will be able to mint certificates that chain back to your trusted root. Be sure to verify the request with openssl req -verify before signing. If you omit unique_subject or set it to yes , then you will only be allowed to create one certificate under the subject's distinguished name. unique_subject = yes # Set to 'no' to allow creation of # several ctificates with same subject. Trying to create a second certificate while experimenting will result in the following when signing your server's certificate with the CA's private key: Sign the certificate? [y/n]:Yfailed to update databaseTXT_DB error number 2 So unique_subject = no is perfect for testing. 
If you want to ensure the Organizational Name is consistent between self-signed CAs, Subordinate CA and End-Entity certificates, then add the following to your CA configuration files: [ policy_match ]organizationName = match If you want to allow the Organizational Name to change, then use: [ policy_match ]organizationName = supplied There are other rules concerning the handling of DNS names in X.509/PKIX certificates. Refer to these documents for the rules: RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile RFC 6125, Representation and Verification of Domain-Based Application Service Identity within Internet Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of Transport Layer Security (TLS) RFC 6797, Appendix A, HTTP Strict Transport Security (HSTS) RFC 7469, Public Key Pinning Extension for HTTP CA/Browser Forum Baseline Requirements CA/Browser Forum Extended Validation Guidelines RFC 6797 and RFC 7469 are listed, because they are more restrictive than the other RFCs and CA/B documents. RFC's 6797 and 7469 do not allow an IP address, either. | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/21297150', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/16073/']} | jdg_79402 |
stackexchange | llm_judgeable_groundtruth_similarity | 1818641 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The Win32 API has, for example, two methods: StrFormatByteSize and StrFormatByteSizeEx. Even though both methods semantically do the same thing, and the Ex counterpart only offers a new parameter to slightly change the behavior, couldn't they have been two overloads of the same function? Is it a limitation of C/C++, or what is the possible reason for such an awkward convention?
Now provide the response and nothing else.
| The Win32 API is a C (not C++) API. The C language doesn't support overloaded functions. Complete aside: The Win32 API uses __stdcall -decorated functions, which include the number of bytes of parameters as part of the function name. __stdcall is not part of the C language, but Windows linkers have to know about it. Microsoft could have used this to implement some kind of overloading, but (since a lot of languages don't understand overloading) that would limit the number of languages that could be used to program Windows. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1818641', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/36464/']} | jdg_79403 |
stackexchange | llm_judgeable_groundtruth_similarity | 20529614 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to figure out why this isn't working in my table view controller's viewDidLoad: self.headerView = [[UIView alloc] initWithFrame:CGRectMake(0, 5, 320,0)];self.headerLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, 5, 320, 0)];self.headerLabel.textAlignment = NSTextAlignmentCenter;self.headerLabel.text = @"text";[self.view addSubview:self.headerView];[self.headerView addSubview:self.headerLabel];[UIView animateWithDuration:.5 delay:0.0 options:UIViewAnimationOptionCurveEaseIn animations:^{ self.headerLabel.frame = CGRectMake(0, 5, 320,15); self.headerView.frame = CGRectMake(0, 5, 320,15);} completion:^(BOOL finished) { [UIView animateWithDuration:.5 delay:2.0 options:UIViewAnimationOptionCurveEaseIn animations:^{ self.headerLabel.frame = CGRectMake(0, 5, 320,0); self.headerView.frame = CGRectMake(0, 5, 320,0); } completion:^(BOOL finished) { }];}]; If I remove the slide-back-up part in the completion block of the first animate call, it works. The view slides down correctly. However, I cannot get it to shrink back up at all. When I include the slide-up code in the completion block, the view is not displayed at all on load, and I don't know why; I'm going insane.
Now provide the response and nothing else.
| I'm not sure why the label disappears, but you can fix that by giving the view and label an appropriate height when you create them, and only animate the label's y position rather than its height. - (void)viewDidLoad { [super viewDidLoad]; self.headerView = [[UIView alloc] initWithFrame:CGRectMake(0, -30, 320,30)]; self.headerView.backgroundColor = [UIColor yellowColor]; self.headerLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, 5, 320, 21)]; self.headerLabel.textAlignment = NSTextAlignmentCenter; self.headerLabel.text = @"text"; [self.view addSubview:self.headerView]; [self.headerView addSubview:self.headerLabel]; [UIView animateWithDuration:.5 delay:0.0 options:UIViewAnimationOptionCurveEaseIn animations:^{ self.headerView.frame = CGRectMake(0, 0, 320,30); } completion:^(BOOL finished) { [UIView animateWithDuration:.5 delay:2.0 options:UIViewAnimationOptionCurveEaseIn animations:^{ self.headerView.frame = CGRectMake(0, -30, 320,30); } completion:^(BOOL finished) { }]; }];} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20529614', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/652611/']} | jdg_79404 |
stackexchange | llm_judgeable_groundtruth_similarity | 178169 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I apply two vertical but opposite forces with the same magnitude to a body, as shown in the picture: According to Newton's second law, the center of mass shouldn't accelerate, since the sum of the forces in the system is zero. However, I believe that in the situation shown in the picture the body would start rotating around a point which is not its center of mass; hence, the center of mass would accelerate. What is wrong with my reasoning?
Now provide the response and nothing else.
| This apparent paradox is actually not a paradox at all. In fact, it is because of Newton's second law that we can say that the object will rotate exactly about its centre of mass. This is true for any body on which the net external force is 0 but a net torque is present. In Newtonian mechanics, the centre of mass serves to simplify calculations, for exactly the reason above. Although I can understand you thinking that the object will not rotate about the centre of mass, this opinion has no mathematical or logical grounds. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/178169', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/60667/']} | jdg_79405 |
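The argument in the answer above can be written out compactly. The sketch below is my own addition, assuming the force couple consists of $\vec{F}$ applied at position $\vec{r}_1$ and $-\vec{F}$ at $\vec{r}_2$ (both measured from the centre of mass):

```latex
\sum_i \vec{F}_i = \vec{F} + (-\vec{F}) = \vec{0}
\;\Rightarrow\;
M\,\vec{a}_{\mathrm{cm}} = \vec{0},
\qquad
\vec{\tau}_{\mathrm{cm}} = \vec{r}_1\times\vec{F} + \vec{r}_2\times(-\vec{F})
= (\vec{r}_1-\vec{r}_2)\times\vec{F} \neq \vec{0}.
```

So the centre of mass keeps zero acceleration, while the nonzero torque gives the body an angular acceleration about that non-accelerating point, which is exactly the motion the answer describes.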
stackexchange | llm_judgeable_groundtruth_similarity | 27914 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I understand the arc length formula is derived by adding the distances between a series of points on the curve and using the mean value theorem to get: $ L = \int_a^b \sqrt{ 1 + (f'(x))^2 } dx $ But is there an intuition here I'm missing? Something about taking the integral of the derivative seems like it should mean something.
Now provide the response and nothing else.
| Divide the interval into $ n $ equal parts, $ a = x_0 \lt x_1 \lt \cdots \lt x_n = b $ , with $ x_{i+1} = x_i + \Delta x $ . Suppose you want to approximate the curve between $(x_i,f(x_i))$ and $(x_i+\Delta x,f(x_i+\Delta x))$ . You could simply approximate it with the straight line between the two points, whose length is $$\sqrt{\left( f(x_i+\Delta x) - f(x_i)\right)^2 + (\Delta x)^2}.$$ In the picture below, the black line is the graph $y=f(x)$ , and the green line is the line that joins $(x_i,f(x_i))$ on the bottom left and $(x_i+\Delta x,f(x_i+\Delta x))$ on the top right. Then you would have that the arc length is approximated by the sum of the lengths $$\text{Arc Length} \approx \sum_{i=1}^n \sqrt{(f(x_i+\Delta x) - f(x_i))^2 + (\Delta x)^2}$$ and take the limit as $n\to \infty$ . Unfortunately, the expression in the sum is not of the form necessary to view it as a Riemann sum, so you cannot turn that limit into a limit of Riemann sums, and from there to an integral. So we take a slightly different approach. Instead of approximating the length of the curve from $(x_i,f(x_i))$ to $(x_i+\Delta x, f(x_i+\Delta x))$ with the straight line between the two points, we will approximate it with the tangent line to the graph of $f$ at $x_i$ , from $(x_i,f(x_i))$ to the point $x_i+\Delta x$ . This is the blue line in the picture above. If $\Delta x$ is small, then we know the tangent line is a very good approximation for the curve on $[x_i,x_i+\Delta x]$ , so the line will be a good approximation to the length of the curve. Now, the tangent line to $y=f(x)$ through the point $x_i$ is given by $$y = f(x_i) + f'(x_i)(x-x_i).$$ At $x_i+\Delta x$ , the line goes through $f(x_i) + f'(x_i)\Delta x$ . So this tangent line goes from $(x_i,f(x_i))$ to $(x_i+\Delta x ,f(x_i)+f'(x_i)\Delta x)$ .
The length of the line between those two points is \begin{align*}&\sqrt{\Bigl( (x_i+\Delta x) - x_i\Bigr)^2 + \Bigl((f(x_i)+f'(x_i)\Delta x) - f(x_i)\Bigr)^2}\\\&\quad = \sqrt{ (\Delta x)^2 + \left(f'(x_i)\Delta x\right)^2} \\\&\quad = \sqrt{\left(1 + \left(f'(x_i)\right)^2\right)\Delta x^2} = \left(\sqrt{1 + (f'(x_i))^2}\right)\Delta x.\end{align*} Adding all of these, we get an approximation to the arc length: $$\text{Arc Length} \approx \sum_{i=1}^n \left(\sqrt{1 + (f'(x_i))^2}\right)\Delta x.$$ Now, these can be seen as Riemann sums. So if we take the limit as $n\to\infty$ , the approximation gets better and better (because the tangent gets closer and closer to the curve, giving a better approximation). At the limit, we get the exact arc length, and the limit of the Riemann sums becomes the integral. So \begin{align*}\text{Arc Length} &= \lim_{n\to\infty}\sum_{i=1}^n\left(\sqrt{1 + (f'(x_i))^2}\right)\Delta x\\\&= \int_a^b \sqrt{1+(f'(x))^2}\,dx.{}{}{}\end{align*} | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/27914', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1069/']} | jdg_79406 |
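A quick numerical cross-check of the derivation above (my own addition; the helper functions `polyline_length` and `simpson` are hypothetical names, not from the answer): for $f(x)=\sin x$ on $[0,\pi]$, the polygonal length $\sum_i \sqrt{(\Delta x)^2 + (f(x_i+\Delta x)-f(x_i))^2}$ and the integral $\int_0^\pi \sqrt{1+\cos^2 x}\,dx$ should agree once the partition is fine, which is exactly the limit the answer takes.

```python
import math

def polyline_length(f, a, b, n):
    """Sum of the straight-line segment lengths between n+1 equally spaced points."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(
        math.hypot(xs[i + 1] - xs[i], f(xs[i + 1]) - f(xs[i]))
        for i in range(n)
    )

def simpson(g, a, b, n):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

f = math.sin
# Arc length via the integral formula derived above: sqrt(1 + f'(x)^2) with f' = cos.
arc_integral = simpson(lambda x: math.sqrt(1.0 + math.cos(x) ** 2), 0.0, math.pi, 10_000)
# Arc length via the polygonal approximation with a very fine partition.
arc_polyline = polyline_length(f, 0.0, math.pi, 100_000)

print(arc_integral)
print(abs(arc_integral - arc_polyline))  # tiny: the two constructions agree in the limit
```

Both numbers land on the same value (a bit above the straight-line distance $\pi$), which is the content of the limit argument in the answer.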
stackexchange | llm_judgeable_groundtruth_similarity | 33521614 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given this HTML: <table id="my-table"> <tr> <td> I want to apply my style to this </td> <td> <table> <tr> <td> But not to this </td> </tr> </table> </td> </tr></table> I would like to apply a style to the cells that are first-level children of the table. I thought I could use this: #my-table > tr > td { color: #ff0000;} ... But it doesn't work. Is it because you can't use multiple > selectors? How can I do it?
Now provide the response and nothing else.
| There are two aspects to what's going on: The browser will insert a tbody element if you don't include one (or at least, most do, most of the time; I always use an explicit one, so I don't know the edge cases), and so even if you don't have it in your HTML, you need it in your selector if you're using > (the child combinator). That would change your selector to #my-table > tbody > tr > td . (I advocate always including tbody explicitly in the HTML, just to avoid confusion and edge cases.) The table inside the td inherits its color from the td it's inside. So although your selector targets the correct elements, their descendant elements inherit the color. You can work around that by giving an explicit color to #my-table td elements, and then the special color only to #my-table > tbody > tr > td elements. Example (note the tbody in the HTML and also in the selector): #my-table td { color: black;}#my-table > tbody > tr > td { color: #ff0000;} <table id="my-table"> <tbody> <tr> <td> I want to apply my style to this </td> <td> <table> <tr> <td> But not to this </td> </tr> </table> </td> </tr> </tbody></table> In a comment you've said you don't control the inner table. If you control the outer table, you can solve this by just putting a class on the cells you want to apply the rule to, and then have the rule only apply to td s with that class: Example (note the tbody in the HTML and also in the selector): #my-table > tbody > tr > td.first { color: #ff0000;} <table id="my-table"> <tbody> <tr> <td class="first"> I want to apply my style to this </td> <td> <table> <tr> <td> But not to this </td> </tr> </table> </td> </tr> </tbody></table> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33521614', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2520643/']} | jdg_79407 |
stackexchange | llm_judgeable_groundtruth_similarity | 484877 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I come from a biology background and am not strong in statistics. From what I have read, the sum of independent Poisson-distributed random variables has a Poisson distribution, but the average of these variables does not have a Poisson distribution. Why is that? Can someone show me the maths? I thought the average would still have a Poisson distribution. Some background: this concerns technical replicates in RNA-seq. Marioni et al. found that technical replicates follow a Poisson distribution. Tools that accommodate technical replicates sum the values but do not average the values. I can accept this at face value, but I would like to understand the maths/stats behind this.
Now provide the response and nothing else.
| The Poisson distribution is a probability distribution defined on the set $\mathbb N$ of natural numbers $0,1,2,\dots$ . We also say that $\mathbb N$ is the support of the Poisson distribution. This distribution is often used to model experiments whose outcomes represent counts. If $X$ is a random variable following a Poisson distribution with parameter $\lambda$ then for a natural number $k \in \mathbb N$ , $$\mathbb P(X=k) = e^{-\lambda} \frac{\lambda^k}{k!}.$$ It can be shown that the sum $X+Y$ of two independent Poisson-distributed variables $X,Y$ still follows a Poisson distribution. Now, assume that you have $N$ independent random variables $X_1, \dots, X_N$ , each of them following a Poisson distribution. Their sum $X_1+ \dots + X_N$ will be a natural number, and by an induction argument we can show that $X_1+ \dots + X_N$ also follows a Poisson distribution. However, their average, $\frac{X_1 + \dots + X_N}{N}$ , does not need to be a natural number. For example, if $N=3$ and $X_1 = 1, X_2 = 0, X_3 = 7$ then $\frac{X_1 +X_2 + X_3}{3} = \frac{8}{3} \approx 2.67.$ Thus the average of Poisson random variables can take non-integer values (but it also can take integer values), which is against the definition of a Poisson distribution. More precisely, the support of the average is not $\mathbb N$ but rather belongs to $\mathbb Q$ , the set of rational numbers (which contains $\mathbb N$ ). This means that the average can't (by definition) follow a Poisson distribution. In the same spirit, the statement above "It can be shown that the sum $X+Y$ of two independent Poisson-distributed variables $X,Y$ still follows a Poisson distribution" is not true if $X$ and $Y$ are not independent anymore.
Take for example $Y=X$ (thus $X$ and $Y$ are not independent) then the sum $X+Y=2X$ only takes even values and thus $\mathbb P(2X=1) = \mathbb P(2X=3) = \dots = 0$ which is not in agreement with the definition of a Poisson distribution since the quantity $e^{-\lambda} \frac{\lambda^k}{k!}$ is strictly greater than $0$ for all natural numbers $k$ . I hope this is clear enough to help. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/484877', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/295072/']} | jdg_79408 |
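The two claims in the answer above can be checked numerically. The sketch below is my own addition (the function names are hypothetical): it convolves two Poisson PMFs to confirm that the sum of independent Poisson($\lambda_1$) and Poisson($\lambda_2$) variables has exactly the Poisson($\lambda_1+\lambda_2$) PMF, and then reproduces the answer's $(1+0+7)/3$ example in exact rational arithmetic to show the average leaving $\mathbb N$.

```python
import math
from fractions import Fraction

def poisson_pmf(lam, k):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam1, lam2 = 1.3, 2.5

def sum_pmf(k):
    """P(X + Y = k) by discrete convolution; the sum over j = 0..k is exact
    because both variables are supported on the natural numbers."""
    return sum(poisson_pmf(lam1, j) * poisson_pmf(lam2, k - j) for j in range(k + 1))

# The convolution matches the Poisson(lam1 + lam2) PMF term by term.
for k in range(15):
    assert abs(sum_pmf(k) - poisson_pmf(lam1 + lam2, k)) < 1e-12

# The average of integer outcomes need not be an integer: the answer's example.
avg = Fraction(1 + 0 + 7, 3)
print(avg)                    # 8/3
print(avg.denominator != 1)   # True: 8/3 is not a natural number
```

The assertions pass because the binomial theorem collapses the convolution, which is the analytic fact behind "the sum is still Poisson"; the `Fraction` at the end makes the support argument for the average concrete.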
stackexchange | llm_judgeable_groundtruth_similarity | 2996201 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My physics textbook makes use of the result: $$\int_{-\infty}^{\infty}dx \dfrac{x^2 e^x}{(e^x+1)^2} = \dfrac{\pi^2}{3}$$ I'm really curious about how I can derive this, but I honestly don't know what to search for. My instinct is to transform to polar coordinates, but I would like some guidance. Any help appreciated!
Now provide the response and nothing else.
| First off, notice the integrand is even, so we have $$ \int_{-\infty}^\infty \frac{x^2 e^x}{(e^x+1)^2}dx = 2\int_{0}^\infty \frac{x^2 e^x}{(e^x+1)^2}dx.$$ Then we can expand $$ \frac{1}{(1+x)^2} = \sum_{n=0}^\infty (-1)^{n}(n+1) x^n $$ and write $$ 2\int_{0}^\infty \frac{x^2 e^x}{(e^x+1)^2}dx=\\ =2\int_{0}^\infty \frac{x^2 e^{-x}}{(e^{-x}+1)^2}dx \\ = 2 \int_0^\infty x^2e^{-x}\sum_{n=0}^\infty (-1)^n (n+1) e^{-nx}\\=2\sum_{n=0}^\infty (-1)^n(n+1) \int_0^\infty x^2 e^{-(n+1)x}dx\\=4\sum_{n=0}^\infty \frac{(-1)^n}{(n+1)^2}.$$ Then we have $$ \sum_{n=0}^\infty \frac{(-1)^n}{(n+1)^2} = 1-\frac{1}{2^2} + \frac{1}{3^2}\ldots = (1-\frac{2}{2^2})(1+\frac{1}{2^2} + \frac{1}{3^2}\ldots) = \frac{\pi^2}{12}$$ Edit Realized this can be simplified somewhat by first doing an integration by parts $$ 2\int_0^\infty \frac{x^2 e^x}{(e^x+1)^2}dx = 4\int_0^\infty \frac{x}{e^x+1}dx$$ followed by a similar series expansion. Additionally, this solution somewhat 'misses the point' relative to contour integration approaches since that's one of the slicker ways to get $\sum_{n}1/n^2=\pi^2/6$ in the first place (and also transformation from the sums to integrals like this are the source of many zeta function identities). | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2996201', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/247469/']} | jdg_79409 |
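The closed form can be cross-checked numerically; the sketch below (not from the original answer) rewrites the integrand as $x^2/(4\cosh^2(x/2))$ — using $(e^x+1)^2 = 4e^x\cosh^2(x/2)$ — to avoid overflow, and a plain trapezoidal rule on a symmetric range suffices because the tails decay like $x^2e^{-|x|}$:

```python
import math

# Numerically check that the integral equals pi^2/3. The identity
# (e^x + 1)^2 = 4 e^x cosh(x/2)^2 turns the integrand into a form
# that never overflows for large |x|.
def f(x):
    return x * x / (4.0 * math.cosh(x / 2.0) ** 2)

a, b, n = -40.0, 40.0, 200_000  # tails beyond |x| = 40 are ~1e-14 and negligible
h = (b - a) / n
approx = h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

print(approx, math.pi ** 2 / 3)  # the two values agree to several decimal places
```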
stackexchange | llm_judgeable_groundtruth_similarity | 77777 |
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
| I went on a road trip of about 870 miles. I hit a speed bump at night because I couldn't see it due to the absence of road lights. I applied the brakes aggressively as soon as I saw the speed bump, and I think the vehicle speed was about 25 mph when it hit the bump. After the incident, the front tires of the vehicle started bouncing (a minor bounce, not severe) in the speed range of 40 mph - 47 mph. The bounce goes away outside this speed range. I will be traveling back after a couple of weeks, but I am worried about whether this is a serious issue or not. I am in a remote area right now and am considering whether to get the vehicle checked by a mechanic in some city on my way back.
Now provide the response and nothing else.
| Hitting a bump at speed can cause several problems: Tire or wheel damage: generally this requires a replacement tire or wheel Suspension damage: a suspension member could have been bent or damaged by the impact Alignment problems: this isn't damage, just needs a re-alignment Lost wheel balancing weights: this is easily fixed by getting the wheels re-balanced There's no way to be sure which it is without taking it to a mechanic. The first 2 are potentially dangerous, you don't want a tire disintegrating on you or a ball joint failing at speed. Alignment problems aren't generally hazardous unless they are really bad, but they can lead to tires being prematurely worn. A tire out of balance you can live with, but do you really want to drive 870 miles with an annoying vibration? You have to get it fixed either where you are or when you get back, getting it fixed now is the safest and most sensible option. | {} | {'log_upvote_score': 5, 'links': ['https://mechanics.stackexchange.com/questions/77777', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/26298/']} | jdg_79410 |
stackexchange | llm_judgeable_groundtruth_similarity | 532848 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
| Is it possible to write a log entry only when a connection is established? I have tried: iptables -I OUTPUT -p tcp --dport 22 -j LOG --log-level notice --log-prefix "outgoing ssh connection" to log outgoing SSH connections, but this logs every single packet, which is, as you can imagine, a bit overwhelming for monitoring purposes. I am running SLES 11 SP3. So I would be grateful if anyone could point out a way to write a log entry only when the connection is established.
Now provide the response and nothing else.
| The line you would need to log the traffic might look like: iptables -I OUTPUT -p tcp -m tcp --dport 22 -m state --state NEW,ESTABLISHED -j LOG --log-prefix "Outgoing SSH connection" (to log only the moment a connection is established, match --state NEW on its own; including ESTABLISHED will still log every packet of ongoing connections). On another terminal, view the logs: while :; do iptables -L OUTPUT -v -n --line-n; sleep 2; done I am using -m state --state here; however, I would recommend using --ctstate instead. See man iptables for more. If you feel that you are being overwhelmed by the logs, you might consider changing the --log-level . http://www.netfilter.org/ can tell you more. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/532848', 'https://serverfault.com', 'https://serverfault.com/users/182081/']} | jdg_79411 |
stackexchange | llm_judgeable_groundtruth_similarity | 323206 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a switch on which all the network lights are going crazy on and none of the systems have network access or internet access. Below is my detailed configuration. I have 2 routers (DLink and Beetel) and one dlink 16 port switch. The first DSL router that provides Internet access, has 4 ports. Port 1 and 2 are connected to 2 desktops. A straight cable goes from port 3 to 16 port dlink switch to provide internet access to 7 other computers. There is a crossover cable going from the last port into another wireless router 18 feet away acting as access point for laptops. The Dlink switch is an unmanaged switch, with no uplink port. The network lights are blinking like crazy and no internet access or network access is taking place.
Now provide the response and nothing else.
| Based on your description of how everything is connected it doesn't sound like you have a network loop, although the symptoms sound like it. My guess would be that you have some type of flooding going on in the network. Start by unplugging cables from the switch one at a time until the lights stop flashing like crazy. When they do find out what device is connected to that port and investigate it. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/323206', 'https://serverfault.com', 'https://serverfault.com/users/98451/']} | jdg_79412 |
stackexchange | llm_judgeable_groundtruth_similarity | 35261086 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
| Lately I've been exploring the vast and terrible world of intellectual property law, and people seem to think that if you create something (software) on your own time, with your own equipment, it typically belongs to you. I know there are many exclusions to this, and a couple of really good and informative posts on the subject. But now I'm mostly just curious about the means by which someone violating such an agreement could be proven guilty. My question is: is it possible to link a specific git commit to the computer it was committed from? How?
Now provide the response and nothing else.
| When you push to a Git repository over HTTP, HTTPS, or SSH, the remote server generally creates logs of what time the connection occurred and what IP address it originated from. However, most companies employ NAT, resulting in multiple computers on a network sharing the same public IP address. Some examples logs: # SSHFeb 8 01:08:37 git-server sshd[12619]: Accepted publickey for git from 192.168.1.100 port 63012 ssh2: RSA SHA256:XxXxxXxxXXXxXXFeb 8 01:08:37 git-server sshd[12619]: pam_unix(sshd:session): session opened for user git by (uid=0)# Apache HTTP(s)192.168.1.100 - - [8/Feb/2016:22:03:18 +0000] "GET /repos/info/refs HTTP/1.1" 200 153 Additionally, Git is decentralized, so the computer that actually pushed a commit did not necessarily author the commit. A commit is generally tied to a name or username of some sort, and an email address; however, this can be changed at the time of committing without creating a record of what it was before and what it is now. This is as simple as: $ git config user.name "John Doe"$ git config user.email [email protected]$ git commit$ git config user.name "Jane Otheruser"$ git config user.email [email protected] So, in short, there are heuristics that can point to who authored a commit and from where, but it is absolutely not absolute. Git itself does not record unique identifying information about the computer on which a commit was authored. Version Control Systems like Subversion, in which all commits are immediately pushed to one central repository, are more easily linked to a Public IP, but are still not linked to an individual computer or user. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35261086', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1265528/']} | jdg_79413 |
stackexchange | llm_judgeable_groundtruth_similarity | 28723 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been reading the IPv6 Flow Label Specification ( RFC 6437 ) and I would like to ask about the purpose of setting this label. In case of just using the 2-tuple (source address, destination address), the packets will reach the destination. And since some hosts do not support the functions of the flow label field, this label might end up having a value of 0 anyways.
Now provide the response and nothing else.
| The above answer isn't really correct. The flow label isn't designed for packet ordering - IP does not care about order; it is an unreliable service whose only goal is delivery at layer 3. A better way to understand the flow label is that it's a value given to a flow of packets, and that value is the same across that flow. Therefore there is no information that actually assists with ordering of that flow; it is simply there to identify packets as part of that flow. The intended use case here is QoS - i.e. the source might want to ask for special handling for packets associated with a certain flow. | {} | {'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/28723', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/23751/']} | jdg_79414 |
stackexchange | llm_judgeable_groundtruth_similarity | 48122798 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to write a small script that prints the checksum of a file (using some code from https://gist.github.com/Zireael-N/ed36997fd1a967d78cb2 ): import sysimport osimport hashlibfile = '/Users/Me/Downloads/2017-11-29-raspbian-stretch.img'with open(file, 'rb') as f: contents = f.read() print('SHA256 of file is %s' % hashlib.sha256(contents).hexdigest()) But I'm getting the following error message: Traceback (most recent call last): File "checksum.py", line 8, in <module> contents = f.read()OSError: [Errno 22] Invalid argument What am I doing wrong? I'm using python 3 on macOS High Sierra
Now provide the response and nothing else.
| There have been several issues over the history of Python (most fixed in recent versions) reading more than 2-4 GB at once from a file handle (an unfixable version of the problem also occurs on 32 bit builds of Python, where they simply lack the virtual address space to allocate the buffer; not I/O related, but seen most frequently slurping large files). A workaround available for hashing is to update the hash in fixed size chunks (which is a good idea anyway, since counting on RAM being greater than file size is a poor idea). The most straightforward approach is to change your code to: with open(file, 'rb') as f: hasher = hashlib.sha256() # Make empty hasher to update piecemeal while True: block = f.read(64 * (1 << 20)) # Read 64 MB at a time; big, but not memory busting if not block: # Reached EOF break hasher.update(block) # Update with new blockprint('SHA256 of file is %s' % hasher.hexdigest()) # Finalize to compute digest If you're feeling fancy, you can "simplify" the loop using two-arg iter and some functools magic, replacing the whole of the while loop with: for block in iter(functools.partial(f.read, 64 * (1 << 20)), b''): hasher.update(block) Or on Python 3.8+, with the walrus operator, := it's simpler without the need for imports or unreadable code: while block := f.read(64 * (1 << 20)): # Assigns and tests result in conditional! hasher.update(block) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48122798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9179656/']} | jdg_79415 |
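To see that the chunked loop produces the same digest as hashing the whole payload at once, here is a small self-contained check (an illustrative addition, using an in-memory stream as a stand-in for the large file):

```python
import hashlib
import io

# Check that the chunked loop matches hashing the payload in one shot,
# using an in-memory stream in place of the large file.
payload = b"spam" * 500_000  # ~2 MB stand-in for the big image file
f = io.BytesIO(payload)
hasher = hashlib.sha256()
while True:
    block = f.read(64 * 1024)
    if not block:  # reached EOF
        break
    hasher.update(block)

print(hasher.hexdigest() == hashlib.sha256(payload).hexdigest())  # → True
```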
stackexchange | llm_judgeable_groundtruth_similarity | 6335153 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Update: Answers to this question helped me code the open sourced project AlicanC's Modern Warfare 2 Tool on GitHub . You can see how I am reading these packets in MW2Packets.cs and the extensions I've coded to read big endian data in Extensions.cs . I am capturing UDP packets of Call of Duty: Modern Warfare 2 using Pcap.Net in my C# application. I receive a byte[] from the library. I tried to parse it like a string, but that didn't work well. The byte[] I have has a generic packet header, then another header specific to the packet type then info about each player in the lobby. A helpful person inspected some packets for me and came up with these structures: // Fields are big endian unless specified otherwise.struct packet_header{ uint16_t magic; uint16_t packet_size; uint32_t unknown1; uint32_t unknown2; uint32_t unknown3; uint32_t unknown4; uint16_t unknown5; uint16_t unknown6; uint32_t unknown7; uint32_t unknown8; cstring_t packet_type; // \0 terminated string};// Fields are little endian unless specified otherwise.struct header_partystate //Header for the "partystate" packet type{ uint32_t unknown1; uint8_t unknown2; uint8_t player_entry_count; uint32_t unknown4; uint32_t unknown5; uint32_t unknown6; uint32_t unknown7; uint8_t unknown8; uint32_t unknown9; uint16_t unknown10; uint8_t unknown11; uint8_t unknown12[9]; uint32_t unknown13; uint32_t unknown14; uint16_t unknown15; uint16_t unknown16; uint32_t unknown17[10]; uint32_t unknown18; uint32_t unknown19; uint8_t unknown20; uint32_t unknown21; uint32_t unknown22; uint32_t unknown23;};// Fields are little endian unless specified otherwise.struct player_entry{ uint8_t player_id; // The following fields may not actually exist in the data if it's an empty entry. 
uint8_t unknown1[3]; cstring_t player_name; uint32_t unknown2; uint64_t steam_id; uint32_t internal_ip; uint32_t external_ip; uint16_t unknown3; uint16_t unknown4; uint32_t unknown5; uint32_t unknown6; uint32_t unknown7; uint32_t unknown8; uint32_t unknown9; uint32_t unknown10; uint32_t unknown11; uint32_t unknown12; uint16_t unknown13; uint8_t unknown14[???]; // Appears to be a bit mask, sometimes the length is zero, sometimes it's one. (First entry is always zero?) uint8_t unknown15; uint32_t unknown16; uint16_t unknown17; uint8_t unknown18[???]; // Most of the time this is 4 bytes, other times it is 3 bytes.}; I recreated the packet header structure in my C# application like this: [StructLayout(LayoutKind.Sequential, Pack=1)]struct PacketHeader{ public UInt16 magic; public UInt16 packetSize; public UInt32 unknown1; public UInt32 unknown2; public UInt32 unknown3; public UInt32 unknown4; public UInt16 unknown5; public UInt16 unknown6; public UInt32 unknown7; public UInt32 unknown8; public String packetType;} Then I tried to make a structure for the "partystate" header, but I got errors saying fixed keyword is unsafe: [StructLayout(LayoutKind.Sequential, Pack=1)]struct PartyStateHeader{ UInt32 unknown1; Byte unknown2; Byte playerEntryCount; UInt32 unknown4; UInt32 unknown5; UInt32 unknown6; UInt32 unknown7; Byte unknown8; UInt32 unknown9; UInt16 unknown10; Byte unknown11; fixed Byte unknown12[9]; UInt32 unknown13; UInt32 unknown14; UInt16 unknown15; UInt16 unknown16; fixed UInt32 unknown17[10]; UInt32 unknown18; UInt32 unknown19; Byte unknown20; UInt32 unknown21; UInt32 unknown22; UInt32 unknown23;} I couldn't do anything for the player entries because of the varying size of unknown14 and unknown18 . (Player entries are the most important.) Now, somehow, I have to cast the byte[] I have to these PacketHeader structures. Sadly, it's not easy as (PacketHeader)bytes . 
I tried this method I've found on the internet but it threw an AccessViolationException : GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);PacketHeader packetHeader = (PacketHeader)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(PacketHeader)); How can I achieve this?
Now provide the response and nothing else.
| //I have found this at: http://code.cheesydesign.com/?p=572 (I have not tested it yet, but // at first sight it will work well.) /// <summary> /// Reads in a block from a file and converts it to the struct /// type specified by the template parameter /// </summary> /// <typeparam name="T"></typeparam> /// <param name="reader"></param> /// <returns></returns> private static T FromBinaryReader<T>(BinaryReader reader) { // Read in a byte array byte[] bytes = reader.ReadBytes(Marshal.SizeOf(typeof(T))); // Pin the managed memory, copy out the data, then unpin it GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned); T theStructure = (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T)); handle.Free(); return theStructure; } | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6335153', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/667234/']} | jdg_79416 |
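The same fixed-layout parsing can be illustrated outside C#; the sketch below uses Python's struct module on the big-endian packet_header fields described in the question (the sample bytes and field values are made up purely for illustration):

```python
import struct

# Hypothetical parse of the fixed, big-endian prefix of packet_header
# (everything before the variable-length, NUL-terminated packet_type).
HEADER_FMT = ">HHIIIIHHII"  # magic, packet_size, unknown1-4, unknown5-6, unknown7-8
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 32 bytes

def parse_packet_header(data):
    fields = struct.unpack_from(HEADER_FMT, data, 0)
    end = data.index(b"\x00", HEADER_SIZE)  # find the string terminator
    packet_type = data[HEADER_SIZE:end].decode("ascii")
    return fields, packet_type

# Made-up sample bytes, just to exercise the parser.
sample = struct.pack(HEADER_FMT, 0xABCD, 40, 1, 2, 3, 4, 5, 6, 7, 8) + b"partystate\x00"
fields, ptype = parse_packet_header(sample)
print(hex(fields[0]), ptype)  # → 0xabcd partystate
```

Explicit per-field unpacking like this sidesteps the endianness and packing headaches of blitting bytes straight onto a struct.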
stackexchange | llm_judgeable_groundtruth_similarity | 31598611 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
| I have a SparkSQL DataFrame. Some entries in this data are empty but they don't behave like NULL or NA. How could I remove them? Any ideas? In R I can easily remove them, but in SparkR it says that there is a problem with the S4 system/methods. Thanks.
Now provide the response and nothing else.
| SparkR Column provides a long list of useful methods including isNull and isNotNull : > people_local <- data.frame(Id=1:4, Age=c(21, 18, 30, NA))> people <- createDataFrame(sqlContext, people_local)> head(people) Id Age1 1 212 2 183 3 NA> filter(people, isNotNull(people$Age)) %>% head() Id Age1 1 212 2 183 3 30> filter(people, isNull(people$Age)) %>% head() Id Age1 4 NA Please keep in mind that there is no distinction between NA and NaN in SparkR. If you prefer operations on a whole data frame there is a set of NA functions including fillna and dropna : > fillna(people, 99) %>% head() Id Age1 1 212 2 183 3 304 4 99> dropna(people) %>% head() Id Age1 1 212 2 183 3 30 Both can be adjusted to consider only some subset of columns ( cols ), and dropna has some additional useful parameters. For example you can specify minimal number of not null columns: > people_with_names_local <- data.frame( Id=1:4, Age=c(21, 18, 30, NA), Name=c("Alice", NA, "Bob", NA))> people_with_names <- createDataFrame(sqlContext, people_with_names_local)> people_with_names %>% head() Id Age Name1 1 21 Alice2 2 18 <NA>3 3 30 Bob4 4 NA <NA>> dropna(people_with_names, minNonNulls=2) %>% head() Id Age Name1 1 21 Alice2 2 18 <NA>3 3 30 Bob | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31598611', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5150029/']} | jdg_79417 |
stackexchange | llm_judgeable_groundtruth_similarity | 3394 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I got bitten by the following: f[x_] := 3*x;g[x_] := If[Log[f[x]] < 0, f[x], 0];g[x]Out[11]= If[Log[3 x] < 0, f[x], 0] where I thought the call to f in the second argument to If would be “expanded”, i.e. replaced by its expression. Once I realized this was the root cause of my issue, I could fix it by inserting Evaluate : h[x_] := If[Log[f[x]] < 0, Evaluate@f[x], 0];h[x]Out[13]= If[Log[3 x] < 0, 3 x, 0] However, I don't understand why f is expanded in the first argument to If , and not in the second. Moreover, how can I know from the documentation that this will be the case? (so that I can check other functions in the future)
Now provide the response and nothing else.
| You can evaluate ?? If to see that its attributes are Attributes[If]={HoldRest,Protected} HoldRest tells you that the first argument always gets evaluated while the rest (2nd, 3rd, and 4th) are unevaluated. In practice you can't make any assumptions about the rest because it's not possible to tell how a function evaluates its held arguments internally. That said, if you think about it, it's clear that If must evaluate the first argument, to see whether it's True or False . It is also highly desirable not to evaluate the rest of its arguments. How would you expect the following to work? If[a > 0, b += 1] Of course it must only add 1 to b if a > 0 , and not otherwise! As you can see, it is a must for any code with side effects not to be evaluated automatically. Even if we don't have non-functional constructs, we may have something like If[a > 0, f[a], f[-a]] for a function f that would give an error for negative arguments. Finally, if this function f is expensive to evaluate, an If without HoldRest would evaluate it twice, while using the result from only one evaluation---this is wasteful. I think this should make it clear why it is highly desirable for If to have HoldRest (and also why it is not necessary for it to have HoldAll ) | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/3394', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/700/']} | jdg_79418 |
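The held-argument behaviour can be imitated in an eager language by passing thunks; the sketch below (in Python, purely for illustration — the helper name is made up) mirrors why If with HoldRest runs only the selected branch:

```python
# Imitating held arguments in an eager language: wrap branches in thunks so
# only the selected one runs, mirroring If's HoldRest on its 2nd/3rd arguments.
def lazy_if(cond, then_thunk, else_thunk=lambda: None):
    return then_thunk() if cond else else_thunk()

calls = []
def expensive(tag):
    calls.append(tag)  # records that this branch was actually evaluated
    return tag

result = lazy_if(True, lambda: expensive("then"), lambda: expensive("else"))
print(result, calls)  # → then ['then']
```

Without the lambdas, both branches would run before lazy_if was even called — the exact side-effect and double-evaluation problem the answer describes.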
stackexchange | llm_judgeable_groundtruth_similarity | 805451 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to self-study Baby Rudin (and it's proving quite challenging to me) Could someone clarify where the underlined part comes from? Text: (a) If $x \in R, y \in R,$ and $x > 0$ , then there is a positive integer $n$ such that $nx > y$ . Proof (a) Let $A$ be the set of all $nx$ , where $n$ runs through the positive integers. If (a) were false, then $y$ would be an upper bound of $A$ . But then $A$ has a least upper bound in $\mathbb{R}$ . Put $\alpha = \sup A$ . Since $x > 0$ , $\alpha - x < \alpha$ , and $\alpha - x$ is not an upper bound of $A$ . $\underline{\text{Hence $\alpha - x < mx$ for some positive integer $m$}}$ . But then $\alpha < (m+1)x \in A$ , which is impossible, since $\alpha$ is an upper bound of $A$ . Thanks in advance
Now provide the response and nothing else.
| Since $\alpha -x$ is not an upper bound of $A$, there must be an element in $A$, call it $mx$, bigger than $\alpha-x$; that is, $\alpha -x < mx$. This is precisely the logical negation of the property of being an upper bound for a subset of $\mathbb{R}$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/805451', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/93836/']} | jdg_79419 |
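Spelled out, the negation used in the underlined step is (restated here for completeness):

```latex
% "alpha - x is an upper bound of A = { nx : n in Z^+ }" negated:
\neg\bigl(\forall n \in \mathbb{Z}^{+} : nx \le \alpha - x\bigr)
\iff \exists\, m \in \mathbb{Z}^{+} : mx > \alpha - x ,
\quad\text{and then}\quad mx > \alpha - x \implies (m+1)x > \alpha .
```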
stackexchange | llm_judgeable_groundtruth_similarity | 2223 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In an application I'm considering, I need to know the communication complexity of the following problem: Given $n$, let $S$ be the set of integers from $1$ to $n$. Alice, Bob, and Carol each receives a subset of $S$, denoted by $A$, $B$, and $C$, respectively. They want to check whether $A$, $B$ and $C$ form a partition of $S$, i.e., they are disjoint and their union is $S$. I'm particularly interested in the case of 3 parties but other cases would be interesting as well. Note that for the case of 2 parties, the problem is equivalent to EQUALITY problem so it has $\Omega(n)$ lower bound for deterministic protocols but $O(\log n)$ upper bound for randomized protocols. My question is whether this problem is known before. If you know any problems that might be related, I would be interested to know as well.
Now provide the response and nothing else.
| A linear lower bound on deterministic CC follows by fixing one of the sets to be empty. For a randomized logarithmic upper bound, first note that this problem can be reduced to the problem asking whether the sum of three $3n$ -bit numbers is exactly $2^{3n}-1$ . This one can be solved in $O(\log n)$ randomized communication by the players operating mod a random $O(\log n)$ -bit prime. | {} | {'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/2223', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/317/']} | jdg_79420 |
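A toy sketch of the fingerprinting idea behind the $O(\log n)$ upper bound (hypothetical code: it uses $n$-bit numbers instead of the $3n$-bit numbers in the answer, and the prime range and trial count are arbitrary illustrative choices):

```python
import random

# Toy sketch of the fingerprinting protocol: decide whether a + b + c == 2^n - 1
# by comparing both sides modulo random small primes.
def is_prime(m):
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def random_prime(lo, hi, rng):
    while True:
        cand = rng.randrange(lo, hi)
        if is_prime(cand):
            return cand

def sum_is_all_ones(a, b, c, n, rng, trials=20):
    target = (1 << n) - 1
    for _ in range(trials):
        q = random_prime(1 << 10, 1 << 11, rng)
        if (a + b + c) % q != target % q:
            return False  # a mismatch mod any prime is a definite "no"
    return True  # equal with high probability (one-sided error)

rng = random.Random(1)
n = 64
a, b, c = 0x0F0F0F0F0F0F0F0F, 0xF0F0F0F0F0F0F000, 0xF0
ok = sum_is_all_ones(a, b, c, n, rng)       # these three parts do sum to 2^64 - 1
bad = sum_is_all_ones(a, b, c + 1, n, rng)  # off by one is always detected
print(ok, bad)  # → True False
```

Only the residues mod q (a handful of bits each) would need to be communicated, which is where the logarithmic cost comes from.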
stackexchange | llm_judgeable_groundtruth_similarity | 18161683 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
| I have two tables table A imei_no | user_name | date_of_allocation | date_of_deallocation 123 | a | 1-4-2013 | 10-4-2013 456 | a | 10-4-2013 | 25-4-2013 789 | a | 25-4-2013 | 30-4-2013 123 | b | 25-4-2013 | 27-4-2013 table B imei | lat | long | gpsdate 123 | 23 | 43 | 2-4-2013 123 | 23 | 43 | 2-4-2013 456 | 23 | 43 | 3-4-2013 123 | 23 | 43 | 3-4-2013 789 | 23 | 43 | 24-4-2013 123 | 23 | 43 | 24-4-2013 456 | 23 | 43 | 28-4-2013 123 | 23 | 43 | 28-4-2013 I want the imei of a particular user, say a, from date 5-4-2013 to 25-4-2013. I am doing this: select imei from A where user_name='a' and date_of_allocation>='5-4-2013' and date_of_deallocation<='25-4-2013'; and I want the data of that user from table B. How can I do that? Select the imei of user a from some date to some date, then the data for that imei for that user from table B.
Now provide the response and nothing else.
| The Java EE bean annotations such as @PostConstruct only apply to container-managed beans. If you are simply calling new BlogEntryDao yourself, the container isn't going to intercept the creation and call the @PostConstruct method. (Furthermore, you'd be better off using @PersistenceContext or @PersistenceUnit instead of manually fetching the EntityManagerFactory in your initialize() method, and you should be creating an EntityManager for each call to addNewEntry() , since they're short-lived. Making these changes would eliminate the need for initialize() at all.) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18161683', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2500442/']} | jdg_79421 |
stackexchange | llm_judgeable_groundtruth_similarity | 23549 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
| According to this accepted answer, there is no phone on the market today that is immune from "Juice Jacking"*. I think an easy way to mitigate this threat is to have a filter that blocks USB pins 2 & 3 and only connects 1 & 4. ( see Wiki for pinouts ) I'm sure that I can't be the only one who thought of this, so I'll assume it's a hardware dongle of some type that I can use to filter power before it gets to my phone. Question Is this a legitimate idea? What hardware devices exist that allow this filtering? Do any cellular phones incorporate this security feature in hardware? (akin to a hardware switch that opens or closes pins 1&2) *Definition: Juice Jacking is when a USB charger is modified so that it reads, modifies or deletes information on a user's cell phone. This can include stealing passwords or adding spyware onto the device.
Now provide the response and nothing else.
| Many phones allow you to charge with only power and ground connected. Some phones (looking at you, Apple) use the data lines to determine whether or not the charger is "permitted" to charge your phone. Still other phones (some Android phones in particular) explicitly ask you to decide what to do with the USB connection -- e.g. charge only, or mount as usb drive, or application-specific data connection. Some USB cables (often the cheap ones you get with a cheap charger) will only run power and ground, leaving the data lines unconnected. It's pretty trivial to tell if you have one of them by attempting to use it for data and seeing what happens. While I don't believe that any "adapters" exist for this purpose, it would be very simple to make one: just take any existing adapter and clip the data lines. Just bear in mind that with some devices (e.g. apple) this will prevent charging. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/23549', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/396/']} | jdg_79422 |
stackexchange | llm_judgeable_groundtruth_similarity | 68074 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
| How is 5 nines calculated using less reliable links? For example, we have 2 x 4 nines connections used in an active/active setup... which supposedly equates to 5 nines, but I don't buy it... how is a greater uptime achieved with lower-availability links? The two links are diverse (separate exchanges), so I'm assuming there is some statistical/probability element to this, but how is it calculated?
Now provide the response and nothing else.
| 4 nines = 99.99 %. That means the probability that a link fails is 0.01 %, or 0.0001 in terms of probability (on a 0-1 scale). Assuming independence, the probability that both links fail is 0.0001 x 0.0001 = 10^-8 , which gives back 99.999999 % availability. Yup, that's 8 nines and not 5, but we usually don't consider more than 5 nines. Note that "assuming independence" is in fact a big assumption; see the excellent answer from @PeterGreen | {} | {'log_upvote_score': 5, 'links': ['https://networkengineering.stackexchange.com/questions/68074', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/49982/']} | jdg_79423 |
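The arithmetic above can be packaged as a tiny helper (an illustrative sketch; the function name is made up, and it assumes fully independent links):

```python
# Combined availability of independent links in an active/active (parallel)
# setup: the service is down only when every link is down at the same time.
def parallel_availability(*links):
    unavailability = 1.0
    for a in links:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

combined = parallel_availability(0.9999, 0.9999)  # two four-nines links
print(combined)  # ~0.99999999 -> eight nines, given full independence
```

Adding a third four-nines link would drive the combined unavailability down to roughly 10^-12, again only under the independence assumption.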
stackexchange | llm_judgeable_groundtruth_similarity | 47863275 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an object like this: const total = { "Apple": 0.6, "Banana": 0.6, "Orange": 1, "Grapes": 0.4, "Pineapple": 0.4 }; Now I want to convert it into an array of key–value objects where each object has the same set of two properties, "name" and "value" , which hold the key and value, respectively, of each property of the original object: [ { "name": "Apple", "value": 0.6 }, { "name": "Banana", "value": 0.6 }, { "name": "Orange", "value": 1 }, { "name": "Grapes", "value": 0.4 }, { "name": "Pineapple", "value": 0.4 }]
Now provide the response and nothing else.
| You can use the Array#map function on the object keys and create your objects with the desired shape. const total = { 'Apple': 0.6, 'Banana': 0.6, 'Orange': 1, 'Grapes': 0.4, 'Pineapple': 0.4 }; const array = Object.keys(total) .map(key => ({ name: key, value: total[key] })) .sort((f, s) => f.value - s.value);console.log(array); If you use ES2017 (ES8) or higher you can replace Object#keys with Object#entries. You can also use object destructuring in the parameter list to get name and value separately. const total = { 'Apple': 0.6, 'Banana': 0.6, 'Orange': 1, 'Grapes': 0.4, 'Pineapple': 0.4 }; const array = Object.entries(total) .map(([name, value]) => ({ name, value })) .sort((f, s) => f.value - s.value);console.log(array); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47863275', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8516701/']} | jdg_79424 |
stackexchange | llm_judgeable_groundtruth_similarity | 23699271 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a problem with inconsistent encoding of a character vector in R. The text file which I read a table from is encoded (via Notepad++) in UTF-8 (I tried with UTF-8 without BOM, too). I want to read the table from this text file, convert it to a data.table, set a key and make use of binary search. When I tried to do so, the following appeared:

Warning message:
In `[.data.table`(poli.dt, "żżonymi", mult = "first") :
A known encoding (latin1 or UTF-8) was detected in a join column. data.table compares the bytes currently, so doesn't support mixed encodings well; i.e., using both latin1 and UTF-8, or if any unknown encodings are non-ascii and some of those are marked known and others not. But if either latin1 or UTF-8 is used exclusively, and all unknown encodings are ascii, then the result should be ok. In future we will check for you and avoid this warning if everything is ok. The tricky part is doing this without impacting performance for ascii-only cases.

and binary search does not work. I realised that my data.table key column consists of both "unknown" and "UTF-8" Encoding types:

> table(Encoding(poli.dt$word))
unknown   UTF-8
2061312 2739122

I tried to convert this column (before creating a data.table object) with the use of:

Encoding(word) <- "UTF-8"
word <- enc2utf8(word)

but with no effect. I also tried a few different ways of reading the file into R (setting all helpful parameters, e.g. encoding = "UTF-8"):

data.table::fread
utils::read.table
base::scan
colbycol::cbc.read.table

but with no effect. 
==================================================

My R.version:

> R.version
               _
platform       x86_64-w64-mingw32
arch           x86_64
os             mingw32
system         x86_64, mingw32
status
major          3
minor          0.3
year           2014
month          03
day            06
svn rev        65126
language       R
version.string R version 3.0.3 (2014-03-06)
nickname       Warm Puppy

My session info:

> sessionInfo()
R version 3.0.3 (2014-03-06)
Platform: x86_64-w64-mingw32/x64 (64-bit)

locale:
[1] LC_COLLATE=Polish_Poland.1250 LC_CTYPE=Polish_Poland.1250 LC_MONETARY=Polish_Poland.1250
[4] LC_NUMERIC=C LC_TIME=Polish_Poland.1250

base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] data.table_1.9.2 colbycol_0.8 filehash_2.2-2 rJava_0.9-6

loaded via a namespace (and not attached):
[1] plyr_1.8.1 Rcpp_0.11.1 reshape2_1.2.2 stringr_0.6.2 tools_3.0.3
Now provide the response and nothing else.
| The Encoding function returns unknown if a character string has a "native encoding" mark (CP-1250 in your case) or if it's in ASCII. To discriminate between these two cases, call:

library(stringi)
stri_enc_mark(poli.dt$word)

To check whether each string is a valid UTF-8 byte sequence, call:

all(stri_enc_isutf8(poli.dt$word))

If it's not the case, your file is definitely not in UTF-8. I suspect that you haven't forced the UTF-8 mode in the data read function (try inspecting the contents of poli.dt$word to verify this statement). If my guess is true, try:

read.csv2(file("filename", encoding="UTF-8"))

or

poli.dt$word <- stri_encode(poli.dt$word, "", "UTF-8") # re-mark encodings

If data.table still complains about the "mixed" encodings, you may want to transliterate the non-ASCII characters, e.g.:

stri_trans_general("Zażółć gęślą jaźń", "Latin-ASCII")
## [1] "Zazolc gesla jazn" | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/23699271', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1818420/']} | jdg_79425
stackexchange | llm_judgeable_groundtruth_similarity | 62484858 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was wondering how to point AWS CloudFront to AWS ECS Container using Service Discovery (Route 53 with SRV Record). ECS is based on Spot EC2 with no Load Balancer.
Now provide the response and nothing else.
| The suggested approach for an architecture running on a container is to have traffic forwarded to the container from an AWS ELB . If you're using ECS with service discovery, then the record is actually written to a private hosted zone, which is not accessible outside of the VPC(s) it is attached to. Furthermore, containers are ephemeral, and the risk that the container could be terminated at any point is further increased by using a spot instance. If the instance is public facing, it would need an Elastic IP address associated with it so that it could retain its IP address. You would then want to create an A record pointing at this EIP for CloudFront to be able to use it; this A record should have the value of a domain that CloudFront should resolve, and must be in a public hosted zone. Within your CloudFront distribution, the origin value would be set to the domain name that you set up in the previous step. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/62484858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11459983/']} | jdg_79426
stackexchange | llm_judgeable_groundtruth_similarity | 116810 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While studying a problem I encountered something strange: when a ball bounces, how much time does the ball spend touching the floor? To be more clear: I suppose that when a ball bounces, the rebound can't start EXACTLY when the ball touches the floor; rather, the ball touches the floor, the energy from the fall is given to the floor, then the floor gives the energy back and the ball bounces. But in what time?
Now provide the response and nothing else.
| An possible (simplistic) answer would be the following: a simple model for the bouncing ball is a spring that shrinks to absorb all the initial kinetic energy and restores fully. To put it into equations, call $v_0$ the initial velocity of the ball, $m$ its mass and $K$ the spring stiffness. The initial kinetic energy is $\frac12mv_0^2$. If the spring shrinks by a length $x$, the elastic energy is $\frac12Kx^2$. The mechanic energy is conserved, so we have all along the move$$\frac12mv^2+\frac12Kx^2=\frac12mv_0^2.$$Let us express the velocity $v$ as a function of $x$ during the first half of the move (when the spring shrinks and the ball slows down)$$v=\sqrt{v_0^2-\frac Kmx^2}.$$Now remark that $v=\frac{\mathrm dx}{\mathrm dt}$, so we can separate the $x$ variable and get the differential$$\frac{\mathrm dx}{\sqrt{v_0^2-(K/m)x^2}}=\mathrm dt$$and we can integrate this equation from the moment to ball hits the ground to the moment it stops (the $\frac12$ coefficient is there because it is only one half of the movement)$$ \frac12T=\int_0^{x_0}\frac{\mathrm dx}{\sqrt{v_0^2-(K/m)x^2}}=\frac\pi2\sqrt{\frac mK}$$(with $x_0=v_0\sqrt{m/K}$).So the result is $$\boxed{T=\pi\sqrt{\frac mK}.}$$Interestingly, it does not depend on the initial velocity ! A more refined model would take into account the spherical shape of the ball.If it has a Young modulus $E$, the elastic energy could be (very roughly) approximated by $\frac{2\pi}3 Ex^3$ (this means that the more the ball is shrinked the more it resists to an extra shrinking). The result is$$ T=2.19187\left(\frac m{E v_0}\right)^{1/3}.$$Now it depends on the initial velocity.(The constant is equal to $6^{1/3}\pi^{1/6}\Gamma(4/3)/\Gamma(5/6)$). Of course the total deformation of the ball should be taken into account, thepreceding approach is valid only for hard balls (large $E$). Note also that dissipation of energy has not been taken into account. 
| {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/116810', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/44025/']} | jdg_79427 |
stackexchange | llm_judgeable_groundtruth_similarity | 7689640 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to do an express checkout payment in PayPal without an account? I remember that I saw it somewhere, but I don't remember where. At one time PayPal had a link under "Create a PayPal account".
Now provide the response and nothing else.
| Yes, that's certainly possible. Simply set SOLUTIONTYPE parameter to Sole in your initial SetExpressCheckout API call ( SOLUTIONTYPE=Sole ). Optionally you can also specify LANDINGPAGE=Billing for the billing page to be forced. Note: In order for SOLUTIONTYPE=Sole to work, you must have 'PayPal Account Optional' set to 'On' with the 'Website Preferences' section of your PayPal account. Profile > My selling tools (or: My selling preferences) > Website Preferences > Update > PayPal Account Optional: On. EDIT: As mentioned in one of the comments, it's possible this may only work if you add the LOCALECODE=us parameter too. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7689640', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/331000/']} | jdg_79428 |
stackexchange | llm_judgeable_groundtruth_similarity | 9749 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose we are running a Ranked Choice vote between 6 candidates:

Harrison Ford
Carrie Fisher
Mark Hammill
James Earl Jones
Frank Oz
George Lucas

There are a lot of websites that propose the rules - I've chosen one at random: http://electowiki.org/wiki/Instant-runoff_voting But really, they're all pretty similar. The basic algorithm is: First choices are tallied. If no candidate has the support of a majority of voters, the candidate with the least support is eliminated. A second round of counting takes place, with the votes of supporters of the eliminated candidate now counting for their second choice candidate. After a candidate is eliminated, he or she may not receive any more votes. This process of counting and eliminating is repeated until one candidate has over half the votes. This is equivalent to continuing until there is only one candidate left. However it is possible, with voter truncation, for the process to continue until there is only one candidate left, who does not end up with more than half the votes. The tie breaking rules, however, are less consistent. One option (from the above site) is: LOGIC: If the tied candidates combined have fewer [first place] votes than the next highest candidate, the entire tied set can be eliminated at once. Logically deterministic, but may not apply. There are other options, but from the reading I've done, they tend to be based on the first place votes. Whoever has the fewest first place votes is eliminated; if there are multiple, one is either chosen at random, or potentially they are all eliminated. So, what happens in the following scenario? This is a bit contrived, but actually came up in a small vote.
However, following the rules specified above, I believe he would be eliminated, and the winner would randomly be George, Harrison, or Carrie. So, a few questions - am I interpreting the rules correctly? Is this a non-issue, because in the larger elections that this was designed for, you don't tend to run into this problem? Was ranked choice voting in fact designed to favor someone who got at least 1 first place vote over someone who got all the 2nd place votes? And finally, is there a better tie breaking algorithm covered to handle this issue?
Now provide the response and nothing else.
| After the first round, Frank Oz is eliminated. After the second round, James Earl Jones is eliminated. After the third round, yes, Mark Hamill is eliminated. Note that this is much less likely to happen in reality, as more voters make odd results like this less likely. But this is the problem with IRV. It eliminates second choice votes even if they are generally preferred. If you instead chose a Condorcet-compliant voting method , you could get a different result: Mark Hamill beats Harrison Ford, Carrie Fisher, and George Lucas two to one. Hamill beats Frank Oz and James Earl Jones unanimously. Hamill wins without processing any eliminations. | {} | {'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/9749', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/7096/']} | jdg_79429 |
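A quick way to verify the pairwise counts in the answer above is to tally them directly. The sketch below (my own Python, using the ballots from the question with the answer's spelling of "Hamill") counts how many of his five head-to-head contests Mark Hamill wins; a candidate left off a ballot is treated as ranked below every listed candidate:

```python
# Pairwise (Condorcet) tally of the three ballots from the question.
ballots = [
    ["Harrison Ford", "Mark Hamill", "Frank Oz"],
    ["Carrie Fisher", "Mark Hamill", "James Earl Jones"],
    ["George Lucas", "Mark Hamill", "James Earl Jones"],
]
candidates = {"Harrison Ford", "Carrie Fisher", "Mark Hamill",
              "James Earl Jones", "Frank Oz", "George Lucas"}

def prefers(ballot, a, b):
    # A ranked candidate beats an unranked one; between two ranked, lower index wins.
    ia = ballot.index(a) if a in ballot else len(ballot)
    ib = ballot.index(b) if b in ballot else len(ballot)
    return ia < ib

def pairwise_wins(candidate):
    # Number of head-to-head contests this candidate wins outright.
    wins = 0
    for other in candidates - {candidate}:
        for_votes = sum(prefers(b, candidate, other) for b in ballots)
        against = sum(prefers(b, other, candidate) for b in ballots)
        wins += for_votes > against
    return wins

assert pairwise_wins("Mark Hamill") == 5   # beats all five rivals: Condorcet winner
```

The assertion passes: Hamill wins every pairwise contest, which is exactly why a Condorcet-compliant method elects him while IRV eliminates him.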
stackexchange | llm_judgeable_groundtruth_similarity | 971747 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Simple question, hopefully a simple answer: I'd like to do the following:

private DateTime m_internalDateTime;

public var DateTimeProperty
{
    get { return m_internalDateTime.ToString(); } // Return a string
    set { m_internalDateTime = value; } // here value is of type DateTime
}

The above is just an example of what I'm trying to do. I'd like to have a public accessor to an internal variable of type x. I want to get that variable as a string, but set it using something of type x. Is this possible?

--edit--

I just realized I could do something like:

private DateTime m_internalDateTime;

public object DateTimeProperty
{
    get { return m_internalDateTime.ToString(); } // Return a string
    set { m_internalDateTime = (DateTime)value; } // here value is of type DateTime
}

But then, let's say I use type y instead of a "string" as my 'get' type. If I want to use "DateTimeProperty" elsewhere in my code, I'd have to cast it.
Now provide the response and nothing else.
| No. You can obviously add the .ToString() in the calling code, but you can't do what you propose without different names, like this:

private DateTime m_internalDateTime;
public DateTime SetDateTime { set { m_internalDateTime = value; } }
public string GetDateTime { get { return m_internalDateTime.ToString(); } }

Or, even better, use methods instead of properties (as noted in the comments):

private DateTime m_internalDateTime;
public void SetDateTime(DateTime dateTime) { m_internalDateTime = dateTime; }
public string GetDateTime() { return m_internalDateTime.ToString(); }

Keep in mind that var is for implicitly, compile-time typed variables, not dynamic variables. Definitely do not do what you noted in your edit. It introduces a break in convention, possible performance implications (albeit slight), and significant localization problems. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/971747', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/88364/']} | jdg_79430
stackexchange | llm_judgeable_groundtruth_similarity | 29284008 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My [basic] Spring Boot application accepts a request from the browser, sent via jQuery.get(), and is supposed to immediately receive a response - such as "your request has been queued". To accomplish this, I wrote a controller:

@Controller
public class DoSomeWorkController {

    @Autowired
    private final DoWorkService workService;

    @RequestMapping("/doSomeWork")
    @ResponseBody
    public String doSomeWork() {
        workService.doWork(); // time consuming operation
        return "Your request has been queued.";
    }
}

The DoWorkServiceImpl class implements a DoWorkService interface and is really simple. It has a single method to perform a time consuming task. I don't need anything returned from this service call, as an email will be delivered at the end of the work, both for failure and success scenarios. So it would effectively look like:

@Service
public class DoWorkServiceImpl implements DoWorkService {

    @Async("workExecutor")
    @Override
    public void doWork() {
        try {
            Thread.sleep(10 * 1000);
            System.out.println("completed work, sent email");
        } catch (InterruptedException ie) {
            System.err.println(ie.getMessage());
        }
    }
}

I thought this would work, but the browser's Ajax request waited for 10 seconds before returning the response. So the controller mapped method is calling the internal method annotated with @Async synchronously, it would seem. 
In a traditional Spring application, I typically add this to the XML configuration:

<task:annotation-driven />
<task:executor id="workExecutor" pool-size="1" queue-capacity="0" rejection-policy="DISCARD" />

So I thought writing the equivalent of this in the main application class would help:

@SpringBootApplication
@EnableAsync
public class Application {

    @Value("${pool.size:1}")
    private int poolSize;

    @Value("${queue.capacity:0}")
    private int queueCapacity;

    @Bean(name="workExecutor")
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(poolSize);
        taskExecutor.setQueueCapacity(queueCapacity);
        taskExecutor.afterPropertiesSet();
        return taskExecutor;
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
Now provide the response and nothing else.
| You are calling the @Async method from another method in the same class. Unless you enable AspectJ proxy mode for the @EnableAsync (and provide a weaver of course) that won't work (google "proxy self-invocation"). The easiest fix is to put the @Async method in another @Bean . | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/29284008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/371392/']} | jdg_79431 |
stackexchange | llm_judgeable_groundtruth_similarity | 50971148 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using Django 1.8, I'd like to trigger a delayed celery function after a form is saved in views:

def new_topic(request, forum_id):
    form = TopicForm()
    uid = request.user.id
    if request.method == 'POST':
        tform = TopicForm(request.POST)
        if tform.is_valid():
            topic = tform.save(commit=False)
            topic.title = clean_title(tform.cleaned_data['title'])
            topic.description = clean_desc(tform.cleaned_data['description'])
            topic.save()
            notify_new_topic.delay(uid, topic) #<--problem here
    #rest of the views

But I get

EncodeError at /add/topic/
<Topic: Topic object> is not JSON serializable

I don't get any error if I remove delay from the celery task. The task is:

@shared_task
def notify_new_topic(flwd_id, topic):
    title = topic.title
    link = topic.slug
    flwd = cached_user(flwd_id) #User.objects.get(id = flwd_id)
    print 'flwd is', flwd.username
    flwr_ids = FollowUser.objects.filter(followed=flwd).values('follower_id')
    flwrs = User.objects.filter(id__in= flwr_ids).values('id', 'username','email')
    for f in flwrs:
        print 'flwr username:', f['username']
        if notify_flwdp_applies(int(f['id'])):
            print 'notify flwdp applies'
            make_alerts_new_topic(flwd_id, f['id'], topic)
            print 'back from make_alerts_new_topic'

I'm wondering how can I debug/fix this?
Now provide the response and nothing else.
| Task's argument should be serializable (i.e. string, int, etc.). To fix the error you can pass topic_id as an argument and fetch the topic object inside the task method:

notify_new_topic.delay(uid, topic.id)

@shared_task
def notify_new_topic(flwd_id, topic_id):
    topic = Topic.objects.get(pk=topic_id)
    title = topic.title
    link = topic.slug
    flwd = cached_user(flwd_id) #User.objects.get(id = flwd_id)
    print 'flwd is', flwd.username
    flwr_ids = FollowUser.objects.filter(followed=flwd).values('follower_id')
    flwrs = User.objects.filter(id__in= flwr_ids).values('id', 'username','email')
    for f in flwrs:
        print 'flwr username:', f['username']
        if notify_flwdp_applies(int(f['id'])):
            print 'notify flwdp applies'
            make_alerts_new_topic(flwd_id, f['id'], topic)
            print 'back from make_alerts_new_topic'

 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50971148', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7937129/']} | jdg_79432
stackexchange | llm_judgeable_groundtruth_similarity | 7771455 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there anything built into .Net or Visual Studio that will allow me to create classes based off of a MySql table? I guess I am talking about persistence. I just want the class to be a 1 to 1 mapping of the table. Does anything free exist?
Now provide the response and nothing else.
| maybe you need something like this:

select 'my_table' into @table; #table name
select 'my_database' into @schema; #database name

select concat('public class ',@table,'{') union
select concat('public ',tps.dest,' ',column_name,'{get;set;}') from information_schema.columns c
join( #datatypes mapping
select 'char' as orign ,'string' as dest union all
select 'varchar' ,'string' union all
select 'longtext' ,'string' union all
select 'datetime' ,'DateTime?' union all
select 'text' ,'string' union all
select 'bit' ,'int?' union all
select 'bigint' ,'int?' union all
select 'int' ,'int?' union all
select 'double' ,'double?' union all
select 'decimal' ,'double?' union all
select 'date' ,'DateTime?' union all
select 'tinyint' ,'bool?'
) tps on c.data_type like tps.orign
where table_schema=@schema and table_name=@table union
select '}'; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7771455', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/489041/']} | jdg_79433
stackexchange | llm_judgeable_groundtruth_similarity | 27049165 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an enabled and disabled state for the submit button on my form. The conditions are as follows: If all input fields have been entered and are valid, enable the submit button. If some fields have not been entered, do not enable the submit button. So far the validation is being done within the onkeyup event and is only working for the first input:

//Custom onkeyup validation
onkeyup: function(element) {
    //Check if input is empty, remove valid class from parent
    var formInput = $(element),
        formInputParent = $(element).parent('fieldset');
    if(formInputParent.hasClass('form--valid') && formInput.val() === "") {
        formInputParent.removeClass('form--valid');
    }
    //Check if all fields are not empty to remove submit--disabled class
    var formInputs = $('form').find(':input');
    console.log(formInputs);
    formInputs.each(function(){
        if(formInputs.length > 0) {
            formInputs.parents('form').find('.submit-form').removeClass('submit--disabled');
        }
    });
}
Now provide the response and nothing else.
| You would simply construct a blur (or even a keyup ) handler function to toggle the button based on the form's validity. Use the plugin's .valid() method to test the form. $('input').on('blur', function() { if ($("#myform").valid()) { $('#submit').prop('disabled', false); } else { $('#submit').prop('disabled', 'disabled'); }}); DEMO: http://jsfiddle.net/sd88wucL/ Instead, you could also use both events to trigger the same handler function... $('input').on('blur keyup', function() { if ($("#myform").valid()) { $('#submit').prop('disabled', false); } else { $('#submit').prop('disabled', 'disabled'); }}); DEMO 2: http://jsfiddle.net/sd88wucL/1/ Source: https://stackoverflow.com/a/21956309/594235 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27049165', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/665312/']} | jdg_79434 |
stackexchange | llm_judgeable_groundtruth_similarity | 19096227 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having some difficulties finding information about this; probably it's not the right approach. I'd like to route a request to two different view functions based on the HTTP method (GET or POST or DELETE or PUT). As is usually done in REST APIs, this would mean that the same URL has a different meaning based on the HTTP method. I don't see a way to do this in the urls.py file of django. I'd like something like:

url(r'^tasks$', 'app.views.get_tasks', method='get'),
url(r'^tasks$', 'app.views.create_task', method='post'),

(note: I'm working with django 1.4)
Now provide the response and nothing else.
| I don't think you can do this with different functions without adding a bunch of logic to the URL (which is never a good idea), but you can check inside the function for the request method:

def myview(request):
    if request.method == 'GET':
        # Code for GET requests
    elif request.method == 'POST':
        # Code for POST requests

You could also switch to class-based views . You would then only need to define a method for each of the HTTP methods:

class CreateMyModelView(CreateView):
    def get(self, request, *args, **kwargs):
        # Code for GET requests
    def post(self, request, *args, **kwargs):
        # Code for POST requests

If you decide to go the class-based route, another good resource is http://ccbv.co.uk/ . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19096227', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/120496/']} | jdg_79435
stackexchange | llm_judgeable_groundtruth_similarity | 78888 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
/etc/resolv.conf contains a nameserver that is alive (as reported by the ping command). host or nslookup is able to resolve google.com:

$ host google.com
google.com has address 74.125.228.3
google.com has address 74.125.228.4
...
$ nslookup google.com
...
Non-authorative answer:
Name: google.com
Address: 74.125.228.0
Name: google.com
Address: 74.125.228.1
..

ping or telnet is not able to resolve google.com:

$ ping google.com
ping: unknown host google.com
$ telnet google.com
google.com: node name or service name not known
| It seems ping and telnet were unable to resolve hostnames because they were not querying the configured DNS server ( host and nslookup seem to be using different DNS querying code). The solution is to overwrite /etc/nsswitch.conf with /etc/nsswitch.dns : cp /etc/nsswitch.dns /etc/nsswitch.conf | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/78888', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/40806/']} | jdg_79436
stackexchange | llm_judgeable_groundtruth_similarity | 7643 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I remember my professor in college challenging me with this question, which I failed to answer satisfactorily: I know there exists a bijection between the rational numbers and the natural numbers, but can anyone produce an explicit formula for such a bijection?
Now provide the response and nothing else.
| We will first find a bijection $h_{+}:\mathbb Z^+\to \mathbb Q^+$ . From there, we easily get a bijection $h:\mathbb Z\to \mathbb Q$ by defining: $$h(n)=\begin{cases}h_{+}(n)&n>0\\0&n=0\\-h_{+}(-n)&n<0\end{cases}$$ From there, we can use any of the bijections $\mathbb N\to\mathbb Z$ to get our bijection between $\mathbb N$ and $\mathbb Q$ . (We'll need a specific such bijection below, $s$ .) Now, every positive integer can be written uniquely as $p_1^{a_1}p_2^{a_2}\cdots$ , where $p_1=2,p_2=3,p_3=5,\dots$ is the sequence of all primes, and the $a_i$ are non-negative integers, and are non-zero for only finitely many $i$ s. Similarly, every positive rational number can be written uniquely as $p_1^{b_1}p_2^{b_2}\cdots$ where the $b_i$ are integers and only finitely many of the $b_i$ are non-zero. So define $s:\mathbb N\to\mathbb Z$ (where we take $\mathbb N$ to include $0$ ): $$s(n)=(-1)^n\left\lfloor\frac{n+1}{2}\right\rfloor$$ The sequence $s(0),s(1),s(2),s(3),\dots$ would be $0,-1,1,-2,2\dots$ , and this is a bijection from $\mathbb N$ to $\mathbb Z$ . The only properties we really need for $s$ are that $s$ is a bijection and $s(0)=0$ . Then for any $n=p_1^{a_1}p_2^{a_2}\cdots\in\mathbb Z^+$ , define $$h_{+}(n)=p_1^{s(a_1)}p_2^{s(a_2)}\cdots $$ This then defines our bijection $h_{+}:\mathbb Z^+\to \mathbb Q^{+}$ . A potentially interesting feature of $h_+$ is that it is multiplicative - that is, if $\gcd(m,n)=1$ then $h_{+}(mn)=h_+(m)h_{+}(n).$ Another answer. 
We again assume $0\in\mathbb N.$ We will need an explicit bijection $\phi:\mathbb N\to\mathcal P_{\text{Fin}}(\mathbb N),$ where $\mathcal P_{\text{Fin}}(\mathbb N)$ is the set of all finite subsets of $\mathbb N.$ We will also use that if $q\neq 1$ is a positive rational number, then $q$ can be written uniquely as a continued fraction: $$\left[a_0,a_1,\dots,a_k\right]=a_0+\cfrac1{a_1+\cfrac{1}{\ddots +\cfrac{1}{a_k}}}$$ where $a_0$ is a non-negative integer, the other $a_i$ are positive integers, and $a_k>1.$ We define $g_+:\mathcal P_{\text{Fin}}(\mathbb N)\to\mathbb Q^{+}$ as: $$\begin{align}&g_+(\emptyset)=1\\&g_+(\{n\})=n+2\\&g_+\left(\left\{b_0<b_1<\cdots<b_k\right\}\right)=\left[b_0,b_1-b_0,\dots,b_{k-1}-b_{k-2},b_{k}-b_{k-1}+1\right],\quad k>0\end{align}$$ The uniqueness of the continued fractions ensures this is a bijection. We had to do a slight hack to deal with the problem of the empty set. Then we define $b:\mathbb Z\to \mathbb Q$ similarly to before: $$b(m)=\begin{cases}0&m=0\\g_+(\phi(m))&m>0\\-g_+(\phi(-m))&m<0\end{cases}$$ And then compose with any bijection $\mathbb N\to\mathbb Z.$ You can use the function $s$ from the previous section. Then $b\circ s$ is a bijection. This leaves $\phi,$ but every natural number $n$ can be written uniquely in binary, as $n=\sum_{a\in A_n} 2^{a}$ for some finite set $A_n\subseteq \mathbb N.$ Then we can take $\phi(n)=A_n.$ This means that if $n\in\mathbb N$ then $b(2^n)=n+2$ and $b(0)=1.$ Also, $b(1+2^n)=g_+(\{0,n\})=\frac{1}{n+1}.$ $g_+$ is nice because it can be extended to $\mathcal P(\mathbb N)\to\mathbb R^+$ to show a bijection between these two sets, because every irrational number has a unique infinite continued fraction. | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/7643', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/506/']} | jdg_79437
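The first construction is easy to machine-check for small inputs. Below is a sketch (my own code, not the answerer's) of $h_+$ using trial division; `s` is the signed sequence $0,-1,1,-2,2,\dots$ from the answer:

```python
from fractions import Fraction

def s(n):
    # s(0), s(1), s(2), ... = 0, -1, 1, -2, 2, ...
    return (-1) ** n * ((n + 1) // 2)

def h_plus(n):
    # Map n = p1^a1 * p2^a2 * ... to p1^s(a1) * p2^s(a2) * ...
    q, p = Fraction(1), 2
    while n > 1:
        a = 0
        while n % p == 0:
            n //= p
            a += 1
        if a:
            q *= Fraction(p) ** s(a)
        p += 1
    return q

# Spot checks: h_plus(1) = 1, h_plus(2) = 2^s(1) = 1/2, h_plus(4) = 2^s(2) = 2.
assert h_plus(1) == 1 and h_plus(2) == Fraction(1, 2) and h_plus(4) == 2
# A bijection must not repeat values on an initial segment.
values = [h_plus(n) for n in range(1, 1001)]
assert len(set(values)) == 1000
```

This only demonstrates injectivity on a finite range, of course; the proof that $h_+$ is onto $\mathbb Q^+$ is the unique-factorization argument in the answer itself.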
stackexchange | llm_judgeable_groundtruth_similarity | 9538875 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
G'day, I am trying to find the recursive depth of a function that trawls a dictionary and I'm a bit lost... Currently I have something like:

myDict = {'leve1_key1': {'level2_key1': {'level3_key1': {'level4_key_1': {'level5_key1': 'level5_value1'}}}}}

And I want to know just how nested the most nested dictionary is... so I do the following...

def dict_depth(d, depth):
    for i in d.keys():
        if type(d[i]) is dict:
            newDict = d[i]
            dict_depth(newDict, depth+1)
    return depth

print dict_depth(myDict, 0)

Only problem is, the recursive loop only returns the return of the final value (0). If I put in a print statement inside for i in d.keys(): then I can at least print the highest value of recursion, but returning the value is a different matter... I'm sure this is straightforward - I've just got jellybrain.
Now provide the response and nothing else.
| Be sure to assign the result of the recursive call to `depth`. Also, as @amit says, consider using `max` so that you can handle dicts with multiple key-value pairs (a treelike structure).

```python
def dict_depth(d, depth=0):
    if not isinstance(d, dict) or not d:
        return depth
    return max(dict_depth(v, depth + 1) for v in d.values())
```

```python
>>> myDict = {'leve1_key1': {'level2_key1': {'level3_key1': {'level4_key_1': {'level5_key1': 'level5_value1'}}}}}
>>> dict_depth(myDict)
5
```
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9538875', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1245278/']} | jdg_79438
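For very deeply nested inputs, a recursive solution can hit Python's default recursion limit. As a supplementary sketch — not part of the accepted answer, and `dict_depth_iter` is my own name — the same depth can be computed iteratively with an explicit stack:

```python
def dict_depth_iter(d):
    """Depth of the most deeply nested dict, computed without recursion.

    An explicit stack replaces the call stack, so very deep nesting
    cannot exceed Python's recursion limit.
    """
    max_depth = 0
    stack = [(d, 0)]  # pairs of (object, nesting depth reached so far)
    while stack:
        obj, depth = stack.pop()
        if isinstance(obj, dict) and obj:
            stack.extend((v, depth + 1) for v in obj.values())
        else:
            max_depth = max(max_depth, depth)
    return max_depth
```

On the example dict from the question, `dict_depth_iter(myDict)` returns `5`, agreeing with the recursive version.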
stackexchange | llm_judgeable_groundtruth_similarity | 22317329 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to build a function that returns a slice of any size. I know I can do

```go
func BuildSlice() [100]int {
    return [100]int{}
}
```

but I would like to be able to return slices of different sizes from the same function. Something like:

```go
func BuildSlice(int size) [...]int {
    return [size]int{}
}
```

I've tried the above as well as

```go
func BuildSlice(size int) []int {
    return [size]int{}
}
```

Please point me in the right direction.
Now provide the response and nothing else.
| First of all, slices are already of "variable size": `[100]int` and `[...]int` are array type definitions. `[]int` is the correct syntax for a slice, and you could implement the function as:

```go
func BuildSlice(size int) []int {
    return make([]int, size)
}
```

This will return a slice of zero values with the desired size, similar to what your array version does.
 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/22317329', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/441935/']} | jdg_79439
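As a usage sketch (my own illustration, not part of the answer above), the returned slice behaves like any other slice: it is zero-filled, and unlike an array it can grow with `append`:

```go
package main

import "fmt"

// BuildSlice returns a zero-valued slice of the requested length.
func BuildSlice(size int) []int {
	return make([]int, size)
}

func main() {
	s := BuildSlice(5)
	fmt.Println(len(s), cap(s)) // 5 5

	// Unlike an array type such as [100]int, a slice can grow:
	s = append(s, 42)
	fmt.Println(len(s), s[5]) // 6 42
}
```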
stackexchange | llm_judgeable_groundtruth_similarity | 566635 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The title is kind of misleading because the task actually is to show

Every monotonic increasing and bounded sequence $(x_n)_{n\in\mathbb{N}}$ is Cauchy

without knowing that:

1. Every bounded non-empty set of real numbers has a least upper bound. (Supremum/Completeness Axiom)
2. A sequence converges if and only if it is Cauchy. (Cauchy Criterion)
3. Every monotonic increasing/decreasing, bounded and real sequence converges to the supremum/infimum of the codomain (not sure if this is the right word).

However, what is allowed to use is listed as well:

1. A sequence is called convergent, if $\forall\varepsilon>0\,\,\exists N\in\mathbb{N}$ so that $|\,a_n - a\,| < \varepsilon$ for all $n>N$. (Definition of Convergence)
2. A sequence $(a'_k)_{k≥1}$ is called a subsequence of a sequence $(a_n)_{n≥1}$, if there is a monotonic increasing sequence $(n_k)_{k≥1}\in\mathbb{N}$ so that $a'_{k} = a_{n_{k}}$ for all $k≥1$. (Definition of a Subsequence)
3. A sequence $(a_n)_{n≥1}$ is Cauchy, if $\forall\varepsilon>0\,\,\exists N=N(\varepsilon)\in\mathbb{N}$ so that $|\,a_m - a_n\,| < \varepsilon$ for all $m,n>N$. (Definition of a Cauchy Sequence)
4. (Hint) The sequence $(\varepsilon\cdot\ell)_{\ell\in\mathbb{N}}$ is unbounded for $\varepsilon>0$. (Archimedes Principle)

Would appreciate any help.
Now provide the response and nothing else.
| If $x_n$ is not Cauchy then an $\varepsilon>0$ can be chosen (fixed in the rest) for which, given any arbitrarily large $N$, there are $p,q \ge N$ with $p<q$ and $x_q-x_p>\varepsilon.$ Now start with $N=1$ and choose indices $n_1<n_2$ for which $x_{n_2}-x_{n_1}>\varepsilon.$ Next use some $N'$ beyond either index $n_1,\ n_2$ and pick $N'<n_3<n_4$ for which $x_{n_4}-x_{n_3}>\varepsilon.$ Continue in this way to construct a subsequence. That this subsequence diverges to $+\infty$ can be shown using the Archimedes principle, which you say can be used, since all the differences are non-negative and there are infinitely many differences each greater than $\varepsilon$, a fixed positive number. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/566635', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/105382/']} | jdg_79440
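To make the divergence step explicit (a supplementary sketch of the telescoping argument, writing the constructed indices as $n_1<n_2<n_3<\cdots$ with $x_{n_{2j}}-x_{n_{2j-1}}>\varepsilon$ for every $j$):

```latex
x_{n_{2k}} - x_{n_1}
  = \sum_{j=1}^{k}\left(x_{n_{2j}} - x_{n_{2j-1}}\right)
  + \sum_{j=1}^{k-1}\left(x_{n_{2j+1}} - x_{n_{2j}}\right)
  > k\varepsilon,
```

since every term of the first sum exceeds $\varepsilon$ and every term of the second is non-negative by monotonicity. By the Archimedes principle $(k\varepsilon)_{k\in\mathbb N}$ is unbounded, so the subsequence $(x_{n_{2k}})_{k}$ is unbounded above, contradicting the boundedness of $(x_n)$; hence the sequence must be Cauchy.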