source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 41698767 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We have a play2/scala application which we are building with gitlab ci. Our .gitlab-ci.yml (at least the important part) looks as follows:

```yaml
image: hseeberger/scala-sbt

variables:
  SBT_GLOBAL_BASE_DIR: "$CI_PROJECT_DIR/cache/.sbt"
  IVY2_CACHE_DIR: "$CI_PROJECT_DIR/cache/.ivy2"
  SBT_BOOT_DIR: "$CI_PROJECT_DIR/cache/.sbt/boot"
  M2_HOME_DIR: "$CI_PROJECT_DIR/cache/.m2"

before_script:
  # Log the sbt version
  - sbt sbt-version

build:
  stage: build
  script:
    - ./build.sh
```

with build.sh:

```bash
sbt -Dsbt.global.base=$SBT_GLOBAL_BASE_DIR \
    -Dsbt.ivy.home=$IVY2_CACHE_DIR \
    -Dsbt.boot.directory=$SBT_BOOT_DIR \
    compile
```

Unfortunately, our pipeline always runs for around 30-40 minutes with all the steps (build, verification, deploy). Most of that time is spent downloading sbt over and over again, which is really annoying. I might not know enough about gitlab ci runners, but from my understanding, by using hseeberger/scala-sbt as the image, sbt should be globally available and there should be no need to download it. Then this solution from gitlab would also not be necessary. Anyhow, I would be glad if sbt were not downloaded a total of 6 times during each deployment, whenever the server runs any sbt command. Can someone explain to me how to use the right image, or the image in the right way, or otherwise how I can cache the sbt stuff?

Update

Over the last days I fought a lot with docker and gitlab ci. I found that this problem is pretty much the same as described in "don't downloading the internet". Having all the dependencies available is a hard task and seems best done by mounting them, which is unfortunately not possible as such on a shared gitlab ci runner. I went on and discovered sbt-docker, which allows you to build docker containers from a build.sbt file. With the package basic approach I tried to include all the locally available dependencies for the project into the container as global sbt plugins, but this didn't help either. My last discovery was this answer regarding the maven solution, and I tried to translate it for our sbt project:

.gitlab-ci.yml

```yaml
image: hseeberger/scala-sbt

variables:
  MAVEN_OPTS: -Dmaven.repo.local=/cache/maven.repository

stages:
  - build
  - test
  - staging
  - deploy

build:
  stage: build
  script:
    - sbt compile -Dsbt.ivy.home=/cache/.ivy2 -Dsbt.global.base=/cache/.sbt/0.13 -Dsbt.boot.directory=/cache/.sbt/boot -Dsbt.repository.config=/cache/.sbt/repositories
```

I can access the gitlab ci logs again. They look basically as follows:

```
[info] Loading project definition from /builds/kwiqjobs/backend/project
[info] Updating {file:/builds/kwiqjobs/backend/project/}backend-build...
[info] Resolving com.typesafe.play#sbt-plugin;2.5.4 ...
[info] Resolving com.typesafe.play#sbt-plugin;2.5.4 ...
[info] Resolving com.typesafe.play#sbt-routes-compiler_2.10;2.5.4 ...
[info] Resolving com.typesafe.play#sbt-routes-compiler_2.10;2.5.4 ...
[info] Resolving org.scala-lang#scala-library;2.10.6 ...
[info] Resolving com.typesafe.play#twirl-api_2.10;1.1.1 ...
[info] Resolving com.typesafe.play#twirl-api_2.10;1.1.1 ...
... a **lot** more ...
[info] [SUCCESSFUL ] com.typesafe.sbt#sbt-twirl;1.1.1!sbt-twirl.jar (1033ms)
[info] downloading https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.typesafe.sbt/sbt-native-packager/scala_2.10/sbt_0.13/1.0.3/jars/sbt-native-packager.jar ...
[info] [SUCCESSFUL ] com.typesafe.sbt#sbt-native-packager;1.0.3!sbt-native-packager.jar (954ms)
[info] downloading https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.typesafe.sbt/sbt-web/scala_2.10/sbt_0.13/1.3.0/jars/sbt-web.jar ...
[info] [SUCCESSFUL ] com.typesafe.sbt#sbt-web;1.3.0!sbt-web.jar (1010ms)
[info] downloading https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/com.typesafe.sbt/sbt-js-engine/scala_2.10/sbt_0.13/1.1.3/jars/sbt-js-engine.jar ...
[info] [SUCCESSFUL ] com.typesafe.sbt#sbt-js-engine;1.1.3!sbt-js-engine.jar (1147ms)
[info] downloading https://repo1.maven.org/maven2/com/typesafe/play/twirl-api_2.10/1.1.1/twirl-api_2.10-1.1.1.jar ...
[info] [SUCCESSFUL ] com.typesafe.play#twirl-api_2.10;1.1.1!twirl-api_2.10.jar (89ms)
[info] downloading https://repo1.maven.org/maven2/commons-io/commons-io/2.4/commons-io-2.4.jar ...
[info] [SUCCESSFUL ] commons-io#commons-io;2.4!commons-io.jar (48ms)
... a **lot** more ...
[info] Done updating.
[info] Compiling 228 Scala sources and 4 Java sources to /builds/kwiqjobs/backend/target/scala-2.11/classes...
[info] 'compiler-interface' not yet compiled for Scala 2.11.8. Compiling...
[info] Compilation completed in 17.735 s
[success] Total time: 149 s, completed Jan 20, 2017 2:22:52 PM
Build succeeded
```

And I would like to get rid of all the downloading.
Now provide the response and nothing else.
| If you don't want to use custom made images, the best solution is to use Gitlab CI's caching mechanism. It's a little hard to get it right, but this blog post describes how to do it for SBT.

Example .gitlab-ci.yml

Quoted from the blog post, minor errors corrected by myself:

```yaml
# some parts originally from https://github.com/randm-ch/units-of-information/blob/master/.gitlab-ci.yml
image: "hseeberger/scala-sbt"

variables:
  SBT_VERSION: "0.13.9"
  SBT_OPTS: "-Dsbt.global.base=sbt-cache/.sbtboot -Dsbt.boot.directory=sbt-cache/.boot -Dsbt.ivy.home=sbt-cache/.ivy"

cache:
  key: "$CI_BUILD_REF_NAME" # contains either the branch or the tag, so it's caching per branch
  untracked: true
  paths:
    - "sbt-cache/.ivy/cache"
    - "sbt-cache/.boot"
    - "sbt-cache/.sbtboot"
    - "sbt-cache/target"

stages:
  - test

test:
  script:
    - sbt test
```

Second example, also including apt-get caching

This is what I used for my project, usable for more general use cases and Docker images:

```yaml
image: java:8

stages:
  - test

variables:
  SBT_VERSION: "0.13.9"
  SBT_OPTS: "-Dsbt.global.base=sbt-cache/.sbtboot -Dsbt.boot.directory=sbt-cache/.boot -Dsbt.ivy.home=sbt-cache/.ivy"
  SBT_CACHE_DIR: "sbt-cache/.ivy/cache"

cache:
  key: "$CI_BUILD_REF_NAME" # contains either the branch or the tag, so it's caching per branch
  untracked: true
  paths:
    - "apt-cache/"
    - "sbt-cache/.ivy/cache"
    - "sbt-cache/.boot"
    - "sbt-cache/.sbtboot"
    - "sbt-cache/target"

before_script:
  - export APT_CACHE_DIR=`pwd`/apt-cache
  - mkdir -pv $APT_CACHE_DIR
  - ls $APT_CACHE_DIR || echo "no apt-cache dir found"
  - apt-get -o dir::cache::archives=$APT_CACHE_DIR update -y
  - apt-get -o dir::cache::archives=$APT_CACHE_DIR install apt-transport-https -y
  # Install SBT
  - mkdir -pv $SBT_CACHE_DIR
  - ls $SBT_CACHE_DIR || echo "no ivy2 cache dir found"
  - echo "deb http://dl.bintray.com/sbt/debian /" | tee -a /etc/apt/sources.list.d/sbt.list
  - apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823
  - apt-get -o dir::cache::archives=$APT_CACHE_DIR update -y
  - apt-get -o dir::cache::archives=$APT_CACHE_DIR install sbt -y
  - sbt -v sbtVersion

test:
  stage: test
  script:
    - sbt -v sbtVersion
```
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41698767', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7241267/']} | jdg_85741 |
stackexchange | llm_judgeable_groundtruth_similarity | 12295711 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use this regex to split a string at, say, every 3rd position: String[] thisCombo2 = thisCombo.split("(?<=\\G...)"); where the 3 dots after the G indicate every nth position to split. In this case, the 3 dots indicate every 3 positions. An example: Input: String st = "123124125134135145234235245"; Output: 123 124 125 134 135 145 234 235 245. My question is, how do I let the user control the number of positions the string must be split at? In other words, how do I make those 3 dots n dots, controlled by the user?
Now provide the response and nothing else.
| For a big performance improvement, an alternative would be to use substring() in a loop:

```java
public String[] splitStringEvery(String s, int interval) {
    int arrayLength = (int) Math.ceil(((s.length() / (double) interval)));
    String[] result = new String[arrayLength];

    int j = 0;
    int lastIndex = result.length - 1;
    for (int i = 0; i < lastIndex; i++) {
        result[i] = s.substring(j, j + interval);
        j += interval;
    }
    // Add the last bit
    result[lastIndex] = s.substring(j);

    return result;
}
```

Example: Input: String st = "1231241251341351452342352456"; Output: 123 124 125 134 135 145 234 235 245 6.

It's not as short as stevevls' solution, but it's way more efficient (see below) and I think it would be easier to adjust in the future, of course depending on your situation.

Performance tests (Java 7u45)

2,000 characters long string - interval is 3.
- split("(?<=\\G.{" + count + "})") performance (in milliseconds): 7, 7, 5, 5, 4, 3, 3, 2, 2, 2
- splitStringEvery() (substring()) performance (in milliseconds): 2, 0, 0, 0, 0, 1, 0, 1, 0, 0

2,000,000 characters long string - interval is 3.
- split() performance (in milliseconds): 207, 95, 376, 87, 97, 83, 83, 82, 81, 83
- splitStringEvery() performance (in milliseconds): 44, 20, 13, 24, 13, 26, 12, 38, 12, 13

2,000,000 characters long string - interval is 30.
- split() performance (in milliseconds): 103, 61, 41, 55, 43, 44, 49, 47, 47, 45
- splitStringEvery() performance (in milliseconds): 7, 7, 2, 5, 1, 3, 4, 4, 2, 1

Conclusion: The splitStringEvery() method is a lot faster (even after the changes in Java 7u6), and it escalates when the intervals become higher.

Ready-to-use Test Code: pastebin.com/QMPgLbG9 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12295711', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1651268/']} | jdg_85742 |
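For comparison, the same fixed-width split is a short slice loop in other languages as well; here is a Python sketch of the identical substring idea (purely illustrative, the function name is ours):

```python
def split_every(s, interval):
    # Same idea as splitStringEvery(): take fixed-width slices and let
    # the final slice be shorter when the length is not a multiple.
    return [s[i:i + interval] for i in range(0, len(s), interval)]

print(split_every("1231241251341351452342352456", 3))
# ['123', '124', '125', '134', '135', '145', '234', '235', '245', '6']
```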
stackexchange | llm_judgeable_groundtruth_similarity | 4901710 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say that I have an application which manages users. You can add new users, delete them, edit details etc. Each user has an ID and has a detail page at a URL like this: ..../user/detail.jsf?id=123 Now, what should happen if the user with ID 123 does not exist? I think the natural reaction would be a standard 404 error. Exactly the same as is output when you make a typo in the URL (like /user/dtail.jsf). So the question is: is there such a method? Or maybe: is this reaction (404) appropriate? Thanks.
Now provide the response and nothing else.
| Just attach a validator to the id view parameter and if validation fails, set error code 404 on the response. E.g. consider this simple Facelet:

```xml
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core">
    <f:metadata>
        <f:viewParam id="id" name="id" value="#{myBean.id}" validator="#{myBean.validate}"/>
    </f:metadata>
    <h:body>
        <h:outputText value="#{myBean.id}"/>
    </h:body>
</html>
```

And the following backing bean:

```java
@ManagedBean
@ViewScoped
public class MyBean {

    private Long id;

    public void validate(FacesContext context, UIComponent component, Object object) {
        // Do some validation
        // And if failed:
        context.getExternalContext().setResponseStatus(404);
        context.responseComplete();
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }
}
```
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4901710', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/399510/']} | jdg_85743 |
stackexchange | llm_judgeable_groundtruth_similarity | 95635 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What does a JIT compiler specifically do as opposed to a non-JIT compiler? Can someone give a succinct and easy to understand description?
Now provide the response and nothing else.
| A JIT compiler runs after the program has started and compiles the code (usually bytecode or some kind of VM instructions) on the fly (or just-in-time, as it's called) into a form that's usually faster, typically the host CPU's native instruction set. A JIT has access to dynamic runtime information whereas a standard compiler doesn't and can make better optimizations like inlining functions that are used frequently. This is in contrast to a traditional compiler that compiles all the code to machine language before the program is first run. To paraphrase, conventional compilers build the whole program as an EXE file BEFORE the first time you run it. For newer style programs, an assembly is generated with pseudocode (p-code). Only AFTER you execute the program on the OS (e.g., by double-clicking on its icon) will the (JIT) compiler kick in and generate machine code (m-code) that the Intel-based processor or whatever will understand. | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/95635', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6264/']} | jdg_85744 |
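To make "compiling at run time using dynamic information" concrete, here is a toy Python sketch — this is not how a production JIT works internally, just the principle of generating specialized code once a runtime value is known:

```python
def jit_specialize(factor):
    # Act like a tiny JIT: once the hot runtime constant is known,
    # emit source specialized to it and compile that source.
    source = f"def scale(x):\n    return x * {factor}\n"
    namespace = {}
    exec(compile(source, "<jit>", "exec"), namespace)
    return namespace["scale"]

scale_by_3 = jit_specialize(3)  # "compiled" on first use, at run time
print(scale_by_3(14))           # 42
```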
stackexchange | llm_judgeable_groundtruth_similarity | 21963858 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm pretty new to Node.js and I am trying to read a file into a variable. Here is my code:

```javascript
var fs = require("fs"),
    path = require("path"),
    util = require("util");

var content;
console.log(content);

fs.readFile(path.join(__dirname, "helpers", "test.txt"), 'utf8', function (err, data) {
    if (err) {
        console.log(err);
        process.exit(1);
    }
    content = util.format(data, "test", "test", "test");
});

console.log(content);
```

But every time I run the script I get undefined and undefined. What am I missing? Help please!
Now provide the response and nothing else.
| As stated in the comments under your question, node is asynchronous - meaning that your function has not completed execution when your second console.log function is called. If you move the log statement inside the callback after reading the file, you should see the contents outputted:

```javascript
var fs = require("fs"),
    path = require("path"),
    util = require("util");

var content;
console.log(content);

fs.readFile(path.join(__dirname, "helpers", "test.txt"), 'utf8', function (err, data) {
    if (err) {
        console.log(err);
        process.exit(1);
    }
    content = util.format(data, "test", "test", "test");
    console.log(content);
});
```

Even though this will solve your immediate problem, without an understanding of the async nature of node, you're going to encounter a lot of issues. This similar stackoverflow answer goes into more details of what other alternatives are available. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21963858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_85745 |
stackexchange | llm_judgeable_groundtruth_similarity | 16360535 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was recently adding a button into my program within some java code and it couldn't resolve the data type, so as usual I saved, built, and cleaned it. When I did that, all of my Android ids and the majority of my XML forms started having errors. Here is one of the XML forms:

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:background="@drawable/background">

    <ScrollView
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:layout_weight="2" >

        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:orientation="vertical" >

            <TextView
                android:id="@+id/ACBonus"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:text="Armor Class Bonus"
                android:textColor="#FFFF00"
                android:textAppearance="?android:attr/textAppearanceLarge">
            </TextView>

            <TextView
                android:id="@+id/ACType"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:text="Armor Class Type"
                android:textColor="#FFFF00"
                android:textAppearance="?android:attr/textAppearanceLarge">
            </TextView>

            <TextView
                android:id="@+id/ACCheckPenalty"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:text="Armor Check Penalty"
                android:textColor="#FFFF00"
                android:textAppearance="?android:attr/textAppearanceLarge">
            </TextView>

            <TextView
                android:id="@+id/ACSpellFailure"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:text="Armor Class Spell Failure"
                android:textColor="#FFFF00"
                android:textAppearance="?android:attr/textAppearanceLarge">
            </TextView>

            <TextView
                android:id="@+id/ACWeight"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:text="Armor Class Weight"
                android:textColor="#FFFF00"
                android:textAppearance="?android:attr/textAppearanceLarge">
            </TextView>

            <TextView
                android:id="@+id/ACProperties"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:text="Armor Class Properties"
                android:textColor="#FFFF00"
                android:textAppearance="?android:attr/textAppearanceLarge">
            </TextView>

            <Button
                android:id="@+id/bACDescriptionSave"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:text="Save" >

        </LinearLayout>
    </ScrollView>
</LinearLayout>
```

and here is the java class I was working in at the time.
```java
package com.example.pathfinderapprentice;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.TextView;

public class WeapDescrip extends Activity implements OnClickListener {

    TextView Weapon, WeapAttackBonus, WeapCrit, WeapType, WeapRange, WeapAmmo, WeapDamage, Save;
    FileOutputStream WeaponStuff;
    String FILENAME = "InteralString";
    PopUp PopUp = new PopUp();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        // TODO Auto-generated method stub
        super.onCreate(savedInstanceState);
        setContentView(R.layout.weapdescrip);
        setIds();
        loadData();
    }

    @Override
    public void onClick(View v) {
        // TODO Auto-generated method stub
        Intent W = new Intent(WeapDescrip.this, PopUp.class);
        switch (v.getId()) {
        case R.id.Weapon:
            /*
             * String data = "Test"; File f = new File(FILENAME); try {
             * WeaponStuff = new FileOutputStream(f); WeaponStuff.close(); }
             * catch (FileNotFoundException e) { // TODO Auto-generated catch
             * block e.printStackTrace(); } catch (IOException e) { // TODO
             * Auto-generated catch block e.printStackTrace(); }
             *
             * try { WeaponStuff = openFileOutput(FILENAME,
             * Context.MODE_PRIVATE); WeaponStuff.write(data.getBytes());
             * WeaponStuff.close(); } catch (FileNotFoundException e) { // TODO
             * Auto-generated catch block e.printStackTrace(); } catch
             * (IOException e) { // TODO Auto-generated catch block
             * e.printStackTrace(); }
             */
            startActivity(W);
            String WeaponName = PopUp.savedObject;
            break;
        case R.id.WeapAttackBonus:
            startActivity(W);
            String WeaponAttackBonus = PopUp.savedObject;
            break;
        case R.id.WeapCrit:
            startActivity(W);
            String WeaponCritical = PopUp.savedObject;
            break;
        case R.id.WeapType:
            startActivity(W);
            String WeaponType = PopUp.savedObject;
            break;
        case R.id.WeapRange:
            startActivity(W);
            String WeaponRange = PopUp.savedObject;
            break;
        case R.id.WeapAmmuniton:
            startActivity(W);
            String WeaponAmmo = PopUp.savedObject;
            break;
        case R.id.WeapDamage:
            startActivity(W);
            String WeaponDamage = PopUp.savedObject;
            break;
        }
    }

    public void setIds() {
        Weapon = (TextView) findViewById(R.id.Weapon);
        WeapAttackBonus = (TextView) findViewById(R.id.WeapAttackBonus);
        WeapCrit = (TextView) findViewById(R.id.WeapCrit);
        WeapType = (TextView) findViewById(R.id.WeapType);
        WeapRange = (TextView) findViewById(R.id.WeapRange);
        WeapAmmo = (TextView) findViewById(R.id.WeapAmmuniton);
        WeapDamage = (TextView) findViewById(R.id.WeapDamage);

        Weapon.setOnClickListener(this);
        WeapAttackBonus.setOnClickListener(this);
        WeapCrit.setOnClickListener(this);
        WeapType.setOnClickListener(this);
        WeapRange.setOnClickListener(this);
        WeapAmmo.setOnClickListener(this);
        WeapDamage.setOnClickListener(this);

        try {
            WeaponStuff = openFileOutput(FILENAME, Context.MODE_PRIVATE);
            WeaponStuff.close();
        } catch (FileNotFoundException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    public void loadData() {
        // TODO Auto-generated method stub
        /*
         * String collected = null; FileInputStream fis = null; try { fis =
         * openFileInput(FILENAME); byte[] dataArray = new
         * byte[fis.available()]; while(fis.read(dataArray) != -1){ collected =
         * new String(dataArray); } } catch (FileNotFoundException e) { // TODO
         * Auto-generated catch block e.printStackTrace(); } catch
         * (IOException e) { // TODO Auto-generated catch block
         * e.printStackTrace(); } finally{ try { fis.close();
         * Weapon.setText(collected); } catch (IOException e) { // TODO
         * Auto-generated catch block e.printStackTrace(); } }
         */
    }
}
```
Now provide the response and nothing else.
| This should work: It basically involves converting them to decimal, finding the halves, converting the results back to hex and then concatenating them.

```javascript
var color1 = 'FF0000';
var color2 = '00FF00';
var ratio = 0.5;

var hex = function(x) {
    x = x.toString(16);
    return (x.length == 1) ? '0' + x : x;
};

var r = Math.ceil(parseInt(color1.substring(0,2), 16) * ratio + parseInt(color2.substring(0,2), 16) * (1-ratio));
var g = Math.ceil(parseInt(color1.substring(2,4), 16) * ratio + parseInt(color2.substring(2,4), 16) * (1-ratio));
var b = Math.ceil(parseInt(color1.substring(4,6), 16) * ratio + parseInt(color2.substring(4,6), 16) * (1-ratio));

var middle = hex(r) + hex(g) + hex(b);
```
 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/16360535', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1636202/']} | jdg_85746 |
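The channel-wise arithmetic is the interesting part, so here is the same interpolation transcribed into Python as a hedged translation (the function name is ours, not from the answer):

```python
import math

def mix_hex(color1, color2, ratio):
    # Linear interpolation per RGB channel: parse two hex digits,
    # blend with the given ratio, format back to two hex digits.
    out = []
    for i in (0, 2, 4):
        a = int(color1[i:i + 2], 16)
        b = int(color2[i:i + 2], 16)
        out.append(math.ceil(a * ratio + b * (1 - ratio)))
    return "".join("{:02x}".format(c) for c in out)

print(mix_hex("FF0000", "00FF00", 0.5))  # 808000, halfway between red and green
```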
stackexchange | llm_judgeable_groundtruth_similarity | 2869 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In a recent CACM article [1], the authors present an implementation for staged functions. They use the term as if it were well-known, and none of the references looks like an obvious introduction. They give a short explanation (emphasis mine and reference number changed; it's 22 in the original):

> In the context of program generation, multistage programming (MSP, staging for short) as established by Taha and Sheard [2] allows programmers to explicitly delay evaluation of a program expression to a later stage (thus, staging an expression). The present stage effectively acts as a code generator that composes (and possibly executes) the program of the next stage.

However, Taha and Sheard write (emphasis mine):

> A multi-stage program is one that involves the generation, compilation, and execution of code, all inside the same process. Multi-stage languages express multi-stage programs. Staging, and consequently multi-stage programming, address the need for general purpose solutions which do not pay run-time interpretive overheads.

They then go on to several references to older work allegedly showing that staging is effective, which suggests that the concept is even older. They don't give a reference for the term itself. These statements seem to be orthogonal, if not contradictory; maybe what Rompf and Odersky write is an application of what Taha and Sheard propose, but maybe it is another perspective on the same thing. They seem to agree that an important point is that programs (re)write parts of themselves at runtime, but I do not know whether that is a necessary and/or sufficient ability. So, what is staging, respectively what are interpretations of staging in this context? Where does the term come from?

1. Lightweight Modular Staging: A Pragmatic Approach to Runtime Code Generation and Compiled DSLs by T. Rompf and M. Odersky (2012)
2. MetaML and multi-stage programming with explicit annotations by W. Taha and T. Sheard (2000)
Now provide the response and nothing else.
| To the best of my knowledge, the term staged computation was first used by Bill Scherlis in this paper . Prior to that, the term " partial evaluation " was used for much the same concept, but the idea of staged computation is subtly different. Both the ideas are related to Kleene's S-m-n theorem . If you have a function $\phi(m,n)$ of two arguments, but you know one argument, say $m$, then you can perform some of the computation of the function right away using the knowledge of the first argument. What you are then left with is a function $\phi_m(n)$ whose computations only depend on the second, unknown, argument. The idea of partial evaluation is to compute the specialized function $\phi_m(n)$ automatically . Given the code for the original function $\phi$, partial evaluation does static analysis to determine which bits of the code depend on $m$ and which bits depend on $n$, and transforms it to a function $\phi'$ which, given $m$, constructs $\phi_m$. The second argument $n$ can then be fed to this specialized function. The idea of staged computation is to think about the function $\phi'$ first. It is called a "staged" function because it works in multiple stages. Once we give it the first argument $m$, it constructs the code for the specialized function $\phi_m$. This is the "first stage." In the second stage, the second argument is provided to $\phi_m$ which does the rest of the job. So, the job of partial evaluation is to transform the code for an ordinary function $\phi$ to a staged function $\phi'$. Scherlis envisaged that this transformation could be done by more general mechanisms than the earlier partial evaluation methods. The subject of "staged computation" now deals with issues such as:

- How to define staged functions?
- What programming languages and type systems should be used for defining staged functions?
- What is the semantics of such languages?
- How do we ensure the coherence and correctness of staged functions?
- What techniques are useful for automatically or semi-automatically constructing staged functions?
- How do we prove the correctness of such techniques?

Staged computation can be very important in practice. In fact, every compiler is in effect a staged computation. Given a source program, it constructs a translated and optimized target program, which can then take the actual input and calculate the result. It is hard to write staged computation programs in practice because we have to juggle the multiple stages and make sure that the right things are done at the right time. Everybody who has written a compiler has struggled with such issues. It is also hard to write programs that write other programs, be they machine language programs (compilers), SQL queries (database manipulations) or HTML/Server Pages/Javascript code (web applications) and myriads of other applications. The researchers in staged computation aim to create good languages and tools that make it easier and safer to create such applications. | {} | {'log_upvote_score': 6, 'links': ['https://cs.stackexchange.com/questions/2869', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/98/']} | jdg_85747 |
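The $\phi \to \phi'$ transformation is easy to mimic in any language with closures; here is a small illustrative Python sketch of a hand-staged function (not from Scherlis's paper, just the idea):

```python
def power(m, n):          # the ordinary two-argument phi(m, n)
    return n ** m

def staged_power(m):      # phi': stage one consumes m ...
    # A real staged system could generate optimized code here, knowing m;
    # a closure stands in for the generated "next stage" program.
    def phi_m(n):         # ... stage two consumes n
        return n ** m
    return phi_m

square = staged_power(2)  # first stage: build the specialized function
print(square(9))          # second stage: 81
```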
stackexchange | llm_judgeable_groundtruth_similarity | 5119 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $\mathcal{F}_1 \subset \mathcal{F}_2 \subset \dotsb$ are sigma algebras, what is wrong with claiming that $\cup_i\mathcal{F}_i$ is a sigma algebra? It seems closed under complement since for all $x$ in the union, $x$ has to belong to some $\mathcal{F}_i$, and so must its complement. It seems closed under countable union, since for any countable unions of $x_i$ within it, each of the $x_i$ must be in some $\mathcal{F}_j$, and so we can stop the sequence at any point and take the highest $j$ and we know that all the $x_i$'s up to that point are in $\mathcal{F}_j$, and thus so must be their union. There must be some counterexample, but I don't see it.
Now provide the response and nothing else.
| The problem arises in the countable union; your argument is correct as far as it goes, but from the fact that $\cup_{i=1}^n x_i\in \cup_{i=1}^{\infty}F_i$ for each $n$ you cannot conclude that $\cup_{i=1}^{\infty} x_i$ lies in $\cup_{i=1}^{\infty} F_i$ : the full union must be in one of the $F_j$ in order to be in $\cup_{i=1}^{\infty}F_i$ . For an explicit example, take $X=\mathbb{N}$ ; let $F_n$ be the sigma algebra that consists of all subsets of $\{1,\ldots,n\}$ and their complements in $X$ . Now let $x_i=\{2i\}$ . Then each $x_i$ is in $\cup F_i$ , but the union does not lie in any of the $F_k$ , hence does not lie in $\cup F_i$ . Added: In this example, $\cup_{i=1}^{\infty}F_n$ is the algebra of subsets of $X$ consisting of all subsets that are either finite or cofinite, so any infinite subset with infinite complement will not lie in the union, and such a set can always be expressed as a countable union of elements of $\cup F_i$ . | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/5119', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/774/']} | jdg_85748 |
stackexchange | llm_judgeable_groundtruth_similarity | 7304625 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using CMake to generate Visual Studio projects. Everything works fine except one thing. The startup project in the solution is always ALL_BUILD . How do I change the startup project to the real project I want via CMake?
Now provide the response and nothing else.
| CMake now supports this with versions 3.6 and higher through the VS_STARTUP_PROJECT directory property:

```cmake
cmake_minimum_required(VERSION 3.6)
project(foo)

# ...

add_executable(bar ${BAR_SOURCES})
set_property(DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR} PROPERTY VS_STARTUP_PROJECT bar)
```

This will set bar as the startup project for the foo.sln solution. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7304625', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/610846/']} | jdg_85749 |
stackexchange | llm_judgeable_groundtruth_similarity | 17293304 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a pretty basic question about the vocabulary of Java threads. I can't see any possible duplicates but there might be. What does the word alive refer to in Oracles documentation? Is it when the run() method has not yet completed or is it any other parameter?
Now provide the response and nothing else.
| According to the Javadoc you mentioned: A thread is alive if it has been started and has not yet died. A thread "starts" when its start() method is invoked and "dies" at the end of its run() method, or when stop() (now deprecated) is invoked. So yes, a thread is "alive" when its run() method is still ongoing, but it is also "alive" in the time window between the invocation of start() and the implicit invocation of the run() method by the JVM. You can also check Thread.getState() and the interesting information about Thread States suggested by @Marou Maroun. Following his suggestion, I should also warn you that a Thread can end prematurely if an Exception is thrown that propagates beyond run; the Thread would not be alive anymore in that case. EDIT: As suggested by @zakkak, the thread can be considered alive even though the run() method did not start yet. In case you want to have proper control on when it will be invoked, use the ScheduledExecutorService , specifically the schedule() method which gives you a more precise execution schedule. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17293304', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1743457/']} | jdg_85750 |
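The same lifecycle shows up in other threading APIs too; for instance, Python's threading module exposes the identical notion (shown here only as a cross-language analogy, not as Java):

```python
import threading
import time

t = threading.Thread(target=lambda: time.sleep(0.1))
print(t.is_alive())   # False: created, but start() has not been called
t.start()
print(t.is_alive())   # True: between start() and the end of run()
t.join()
print(t.is_alive())   # False: run() finished, the thread has "died"
```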
stackexchange | llm_judgeable_groundtruth_similarity | 585993 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The following set is neither all even nor all odd, so how can I express it using set-builder notation? {0, 3, 6, 9, 12}
Now provide the response and nothing else.
| What you have there already is a form of set builder notation. If you’re asking how you can express it in set builder notation using some property or properties of the numbers in the set, notice that they are the multiples of $3$ between $0$ and $12$, inclusive: $$\{0,3,6,9,12\}=\{n\in\Bbb N:n\le 12\text{ and }3\mid n\}\;,$$ where $3\mid n$ means that $n$ is a multiple of $3$, in case you’re not familiar with that notation. Another possibility is $$\big\{3k:k\in\{0,1,2,3,4\}\big\}\;;$$ yet another is $$\{n\in\Bbb N:\exists k\in\Bbb N(n=3k)\text{ and }n\le 12\}\;.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/585993', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/81466/']} | jdg_85751 |
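As an aside, set-builder notation maps almost verbatim onto set comprehensions in programming languages; a Python rendering of the three descriptions above (purely illustrative):

```python
s1 = {n for n in range(13) if n % 3 == 0}     # {n in N : n <= 12 and 3 | n}
s2 = {3 * k for k in {0, 1, 2, 3, 4}}         # {3k : k in {0, 1, 2, 3, 4}}
s3 = {n for n in range(13)
      if any(n == 3 * k for k in range(13))}  # the existential form
assert s1 == s2 == s3 == {0, 3, 6, 9, 12}
```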
stackexchange | llm_judgeable_groundtruth_similarity | 14270 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There's the idea of quantum annealing being used to solve optimization problems in terms of a QUBO problem for D-Wave's quantum algorithm. I understand that the advantage of quantum annealing as opposed to classical simulated annealing is that quantum annealing allows the particle/search point to tunnel through high barriers with probability as a function of barrier width, instead of having to climb all the way over the barrier (which in some cases wouldn't be possible because there wouldn't be enough energy). This is my understanding from here: http://en.wikipedia.org/wiki/Quantum_annealing If quantum annealing is better than simulated annealing in this fundamental way, would it not be faster to implement QA instead of SA or GA's for solving optimization problems on a classical computer? If so, why aren't people using it? Or are they, and I'm just unaware (in which case I'd love to see references)? D-Wave seems to be banking on the practicality of their quantum computer, not so much insane accuracy or other more "scientific" pursuits. If it just so happens that D-Wave's computers aren't really quantum, shouldn't we be able to find a fast classically implemented quantum annealing algorithm to compete with the quantumly implemented version also?
Now provide the response and nothing else.
| Quantum annealing essentially offers a square-root speed-up over classical simulated annealing in many circumstances. So, yes, it is potentially a faster approach for some optimization problems, but the speed-up isn't enough to make most hard problems tractable. Unfortunately, you cannot efficiently simulate quantum annealing classically, because any approaches we know of require you to keep track of the state of the system, which requires keeping track of exponentially many parameters (the amplitude of each possible classical state). Thus any attempt to simulate quantum annealing incurs a huge overhead which kills any advantage. | {} | {'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/14270', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/12442/']} | jdg_85752 |
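For contrast with tunneling, the classical uphill move that simulated annealing relies on is the Metropolis acceptance rule; a minimal Python sketch of just that step (illustrative, not an optimized solver):

```python
import math
import random

def metropolis_accept(delta_e, temperature):
    # Classical annealing can only cross an energy barrier by paying its
    # full height: an uphill move of size delta_e is accepted with
    # probability exp(-delta_e / T), regardless of barrier width.
    if delta_e <= 0:
        return True
    return random.random() < math.exp(-delta_e / temperature)
```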
stackexchange | llm_judgeable_groundtruth_similarity | 12654613 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm developing an application with the Qt open source edition. As I don't want to deliver the Microsoft Visual C++ redistributables, I'd like to use a statically linked version of Qt. The Qt licensing information says that I "should" develop with a commercial Qt license if I want to use static linking in my application, although when I use the configure -static command for building a static Qt library, the command prompt asks me if I use the commercial or open source version of Qt. So is it possible to build Qt statically with the open source version? Do I need the commercial edition to use static linking? I won't sell the application.
Now provide the response and nothing else.
| EDIT April 2016 Actually, I have recently been reading in depth about LGPL, and asking some experts on the subject. Turns out that the inability to use static linking for Qt in closed source applications is more of a cultivated legend and has nothing to do with reality. What LGPL requires is the possibility for the end user to relink the application against a different version of the library. Using dynamic linking is one way to achieve that, but you can just as easily provide your object files, this way you don't have to open your source and the LGPL requirement is still satisfied. On the Qt website there are 2 legal FAQs and in neither of them is it directly stated that you can't do it. Just as it is not stated that you can. However, there are at least several instances of implying a vague legal threat in case that you do. I think there is a good explanation for all of those - they can't say that you can't do it without publishing a practical lie which may as well have negative legal repercussions for them, and they are willing to discourage doing that rather than encourage it, as it has the potential to force more people into purchasing a commercial license. So in short, yes you can , and you most definitely should as lately Qt has become a living deployment hell, plus as of Qt 5.7 in a static build, QML files are tucked neatly in the executable rather than being out on the file system, for anyone to tamper with. Just make sure that: your Qt build only contains modules, licensed under LGPL, and nothing GPL the about section of your app mentions that it is using Qt and contains a link to where you can download the application's object files include all the respective license files with your application Lastly, your application actually has to be "relinkable", that is, it must be able to work with a compatible library version that provides the necessary functionality. Which means that if you have made modifications to Qt before building it, you must provide those in the form of source code as well, but only the modifications to Qt, not your application's source code. Update: Here is an excerpt directly from the GNU FAQ : For the purpose of complying with the LGPL (any extant version: v2, v2.1 or v3): (1) If you statically link against an LGPL'd library, you must also provide your application in an object (not necessarily source) format, so that a user has the opportunity to modify the library and relink the application. That states it pretty clear. The old, original answer: It is possible to build Qt statically as long as your application is open-source and you provide the source. If you want to keep your source closed, you either need an expensive commercial license, or you need to use dynamic linking. BTW using a static build of Qt is pretty nice, for Qt5 I get about 7-8 MB executable with no external dependencies, which is much better than the 20+ MB of additional dll's you have to ship with a dynamically linked app. For more information, you can take a look at this video: Making the correct license choice when developing with Qt All in all, can it be done? 100% yes. Should it be done? It depends, for personal/testing/learning purposes it is 100% OK, however if you plan to distribute production grade software, be that commercially or not, open source or not, you'd better first consult with a lawyer. The whole subject is unnecessarily complicated, subject to interpretation, so that the consultation with a lawyer becomes more expensive than a pricey commercial license. 
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12654613', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/643657/']} | jdg_85753 |
stackexchange | llm_judgeable_groundtruth_similarity | 2247891 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just installed the POCO Template for EF4. I have a single entity in my model, AnnouncementText , and the T4 files seem to be properly generated. Attempting to access this new entity is throwing the following error when I access the auto-generated property MyObjectContext.AnnouncementTexts : InvalidOperationException: Mapping and metadata information could not be found for EntityType 'MyNamespace.AnnouncementText'. The properties on the AnnouncementText POCO seem to match up with the columns in the database, and I haven't changed any of the auto-generated code. The stack trace is: at System.Data.Objects.ObjectContext.GetTypeUsage(Type entityCLRType) at System.Data.Objects.ObjectContext.GetEntitySetForNameAndType(String entitySetName, Type entityCLRType, String exceptionParameterName) at System.Data.Objects.ObjectContext.CreateObjectSet[TEntity](String entitySetName) at MyNamespace.MyObjectContext.get_AnnouncementTexts() in C:\<snip>\MyObjectContext.Context.cs:line 65 at MyNamespace.Class1.Main() in C:\<snip>\Class1.cs:line 14 If I delete the .tt files from the solution and enable code generation on the model, I am able to access the property without issue. Here's my code, in case that might help: using (var context = new MyObjectContext()) foreach (var at in context.AnnouncementTexts) Console.WriteLine(at.Title); Any ideas on what might be wrong?
Now provide the response and nothing else.
| I recently ran into this same error again while moving my EDMX file to a new location in the solution. Apparently, there are a couple of different namespaces when dealing with EDMX files. There is the namespace you enter via the wizard when creating the initial EDMX file (N1), another that appears in the SSDL which looks something like this (N2): <Schema Namespace="..." .. Then there's the namespace of the generated code which may (optionally) be specified in the designer (N3), and finally there are the hidden namespaces of the resources that are compiled in to your final assembly (N4). From what I can tell, namespace N2 is only really relevant inside the SSDL. I believe this namespace starts off as N1 - the one you initially enter in the wizard. Similarly, namespace N3 is only relevant in the way C# namespaces usually are. Here's the problematic part. Category N4 namespaces are a function of the directory in which your EDMX resides (relative to your project directory). You might think, So what? It turns out those namespaces are also referenced in your App.config file! Specifically, look for a part like this: connectionString="metadata=res://*/Database.Master.csdl|... That portion reading "Database.Master.csdl" is the name of your CSDL resource. If those resource names get out of sync, you'll receive an error like the one above, or perhaps: The specified default EntityContainer name '[name]' could not be found in the mapping and metadata information. The simple solution is to alter your App.config to specify the correct resource name for each part of your EF mapping (CSDL, SSDL, and MSL). If you're not sure exactly what those names are, check out your compiled assembly's resources in ILSpy or dotPeek . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2247891', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/222481/']} | jdg_85754 |
stackexchange | llm_judgeable_groundtruth_similarity | 14306677 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to developing in asp.net for making web sites. What is the difference between an asp:Button and an input button?

Code 1

aspx code:

```aspx
<asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Button" />
```

Codebehind:

```csharp
protected void Button1_Click(object sender, EventArgs e)
{
    //Do something
}
```

Code 2

aspx code:

```html
<input id="Submit1" type="submit" value="submit" onclick="return Submit1_onclick()" />

<script language="javascript" type="text/javascript">
// <![CDATA[
function Submit1_onclick() {
    //Do something
}
// ]]>
</script>
```
Now provide the response and nothing else.
| The first snippet is server-side code. When you add an asp:button tag in your webform, ASP.NET will render an input type='button' for you, and when you click this button, it will submit a post to the same page (this is called postback) and will process the life cycle and the ASP.NET event associated with this button. The same is valid for every server control, such as TextBoxes (render input type='text'), Panels (render divs), Checkboxes (render input type='checkbox'), Labels (render spans) etc... In the second code, you have the markup that the first one will render (an input type='button'), but associated with an event in client-side javascript. You can also associate a client-side event with an asp:button tag; take a look at the OnClientClick property. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14306677', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1719369/']} | jdg_85755 |
stackexchange | llm_judgeable_groundtruth_similarity | 208011 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
How do I calculate, if possible, the integral $\int_{0}^{1}x^{k}\,\psi(x)\,dx$ in terms of well-known constants, where $k\geq 3$ is an integer? Note: $\psi(x)$ is the digamma function. Any help would be greatly appreciated.
Now provide the response and nothing else.
| This integral was considered by Donal F. Connor in 2010 (you can find the link here , pg. 94). As far as I know, he found a closed form for the odd case, but I believe the even case is somewhere in that document (don't quote me on that yet). To solve it, as Feldmann Denis notes, use integration by parts. According to Dr. Connor, we have $$ \int_0^1 x^{2n+1}\psi{(x)} \;dx = \sum_{k=0}^{2n} \left( \begin{array}{c} 2n+1 \\ k \end{array} \right) \big(H_k\zeta{(-k)} + (-1)^{k+1}\zeta'{(-k)}\big)$$ where $H_k$ is the $k$-th harmonic number, and $\zeta$ is the Riemann zeta function. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/208011', 'https://mathoverflow.net', 'https://mathoverflow.net/users/51143/']} | jdg_85756 |
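To spell out the integration by parts mentioned above: using $\psi(x)=\frac{d}{dx}\log\Gamma(x)$, and assuming $k\ge 1$ so that the boundary term at $0$ vanishes (since $\log\Gamma(x)\sim-\log x$ as $x\to 0^+$, hence $x^k\log\Gamma(x)\to 0$),
$$\int_0^1 x^k\,\psi(x)\,dx \;=\; \Big[x^k\log\Gamma(x)\Big]_0^1 \;-\; k\int_0^1 x^{k-1}\log\Gamma(x)\,dx \;=\; -\,k\int_0^1 x^{k-1}\log\Gamma(x)\,dx,$$
because $\log\Gamma(1)=0$. This reduces the problem to the moments of $\log\Gamma$ on $[0,1]$.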
stackexchange | llm_judgeable_groundtruth_similarity | 9504664 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently I discovered that WPF supports different measurement units in XAML. Besides default DIPs, there is also support for pixels, inches and centimeters (as far as I know). This allows a designer to write XAML such as this:

```xml
<Canvas>
    <Line X1="0cm" X2="3cm" Y1="1cm" Y2="3cm" Stroke="Black"/>
</Canvas>
```

However, you cannot bind these values. Imagine we have a ViewModel with a Dimension property which is a String, for example "7cm". The following won't work:

```xml
<Button Width="{Binding Dimension}">Test</Button>
```

A FormatException gets thrown. Similarly, when creating a FrameworkElement in code-behind, like this:

```csharp
Canvas1.Children.Add(new Button() { Width = "3cm", Content = "Test" });
```

compilation fails because an exception is thrown in the constructor/wherever you try to create the control. I wonder: Is it possible to use natural units (in my case metric - centimeters) in code-behind? How? Is there a complete list of units WPF/XAML supports?

EDIT: Here is a link from a comment below, which resolves this question: http://msdn.microsoft.com/en-us/library/system.windows.frameworkelement.width.aspx
Now provide the response and nothing else.
| You can do this in code behind by applying the same conversion the XAML-parser uses, you do not need to reimplement this logic if you don't mind passing around strings: button.Width = (double)new LengthConverter().ConvertFrom("2cm"); The XAML-parser uses TypeConverters to convert strings to the needed values, in the documentation page of the Width property you can see that it has a TypeConverterAttribute specifying that a LengthConverter should be used. This attribute can be used to locally override how a property should be handled, but it can also be applied at class level, so if an Employee class has this attribute specifying that an EmployeeConverter should be used said converter will be the default for properties of type Employee . I am a bit surprised the binding does not apply this type converter, but using it inside an IValueConverter would be simple enough, in fact you could create a markup extension which constructs an IValueConverter from a type converter to make it nicely generic. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9504664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/975251/']} | jdg_85757 |
stackexchange | llm_judgeable_groundtruth_similarity | 4382246 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I ran into something interesting about the Python augmented assignment +=. It seems that automatic data type conversion is not always done for a += b if a is a 'simpler' data type, while a = a + b seems to always work.

Cases where the conversion is done:

```python
a = 1
b = 1j

a = 1
b = 0.5
```

Case where the conversion is not done:

```python
from numpy import array
a = array([0, 0, 0])
b = array([0, 0, 1j])
```

After a += b, a remains an integer matrix, instead of a complex matrix. I used to think a += b is the same as a = a + b; what is the difference between them in the underlying implementation?
Now provide the response and nothing else.
| For the + operator, Python defines three "special" methods that an object may implement: __add__ : adds two items ( + operator). When you do a + b , the __add__ method of a is called with b as an argument. __radd__ : reflected add; for a + b , the __radd__ method of b is called with a as an instance. This is only used when a doesn't know how to do the add and the two objects are different types. __iadd__ : in-place add; used for a += b where the result is assigned back to the left variable. This is provided separately because it might be possible to implement it in a more efficient way. For example, if a is a list, then a += b is the same as a.extend(b) . However, in the case of c = a + b you have to make a copy of a before you extend it since a is not to be modified in this case. Note that if you don't implement __iadd__ then Python will just call __add__ instead. So since these different operations are implemented with separate methods, it is possible (but generally bad practice) to implement them so they do totally different things, or perhaps in this case, only slightly different things. Others have deduced that you're using NumPy and explained its behavior. However, you asked about the underlying implementation. Hopefully you now see why it is sometimes the case that a += b is not the same as a = a + b . By the way, a similar trio of methods may also be implemented for other operations. See this page for a list of all the supported in-place methods. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4382246', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/534298/']} | jdg_85758 |
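The list case makes the __iadd__ / __add__ distinction visible; a short self-contained demonstration:

```python
a = [1, 2]
alias = a
a += [3]          # __iadd__: extends the existing list in place
print(alias)      # [1, 2, 3] -- the alias sees the change

b = [1, 2]
alias_b = b
b = b + [3]       # __add__: builds a brand-new list and rebinds b
print(alias_b)    # [1, 2]    -- the original object is untouched
```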
stackexchange | llm_judgeable_groundtruth_similarity | 377352 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
From the principle of least (or stationary) action, we get that a classical system will evolve according to the Euler-Lagrange equations: $$\frac{d}{dt}\bigg (\frac{\partial L}{\partial \dot{q_i}}\bigg) = \frac{\partial L}{\partial q_i} .$$ I have often read and heard from physicists that this differential equation encapsulates all of classical mechanics. A glorious reformulation of Newton's laws that is more general, compact and much more efficient. I get that if you plug in the value of the Lagrangian, you re-obtain Newton's second law. But Newtonian mechanics is based on 3 laws, is it not? The law of inertia is a special consequence of the second law, so we don't need that, but what about the third law, namely that forces act in pairs; action equals minus reaction? My question is, can we obtain Newton's third law from this form of the Euler-Lagrange equation? I understand that Newton's third law for an isolated $2$-body system follows from total momentum conservation, but what about a system with $N\geq 3$ particles? If not, why do people say that it's all of classical mechanics in a nutshell?
Now provide the response and nothing else.
| Newton's third law is that for every action, there is an equal and opposite reaction. This is a statement of momentum conservation. In the Euler-Lagrange equation, the last term$$ \frac{\partial\mathcal{L}}{\partial q_i} $$is a generalized force. Similarly, the generalized momentum is $$ \frac{\partial\mathcal{L}}{\partial \dot q_i}. $$If the generalized force is zero, then$$ \frac{d}{dt} \frac{\partial\mathcal{L}}{\partial \dot q_i} = 0 $$Mathematically, this means that the generalized momentum is constant over time, i.e. it is conserved, which is Newton's third law. We don't even need the Lagrangian to summarize all of Newton's laws. As you likely know, Newton's second law $F=ma$ is a special case of $F = \frac{dp}{dt}$. This generally accounts for all of Newton's laws: Newton's 1st Law - an object will persist in a state of uniform motion unless compelled by an external force: If $F = 0$, then $\frac{dp}{dt} = 0$, and thus $p$ is constant. Newton's 2nd Law - $F = ma$: if m is constant, then $$ F = \frac{dp}{dt} = \frac{d(mv)}{dt} = m\frac{dv}{dt} = ma.$$ Newton's 3rd law - see the below derivation. So, generally speaking, Newton's three laws are slightly redundant in the sense that they can all be described by $$ \vec F = \frac{d\vec p}{dt}. $$ (or by the Euler-Lagrange equation, as you argue.) Edit: Derivation of N3L with conservation of momentum Consider a system with total momentum $\vec p_{\rm tot}$ and two particles with momenta $\vec p_1$ and $\vec p_2$ such that $\vec p_{\rm tot} = \vec p_1 + \vec p_2$. If the system is closed, then the total momentum is conserved, so$$ \frac{d\vec p_{\rm tot}}{dt} = 0.$$Differentiating both sides, you get$$ \frac{d}{dt}(\vec p_{\rm tot}) = \frac{d}{dt}(\vec p_1 + \vec p_2)$$$$ 0 = \frac{d\vec p_1}{dt} + \frac{d\vec p_2}{dt} = \vec{F_1} + \vec{F_2}$$$$ \vec F_1 = -\vec F_2 $$ | {} | {'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/377352', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/94020/']} | jdg_85759 |
stackexchange | llm_judgeable_groundtruth_similarity | 4399371 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
According to my textbook, the formula for the distance between 2 parallel lines has been given as below: Where PT is a vector from the first line that makes a perpendicular on the second line, vector B is a vector to which both the lines are parallel to and vector (a2 - a1) is a vector that joins one arbitrary point on the second line, to yet another arbitrary point on the other This is what I am confused by. The book, along with the numerous threads I've scoured through already provide similar diagrams for the proof: From what I understand, the crossing of ST with B should yield us a vector pointing OUT of the plane to which the lines (and in conjunction, ST) belong How would that yield us TP/PT? TP/PT belongs to the same plane to which the lines and ST belong as well, so how'd crossing ST and B yield us PT? I understand the end goal is to calculate the MAGNITUDE of the shortest vector joining both the lines, but I can't seem to understand how d is the magnitude of PT as opposed to being the magnitude of the vector jutting OUT of the plane
Now provide the response and nothing else.
| The formula uses only the magnitude of the cross product. And the magnitude of $u \times v$ is $\|u\|$ times $\|v\|$ times the sine of the angle between $u$ and $v$. So, using the cross product is just a fancy way of getting the sine of an angle. And, if you do a bit of trig, you'll see that the sine of an angle is exactly what you need to calculate the desired distance. The direction of the cross product is irrelevant in all of this, so you don't need to worry about the fact that it's perpendicular to the plane in which all the action occurs. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4399371', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/904030/']} | jdg_85760
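A quick numerical check of this point (my own sketch, with made-up vectors): the magnitude of the cross product divided by the magnitude of the direction vector gives the perpendicular distance, regardless of which way the cross product itself points.

    import numpy as np

    b  = np.array([1.0, 2.0, 2.0])   # direction vector shared by both lines
    a1 = np.array([0.0, 0.0, 0.0])   # a point on the first line
    a2 = np.array([3.0, 0.0, 0.0])   # a point on the second line

    # |b x (a2 - a1)| / |b| = |a2 - a1| * sin(angle), the perpendicular distance
    d = np.linalg.norm(np.cross(b, a2 - a1)) / np.linalg.norm(b)
    print(d)  # 2.828... = 2*sqrt(2), matching a direct projection calculation

The out-of-plane direction of the cross product never enters; only its length does.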
stackexchange | llm_judgeable_groundtruth_similarity | 28378460 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having trouble creating a package in Laravel 5, as the workbench has been removed. As in this thread ( How create package in Laravel 5? ), Goldorak suggests that we have to create our own package structure ourselves. So, how can I create the workbench manually and get everything ready for package development?
Now provide the response and nothing else.
| Using the laravel Workbench package: You can add the illuminate/workbench package in a Laravel 5 by adding to your composer.json:

    "illuminate/workbench": "dev-master"

then add the WorkbenchServiceProvider into your config/app.php file:

    'Illuminate\Workbench\WorkbenchServiceProvider'

Now you need to create the config/workbench.php file since it has been removed from Laravel 5:

    <?php

    return [
        /*
        |--------------------------------------------------------------------------
        | Workbench Author Name
        |--------------------------------------------------------------------------
        |
        | When you create new packages via the Artisan "workbench" command your
        | name is needed to generate the composer.json file for your package.
        | You may specify it now so it is used for all of your workbenches.
        |
        */
        'name' => '',

        /*
        |--------------------------------------------------------------------------
        | Workbench Author E-Mail Address
        |--------------------------------------------------------------------------
        |
        | Like the option above, your e-mail address is used when generating new
        | workbench packages. The e-mail is placed in your composer.json file
        | automatically after the package is created by the workbench tool.
        |
        */
        'email' => '',
    ];

Fill in your information in this config file, then you will be able to use the workbench command:

    php artisan workbench vendor/name

Creating your own package structure: In this example we will create our package called awesome in a packages directory. Here is the package structure:

    packages/
        vendor/
            awesome/
                src/
                    Awesome.php
                composer.json

Vendor: your vendor name; typically this is your GitHub username. Awesome: the name of your package. src: where you put the business logic. To generate a composer.json file you can use this command in the packages/vendor/awesome directory:

    composer init

Now we create an Awesome.php class in the src directory with a simple method (note the namespace separator is a backslash):

    <?php namespace Vendor\Awesome;

    class Awesome
    {
        public static function printAwesomeness()
        {
            echo 'Awesome';
        }
    }

After that we add the package to the laravel composer.json psr-4 autoloader:

    "autoload": {
        "psr-4": {
            "App\\": "app/",
            "Vendor\\Awesome\\": "packages/vendor/awesome/src"
        }
    },

and we dump the composer autoloader:

    composer dump-autoload

Now you can use your package everywhere in your laravel 5 project. If you need some laravel specific feature like service provider or view publishing, use them as described in the Laravel 5.0 documentation . | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28378460', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1995781/']} | jdg_85761
stackexchange | llm_judgeable_groundtruth_similarity | 11074665 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to know how to calculate the cumulative average of some numbers. I will give a simple example to describe what I am looking for. I have the following numbers:

    vec <- c(1, 2, 3, 4, 5)

If I take the average of these numbers, I get 3 as a result. Now, how do I compute the cumulative average of these numbers?
Now provide the response and nothing else.
| In analogy to the cumulative sum of a list, I propose this: the cumulative average avg of a vector x would contain the averages from the 1st position up to position i. One method is just to compute the mean for each position by summing over all previous values and dividing by their number. Alternatively, by rewriting the definition of the arithmetic mean as a recursive formula, one gets

    avg(1) = x(1)
    avg(i) = (i-1)/i * avg(i-1) + x(i)/i;   (i > 1)

Evaluating this expression for every element of your vector (or list, one-dimensional array, or however you call it) gives you the cumulative average. This recursive method comes in handy if you have to calculate an average over very large or very many integers and would run into an overflow if you had to store their cumulative sum. Example: for your input 1, 2, 3, 4, 5 we get 1, 1.5, 2, 2.5, 3 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11074665', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1260712/']} | jdg_85762
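Since the question uses R syntax, the direct vectorized form there is cumsum(vec)/seq_along(vec). The same idea in Python/NumPy, as a small illustrative sketch (my addition):

    import numpy as np

    vec = np.array([1, 2, 3, 4, 5])
    # cumulative sum divided elementwise by the running count 1..n
    cum_avg = np.cumsum(vec) / np.arange(1, len(vec) + 1)
    print(cum_avg)  # [1.  1.5 2.  2.5 3. ]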
stackexchange | llm_judgeable_groundtruth_similarity | 19463 |
Below is a question asked on the forum scicomp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to implement nonzero Displacement Boundary Conditions in VegaFEM on a non-linear model, using the method outlined in §3.6.2 of University of Colorado's intro to FEM (modify $f = Ku$: set desired displacements in $u$ and eliminate rows and columns in $K$ by calculating forces and applying them to $f$). However, it's not working as expected. I'm not sure exactly where in the solver source code I need to make the modifications to $f$, $K$ and $u$, but no matter where I do it, the result is that either the model explodes or, if I make the displacements tiny, it seems to be having some influence in the correct direction (like an external force) but massively exaggerated, and not anything resembling a 'boundary condition'. I wonder if I'm being thrown off because I'm assuming that the "tangent stiffness matrix" in the VegaFEM integrator is just another name for the "stiffness matrix" in the Introduction to FEM notes. Is this assumption correct? If so: VegaFEM computes internal forces and then adds external forces before passing the forces on to the solver. Does the $f$ in $f = Ku$ refer to the external forces or the internal forces? Edit: clarified linear vs nonlinear, and what 'not working as expected' means. Edit: The goal is effectively that I want to attach a highly elastic object to one or more virtual sticks which I can then drag around the simulation environment with the mouse in order to stretch the object. I'm currently simulating the boundary conditions (attachment to sticks) using PID controllers to apply external forces to the bound nodes, but this introduces another kind of elasticity that I'd rather avoid. Is there some other approach I should be considering?
Now provide the response and nothing else.
| $Ku$ equals the internal forces only in the linear case. The tangent stiffness matrix, $K$, in a nonlinear problem is normally used in a Newton-Raphson algorithm to calculate updates to the displacement vector as follows: $$K \Delta u = f - f_{internal}$$ $$u_{i+1} = u_i + \Delta u$$ The vector of internal forces, $f_{internal}$, must be calculated from the nonlinear element equations. The iteration continues until $u$ converges. In a nonlinear problem, both the tangent $K$ and the $f_{internal}$ vector are functions of the displacements. One consequence of this is that to prescribe displacements at selected nodes, the displacement vector passed into the functions that calculate these must contain the prescribed values. The nonlinear solution algorithm typically begins with a solution vector of all zeros. Instead, selected entries can be set to the prescribed values so that the internal forces will be calculated correctly. If VegaFEM explicitly eliminates global equations for a constrained degree of freedom, it also should do that for prescribed displacements. Those might be the only changes to the code needed to prescribe non-zero instead of zero displacements. | {} | {'log_upvote_score': 4, 'links': ['https://scicomp.stackexchange.com/questions/19463', 'https://scicomp.stackexchange.com', 'https://scicomp.stackexchange.com/users/14561/']} | jdg_85763
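To make the loop concrete, here is a tiny toy sketch of the procedure (my own example, not VegaFEM code): two nonlinear springs in series, ground--u0--u1, with u1 prescribed (think of it as the node glued to the dragged "stick"). The prescribed value is written into u before f_internal and K are evaluated, and only the free equation is solved.

    import numpy as np

    k, c = 10.0, 4.0
    g  = lambda d: k*d + c*d**3     # nonlinear spring force at elongation d
    dg = lambda d: k + 3*c*d**2     # its tangent stiffness

    u = np.array([0.0, 0.5])        # u[1] = 0.5 is the prescribed displacement
    for _ in range(20):
        f_int = g(u[0]) - g(u[1] - u[0])    # internal force at the free node
        K_ff  = dg(u[0]) + dg(u[1] - u[0])  # tangent restricted to free DOFs
        du = -f_int / K_ff                  # K*du = f - f_int, with f_ext = 0
        u[0] += du
        if abs(du) < 1e-12:
            break

    print(u)  # the free node equilibrates at u[0] = 0.25, half the prescribed value

No PID controller and no artificial elasticity: the constraint is satisfied exactly, and Newton-Raphson only iterates on the free degrees of freedom.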
stackexchange | llm_judgeable_groundtruth_similarity | 31032363 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am confused by the Static implementation in VB.NET. In C#, we can create static classes and static methods to write utility methods for our application. Now, VB.NET lets us create a Module in place of a static class. If we create a method in the module, by default it becomes static. But in my application, I have written the below code:

    Public Class Utility
        Public Shared Function GetValue() As String
            ' My code
        End Function
    End Class

By writing this code, I am able to access the utility method as Utility.GetValue(). As this is not a static class, I am supposed to instantiate an object of it, but this method is available on both the class and objects of Utility. Now my questions are: Does the implementation I have done violate any feature of a static class that a module provides? What will be the difference between this and implementing a module instead? If I create a module instead, will its scope be the same as this class's? I want to access the method throughout the project as well as other projects where this one is referenced. I tried consulting multiple articles, but found no exact answer anywhere. Please help.
Now provide the response and nothing else.
| A VB.NET module is a static class. The compiler handles this for you. Every method and property on it is static ( Shared ). A class with a static (Shared) member on it is exactly that: a class with a static (Shared) member. You don't have to create an instance of it to access the static (Shared) method, but you do to get to any of its instance members. You can also define a Sub New() in a module, and it becomes the static constructor for the module. The first time you try to invoke a member on the module, the static constructor will be invoked to initialize the static class. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31032363', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2782901/']} | jdg_85764 |
stackexchange | llm_judgeable_groundtruth_similarity | 37615470 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Design Support Library version 23.4.0. I have enabled the gradle flag:

    defaultConfig {
        vectorDrawables.useSupportLibrary = true
    }

I am using build tools version 23.0.2, but still, I am getting Resources$NotFoundException on KitKat or lower. It is occurring when I use android:drawableLeft or imageView.setImageResource(R.drawable.drawable_image). And yes, I am putting this on every activity where I am using drawables:

    static {
        AppCompatDelegate.setCompatVectorFromResourcesEnabled(true);
    }

Is this a bug of the support library?
Now provide the response and nothing else.
| It took 3 separate things for me to get this to work using support library 23.4.0: 1. Add this to build.gradle:

    defaultConfig {
        vectorDrawables.useSupportLibrary = true
    }

2. Add the following to onCreate of your Application class:

    AppCompatDelegate.setCompatVectorFromResourcesEnabled(true);

(From the reference of this link - " https://stackoverflow.com/a/45582033/10752962 ") In API lower than 21, use this line before setContentView(); 3. For all XML views in which you are setting a vector drawable, replace android:src with app:srcCompat, and in the code replace this:

    imageView.setImageResource(...);

with

    imageView.setImageDrawable(...);

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37615470', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/940551/']} | jdg_85765
stackexchange | llm_judgeable_groundtruth_similarity | 189916 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to compute the following multiple summation.

    m = 1; n = 1;
    output = Sum[Sum[Sum[Sum[Sum[
        (m-l)!/(k-l+l1)! c^(k+l1-l) a^(m-k-l1) b^l1 (l)!/(l-k+x1)! f^(l-k+x1) d^(k-x1) 1/(i-x1)! g^(i-x1) e^x1,
        {i,0,n-l}], {x1,0,k}], {l1,0,m-k}], {l,0,m}], {k,0,m}]
    (* a + b c + d + c e + b f + a g + b c g *)

Now, declaring the values of $b,c,e,f,g$,

    b = 0; c = 0; e = 0; f = 0; g = 0;

the output becomes $a+d$. However, I want to evaluate the summation after declaring the values of $b,c,e,f,g$, which results in errors because quantities like $0^{-1}$ appear in the summand. I want to see if Mathematica can simplify the summation in the form of known special functions under the condition $b=c=e=f=g=0$ for general $m, n$.
Now provide the response and nothing else.
| Update: Counts[data] seems to be equivalent to AssociationThread @@ Transpose@Tally[data] . The OP's problem arises because in constructing an Association , keys are checked for uniqueness and later entries with a duplicate key replace earlier entries. (Simple example: Association[{1. -> 1, 1. -> 2}] .) Uniqueness is determined by MatchQ , I believe, which has problems discussed below in the original answer. The problem with SameQ not strictly being an equivalence relation due to nontransitivity is still an issue. This update principally clarifies the role of forming an association: it discards entries of the Tally with duplicate keys, which results in an undercount. Original answer: Working with floating-point numbers is tricky. I'd say the most important, common issue is that rounding errors lead to different but close numbers that users wish would be treated the same. Introductory programming courses teach that comparing floats should be done with something like Abs[x - y] < $MyTolerance . In Mathematica similar (but relative) tolerances are built into SameQ and Equal , which are controlled by the internal system parameters Internal`$SameQTolerance and Internal`$EqualTolerance respectively (see also this ; this question has similar issues to the OP's). Perhaps less well known is that MatchQ has a small tolerance like SameQ but is slightly more restrictive. The most important difference is that MatchQ is transitive but SameQ is not. These functions play various roles in pattern-matching and comparing numbers, and their issues affect functions like Counts[] when applied to floating-point data. When constructing classes from data, as in Counts[] , some reflection should lead one to think that using a comparison that is an equivalence relation, and therefore transitive, would be desirable. And if not transitive, it should be at least "locally transitive" on the actual data being used. By "locally transitive," I mean that the relation is transitive when restricted to the data set, even if it is not transitive on all floating-point numbers. This seems to be the problem with the OP's example: SameQ is not transitive on xs . I say "seems" because I cannot check the internal workings of Counts[] . It's possible that SameQ is used to construct the keys and MatchQ to tally the counts. It seems to me that Counts[] uses SameQ and CountsBy[] uses MatchQ to construct the keys, however the counting is done. Since SameQ is not transitive, the order in which the data is processed can make a difference. Nevertheless, the problem can be fixed by making SameQ locally transitive on xs . The reason this comes up in this example is that in constructing the data xs , the rounding drift amounts to 2 bits (2 ulp), which is one too many for SameQ to be locally transitive. Here is the best fix (that fixes SameQ -- Chip Hurst points out that the tolerance can be as small as 0.55, which is close to the value Log10[4.] = 0.602... that would be predicted by the observed rounding error):

    Block[{Internal`$SameQTolerance = Internal`$EqualTolerance},
      Total@Counts[xs]]
    (* 100 *)

Another fix is to use MatchQ via CountsBy[] :

    Total@CountsBy[xs, # &]
    (* 100 *)

The CountsBy[] association has several keys for equal numbers, but it does have the correct total. The first solution seems better because it has one key for each cluster of equal numbers. (It is likely that in some applications one would like distinct floating-point numbers to have distinct entries; then something like CountsBy[xs, ToString@*FullForm] would do the trick.)
Appendix Some pictures showing the properties of SameQ and MatchQ on consecutive machine-precision floats: Block[{x1}, x1 = Table[1 + n*$MachineEpsilon, {n, 0, 5}]; {Outer[Boole@*SameQ, x1, x1] // MatrixPlot[#, PlotLabel -> SameQ] &, Outer[Boole@*MatchQ, x1, x1] // MatrixPlot[#, PlotLabel -> MatchQ] &} // GraphicsRow ] | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/189916', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/53596/']} | jdg_85766 |
stackexchange | llm_judgeable_groundtruth_similarity | 11469058 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: Convert String to another locale in java. I want to convert a Java String that contains English numbers to Arabic ones, so I did this:

    int arabic_zero_unicode = 1632;
    String str = "13240453";
    StringBuilder builder = new StringBuilder();
    for (int i = 0; i < str.length(); ++i) {
        builder.append((char) ((int) str.charAt(i) - 48 + arabic_zero_unicode));
    }
    System.out.println("Number in English : " + str);
    System.out.println("Number In Arabic : " + builder.toString());

The output:

    Number in English : 13240453
    Number In Arabic : ١٣٢٤٠٤٥٣

Is there another, more efficient way to do this?
Now provide the response and nothing else.
| This gives a 5x speedup over your version with a string of length 3036. This also checks to make sure you're only changing digits. It's about a 6x speedup without the if/else check. Please pardon me if the characters are incorrect/misplaced. I had to find some of them from another source.

    char[] arabicChars = {'٠','١','٢','٣','٤','٥','٦','٧','٨','٩'};
    StringBuilder builder = new StringBuilder();
    for (int i = 0; i < str.length(); i++) {
        if (Character.isDigit(str.charAt(i))) {
            builder.append(arabicChars[(int) (str.charAt(i)) - 48]);
        } else {
            builder.append(str.charAt(i));
        }
    }
    System.out.println("Number in English : " + str);
    System.out.println("Number In Arabic : " + builder.toString());

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11469058', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/671676/']} | jdg_85767
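As a side-by-side sketch in another language (my addition, not part of the answer): Python expresses the same digit-table lookup with str.translate, which likewise leaves non-digit characters untouched.

    # Build a translation table mapping ASCII digits to Arabic-Indic digits
    ARABIC_DIGITS = str.maketrans("0123456789", "٠١٢٣٤٥٦٧٨٩")
    print("13240453".translate(ARABIC_DIGITS))  # ١٣٢٤٠٤٥٣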
stackexchange | llm_judgeable_groundtruth_similarity | 72677648 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to infinitely iterate through the list from 0 to the end, but in the next loop I want to start at 1 to the end plus 0, and the next loop would start at 2 to the end plus 0, 1, up to the last item, where it would start again at 0 and go to the end. Here is my code:

    a = [0, 1, 2]
    offset = 0
    rotate = 0
    while True:
        print(a[rotate])
        offset += 1
        rotate += 1
        if offset >= len(a):
            offset = 0
            rotate += 1
        if rotate >= len(a):
            rotate = 0

This is the solution I came up with so far. It's far from perfect. The result that I want is:

    0, 1, 2  # first iteration
    1, 2, 0  # second iteration
    2, 0, 1  # third iteration
    0, 1, 2  # fourth iteration

and so on.
Now provide the response and nothing else.
| You can use a deque, which has a built-in and efficient rotate function (~O(1)):

    >>> from collections import deque
    >>> d = deque([0, 1, 2])
    >>> for _ in range(10):
    ...     print(*d)
    ...     d.rotate(-1)  # negative -> rotate to the left
    ...
    0 1 2
    1 2 0
    2 0 1
    0 1 2
    1 2 0
    2 0 1
    0 1 2
    1 2 0
    2 0 1
    0 1 2

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/72677648', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/19225152/']} | jdg_85768
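An alternative sketch (my addition) that avoids mutating the list: slice a cycled view of it with itertools, advancing the start index by one per pass.

    from itertools import cycle, islice

    a = [0, 1, 2]
    for start in range(4):  # use itertools.count() instead for an endless loop
        print(*islice(cycle(a), start, start + len(a)))
    # 0 1 2
    # 1 2 0
    # 2 0 1
    # 0 1 2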
stackexchange | llm_judgeable_groundtruth_similarity | 19044725 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to prevent a record from being destroyed if there are children.

    class Submission < ActiveRecord::Base
      has_many :quotations, :dependent => :destroy
      before_destroy :check_for_payments

      def quoted?
        quotations.any?
      end

      def has_payments?
        true if quotations.detect { |q| q.payment }
      end

      private

      def check_for_payments
        if quoted? && has_payments?
          errors[:base] << "cannot delete submission that has already been paid"
          false
        end
      end
    end

    class Quotation < ActiveRecord::Base
      # associations
      belongs_to :submission
      has_one :payment_notification
      has_one :payment

      before_destroy :check_for_payments

      private

      def check_for_payments
        if payment_notification || payment
          errors[:base] << "cannot delete quotation while payment exist"
          return false
        end
      end
    end

When I test this code, the before_destroy :check_for_payments prevents the Quotation record from being deleted. However, the :check_for_payments in the Submission before_destroy callback does not stop the Submission from being deleted. How can I stop a Submission with payments from being destroyed?
Now provide the response and nothing else.
| I would try the code below, where I have:

- used a has_many :through association for payments
- avoided unnecessary record retrieval of quotations and payments by using any? without a block, which results in using the association counter cache if defined, or the size of the association if already loaded, and failing that an SQL COUNT if needed
- avoided enumeration of quotations
- avoided testing for truthiness/presence of the q.payment association proxy directly, which does not work for has_xxx (if you want to test for presence, use q.payment.present?)

Try the following and see how you go:

    class Submission < ActiveRecord::Base
      has_many :quotations, inverse_of: :submission, dependent: :destroy
      has_many :payments, through: :quotations

      before_destroy :check_for_payments, prepend: true

      private

      def check_for_payments
        if payments.any?
          errors[:base] << "cannot delete submission that has already been paid"
          return false
        end
      end
    end

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19044725', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/623266/']} | jdg_85769
stackexchange | llm_judgeable_groundtruth_similarity | 15041937 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am compiling several libraries on Ubuntu 12.04 x86_64. First I compiled the libraries with GCC 4.7.2 and it all went well. Then I tried to recompile them with Intel Composer 2013 u2. For that purpose I did:

    export CC=/opt/intel/composer_xe_2013.2.146/bin/intel64/icc
    export CPP=/opt/intel/composer_xe_2013.2.146/bin/intel64/icpc

Then I ran ./configure and got the following error:

    checking how to run the C preprocessor... /opt/intel/composer_xe_2013.2.146/bin/intel64/icpc
    configure: error: in `/var/www/workspace/freetype/freetype-2.4.11/builds/unix':
    configure: error: C preprocessor "/opt/intel/composer_xe_2013.2.146/bin/intel64/icpc" fails sanity check
    See `config.log' for more details
    make: *** [setup] Error 1

The config log file contains this error:

    configure:3345: checking how to run the C preprocessor
    configure:3415: result: /opt/intel/composer_xe_2013.2.146/bin/intel64/icpc
    configure:3435: /opt/intel/composer_xe_2013.2.146/bin/intel64/icpc conftest.c
    conftest.c(14): error: identifier "Syntax" is undefined
      Syntax error
      ^
    conftest.c(14): error: expected a ";"
    compilation aborted for conftest.c (code 2)
    configure:3435: $? = 2
    configure: failed program was:
    | /* confdefs.h */
    | #define PACKAGE_NAME "FreeType"
    | #define PACKAGE_TARNAME "freetype"
    | #define PACKAGE_VERSION "2.4.11"
    | #define PACKAGE_STRING "FreeType 2.4.11"
    | #define PACKAGE_BUGREPORT "[email protected]"
    | #define PACKAGE_URL ""
    | /* end confdefs.h. */
    | #ifdef __STDC__
    | # include <limits.h>
    | #else
    | # include <assert.h>
    | #endif
    | Syntax error
    configure:3435: /opt/intel/composer_xe_2013.2.146/bin/intel64/icpc conftest.c
    conftest.c(14): error: identifier "Syntax" is undefined
      Syntax error
      ^
    conftest.c(14): error: expected a ";"
    compilation aborted for conftest.c (code 2)

What can be wrong here?
Now provide the response and nothing else.
| The problem may well be that the GNU make implicit variable thatdenotes "your C++ compiler" is not CPP but CXX , whereas CPP isthe implicit variable that denotes "your C preprocessor"; soyour export CPP=/opt/intel/composer_xe_2013.2.146/bin/intel64/icpc tells configure that icpc is the preprocessor and leaves CXX presumablydefaulting to g++. This is supported by the ./configure error: checking how to run the C preprocessor... /opt/intel/composer_xe_2013.2.146/bin/intel64/icpc Try: export CXX=/opt/intel/composer_xe_2013.2.146/bin/intel64/icpc or just: ./configure CXX=/opt/intel/composer_xe_2013.2.146/bin/intel64/icpc | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15041937', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1167108/']} | jdg_85770 |
stackexchange | llm_judgeable_groundtruth_similarity | 32950503 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a data flow use case where I want to have topics defined based on each of the customer repositories (which might be on the order of 100,000s). Each data flow would be a topic with partitions (on the order of a few 10s) defining the different stages of the flow. Is Kafka good for a scenario like this? If not, how would I remodel my use case to handle such scenarios? Also, each customer repository's data cannot be mingled with others', even during processing.
Now provide the response and nothing else.
| Update March 2021: With Kafka's new KRaft mode, which entirely removes ZooKeeper from Kafka's architecture, a Kafka cluster can handle millions of topics/partitions. See https://www.confluent.io/blog/kafka-without-zookeeper-a-sneak-peek/ for details. * short for "Kafka Raft Metadata mode"; in Early Access as of Kafka v2.8 Update September 2018: As of Kafka v2.0, a Kafka cluster can have hundreds of thousands of topics. See https://blogs.apache.org/kafka/entry/apache-kafka-supports-more-partitions . Initial answer below for posterity: The rule of thumb is that the number of Kafka topics can be in the thousands. Jun Rao (Kafka committer; now at Confluent but he was formerly in LinkedIn's Kafka team) wrote: At LinkedIn, our largest cluster has more than 2K topics. 5K topics should be fine. [...] With more topics, you may hit one of those limits: (1) # dirs allowed in a FS; (2) open file handlers (we keep all log segments open in the broker); (3) ZK nodes. The Kafka FAQ gives the following abstract guideline: Kafka FAQ: How many topics can I have? Unlike many messaging systems Kafka topics are meant to scale up arbitrarily. Hence we encourage fewer large topics rather than many small topics. So for example if we were storing notifications for users we would encourage a design with a single notifications topic partitioned by user id rather than a separate topic per user. The actual scalability is for the most part determined by the number of total partitions across all topics, not the number of topics itself (see the question below for details). The article http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/ (written by the aforementioned Jun Rao) adds further details, and particularly focuses on the impact of the number of partitions. IMHO your use case / model is a bit of a stretch for a single Kafka cluster, though not necessarily for Kafka in general. With the little information you shared (I understand that a public forum is not the best place for sensitive discussions :-P) the only off-the-hip comment I can provide you with is to consider using more than one Kafka cluster, because you mentioned that customer data must be very much isolated anyways (including the processing steps). I hope this helps a bit! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32950503', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1274862/']} | jdg_85771
stackexchange | llm_judgeable_groundtruth_similarity | 59882179 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I love Svelte, but I'm stuck on something basic (though merely cosmetic). The following code should transition between two elements smoothly, but instead it "jumps" -- apparently making room for the incoming element before it arrives. The problem is similar to this one that Rich Harris noted a few years back, but I don't see that a solution was implemented. All examples on the Svelte tutorial site transition only a single element. Here is the basic markup/code:

    {#if div1}
      <div in:fly={{ x: 100, duration: 400, delay: 400 }}
           out:fly={{ x: 100, duration: 400 }}>Div 1</div>
    {:else}
      <div in:fly={{ x: 100, duration: 400, delay: 400 }}
           out:fly={{ x: 100, duration: 400 }}>Div 2</div>
    {/if}
    <button on:click={()=>{ div1 = !div1}}>Switch</button>

A working equivalent in Vue would be:

    <transition name="fly" mode="out-in">
      <div v-if="div1">Div 1</div>
      <div v-else>Div 2</div>
    </transition>

Here is a Code Sandbox example . You can see the button jump down to make room for the new element. I added an "in" transition delay equal to the duration of 400 (I know that's default, but I wanted to set it explicitly for clarity's sake). The delay should allow the first element to transition out before transitioning the next one in, as noted in the first link (what Harris called "hacky use of delay") and suggested here . I also tried explicitly setting the element being outro'd to position: absolute so that it wouldn't take up space. Here is a (still not working properly) example . Seems a little inelegant, even if it were working. For some reason, the transition is overriding setting a class that sets position:absolute. Any help is greatly appreciated! UPDATE: I got the desired effect with this code . What I did here was copy and modify Svelte's fly transition to take an additional parameter -- 'position', which can be set to 'absolute' or 'relative' or whatever you want. A few tweaks to the CSS ensure there are no strange side-effects (a container set to position: relative, and setting the width of each element to 100% to ensure they don't change size). This works, but I still feel there should be a less labor-intensive method, without modifying Svelte's transitions.
Now provide the response and nothing else.
| I came over from Vue as well; the out-in is one thing I miss with Svelte. Rich Harris even acknowledged it prior to Svelte 3 but never really implemented a fix, as far as I'm aware. The problem with the single condition, delay-only, out-in transition method is that Svelte is creating the incoming element once the condition switches, despite the delay on the in transition. You can slow the transitions way down and check dev tools to see this: both elements will exist, since the incoming transition delay does not prevent the element from having a size, just visibility. One way around it is to do what you've done with absolute position; kinda intensive, and it becomes boilerplate. Another method is to set an absolute height for the container holding the elements being transitioned, pull everything else out of the container (the button in your example) and hide the overflow, as seen here ; very css dependent, and it does not always play well with certain layouts. The last way I've used is a bit more roundabout, but since Svelte has an outroend event that is dispatched when the animation is done, you can add a variable for blue or whatever your second condition is, put in an else if block for the second condition (blue here), and wire the trigger so it's checking for the active variable and switching it off, then switch on the other variable inside the outroend event, as seen here . You can also remove any delay, since the duration becomes the delay. From inspecting the DOM during transitions, it seems this is the only way that both elements don't exist at the same time, because they depend on separate conditions. I'm sure there are even more elegant ways to achieve this, but this works for me. EDIT : There is another option available that only works on browsers that support the CSS grid spec ; luckily that's nearly universal at this point. It's very similar to the absolute positioning method, with an added bonus that you don't have to worry about the height of the elements at all. The idea behind this is that with CSS Grid we can force 2 elements to occupy the same space with grid-area or grid-column and grid-row, by giving both elements (or more than 2) the same start and end columns and rows on the implicit grid of 1 col by 1 row (grid is smart enough to not create extra columns and rows we won't be using). Since Svelte uses transforms in its transitions, we can have elements coming and going without any layout shift, nice. We no longer have to worry about absolute position affecting elements or about delays; we can fine tune the transition timing to perfection. Here is a REPL to show a simple setup, and another REPL to show how this can be used to get some pretty sweet layering effects, woah! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59882179', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4206056/']} | jdg_85772
stackexchange | llm_judgeable_groundtruth_similarity | 348571 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The following notation means to sum from 1 to N: $$\sum_{n=1}^N n$$ Is there a notation for incrementing not by one at each step, but by, say, 10?
Now provide the response and nothing else.
| As long as you have a constant skip size, you can handle it by multiplying the subscript by the skip size wherever it appears inside the summation: in your example you’d get $$\sum_{n=1}^N10n\;.$$ Of course that gives you the sum of all $N$ values from $10$ through $10N$. If you wanted just the sum of those multiples of $10$ that are no bigger than $N$, you’d have to adjust the upper limit: $$\sum_{n=1}^{\lfloor N/10\rfloor}10n\;.$$ If you wanted to start with $n=1$ and increase in steps of $10$, for a total of $N$ terms, it would be $$\sum_{n=0}^{N-1}(1+10n)\;.$$ And if you wanted to start with $n=1$ and increase in steps of $10$ up through a maximum value of $N$, it would be $$\sum_{n=0}^{\left\lfloor\frac{N-1}{10}\right\rfloor}(1+10n)\;.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/348571', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9793/']} | jdg_85773 |
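These floor-bounded sums map directly onto a half-open range with a step, which makes them easy to sanity-check numerically. A small Python sketch (my addition), using N = 100:

    N = 100
    print(sum(10*n for n in range(1, N + 1)))        # first sum: 10, 20, ..., 10N
    print(sum(10*n for n in range(1, N//10 + 1)))    # multiples of 10 up to N
    print(sum(1 + 10*n for n in range(N)))           # N terms: 1, 11, 21, ...
    print(sum(range(1, N + 1, 10)))                  # 1, 11, ... capped at N

The last line is the built-in equivalent of the fourth formula: range's step argument plays the role of the skip size.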
stackexchange | llm_judgeable_groundtruth_similarity | 335718 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am learning about the USB interface. My current understanding is the following: the host computer contains an I/O interface, called the root hub. This root hub is connected to either other hubs or I/O devices. Now my question is: what exactly is the difference between hubs and I/O devices? There is a diagram in the textbook I am reading that shows the two as clearly being different from each other:
Now provide the response and nothing else.
| A USB hub is formally a USB device too. As such, it is a subject of the USB enumeration process like any other USB device on the bus. Therefore a hub gets its own dedicated USB address, and responds to various host inquiries on its own, like reporting the status of its downstream ports, and responding to the interrupt pipe when something is connected or disconnected. However, hubs are special devices in the sense that their main function is to relay/channel all USB packets to downstream ports (and back), so other devices can be connected and enumerated. If there are connected and enabled USB devices in HS (High-Speed, 480 Mbps) mode on its downstream ports, anything that comes to its upstream port will be re-transmitted (repeated) downstream, and anything that comes from downstream devices will be repeated upstream. This is a fairly simple two-way repeater functionality. However, if the connected downstream device is Low Speed or Full Speed, USB hubs have additional responsibilities when relaying the USB packets - they translate the fast upstream transactions into slow LS/FS transactions. For this purpose hubs have dedicated buffers large enough to accommodate the longest FS/LS packets, and the USB host uses a special class of communication, so-called "split transactions". The rules of split transactions are fairly complicated, and the USB specifications have nothing better than to list many examples, which take more than 100 pages to explain. The host feeds the hub TT (Transaction Translator) just enough to keep slow devices running, and this saves the upstream HS bandwidth for other devices. In this sense USB hubs are fairly sophisticated communication processors. In short, normal USB devices perform specific I/O functions, like storing data in flash memory, or adapting a WiFi modem, or bridging/converting USB I/O data into other interfaces like UART or I2C, or relaying keys pressed on a keyboard. Hubs have a universal function: to repeat/broadcast any data regardless of its origin or purpose. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/335718', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/138841/']} | jdg_85774
stackexchange | llm_judgeable_groundtruth_similarity | 1997396 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm experiencing a rather frustrating problem. My MVC site runs fine for the most part, but randomly throws an error (which shows a friendly error to the user). When I check the logs, this is what I get: System.InvalidOperationException: The model item passed into the dictionary is of type 'System.Web.Mvc.HandleErrorInfo' but this dictionary requires a model item of type 'BaseViewData'. Moments later, the same user could hit refresh and the page loads fine. I'm stuck. ;( Update: added stack trace System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.InvalidOperationException: The model item passed into the dictionary is of type 'System.Web.Mvc.HandleErrorInfo' but this dictionary requires a model item of type 'BaseViewData'. at System.Web.Mvc.ViewDataDictionary`1.SetModel(Object value) at System.Web.Mvc.ViewDataDictionary..ctor(ViewDataDictionary dictionary) at System.Web.Mvc.HtmlHelper`1..ctor(ViewContext viewContext, IViewDataContainer viewDataContainer, RouteCollection routeCollection) at System.Web.Mvc.ViewMasterPage`1.get_Html() at ASP.views_shared_site_master.__Render__control1(HtmlTextWriter __w, Control parameterContainer) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) at System.Web.UI.Control.Render(HtmlTextWriter writer) at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) at System.Web.UI.Control.RenderControl(HtmlTextWriter writer) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) at System.Web.UI.Page.Render(HtmlTextWriter writer) at System.Web.Mvc.ViewPage.Render(HtmlTextWriter writer) at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) at System.Web.UI.Control.RenderControl(HtmlTextWriter writer) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) --- End of inner exception stack trace --- at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context) at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.views_shared_error_aspx.ProcessRequest(HttpContext context) at System.Web.Mvc.ViewPage.RenderView(ViewContext viewContext) at System.Web.Mvc.WebFormView.RenderViewPage(ViewContext context, ViewPage page) at System.Web.Mvc.WebFormView.Render(ViewContext viewContext, TextWriter writer) at System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context) at System.Web.Mvc.ControllerActionInvoker.InvokeActionResult(ControllerContext controllerContext, ActionResult actionResult) at System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) at System.Web.Mvc.Controller.ExecuteCore() at System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) at 
System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContextBase httpContext) at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContext httpContext) at System.Web.Mvc.MvcHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext httpContext) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
Now provide the response and nothing else.
| Here is an issue on codeplex explaining why that error occurs. Quote from http://web.archive.org/web/20131004122626/http://aspnet.codeplex.com/workitem/1795 since the original link is dead: HandleError Attribute should not store exception information in ViewData. When the HandleError attribute handles an exception, it stores the exception information in the ViewData . This is a problem when the Error.aspx inherits from the site.master and the site.master class is declared as follows.

    public partial class Site : System.Web.Mvc.ViewMasterPage<SiteViewData>
    {
    }

SiteViewData contains:

    public class SiteViewData
    {
        public String Title { get; set; }
    }

Each page ViewData class inherits from the SiteViewData class and looks something like this:

    public class IndexViewData : SiteViewData
    {
        public String Message { get; set; }
        public String SupportedLanguages { get; set; }
    }

This approach allows one to write code in the Site.Master page as follows:

    <title><%= Html.Encode(ViewData.Model.Title) %></title>

Unfortunately, when an exception is thrown, the model has been replaced with an instance of the HandleErrorInfo class. This causes an InvalidOperationException to be thrown with the information The model item passed into the dictionary is of type System.Web.Mvc.HandleErrorInfo but this dictionary requires a model item of type Igwt.Boh.Website.Web.Controllers.SiteViewData . Is it possible for a new ErrorData property to be added to the ViewResult class to store the instance of the HandleErrorInfo class instead? This way the ViewData does not get changed. Chances are pretty good that any exception thrown in the action will occur after the IndexViewData (and SiteViewData ) properties have already been initialized. Closed Jan 27, 2010 at 12:24 AM by Won't fix - see comments. The comments mentioned with "wontfix" are from a former member of the Microsoft team, along with their suggestion for working around it (bolded): By the time the [HandleError] attribute executes, we've lost the reference to the original ActionResult object. We don't even know if you intended to show a view anyway - maybe you intended to redirect. The part of the pipeline (the ViewResult) that would have been responsible for passing the model from the controller to the view is gone. If an exception occurs, any model the application was working on should probably be treated as corrupt or unavailable anyway. The best practice would be to write your Error view such that neither it nor its dependencies (such as its master page) requires the original model. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1997396', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/61320/']} | jdg_85775
stackexchange | llm_judgeable_groundtruth_similarity | 277455 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We learned in school that photons behave partly like a wave and partly like a particle, so they can be both at the same time. We also learned that light consists of alternating electric and magnetic fields that travel through empty space, as seen in the picture below. So if you can always name the specific wavelength of a quantum object, doesn't that necessarily imply that this object must be a wave?
Now provide the response and nothing else.
| There is no conflict with the wave-particle duality. That duality just states that light has particle properties as well as wave properties. It has a wavelength, forms the usual diffraction and refraction patterns, and in most relevant macroscopic cases it is well described by a wave equation. So, sure, it's a wave. That said, it also has particle-like properties. Sometimes you read stuff like "the object sometimes behaves like a particle and sometimes like a wave", but that is just nonsense. The fact is that a classical particle and a classical wave are just not the correct description for a quantum object. In that sense, the photon is neither a wave nor a particle. It's something different (actually, the best description we have says that it is just an excitation of a quantum field). The story that light is an alternating electric and magnetic field is the picture that you get when you interpret the classical equations of electromagnetism. There is nothing wrong with that, but you have to keep in mind that classical electromagnetism is an effective theory, which means that it is only a good approximation to nature on a certain scale. It just isn't the full picture. In a relativistic picture, there is no real distinction between magnetic and electric fields and you only have one field, the electromagnetic one. That field also doesn't look like your picture (I can't tell you what it looks like, I can only draw other pictures). You can draw a lot of pretty pictures and classical analogies, but none will capture all aspects of the photon. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/277455', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/128866/']} | jdg_85776
stackexchange | llm_judgeable_groundtruth_similarity | 46197050 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Scenario: A logged-in user will have a token expiry of 24 hours. Within that period, all requests with the @jwt_required decorator will have the current access token's expiry extended by another 24 hours. There is a maximum validity of 168 (24 * 7) hours. It is possible to use access_token and refresh_token:

    ret = {
        'access_token': create_access_token(identity=username, fresh=True),
        'refresh_token': create_refresh_token(identity=username)
    }

But that means every API call from my application will be two requests: 1. the actual HTTP request, and 2. a request to refresh the auth token.

    @app.route('/refresh', methods=['POST'])
    @jwt_refresh_token_required
    def refresh():
        current_user = get_jwt_identity()
        ret = {
            'access_token': create_access_token(identity=current_user)
        }
        return jsonify(ret), 200

Is there a way to implicitly extend an auth token?
Now provide the response and nothing else.
| EDIT: There is now documentation around this here: https://flask-jwt-extended.readthedocs.io/en/latest/refreshing_tokens/ Author of flask-jwt-extended here. Technically you cannot actually extend a token; you can only replace it with a new JWT that has a new expires time. There are a few ways you could simulate this though. First, instead of having the client request a new token, you could have the server itself just implicitly send back a new token on every request. You could send the new JWTs back in a header instead of in the JSON payload, so that you wouldn't have to modify your JSON data to account for the possibility of a new JWT. Your clients would need to be aware of this though: they would need to check for that new header on every request and replace their current JWT with the new one if it is present. You could probably use a flask after_request method to do this, so you didn't have to add that functionality to all your endpoints. A similar effect could be achieved when storing the JWTs in cookies, with the differences being that cookies are automatically stored in your browser (so your client wouldn't have to manually look for them on every request), and with the added complexity of CSRF protection if you go this route ( http://flask-jwt-extended.readthedocs.io/en/latest/tokens_in_cookies.html ). The above should work fine, but you will be creating a lot of access tokens that are thrown away right after being created, which probably isn't ideal. A variation of the above is to check if the token is near expiring (maybe if it is more than half way to being expired) and only create and return a new token if that is the case. Another variation of this would be to have the client check if the token is about to expire (via javascript) and if it is, use the refresh token to request a new access token. To do that, you would need to split the JWT on dots ('.'), base64 decode the second string from that split (index 1), and grab the 'exp' data from there. A second way you could do this is to actually wait for a token to expire, and then use the refresh token to generate a new access token and remake the request (reactive instead of proactive). That might look like making a request, checking if the http code is 401, and if so using the refresh token to generate a new access token, then making the request again. Hope this helps :) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46197050', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/399759/']} | jdg_85777
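A sketch of the first option (the server implicitly sends back a fresh token), closely modeled on the pattern the flask-jwt-extended docs later adopted. Assumptions flagged here: cookie-based tokens, an existing app object, and a recent library version where get_jwt() returns the decoded token (older releases used get_raw_jwt()).

    from datetime import datetime, timedelta, timezone
    from flask_jwt_extended import (create_access_token, get_jwt,
                                    get_jwt_identity, set_access_cookies)

    @app.after_request
    def refresh_expiring_jwts(response):
        try:
            exp_timestamp = get_jwt()["exp"]
            now = datetime.now(timezone.utc)
            target = datetime.timestamp(now + timedelta(hours=12))
            if target > exp_timestamp:  # token is more than half-way expired
                token = create_access_token(identity=get_jwt_identity())
                set_access_cookies(response, token)
            return response
        except (RuntimeError, KeyError):
            # No valid JWT on this request (e.g. login page); leave it alone
            return response

If tokens travel in a header instead of cookies, the same hook would set a custom response header that the client watches for.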
stackexchange | llm_judgeable_groundtruth_similarity | 50923 |
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Scientists have been looking for an explanation for the unusual orbits of extreme trans-Neptunian objects. One of the possible explanations some astrophysicists have come up with is the possibility of Planet X. However, no one has been able to find Planet X. What is the possibility that Planet X is a spherical glass planet (not a typo, I actually mean SiO₂), which would explain why optical telescopes have not been able to find it? And what would be the possible implications for our current knowledge of astronomy and planet formation if we were to find it?
Now provide the response and nothing else.
| Zero chance. Even if the planet was made of glass, and was transparent, it would still radiate heat, as all warm objects do. Thus, we could find it with an infrared telescope. In fact, this is likely how we would find it even if it was a "normal" planet. Furthermore, at the distance it is likely to be, there is very little light to reflect. As such, it wouldn't matter if it's transparent or not, as there is not much visible light to worry about. I also doubt $\rm{SiO_2}$ remains transparent when subjected to the high temperatures and densities in the cores of planets. The Earth's core is ~6000 degrees Celsius; glass would have melted at that point. Then, you get into formation scenarios. How do you suppose it was made only of glass without being contaminated with all the other stuff that is floating in space and makes up planets? Add in 4.5 billion years of dust accumulation and material from asteroid impacts, and the surface definitely won't be clean glass anymore. If it's not clean, then you've lost the transparency. If it was, then we get to throw out the book on planet formation and will need to start again. | {} | {'log_upvote_score': 5, 'links': ['https://astronomy.stackexchange.com/questions/50923', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/47635/']} | jdg_85778
stackexchange | llm_judgeable_groundtruth_similarity | 2859508 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $A, B, C \in M_n(\mathbb R)$ be such that $A$ commutes with $B$, $B$ commutes with $C$ and $B$ is not a scalar matrix. Then $A$ commutes with $C$. I think it is false, but how can I solve it within 3 minutes?
Now provide the response and nothing else.
| In $\mathbb R^5$, let $A$ be the matrix which switches the first two standard basis vectors and preserves the others. That is, $Ae_1=e_2, Ae_2=e_1$, and $Ae_i=e_i$ for $i=3,4,5$. Let $C$ be the matrix which switches $e_2$ and $e_3$ (preserving the others), and let $B$ be the matrix which switches $e_4$ and $e_5$ (preserving the others). Then $A$ commutes with $B$, and $B$ with $C$ (the swaps act on disjoint indices), and $B$ is not scalar, but $A$ does not commute with $C$: indeed $CAe_1 = Ce_2 = e_3$, while $ACe_1 = Ae_1 = e_2$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2859508', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/517603/']} | jdg_85779
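The counterexample is easy to verify numerically; a small NumPy sketch (my addition) builds the three swap matrices, using 0-based indices:

    import numpy as np

    def swap(i, j, n=5):
        # Permutation matrix exchanging basis vectors i and j of R^n
        P = np.eye(n)
        P[[i, j]] = P[[j, i]]
        return P

    A, C, B = swap(0, 1), swap(1, 2), swap(3, 4)
    print(np.allclose(A @ B, B @ A))  # True: disjoint swaps commute
    print(np.allclose(B @ C, C @ B))  # True
    print(np.allclose(A @ C, C @ A))  # False: overlapping swaps do not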
stackexchange | llm_judgeable_groundtruth_similarity | 57409229 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have the below df:

    import pandas as pd

    data_dic = {
        "a": [0, 0, 1, 2],
        "b": [0, 3, 4, 5],
        "c": [6, 7, 8, 9]
    }
    df = pd.DataFrame(data_dic)

Result:

       a  b  c
    0  0  0  6
    1  0  3  7
    2  1  4  8
    3  2  5  9

I need to populate a new column from the above columns based on these conditions:

    if df.a > 0 then value df.a
    else if df.b > 0 then value df.b
    else value df.c

For now I am trying:

    df['value'] = [x if x > 0 else 'ww' for x in df['a']]

but I don't know how to add more conditions to this. Expected result:

       a  b  c  value
    0  0  0  6      6
    1  0  3  7      3
    2  1  4  8      1
    3  2  5  9      2

Thank you for your hard work.
Now provide the response and nothing else.
| Use numpy.select : df['value'] = np.select([df.a > 0 , df.b > 0], [df.a, df.b], default=df.c)print (df) a b c value0 0 0 6 61 0 3 7 32 1 4 8 13 2 5 9 2 Difference between vectorized and loop solutions in 400k rows: df = pd.concat([df] * 100000, ignore_index=True)In [158]: %timeit df['value2'] = np.select([df.a > 0 , df.b > 0], [df.a, df.b], default=df.c)9.86 ms ± 611 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)In [159]: %timeit df['value1'] = [x if x > 0 else y if y>0 else z for x,y,z in zip(df['a'],df['b'],df['c'])]399 ms ± 52.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57409229', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11557974/']} | jdg_85780 |
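For completeness, an equivalent chained alternative (my addition) that stays within pandas and reads as the same if/elif/else cascade:

    df['value'] = df.a.where(df.a > 0, df.b.where(df.b > 0, df.c))

Series.where keeps the caller's values where the condition holds and falls back to the second argument elsewhere, so nesting it reproduces the else-if chain.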
stackexchange | llm_judgeable_groundtruth_similarity | 270075 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Please look at the output below:

    bob ~ # df -h
    Filesystem                 Size  Used Avail Use% Mounted on
    udev                       5,7G  4,0K  5,7G   1% /dev
    tmpfs                      1,2G  1,5M  1,2G   1% /run
    /dev/mapper/mint--vg-root  218G   66G  142G  32% /
    none                       4,0K     0  4,0K   0% /sys/fs/cgroup
    tmpfs                      5,7G  528M  5,2G  10% /tmp
    none                       5,0M     0  5,0M   0% /run/lock
    none                       5,7G   99M  5,6G   2% /run/shm
    none                       100M   48K  100M   1% /run/user
    tmpfs                      5,7G   44K  5,7G   1% /var/tmp
    /dev/sda1                  236M  132M   93M  59% /boot

df reports that the LVM partition has 218G whereas it should be 250G (well, 232G if recalculated with 1024). So where are the 14G? But even 218-66=152, not 142! That is 10 more gigabytes which are also nowhere to be found. Other utils' output:

    bob ~ # pvs
      PV         VG      Fmt  Attr PSize   PFree
      /dev/sda5  mint-vg lvm2 a--  232,64g    0

    bob ~ # pvdisplay
      --- Physical volume ---
      PV Name               /dev/sda5
      VG Name               mint-vg
      PV Size               232,65 GiB / not usable 2,00 MiB
      Allocatable           yes (but full)
      PE Size               4,00 MiB
      Total PE              59557
      Free PE               0
      Allocated PE          59557
      PV UUID               3FA5KG-Dtp4-Kfyf-STAZ-K6Qe-ojkB-Tagr83

    bob ~ # fdisk -l /dev/sda
    Disk /dev/sda: 250.1 GB, 250059350016 bytes
    255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00097b2a

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048      499711      248832   83  Linux
    /dev/sda2          501758   488396799   243947521    5  Extended
    /dev/sda5          501760   488396799   243947520   8e  Linux LVM

    # sfdisk -l -uM
    Disk /dev/sda: 30401 cylinders, 255 heads, 63 sectors/track
    Warning: extended partition does not start at a cylinder boundary.
    DOS and Linux will interpret the contents differently.
    Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

       Device Boot Start    End     MiB     #blocks    Id  System
    /dev/sda1   *      1     243     243      248832   83  Linux
    /dev/sda2        244+ 238474  238231-  243947521    5  Extended
    /dev/sda3          0       -       0           0    0  Empty
    /dev/sda4          0       -       0           0    0  Empty
    /dev/sda5        245  238474  238230   243947520   8e  Linux LVM

    Disk /dev/mapper/mint--vg-root: 30369 cylinders, 255 heads, 63 sectors/track
    sfdisk: ERROR: sector 0 does not have an msdos signature
     /dev/mapper/mint--vg-root: unrecognized partition table type
    No partitions found

Linux Mint 17.3

UPDATE

    # lvdisplay
      --- Logical volume ---
      LV Path                /dev/mint-vg/root
      LV Name                root
      VG Name                mint-vg
      LV UUID                ew9fDY-oykM-Nekj-icXn-FQ1T-fiaC-0Jw2v6
      LV Write Access        read/write
      LV Creation host, time mint, 2016-02-18 14:52:15 +0200
      LV Status              available
      # open                 1
      LV Size                232,64 GiB
      Current LE             59557
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:0

Regarding swap: initially it was there, in LVM. Then I removed it and extended the root partition with the space that was used by the swap (about 12G).

UPDATE2

    # tune2fs -l /dev/mapper/mint--vg-root
    tune2fs 1.42.9 (4-Feb-2014)
    Filesystem volume name:   <none>
    Last mounted on:          /
    Filesystem UUID:          0b5ecf9b-a763-4371-b4e7-01c36c47b5cc
    Filesystem magic number:  0xEF53
    Filesystem revision #:    1 (dynamic)
    Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
    Filesystem flags:         signed_directory_hash
    Default mount options:    user_xattr acl
    Filesystem state:         clean
    Errors behavior:          Continue
    Filesystem OS type:       Linux
    Inode count:              14491648
    Block count:              57952256
    Reserved block count:     2897612
    Free blocks:              40041861
    Free inodes:              13997980
    First block:              0
    Block size:               4096
    Fragment size:            4096
    Reserved GDT blocks:      1010
    Blocks per group:         32768
    Fragments per group:      32768
    Inodes per group:         8192
    Inode blocks per group:   512
    Flex block group size:    16
    Filesystem created:       Thu Feb 18 14:52:49 2016
    Last mount time:          Sun Mar 13 16:49:48 2016
    Last write time:          Sun Mar 13 16:49:48 2016
    Mount count:              22
    Maximum mount count:      -1
    Last checked:             Thu Feb 18 14:52:49 2016
    Check interval:           0 (<none>)
    Lifetime writes:          774 GB
    Reserved blocks uid:      0 (user root)
    Reserved blocks gid:      0 (group root)
    First inode:              11
    Inode size:               256
    Required extra isize:     28
    Desired extra isize:      28
    Journal inode:            8
    First orphan inode:       6160636
    Default directory hash:   half_md4
    Directory Hash Seed:      51743315-0555-474b-8a5a-bbf470e3ca9f
    Journal backup:           inode blocks

UPDATE3 (Final)

Thanks to Jonas, the space loss has been found:

    # df -h
    Filesystem                 Size  Used Avail Use% Mounted on
    /dev/mapper/mint--vg-root  218G   65G  142G  32% /

    # resize2fs /dev/mapper/mint--vg-root
    resize2fs 1.42.9 (4-Feb-2014)
    Filesystem at /dev/mapper/mint--vg-root is mounted on /; on-line resizing required
    old_desc_blocks = 14, new_desc_blocks = 15
    The filesystem on /dev/mapper/mint--vg-root is now 60986368 blocks long.

    # df -h
    Filesystem                 Size  Used Avail Use% Mounted on
    /dev/mapper/mint--vg-root  229G   65G  153G  30% /

and this is a diff of the tune2fs command output before and after running resize2fs:

    # diff /tmp/tune2fs_before_resize2fs /tmp/tune2fs2_after_resize2fs
    13,17c13,17
    < Inode count:              14491648
    < Block count:              57952256
    < Reserved block count:     2897612
    < Free blocks:              40041861
    < Free inodes:              13997980
    ---
    > Inode count:              15253504
    > Block count:              60986368
    > Reserved block count:     3018400
    > Free blocks:              43028171
    > Free inodes:              14759836
    21c21
    < Reserved GDT blocks:      1010
    ---
    > Reserved GDT blocks:      1009
    38c38
    < Inode size:               256
    ---
    > Inode size:               256
    42c42
    < First orphan inode:       6160636
    ---
    > First orphan inode:       5904187
Now provide the response and nothing else.
| Let us do some research. I have noticed that difference before, but never checked in detail what to attribute the losses to. Have a look at my scenario for comparison: fdisk shows the following partition:

/dev/sda3        35657728 1000214527  964556800  460G 83 Linux

There will be some losses as my filesystem lives in a LUKS container, but that should only be a few MiB. df shows:

Filesystem      Size  Used Avail Use% Mounted on
/dev/dm-1       453G  373G   58G  87% /

(The LUKS container is also why /dev/sda3 does not match /dev/dm-1, but they are really the same device, with encryption in between, no LVM. This also shows that LVM is not responsible for your losses; I have them too.)

Now let's ask the filesystem itself on that matter. Calling tune2fs -l , which outputs a lot of interesting information about ext-family filesystems, we get:

root@altair ~ › tune2fs -l /dev/dm-1
tune2fs 1.42.12 (29-Aug-2014)
Filesystem volume name:   <none>
Last mounted on:          /
Filesystem UUID:          0de04278-5eb0-44b1-9258-e4d7cd978768
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              30146560
Block count:              120569088
Reserved block count:     6028454
Free blocks:              23349192
Free inodes:              28532579
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      995
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Wed Oct 14 09:27:52 2015
Last mount time:          Sun Mar 13 12:25:50 2016
Last write time:          Sun Mar 13 12:25:48 2016
Mount count:              23
Maximum mount count:      -1
Last checked:             Wed Oct 14 09:27:52 2015
Check interval:           0 (<none>)
Lifetime writes:          1426 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       26747912
Default directory hash:   half_md4
Directory Hash Seed:      4723240b-9056-4f5f-8de2-d8536e35d183
Journal backup:           inode blocks

Glancing over it, the first thing that should spring into your eyes is Reserved blocks . Multiplying that by the Block size (also from the output), we get the difference between the df Used+Avail and Size:

453 GiB - (373 GiB + 58 GiB) = 22 GiB
6028454 * 4096 Bytes = 24692547584 Bytes ~= 23 GiB

Close enough, especially considering that df rounds (using df without -h and repeating the calculation leaves only 16 MiB of the difference between Used+Avail and Size unexplained). To whom the reserved blocks are reserved is also written in the tune2fs output: it is root. This is a safety net to ensure that non-root users cannot make the system entirely unusable by filling the disk, and keeping a few percent of disk space unused also helps against fragmentation.

Now for the difference between the size reported by df and the size of the partition. This can be explained by taking a look at the inodes. ext4 preallocates inodes, so that space is unusable for file data. Multiply the Inode count by the Inode size , and you get:

30146560 * 256 Bytes = 7717519360 Bytes ~= 7 GiB
453 GiB + 7 GiB = 460 GiB

Inodes are basically directory entries. Let us ask mkfs.ext4 about the details (from man mkfs.ext4):

-i bytes-per-inode
    Specify the bytes/inode ratio. mke2fs creates an inode for every bytes-per-inode bytes of space on the disk. The larger the bytes-per-inode ratio, the fewer inodes will be created. This value generally shouldn't be smaller than the blocksize of the filesystem, since in that case more inodes would be made than can ever be used. Be warned that it is not possible to change this ratio on a filesystem after it is created, so be careful deciding the correct value for this parameter. Note that resizing a filesystem changes the number of inodes to maintain this ratio.

There are different presets to use for different scenarios. On a file server with lots of Linux distribution images, it makes sense to pass e.g. -T largefile or even -T largefile4 . What -T means is defined in /etc/mke2fs.conf ; in those examples and on my system:

largefile = {
    inode_ratio = 1048576
}
largefile4 = {
    inode_ratio = 4194304
}

So with -T largefile4 , the number of inodes is much less than the default (the default inode_ratio is 16384 in my /etc/mke2fs.conf ). This means less space reserved for directory entries, and more space for data. When you run out of inodes, you cannot create new files. Increasing the number of inodes in an existing filesystem does not seem to be possible . Thus, the default number of inodes is rather conservatively chosen to ensure that the average user does not run out of inodes prematurely. I just figured that out by poking at my numbers; let me know if it (does not) work for you ☺. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/270075', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/37713/']} | jdg_85781
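A quick Python sketch of the arithmetic in the answer above, plugging in the tune2fs values it quotes (this only reproduces the calculation; it is not a general-purpose tool):

# Values copied from the tune2fs output above.
block_size      = 4096
reserved_blocks = 6028454      # "Reserved block count"
inode_count     = 30146560     # "Inode count"
inode_size      = 256          # "Inode size"

GiB = 1024 ** 3
print(reserved_blocks * block_size / GiB)   # ~23 GiB reserved for root
print(inode_count * inode_size / GiB)       # ~7 GiB preallocated for inode tables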
stackexchange | llm_judgeable_groundtruth_similarity | 20575257 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a PowerShell script that monitors an image folder. I need to find a way to automatically run this script after the computer starts. I already tried the following methods, but I couldn't get it working.

Use msconfig and add the PowerShell script to startup, but I cannot find the PowerShell script on that list.

Create a shortcut and drop it into the startup folder. No luck.

%SystemRoot%\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -File "C:\Doc\Files\FileMonitor.ps1"

or

%SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe -File "C:\Doc\Files\FileMonitor.ps1"

Here's my PowerShell script:

$folder = "C:\\Doc\\Files"
$dest = "C:\\Doc\\Files\\images"
$filter = "*.jpg"
$fsw = new-object System.IO.FileSystemWatcher $folder, $filter -Property @{
    IncludeSubDirectories = $false
    NotifyFilter = [System.IO.NotifyFilters]'FileName, LastWrite'
}
$onCreated = Register-ObjectEvent $fsw Created -SourceIdentifier FileCreated -Action {
    Start-Sleep -s 10
    Move-Item -Path C:\Doc\Files\*.jpg C:\Doc\Files\images
}

I also tried to add a basic task using taskschd.msc . It is still not working. Here's what I found, and maybe it will help with debugging: if I open up a PowerShell window and run the script there, it works. But if I run it in a command prompt,

powershell.exe -File "C:\Doc\Files\FileMonitor.ps1"

it will not work. I am not sure if it's a permission problem or something else. BTW, I have PowerShell 3.0 installed, and if I type $host.version, it will show 3 there. But my powershell.exe seems like it is still v1.0:

%SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe
Now provide the response and nothing else.
| I finally got my PowerShell script to run automatically on every startup. You will need to create two files: the first is the PowerShell script (e.g. script.ps1 ) and the second is a .cmd file that will contain commands that will run in the command prompt (e.g. startup.cmd ). The second file is what needs to be executed when the computer starts up, and simply copy-pasting the .ps1 into the startup folder won't work, because that doesn't actually execute the script - it only opens the file with Notepad. You need to execute the .cmd, which itself will execute the .ps1 using PowerShell. OK, enough babbling and on to the steps:

1. Create your .ps1 script and place it in a folder. I put it on my desktop for simplicity. The path would look something like this: %USERPROFILE%\Desktop\script.ps1

2. Create a .cmd file and place it in %AppData%\Microsoft\Windows\Start Menu\Programs\Startup\startup.cmd . Doing this will execute the cmd file every time on startup. Here is a link on how to create a .cmd file if you need help.

3. Open the .cmd file with a text editor and enter the following lines:

PowerShell -Command "Set-ExecutionPolicy Unrestricted" >> "%TEMP%\StartupLog.txt" 2>&1
PowerShell %USERPROFILE%\Desktop\script.ps1 >> "%TEMP%\StartupLog.txt" 2>&1

This will do two things: set the Execution Policy of your PowerShell to Unrestricted (this is needed to run scripts, or else PowerShell will not do it), and use PowerShell to execute the .ps1 script found in the path specified. This code is specifically for PowerShell v1.0. If you're running PowerShell v2.0, it might be a little different. In any case, check this source for the .cmd code.

4. Save the .cmd file.

Now that you have your .ps1 and .cmd files in their respective paths and with the script for each, you are all set. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20575257', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/817314/']} | jdg_85782
stackexchange | llm_judgeable_groundtruth_similarity | 1689054 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to specify a schema in a data connection or the LinqToSql designer? Whenever I go to set up a data connection for LinqToSql, there doesn't seem to be any way to specify a schema, and I get 'dbo' by default. I figure that I can't be the first person to use a schema other than 'dbo' with LinqToSql, so I must be missing something obvious. EDIT : Maybe a little clarification is in order. Let's say that my database is as follows:

MyDatabase.dbo.Tables
MyDatabase.MySchema.Tables

I can create a connection to MyDatabase.dbo (by default) and thus get a nice listing of tables that I can drag onto the designer. What I can't seem to do is create a connection to MyDatabase.MySchema to get the nice table listings etc. which I could drag onto the designer.
Now provide the response and nothing else.
| pre-commit hook:

my $repos = shift;
my $txn = shift;

foreach my $line (`$svnlook changed -t $txn "$repos"`)
{
    chomp($line);
    if ($line !~ /^([AUD_]).\s\s(.+)$/)
    {
        print STDERR "Can't parse [$line].\n";
        exit(1);
    }
    else
    {
        my $action = $1;
        my $file = $2;
        chomp($file);

        # If path has trailing slash, then it is a folder and we want to skip folders
        if ($file =~ /\/$/)
        {
            next;
        }

        my $fileContent = `$svnlook cat -t $txn "$repos" "$file"`;

        if ($action =~ /[AU]/)
        {
            my @lines = split(/\n/, $fileContent);
            # Check for whatever you need in this file's content
        }
    }
}
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1689054', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/113763/']} | jdg_85783
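For comparison, a rough Python sketch of the same hook logic (assumptions: svnlook is on the server's PATH, and the hook receives REPOS and TXN as argv; this mirrors the Perl regex above rather than being a tested drop-in):

import re
import subprocess
import sys

SVNLOOK = "svnlook"  # assumed to be on PATH

def main(repos, txn):
    changed = subprocess.run([SVNLOOK, "changed", "-t", txn, repos],
                             capture_output=True, text=True, check=True).stdout
    for line in changed.splitlines():
        m = re.match(r"^([AUD_]).\s\s(.+)$", line)
        if not m:
            sys.exit("Can't parse [%s]." % line)
        action, path = m.group(1), m.group(2)
        if path.endswith("/"):          # trailing slash -> folder, skip it
            continue
        if action in ("A", "U"):
            content = subprocess.run([SVNLOOK, "cat", "-t", txn, repos, path],
                                     capture_output=True, text=True, check=True).stdout
            # Inspect `content` for whatever rule you need; exit non-zero to reject.

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])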
stackexchange | llm_judgeable_groundtruth_similarity | 613417 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reading Steele's The Cauchy-Schwarz Master Class . I'm having some trouble understanding it; I'll list my doubts:

I'm having trouble understanding $\sum_{k=1}^{\infty}a^2_k<\infty$. It's less than infinity? What does that mean? In the following text, it mentions that it is small, but I'm not sure what that means; small in relation to what?

I don't know how he goes from $(1.4)$ to $xy\leq C(x^2+y^2)$, and I don't know how he draws the link to the familiar factorization. I can only think of $xy\leq C(x^2+y^2)$ as $0\leq Cx^2+Cy^2-xy$.

I see that $xy\leq C(x^2+y^2)$ has a connection with $(1.5)$, in which it seems he's assuming that $C=1/2$. I'm lost at this middle step.
Now provide the response and nothing else.
| 1. When working with the non-negative reals, it is often useful to treat $\infty$ as a formal symbol which is bigger than everything else (see extended reals ). Then a sum of non-negative numbers always has a value; either it's infinite, or it converges to a real number (this statement is related to the completeness of the reals, and is worth thinking about if you haven't already). So, writing $\sum_{k=1}^\infty a_k^2 < \infty$ just means that the series $\sum_{k=1}^\infty a_k^2$ converges to some non-negative real number. Equivalently, it means that all the finite sums $\sum_{k=1}^N a_k^2$ are bounded above by some fixed number. The least upper bound is what we call the "value" of the series $\sum_{k=1}^\infty a_k^2$. For example, we could write $\sum_{k=1}^\infty \frac{1}{2^k} <\infty$, because $\sum_{k=1}^N\frac{1}{2^k} = 1 - \frac{1}{2^N}$, so the finite sums are bounded above by $1$. In fact, $\sum_{k=1}^\infty \frac{1}{2^k} =1$.

2. It's important to realize that the author is basically thinking backwards here. That is, he's saying: What would it take to prove (1.4)? Well, we know that two things are finite, and we want to prove that a third thing is finite. One way to do this would be to show that it's less than a constant times the sum of the first two things. In other words, if we knew that there was some non-negative $C$ with $|a_k b_k| \leq C(a_k^2 + b_k^2)$ for all $k$, then that would be enough to prove (1.4), because then we would have $\sum_{k=1}^N |a_k b_k| \leq C (\sum_{k=1}^N a_k^2 + \sum_{k=1}^N b_k^2)$. If the two sums on the right are bounded above by non-negative reals $A$ and $B$, then it follows that the sum on the left is bounded above by the non-negative real $C(A+B)$. Since there is nothing particularly special about the numbers $|a_k|$ and $|b_k|$, the author is forgetting about the sequences for the moment and considering any two non-negative real numbers $x$ and $y$. If it were true that there was some $C$ with $xy \leq C(x^2 + y^2)$ for any non-negative $x$ and $y$, then we could set $x=|a_k|$ and $y=|b_k|$ and get exactly the thing we wanted. (Note that we haven't really done any math yet; this is a style of reasoning I like to call "wishful thinking", where you imagine what would be nice to have, or would allow you to solve the problem, but you don't immediately worry about whether it's possible or not. Here we are very lucky, and we will get everything that we hope for, but for the moment we are just asking "What if?")

3. Again, you need to understand that the author is thinking backwards. We don't know that $xy\leq C(x^2+y^2)$; this is something that we hope is true for some $C$. So there is no problem trying out $C=\frac{1}{2}$; if that works, then we have what we want. But why would we guess that number? Well, maybe we're staring at the equation $Cx^2 + Cy^2 - xy \geq 0$ and we realize that it looks a little bit like a square. Maybe we even try multiplying by $2$, to get $2Cx^2 + 2Cy^2 - 2xy \geq 0$. Now it looks a whole lot like $(x-y)^2 \geq 0$, which we already know is true! So the thing that we want to be true really is true when $C=\frac{1}{2}$, which is very nice for us. But the reasoning goes in the opposite direction of the writing. That is, we know that $(|a_k| - |b_k|)^2 \geq 0$, therefore $|a_k b_k| \leq C (a_k^2 + b_k^2)$ for some non-negative $C$, therefore we can bound the series $\sum_k |a_k b_k|$, etc. The author is not giving the proof; he is telling you the right way to think about a problem like this. You cannot always directly deduce the thing you want; sometimes you need to begin with the conclusion, and ask what steps would lead you there. There are no fixed methods for doing this. It is just asking the question "What if?" For any given problem, this approach may not work, or it may work only partially, but here it leads to a solution very, very quickly. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/613417', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/25805/']} | jdg_85784
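The whole argument compresses into a single chain; written once in LaTeX (with $A$ and $B$ the finite values of the two squared series, exactly as in step 2 above):

$$0 \le (x-y)^2 \quad\Longrightarrow\quad xy \le \tfrac{1}{2}\left(x^2+y^2\right),$$

$$\sum_{k=1}^{N} |a_k b_k| \;\le\; \frac{1}{2}\left(\sum_{k=1}^{N} a_k^2 + \sum_{k=1}^{N} b_k^2\right) \;\le\; \frac{1}{2}(A+B) \;<\; \infty.$$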
stackexchange | llm_judgeable_groundtruth_similarity | 13078230 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm implementing GCM in my app and keeping a hash of notifications to keep track of what is in the notification shade (I have to change intents based on whether the user is in or out of the app). I set the deleteIntent PendingIntent for all my notifications. All this does is remove the Notification from my local hash so it won't be updated anymore. The intent is fired fine if I clear all or swipe to delete a notification. However, I also set my notifications to auto-cancel, and clicking on a notification does not trigger the deleteIntent for my notification. My question is: is there any way to be notified when my Notifications are auto-cancelled?
Now provide the response and nothing else.
| This bug has been reported , but it doesn't look like it has been investigated at all. To work around this, here's what I did:

- Turn off auto-cancel
- Use a broadcast for both content and delete intents, with different actions
- The broadcast receiver checks the action
- Content action: do both the click and delete operations, and cancel the notification manually
- Delete action: do the delete operation only

For example:

Send Notification

Notification.Builder builder = new Notification.Builder(context)
    // Set other properties (not auto-cancel)
    .setContentIntent(PendingIntent.getBroadcast(context, 0, new Intent(NOTIFICATION_CLICKED_ACTION), 0))
    .setDeleteIntent(PendingIntent.getBroadcast(context, 0, new Intent(NOTIFICATION_DELETED_ACTION), 0));
notificationManager.notify(NOTIFICATION_ID, builder.build());

Receive Broadcast

if (intent.getAction().equals(NOTIFICATION_CLICKED_ACTION)) {
    startActivity(new Intent(context, MyActivity.class));
    notificationManager.cancel(NOTIFICATION_ID);
}
// Do deletion behaviour here (for both click and delete actions)
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13078230', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/113235/']} | jdg_85785
stackexchange | llm_judgeable_groundtruth_similarity | 19499967 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Just upgraded a .NET 4.5 WCF Service, which also has an OData service, to use EF 6. The OData service was of course working prior to the upgrade. Now, when attempting to query the OData service or even just browse to it from Visual Studio 2012, I get the following Request Error :

The server encountered an error processing the request. The exception message is 'Expression of type 'System.Data.Entity.Core.Objects.ObjectContext' cannot be used for return type 'System.Data.Objects.ObjectContext''. See server logs for more details. The exception stack trace is:

at System.Linq.Expressions.Expression.ValidateLambdaArgs(Type delegateType, Expression& body, ReadOnlyCollection`1 parameters)
at System.Linq.Expressions.Expression.Lambda[TDelegate](Expression body, String name, Boolean tailCall, IEnumerable`1 parameters)
at System.Data.Services.Providers.DbContextHelper.CreateDbContextAccessor(Type type)
at System.Data.Services.Providers.DbContextHelper.GetDbContextAccessor(Type type)
at System.Data.Services.DataService`1.CreateMetadataAndQueryProviders(IDataServiceMetadataProvider& metadataProviderInstance, IDataServiceQueryProvider& queryProviderInstance, Object& dataSourceInstance, Boolean& isInternallyCreatedProvider)
at System.Data.Services.DataService`1.CreateProvider()
at System.Data.Services.DataService`1.HandleRequest()
at System.Data.Services.DataService`1.ProcessRequestForMessage(Stream messageBody)
at SyncInvokeProcessRequestForMessage(Object , Object[] , Object[] )
at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage41(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage31(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage11(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)

The ODataLib is 5.6, EF is 6.0.1, the WCF Data Services Server is 5.6. This issue did not help. I have done everything from here as well with no luck. I have no references to System.Data.Entity . What can I do to get past this?
Now provide the response and nothing else.
| Use the WCF Data Services Entity Framework Provider . See Using WCF Data Services 5.6.0 with Entity Framework 6+ for more information.

Using WCF Data Services 5.6.0 with Entity Framework 6+

And now for some exciting news: you can finally use WCF Data Services with Entity Framework 6+! Today we are uploading a new NuGet package called WCF Data Services Entity Framework Provider . This NuGet package bridges the gap between WCF Data Services 5.6.0 and Entity Framework 6+. We were able to build this provider as an out-of-band provider (that is, a provider that ships apart from the core WCF DS stack) because of the public provider work we did recently.

Upgrading an existing OData service to EF 6

If you are upgrading an existing OData service to Entity Framework 6 or greater, you will need to do a couple of things:

1. Install the WCF Data Services Entity Framework Provider NuGet package. Since this package has a dependency on WCF Data Services 5.6.0 and Entity Framework 6 or greater, some of the other NuGet packages in your project may be upgraded as well.
2. Replace the base type of your DataService. For EF 5 or below, your data service should inherit from DataService<T> where T is a DbContext or ObjectContext . For EF 6 or greater, your data service should inherit from EntityFrameworkDataService<T> where T is a DbContext . See What's the difference between DataService and EntityFrameworkDataService below for more details.

Creating a new OData service with EF 6

If you are creating a new OData service and would like to use Entity Framework 6 or greater, you will need to follow similar steps:

1. Create your new project. I typically use an ASP.NET Empty Web Application for this, but you can use whatever you want. Note that if you do use the empty template, you may need to create an App_Data folder for Entity Framework to work properly with LocalDB.
2. Install the WCF Data Services Entity Framework Provider NuGet package. Since this package has a dependency on WCF Data Services 5.6.0 and Entity Framework 6 or greater, some of the other NuGet packages in your project may be upgraded as well.
3. Add a new WCF Data Service. It's best if you ensure that your tooling is up-to-date, as we occasionally fix bugs in the item template. Our latest tooling installer was released with WCF DS 5.6.0. It can be downloaded here .
4. Replace the base type of the DataService that was generated by the item template. For EF 6 or greater, your data service should inherit from EntityFrameworkDataService<T> where T is a DbContext . See What's the difference between DataService and EntityFrameworkDataService below for more details.

What's the difference between DataService<T> and EntityFrameworkDataService<T> ?

Historically the WCF DS stack required all WCF DS-based OData services to inherit from DataService<T> . Internally, the data service would determine whether the service should use the in-box EF provider, the in-box Reflection provider, or a custom provider. When we added support for EF 6, we utilized the new public provider functionality to allow the provider to ship separately. This will allow us, for instance, to use WCF DS 5.6.0 with either EF 5, 6, or some future version. However, the new public provider functionality comes with a little bit of code you need to write. Since that code should be the same for every default EF 6 WCF DS provider, we went ahead and included a class that does this for you. EntityFrameworkDataService<T> inherits from DataService<T> and implements all the code you would need to implement otherwise. By shipping this additional class, we literally made the upgrade process as simple as changing the base type of your service. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19499967', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/373438/']} | jdg_85786
stackexchange | llm_judgeable_groundtruth_similarity | 555392 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Short Version : If we want to solve a problem where a particle is forced off a geodesic, can we obtain identical results if we instead solve the problem where the particle is in a different spacetime but stays on a geodesic?

Long Version (The Thought Experiment) : (See Twin Paradox for background) Twin A is floating stationary in space. Twin B is in a high-speed rocket which passes by twin A. At that moment, twin A and twin B synchronise their clocks. Then, consider two different situations:

1. Space is globally flat Minkowski. Twin B coasts through flat space, using their thrusters to slowly accelerate before using them to turn around and travel back home to twin A.

2. Space is globally Schwarzschild, but twin A is situated very far away from the central black hole, where space is, to good approximation, flat. Twin B immediately turns off their thrusters and coasts along their geodesic with no noticeable acceleration in their frame. The geodesic happens to take them close to the black hole, curving completely around it and turning back on itself, back towards twin A (eventually passing by them again).

Figure 1: a suggestion of a geodesic around a Schwarzschild black hole which turns back on itself (not to scale).

When the twins pass by the second time, we find that twin A's clock has advanced more time than twin B's clock. That is, twin A has aged more than twin B. I know this is true in case (1) because it is the typical description of the twin paradox, which has a well-known answer (although I don't know how to explicitly calculate it). I know this is true in case (2) because twin A and twin B are both just geodesics in a Schwarzschild spacetime, so I was able to numerically integrate them and, sure enough, I found that $\tau_B < \tau_A$ .

My question is: Are situations (1) and (2) equivalent when viewed from twin A's frame?

Why I think they are equivalent: The Einstein Equivalence Principle states that acceleration and curvature are equivalent. A description that uses curvature (geodesics) to describe a particle's trajectory should be the same as a description that uses acceleration.

Why I'm not sure: Twin B feels a force of acceleration in case (1) but doesn't feel anything in case (2). In case (2), both twins A and B stay on geodesics forever. This is more akin to time dilation in Special Relativity, which happens when two particles are on different geodesics. Perhaps the time dilation we're seeing in (2) is more related to the SR type of time dilation, which is presumably different from the type in (1)?
Now provide the response and nothing else.
| The Lorentz force is not a Lorentz invariant, so if you get the same total force in S' as S, then you are doing something wrong. In the stationary frame of the electrons, the force on one electron due the other is given by a pure Coulomb force $$ {\bf F'} = -e{\bf E'} = \frac{e^2}{4\pi \epsilon_0z'^2}\ {\bf \hat{z}} ,$$ where $z'$ is their separation along the z-axis. In your laboratory frame (in which the electrons are moving) then the electric field is transformed according to the usual special relativistic transformations of the electromagnetic fields for a frame velocity difference of ${\bf v} = v{\bf \hat{x}}$ (i.e. perpendicular to a line joining the charges) $$ {\bf E} = \gamma E'\ {\bf \hat{z}}$$ and there is now a magnetic field $$ {\bf B} = -\gamma \frac{vE'}{c^2}\ {\bf \hat{y}}\ ,$$ and $z=z'$ . The total Lorentz force is then $${\bf F} = -e \left( {\bf E} + {\bf v} \times {\bf B}\right) = -\gamma eE' \left(1 - \frac{v^2}{c^2}\right)\ {\bf \hat{z}} = \gamma^{-1}\ {\bf F'}$$ Thus in the laboratory frame the force between the electrons diminishes as they get faster. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/555392', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/265641/']} | jdg_85787 |
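A quick numeric check of the $\gamma^{-1}$ result above (the separation and speed below are made-up values; only the ratio matters):

import math

c    = 299792458.0          # m/s
e    = 1.602176634e-19      # C
eps0 = 8.8541878128e-12     # F/m

z = 1e-6                    # separation along z (assumed)
v = 0.8 * c                 # common speed along x (assumed)
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

E_rest = e / (4 * math.pi * eps0 * z ** 2)          # E' at one electron, rest frame
F_rest = e * E_rest                                 # Coulomb force in the rest frame
F_lab  = gamma * e * E_rest * (1.0 - (v / c) ** 2)  # transformed E and B combined

print(F_lab * gamma / F_rest)   # -> 1.0, i.e. F_lab = F_rest / gamma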
stackexchange | llm_judgeable_groundtruth_similarity | 40750 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Currently I am learning the Maxwell-Boltzmann distribution (MBD), and in that I am learning about microstates, macrostates and thermodynamic probability (TDP). I understood the derivation of the MBD, but I am having a hard time (and getting confused) visualising these terms (microstate, macrostate and TDP) in connection with the MBD. Can anyone explain (possibly with pictures) what a microstate, a macrostate and TDP are, and how they are used to derive the MBD?
Now provide the response and nothing else.
| A ‘microstate’ refers to a description of the system which relies on the states of each element of the system. Applied to a thermodynamic system, each microstate $M_i$ of the system is a set of positions $\{q_i\}$ and velocities $\{\dot q_i\}$ for $i = 1,\ldots,3N$ (in three dimensions; add another set of coordinates for internal degrees of freedom, such as rotation) which describe the position and velocity of each particle. As you can imagine, for large $N$ (say, $N = 10^{23}$), this gets out of hand. Furthermore, the probability that the system is in microstate $M_i$ is quite low, as there are many, many different microstates the system could occupy. A ‘macrostate’, on the other hand, is a state description relying on the macroscopic properties of the system: its temperature, pressure, volume, internal energy and such. For each macrostate, there are many, many microstates which result in the same macrostate: for example, if you interchange the velocity (but not position) of two gas particles, the macrostate does not change, but you have a different microstate. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/40750', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/7248/']} | jdg_85788
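A tiny enumeration makes this concrete. In the sketch below, four two-state "spins" stand in for particles: a microstate is the full tuple, the macrostate is the number of up-spins, and the count per macrostate is the thermodynamic probability $W$:

from itertools import product
from collections import Counter

microstates = list(product([0, 1], repeat=4))    # all 2^4 = 16 microstates
W = Counter(sum(m) for m in microstates)         # multiplicity per macrostate

for ups, count in sorted(W.items()):
    print(ups, "up-spins:", count, "microstates")
# 0:1, 1:4, 2:6, 3:4, 4:1 -- the middle macrostate is the most probable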
stackexchange | llm_judgeable_groundtruth_similarity | 36068622 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create an array of bool arrays. I want to have every combination of bool arrays with the exception of {false, false, false, false} . I want the order of this array to hold its sub arrays such that it ascends in the order of least trues to most trues. (Backwards order is fine, but it must still be ordered.) Each subset of arrays such that they have the same number of trues should be in random order. I can hardcode it as such:

private bool[][] GetBoolArrays()
{
    var fourDoorList = new List<bool[]>();
    fourDoorList.Add(new bool[4] { true, true, true, true });
    fourDoorList = fourDoorList.OrderBy(c => Random.Range(float.MinValue, float.MaxValue)).ToList();

    var threeDoorList = new List<bool[]>();
    threeDoorList.Add(new bool[4] { true, true, true, false });
    threeDoorList.Add(new bool[4] { true, true, false, true });
    threeDoorList.Add(new bool[4] { true, false, true, true });
    threeDoorList.Add(new bool[4] { false, true, true, true });
    threeDoorList = threeDoorList.OrderBy(c => Random.Range(float.MinValue, float.MaxValue)).ToList();

    var twoDoorList = new List<bool[]>();
    twoDoorList.Add(new bool[4] { true, true, false, false });
    twoDoorList.Add(new bool[4] { true, false, true, false });
    twoDoorList.Add(new bool[4] { true, false, false, true });
    twoDoorList.Add(new bool[4] { false, true, true, false });
    twoDoorList.Add(new bool[4] { false, true, false, true });
    twoDoorList.Add(new bool[4] { false, false, true, true });
    twoDoorList = twoDoorList.OrderBy(c => Random.Range(float.MinValue, float.MaxValue)).ToList();

    var oneDoorList = new List<bool[]>();
    oneDoorList.Add(new bool[4] { true, false, false, false });
    oneDoorList.Add(new bool[4] { false, true, false, false });
    oneDoorList.Add(new bool[4] { false, false, true, false });
    oneDoorList.Add(new bool[4] { false, false, false, true });
    oneDoorList = oneDoorList.OrderBy(c => Random.Range(float.MinValue, float.MaxValue)).ToList();

    var boolArrayList = new List<bool[]>();
    boolArrayList.AddRange(fourDoorList);
    boolArrayList.AddRange(threeDoorList);
    boolArrayList.AddRange(twoDoorList);
    boolArrayList.AddRange(oneDoorList);
    return boolArrayList.ToArray();
}

But that is so very dirty! I can create a list like such, but these are unordered in the way I want:

private bool[][] GetBoolArrays()
{
    const int subArraySize = 4;
    bool[][] combinations = new bool[(int)Mathf.Pow(2, subArraySize) - 1][];
    for (int i = 1; i < Mathf.Pow(2, subArraySize); i++)
    {
        string binary = System.Convert.ToString(i, 2);
        while (binary.Length < subArraySize)
        {
            binary = 0 + binary;
        }
        bool[] singleCombination = binary.Select(c => c == '1').ToArray();
        combinations[i - 1] = singleCombination;
    }
    return combinations;
}

So to clarify, I am trying to create an array of arrays. Each sub array has 4 bools. The main array has every combination of subarrays except for all false. The subarrays should be ordered by number of trues, but each section with a set number of trues should be randomized. I apologize if this is a poor explanation of what I am after...it is a bit difficult to explain. I can clarify on anything needed. Any ideas on how I can clean up the hard coded version of this?
Now provide the response and nothing else.
| Let's make a series of small refactorings. We start with:

private bool[][] GetBoolArrays()
{
    var fourDoorList = new List<bool[]>();
    fourDoorList.Add(new bool[4] { true, true, true, true });
    fourDoorList = fourDoorList.OrderBy(c => Random.Range(float.MinValue, float.MaxValue)).ToList();

    var threeDoorList = new List<bool[]>();
    threeDoorList.Add(new bool[4] { true, true, true, false });
    threeDoorList.Add(new bool[4] { true, true, false, true });
    threeDoorList.Add(new bool[4] { true, false, true, true });
    threeDoorList.Add(new bool[4] { false, true, true, true });
    threeDoorList = threeDoorList.OrderBy(c => Random.Range(float.MinValue, float.MaxValue)).ToList();

    var twoDoorList = new List<bool[]>();
    twoDoorList.Add(new bool[4] { true, true, false, false });
    twoDoorList.Add(new bool[4] { true, false, true, false });
    twoDoorList.Add(new bool[4] { true, false, false, true });
    twoDoorList.Add(new bool[4] { false, true, true, false });
    twoDoorList.Add(new bool[4] { false, true, false, true });
    twoDoorList.Add(new bool[4] { false, false, true, true });
    twoDoorList = twoDoorList.OrderBy(c => Random.Range(float.MinValue, float.MaxValue)).ToList();

    var oneDoorList = new List<bool[]>();
    oneDoorList.Add(new bool[4] { true, false, false, false });
    oneDoorList.Add(new bool[4] { false, true, false, false });
    oneDoorList.Add(new bool[4] { false, false, true, false });
    oneDoorList.Add(new bool[4] { false, false, false, true });
    oneDoorList = oneDoorList.OrderBy(c => Random.Range(float.MinValue, float.MaxValue)).ToList();

    var boolArrayList = new List<bool[]>();
    boolArrayList.AddRange(fourDoorList);
    boolArrayList.AddRange(threeDoorList);
    boolArrayList.AddRange(twoDoorList);
    boolArrayList.AddRange(oneDoorList);
    return boolArrayList.ToArray();
}

First thing we notice is that the shuffle code is duplicated. Extract it to a helper extension. Also, why do we need to turn this into a list? We're just going to pass it to AddRange later. Keep it as a sequence.

static IEnumerable<T> Shuffle<T>(this IEnumerable<T> items)
{
    return items.OrderBy(c => Random.Range(float.MinValue, float.MaxValue));
}

Also, we now have a shuffled sequence and an unshuffled list. Keep them in separate variables. Also, we notice that there is no point shuffling the list that has only one thing in it! OK, now what have we got?

private bool[][] GetBoolArrays()
{
    var fourDoorList = new List<bool[]>();
    fourDoorList.Add(new bool[4] { true, true, true, true });
    var fourDoorListShuffle = fourDoorList; // No point shuffling!

    var threeDoorList = new List<bool[]>();
    threeDoorList.Add(new bool[4] { true, true, true, false });
    threeDoorList.Add(new bool[4] { true, true, false, true });
    threeDoorList.Add(new bool[4] { true, false, true, true });
    threeDoorList.Add(new bool[4] { false, true, true, true });
    var threeDoorListShuffle = threeDoorList.Shuffle();

    var twoDoorList = new List<bool[]>();
    twoDoorList.Add(new bool[4] { true, true, false, false });
    twoDoorList.Add(new bool[4] { true, false, true, false });
    twoDoorList.Add(new bool[4] { true, false, false, true });
    twoDoorList.Add(new bool[4] { false, true, true, false });
    twoDoorList.Add(new bool[4] { false, true, false, true });
    twoDoorList.Add(new bool[4] { false, false, true, true });
    var twoDoorListShuffle = twoDoorList.Shuffle();

    var oneDoorList = new List<bool[]>();
    oneDoorList.Add(new bool[4] { true, false, false, false });
    oneDoorList.Add(new bool[4] { false, true, false, false });
    oneDoorList.Add(new bool[4] { false, false, true, false });
    oneDoorList.Add(new bool[4] { false, false, false, true });
    var oneDoorListShuffle = oneDoorList.Shuffle();

    var boolArrayList = new List<bool[]>();
    boolArrayList.AddRange(fourDoorListShuffle);
    boolArrayList.AddRange(threeDoorListShuffle);
    boolArrayList.AddRange(twoDoorListShuffle);
    boolArrayList.AddRange(oneDoorListShuffle);
    return boolArrayList.ToArray();
}

What else do we notice? We say "new bool[4]" but the compiler can infer both the type and the number.

private bool[][] GetBoolArrays()
{
    var fourDoorList = new List<bool[]>();
    fourDoorList.Add(new[] { true, true, true, true });
    var fourDoorListShuffle = fourDoorList; // No point shuffling!

    var threeDoorList = new List<bool[]>();
    threeDoorList.Add(new[] { true, true, true, false });
    threeDoorList.Add(new[] { true, true, false, true });
    threeDoorList.Add(new[] { true, false, true, true });
    threeDoorList.Add(new[] { false, true, true, true });
    var threeDoorListShuffle = threeDoorList.Shuffle();

    var twoDoorList = new List<bool[]>();
    twoDoorList.Add(new[] { true, true, false, false });
    twoDoorList.Add(new[] { true, false, true, false });
    twoDoorList.Add(new[] { true, false, false, true });
    twoDoorList.Add(new[] { false, true, true, false });
    twoDoorList.Add(new[] { false, true, false, true });
    twoDoorList.Add(new[] { false, false, true, true });
    var twoDoorListShuffle = twoDoorList.Shuffle();

    var oneDoorList = new List<bool[]>();
    oneDoorList.Add(new[] { true, false, false, false });
    oneDoorList.Add(new[] { false, true, false, false });
    oneDoorList.Add(new[] { false, false, true, false });
    oneDoorList.Add(new[] { false, false, false, true });
    var oneDoorListShuffle = oneDoorList.Shuffle();

    var boolArrayList = new List<bool[]>();
    boolArrayList.AddRange(fourDoorListShuffle);
    boolArrayList.AddRange(threeDoorListShuffle);
    boolArrayList.AddRange(twoDoorListShuffle);
    boolArrayList.AddRange(oneDoorListShuffle);
    return boolArrayList.ToArray();
}

Nicer. What if we used a collection initializer instead of all those calls to Add?

private bool[][] GetBoolArrays()
{
    var fourDoorList = new List<bool[]>() {
        new[] { true, true, true, true }};
    var fourDoorListShuffle = fourDoorList; // No point shuffling!

    var threeDoorList = new List<bool[]>() {
        new[] { true, true, true, false },
        new[] { true, true, false, true },
        new[] { true, false, true, true },
        new[] { false, true, true, true }};
    var threeDoorListShuffle = threeDoorList.Shuffle();

    var twoDoorList = new List<bool[]>() {
        new[] { true, true, false, false },
        new[] { true, false, true, false },
        new[] { true, false, false, true },
        new[] { false, true, true, false },
        new[] { false, true, false, true },
        new[] { false, false, true, true }};
    var twoDoorListShuffle = twoDoorList.Shuffle();

    var oneDoorList = new List<bool[]>() {
        new[] { true, false, false, false },
        new[] { false, true, false, false },
        new[] { false, false, true, false },
        new[] { false, false, false, true }};
    var oneDoorListShuffle = oneDoorList.Shuffle();

    var boolArrayList = new List<bool[]>();
    boolArrayList.AddRange(fourDoorListShuffle);
    boolArrayList.AddRange(threeDoorListShuffle);
    boolArrayList.AddRange(twoDoorListShuffle);
    boolArrayList.AddRange(oneDoorListShuffle);
    return boolArrayList.ToArray();
}

Better. What do we need the explaining variables for?

private bool[][] GetBoolArrays()
{
    var fourDoorList = new List<bool[]>() {
        new[] { true, true, true, true }};
    var threeDoorList = new List<bool[]>() {
        new[] { true, true, true, false },
        new[] { true, true, false, true },
        new[] { true, false, true, true },
        new[] { false, true, true, true }};
    var twoDoorList = new List<bool[]>() {
        new[] { true, true, false, false },
        new[] { true, false, true, false },
        new[] { true, false, false, true },
        new[] { false, true, true, false },
        new[] { false, true, false, true },
        new[] { false, false, true, true }};
    var oneDoorList = new List<bool[]>() {
        new[] { true, false, false, false },
        new[] { false, true, false, false },
        new[] { false, false, true, false },
        new[] { false, false, false, true }};

    var boolArrayList = new List<bool[]>();
    boolArrayList.AddRange(fourDoorList);
    boolArrayList.AddRange(threeDoorList.Shuffle());
    boolArrayList.AddRange(twoDoorList.Shuffle());
    boolArrayList.AddRange(oneDoorList.Shuffle());
    return boolArrayList.ToArray();
}

Hmm, why do any of these have to be lists?

private bool[][] GetBoolArrays()
{
    var fourDoorList = new[] {
        new[] { true, true, true, true }};
    var threeDoorList = new[] {
        new[] { true, true, true, false },
        new[] { true, true, false, true },
        new[] { true, false, true, true },
        new[] { false, true, true, true }};
    var twoDoorList = new[] {
        new[] { true, true, false, false },
        new[] { true, false, true, false },
        new[] { true, false, false, true },
        new[] { false, true, true, false },
        new[] { false, true, false, true },
        new[] { false, false, true, true }};
    var oneDoorList = new[] {
        new[] { true, false, false, false },
        new[] { false, true, false, false },
        new[] { false, false, true, false },
        new[] { false, false, false, true }};

    var boolArrayList = new List<bool[]>();
    boolArrayList.AddRange(fourDoorList);
    boolArrayList.AddRange(threeDoorList.Shuffle());
    boolArrayList.AddRange(twoDoorList.Shuffle());
    boolArrayList.AddRange(oneDoorList.Shuffle());
    return boolArrayList.ToArray();
}

A sequence of add-ranges is the same as a sequence of concats:

private bool[][] GetBoolArrays()
{
    var fourDoorList = new[] {
        new[] { true, true, true, true }};
    var threeDoorList = new[] {
        new[] { true, true, true, false },
        new[] { true, true, false, true },
        new[] { true, false, true, true },
        new[] { false, true, true, true }};
    var twoDoorList = new[] {
        new[] { true, true, false, false },
        new[] { true, false, true, false },
        new[] { true, false, false, true },
        new[] { false, true, true, false },
        new[] { false, true, false, true },
        new[] { false, false, true, true }};
    var oneDoorList = new[] {
        new[] { true, false, false, false },
        new[] { false, true, false, false },
        new[] { false, false, true, false },
        new[] { false, false, false, true }};

    return fourDoorList.
        Concat(threeDoorList.Shuffle()).
        Concat(twoDoorList.Shuffle()).
        Concat(oneDoorList.Shuffle()).
        ToArray();
}

That's a lot nicer looking than the original code. Notice how we simply made a series of clear, correct refactorings that made each revision a little bit better. Now, can you make a method that takes the number of bools you want total, and the number you want true?

static IEnumerable<bool[]> Combinations(int totalCount, int trueCount)
{
    // You implement this
}

Suppose we had such a method, which is left as an exercise. (The combinatorics articles on my blog may help.) Now we can write:

private bool[][] GetBoolArrays()
{
    var fourDoorList = Combinations(4, 4);
    var threeDoorList = Combinations(4, 3);
    var twoDoorList = Combinations(4, 2);
    var oneDoorList = Combinations(4, 1);
    return fourDoorList.
        Concat(threeDoorList.Shuffle()).
        Concat(twoDoorList.Shuffle()).
        Concat(oneDoorList.Shuffle()).
        ToArray();
}

Now, can you write a method that has this signature:

static IEnumerable<T> MultiConcat(IEnumerable<IEnumerable<T>> sequences)
{
    // ... you implement this ...
}

If you can, then you can write:

private bool[][] GetBoolArrays()
{
    var combinations = new[] {
        Combinations(4, 4).Shuffle(),
        Combinations(4, 3).Shuffle(),
        Combinations(4, 2).Shuffle(),
        Combinations(4, 1).Shuffle()};
    return combinations.MultiConcat().ToArray();
}

Which I think is really quite a bit easier to read than the original code. In fact, we could get it down to a single statement:

private bool[][] GetBoolArrays()
{
    return new[] {
        Combinations(4, 4).Shuffle(),
        Combinations(4, 3).Shuffle(),
        Combinations(4, 2).Shuffle(),
        Combinations(4, 1).Shuffle()
    }.MultiConcat().ToArray();
}

But now we might be getting too concise. But let's not stop now. There is a lot of duplicated code in there!

private bool[][] GetBoolArrays()
{
    var q = from num in new[] { 4, 3, 2, 1 }
            select Combinations(4, num).Shuffle();
    return q.MultiConcat().ToArray();
}

Oh wait, we already have a multi concat built into LINQ! Hey, sorry to make you do that exercise, but I bet it built character.

private bool[][] GetBoolArrays()
{
    var q = from num in new[] { 4, 3, 2, 1 }
            select Combinations(4, num).Shuffle() into all
            from combinations in all
            from combination in combinations
            select combination;
    return q.ToArray();
}

And that's as concise as I'm going to make it. Notice the lessons here:

- Small changes iterated can lead to big results.
- Extract operations to methods that specialize in those operations.
- Use compiler inference to reduce redundancy and increase readability.
- Code that uses expressions to describe values is usually more compact than code that uses statements to describe operations.
- Embrace abstraction. Notice how much easier everything got when we abandoned the desire to stuff everything into a list. If you are going to realize things into lists or arrays, do so as late in the process as possible.
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36068622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1889720/']} | jdg_85789
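The Combinations exercise above can also be sketched quickly in Python (itertools does the combinatorial part); this is an illustration of the idea, not the C# implementation the answer asks for:

from itertools import combinations

def bool_combinations(total_count, true_count):
    # Yield every bool list of length total_count with exactly true_count Trues.
    for true_positions in combinations(range(total_count), true_count):
        yield [i in true_positions for i in range(total_count)]

for combo in bool_combinations(4, 2):
    print(combo)    # six lists, each with exactly two True entries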
stackexchange | llm_judgeable_groundtruth_similarity | 28968 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Random Forests (RFs) are a competitive data modeling/mining method. An RF model has one output: the output/prediction variable. The naive approach to modeling multiple outputs with RFs would be to construct an RF for each output variable. So we have N independent models, and where there is correlation between output variables we will have redundant/duplicate model structure. This could be very wasteful, indeed. Also, as a general rule, more model variables implies a more overfit model (less generalisation). Not sure if this applies here, but it probably does. In principle we could have an RF with multiple outputs. The prediction variable is now a vector (n-tuple). The decision nodes in each decision tree are now splitting the set of target/prediction vectors based on a threshold vector; I figure this threshold is taken to be a plane in the n-dimensional space and that therefore we can determine which side of the threshold vector each of the target vectors is on. The optimal prediction value for each side of the decision split is the mean (centroid) calculated for the vectors on each side. Finding the optimal split point when working with single variables is trivial and computationally fast/efficient. For an n-tuple we cannot find the optimal split (or at least it becomes computationally infeasible as N increases), but we may be able to find a near-optimal split using a Monte Carlo type method (or some hybrid of Monte Carlo and local gradient traversal). Would this actually work? That is, would it just map the training pairs without generalising? Does this technique already exist under a different name? You might also want to consider how this relates to neural nets such as Restricted Boltzmann Machines (RBMs) and Deep Belief Networks.
Now provide the response and nothing else.
| Multiple output decision trees (and hence, random forests) have been developed and published. Pierre Geurts distributes a package for this ( download ). See also Segal & Xiao, Multivariate random forests, WIREs Data Mining Knowl Discov 2011 1 80–87, DOI: 10.1002/widm.12. I believe the latest version of scikit-learn also supports this. A good review of the state of the art can be found in the thesis by Henrik Linusson entitled "Multi-Output Random Forests". The simplest method for making the split choices at each node is to randomly choose ONE of the output variables and then follow the usual random forest approach for choosing a split. Other methods, based on a weighted sum of the mutual information score with respect to each input feature and output variable, have been developed, but they are quite expensive compared to the randomized approach. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/28968', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/7789/']} | jdg_85790
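To make the scikit-learn remark above concrete, a minimal sketch on synthetic data (its random forest estimators accept a two-dimensional target directly, so a single forest covers all outputs):

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, Y = make_regression(n_samples=200, n_features=10, n_targets=3, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, Y)
print(model.predict(X[:2]).shape)   # (2, 3): one column per output variable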
stackexchange | llm_judgeable_groundtruth_similarity | 17816205 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was wondering why the outcome of this program is 5621?

#include <stdio.h>

main()
{
    int i = 56;
    printf("%d\n", printf("%d", printf("%d", i)));
    getch();
}
Now provide the response and nothing else.
| printf returns the number of characters it has printed. So first the innermost printf gets called with 56, printing 56 . Then it returns the number of characters it has printed (2) to the middle printf , printing 2 . Then finally the number of characters printed (1) gets passed into the outer printf , which then gets printed to produce 5621 . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17816205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2542539/']} | jdg_85791
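The same call chain is easy to reproduce outside C; a Python sketch with a helper that, like printf, returns the number of characters it wrote:

import sys

def emulated_printf(text):
    sys.stdout.write(text)
    return len(text)

n1 = emulated_printf("56")            # prints 56, returns 2
n2 = emulated_printf(str(n1))         # prints 2,  returns 1
emulated_printf(str(n2) + "\n")       # prints 1   -> total output: 5621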
stackexchange | llm_judgeable_groundtruth_similarity | 772832 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any way to subscribe to volume buttons press events?
Now provide the response and nothing else.
| After the recent rejections from Apple: Do not use this. Apple now uses some patch which will reject your app straightaway if it uses any of the private APIs - though I should note here that quite a few apps on the App Store use this already and are still there! The only way to do this now is to have an AVAudioPlayer prepared to play but not playing ([player prepareToPlay]). This seems to take care of adjusting the app's volume according to the rocker buttons. There's no other published way currently to handle this. PLEASE READ THE ABOVE NOTE

Yes, use the MPVolumeView:

MPVolumeView *volume = [[[MPVolumeView alloc] initWithFrame:CGRectMake(18.0, 340.0, 284.0, 23.0)] autorelease];
[[self view] addSubview:volume];

[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(volumeChanged:)
                                             name:@"AVSystemController_SystemVolumeDidChangeNotification"
                                           object:nil];

for (UIView *view in [volume subviews]) {
    if ([[[view class] description] isEqualToString:@"MPVolumeSlider"]) {
        volumeViewSlider = view;   // volumeViewSlider is a UIView * object
    }
}

[volumeViewSlider _updateVolumeFromAVSystemController];

- (IBAction)volumeChanged:(id)sender
{
    [volumeViewSlider _updateVolumeFromAVSystemController];
}

This will give you a slider (same as the one used in the iPod app) whose value will change according to the volume of the phone. You will get a compile-time warning that view may not respond to _updateVolumeFromAVSystemController, but just ignore it. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/772832', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/93816/']} | jdg_85792
stackexchange | llm_judgeable_groundtruth_similarity | 364265 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Short background: We are just starting to migrate/reimplement an ERP system to Java with Hibernate, targeting a concurrent user count of 50-100 users. We use MS SQL Server as the database server, which is good enough for these loads. Now, the old system doesn't use any transactions at all and relies, for critical parts (e.g. stock changes), on setting manual locks (using flags) and releasing them. That's something like manual transaction management. But there are sometimes problems with data inconsistency. In the new system we would like to use transactions to wipe out these problems. Now the question: What would be a good/reasonable default transaction isolation level to use for an ERP system, given a usage of about 85% OLTP and 15% OLAP? Or should I always decide on a per-task basis which transaction level to use? And as a reminder, the four transaction isolation levels: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, SERIALIZABLE
Now provide the response and nothing else.
| 99 times out of 100, read committed is the right answer. That ensures that you only see changes that have been committed by the other session (and, thus, results that are consistent, assuming you've designed your transactions correctly). But it doesn't impose the locking overhead (particularly in non-Oracle databases) that repeatable read or serializable impose. Very occasionally, you may want to run a report where you are willing to sacrifice accuracy for speed and set a read uncommitted isolation level. That's rarely a good idea, but it is occasionally a reasonably acceptable workaround to lock contention issues. Serializable and repeatable read are occasionally used when you have a process that needs to see a consistent set of data over the entire run, regardless of what other transactions are doing at the time. It may be appropriate to set a month-end reconciliation process to serializable, for example, if there is a lot of procedural code, a possibility that users are going to be making changes while the process is running, and a requirement that the process needs to ensure that it is always seeing the data as it existed at the time the reconciliation started. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/364265', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/44522/']} | jdg_85793
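Purely to illustrate how the levels are selected in code, a sketch on a different stack (PostgreSQL via psycopg2; the connection string is a placeholder, and the question's Hibernate/MS SQL setup exposes the same four levels under the same names):

import psycopg2
from psycopg2 import extensions

conn = psycopg2.connect("dbname=erp user=app host=db")   # placeholder DSN

# Sensible default for the OLTP bulk of the work:
conn.set_isolation_level(extensions.ISOLATION_LEVEL_READ_COMMITTED)

# For e.g. a month-end reconciliation that must see one consistent snapshot:
conn.set_isolation_level(extensions.ISOLATION_LEVEL_SERIALIZABLE)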
stackexchange | llm_judgeable_groundtruth_similarity | 9385767 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to run this function, which grabs all the checked checkbox values into a comma-separated string and converts "," into ", " so it reads better. The problem is I'm getting a strange error:
$('.name_boxes').live('click', function() {
    var all_boxes = $('.name_boxes');
    var all_boxes_values = [];
    for (var i = 0; i < all_boxes.length; i++) {
        if (all_boxes[i].checked) {
            all_boxes_values.push(all_boxes[i].value);
        }
    }
    var all_boxes_values_clean = all_boxes_values.replace(/,/g, ", ");
    alert(all_boxes_values_clean);
});
The console error says: Uncaught TypeError: Object Aaron Ramsey,Aaron Renfree has no method 'replace'. I'm not getting the alert box. This is a bit beyond me; can anybody explain what I'm doing wrong?
Now provide the response and nothing else.
| Although alert(some_array) prints a string representation of the array, the array itself is not a string. Thus, it does not have .replace. alert is forced to convert it into a string because the alert box can only show characters. You can simply join using a custom separator, though. join is a function of arrays:
var all_boxes_values_clean = all_boxes_values.join(", ");
As a side note, I recommend console.log over alert because it:
- shows the actual object/array instead of a string representation (especially useful with objects, instead of the useless [object Object] you receive with alert)
- frees you from closing the popup each time
- keeps track of other logs so that you have an actual log of logs
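Applied to the question's code, a small sketch (the class name is taken from the question; note that .live() has long been deprecated, and a delegated .on() handler is the usual replacement):
$(document).on('change', '.name_boxes', function () {
    var checked = $('.name_boxes:checked')
        .map(function () { return this.value; })
        .get();                            // jQuery object -> plain array
    console.log(checked.join(', '));       // e.g. "Aaron Ramsey, Aaron Renfree"
});
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9385767', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/933633/']} | jdg_85794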
stackexchange | llm_judgeable_groundtruth_similarity | 43367805 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to read multiple tables from a single Excel sheet using pandas? Something like:
read table1 from row 0 until row 100
read table2 from row 102 until row 202
...
Now provide the response and nothing else.
| Assuming we have the following Excel file: Solution: we are parsing the first sheet (index: 0):
xl = pd.ExcelFile(fn)
nrows = xl.book.sheet_by_index(0).nrows
df1 = xl.parse(0, skipfooter=nrows - (10 + 1)).dropna(axis=1, how='all')
df2 = xl.parse(0, skiprows=12).dropna(axis=1, how='all')
EDIT: skip_footer was replaced with skipfooter
Result:
In [123]: df1
Out[123]:
    a   b   c
0  78  68  33
1  62  26  30
2  99  35  13
3  73  97   4
4  85   7  53
5  80  20  95
6  40  52  96
7  36  23  76
8  96  73  37
9  39  35  24

In [124]: df2
Out[124]:
   c1  c2  c3 c4
0  78  88  59  a
1  82   4  64  a
2  35   9  78  b
3   0  11  23  b
4  61  53  29  b
5  51  36  72  c
6  59  36  45  c
7   7  64   8  c
8   1  83  46  d
9  30  47  84  d
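One caveat worth adding: xl.book.sheet_by_index(0).nrows relies on the old xlrd engine, which no longer handles .xlsx files in current pandas. A hedged sketch of the same idea on a modern stack, where xl.book is an openpyxl workbook (the file name is a placeholder, and the sheet layout is assumed to match the answer's: first table in rows 1-11, second from row 13 on):
import pandas as pd

fn = "tables.xlsx"  # placeholder path
xl = pd.ExcelFile(fn)

# openpyxl equivalent of xlrd's sheet_by_index(0).nrows
nrows = xl.book.worksheets[0].max_row

df1 = xl.parse(0, skipfooter=nrows - (10 + 1)).dropna(axis=1, how="all")
df2 = xl.parse(0, skiprows=12).dropna(axis=1, how="all")
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43367805', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/501044/']} | jdg_85795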
stackexchange | llm_judgeable_groundtruth_similarity | 54740947 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking to work on a project using Node.js addons with C++. I came across two abstraction libraries, NAN and N-API, that I can use. However, I am unable to decide which one I should use. I was not able to find a proper comparison between these two libraries. What are the pros, cons and differences of both? How do I choose between them? So far I have found that NAN has more online tutorials/articles regarding async calls, but N-API is officially supported by Node (and was created after NAN as a better alternative, although I am not sure about that).
Now provide the response and nothing else.
| My understanding is this: The Node-API (formerly N-API) was added to the core node.js interface in v8.0.0. "It is intended to insulate Addons from changes in the underlying JavaScript engine…" to quote the documentation. It also provides some other wrappers around things like buffers and asynchronous work (which should help avoid some of the underlying non-stable APIs noted in their Implications of ABI stability section). nan (Native Abstractions for Node) is indeed older and so also supports older versions of node.js — back to node.js 0.8! Now despite its author claiming back in 2017: As I mentioned somewhere else, N-API is not meant to be directly used for anything. Where has this notion come from? It is an (effectively internal) low-level infrastructure layer meant to offer ABI stability. There will be another layer on top. …I do not see much warning to that effect in the official Node.js add-on documentation. Perhaps this other comment is a bit more insightful: Yes, you should still use NAN for production use. It covers every relevant version of Node.js. Also note that N-API is not intended for end users. You should eventually use https://github.com/nodejs/node-addon-api . Again, that was in June of 2017 by the maintainer of nan at the time. It seems that node-addon-api has matured in the meantime and remains active. In fact, I found a comment in the node-addon-api repo that is only a month old at present: …part of the goal was to make it easy to transition from nan. So I think the answer is: use nan if you want something mature and very backwards-compatible; use node-addon-api if you want something forwards-looking in C++; use Node-API/N-API if you are comfortable working in C and dealing with possible lower-level concerns
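For a feel of the forwards-looking option, here is a minimal node-addon-api addon; a hedged sketch (the napi.h wrapper API shown is real, but the module and function names are invented and the binding.gyp build wiring is omitted):
// hello.cc -- minimal addon written against the node-addon-api C++ wrapper
#include <napi.h>

Napi::String SayHello(const Napi::CallbackInfo& info) {
  Napi::Env env = info.Env();
  return Napi::String::New(env, "hello from C++");
}

Napi::Object Init(Napi::Env env, Napi::Object exports) {
  exports.Set("sayHello", Napi::Function::New(env, SayHello));
  return exports;
}

NODE_API_MODULE(hello, Init)
Once built, it loads like any module: require('./build/Release/hello').sayHello().
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54740947', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2082445/']} | jdg_85796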
stackexchange | llm_judgeable_groundtruth_similarity | 420351 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As the title states, I want to connect a USB mouse to the USB port on a USB/UART bridge. I want to determine if it is possible to communicate with the mouse using serial commands from the FPGA board that are then converted to USB signals. In other words, I want the bridge to serve as the host to a peripheral like a mouse/keyboard. Do I need to worry about converting to PS/2 at all when connected? Is this feasible? I am looking for a viable solution to connect a mouse to my FPGA board without having to create a USB block to handle this from scratch.
Now provide the response and nothing else.
| Almost all ICs marketed as "USB/UART bridge" or similar (including the CP2103) can only act as a USB device on their USB port, so no - this won't work. What you are looking for is a "USB host controller". Those will have a USB host port to interface with your mouse (or another USB device), and a host controller interface to connect to a host processor - typically a CPU of some kind, but in your case an FPGA. In most cases this host controller interface will be an implementation of one of several standards such as UHCI, EHCI, XHCI. Unfortunately for you, those are very complicated standards and it would take a lot of work to implement the corresponding logic in the FPGA. Fortunately for you, there are a small number of host controllers designed for this kind of application which integrate a lot of that logic and present a simpler, higher-level interface to the host processor. Examples include the MAX3421E and FTDI's Vinculum (VNC1L, VNC2).
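As a rough illustration of the MAX3421E route, here is a sketch modeled on the USB Host Shield 2.0 library's boot-mouse example for Arduino-class hosts; treat it as a hedged starting point (the class and type names follow that library, and the SPI wiring between the controller and your FPGA/MCU is left out entirely):
#include <hidboot.h>   // USB Host Shield 2.0 library (drives the MAX3421E)
#include <usbhub.h>

// Called whenever the mouse sends a movement report.
class MouseRptParser : public MouseReportParser {
protected:
  void OnMouseMove(MOUSEINFO *mi) {
    Serial.print("dx="); Serial.print(mi->dX);
    Serial.print(" dy="); Serial.println(mi->dY);
  }
};

USB Usb;                                        // the MAX3421E host controller
HIDBoot<USB_HID_PROTOCOL_MOUSE> HidMouse(&Usb); // boot-protocol mouse driver
MouseRptParser Prs;

void setup() {
  Serial.begin(115200);
  Usb.Init();                         // bring up the host controller
  HidMouse.SetReportParser(0, &Prs);  // route mouse reports to our parser
}

void loop() {
  Usb.Task();                         // poll the USB state machine
}
| {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/420351', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/20332/']} | jdg_85797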
stackexchange | llm_judgeable_groundtruth_similarity | 12695591 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Before I ask about app.router I think I should explain at least what I think happens when working with middleware. To use middleware, the function to use is app.use(). When the middleware is being executed, it will either call the next middleware by using next() or make it so no more middleware gets called. That means that the order in which I place my middleware calls is important, because some middleware depends on other middleware, and some middleware near the end might not even be called. Today I was working on my application and had my server running in the background. I wanted to make some changes and refresh my page and see the changes immediately. Specifically, I was making changes to my layout. I couldn't get it to work so I searched Stack Overflow for the answer and found this question. It says to make sure that express.static() is beneath require('stylus'). But when I was looking at that OP's code, I saw that he had his app.router call at the very end of his middleware calls, and I tried to figure out why that was. When I made my Express.js application (version 3.0.0rc4), I used the command express app --sessions --css stylus and in my app.js file the code came set up with my app.router above both the express.static() and require('stylus') calls. So it seems like, if it comes already set up that way, then it should stay that way. After re-arranging my code so I could see my Stylus changes, it looks like this:
app.configure(function(){
    // app.set() calls
    // app.use() calls
    // ...
    app.use(app.router);
    app.use(require('stylus').middleware(__dirname + '/public'));
    app.use(express.static(__dirname + '/public', {maxAge: 31557600000}));
});

app.get('/', routes.index);

app.get('/test', function(req, res){
    res.send('Test');
});
So I decided that the first step would be to find out why it is important to even have app.router in my code. So I commented it out, started my app and navigated to /. It displayed my index page just fine. Hmm, maybe it worked because I was exporting the routing from my routes file (routes.index). So next I navigated to /test and it displayed Test on the screen. Haha, OK, I have no idea what app.router does. Whether it is included in my code or not, my routing is fine. So I am definitely missing something. So Here Is My Question: Could somebody please explain what app.router does, the importance of it, and where I should place it in my middleware calls? It would also be nice to get a brief explanation about express.static(). As far as I can tell, express.static() is a cache of my information, and if the application can't find the requested page, it will check the cache to see if it exists.
Now provide the response and nothing else.
| Note: This describes how Express worked in versions 2 and 3. See the end of this post for information about Express 4. static simply serves files (static resources) from disk. You give it a path (sometimes called the mount point), and it serves the files in that folder. For example, express.static('/var/www') would serve the files in that folder. So a request to your Node server for http://server/file.html would serve /var/www/file.html. router is code that runs your routes. When you do app.get('/user', function(req, res) { ... });, it is the router that actually invokes the callback function to process the request. The order that you pass things to app.use determines the order in which each middleware is given the opportunity to process a request. For example, if you have a file called test.html in your static folder and a route:
app.get('/test.html', function(req, res) {
    res.send('Hello from route handler');
});
Which one gets sent to a client requesting http://server/test.html? Whichever middleware is passed to use first. If you do this:
app.use(express.static(__dirname + '/public'));
app.use(app.router);
Then the file on disk is served. If you do it the other way,
app.use(app.router);
app.use(express.static(__dirname + '/public'));
Then the route handler gets the request, and "Hello from route handler" gets sent to the browser. Usually, you want to put the router above the static middleware so that an accidentally-named file can't override one of your routes. Note that if you don't explicitly use the router, it is implicitly added by Express at the point you define a route (which is why your routes still worked even though you commented out app.use(app.router)). A commenter has brought up another point about the order of static and router that I hadn't addressed: the impact on your app's overall performance. Another reason to use router above static is to optimize performance. If you put static first, then you'll hit the hard drive on every single request to see whether or not a file exists. In a quick test, I found that this overhead amounted to ~1ms on an unloaded server. (That number is likely to be much higher under load, where requests will compete for disk access.) With router first, a request matching a route never has to hit the disk, saving precious milliseconds. Of course, there are ways to mitigate static's overhead. The best option is to put all of your static resources under a specific folder (i.e. /static). You can then mount static to that path so that it only runs when the path starts with /static:
app.use('/static', express.static(__dirname + '/static'));
In this situation, you'd put this above router. This avoids processing other middleware/the router if a file is present, but to be honest, I doubt you'll gain that much. You could also use staticCache, which caches static resources in-memory so that you don't have to hit the disk for commonly requested files. (Warning: staticCache will apparently be removed in the future.) However, I don't think staticCache caches negative answers (when a file does not exist), so it doesn't help if you've put staticCache above router without mounting it to a path. As with all questions about performance, measure and benchmark your real-world app (under load) to see where the bottlenecks really are. Express 4 Express 4.0 removes app.router. All middleware (app.use) and routes (app.get et al) are now processed in precisely the order in which they are added.
In other words: All routing methods will be added in the order in which they appear. You should not do app.use(app.router). This eliminates the most common issue with Express. In other words, mixing app.use() and app[VERB]() will work exactly in the order in which they are called.
app.get('/', home);
app.use('/public', require('st')(process.cwd()));
app.get('/users', users.list);
app.post('/users', users.create);
Read more about changes in Express 4.
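Putting the two recommendations together, a small Express 4 sketch (a hedged illustration, not from the original answer: routes registered first, static assets mounted under their own path so most requests never touch the disk):
var express = require('express');
var app = express();

// Routes first: matched before any disk lookup happens.
app.get('/', function (req, res) {
  res.send('Hello from route handler');
});

// Static files served only for paths starting with /static.
app.use('/static', express.static(__dirname + '/static'));

app.listen(3000);
| {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/12695591', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1408717/']} | jdg_85798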
stackexchange | llm_judgeable_groundtruth_similarity | 109141 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We have a large backlog of things we should do in our software, in a lot of different categories, for example:
- New problem areas for our products to solve
- New functionality supporting existing problem areas
- New functionality requested by our existing users
- Usability and "look" enhancements
- Architectural upgrades to the back-end
- Bug fixes
Managing all of these in a sensible fashion is a job that falls to Product Management, but it is tricky for a lot of reasons. Firstly, we have a number of different systems that hold the different things (market requirements document in files, bugs in a bug database, customer requirements in our help desk system, engineering's wish-list on our intranet, etc). And secondly, many of the items are of wildly different size, scope, complexity and of course value, which means that choosing isn't as simple as just ordering a list by priority. Because we are now fairly large, have a complex product and lots of customers, the basic solutions (a spreadsheet, a Google doc, a Basecamp to-do list) just aren't sufficient to deal with this. We need a way to group things together in various ways, prioritise them on an ongoing basis, and make it clear what we're doing and what is coming - without it requiring all of someone's time just to manage some tool. How do you manage this in a way that allows the business to always do what is most valuable to existing customers, helps get new ones, and keeps the software innards sane? Note that this is different from the development side, which I think we have down pretty well. We develop everything in an iterative, agile fashion, and once something has been chosen for design and implementation, we can do that. It's the part where we need to figure out what to do next that's hardest! Have you found a method or a tool that works? If so, please share! (And if you would like to know the answer too, rate up the question so it stays visible :) Addendum: Of course it's nice to fix all the bugs first, but in a real system that actually is installed on customers' machines, that is not always practical. For example, we may have a bug that only occurs very rarely and that would take a huge amount of time and architectural upheaval to fix - we might leave that for a while. Or we might have a bug where someone thinks something is hard to use, and we think fixing it should wait for a bigger revamp of that area. So, there are lots of reasons why we don't just fix them all straight away, but keep them open so we don't forget. Besides, it is the prioritization of the non-bugs that is the hardest; just imagine we don't have any :)
Now provide the response and nothing else.
| Managing a large backlog in an aggressive manner is almost always wasteful. By the time you get to the middle of a prioritized pile, things have more often than not changed. I'd recommend adopting something like what Corey Ladas calls a priority filter: http://leansoftwareengineering.com/2008/08/19/priority-filter/ Essentially, you have a few buckets of increasing size and decreasing priority. You allow stakeholders to fill them, but force them to ignore the rest of the stories until there are openings in the buckets. Very simple but very effective. Edit: Allan asked what to do if tasks are of different sizes. Basically, a big part of making this work is right-sizing your tasks. We only apply this prioritization to user stories. User stories are typically significantly smaller than "create a community site". I would consider the community site bit an epic or even a project. It would need to be broken down into significantly smaller bits in order to be prioritized. That said, it can still be challenging to make stories similarly sized. Sometimes you just can't, so you communicate that during your planning decisions. With regards to moving wibbles two pixels, many of these things that are easy can be done for "free". You just have to be careful to balance these and only do them if they're really close to free and they're actually somewhat important. We treat bugs similarly. Bugs get one of three categories: Now, Soon or Eventually. We fix Now and Soon bugs as quickly as we can, with the only difference being when we publish the fixes. Eventually bugs don't get fixed unless devs get bored and have nothing to do, or they somehow become higher priority. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/109141', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13394/']} | jdg_85799
stackexchange | llm_judgeable_groundtruth_similarity | 50716690 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
for (letters_num, a, z) in range(0, 26) range(1, 27), range(26, 0, -1):
    print 'letter', letters[letters_num], 'is number', a, 'from a and number', z, 'from z'
I want the output to be:
letter a is number 1 from a and number 26 from z
letter b is number 2 from a and number 25 from z
letter c is ......
What is the right form to put two variables with two different 'range' in the same 'for' loop?
for x, y in range('range of x'), range('range of y'):
    print x, y
I want the correct form of this code, please.
Now provide the response and nothing else.
| I guess the compiler error that you see is referring to the fact that you are using listener inside its own defining context. Try this for a change: In UserManager:
func allUsers(completion: @escaping ([User]) -> Void) -> ListenerRegistration? {
    return db.collection("users").addSnapshotListener { querySnapshot, error in
        if let documents = querySnapshot?.documents {
            var users = [User]()
            for document in documents {
                let user = User(snapshot: document)
                users.append(user)
            }
            completion(users)
        }
    }
}
In ViewController:
override func viewDidLoad() {
    super.viewDidLoad()
    self.listener = UserManager.shared.allUsers(completion: { (users) in
        self.users = users
        self.tableView.reloadData()
    })
}

deinit {
    self.listener.remove()
}
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50716690', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9902170/']} | jdg_85800
stackexchange | llm_judgeable_groundtruth_similarity | 1703827 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In C#, it's possible to declare a struct (or class) that has a pointer type member, like this:
unsafe struct Node
{
    public Node* NextNode;
}
Is it ever safe (err.. ignore for a moment that ironic little unsafe flag..) to use this construction? I mean for long-term storage on the heap. From what I understand, the GC is free to move things around, and while it updates the references to something that's been moved, does it update pointers too? I'm guessing no, which would make this construction very unsafe, right? I'm sure there are way superior alternatives to doing this, but call it morbid curiosity. EDIT: There appears to be some confusion. I know that this isn't a great construction; I purely want to know if this is ever a safe construction, i.e.: is the pointer guaranteed to keep pointing to whatever you originally pointed it to? The original C code was used to traverse a tree (depth first) without recursion, where the tree is stored in an array. The array is then traversed by incrementing a pointer, unless a certain condition is met; then the pointer is set to the NextNode, where traversal continues. Of course, the same can be accomplished in C# by:
struct Node
{
    public int NextNode;
    ... // other fields
}
Where the int is the index in the array of the next node. But for performance reasons, I'd end up fiddling with pointers and fixed arrays to avoid bounds checks anyway, and the original C code seemed more natural.
Now provide the response and nothing else.
| Is it ever safe to use this construction? I mean for long term storage on the heap. Yes. Doing so is usually foolish, painful and unnecessary, but it is possible. From what I understand, the GC is free to move things around, and while it updates the references to something that's been moved, does it update pointers too? No. That's why we make you mark it as unsafe. I'm guessing no, which would make this construction very unsafe, right? Correct. I'm sure there are way superior alternatives to doing this, but call it morbid curiosity. There certainly are. is the pointer guaranteed to keep pointing to whatever you originally pointed it to? Not unless you ensure that happens. There are two ways to do that. Way one: Tell the garbage collector to not move the memory. There are two ways to do that: Fix a variable in place with the "fixed" statement. Use interop services to create a gc handle to the structures you wish to keep alive and in one place. Doing either of these things will, with high likelihood, wreck the performance of the garbage collector. Way two: Don't take references to memory that the garbage collector can possibly move. There are two ways to do that: Only take addresses of local variables, value parameters, or stack-allocated blocks. Of course, in doing so you are then required to ensure that the pointers do not survive longer than the relevant stack frame; otherwise, you're referencing garbage. Allocate a block out of the unmanaged heap and then use pointers inside that block. In essence, implement your own memory manager. You are required to correctly implement your new custom memory manager. Be careful.
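A minimal sketch of "way one" using a pinning GCHandle (the struct name follows the question; this needs the /unsafe compiler switch, and error handling is omitted):
using System;
using System.Runtime.InteropServices;

unsafe struct Node
{
    public Node* NextNode;
}

class Demo
{
    static unsafe void Main()
    {
        var nodes = new Node[3];

        // Pin the array so the GC cannot move it while raw pointers point into it.
        GCHandle handle = GCHandle.Alloc(nodes, GCHandleType.Pinned);
        try
        {
            Node* first = (Node*)handle.AddrOfPinnedObject();
            first[0].NextNode = &first[1];   // valid only while the handle is alive
            first[1].NextNode = &first[2];
            first[2].NextNode = null;
        }
        finally
        {
            handle.Free();                   // unpinned -- the pointers are now invalid
        }
    }
}
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1703827', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/61632/']} | jdg_85801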
stackexchange | llm_judgeable_groundtruth_similarity | 11146725 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
class HelloWorld(object):
    def say_it(self):
        return 'Hello I am Hello World'

def i_call_hello_world(hw_obj):
    print 'here... check type: %s' % type(HelloWorld)
    if isinstance(hw_obj, HelloWorld):
        print hw_obj.say_it()

from mock import patch, MagicMock
import unittest

class TestInstance(unittest.TestCase):
    @patch('__main__.HelloWorld', spec=HelloWorld)
    def test_mock(self, MK):
        print type(MK)
        MK.say_it.return_value = 'I am fake'
        v = i_call_hello_world(MK)
        print v

if __name__ == '__main__':
    c = HelloWorld()
    i_call_hello_world(c)
    print isinstance(c, HelloWorld)
    unittest.main()
Here is the traceback:
here... check type: <type 'type'>
Hello I am Hello World
True
<class 'mock.MagicMock'>
here... check type: <class 'mock.MagicMock'>
E
======================================================================
ERROR: test_mock (__main__.TestInstance)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/mock.py", line 1224, in patched
    return func(*args, **keywargs)
  File "t.py", line 18, in test_mock
    v = i_call_hello_world(MK)
  File "t.py", line 7, in i_call_hello_world
    if isinstance(hw_obj, HelloWorld):
TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types
----------------------------------------------------------------------
Ran 1 test in 0.002s
Q1. Why is this error thrown? Both report <class 'mock.MagicMock'>. Q2. How do I pause the mocking so that the first line will pass if the error is fixed? From the docs: Normally the __class__ attribute of an object will return its type. For a mock object with a spec, __class__ returns the spec class instead. This allows mock objects to pass isinstance() tests for the object they are replacing / masquerading as:
mock = Mock(spec=3)
isinstance(mock, int)
True
Now provide the response and nothing else.
| IMHO this is a good question, and saying "don't use isinstance, use duck typing instead" is a bad answer. Duck typing is great, but not a silver bullet. Sometimes isinstance is necessary, even if it is not pythonic. For instance, if you work with some library or legacy code that isn't pythonic, you must play with isinstance. It is just the real world, and mock was designed to fit this kind of work. In the code the big mistake is when you write:
@patch('__main__.HelloWorld', spec=HelloWorld)
def test_mock(self, MK):
From the patch documentation we read (emphasis mine): Inside the body of the function or with statement, the target is patched with a new object. That means when you patch the HelloWorld class object, the reference to HelloWorld will be replaced by a MagicMock object for the context of the test_mock() function. Then, when i_call_hello_world() is executed in if isinstance(hw_obj, HelloWorld):, HelloWorld is a MagicMock() object and not a class (as the error suggests). That behavior is because, as a side effect of patching a class reference, the 2nd argument of isinstance(hw_obj, HelloWorld) becomes an object (a MagicMock instance). This is neither a class nor a type. A simple experiment to understand this behavior is to modify i_call_hello_world() as follows:
HelloWorld_cache = HelloWorld

def i_call_hello_world(hw_obj):
    print 'here... check type: %s' % type(HelloWorld_cache)
    if isinstance(hw_obj, HelloWorld_cache):
        print hw_obj.say_it()
The error will disappear because the original reference to the HelloWorld class is saved in HelloWorld_cache when you load the module. When the patch is applied, it will change just HelloWorld and not HelloWorld_cache. Unfortunately, the previous experiment doesn't give us any way to play with cases like yours because you cannot change the library or legacy code to introduce a trick like this. Moreover, these are the kind of tricks that we would like to never see in our code. The good news is that you can do something, but you cannot just patch the HelloWorld reference in the module where you have the isinstance(o, HelloWorld) code to test. The best way depends on the real case that you must solve. In your example you can just create a Mock to use as a HelloWorld object, use the spec argument to dress it as a HelloWorld instance, and pass the isinstance test. This is exactly one of the aims for which spec is designed. Your test would be written like this:
def test_mock(self):
    MK = MagicMock(spec=HelloWorld)  # The hw_obj passed to i_call_hello_world
    print type(MK)
    MK.say_it.return_value = 'I am fake'
    v = i_call_hello_world(MK)
    print v
And the output of just the unittest part is:
<class 'mock.MagicMock'>
here... check type: <type 'type'>
I am fake
None | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11146725', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/230884/']} | jdg_85802
stackexchange | llm_judgeable_groundtruth_similarity | 1652019 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In a finitely complete and cocomplete category, does it always hold that the composition of two regular epimorphisms is regular? And if it's not the case, what kind of additional constraints can make it true (say, a pre-abelian category)? What I already knew is that it holds for categories where regular epimorphisms and strong epimorphisms coincide.
Now provide the response and nothing else.
| In a category with all kernel pairs and coequalisers of kernel pairs, the following conditions are equivalent: regular epimorphisms are stable under composition; regular epimorphisms coincide with strong epimorphisms; for any morphism $f$, if $m_f \circ e_f$ is its factorisation through the coequaliser of its kernel pair, $m_f$ is a monomorphism; regular epimorphisms and monomorphisms form a factorisation system. This is proved in Monomorphisms, Epimorphisms, and Pull-backs by Kelly (Propositions 2.7 and 3.8). Note: Kelly takes as definition of regular epimorphism what is called elsewhere strict epimorphism , but these notions coincide when kernel pairs exist. (And Kelly doesn’t give the “factorisation system” version.) Kelly gives an example of a pre-abelian category in which regular monomorphisms do not compose: the category of abelian groups with no elements of order 4 (last paragraph of p. 126). The dual category is a pre-abelian category in which regular epimorphisms do not compose. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1652019', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/223481/']} | jdg_85803 |
stackexchange | llm_judgeable_groundtruth_similarity | 13125 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a useful topic. A college physics lab, medical diagnostics, urban growth, etc. - there are a lot of applications. On this site by Paul Bourke about Google Earth fractals we can get high-resolution images (in this post they are low res - import from the source for experiments). For example, around Lake Nasser in Egypt:
img = Import["http://paulbourke.net/fractals/googleearth/egypt2.jpg"]
The simplest method I know is the Box Counting Method, which has a lot of shortcomings. We start by extracting the boundary - which is the fractal object:
{Binarize[img], iEdge = EdgeDetect[Binarize[img]]}
Now we could partition the image into boxes and see how many boxes have at least 1 white pixel. This is a very rudimentary implementation:
MinS = Floor[Min[ImageDimensions[iEdge]]/2];
data = ParallelTable[{1/size, Total[Sign /@ (Total[#, 2] & /@ (ImageData /@ Flatten[ImagePartition[iEdge, size]]))]}, {size, 10, MinS/2, 10}];
From this the slope is 1.69415, which is a fractal dimension that makes sense:
line = Fit[Log[data], {1, x}, x]

13.0276 + 1.69415 x

Plot[line, {x, -6, -2}, Epilog -> Point[Log[data]], PlotStyle -> Red, Frame -> True, Axes -> False]
Benchmark: if I run this on a high-res Koch snowflake I get something like ~1.3, the more exact number being log 4/log 3 ≈ 1.26186.
Question: can we improve on or go beyond the above box counting method? All approaches are acceptable if they find the fractal dimension from any image of a natural fractal.
Now provide the response and nothing else.
| You can still use box counting, but doing it smarter :) Counting boxes with at least 1 white pixel from ImagePartition can be done more efficiently using an Integral Image, a technique used by Viola-Jones (2004) in their now popular face recognition framework. For a mathematical motivation (and proof), Viola and Jones point to this source. Actually, someone already asked about a Mathematica implementation here. What an Integral Image allows you to do is to compute efficiently the total mass of any rectangle in an image. So, you can define the following:
IntegralImage[d_] := Map[Accumulate, d, {0, 1}];
data = ImageData[ColorConvert[img, "Grayscale"]]; (* img: your snowflake image *)
ii = IntegralImage[data];
Then, the mass (white content) of a region is:
(* PixelCount: total mass in region delimited by two corner points, given ii, the IntegralImage *)
PixelCount[ii_, {p0x_, p0y_}, {p1x_, p1y_}] := ii[[p1x, p1y]] + ii[[p0x, p0y]] - ii[[p1x, p0y]] - ii[[p0x, p1y]];
So, instead of partitioning the image using ImagePartition, you can get a list of all the boxes of a given size by:
PartitionBoxes[{rows_, cols_}, size_] := Transpose /@ Tuples[{Partition[Range[1, rows, size], 2, 1], Partition[Range[1, cols, size], 2, 1]}];
If you apply PixelCount to the above, as in your algorithm, you should have the same data but calculated faster:
PixelCountsAtSize[{rows_, cols_}, ii_, size_] := ((PixelCount[ii, #1, #2] &) @@ # &) /@ PartitionBoxes[{rows, cols}, size];
Following your approach here, we should then do:
fractalDimensionData = Table[{1/size, Total[Sign /@ PixelCountsAtSize[Dimensions[ii], ii, size]]}, {size, 3, Floor[Min[Dimensions[ii]]/10]}];
line = Fit[Log[fractalDimensionData], {1, x}, x]
Out:= 10.4414 + 1.27104 x
which is very close to the actual fractal dimension of the snowflake (which I used as input). Two things to note. Because this is faster, I dared to generate the table starting at box size 3. Also, unlike ImagePartition, my partition boxes are all of the same size and therefore uneven boxes at the edges are excluded. So, instead of doing minSize/2 as you did, I put minSize/10 - excluding bigger and misleading values for big boxes. Hope this helps. Update Just ran the algorithm starting with 2 and got this: 10.4371 + 1.27008 x. And starting with 1 it is 10.4332 + 1.26919 x, much better. Of course, it takes longer but is still under or around 1 min for your snowflake image. Update 2 And finally, for your image from Google Earth (egypt2.jpg) the output is (starting at 1-pixel boxes): 12.1578 + 1.47597 x. It ran in 43.5 secs on my laptop. Using ParallelTable is faster: around 28 secs. | {} | {'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/13125', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/13/']} | jdg_85804
stackexchange | llm_judgeable_groundtruth_similarity | 21808687 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I checked if these two were equal, they apparently weren't. Can someone explain why?
var string = "Hello";
var array = ['H', 'e', 'l', 'l', 'o'];
Why is (string === array) false? EDIT: This website is fantastic. Such fast help. Thanks guys.
Now provide the response and nothing else.
| Why is (string === array) false? You are using strict comparison (===), which also checks the data type of the values. Obviously a primitive string value is not the same data type as an object, and objects are only truly equal to themselves. Proof:
var foo = [1,2,3];
var bar = [1,2,3];
console.log(foo === bar); // false
console.log(foo === foo); // true
Now, if you were to use loose comparison (==), the following comparison does return true:
console.log([1,2,3] == '1,2,3'); // true
Why? Because the array is converted to a string first, and this happens to result in the same string value. But that doesn't mean that the values are the same (one is still an array and the other a string). That's why you should always use strict comparison. What's the difference between a string and an array of characters in JavaScript? Strings are not arrays because they inherit from different prototypes (*) and hence have different instance methods. For example, arrays have a method join and strings have a method match. From one point of view, arrays and strings are similar though, because they are both array-like objects. What does array-like mean? It means that the object has a length property and numeric properties. A string has a length property which gives you the number of characters in the string, and you can access single characters of the string with str[i]. Example:
var arr = ['a','b','c'];
var str = "abc";
console.log(arr.length); // 3
console.log(str.length); // 3
console.log(arr[0]); // a
console.log(str[0]); // a
console.log(arr === str); // false
console.log(typeof str); // string
console.log(typeof arr); // object
*: Actually, there even is a difference between primitive strings and String objects, but I don't want to go too deep here. Technically primitive strings don't have any methods because they are not objects, but in most cases you can treat primitive strings as if they were objects.
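To round this out, converting between the two forms is cheap; a small sketch using the question's values:
var str = 'Hello';
var arr = ['H', 'e', 'l', 'l', 'o'];

console.log(str.split(''));        // string -> array: ['H','e','l','l','o']
console.log(arr.join(''));         // array -> string: 'Hello'
console.log(arr.join('') === str); // true: compare like with like
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21808687', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2181017/']} | jdg_85805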
stackexchange | llm_judgeable_groundtruth_similarity | 3122926 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried jQuery's $('#divOne').animate({zIndex: -1000}, 2000) on that element, which has a z-index of 1000, but it is still above the other elements. (If I use Firebug to change it to -1000, then it will be below the other elements.)
Now provide the response and nothing else.
| jQuery attempts to add a unit to the value on each step of the animation. So, instead of 99 it'll be 99px which, of course, isn't a valid zIndex value. It doesn't seem possible to set the unit used by jQuery to simply a blank string - it'll either take the unit you include in the value (e.g. 20% - percent unit) or it will use px. Fortunately, you can hack animate() to make this work:
var div = $('#divOne');

$({
    z: ~~div.css('zIndex') // ~~ to get an integer, even from non-numerical values like "auto"
}).animate({
    z: -1000
}, {
    step: function() {
        div.css('zIndex', ~~this.z);
    },
    duration: 2000
});
For more info about ~~ see this. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3122926', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/325418/']} | jdg_85806
stackexchange | llm_judgeable_groundtruth_similarity | 295867 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Consider a pair of holomorphic functions $f,g \in \mathcal{O}(\Delta)$ on the complex unit disk $\Delta = \{|z| < 1\}$ that both satisfy $f(0) = g(0) = 0$ and $f'(0) = g'(0) = 1$. Does the domain$$f(\Delta) \cdot g(\Delta) := \{f(z)g(w) \mid z, w \in \Delta \} \subset \mathbb{C}$$contain the unit disk $\Delta$, with equality if and only if $f(\Delta) = g(\Delta) = \Delta$? In this setting, it may be useful to recall Bloch's theorem, which states that $f(\Delta)$ and $g(\Delta)$ contain disks of a fixed (absolute) radius; however those disks need not of course be centered at the origin.
Now provide the response and nothing else.
| Let's start with the simple reduction. Notice that $f(\Delta)$ and $g(\Delta)$ are connected open sets containing small disks near the origin, so if one of them is unbounded, $f(\Delta)g(\Delta)=\mathbb C$. Let $a\in\Delta\setminus\{0\}$ (we certainly have $0=f(0)g(0)$, so the origin is never problematic) be not in $f(\Delta)g(\Delta)$. Consider $f(\Delta)\ni 0$ and $(a/g)(\Delta)\ni\infty$. Those are connected but not necessarily simply connected disjoint open sets. Let $\Omega$ be the union of $f(\Delta)$ and all bounded connected components of $\mathbb C\setminus f(\Delta)$ (or, which is the same, the complement of the connected component of $\mathbb C\setminus f(\Delta)$ containing $\infty$). If $(a/g)(\Delta)$ intersects $\Omega$, it must intersect $f(\Delta)$ as well (you cannot reach a bounded connected component of a complement of an open set by a path (actually even an open sausage, if you want) from $\infty$ without crossing the set itself). Now replace $f$ with the conformal mapping $\varphi$ from $\Delta$ to $\Omega$ with $\varphi(0)=f(0)$, $\Phi=\varphi'(0)>0$. Then, by the Schwarz lemma, $\Phi\ge f'(0)=1$, so if we consider $\widetilde f=\Phi^{-1}\varphi$, we will have $\Phi^{-1}a\notin \widetilde f(\Delta)g(\Delta)$ and $\widetilde f$ is now univalent. Similarly we can make $g$ univalent. Now comes the main Lemma: Let $f$ be a (bounded and, if you want, analytic up to the boundary) univalent function such that $f(0)=0, f'(0)=1$. Let $A$ be the area on $\mathbb C\setminus\{0\}$ given by $dA(z)=|z|^{-2}dm_2(z)$, which is invariant under $z\mapsto az\ (a\ne 0)$ and $z\mapsto z^{-1}$. Then$$A(f(\Delta\setminus r\Delta))\ge 2\pi\log\frac{1}{r}+o(1)\text{ as }r\to 0^+\,.$$ Proof: Let $S=\Delta\setminus r\Delta$. We have (since $f$ is univalent)$$A(f(S))\times 2\pi\log\frac 1r=\left[\int_S\frac{|f'|^2}{|f|^2}\,dm_2\right]\left[\int_S\frac{1}{|z|^2}\,dm_2\right]\\\ge\left[\int_S\frac{|f'|}{|f|}\,\frac{dm_2(z)}{|z|}\right]^2=I^2\,.$$Note that $$I=\int_{[0,2\pi]}d\theta\int_r^1d\rho \frac{|f'(\rho e^{i\theta})|}{|f(\rho e^{i\theta})|}\ge \int_{[0,2\pi]}d\theta\int_r^1 d\rho \frac d{d\rho}(\log|f(\rho e^{i\theta})|)\\=\int_{[0,2\pi]}d\theta\log|f(e^{i\theta})|-\int_{[0,2\pi]}d\theta\log|f(re^{i\theta})|=I_1-I_2\,.$$However, under our assumptions we have $I_1=0$ while $I_2=2\pi\log r+O(r)$ as $r\to 0^+$, whence the lemma. Now life gets easy. Notice that for sufficiently small $r>0$ both $f(S)$ and $(a/g)(S)$ lie in the annulus $\{w\in\mathbb C\,:\,(1-o(1))r\le|w|\le (1+o(1))|a|r^{-1}\}$ (we use both the boundedness and the univalence properties here). But the invariant area of this annulus is only $4\pi\log\frac 1r+2\pi\log|a|+o(1)$, so the images must overlap somewhere, thus finishing the story. Cute question, by the way :-) | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/295867', 'https://mathoverflow.net', 'https://mathoverflow.net/users/26522/']} | jdg_85807
stackexchange | llm_judgeable_groundtruth_similarity | 712655 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does the POSIX standard reserve the -W option for vendor extensions of the system utilities? I do not understand why the letter ‘W’ is used. ‘V’ (for vendor) would make more sense. Maybe this question should be moved to Retrocomputing SE.
Now provide the response and nothing else.
| This provision was added between Single Unix v2 (1997) and Single Unix v3 (2001). It wasn't done in a vacuum: it had to take into account both the previous specifications and existing practice. If a letter was already specified for some commands, the existing commands would have to be grandfathered in and wouldn't be able to follow this guideline. If a letter was already used by popular programs not specified by POSIX, or by popular implementations of POSIX programs, this would have made it harder to specify those utilities later, and harder for users to remember options with similar meanings but different letters for different commands. Looking at the documented options in SUSv2:
grep -h -Po '(?<=^<dt><b>-)[[:alnum:]]' /usr/share/doc/susv2/susv2/xcu/*.html | sort | uniq -c
we can see that all the lowercase letters are taken by at least one utility, and most uppercase letters as well. The free letters are -B, -J, -K, -Y and -Z. -V is taken only for two commands: command, where it's a variant of -v (added — I don't know by whom originally, possibly one of the Unix specification working groups or ksh — because the original definition of -v wasn't quite satisfactory); and dis, where it's an option to print the version of the utility. POSIX could have chosen -V for vendor, but it would have meant that command would not have followed the guidelines. This would have been annoying since command was created for the sake of portability (both for its behavior of avoiding differences between shell builtins and external utilities, and for its function similar to type but without the output formatting variability). In addition, dis was far from the only program out there to use -V for “version” (most of these weren't codified by POSIX because they weren't part of the base system: you don't need a “print version” option for a utility that's part of the base system, you just use the version number of the base system). So -V would have had too many exceptions, both inside POSIX and out, to be a good choice. -W was only taken by cc. cc implementations tended to differ quite a lot between vendors (in particular, with respect to which C dialect they expected), which led to it being removed from future versions of the standard (replaced by c89, c99, etc.). Since the next version of the standard no longer had cc, giving -W a new meaning didn't exclude any standard utility. As far as I know, it wasn't a particularly common choice of option letter in non-POSIX utilities, so it was up for grabs. Why -W and not another of the uppercase letters that wasn't used at all? I don't know for sure, it could have been arbitrary, but it didn't come out of the blue. The -W option was codified for cc with an argument that itself had to have a certain structure allowing multiplexing: it had to start with a character specifying what “subdomain” (compilation phase) the option applies to, followed by “subdomain-specific” options. Since POSIX.1-2001 only leaves one letter for implementation-specific options, this letter would have to be multiplexed in order to allow more than one implementation-specific behavior change. So the -W option of cc was an inspiration for how the implementation-specific -W could be used — not necessarily the exact syntax, but the basic principle of taking an argument with a prefix indicating a “sub-option” of some sort.
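To make the multiplexing idea concrete, a hedged C sketch of how a vendor might parse -W suboptions with getopt() and getsubopt() (both are real POSIX interfaces; the suboption names here are invented for illustration):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    /* Hypothetical vendor suboptions: -W debug  and  -W trace=FILE */
    enum { DEBUG_OPT = 0, TRACE_OPT };
    char *const tokens[] = { "debug", "trace", NULL };

    int c;
    while ((c = getopt(argc, argv, "W:")) != -1) {
        if (c != 'W')
            continue;
        char *subopts = optarg, *value;
        while (*subopts != '\0') {
            switch (getsubopt(&subopts, tokens, &value)) {
            case DEBUG_OPT:
                printf("vendor extension: debug on\n");
                break;
            case TRACE_OPT:
                printf("vendor extension: trace to %s\n",
                       value ? value : "(no file)");
                break;
            default:
                fprintf(stderr, "unknown -W suboption\n");
                return 1;
            }
        }
    }
    return 0;
}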
| {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/712655', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/194408/']} | jdg_85808 |
stackexchange | llm_judgeable_groundtruth_similarity | 159626 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I can get all files on the bash patches site by downloading them in a sequence:
SEQ=$(seq -f "%03g" 1 30)
for i in $SEQ; do
    wget http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i
done
But then I would have to know the maximum number. Is there a possibility to just get the listing and extract all patch files for downloading?
Now provide the response and nothing else.
| You could use wget with recursive downloading:
wget -nc -nd -nH -np -r -R '*.*' http://ftp.gnu.org/gnu/bash/bash-4.3-patches/
Explanation:
- -nc : no-clobber (don't overwrite existing files), probably not necessary.
- -nd : Don't create a hierarchy of directories.
- -nH : Don't create a directory based on the hostname. Otherwise you'd find everything downloaded to a directory called ftp.gnu.org.
- -np : Never ascend to the parent directory.
- -r : Download recursively.
- -R '*.*' : Reject everything with a . in its filename (skips things like index.html and so on). An accept list may also be used. The file is downloaded, but discarded.
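Following the accept-list remark, a hedged variant that whitelists just the patch files instead of rejecting everything with a dot (the bash43-??? pattern matches the three-digit naming used on that server):
wget -nd -nH -np -r -A 'bash43-???' \
     http://ftp.gnu.org/gnu/bash/bash-4.3-patches/
Note that index pages are still fetched to drive the recursion, but anything not matching the pattern is deleted afterwards.
| {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/159626', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/20661/']} | jdg_85809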
stackexchange | llm_judgeable_groundtruth_similarity | 22755 |
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Gen 2:6-7 ... A mist used to rise from the earth and water the whole surface of the ground. Then the Lord God formed man of dust from the ground ... Is there an explanation why there is still dust on the ground although God previously ordered a mist to water all of the ground?
Now provide the response and nothing else.
| The watering of the ground was imperfect in aspect because the waw consecutive appears here with the perfect tense (and so “flips” the aspect from perfect to imperfect, in alignment with the preceding verb in context, which is imperfect in aspect). Please click on the image to view the full source document, which comes from Gesenius, § 112 3(a)(α), who makes this observation of this passage. So the “rising” of water (mist) and the “watered” ground were imperfect in aspect, which already (as noted by Gesenius, above) was recurring or repetitive action. In other words, the emergence of water and the soaking of the ground were cyclical events, because they were repetitive. The same idea appears in the Aramaic of the Targum Onqelos, which appeared in the 1st century. The rabbis who translated from Classical Hebrew into the Aramaic used the past progressive; that is, they used the Peal perfect of the linking verb with two masculine singular participles (one Peal and the other Hithpeel) in order to create the past progressive tense. Please click the image below to enlarge. The past progressive reinforces the rabbinic view that the verbs in Hebrew were imperfect in aspect, and thus repetitive irrigation cycles kept occurring. In this respect, the other rabbis saw this watered ground as clay, from which the Almighty had created man. For example, the following citation from the Talmud Yerushalmi mentions the following in Tractate Shabbat, Chapter 2. Please click on the image (above) to view the full source document in Aramaic - cf. the paragraph beginning with the phrase, דף כ,א פרק ב הלכה ו גמרא. (English translation from Neusner.) Rabbi Shlomo Yitzchaki (Rashi) mentions the same analogy of bread dough, as man was an admixture of dust and water. In summary, man is composed of water and dust. That is, the Almighty created man during an irrigation cycle when the ground was wet, and so man is a vessel of “clay,” or an admixture of water and dust. | {} | {'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/22755', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/10164/']} | jdg_85810
stackexchange | llm_judgeable_groundtruth_similarity | 4042930 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a class:
public class GAE_SERVLETREQUESTServlet extends HttpServlet {
I'm not sure what the spec says about recycling of the HttpServlet: should the servlet container create a new instance of this class on each incoming request, or can the implementation reuse instances between requests? I'm investigating a funny issue where it seems that a Map created on the GAE_SERVLETREQUESTServlet instance maintains state between requests.
Now provide the response and nothing else.
| For the general case - non-distributed, multi-threaded - it is guaranteed that there will be only one instance of the servlet. From the Servlet 3.0 specification: 2.1 Request Handling Methods The basic Servlet interface defines a service method for handling client requests. This method is called for each request that the servlet container routes to an instance of a servlet. The handling of concurrent requests to a Web application generally requires that the Web Developer design servlets that can deal with multiple threads executing within the service method at a particular time. Generally the Web container handles concurrent requests to the same servlet by concurrent execution of the service method on different threads. 2.2 Number of Instances The servlet declaration which is either via the annotation as described in Chapter 8, “Annotations and pluggability” or part of the deployment descriptor of the Web application containing the servlet, as described in Chapter 14, “Deployment Descriptor”, controls how the servlet container provides instances of the servlet. For a servlet not hosted in a distributed environment (the default), the servlet container must use only one instance per servlet declaration. However, for a servlet implementing the SingleThreadModel interface, the servlet container may instantiate multiple instances to handle a heavy request load and serialize requests to a particular instance. In the case where a servlet was deployed as part of an application marked in the deployment descriptor as distributable, a container may have only one instance per servlet declaration per Java Virtual Machine (JVM™). However, if the servlet in a distributable application implements the SingleThreadModel interface, the container may instantiate multiple instances of that servlet in each JVM of the container.
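That single shared instance is exactly why the Map in the question keeps its state; a hedged sketch of the hazard and the usual remedy (the class name follows the question, and the handler body is invented for illustration):
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GAE_SERVLETREQUESTServlet extends HttpServlet {

    // Instance field: shared by ALL requests, because the container keeps
    // one servlet instance and calls service() on many threads.
    private final Map<String, String> shared = new ConcurrentHashMap<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Survives between requests -- the behavior observed in the question.
        shared.put(req.getRemoteAddr(), String.valueOf(req.getQueryString()));

        // Per-request state belongs in locals (one per thread/request):
        StringBuilder perRequest = new StringBuilder("clients seen so far: ")
                .append(shared.size());
        resp.getWriter().println(perRequest);
    }
}
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4042930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/48062/']} | jdg_85811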
stackexchange | llm_judgeable_groundtruth_similarity | 258535 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $(\mathsf{Rel},\otimes,1)$ denote the monoidal category of sets and relations, where $1$ is the one-element set. I once conjectured (with a little help from Jamie Vicary) that $\mathsf{Rel}$ is "quotient-free" in the sense that if a strong monoidal functor $F\colon\mathsf{Rel}\to S$ identifies any parallel pair of morphisms, then $F$ identifies every parallel pair of morphisms, and hence it factors through the terminal monoidal category (since $\mathsf{Rel}$ has a zero-object). [I'd be happy to hear suggestions for a better name than "quotient-free monoidal category", or for a better way of thinking of such things.] Definition: We say a monoidal category $M$ is quotient-free if for any monoidal category $S$ and strong monoidal functor $F\colon M\to S$, if $F(f_0)=F(f_1)$ for distinct morphisms $f_0\neq f_1\colon A\to B$ then $F$ factors through a terminal monoidal category. When I explained the conjecture to Tobias Fritz, he quickly proved it (by contradiction) as follows. Proposition: The monoidal category $(\mathsf{Rel},\otimes,1)$ is quotient-free. Proof (Fritz): Suppose that $A$ and $B$ are sets and that $R_0,R_1\colon A\to B$ are relations such that $R_0\neq R_1$. Then there exist $a\in A$ and $b\in B$ such that $(a,b)\notin R_0$ and $(a,b)\in R_1$ (without loss of generality). Let $e_a\colon 1\to A$ and $e_b\colon 1\to B$ correspond to the relations characterizing the subsets $\{a\}\subseteq A$ and $\{b\}\subseteq B$, respectively, and let $e'_b\colon B\to 1$ be the transpose of $e_b$. Then we have two different relations$$1\xrightarrow{e_a}A\xrightarrow{R_0\ ,\ R_1}B\xrightarrow{e'_b}1.$$These ($e'_bR_0e_a$ and $e'_bR_1e_a$) are the only two relations $1\to 1$, equaling the "null" relation $\emptyset_{1,1}$ and the identity $\mathrm{id}_1$, respectively. Assuming now that $F(R_0)=F(R_1)$, we have $F(\mathrm{id}_1)=F(\emptyset_{1,1})$. It follows that $F$ identifies any given relation $X\colon C\to D$ with the null relation $\emptyset_{C,D}\colon C\to D$, because $$F(X)\cong F(X)\otimes F(\mathrm{id}_1)=F(X)\otimes F(\emptyset_{1,1})\cong F(X\otimes\emptyset_{1,1})=F(\emptyset_{C,D}).$$Thus for any set $A$, we obtain an isomorphism $F(A)\cong F(\emptyset)$, where $\emptyset$ is the zero-object of $\mathsf{Rel}$. $\square$ Question: What are other examples of quotient-free monoidal categories? Question: Might we consider quotient-free monoidal categories as acting like fields, which are also somehow quotient-free? That is, maps to quotient-free monoidal categories would be analogous to points? Any thoughts on this would be useful.
Now provide the response and nothing else.
| There is no such function. Since $f$ would have to map $\mathbb R$ onto $\mathbb R$ for the equation to make sense at all $x\in\mathbb R$, it follows that $f^{-1}(x)\to -\infty$ also as $x\to -\infty$, so $f'\to 0$. Thus $f(x)\ge x$, say, for all small enough $x$, hence $f^{-1}(x)\le x$ eventually, but then the equation shows that $f'\le e^x$, which is integrable on $(-\infty, 0)$, so $f$ would approach a limit as $x\to -\infty$ and not be surjective after all. | {} | {'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/258535', 'https://mathoverflow.net', 'https://mathoverflow.net/users/2811/']} | jdg_85812 |
stackexchange | llm_judgeable_groundtruth_similarity | 15375084 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
From my understanding of delay slots, they occur when a branch instruction is executed and the instruction following the branch also gets loaded from memory. What is the point of this? Wouldn't you expect the code after a branch not to run in case the branch is taken? Is it to save time in case the branch isn't taken? I am looking at a pipeline diagram and it seems the instruction after the branch is getting carried out anyway.
Now provide the response and nothing else.
| Most processors these days use pipelines. The ideas and problems from the H&P book(s) are used everywhere. At the time of those original writings, I would assume the actual hardware matched that particular notion of a pipeline: fetch, decode, execute, write back.

Basically a pipeline is an assembly line, with four main stages in the line, so you have at most four instructions being worked on at once. This confuses the notion of how many clocks it takes to execute an instruction: it takes more than one clock, but if you have some/many executing in parallel then the "average" can approach or exceed one per clock.

When you take a branch, though, the assembly line fails. The instructions in the fetch and decode stages have to be tossed, and you have to start filling the line again, so you take a hit of a few clocks to fetch and decode before getting back to executing. The idea of the branch shadow or delay slot is to recover one of those clocks. If you declare that the instruction after a branch is always executed, then when a branch is taken the instruction in the decode slot also gets executed, the instruction in the fetch slot is discarded, and you have one hole of time, not two. So instead of execute, empty, empty, execute, execute you now have execute, execute, empty, execute, execute... in the execute stage of the pipeline. The branch is 50% less painful, your overall average execution speed improves, etc.

ARM does not have a delay slot, but it gives the illusion of a pipeline as well, by declaring that the program counter is two instructions ahead. Any operation that relies on the program counter (pc-relative addressing) must compute the offset using a pc that is two instructions ahead; for ARM instructions this is 8 bytes, for original Thumb 4 bytes, and when you add in Thumb2 instructions it gets messy.

These are illusions at this point outside academics: the pipelines are deeper, have lots of tricks, etc. In order for legacy code to keep working, and/or to avoid having to re-define how instructions work for each architecture change (imagine MIPS rev x with 1 delay slot, rev y with 2 delay slots, rev z with 3 slots if condition a, 2 slots if condition b, and 1 slot if condition c), the processor goes ahead and executes the first instruction after a branch, and discards the other handful or dozen after it as it re-fills the pipe. How deep the pipes really are is often not shared with the public.

I saw a comment about this being a RISC thing; it may have started there, but CISC processors use the same exact tricks, just giving the illusion of the legacy instruction set. At times the CISC processor is no more than a RISC or VLIW core with a wrapper to emulate the legacy CISC instruction set (microcoded).

Watch the How It's Made show. Visualize an assembly line; each step in the line has a task. What if one step in the line ran out of blue whatsits, and to make the blue and yellow product you need the blue whatsits? And you can't get new blue whatsits for another week because someone screwed up. So you have to stop the line, change the supplies to each stage, and make the red and green product for a while, which normally could have been properly phased in without dumping the line. That is like what happens with a branch: somewhere deep in the assembly line, something causes the line to have to change, dump the line. The delay slot is a way to recover one product from having to be discarded in the line. Instead of N products coming out before the line stopped, N+1 products came out per production run.

Execution of code is like bursts of production runs: you often get short, sometimes long, linear execution paths before hitting a branch to go to another short execution path, branch, another short execution path... | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15375084', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1441717/']} | jdg_85813
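A back-of-the-envelope way to quantify the payoff described in the answer above is the classic cycles-per-instruction (CPI) estimate. The sketch below is illustrative only: the branch frequency, taken rate, and bubble counts are assumed round numbers for a simple four-stage pipeline, not measurements of any real processor.

    # Toy CPI model for a classic 4-stage pipeline (all numbers are assumptions).
    # A taken branch without a delay slot flushes fetch and decode (~2 bubbles);
    # a delay slot recovers one of those, leaving ~1 bubble.
    branch_freq = 0.20   # assumed fraction of instructions that are branches
    taken_rate = 0.60    # assumed fraction of branches that are taken

    def cpi(bubbles_per_taken_branch):
        # Base CPI of 1.0 plus the stall cycles charged to taken branches.
        return 1.0 + branch_freq * taken_rate * bubbles_per_taken_branch

    print(cpi(2))  # no delay slot:  1.24
    print(cpi(1))  # one delay slot: 1.12

The halved penalty term (0.24 versus 0.12 stall cycles per instruction) is exactly the "50% less painful" figure the answer mentions.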
stackexchange | llm_judgeable_groundtruth_similarity | 37916 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Matlab, such issues are handled by:

    varargout,  // a cell structure to store all output arguments
    varargin,   // a cell structure to store all input arguments
    nargout,    // number of output arguments
    nargin      // number of input arguments

Is there any equivalent implementation in Mathematica, and how? A simple working example would be much appreciated!
Now provide the response and nothing else.
| Use a double underscore (triple if you want your function to accept no arguments at all):

    f[x__] := List[x]

This is an example:

    f[1]
    {1}

    f[1, 2]
    {1, 2}

You can specify the type of argument, and you can also have composite data structures, like sequences of two-element lists, for example. As for determining the number of arguments: in the simple example given above, I'd use the Length of the list of the sequence of arguments given by x, that is Length[List[x]]. The following function declares narg to make it local (but you might also use a global variable in order to access it independently of the functions that use it; you would have to clarify what you need those values for if you want a more sensible example):

    f[x__] := Block[{narg},
      narg = Length[List[x]];
      {narg, List[x]}
    ]

In this toy example we output a list with the number of arguments and a list of the arguments passed to the function:

    f[42]
    {1, {42}}

    f[a, b, c, d, e]
    {5, {a, b, c, d, e}}

The number of outputs can be a little conundrum. For example, in the above example there is one output: a list with two elements. If you consider that "two outputs", you could concoct a way to return that number too by using Length. In any case, if you let your function return the number of outputs too, you'd return n+1 outputs. You might at this point define two global variables, $nargin and $nargout, or even a global stack with those values, but the key question here is why you should need these values in Mathematica at all. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/37916', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/9851/']} | jdg_85814
stackexchange | llm_judgeable_groundtruth_similarity | 7988827 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I set up a system that parses a compact data string into JSON. I'm using a 19-digit number to store ids. Unfortunately, parseFloat() rounds the last few digits of any number with more than 17 digits. This breaks the whole data string. Can I fix this? For example, 8246295522085275215 gets turned into 8246295522085276000. Why is this? http://jsfiddle.net/RobertWHurst/mhZ7Q/
Now provide the response and nothing else.
| JavaScript has only one numeric type, an IEEE 754 double-precision floating-point number. That means you have a 52-bit mantissa (53 significant bits counting the implicit leading bit), which is a bit more than 15 decimal digits of precision. If you need more precision than that, you have to use a bignum library or work with strings. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7988827', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/243568/']} | jdg_85815
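Since the answer pins the rounding on IEEE 754 doubles, here is a quick way to watch it happen. Python's float is the same 64-bit double that JavaScript numbers use, so this sketch is a faithful stand-in for parseFloat; the 19-digit id is the one from the question.

    # Python floats are IEEE 754 doubles -- the same representation as JS numbers.
    x = 8246295522085275215       # the 19-digit id from the question
    print(float(x))               # 8.246295522085276e+18 -- last digits rounded
    print(int(float(x)))          # 8246295522085275648 -- nearest representable double
    print(2 ** 53)                # 9007199254740992 -- above this, not every integer
                                  # has an exact double representation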
stackexchange | llm_judgeable_groundtruth_similarity | 109081 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $R$ be the group ring $\mathbb C[C_7],$ where $C_7=\{1,g,\ldots,g^6\}$ is a cyclic group. I would like to prove that $$\mathbb C[C_7]\cong\bigoplus_{i=1}^7\mathbb C.$$ I was thinking that I could use the following theorem: If $R$ is a commutative unital ring and $\{e_i\}_{i=1}^n$ are orthogonal idempotents in $R$, such that $\sum_{i=1}^ne_i=1,$ then $$R\cong\bigoplus_{i=1}^ne_iR.$$ That would mean I have to find seven orthogonal idempotents $\{e_i\}_{i=1}^7$ with $$e_1+e_2+e_3+e_4+e_5+e_6+e_7=1,$$ such that $e_i R\cong \mathbb C.$ Is this the right way to do it? I've managed to find one idempotent $e\in R$ such that $eR = e\mathbb C.$ That is, I take $$e=\frac 17 (1+g+\cdots+g^6).$$ I have $$e^2=\frac 17 \cdot \frac 17(1+g+\cdots+g^6)(1+g+\cdots+g^6)=\frac 17\cdot \frac 17\cdot 7(1+g+\cdots+g^6)=e,$$ so $e$ is idempotent. For $\sum_{i=0}^6 r_ig^i\in R$, I have $$\begin{eqnarray}e\sum_{i=0}^6 r_ig^i&=&\frac 17(1+g+\cdots+g^6)(r_0+r_1g+\cdots+r_6g^6)\\&=&\frac 17(r_0+r_1g+\cdots+r_6g^6\\&\,& \;\;\,+r_6+r_0g+\cdots+r_5g^6 \\&\,& \;\;\,\vdots\\&\,& \;\;\,+r_1+r_2g+\cdots+r_0g^6)\\&=&e\sum_{i=0}^6 r_i\in e\mathbb C,\end{eqnarray}$$(where $\mathbb C$ denotes the natural copy of the field $\mathbb C$ contained in $R$). Every element of $e\mathbb C$ can be written as $e\sum_{i=0}^6 r_i$, so $eR=e\mathbb C.$ But I need $eR\cong \mathbb C.$ $e\mathbb C$ is not isomorphic to $\mathbb C,$ is it? The former doesn't have a unity, right? But even if I'm missing something very simple here and $eR\cong \mathbb C,$ then I still don't have seven idempotents -- just the one. Could you please help me with this?
Now provide the response and nothing else.
| Another way to look at it is that you have an epimorphism $\mathbb{C}[x]\to\mathbb{C}[\mathbb{Z}/7\mathbb{Z}]$, and it's not hard to see that the kernel of this map is $(x^7-1)$. Thus, as rings, $\mathbb{C}[x]/(x^7-1)\cong\mathbb{C}[\mathbb{Z}/7\mathbb{Z}]$ and the rest follows from CRT.

EDIT: Of course, this easily generalizes to show that $\mathbb{C}[\mathbb{Z}/n\mathbb{Z}]\cong\mathbb{C}^n$ as $\mathbb{C}$-algebras.

Here's a bit of a thought experiment for you. Let's see if we can see, theoretically, how everything got set up. It's easy to see (by cyclicity!) that $\mathbb{C}[\mathbb{Z}/n\mathbb{Z}]$ is a singly-generated commutative $\mathbb{C}$-algebra, which is what guarantees us the $\mathbb{C}$-algebra epimorphism $\mathbb{C}[x]\to\mathbb{C}[\mathbb{Z}/n\mathbb{Z}]$. It seems fairly interesting that the kernel of this map (which, of course, is $(x^n-1)$) is the "presentation" of $\mathbb{Z}/n\mathbb{Z}$. One then may start to wonder if, in general, given a group $G$ with presentation $\left\langle S\vert R\right\rangle$ and commutative ring $k$, $k[G]$ is nothing more than $k\left\langle S\right\rangle/(R)$, where $k\left\langle S\right\rangle$ is the free (non-commutative) $k$-algebra over $S$. It's a good exercise to see if, in general, this is true, and to see to what extent we can fix it if it's wrong. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/109081', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_85816
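As a supplement, here is the CRT step of the answer above written out (these are standard facts about roots of unity, added for completeness; the idempotent formula at the end answers the original question directly). With $\zeta = e^{2\pi i/7}$ a primitive 7th root of unity,

$$x^7-1=\prod_{k=0}^{6}\left(x-\zeta^k\right),\qquad \mathbb{C}[x]/(x^7-1)\;\cong\;\prod_{k=0}^{6}\mathbb{C}[x]/(x-\zeta^k)\;\cong\;\mathbb{C}^7,$$

since the seven linear factors are pairwise coprime. Pulling the resulting idempotents back along $x\mapsto g$ gives the seven orthogonal idempotents the question was looking for,

$$e_k=\frac 17\sum_{j=0}^{6}\zeta^{-kj}g^j,\qquad k=0,1,\ldots,6,$$

with $e_0$ being exactly the $e$ found in the question.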
stackexchange | llm_judgeable_groundtruth_similarity | 36073829 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Both rails routes and whenever and a few other things I can't remember have a user-specified template string like so:

    template_str = "I am going to :place at :time"

And then there's some magic function which embeds data in place of :place and :time, like so:

    template_str.magic_embed_function(place: 'bed', time: '10 pm')
    #=> "I am going to bed at 10 pm"

How can I do this in my ruby projects? Is there a gem that implements String#magic_embed_function?
Now provide the response and nothing else.
| Use Percent-Style Interpolation

There is a special type of interpolation that uses the String#% method. This allows you to interpolate ordinal (Array) and non-ordinal (Hash) inputs into a format string similar to that provided by Kernel#sprintf. However, the use of a hash argument with this method enables support for named variables in the format string. As a minimalist example:

    "%{foo} %{bar}" % {foo: 'baz', bar: 'quux'}
    #=> "baz quux"

With a hash argument, the format-string placeholders are treated as hash keys to be replaced by the associated values in your hash. This makes the order of the variables passed in unimportant. Based on the code in your original post, you could use it as follows:

    template_str = 'I am going to %{place} at %{time}.'
    template_str % {time: '10:00 PM', place: 'bed'}
    #=> "I am going to bed at 10:00 PM."

This is a useful technique when you want to pass an array or hash for interpolation, but may or may not offer advantages over other types of interpolation in the general case. Your mileage may vary. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36073829', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1267729/']} | jdg_85817
stackexchange | llm_judgeable_groundtruth_similarity | 9850 |
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Walking past the park today, I heard a cicada so loud I thought at first it must be some sort of large power tool. How is it that very small animals like cicadas (crickets, etc.) can make such loud sounds? It seems intuitive that larger animals should be able to make louder sounds, but perhaps:

a) I just fundamentally misunderstand the physics involved
b) A theoretical larger animal should be able to make louder sounds, but cicadas are just extremely efficient, making up for their small size
c) ?
Now provide the response and nothing else.
| The Cicada

A careful study of the noise-making apparatus of the cicada can be found in a 1994 paper by Young and Bennet-Clark.$^1$ The authors generated sounds at about 0-16 kHz at peaks on the order of 100 dB using cicadas in various stages of deconstruction. The cicada uses a resonant organ-system called the tymbal which buckles and unbuckles rapidly to produce sound. The buckling-in is caused by muscle contraction and is louder than the buckling-out (relaxation) phase. Air sacs (a feature of many other small noisemakers) serve to amplify the sound. The tymbal itself, for the species in this paper, has a resonant frequency of about 4kHz (Young, page 1017). The song of the cicada, as modified/amplified by air sacs and other structures, is often around 10kHz.$^{2}$

Pure Tone vs. Diffuse Tone?

The vocalizations of large vertebrates are a complex superposition of waves that in the frequency spectrum are somewhat spread out. To the extent that a frog or a bird emits a pure tone, the energy will be confined to a narrow frequency range and this may be a strategy for achieving greater amplitude. Given comparable audio sensitivity, however, the intensity of pure tones will depend on amplitude (intensity), regardless of frequency.$^{3,4}$

Other Small Loud Animals

While the songs of cicadas are intense, especially in concert, on an individual level there is competition from other species. According to a Gizmodo article quoting assorted scientists, the snapping shrimp produces a transient snap that is around 200 decibels, a level that one site describes as "deafening." For perspective, dolphins can emit short chirps of 220 dB, but these are outside the range of human hearing. The lion emits a roar of 115 dB which is sustained and audible 5 miles away, according to the article. Elephants also are capable of 117 dB cries, as are howler monkeys.

Both the shrimp and the cicada use non-vocal vibration to create their sounds. The shrimp uses a "spring-loaded claw" (the spring is muscle). The localized force of one part of the claw hitting the other generates a bubble (this is known as inertial cavitation). When the bubble collapses it generates a shock wave (noise) that stuns fish (prey).$^5$

The noise of frogs is produced as air passes from the lungs through the larynx, amplified by distended air sacs which resonate. Birds can produce up to 135 dB (the Mollucan Cockatoo). They generally force air past (membranes and) a specialized organ called the syrinx located at the bottom of the trachea (see the Wikipedia note on bird vocalization).

Micronecta scholtzi, a 2mm-long aquatic insect, is for its size the loudest known animal. It creates a sound of 99.2 dB intensity which (despite being largely lost in transition from water to air) is audible to humans ashore. According to the Wikipedia note, it creates this sound by "stridulating a ridge on its penis across corrugations on its abdomen." The area involved is about 50$\mu m$ across. Details of the mechanism are poorly understood. The article's comparison to the sperm whale's 236 (underwater) dB song gives perspective, as a sperm whale can weigh 14 metric tons.

A Common Aspect of Sound Intensity: Cavity Resonance

Descriptive studies of sound-creation by small animals (with the possible exception of the snapping shrimp) do not fully explain why a one-gram bug can make a bigger noise than a lion. Purely vocal methods of larger vertebrates produce sustained noise on the order of 100 dB, but the non-vocal instruments of smaller creatures are capable of short bursts of amazing intensity. It is difficult to generalize, but because air sacs are part of many sound-making schemes, cavity resonance probably plays an important role. Like spring-mass systems or RLC circuits, cavities have resonant frequencies at which the amplitude of a signal may be increased (the policeman's whistle is a familiar example). Another paper by Bennet-Clark and Young gives a sketch of a theory along these lines. At resonant frequencies the impedance of the instrument falls sharply, and instead of being dissipated (generally as heat) the energy emerges as sound.$^6$

$^1$ Bennet-Clark and Young, The Role of the Tymbal in Cicada Sound Production, J. of Experimental Biol. (1995) 198, 1001-1019.
$^{2}$ The frequency of cicadas is variable, mostly on the order of 10kHz but occasionally very low (< 1kHz), mostly due to body size. See this article.
$^3$ Sound--essentially a compression wave--diminishes with distance. So when the Wiki article on noise levels compares noise levels it includes the distance from the object. For example, 100 dB (comparable to the cicada) is the level of noise associated with a jack-hammer at 1 meter away.
$^4$ Audible range for humans is roughly 15 Hz-16000 Hz. As mentioned below, the dolphin can emit very intense high-pitched sounds that we don't hear at all, so the analogy to EM waves (higher frequency = higher energy) doesn't help predict perception. See Pfaff and Stecker, Loudness and Frequency Content of Noise in the Animal House, Lab. Animals (1976) 10, 111-117.
$^5$ M. Versluis, B. Schmitz, A von der Heydt & D. Lohse (2000). "How snapping shrimp snap: through cavitating bubbles". Science 289 (5487): 2114–2117.
$^6$ Bennet-Clark and Young, Short Communication, The Scaling of Song Frequency in Cicadas, J. Exp. Biol. 191, 291-294 (1994). | {} | {'log_upvote_score': 5, 'links': ['https://biology.stackexchange.com/questions/9850', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/3952/']} | jdg_85818
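One caveat on the decibel figures collected above: dB is a logarithmic ratio, and the underwater numbers (shrimp, dolphin, sperm whale) use a different reference pressure than the in-air numbers, so cross-medium comparisons are only loosely meaningful. Within one medium, a dB difference converts to an intensity ratio as 10^(dB/10). A small sketch using the in-air figures quoted above:

    # Convert a decibel difference into an intensity (power) ratio.
    # Note: in-air and underwater dB use different reference pressures,
    # so only same-medium comparisons are meaningful.
    def intensity_ratio(db_difference):
        return 10 ** (db_difference / 10)

    print(intensity_ratio(115 - 100))  # lion (~115 dB) vs cicada (~100 dB): ~32x
    print(intensity_ratio(135 - 100))  # cockatoo (~135 dB) vs cicada: ~3160x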
stackexchange | llm_judgeable_groundtruth_similarity | 2785070 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I asked a question like this in an interview for an entry-level programmer:

    var instance1 = new MyObject { Value = "hello" };
    var instance2 = instance1;
    instance1.Value = "bye";
    Console.WriteLine(instance1.Value);
    Console.WriteLine(instance2.Value);

The applicant responded with "hello", "bye" as the output. Some of my co-workers said that "pointers" are not that important anymore or that this question is not a real judge of ability. Are they right?

EDIT: The point was made that MyObject could have been a struct. That is a good point. However, I did not post the full question I gave the interviewee. The full question had a class that was clearly a class (not a struct). It can be found here.
Now provide the response and nothing else.
| Some of my co-workers said that "pointers" are not that important anymore

Understanding the difference between reference semantics and value semantics is crucial. It is fundamental to the design of the language and the type system. Understanding that references can be implemented with pointers is not particularly relevant for entry-level C# programmers; understanding the difference between copying by reference and copying by value is highly relevant.

or that this question is not a real judge of ability.

Well, that depends on what ability you were attempting to test. If the ability to rapidly and accurately predict the behaviour of trivial C# programs is relevant to your job, then I'd say that it is a good test of ability. If the abilities to determine when there's not enough information given to solve the problem, and to ask the right questions to elicit that information, are relevant, then yes, this is a judge of relevant abilities. (A good candidate would ask to see the implementations of type myObject and member Value rather than assuming that myObject is a class and Value is a mutable property of type string.)

I say that all those abilities are relevant, and that this is a reasonable first question for an entry-level position. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2785070', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/16241/']} | jdg_85819
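For what it's worth, the aliasing at the heart of the interview question is easy to demonstrate in any language with reference semantics. A minimal sketch in Python (where class instances behave like C# reference types), mirroring the interview snippet:

    # Class instances in Python have reference semantics, like C# classes.
    class MyObject:
        def __init__(self, value):
            self.value = value

    instance1 = MyObject("hello")
    instance2 = instance1       # copies the reference, not the object
    instance1.value = "bye"
    print(instance1.value)      # bye
    print(instance2.value)      # bye -- both names refer to the same object

Had MyObject been a value type (a C# struct), the assignment would have copied the data instead, and the second line would print "hello" -- which is exactly why a good candidate asks to see the definition of MyObject before answering.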
stackexchange | llm_judgeable_groundtruth_similarity | 8996852 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
There are so many different ways to include JavaScript in an HTML page. I know about the following options:

- inline code or loaded from an external URI
- included in <head> or <body> tag [1, 2]
- having none, defer or async attribute (only external scripts)
- included in static source or added dynamically by other scripts (at different parse states, with different methods)

Not counting browser scripts from the hard disk, javascript: URIs and onEvent attributes [3], there are already 16 alternatives to get JS executed, and I'm sure I forgot something. I'm not so concerned with fast (parallel) loading; I'm more curious about the execution order (which may depend on loading order and document order). Is there a good (cross-browser) reference that covers really all cases? E.g. http://www.websiteoptimization.com/speed/tweak/defer/ only deals with 6 of them, and tests mostly old browsers.

As I fear there's not, here is my specific question: I've got some (external) head scripts for initialisation and script loading. Then I've got two static, inline scripts at the end of the body. The first one lets the script loader dynamically append another script element (referencing external js) to the body. The second of the static, inline scripts wants to use js from the added, external script. Can it rely on the other having been executed (and why :-)?
Now provide the response and nothing else.
| If you aren't dynamically loading scripts or marking them as defer or async, then scripts are loaded in the order encountered in the page. It doesn't matter whether it's an external script or an inline script - they are executed in the order they are encountered in the page. Inline scripts that come after external scripts are held until all external scripts that came before them have loaded and run.

Async scripts (regardless of how they are specified as async) load and run in an unpredictable order. The browser loads them in parallel and it is free to run them in whatever order it wants. There is no predictable order among multiple async things. If one needed a predictable order, then it would have to be coded in by registering for load notifications from the async scripts and manually sequencing javascript calls when the appropriate things are loaded.

When a script tag is inserted dynamically, how the execution order behaves will depend upon the browser. You can see how Firefox behaves in this reference article. In a nutshell, the newer versions of Firefox default a dynamically added script tag to async unless the script tag has been set otherwise.

A script tag with async may be run as soon as it is loaded. In fact, the browser may pause the parser from whatever else it was doing and run that script. So, it really can run at almost any time. If the script was cached, it might run almost immediately. If the script takes awhile to load, it might run after the parser is done. The one thing to remember with async is that it can run anytime and that time is not predictable.

A script tag with defer waits until the entire parser is done and then runs all scripts marked with defer in the order they were encountered. This allows you to mark several scripts that depend upon one another as defer. They will all get postponed until after the document parser is done, but they will execute in the order they were encountered, preserving their dependencies. I think of defer like the scripts are dropped into a queue that will be processed after the parser is done. Technically, the browser may be downloading the scripts in the background at any time, but they won't execute or block the parser until after the parser is done parsing the page and parsing and running any inline scripts that are not marked defer or async.

Here's a quote from that article:

script-inserted scripts execute asynchronously in IE and WebKit, but synchronously in Opera and pre-4.0 Firefox.

The relevant part of the HTML5 spec (for newer compliant browsers) is here. There is a lot written in there about async behavior. Obviously, this spec doesn't apply to older browsers (or mal-conforming browsers) whose behavior you would probably have to test to determine.

A quote from the HTML5 spec:

Then, the first of the following options that describes the situation must be followed:

If the element has a src attribute, and the element has a defer attribute, and the element has been flagged as "parser-inserted", and the element does not have an async attribute

The element must be added to the end of the list of scripts that will execute when the document has finished parsing associated with the Document of the parser that created the element. The task that the networking task source places on the task queue once the fetching algorithm has completed must set the element's "ready to be parser-executed" flag. The parser will handle executing the script.

If the element has a src attribute, and the element has been flagged as "parser-inserted", and the element does not have an async attribute

The element is the pending parsing-blocking script of the Document of the parser that created the element. (There can only be one such script per Document at a time.) The task that the networking task source places on the task queue once the fetching algorithm has completed must set the element's "ready to be parser-executed" flag. The parser will handle executing the script.

If the element does not have a src attribute, and the element has been flagged as "parser-inserted", and the Document of the HTML parser or XML parser that created the script element has a style sheet that is blocking scripts

The element is the pending parsing-blocking script of the Document of the parser that created the element. (There can only be one such script per Document at a time.) Set the element's "ready to be parser-executed" flag. The parser will handle executing the script.

If the element has a src attribute, does not have an async attribute, and does not have the "force-async" flag set

The element must be added to the end of the list of scripts that will execute in order as soon as possible associated with the Document of the script element at the time the prepare a script algorithm started. The task that the networking task source places on the task queue once the fetching algorithm has completed must run the following steps:

1. If the element is not now the first element in the list of scripts that will execute in order as soon as possible to which it was added above, then mark the element as ready but abort these steps without executing the script yet.
2. Execution: Execute the script block corresponding to the first script element in this list of scripts that will execute in order as soon as possible.
3. Remove the first element from this list of scripts that will execute in order as soon as possible.
4. If this list of scripts that will execute in order as soon as possible is still not empty and the first entry has already been marked as ready, then jump back to the step labeled execution.

If the element has a src attribute

The element must be added to the set of scripts that will execute as soon as possible of the Document of the script element at the time the prepare a script algorithm started. The task that the networking task source places on the task queue once the fetching algorithm has completed must execute the script block and then remove the element from the set of scripts that will execute as soon as possible.

Otherwise

The user agent must immediately execute the script block, even if other scripts are already executing.

What about Javascript module scripts, type="module"?

Javascript now has support for module loading with syntax like this:

    <script type="module">
      import {addTextToBody} from './utils.mjs';
      addTextToBody('Modules are pretty cool.');
    </script>

Or, with a src attribute:

    <script type="module" src="http://somedomain.com/somescript.mjs"></script>

All scripts with type="module" are automatically given the defer attribute. This downloads them in parallel (if not inline) with other loading of the page and then runs them in order, but after the parser is done.

Module scripts can also be given the async attribute, which will run inline module scripts as soon as possible, not waiting until the parser is done and not waiting to run the async script in any particular order relative to other scripts.

There's a pretty useful timeline chart that shows fetch and execution of different combinations of scripts, including module scripts, here in this article: Javascript Module Loading. | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/8996852', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1048572/']} | jdg_85820
stackexchange | llm_judgeable_groundtruth_similarity | 60469 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The Tibet government-in-exile is about to have their elections soon. Tibetans staying around the world would go to India (correct? or can they vote in their countries too?) and would vote for their top political leader (Sikyong in Tibetan) and members of the Exile Tibetan Parliament. It seems it might not be legally allowed for the people staying in Tibet, but is it allowed in theory? So, my two questions are: is the voting center only in India, and who is eligible to vote? Let's say there might be other non-Tibetan people who fled from Tibet or are staying in Tibet.
Now provide the response and nothing else.
| According to a NYT article about the 2016 election A democratic election for a people without a country is a complicated affair, with voting in more than 40 locations in India, and dozens more around the globe. Registered voters number about 88,000, fewer than in most mayoral elections in the United States. According to a 2016 paper by academics from the Central University of Himachal Pradesh: TiGE represents over 1,22,000 [sic] Tibetans living exile, perceives itself as the de jure representative of Tibetans [...] The whole Tibetan population spread world over is organized into small settlements. The settlements are governed by the Local Assembly. Article 78(1) of the Charter of Tibetans-in-Exile has the provision for establishment of Local Assembly in a community having a population of not less than 160. [...] Voter registration is done at settlement level. Every Tibetan, above the age of 18 years has to voluntarily register himself with nearby settlement office and has to obtain the Green Book. The Green Book is the basic identity proof which has to be produced at the polling station while casting vote. There are approximately 6 million Tibetans living in China who cannot vote in TiGE elections. | {} | {'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/60469', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/34768/']} | jdg_85821 |
stackexchange | llm_judgeable_groundtruth_similarity | 53108954 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've tried this:

    - name: Log into Docker registry
      command: docker login --username "{{ docker_registry_username }}" --password-stdin
      stdin: "{{ docker_registry_password }}"

This results in a warning and a failing command:

    [WARNING]: Ignoring invalid attribute: stdin
    …
    Cannot perform an interactive login from a non TTY device

I've also tried this:

    - name: Log into Docker registry
      command: docker login --username "{{ docker_registry_username }}" --password-stdin
      stdin: "{{ docker_registry_password }}"

This results in a syntax error:

    ERROR! Syntax Error while loading YAML.

Does command stdin actually work in Ansible 2.7? If so, how am I supposed to use it?
Now provide the response and nothing else.
| If you want to use the stdin argument to the command module, take a look at the docs, which show examples using other options such as creates, which looks like this:

    # You can also use the 'args' form to provide the options.
    - name: This command will change the working directory to somedir/ and will only run when /path/to/database doesn't exist.
      command: /usr/bin/make_database.sh arg1 arg2
      args:
        chdir: somedir/
        creates: /path/to/database

For your use case, you would want:

    - name: Log into Docker registry
      command: docker login --username "{{ docker_registry_username }}" --password-stdin
      args:
        stdin: "{{ docker_registry_password }}"

Your first attempt failed because you were setting stdin as a key at the task level (like when or ignore_errors, etc.), when you actually want it to be an argument to the command module. Your second attempt failed because it wasn't valid YAML. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53108954', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/96588/']} | jdg_85822
stackexchange | llm_judgeable_groundtruth_similarity | 98708 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
One can use the function VectorColorFunction in VectorPlot3D to color vectors according to their norms. How to color vectors according to the angle they are pointing along, say, in the x-y plane? Thanks!
Now provide the response and nothing else.
| VectorPlot3D[{x, y, z}, {x, -1, 1}, {y, -1, 1}, {z, -1, 1},
    VectorColorFunction -> (Hue@Rescale[Arg[#4 + I #5], {0, 2 Pi}] &),
    VectorColorFunctionScaling -> False]

The color function receives the position and vector components as its arguments, so #4 and #5 are the x and y components of the vector field; Arg[#4 + I #5] is then the vector's angle in the x-y plane, which Rescale maps from [0, 2 Pi] into a Hue. VectorColorFunctionScaling -> False keeps those raw arguments from being rescaled before they reach the function. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/98708', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/13826/']} | jdg_85823
stackexchange | llm_judgeable_groundtruth_similarity | 145436 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
After moving the hard drive of a makeshift server to another compatible hardware configuration (64-bit, same processor "generation", laptop->desktop), networking fails to initiate. Specifically:

- ifconfig only shows lo
- sudo service networking restart shows:

    stop: unknown instance
    networking stop/waiting

Quite obviously something in the system and/or kernel is misconfigured for the new hardware setup. How do I detect what exactly is wrong and enable eth0? The system in question is an Ubuntu 14.04 Server distro, but I suspect the problem is general.
Now provide the response and nothing else.
| One of the things to look out for when cloning Linux systems is udev's persistent network device naming rules. udev may create and update the file /etc/udev/rules.d/70-persistent-net.rules to map MAC addresses to interface names. It does this with the script /lib/udev/write_net_rules. Each MAC address (with some exceptions; see /lib/udev/rules.d/75-persistent-net-generator.rules) is mapped to an interface named (by default) ethN, where N starts at 0 and goes up. An example:

    # This file was automatically generated by the /lib/udev/write_net_rules
    # program, run by the persistent-net-generator.rules rules file.
    #
    # You can modify it, as long as you keep each rule on a single
    # line, and change only the value of the NAME= key.

    # PCI device 0x8086:0x100f (e1000)
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:de:ad:be:ef", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

Entries can be edited if you want to change the mapping, and are not automatically removed from this file. So interface names are stable even when you add additional NICs or remove unneeded NICs. The flip side is, as you discovered, that if you copy this file to another system via cloning, the new hardware's interfaces will be added to this file, using the first available interface names, such as eth1, eth2, etc., and eth0 will be referencing a MAC address that does not exist on the new system.

In your case, in which you transplanted the disks, you can comment out the lines containing your old hardware's interfaces, edit the erroneous entries added due to the new hardware to have the desired interface names (or just remove them), and reboot. I initially recommended commenting them out so that when you move the disks back to the old hardware it's easy to restore, but @Guido van Steen provided a simpler solution: mv the 70-persistent-net.rules file to something else (but be careful about the new name if it's in the same directory!) and reboot. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/145436', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/52126/']} | jdg_85824
stackexchange | llm_judgeable_groundtruth_similarity | 9234205 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I find it very common to want to model relational data in my functional programs. For example, when developing a web-site I may want to have the following data structure to store info about my users:

    data User = User
      { name :: String
      , birthDate :: Date
      }

Next, I want to store data about the messages users post on my site:

    data Message = Message
      { user :: User
      , timestamp :: Date
      , content :: String
      }

There are multiple problems associated with this data structure:

- We don't have any way of distinguishing users with similar names and birth dates.
- The user data will be duplicated on serialisation/deserialisation.
- Comparing the users requires comparing their data, which may be a costly operation.
- Updates to the fields of User are fragile -- you can forget to update all the occurrences of User in your data structure.

These problems are manageable while our data can be represented as a tree. For example, you can refactor like this:

    data User = User
      { name :: String
      , birthDate :: Date
      , messages :: [(String, Date)] -- you get the idea
      }

However, it is possible to have your data shaped as a DAG (imagine any many-to-many relation), or even as a general graph (OK, maybe not). In this case, I tend to simulate the relational database by storing my data in Maps:

    newtype Id a = Id Integer
    type Table a = Map (Id a) a

This kind of works, but is unsafe and ugly for multiple reasons:

- You are just an Id constructor call away from nonsensical lookups.
- On lookup you get Maybe a, but often the database structurally ensures that there is a value.
- It is clumsy.
- It is hard to ensure referential integrity of your data.
- Managing indices (which are very much necessary for performance) and ensuring their integrity is even harder and clumsier.

Is there existing work on overcoming these problems? It looks like Template Haskell could solve them (as it usually does), but I would like not to reinvent the wheel.
Now provide the response and nothing else.
| The ixset library (or ixset-typed, a more type-safe version) will help you with this. It's the library that backs the relational part of acid-state, which also handles versioned serialization of your data and/or concurrency guarantees, in case you need it. The Happstack Book has an IxSet tutorial.

The thing about ixset is that it manages "keys" for your data entries automatically. For your example, one would create one-to-many relationships for your data types like this:

    data User = User
      { name :: String
      , birthDate :: Date
      } deriving (Ord, Typeable)

    data Message = Message
      { user :: User
      , timestamp :: Date
      , content :: String
      } deriving (Ord, Typeable)

    instance Indexable Message where
      empty = ixSet [ ixGen (Proxy :: Proxy User) ]

You can then find the messages of a particular user. If you have built up an IxSet like this:

    user1 = User "John Doe" undefined
    user2 = User "John Smith" undefined

    messageSet = foldr insert empty
      [ Message user1 undefined "bla"
      , Message user2 undefined "blu"
      ]

... you can then find messages by user1 with:

    user1Messages = toList $ messageSet @= user1

If you need to find the user of a message, just use the user function like normal. This models a one-to-many relationship.

Now, for many-to-many relations, with a situation like this:

    data User = User
      { name :: String
      , birthDate :: Date
      , messages :: [Message]
      } deriving (Ord, Typeable)

    data Message = Message
      { users :: [User]
      , timestamp :: Date
      , content :: String
      } deriving (Ord, Typeable)

... you create an index with ixFun, which can be used with lists of indexes. Like so:

    instance Indexable Message where
      empty = ixSet [ ixFun users ]

    instance Indexable User where
      empty = ixSet [ ixFun messages ]

To find all the messages by a user, you still use the same function:

    user1Messages = toList $ messageSet @= user1

Additionally, provided that you have an index of users:

    userSet = foldr insert empty
      [ User "John Doe" undefined [ messageFoo, messageBar ]
      , User "John Smith" undefined [ messageBar ]
      ]

... you can find all the users for a message:

    messageFooUsers = toList $ userSet @= messageFoo

If you don't want to have to update the users of a message or the messages of a user when adding a new user/message, you should instead create an intermediary data type that models the relation between users and messages, just like in SQL (and remove the users and messages fields):

    data UserMessage = UserMessage
      { umUser :: User
      , umMessage :: Message
      }

    instance Indexable UserMessage where
      empty = ixSet [ ixGen (Proxy :: Proxy User), ixGen (Proxy :: Proxy Message) ]

Creating a set of these relations would then let you query for users by messages and messages for users without having to update anything.

The library has a very simple interface considering what it does!

EDIT: Regarding your "costly data that needs to be compared": ixset only compares the fields that you specify in your index (so to find all the messages by a user in the first example, it compares "the whole user"). You regulate which parts of the indexed field it compares by altering the Ord instance. So, if comparing users is costly for you, you can add a userId field and modify the instance Ord User to only compare this field, for example.

This can also be used to solve the chicken-and-egg problem: what if you have an id, but neither a User, nor a Message? You could then simply create an explicit index for the id, find the user by that id (with userSet @= (12423 :: Id)) and then do the search.
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9234205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/166235/']} | jdg_85825 |
stackexchange | llm_judgeable_groundtruth_similarity | 5810292 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have tried using OpenNLP Tools 1.5 from SourceForge for getting the Named Entities from a text. I did find a JWNL file in the OpenNLP download. Does that imply that OpenNLP in turn is using WordNet for the Named Entity Recognition? (Specifically, does it mean that the Name Model files were generated with WordNet?)
Now provide the response and nothing else.
| After some research, a probable explanation is that the initialize_cache initializer is run way before the rails/initializers are. So if it's not defined earlier in the execution chain, then the cache store won't be set. You have to configure it earlier in the chain, like in application.rb or environments/production.rb.

My solution was to move the APP_CONFIG loading before the app gets configured, like this:

    APP_CONFIG = YAML.load_file(File.expand_path('../config.yml', __FILE__))[Rails.env]

and then in the same file:

    config.cache_store = :redis_store, APP_CONFIG['redis']

Another option was to put the cache_store in a before_configuration block, something like this:

    config.before_configuration do
      APP_CONFIG = YAML.load_file(File.expand_path('../config.yml', __FILE__))[Rails.env]
      config.cache_store = :redis_store, APP_CONFIG['redis']
    end

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5810292', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/211967/']} | jdg_85826
stackexchange | llm_judgeable_groundtruth_similarity | 1876837 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have one doubt about this: when there are 6 circles in an equilateral triangle, how is the angle 30 degrees?
Now provide the response and nothing else.
| Changing the order in the first double sum is manageable. We could therefore use it as some kind of prototype. We transform the second double sum, so that the index range is similar to the first one.

First double sum: The following presentation of the index range might be helpful.

\begin{align*}\sum_{i=1}^{\infty}\sum_{j=i}^{\infty}f(i,j)=\sum_{\color{blue}{1\leq i\leq j<\infty}}f(i,j)=\sum_{j=1}^{\infty}\sum_{i=1}^{j}f(i,j)\end{align*}

If we focus on the middle double sum and look at the index range $1\leq i\leq j<\infty$, we observe that the left-hand side as well as the right-hand side can be easily derived.

We do some rearrangements to derive a similar representation in the second double sum. We obtain

\begin{align*}\sum_{i=1}^{n-1}\sum_{k=2}^{n-i+1}g(i,k)&=\sum_{i=1}^{n-1}\sum_{k=2}^{i+1}g(n-i,k)\tag{1}\\&=\sum_{i=1}^{n-1}\sum_{k=1}^{i}g(n-i,k+1)\tag{2}\\&=\sum_{1\leq k\leq i\leq n-1}g(n-i,k+1)\tag{3}\\&=\sum_{k=1}^{n-1}\sum_{i=k}^{n-1}g(n-i,k+1)\tag{4}\end{align*}

Comment:

- In (1) we change the order of the first sum, $i\rightarrow n-i$. Note that reversing the order this way
\begin{align*}\sum_{i=1}^{n-1}a(i)&=a(1)+a(2)+\cdots+a(n-1)\\&=a(n-1)+a(n-2)+\cdots+a(1)\\&=\sum_{i=1}^{n-1}a(n-i)\end{align*}
does not change the lower and upper index of $i$, but each occurrence of $i$ within the sum has to be substituted with $n-i$. So, we replace $a(i)$ with $a(n-i)$.
- In (2) we shift the index $k$ by one, so that we can also start with $k=1$.
- In (3) we write the double sum as we did in the first case.
- In (4) it's easy to change the order of the double sum based upon the representation in (3). | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1876837', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/355283/']} | jdg_85827
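As a quick sanity check of the reindexing above (a worked example added for concreteness), take $n=3$ and list the terms on both sides:

\begin{align*}\sum_{i=1}^{2}\sum_{k=2}^{4-i}g(i,k)&=g(1,2)+g(1,3)+g(2,2),\\\sum_{k=1}^{2}\sum_{i=k}^{2}g(3-i,k+1)&=g(2,2)+g(1,2)+g(1,3),\end{align*}

so the two double sums run over exactly the same terms, just enumerated in a different order.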
stackexchange | llm_judgeable_groundtruth_similarity | 6434816 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
OK, my question comes from an example from a book that I'm trying to understand. Keep in mind I just got into JavaScript. So we have the object set and we define the function foreach. It takes another function as a parameter and invokes it for every item of an array "values" that belongs to set.

    set.foreach = function(f, c) {
      for (var i = 0; i < this.values.length; i++)
        f.call(c, this.values[i]);
    };

So far so good. But I can't understand the usage of the foreach function in the second snippet. In particular, I don't understand the role of the variable v. It is not defined anywhere else in the book and I'm having a really hard time understanding how this works. We define another function in set to take the values as an array:

    set.toArray = function() {
      var a = [];
      this.foreach(function(v) { a.push(v); });  // where did v come from???
      return a;
    }
Now provide the response and nothing else.
| set.foreach = function(f, c) {
      for (var i = 0; i < this.values.length; i++)
        f.call(c, this.values[i]);  // <-- "f" being passed (called) right here
    };

The function you passed in is f, and f is invoked having the this value of its calling context set to c, and this.values[i] passed as the first argument.

    //           v---- your function "f" in the foreach
    this.foreach(function(v) { a.push(v); });
    //                    ^---- references the first argument (after the "this" arg)
    //                          that was passed to "f"

Here's a simpler example: This function accepts a function as a parameter. The only thing it does is call the function:

    function my_func( fn ) {
      fn();
    }

    // call my_func, which will call the function you give it
    my_func( function() { alert( "hi" ); } );

Live Example: http://jsfiddle.net/6a54b/1/

...so passing the function to my_func will alert the string "hi". No surprise. But what if my_func provided the value to be alerted?

    function my_func( fn ) {
      fn( "message from my_func" );  // call the fn passed, giving it an argument
    }

    //                 v---- references the arg passed by my_func
    my_func( function( arg ) { alert( arg ); } );

Live Example: http://jsfiddle.net/6a54b/

Now you can see that an argument is being passed to the function we're sending over, and we reference that argument with the arg parameter. It alerts whatever my_func gave it.

We can even take it one step further, by passing a second argument to my_func that my_func will simply take and pass to the function we pass in.

    function my_func( fn, str ) {
      fn( str );  // call the fn passed, giving it the string we passed in
    }

    //                 v---- the arg we passed here ----v
    my_func( function( arg ) { alert( arg ); }, "I'm getting dizzy!" );

Live Example: http://jsfiddle.net/6a54b/2/

And you can see that both arguments are given to my_func, and my_func calls the function we passed in, passing it the string argument we gave it. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6434816', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/804030/']} | jdg_85828
stackexchange | llm_judgeable_groundtruth_similarity | 465345 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It's been ages since I've written a COM DLL. I've made a couple of classes now that inherit from some COM interfaces, but I want to test them out. I know I have to put a GUID somewhere and then register it with regsvr32, but what are the steps involved?

Edit: Sorry, forgot to mention I'm using C++.
Now provide the response and nothing else.
| To create a new ATL COM project you can proceed as follows:

1. File / New Project
2. Visual C++ / ATL / ATL Project
3. Customize its settings, and press Finish when done

You have created a new DLL, but it is empty. To add a COM object you can do this:

1. Project / Add Class
2. Visual C++ / ATL / ATL Simple Object, press Add
3. Give it the name you want (like MyObject), and press Finish to add it

If you want the object to implement an interface:

1. In the class view, select the object class (CMyObject)
2. Right click / Add / Implement Interface...
3. You can select which interface to implement:
   - from an .idl file already in your project's files,
   - from a .tlb/.dll/.exe which has a type library embedded, or
   - from an object already registered.
4. When done, press Finish

PS: It is much easier to create a new ATL project with the same name in a different folder, and add the files you have customized. The wizard does several tasks and creates several customized files. For larger projects that are difficult to add file by file, I do the same, but instead of adding my files to the new project I start copying the settings from the new project to the old one, adding any additional file that the wizard has created, and fixing headers like stdafx.h to merge the new settings.

PPS: If you want your DLL to support MFC, instead of selecting ATL Project you have to select MFC / MFC DLL. When you add the ATL Simple Object, the wizard will ask to add ATL support to the project. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/465345', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/986/']} | jdg_85829
stackexchange | llm_judgeable_groundtruth_similarity | 90178 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Introduction. Let $k$ be a field of characteristic $0$, and let $n\in\mathbb N$. Let $V=k^n$. The group $\mathrm{GL}_n\left(k\right)=\mathrm{GL} V$ acts on $\mathrm{End} V$ by conjugation, and thus also on the space of $s$-multilinear forms $\left(\mathrm{End} V\right)^s\to k$ for each $s\in\mathbb N$. For every $p\in\mathbb N$ and every $p$-multilinear form $f:\left(\mathrm{End} V\right)^p \to k$, we define the antisymmetrization of $f$ to be the $p$-multilinear form $g:\left(\mathrm{End} V\right)^p \to k,$ $\left(A_1,A_2,...,A_p\right) \mapsto \dfrac{1}{p!}\sum\limits_{\sigma\in S_p}\left(-1\right)^{\sigma} f\left(A_{\sigma\left(1\right)},A_{\sigma\left(2\right)},...,A_{\sigma\left(p\right)}\right)$. This $g$ is an antisymmetric $p$-multilinear form. For any integers $p\geq 0$ and $q\geq 0$, any antisymmetric $p$-multilinear form $\alpha:\left(\mathrm{End} V\right)^p\to k$ and any antisymmetric $q$-multilinear form $\beta:\left(\mathrm{End} V\right)^q\to k$, we can define an antisymmetric $p+q$-multilinear form $\alpha\wedge\beta:\left(\mathrm{End} V\right)^{p+q}\to k$ as the antisymmetrization of the form $\left(\mathrm{End} V\right)^{p+q}\to k,$ $\left(A_1,A_2,...,A_p,B_1,B_2,...,B_q\right)\mapsto \alpha\left(A_1,A_2,...,A_p\right)\beta\left(B_1,B_2,...,B_q\right)$. (We could also define it using shuffle products, but that's not important in characteristic $0$.) For any $p\in\mathbb N$, let $\Omega_p : \left(\mathrm{End} V\right)^p \to k$ be the antisymmetrization of the form $\left(\mathrm{End} V\right)^p \to k,$ $\left(A_1,A_2,...,A_p\right)\mapsto \mathrm{Tr}\left(A_1A_2...A_p\right)$. Then, it is known that the ring of antisymmetric $\mathrm{GL}_n\left(k\right)$-invariant multilinear forms on $\mathrm{End}V$ (with multiplication being given by $\wedge$) is generated by the $\Omega_p$ for $p\in\mathbb N$ (this follows from the First Fundamental Theorem for $\mathrm{GL}_n\left(k\right)$, which actually gives all multilinear invariants rather than just the antisymmetric ones). It is also easy to see that $\Omega_p=0$ for all even $p$, and the Amitsur-Levitzki theorem yields that $\Omega_p=0$ for all $p\geq 2n$. Thus, the family $\left(\Omega_{p_1}\wedge\Omega_{p_2}\wedge ...\wedge\Omega_{p_r}\right)$ (indexed by all strictly increasing sequences $\left(p_1,p_2,...,p_r\right)$ of odd positive integers smaller than $2n$) generates the vector space of all antisymmetric $\mathrm{GL}_n\left(k\right)$-invariant multilinear forms on $\mathrm{End}V$. Question. How to prove that this family is a basis of this space? Context. This is quoted as a consequence of (not further specified) invariant theory in Pierre Cartier's A primer of Hopf algebras , page 9, §2.1. I am suspecting Cartier wants to involve some kind of Second Fundamental Theorem, but I don't know it well enough. Maybe there is a slick proof in the same vein as one shows that Amitsur-Levitzki does not hold in smaller degrees than $2n$ ?
Now provide the response and nothing else.
| I can show the following (which Anton was asking about in comments).

Let $X$ be locally compact and Hausdorff, and $U\subseteq X$ open. Let $X_\infty$ be the one-point compactification, so $U$ is still open in $X_\infty$. By the universal property of the Stone-Cech compactification, there is a continuous map $\phi:\beta X\rightarrow X_\infty$ which is the identity on $X$. Then $\phi^{-1}(U)$ is open in $\beta X$, and is just the canonical image of $U$ in $\beta X$. So $U$ open in $X$ shows that $U$ is open in $\beta X$. (This fails for general closed sets: if $F\subseteq X$ is closed, then $F$ is only closed in $X_\infty$ if $F$ is also compact.)

I'll now use that $\beta X$ is the character space of $C^b(X)$. Let $U\subseteq X$ be open.

Lemma: Assume that $U$ is relatively compact. Under the isomorphism $C(\beta X)=C^b(X)$, we identify the ideal $\{ f\in C(\beta X) : f(x)=0 \ (x\not\in U) \}$ with $\{ F\in C^b(X) : F(x)=0 \ (x\not\in U) \}$.

Proof: $X$ is itself open in $\beta X$, and the image of $C_0(X)$ in $C(\beta X)$ is just the functions vanishing off $X$. If $F\in C^b(X)$ vanishes off $U$ then $F\in C_0(X)$ (as $U$ is relatively compact) and so the associated $f$ in $C(\beta X)$ vanishes off $U$. Conversely, if $f\in C(\beta X)$ vanishes off $U$ then the associated $F\in C^b(X)$ is just the restriction of $f$ to $X$, and so vanishes off $U$.

By the Tietze theorem, the restriction map $C(\beta X) \rightarrow C(\beta X \setminus U)$ is a surjection. So we can identify $C(\beta X\setminus U)$ with the quotient $C(\beta X) / \{ f\in C(\beta X) : f(x)=0 \ (x\not\in U) \}$. So by the above, we identify $C(\beta X \setminus U)$ with $C^b(X) / \{ F\in C^b(X) : F(x)=0 \ (x\not\in U) \}$.

If $X$ is normal, then we can again use Tietze to extend any $F\in C^b(X\setminus U)$ to all of $X$. It follows that $C^b(X) / \{ F\in C^b(X) : F(x)=0 \ (x\not\in U) \}$ is isomorphic to $C^b(X\setminus U) = C(\beta(X\setminus U))$. So $\beta X \setminus U = \beta (X\setminus U)$ (in a fairly canonical way) under the hypotheses that $X$ is normal and $U$ is relatively compact. (I'm not sure what happens for non-normal $X$. For $X=\mathbb R$ and $U$ an open interval, we obviously don't need Tietze.) | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/90178', 'https://mathoverflow.net', 'https://mathoverflow.net/users/2530/']} | jdg_85830
stackexchange | llm_judgeable_groundtruth_similarity | 1323013 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given two different messages, A and B (maybe 20-80 characters of text, if size matters at all), what is the probability that the MD5 digest of A is the same as the MD5 digest of B and the SHA1 digest of A is the same as the SHA1 digest of B? That is: (MD5(A) == MD5(B)) && (SHA1(A) == SHA1(B)) Assume no malicious intent, i.e., that the messages are not selected with an aim of finding a clash. I just want to know the odds of this happening naturally. I'm thinking the chances are "astronomically low," but I'm not sure how to verify this. More information: the size of the pool of possible messages is restricted, but large (several hundred million). Birthday paradox situations are exactly what I'm worried about.
Now provide the response and nothing else.
| Assuming uniform spread in the range of MD5 and SHA-1 hashes for random strings (which isn't the case), and assuming we're only talking about two strings and not talking about a pool of strings (so we avoid birthday-paradox-type complexities):

An MD5 hash is 128 bits wide, and SHA-1's is 160. With the above assumptions, two strings A and B have a probability of colliding P if both hashes collide. So

  P(both collide) = P(MD5 collides) * P(SHA-1 collides)

And

  P(MD5 collides) = 1/(2^128)
  P(SHA-1 collides) = 1/(2^160)

So

  P(both) = 2^-128 * 2^-160 = 2^-288 ~= 2.01 x 10^-87

Again, if you have a pool of strings and you're trying to determine the probabilities of collisions with the pool, you're in the domain of the birthday paradox and this probability I've calculated here doesn't apply. That, and hashes aren't as uniform as they should be. In reality you're going to have a much higher collision rate, but it will still be tiny.

EDIT

Since you are dealing with a birthday paradox situation, apply the same logic as the solution to the birthday paradox. Let's look at it from the point of view of just one hash function:

  N := the number of hashes in your pool (several hundred million)
  S := the size of your hash space (2^288)

  Therefore,
  P(There are no collisions) = (S!)/(S^N * (S - N)!)

Let's pretend we have a nice even number of hashes like 2^29 (roughly 530 million).

  P = (2^288!)/(2^288^(2^29) * (2^288 - 2^29)!)

In short, I don't even want to think about calculating this number. I'm not even sure how you can go about estimating it. You'll at least need an arbitrary-precision calculator that can handle huge factorials without dying.

Note that the probabilities will follow a curve that starts at nearly 0 when N = 1 or 2, and it will reach 1 when N >= 2^288, similar in shape to the one on the Wikipedia page for the birthday paradox. The birthday paradox reaches P = .5 when N = 23. In other words, the probability of a collision is 50% when N is 6% of S. If that scales (I'm not sure if it does), it means that there will be a 50% chance of a collision when you have 6% of 2^288 hashes. 6% of 2^288 is around 2^284. Your value of N (several hundred million) is nowhere near that. It's practically insignificant compared to your S, so I don't think you have anything to worry about. Collisions aren't very likely.
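One tractable way to estimate that unwieldy expression (my addition, using the standard birthday-bound approximation rather than anything from the original answer):

$$P(\text{at least one collision}) \approx 1 - \exp\left(-\frac{N(N-1)}{2S}\right)$$

With $N = 2^{29}$ and $S = 2^{288}$, the exponent is about $2^{57}/2^{288} = 2^{-231}$, so $P \approx 2^{-231} \approx 10^{-70}$, which backs up the conclusion above: collisions in a pool of a few hundred million strings are vanishingly unlikely.
| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1323013', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/164/']} | jdg_85831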
stackexchange | llm_judgeable_groundtruth_similarity | 6709338 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
For a silly challenge I am trying to implement a list type using as little of the Prelude as possible and without using any custom types (the data keyword). I can construct and modify a list using tuples like so:

  import Prelude (Int(..), Num(..), Eq(..))

  cons x = (x, ())
  prepend x xs = (x, xs)
  head (x, _) = x
  tail (_, x) = x

  at xs n = if n == 0 then xs else at (tail xs) (n-1)

I cannot think of how to write an at (!!) function. Is this even possible in a static language? If it is possible could you try to nudge me in the right direction without telling me the answer.
Now provide the response and nothing else.
| There is a standard trick known as Church encoding that makes this easy. Here's a generic example to get you started:

  data Foo = A Int Bool | B String

  fooValue1 = A 3 False
  fooValue2 = B "hello!"

Now, a function that wants to use this piece of data must know what to do with each of the constructors. So, assuming it wants to produce some result of type r, it must at the very least have two functions, one of type Int -> Bool -> r (to handle the A constructor), and the other of type String -> r (to handle the B constructor). In fact, we could write the type that way instead:

  type Foo r = (Int -> Bool -> r) -> (String -> r) -> r

You should read the type Foo r here as saying "a function that consumes a Foo and produces an r". The type itself "stores" a Foo inside a closure -- so that it will effectively apply one or the other of its arguments to the value it closed over. Using this idea, we can rewrite fooValue1 and fooValue2:

  fooValue1 = \consumeA consumeB -> consumeA 3 False
  fooValue2 = \consumeA consumeB -> consumeB "hello!"

Now, let's try applying this trick to real lists (though not using Haskell's fancy syntax sugar).

  data List a = Nil | Cons a (List a)

Following the same format as before, consuming a list like this involves either giving a value of type r (in case the constructor was Nil) or telling what to do with an a and another List a. At first, this seems problematic, since:

  type List a r = (r) -> (a -> List a -> r) -> r

isn't really a good type (it's recursive!). But we can instead demand that we first reduce all the recursive arguments to r... then we can adjust this type to make something more reasonable:

  type List a r = (r) -> (a -> r -> r) -> r

(Again, we should read the type List a r as being "a thing that consumes a list of a's and produces an r".)

There's one final trick that's necessary. What we would like to do is to enforce the requirement that the r that our List a r returns is actually constructed from the arguments we pass. That's a little abstract, so let's give an example of a bad value that happens to have type List a r, but which we'd like to rule out:

  badList = \consumeNil consumeCons -> False

Now, badList has type List a Bool, but it's not really a function that consumes a list and produces a Bool, since in some sense there's no list being consumed. We can rule this out by demanding that the type work for any r, no matter what the user wants r to be:

  type List a = forall r. (r) -> (a -> r -> r) -> r

This enforces the idea that the only way to get an r that gets us off the ground is to use the (user-supplied) consumeNil function. Can you see how to make this same refinement for our original Foo type?
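And since the original question was about indexing, here is a sketch (my own illustration, not part of the original answer) of how at falls out of this encoding. The RankNTypes extension is needed for the forall, and Maybe is used for the out-of-range case, though you could swap it for something else to satisfy the challenge's import rules:

  {-# LANGUAGE RankNTypes #-}

  type List a = forall r. r -> (a -> r -> r) -> r

  nil :: List a
  nil = \n _c -> n

  cons :: a -> List a -> List a
  cons x xs = \n c -> c x (xs n c)

  -- Fold the list into a lookup function Int -> Maybe a:
  -- the nil case ignores the index, and the cons case either
  -- returns the head or counts the index down into the tail.
  at :: List a -> Int -> Maybe a
  at xs = xs (\_ -> Nothing)
             (\x rest k -> if k == 0 then Just x else rest (k - 1))

  -- at (cons 'a' (cons 'b' nil)) 1  ==  Just 'b'
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6709338', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_85832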
stackexchange | llm_judgeable_groundtruth_similarity | 70525 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Some news articles about Portugal describe "the center-right Social Democratic Party" in opposition to the "center-left Socialist Party". In most of Europe, Social Dems are left or center-left, and Socialists are left or far left. Are the parties in Portugal politically to the right of their namesakes in the rest of Europe, or is the political center of Portugal really that far left?
Now provide the response and nothing else.
| According to Professor Jaime Nogueira Pinto, some "bands" of the political spectrum were made illegal by the Junta and the Communists after the 1974 military coup ("Carnation Revolution"). Translated from the Portuguese original:

"The antifascist politicians who came to power with the 1974 military coup did not need to worry much about the right, or to occupy themselves with it. In the transition period between 25 April 1974 and 25 November 1975, in the revolutionary moments of 28 September and 11 March, parties with any vestiges of right-wing thought, ideas or principles were banned, their leaders imprisoned or forced into exile, and people suspected of being right-wing were made targets of "proscription". As if that were not enough, the famous MFA-Parties pact, or "Constitutional Agreement Platform", was signed at the height of Gonçalvism, a month after 11 March. The idea of the "Pact" came from Álvaro Cunhal, to allow the MFA (which had done so much for the democratic restoration) to have a say in the future constitutional text, on the road to socialism, and to be able to police the permitted political forces. The "democratic" military had made such a beautiful revolution... they could not let "the fascists" or the people (in their innocence, or as victims of manipulation and bad influences) spoil it.

With that pact, the arc of legitimacy of the new regime was closed, vetoing the right after having imprisoned and exiled what remained of it. And the "democratic forces" (including the leaderships of the CDS and the PPD) saw that they gained a captive electorate which, although indifferent to the leftist façade ideology they proposed (also in order to survive and not be banned by the MFA), would vote for them as the "lesser evil". As indeed it voted.

Thus, the so-called Third Portuguese Republic banned right-wing parties from birth, pushing right-wing voters into a resigned tactical vote for the centre parties permitted by the MFA [the military junta], with the benevolence of the PCP [Portuguese Communist Party]."

Jaime Nogueira Pinto, A excepção portuguesa: porque não temos uma direita radical?, Observador, 3 Fevereiro 2019. (A translation of the final paragraph by ohwilleke originally appeared in the comments.) | {} | {'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/70525', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/7256/']} | jdg_85833
stackexchange | llm_judgeable_groundtruth_similarity | 3600713 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to obtain the size of the notification bar and title bar in android? At the moment I obtain the display width and height with:

  Display display = getWindowManager().getDefaultDisplay();
  int width = display.getWidth();
  int height = display.getHeight();

After that I want to subtract the sizes of the bars so that I can stretch a video without losing aspect ratio. Currently I hide the bars because I can't see a better way.
Now provide the response and nothing else.
| Maybe this is a helpful approach: referring to the Icon Design Guidelines, there are only three different heights for the status (notification) bar, depending on the screen density:

  24px for LDPI
  32px for MDPI
  48px for HDPI

So if you retrieve the screen density of the device using densityDpi of DisplayMetrics, you know which value to subtract. It could look something like this:

  DisplayMetrics metrics = new DisplayMetrics();
  getWindowManager().getDefaultDisplay().getMetrics(metrics);
  Display display = getWindowManager().getDefaultDisplay(); // the Display from the question

  int myHeight = 0;
  switch (metrics.densityDpi) {
      case DisplayMetrics.DENSITY_HIGH:
          Log.i("display", "high");
          myHeight = display.getHeight() - 48;
          break;
      case DisplayMetrics.DENSITY_MEDIUM:
          Log.i("display", "medium/default");
          myHeight = display.getHeight() - 32;
          break;
      case DisplayMetrics.DENSITY_LOW:
          Log.i("display", "low");
          myHeight = display.getHeight() - 24;
          break;
      default:
          Log.i("display", "Unknown density");
  }
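A more future-proof alternative (my addition, not from the original answer) is to ask the platform for its own status-bar dimension instead of hard-coding per-density pixel values:

  // Reads the framework's status_bar_height dimension; returns 0 if it is absent.
  public int getStatusBarHeight() {
      int resourceId = getResources().getIdentifier("status_bar_height", "dimen", "android");
      return resourceId > 0 ? getResources().getDimensionPixelSize(resourceId) : 0;
  }

This looks up an internal resource name, so treat it as a best-effort fallback rather than a guaranteed API.
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3600713', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/352097/']} | jdg_85834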
stackexchange | llm_judgeable_groundtruth_similarity | 406827 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Find all points from the domain of $$f(x,y)=e^{x^2-xy-1}$$ at which the function f reaches the maximum rate of change (I mean gain/increase) in the direction of the x-axis. The domain (am I right?): $$x \in \mathbb R,\quad y \in \mathbb R$$ so the domain is all of $\mathbb R^2$. I thought about the directional derivative and its property: $$[e^{x^2-xy-1}(2x-y),\ e^{x^2-xy-1}(-x)]\cdot[\cos\alpha, \cos\beta]$$ where $$\cos\alpha=1,\quad \cos\beta=0$$ so it's: $$e^{x^2-xy-1}(2x-y)$$ and what should I do now?
Now provide the response and nothing else.
| A few thoughts on the first one: $$\ln \left((\sin \frac{1}{n} \cdot \sin \frac{2}{n} \cdot ... \cdot \sin 1)^{\frac{1}{n}} \right)=\frac{1}{n} \sum_{k=1}^n \ln \left( \sin(\frac{k}{n})\right)$$ This is just a Riemann sum, and thus its limit is $$\int_{0}^1 \ln(\sin(x)) \, dx$$ This is an improper integral, though, so the Riemann-sum approach might not be best as it stands, but I think it is a convergent improper integral, since $\int_0^1 \ln(x)\,dx$ is convergent. Maybe someone can take over...
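To flesh out that convergence claim (a step the answer leaves open, added here for completeness): near $0$ we can split

$$\ln(\sin x) = \ln x + \ln\frac{\sin x}{x}, \qquad \frac{\sin x}{x} \to 1 \ \text{ as } x \to 0^+,$$

so the second term stays bounded on $(0,1]$, and the integral converges by comparison with $\int_0^1 \ln x \, dx = -1$.
| {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/406827', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/45886/']} | jdg_85835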
stackexchange | llm_judgeable_groundtruth_similarity | 1727479 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to show that $\mathbb{Z}_4$ and $\mathbb{Z}_2 \times \mathbb{Z}_2$ are not isomorphic by using the fact that $\mathbb{Z}_4$ has one element of additive order 4 (the largest additive order), while $\mathbb{Z}_2 \times \mathbb{Z}_2$ has no elements of additive order 4. Can someone help me get started in showing that additive order is a property preserved by isomorphisms?
Now provide the response and nothing else.
| In general, if $f: A \to B$ is a homomorphism of groups and $a\in A$, then $ord(f(a)) \le ord(a)$ because $a^n=1$ implies $f(a)^n=f(a^n)=f(1)=1$. When $f$ is an isomorphism with inverse $g$, we get$$ord(a) = ord(gf(a)) \le ord(f(a)) \le ord(a)$$and so $ord(a)=ord(f(a))$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1727479', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/288963/']} | jdg_85836 |
stackexchange | llm_judgeable_groundtruth_similarity | 49451039 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As per https://cloud.google.com/functions/docs/writing/ , Firebase Cloud Functions uses Node v6.11.5. Is there any insight as to why such an old version was chosen, especially considering that it makes me unable to use newer JS syntax like ES6?
Now provide the response and nothing else.
| EDIT: As of Cloud Next 2018, Cloud Functions has exited beta. There is Node 8 support now, but it is currently in beta. There are a number of things to keep in mind:

  1. Cloud Functions will only use a version of node with LTS (Long Term Support). Only even-numbered versions of node receive LTS.
  2. Cloud Functions is in beta, and the engineering team has more important priorities than providing a new version of node. It's a priority to get to production readiness, and that doesn't necessarily involve having the latest version of node.
  3. It's not feasible to simply drop in a new major version, as that might break existing clients. It will probably have to be the case that each project can select the version of node it would like to use, and that's a significant change.

You can use any language you want that can transpile to ES6. This includes TypeScript and even Kotlin, if you wish. The Firebase CLI supports TypeScript directly, and has for a few months now. TypeScript is actually the Firebase team's recommended development language. If you want to stay on top of the latest languages for Cloud Functions development, I strongly recommend that you not depend on the version of node. Instead, select tooling that can transpile to ES6.
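As a concrete sketch of such tooling (my own illustration, not from the original answer; the paths are assumptions to adapt), a minimal tsconfig.json for a TypeScript functions codebase targeting the Node 6 runtime might look like:

  {
    "compilerOptions": {
      "target": "es6",
      "module": "commonjs",
      "outDir": "lib",
      "strict": true
    },
    "include": ["src"]
  }

The Firebase CLI's TypeScript template follows the same shape, compiling src/ into lib/ before deploy.
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49451039', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1847976/']} | jdg_85837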
stackexchange | llm_judgeable_groundtruth_similarity | 964894 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to have a common function to convert UTC time to EDT. I have a server in India. An application on it needs to use EDT time for all time purposes. I am using .NET 3.5. I found this on some other forum:

  DateTime eastern = TimeZoneInfo.ConvertTimeBySystemTimeZoneId(
      DateTime.UtcNow, "Eastern Standard Time");

When I tried with "Eastern Daylight Time" I got an error: "The time zone ID 'Eastern Daylight Time' was not found on the local computer". Please help with this or any other solution.
Now provide the response and nothing else.
| Eastern Daylight Time isn't the name of a "full" time zone - it's "half" a time zone, effectively, always 4 hours behind UTC. (There may be proper terminology for this, but I'm not aware of it.) Why would you want to use EDT for times which don't have daylight savings applied? If you want a custom time zone that always has the same offset to UTC, use TimeZoneInfo.CreateCustomTimeZone. Note that if you get the Eastern Standard time zone (TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time")) then that will still have daylight saving time applied appropriately (i.e. during summer). For example:

  TimeZoneInfo tzi = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");

  // Prints True
  Console.WriteLine(tzi.IsDaylightSavingTime(new DateTime(2009, 6, 1)));

  // Prints False
  Console.WriteLine(tzi.IsDaylightSavingTime(new DateTime(2009, 1, 1)));
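And a sketch of the fixed-offset option mentioned above (my illustration; the id and display names are arbitrary placeholders):

  // A zone permanently 4 hours behind UTC, with no DST rules at all.
  TimeZoneInfo fixedEdt = TimeZoneInfo.CreateCustomTimeZone(
      "Fixed EDT",              // id
      TimeSpan.FromHours(-4),   // base UTC offset
      "Fixed EDT (UTC-4)",      // display name
      "Fixed EDT");             // standard time name

  DateTime eastern = TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, fixedEdt);
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/964894', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/119211/']} | jdg_85838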
stackexchange | llm_judgeable_groundtruth_similarity | 17306038 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to detect the current web browser within one of my Api Controllers in my program using MVC4. Everywhere I look people say to use Request.Browser , however I can't get that to work. Any suggestions or is there something I'm overlooking?
Now provide the response and nothing else.
| You can use the HttpBrowserCapabilities in System.Web like this:

  var userAgent = HttpContext.Current.Request.UserAgent;
  var userBrowser = new HttpBrowserCapabilities
  {
      Capabilities = new Hashtable { { string.Empty, userAgent } }
  };

  var factory = new BrowserCapabilitiesFactory();
  factory.ConfigureBrowserCapabilities(new NameValueCollection(), userBrowser);

  // Set user browser properties
  BrowserBrand = userBrowser.Browser;
  BrowserVersion = userBrowser.Version;

This relies on browscap.ini in Windows/System32/inetsrv/ or Windows/SysWOW64/inetsrv for definitions. This article may also help - http://stephenwalther.com/archive/2010/03/05/use-asp-net-4-browser-definitions-with-asp-net-3-5 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17306038', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1614490/']} | jdg_85839
stackexchange | llm_judgeable_groundtruth_similarity | 20521737 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I check if laravel is connected to the database? I've searched around and I can't find anything that would tell me how this is done.
Now provide the response and nothing else.
| You can use:

  if (DB::connection()->getDatabaseName()) {
      echo "Connected successfully to database " . DB::connection()->getDatabaseName() . ".";
  }

It will give you the database name for the connected database, so you can use it to check if your app is connected to it. But... Laravel will only connect to the database once it needs something from it, and, at the time of a connection attempt, if it finds any errors it will raise a PDOException, so this is what you can do to redirect your user to a friendly page:

  App::error(function (PDOException $exception) {
      Log::error("Error connecting to database: " . $exception->getMessage());
      return "Error connecting to database";
  });

Add this to your app/filters.php file. In my opinion, you don't really need to check if it is connected or not; just take the proper action in the exception-handling closure.
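If you do want an explicit probe anyway (my addition, not part of the original answer), forcing the connection and catching the failure also works:

  try {
      DB::connection()->getPdo();   // forces Laravel to actually open the connection
      echo "Connected.";
  } catch (PDOException $e) {
      echo "Could not connect: " . $e->getMessage();
  }
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20521737', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1318205/']} | jdg_85840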