Dataset columns:

column                  dtype           length / values
source                  stringclasses   1 value
task_type               stringclasses   1 value
in_source_id            stringlengths   1 - 8
prompt                  stringlengths   209 - 40.4k
gold_standard_solution  stringlengths   0 - 56.7k
verification_info       stringclasses   1 value
metadata                stringlengths   138 - 225
problem_id              stringlengths   9 - 10
stackexchange
llm_judgeable_groundtruth_similarity
9726816
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why can't I create or overload the operator := in my class in C++? Is there a list of operators that I can overload? Can I only overload existing operators, or can I also create new custom ones? Now provide the response and nothing else.
Because no such operator exists in C++. You cannot roll your own operators because you would need to modify the grammar of the language for the parser to recognize them. You can find a list of the available operators here or here (or better yet by reading the standard if you can get a copy). Finally, be advised that overloading operators like there is no tomorrow is a mistake that pretty much every C++ beginner makes; operators are really nothing more than functions, and unless there is a very good case to be made for overloading an operator most of the time it's a better idea to just write a plain function for your class instead. For example, std::string does not have an operator* even though it could be argued that it's convenient to write string sleepy = string("z") * 40;
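For illustration, here is a minimal, compilable sketch of the distinction the answer draws between overloading an existing operator and preferring a plain function; the Money type and the repeat() helper are invented for this example and are not from the original post:

#include <iostream>
#include <string>

struct Money {
    long cents;
    // Overloading an existing operator is allowed and reads naturally here.
    Money& operator+=(const Money& other) { cents += other.cents; return *this; }
};

// Something like "repeat this string n times" is usually clearer as a plain
// named function than as a hypothetical operator* on std::string.
std::string repeat(const std::string& s, int n) {
    std::string out;
    for (int i = 0; i < n; ++i) out += s;
    return out;
}

int main() {
    Money a{100};
    Money b{250};
    a += b;                               // uses the overloaded operator
    std::string sleepy = repeat("z", 40); // a plain function instead of an operator*
    std::cout << a.cents << " " << sleepy << "\n";
    return 0;
}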
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9726816', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1272436/']}
jdg_76741
stackexchange
llm_judgeable_groundtruth_similarity
26793402
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been trying to understand fork() behavior. This time in a for-loop . Observe the following code: #include <stdio.h>void main(){ int i; for (i=0;i<3;i++) { fork(); // This printf statement is for debugging purposes // getppid(): gets the parent process-id // getpid(): get child process-id printf("[%d] [%d] i=%d\n", getppid(), getpid(), i); } printf("[%d] [%d] hi\n", getppid(), getpid());} Here is the output: [6909][6936] i=0[6909][6936] i=1[6936][6938] i=1[6909][6936] i=2[6909][6936] hi[6936][6938] i=2[6936][6938] hi[6938][6940] i=2[6938][6940] hi[1][6937] i=0[1][6939] i=2[1][6939] hi[1][6937] i=1[6937][6941] i=1[1][6937] i=2[1][6937] hi[6937][6941] i=2[6937][6941] hi[6937][6942] i=2[6937][6942] hi[1][6943] i=2[1][6943] hi I am a very visual person, and so the only way for me to truly understand things is by diagramming. My instructor said there would be 8 hi statements. I wrote and ran the code, and indeed there were 8 hi statements. But I really didn’t understand it. So I drew the following diagram: Diagram updated to reflect comments :) Observations: Parent process (main) must iterate the loop 3 times. Then printf is called On each iteration of parent for-loop a fork() is called After each fork() call, i is incremented, and so every child starts a for-loop from i before it is incremented At the end of each for-loop, "hi" is printed Here are my questions: Is my diagram correct? Why are there two instances of i=0 in the output? What value of i is carried over to each child after the fork()? If the same value of i is carried over, then when does the "forking" stop? Is it always the case that 2^n - 1 would be a way to count the number of children that are forked? So, here n=3 , which means 2^3 - 1 = 8 - 1 = 7 children, which is correct? Now provide the response and nothing else.
Here's how to understand it, starting at the for loop.

- Loop starts in the parent, i == 0. The parent fork()s, creating child 1. You now have two processes. Both print i=0.
- Loop restarts in both processes, now i == 1. The parent and child 1 fork(), creating children 2 and 3. You now have four processes. All four print i=1.
- Loop restarts in all four processes, now i == 2. The parent and children 1 through 3 all fork(), creating children 4 through 7. You now have eight processes. All eight print i=2.
- Loop restarts in all eight processes, now i == 3. The loop terminates in all eight processes, as i < 3 is no longer true.
- All eight processes print hi, and all eight processes terminate.

So you get i=0 printed two times, i=1 printed four times, i=2 printed eight times, and hi printed eight times.
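For readers who want to experiment, here is a compilable variant of the snippet from the question with the parent/child bookkeeping trimmed down; the wait() loop at the end is an addition (not in the original code) so each process reaps the children it created:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    for (int i = 0; i < 3; i++) {
        fork();  /* every live process forks: 1 -> 2 -> 4 -> 8 processes */
        printf("i=%d pid=%d\n", i, (int)getpid());
    }
    printf("hi from pid=%d\n", (int)getpid()); /* executed by all 8 processes */
    while (wait(NULL) > 0) { }                 /* reap whatever children this process made */
    return 0;
}

Counting the lines of output (2 + 4 + 8 + 8 = 22) matches the doubling described above.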
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26793402', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3154832/']}
jdg_76742
stackexchange
llm_judgeable_groundtruth_similarity
16637271
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Pandas has the following examples for how to store Series , DataFrames and Panels in HDF5 files: Prepare some data: In [1142]: store = HDFStore('store.h5')In [1143]: index = date_range('1/1/2000', periods=8)In [1144]: s = Series(randn(5), index=['a', 'b', 'c', 'd', 'e'])In [1145]: df = DataFrame(randn(8, 3), index=index, ......: columns=['A', 'B', 'C']) ......:In [1146]: wp = Panel(randn(2, 5, 4), items=['Item1', 'Item2'], ......: major_axis=date_range('1/1/2000', periods=5), ......: minor_axis=['A', 'B', 'C', 'D']) ......: Save it in a store: In [1147]: store['s'] = sIn [1148]: store['df'] = dfIn [1149]: store['wp'] = wp Inspect what's in the store: In [1150]: storeOut[1150]: <class 'pandas.io.pytables.HDFStore'>File path: store.h5/df frame (shape->[8,3]) /s series (shape->[5]) /wp wide (shape->[2,5,4]) Close the store: In [1151]: store.close() Questions: In the code above, when is the data actually written to disk ? Say I want to add thousands of large dataframes living in .csv files to a single .h5 file. I would need to load them and add them to the .h5 file one by one since I cannot afford to have them all in memory at once as they would take too much memory. Is this possible with HDF5? What would be the correct way to do it? The Pandas documentation says the following: "These stores are not appendable once written (though you simply remove them and rewrite). Nor are they queryable ; they must be retrieved in their entirety." What does it mean by not appendable nor queryable ? Also, shouldn't it say once closed instead of written ? Now provide the response and nothing else.
As soon as the statement is executed, e.g. store['df'] = df . The close just closes the actual file (which will be closed for you if the process exits, but will print a warning message). Read the section http://pandas.pydata.org/pandas-docs/dev/io.html#storing-in-table-format It is generally not a good idea to put a LOT of nodes in an .h5 file. You probably want to append and create a smaller number of nodes. You can just iterate thru your .csv files and store/append them one by one. Something like:

for f in files:
    df = pd.read_csv(f)
    df.to_hdf('file.h5', f)

would be one way (creating a separate node for each file). Not appendable - once you write it, you can only retrieve it all at once, e.g. you cannot select a sub-section. If you have a table, then you can do things like:

pd.read_hdf('my_store.h5', 'a_table_node', ['index>100'])

which is like a database query, only getting part of the data. Thus, a store is not appendable, nor queryable, while a table is both.
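The answer recommends appending to a smaller number of nodes; here is a concrete sketch of that approach. The file names are hypothetical, and the call is written against the current pandas API (keyword arguments) rather than the older positional form shown above:

import pandas as pd

files = ["part1.csv", "part2.csv"]  # hypothetical CSV files with identical columns

for f in files:
    df = pd.read_csv(f)
    # format='table' makes the node appendable and queryable;
    # append=True keeps adding rows to the same node instead of creating new ones
    df.to_hdf("store.h5", key="data", mode="a", format="table", append=True)

# Later, pull back only part of the data (assumes the index is numeric/queryable):
subset = pd.read_hdf("store.h5", "data", where="index > 100")

This keeps memory usage bounded by one CSV at a time while still ending up with a single queryable table.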
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16637271', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/283296/']}
jdg_76743
stackexchange
llm_judgeable_groundtruth_similarity
46806335
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just want to create a REST API that receives a file, process it and return some information. My problem is that I am following this example: http://www.django-rest-framework.org/api-guide/parsers/#fileuploadparser And I can't make it work using Postman or curl, I think I am missing something. The parser always gives me these two errors: FileUpload parse error - none of upload handlers can handle the stream Missing filename. Request should include a Content-Disposition header with a filename parameter. This is the code: views.py: class FileUploadView(APIView): parser_classes = (FileUploadParser,) def post(self, request, filename, format=None): file_obj = request.data['file'] # ... # do some stuff with uploaded file # ... return Response(status=204) def put(self, request, filename, format=None): file_obj = request.data['file'] # ... # do some stuff with uploaded file # ... return Response(status=204) urls.py urlpatterns = [ url(r'predict/(?P<filename>[^/]+)$', app.views.FileUploadView.as_view())] settings.py """Django settings for GenderAPI project.Generated by 'django-admin startproject' using Django 1.9.1.For more information on this file, seehttps://docs.djangoproject.com/en/1.9/topics/settings/For the full list of settings and their values, seehttps://docs.djangoproject.com/en/1.9/ref/settings/"""import osimport posixpathLOGGING = { 'version': 1, 'disable_existing_loggers': False, 'handlers': { 'file': { 'level': 'DEBUG', 'class': 'logging.FileHandler', 'filename': 'debug.log', }, }, 'loggers': { 'django': { 'handlers': ['file'], 'level': 'DEBUG', 'propagate': True, }, },}# Build paths inside the project like this: os.path.join(BASE_DIR, ...)BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))# Quick-start development settings - unsuitable for production# See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/# SECURITY WARNING: keep the secret key used in production secret!SECRET_KEY = removed# SECURITY WARNING: don't run with debug turned on in production!DEBUG = TrueALLOWED_HOSTS = ['localhost','127.0.0.1']REST_FRAMEWORK = { 'DEFAULT_PARSER_CLASSES': ( 'rest_framework.parsers.FileUploadParser' )}# Application definitionINSTALLED_APPS = [ # Add your apps here to enable them 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'rest_framework', 'app' ]MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware']ROOT_URLCONF = 'GenderAPI.urls'TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, },]WSGI_APPLICATION = 'GenderAPI.wsgi.application'# Database# https://docs.djangoproject.com/en/1.9/ref/settings/#databasesDATABASES = { 'default': { 
'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), }}# Static files (CSS, JavaScript, Images)# https://docs.djangoproject.com/en/1.9/howto/static-files/STATIC_URL = '/static/'STATIC_ROOT = posixpath.join(*(BASE_DIR.split(os.path.sep) + ['static']))FILE_UPLOAD_TEMP_DIR = BASE_DIRMEDIA_URL = '/media/' Here you can see a postman capture (I have tried everything): PUT /predict/pabloGrande.jpg HTTP/1.1Host: 127.0.0.1:52276Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW------WebKitFormBoundary7MA4YWxkTrZu0gWContent-Disposition: form-data; name="file"; filename="04320cf.jpg"Content-Type: image/jpeg------WebKitFormBoundary7MA4YWxkTrZu0gW-- requirements: bleach==1.5.0Django==1.11.6djangorestframework==3.7.1html5lib==0.9999999Markdown==2.6.9numpy==1.13.3olefile==0.44pandas==0.20.3Pillow==4.3.0pip==9.0.1protobuf==3.4.0python-dateutil==2.6.1pytz==2017.2scipy==1.0.0rc1setuptools==28.8.0six==1.11.0tensorflow==1.3.0tensorflow-tensorboard==0.1.8Werkzeug==0.12.2wheel==0.30.0 Thanks so much for your help Now provide the response and nothing else.
In Django REST framework we have components like Parsers, Renderers and Serializers. The responsibility of parsers is to parse the data sent by request methods such as GET, POST and PUT. The default parser used in Django REST framework is 'JSONParser'. It only parses JSON data (numbers, strings, dates) and ignores things like FILES. In order to parse FILES we need to use parsers like "MultiPartParser" or "FormParser". Example code:

from rest_framework.parsers import MultiPartParser
from rest_framework.response import Response
from rest_framework.views import APIView

class ExampleView(APIView):
    """
    A view that accepts POST requests with multipart content.
    """
    parser_classes = (MultiPartParser,)

    def post(self, request, format=None):
        # to access files
        print(request.FILES)
        # to access data
        print(request.data)
        return Response({'received data': request.data})

When we use the property request.data, the parser will parse the data. References: Django REST Docs, Django REST Github
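To exercise such a view from the client side, the request must be multipart and the field name must match what the view reads; the endpoint URL and file name below are assumptions for illustration only:

import requests

# hypothetical endpoint mapped to ExampleView; "file" is the multipart field name
with open("photo.jpg", "rb") as fh:
    resp = requests.post("http://localhost:8000/upload/", files={"file": fh})

print(resp.status_code)
print(resp.content)

The equivalent curl invocation would be curl -F "file=@photo.jpg" http://localhost:8000/upload/ , since -F produces a multipart/form-data body.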
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46806335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/160718/']}
jdg_76744
stackexchange
llm_judgeable_groundtruth_similarity
382486
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This question concerns the C# language, but I expect it to cover other languages such as Java or TypeScript. Microsoft recommends best practices on using asynchronous calls in .NET. Among these recommendations, let's pick two: change the signature of the async methods so that they return Task or Task<> (in TypeScript, that'd be a Promise<>) change the names of the async methods to end with xxxAsync() Now, when replacing a low-level, synchronous component by an async one, this impacts the full stack of the application. Since async/await has a positive impact only if used "all the way up", it means the signature and method names of every layer in the application must be changed. A good architecture often involves placing abstractions between each layers, such that replacing low-level components by others is unseen by the upper-level components. In C#, abstractions take the form of interfaces. If we introduce a new, low-level, async component, each interface in the call stack needs to be either modified or replaced by a new interface. The way a problem is solved (async or sync) in an implementing class is not hidden (abstracted) to the callers anymore. The callers have to know if it's sync or async. Aren't async/await best practices contradicting with "good architecture" principles? Does it mean that each interface (say IEnumerable, IDataAccessLayer) needs their async counterpart (IAsyncEnumerable, IAsyncDataAccessLayer) such that they can be replaced in the stack when switching to async dependencies? If we push the problem a little further, wouldn't it be simpler to assume every method to be async (to return a Task<> or Promise<>), and for the methods to synchronize the async calls when they're not actually async? Is this something to be expected from the future programming languages? Now provide the response and nothing else.
What Color Is Your Function? You may be interested in Bob Nystrom's What Color Is Your Function 1 . In this article, he describes a fictional language where: Each function has a color: blue or red. A red function may call either blue or red functions, no issue. A blue function may only call blue functions. While fictitious, this happens quite regularly in programming languages: In C++, a "const" method may only call other "const" methods on this . In Haskell, a non-IO function may only call non-IO functions. In C#, a sync function may only call sync functions 2 . As you have realized, because of these rules, red functions tend to spread around the code base. You insert one, and little by little it colonizes the whole code base. 1 Bob Nystrom, apart from blogging, is also part of the Dart team and has written this little Crafting Interpreters serie; highly recommended for any programming language/compiler afficionado. 2 Not quite true, as you may call an async function and block until it returns, but... Language Limitation This is, essentially, a language/run-time limitation. Language with M:N threading, for example, such as Erlang and Go, do not have async functions: each function is potentially async and its "fiber" will simply be suspended, swapped out, and swapped back in when it's ready again. C# went with a 1:1 threading model, and therefore decided to surface synchronicity in the language to avoid accidentally blocking threads. In the presence of language limitations, coding guidelines have to adapt.
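A minimal C# sketch of the "color" propagation described above (the names are invented, and Task.Delay stands in for real asynchronous I/O):

using System;
using System.Threading.Tasks;

class ColorDemo
{
    // a "red" (async) function
    static async Task<int> FetchLengthAsync()
    {
        await Task.Delay(10);   // pretend this is network or disk I/O
        return 42;
    }

    // calling it cleanly forces the caller to become async as well
    static async Task<int> CallerAsync() => await FetchLengthAsync() + 1;

    // the escape hatch alluded to in footnote 2: block on the task
    // (possible, but risks deadlocks where a synchronization context is present)
    static int CallerBlocking() => FetchLengthAsync().GetAwaiter().GetResult();

    static async Task Main()
    {
        Console.WriteLine(await CallerAsync());
        Console.WriteLine(CallerBlocking());
    }
}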
{}
{'log_upvote_score': 8, 'links': ['https://softwareengineering.stackexchange.com/questions/382486', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/321810/']}
jdg_76745
stackexchange
llm_judgeable_groundtruth_similarity
17855842
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why does moment.js UTC always show the wrong date. For example from chrome's developer console: moment(('07-18-2013')).utc().format("YYYY-MM-DD").toString()// ormoment.utc(new Date('07-18-2013')).format("YYYY-MM-DD").toString() Both of them will return "2013-07-17" why is it returning 17th instead of 18th , that was passed in. But if I use momentjs without the utc: moment(new Date('07-18-2013')).format("YYYY-MM-DD").toString() I get back "2013-07-18" which is what I also expect when using moment.js UTC. Does this mean we cannot get the correct date when using moment.js UTC? Now provide the response and nothing else.
By default, MomentJS parses in local time. If only a date string (with no time) is provided, the time defaults to midnight. In your code, you create a local date and then convert it to the UTC timezone (in fact, it makes the moment instance switch to UTC mode ), so when it is formatted, it is shifted (depending on your local time) forward or backwards. If the local timezone is UTC+N (N being a positive number), and you parse a date-only string, you will get the previous date. Here are some examples to illustrate it (my local time offset is UTC+3 during DST): >>> moment('07-18-2013', 'MM-DD-YYYY').utc().format("YYYY-MM-DD HH:mm")"2013-07-17 21:00">>> moment('07-18-2013 12:00', 'MM-DD-YYYY HH:mm').utc().format("YYYY-MM-DD HH:mm")"2013-07-18 09:00">>> Date()"Thu Jul 25 2013 14:28:45 GMT+0300 (Jerusalem Daylight Time)" If you want the date-time string interpreted as UTC, you should be explicit about it: >>> moment(new Date('07-18-2013 UTC')).utc().format("YYYY-MM-DD HH:mm")"2013-07-18 00:00" or, as Matt Johnson mentions in his answer, you can ( and probably should ) parse it as a UTC date in the first place using moment.utc() and include the format string as a second argument to prevent ambiguity. >>> moment.utc('07-18-2013', 'MM-DD-YYYY').format("YYYY-MM-DD HH:mm")"2013-07-18 00:00" To go the other way around and convert a UTC date to a local date, you can use the local() method, as follows: >>> moment.utc('07-18-2013', 'MM-DD-YYYY').local().format("YYYY-MM-DD HH:mm")"2013-07-18 03:00"
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/17855842', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/529289/']}
jdg_76746
stackexchange
llm_judgeable_groundtruth_similarity
496213
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have an array like this: array=(1 2 7 6) and would like to search for the second largest value, with the output being secondGreatest=6 Is there any way to do this in bash? Now provide the response and nothing else.
printf '%s\n' "${array[@]}" | sort -n | tail -2 | head -1

Print each value of the array on its own line, sort it numerically, take the last 2 values, then drop the last one.

secondGreatest=$(printf '%s\n' "${array[@]}" | sort -n | tail -2 | head -1)

Set that value to the secondGreatest variable. Glenn Jackman had an excellent point about duplicate numbers that I didn't consider. If you only care about unique values you can use the -u flag of sort:

secondGreatest=$(printf '%s\n' "${array[@]}" | sort -nu | tail -2 | head -1)
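If forking sort/tail/head is undesirable, a single pass in pure bash also works; like the pipeline without -u, this sketch counts duplicate values separately:

array=(1 2 7 6)

max=${array[0]}
second=
for n in "${array[@]:1}"; do
  if (( n > max )); then
    second=$max   # the old maximum becomes the runner-up
    max=$n
  elif [[ -z $second ]] || (( n > second )); then
    second=$n
  fi
done
echo "secondGreatest=$second"   # prints secondGreatest=6 for this array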
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/496213', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/332357/']}
jdg_76747
stackexchange
llm_judgeable_groundtruth_similarity
917822
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using split() to tokenize a String separated with * following this format: name*lastName*ID*school*age%name*lastName*ID*school*age%name*lastName*ID*school*age I'm reading this from a file named "entrada.al" using this code: static void leer() { try { String ruta="entrada.al"; File myFile = new File (ruta); FileReader fileReader = new FileReader(myFile); BufferedReader reader = new BufferedReader(fileReader); String line = null; while ((line=reader.readLine())!=null){ if (!(line.equals("%"))){ String [] separado = line.split("*"); //SPLIT CALL names.add(separado[0]); lastNames.add(separado[1]); ids.add(separado[2]); ages.add(separado[3]); } } reader.close(); } And I'm getting this exception: Exception in thread "main" java.util.regex.PatternSyntaxException: Dangling meta character '*' near index 0 * My guess is that the lack of a * after age on the original text file is causing this. How do I get around it? Now provide the response and nothing else.
No, the problem is that * is a reserved character in regexes, so you need to escape it. String [] separado = line.split("\\*"); * means "zero or more of the previous expression" (see the Pattern Javadocs ), and you weren't giving it any previous expression, making your split expression illegal. This is why the error was a PatternSyntaxException .
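An equivalent fix that avoids remembering which characters are regex metacharacters is to let the regex engine quote the delimiter for you; this small self-contained sketch reuses the names from the question:

import java.util.regex.Pattern;

public class SplitDemo {
    public static void main(String[] args) {
        String line = "name*lastName*ID*school*age";
        // Pattern.quote("*") yields a literal pattern, so no manual escaping is needed
        String[] separado = line.split(Pattern.quote("*"));
        for (String field : separado) {
            System.out.println(field);
        }
    }
}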
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/917822', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/45963/']}
jdg_76748
stackexchange
llm_judgeable_groundtruth_similarity
299285
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How would one use the Axiom of Foundation (that every non-empty set has an $\in$-minimal element) to prove that for any two sets $x,y$ we can find a set $x'$ so that $x'$ and $y$ are disjoint and $x'$ and $x$ are in bijection? I myself tried to construct $x$ recursively by doing the following: if $x$ and $y$ are disjoint, we are done; if not, take $z$ in $x \cap y$ and let $x_{1} = (x - \left\{z_{1}\right\} ) \cup \left\{y\right\}$. Now, the new intersection $x_{1} \cap y$ has size with $1$ less than the size of $x \cap y$ because $y \not \in y$ (by Axiom of Foundation) and so on (i.e. if $x_{1}$ and $y$ are not disjoint take $z_{2}$ in the intersection and let $x_{3} = (x_{1} - \left\{z_{2}\right\}) \cup \left\{\left\{y\right\}\right\}$. However, I don't think this works since we might have $y \in x$ and this messes up our bijection. A detailed answer would be appreciated... (for me to get confortable with the basics) thanks Now provide the response and nothing else.
Using Pairing and Replacement , let $$x'=\bigl\{ \{y,z\}\mid z\in x\bigr\}.$$Then $x\to x'$, $z\mapsto \{y,z\}$ is a bijection (even if $z=y$ may occur).For $z\in x$ consider the set $u=\{\{z,y\},y\}$ (again using Pairing ). As $\{z,y\}\cap u$ is nonempty, Foundation says that $y\cap u$ is empty, especially $\{y,z\}\notin y$, hence $x'\cap y=\emptyset$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/299285', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/61453/']}
jdg_76749
stackexchange
llm_judgeable_groundtruth_similarity
523021
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Like many others, I have been following the sad development of the bush/forest fires in Australia recently. A claim that gets repeated is that one of the contributors to this blaze is the ongoing >45°C heatwave. Now I am not really putting this in question here. This is probably a well established fact, but my immediate intuition of physics fails to grasp why. Here's my reasoning: Obviously the vegetation has to be dry for this to happen, but surely - and in my experience - a +30°C temperature is more than enough to make sure everything is bone dry. 45°C is not significantly closer than the aforementioned 30°C to the ignition temperature, which is several hundreds degrees. I am aware that some vegetation in Australia (eucalyptus probably) is remarkably easy to light, but still air temperatures are pretty far from that. Heat conduction flux is also directly proportional to the difference in temperature (first order dependence) and not for example the square of it, so the few extra degrees shouldn't make grass that much easier to ignite. So why does the heatwave matter? Now provide the response and nothing else.
This is more a question of chemistry and biology than physics. Solid objects don't burn (try dropping a lit match on a piece of structural lumber sometime -- it'll just go out). Instead, they release or decompose into flammable gasses on heating, and it's those gasses that burn. Some plants (eg. pines) produce volatile resins that provide an easy ignition source, and the amount and speed of vaporization increases with temperature. Eucalyptus is especially bad for this: eucalyptus oil is extremely volatile, extremely flammable, and produced in copious quantities. In hot weather, there's far more vaporized oil in the air than in more moderate temperatures, leading to easier fire starts and faster spreads. Eucalyptus has evolved not just to survive fires, but to encourage them, as a way to clear out competing plant species.
{}
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/523021', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/250879/']}
jdg_76750
stackexchange
llm_judgeable_groundtruth_similarity
19681
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm interested in sufficient conditions for a notion of sequential convergence to be induced by a topology. More precisely: Let $V$ be a vector space over $\mathbb{C}$ endowed with a notion $\tau$ of sequential convergence. When is there a topology $\mathcal{O}$ on $V$ that makes $V$ a topological vector space such that "sequences suffice" in $(V,\mathcal{O})$, e.g. $(V,\mathcal{O})$ is first countable, and convergence in $(V,\mathcal{O})$ coincides with $\tau$-convergence? Is the topology $\mathcal{O}$ uniquely determined? By a notion $\tau$ of sequential convergence on a vector space $V$ I mean a "rule" $\tau$ which assigns to certain sequences $(v_n)_{n\in\mathbb{N}}\subset V$ (which one would call convergent sequences) an element $v\in V$ (a limit of $(v_n)_n$). One could write $v_n\stackrel{\tau}{\to}v$ in this case. This process of "assigning a limit" should satisfy at least that any constant sequence $(v,v,v,...)$ is convergent and is assigned the limit $v$. Also, given a convergent sequence $(v_n)_n$ with limit $v$ any subsequence $(v_{n_k})_k$ should have $v$ as a limit. I would also like this concept of assigning a limit to be compatible with addition in $V$ and multiplication by a scalar. Maybe one should include further restrictions. In fact I would like to know which further assumptions on this "limiting process" one has to assume in order to ensure that this limiting procedure corresponds to an actual topology on $V$ which makes $V$ a topological vector space in which a sequence converges if and only if it $\tau$-converges. Let me give two examples. If we take for instance a topological vector space $(V,\mathcal{O})$ then we have a notion of convergence in $V$ based upon the set $\mathcal{O}$ of open sets of $V$. This notion of convergence clearly satisfies the above assumptions on $\tau$. If on the other hand we consider $L^\infty([0,1])$ equipped with the notion of pointwise convergence almost everywhere , then there is no topology on $L^\infty([0,1])$ which makes $L^\infty([0,1])$ a TVS in which a sequence converges if and only if it converges pointwise almost everywhere. Still, convergence almost everywhere also satisfies the above assumptions on $\tau$. So the above assumptions on this concept of convergence are necessary but not sufficient conditions for what I mean by a notion of convergence to correspond to an actual topology. The question is: Which further assumptions do I have to make? On a less general level I'm particularly interested in the following case: Let $G\subset\mathbb{C}^d$ be a domain, $X$ a (complex) Banach space and let $H^\infty(G;X)$ denote the space of bounded holomorphic functions $f\colon G\to X$. Now consider the following notion $\tau$ of sequential convergence on $H^\infty(G;X)$: We say that a sequence $(f_n)_{n\in\mathbb{N}}\subset H^\infty(G;X)$ $\tau$-converges to $f\in H^\infty(G;X)$ if $\sup_{n\in\mathbb{N}}\sup_{z\in G} \|f_n(z)\|_X$ is finite and $f_n(z)$ converges in $X$ to $f(z)$ for every $z\in G$. Is there a topology $\mathcal{O}$ on $H^\infty(G;X)$ such that "sequences suffice" in $(H^\infty(G;X),\mathcal{O})$ and a sequence $(f_n)_{n\in\mathbb{N}}\subset H^\infty(G;X)$ converges w.r.t. the topology $\mathcal{O}$ if and only if it $\tau$-converges? Is this topology $\mathcal{O}$ unique if existent? What if we drop the "sequences suffice"-restriction? 
Is $(H^\infty,\mathcal{O})$ locally convex? Metrizable? What if we replace $X$ by a more general space like a LCTVS or a Frechet space? Thank you in advance for any suggestions, ideas or references. Now provide the response and nothing else.
I am addressing only the first part of your question (i.e., nothing with the structure of vector space; only topology and limits of sequences).I will quote here part of Problems 1.7.18-1.7.20 from Engelking's General Topology. (It would be better if you could get the book. I believe it used to be here , but the links don't work now. Perhaps you'll find it in the Internet.) L*-space is a pair $(X, \lambda)$, where X isa set and $\lambda$ a function (called the limit operator) assigning to some sequences of points of Xan element of X (called the limit of the sequence) in such a way that the following conditions are satisfied: (L1) If $x_i=x$ for $i = 1,2,\dots$, then $\lambda x_i = x$. (L2) If $\lambda x_i = x$, then $\lambda x' = x$ for every subsequence $x'$ of $x$. (L3) If a sequence $\{x_i\}$ does not converge to $x$, then it contains a subsequence $\{x_{k_i}\}$ suchthat no subsequence of $\{x_{k_i}\}$ converges to $x$. These properties are sufficient to define a closure operator on $X$ (not necessary idempotent). If $(X,\lambda)$ fulfills and additional condition (L4) If $\lambda x_i = x$ and $\lambda x^i_j = x_i$ for $i = 1,2,\dots$, then there exist sequences of positive integers$i_1, i_2,\dots$ and $j_1, j_2, \dots$ such that $\lambda x_{j_k}^{i_k} = x$. L*-space $X$ satisfying (L4) is called an S*-space. The closure operator given by S*-space is idempotent. Using this closure operator we get a topology, such that the convergence of the sequences is given by $\lambda$. A topology can be obtained from a L*-space (S*-space) if and only if the original space is sequential (Frechet-Urysohn). References given in Engelking's book are Frechet [1906] and [1918], Urysohn [1926a], Kisynski [i960]. Frechet [1906] Sur quelques points du calcul fonctionnel, Rend, del Circ. Mat. di Palermo 22 (1906), 1-74. Frechet [1918] Sur la notion de voisinage dans les ensembles abstraits, Bull. Sci. Math. 42 (1918), 138-156. Kisynski [1960] Convergence du type L, Coll. Math. 7 (1960), 205-211. Urysohn [1926a] Sur les classes (L) de M. Frechet, Enseign. Math. 25 (1926), 77-83. NOTE: Some axioms for convergence of sequences are studied in the paper:Mikusinski, P., Axiomatic theory of convergence (Polish), Uniw. Slaski w Katowicach Prace Nauk.-Prace Mat. No. 12 (1982), 13-21. I do not have the original paper, only a paper which cites this one; it seems that the axioms are equivalent to (L1)-(L3) and the uniqueness of limit. (But I do not know, whether some further axioms are studied in this paper.) EDIT: In Engelking's book (and frequently in general topology) the term Frechet space is used in this sense, not this one . I've edited Frechet to Frechet-Urysohn above, to avoid the confusion.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/19681', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1403/']}
jdg_76751
stackexchange
llm_judgeable_groundtruth_similarity
26999496
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have created a slider to manipulate an image within the DOM. What I would like to happen is for the image to scale but stay centered as it is being scaled. The problem I have is I am giving the user the ability to position the image using .draggable but after they have moved the image and they try to scale it again it jumps position. I have added a fiddle to the Question to show what I mean. Basically if you use the slider to change the size of the image before you drag it the image will scale and position correctly, however if you move the image after you have first scaled and then try and scale again after dragging the image jumps on the first iteration of the slider then scales and positions correctly. The strange thing is if you scale the image drag it scale it back down to the original size drag again and then try and scale it positions correctly. It looks like it has something to do with the previous slider value. Code for slider and drag: $( "#slide1" ).slider({ range: "min", min: -70, max: 150, step: 10, value: 0, slide: function( event, ui ) { var zv = (1+ ui.value/100); // Rotate Value var nx = 0; var ny = 0; nzo = calculateAspectRatioFit(iwz, ihz, iwz*(zv), ihz*(zv)); $('#img1').width(nzo.width).height(nzo.height); var wd = Math.abs(iwz - nzo.width); var hd = Math.abs(ihz - nzo.height); if (zv < 1){ nx = (ixz+(wd/2)); ny = (iyz+(hd/2)); $('#img1').css({'left':nx+'px','top':ny+'px'}); } else { nx = (ixz-(wd/2)); ny = (iyz-(hd/2)); $('#img1').css({'left':nx+'px','top':ny+'px'}); } }});$('#img1').draggable({ drag:function(event, ui){ var il = ui.position.left; var it = ui.position.top; var iw = $(this).width(); var ih = $(this).height(); $('#img1').css({'top':it+'px','left':il+'px'}); }, stop:function(){ ixz = $('#img1').position().left; iyz = $('#img1').position().top; } });}); All the code is within the JSFiddle found here . Now provide the response and nothing else.
In short, you need to create a copy of handler, such as by posting it into the io_service , before attempting to get the async_result in order to keep the coroutine alive. Boost.Asio prevents a non-resumable coroutine from indefinitely suspending by destroying the coroutine, resulting in the coroutine's stack to unwind. The coroutine object will throw boost::coroutines::detail::forced_unwind during its destruction, causing the suspended stack to unwind. Asio accomplishes this by: The yield_context CompletionToken maintains a weak_ptr to the coroutine. When the specialized handler_type::type handler is constructed, it obtains a shared_ptr for the coroutine via the CompletionToken's weak_ptr . When the handler is passed as the completion handler to asynchronous operations, the handler and its shared_ptr are copied. When the handler is invoked, it resumes the coroutine. When invoking async_result::get() , the specialization will reset the coroutine shared_ptr owned by the handler that was passed to async_result during construction, and then yield the coroutine. Here is an attempt to illustrate the execution of the code. Paths in | indicate the active stack, : indicates the suspended stack, and arrows are used to indicate transfer of control: boost::asio::io_service io_service;boost::asio::spawn(io_service, &my_timer);`-- dispatch a coroutine creator into the io_service.io_service.run();|-- invoke the coroutine entry| handler.| |-- create coroutine| | (count: 1)| |-- start coroutine ----> my_timer(): : |-- create handler1 (count: 2): : |-- create asnyc_result1(handler1): : |-- timer.async_wait(handler): : | |-- create handler2 (count: 3): : | |-- create async_result2(handler2): : | |-- create operation and copy: : | | handler3 (count: 4): : | `-- async_result2.get(): : | |-- handler2.reset() (count: 3)| `-- return <---- | `-- yield| `-- ~entry handler :| (count: 2) :|-- io_service has work (the :| async_wait operation) :| ...async wait completes... :|-- invoke handler3 :| |-- resume ----> |-- async_result1.get(): : | |-- handler1.reset() (count: 1)| `-- return <---- | `-- yield| `-- ~handler3 : :| | (count: 0) : :| `-- ~coroutine() ----> | `-- throw forced_unwind To fix this problem, handler needs to be copied and invoked through asio_handler_invoke() when it is time to resume the coroutine. For example, the following will post a completion handler 1 into io_service that invokes a copy of handler : timer.async_wait (handler);timer.get_io_service().post( std::bind([](decltype(handler) handler) { boost::system::error_code error; // Handler must be invoked through asio_handler_invoke hooks // to properly synchronize with the coroutine's execution // context. using boost::asio::asio_handler_invoke; asio_handler_invoke(std::bind(handler, error), &handler); }, handler));return result.get (); As demonstrated here , with this additional code, the output becomes: my_timer entermy_timer returns 1. The completion handler code can likely be cleaned up a bit, but as I was answering how to resume a Boost.Asio stackful coroutine from a different thread , I observed some compilers selecting the wrong asio_handler_invoke hook.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26999496', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1531541/']}
jdg_76752
stackexchange
llm_judgeable_groundtruth_similarity
29453720
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose my text is <div class="compTitle"><h3 class="title"><a href="http://www.jaguarusa.com/index.html" target="_blank" data-80a="5520a3b52c6fe"><b></b><b>Jaguar</b>: Luxury Cars &amp; Sports Cars | <b></b><b>Jaguar</b> USA</a></h3> <div><span class=" fz-15px fw-m fc-12th wr-bw">www.<b></b><b>jaguar</b>usa.com</span><a class="tri fc-3rd fz-2xs ml-10" href="http://cc.bingj.com/cache.aspx?q=jaguar&amp;d=4967534815872005&amp;mkt=en-US&amp;setlang=en-US&amp;w=EK-x56coxxfS-VYEa33LnCSKRL4yj3aK" target="_blank" data-80a="5520a3b52c7ec">Cached</a></div></div> <div class="compTitle"><h3 class="title"><a class=" td-u" href="http://www.jaguar.com/index.html" target="_blank" data-80a="5520a3b52d37b"><b></b><b>Jaguar</b> - How Alive Are You?</a></h3> <div><span class=" fz-15px fw-m fc-12th wr-bw">www.<b></b><b>jaguar</b>.com/index.html</span><a class="tri fc-3rd fz-2xs ml-10" href="http://208.71.46.190/search/srpcache?p=jaguar&amp;n=30&amp;ei=UTF-8&amp;u=http://cc.bingj.com/cache.aspx?q=jaguar&amp;d=4828270487339026&amp;mkt=en-US&amp;setlang=en-US&amp;w=e_F5McXDYJWTpIIntdbN8Ht0ewcqxYpP&amp;icp=1&amp;.intl=us&amp;sig=HM1uWmpgoDFepAuhoyxPwA--" target="_blank" data-80a="5520a3b52d463">Cached</a></div></div> <div class="compTitle"><h3 class="title"><a class=" td-u" href="http://www.jaguar.com/market-selector.html" target="_blank" data-80a="5520a3b52d5a4"><b></b><b>Jaguar</b> International - Market selector page</a></h3> <div><span class=" fz-15px fw-m fc-12th wr-bw">www.<b></b><b>jaguar</b>.com</span><a class="tri fc-3rd fz-2xs ml-10" href="http://208.71.46.190/search/srpcache?p=jaguar&amp;n=30&amp;ei=UTF-8&amp;u=http://cc.bingj.com/cache.aspx?q=jaguar&amp;d=4721944299241952&amp;mkt=en-US&amp;setlang=en-US&amp;w=CBJMmRdVmMVjSqhyMCFPd1BAaDRR4HV4&amp;icp=1&amp;.intl=us&amp;sig=2oXq96OH3i158VrEx2GhPA--" target="_blank" data-80a="5520a3b52d68a">Cached</a></div></div> <div class="compTitle"><h3 class="title"><a class=" td-u" href="http://animals.nationalgeographic.com/animals/mammals/jaguar/" target="_blank" data-80a="5520a3b52d7c8"><b>Jaguars</b>, <b></b><b>Jaguar</b> Pictures, <b></b><b>Jaguar</b> Facts - National Geographic</a></h3> <div><span class=" fz-15px fw-m fc-12th wr-bw">animals.nationalgeographic.com/animals/mammals/<b></b><b>jaguar</b></span><a class="tri fc-3rd fz-2xs ml-10" href="http://208.71.46.190/search/srpcache?p=jaguar&amp;n=30&amp;ei=UTF-8&amp;u=http://cc.bingj.com/cache.aspx?q=jaguar&amp;d=4808277429977536&amp;mkt=en-US&amp;setlang=en-US&amp;w=NHU5GwfHQLSKjSLbPsXxTxyvRv2FsgCF&amp;icp=1&amp;.intl=us&amp;sig=OMQN5z2J_effVch27OsP9A--" target="_blank" data-80a="5520a3b52d8ad">Cached</a></div></div> and I want to find the text between <a> and </a> tags <a href = "link">This part here</a> Also, I need to make sure that "This part here" does not contain any tags. What I am doing is, headers = soup.findAll('div','compTitle')for header in headers: if header is not None: t = header.a.string.encode('utf-8') print t So, my expected output is Jaguar: Luxury Cars &amp; Sports Cars | Jaguar USAJaguar - How Alive Are You?Jaguar International - Market selector pageJaguars, Jaguar Pictures, Jaguar Facts - National Geographic But I am getting an error saying that AttributeError: 'NoneType' object has no attribute 'encode' Now provide the response and nothing else.
There are two cases to consider:

The closest multiple of b that is less than or equal to a:

int c1 = a - (a % b);

The closest multiple of b that is greater than a:

int c2 = (a + b) - (a % b);

Then we need to check which is closer to a and return that:

int c;
if (a - c1 > c2 - a) {
    c = c2;
} else {
    c = c1;
}

So we could create a closestInteger() method like this:

static int closestInteger(int a, int b) {
    int c1 = a - (a % b);
    int c2 = (a + b) - (a % b);
    if (a - c1 > c2 - a) {
        return c2;
    } else {
        return c1;
    }
}

Example:

System.out.println(closestInteger(254, 25));
System.out.println(closestInteger(9, 5));

Output:

250
10
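An equivalent and slightly shorter variant rounds the ratio a/b to the nearest whole number of steps and scales back up; for the non-negative inputs discussed here it matches the method above, except that exact halfway cases round towards the larger multiple:

public class ClosestMultiple {
    static int closestInteger(int a, int b) {
        // Math.round returns a long, so cast the scaled result back to int
        return (int) (b * Math.round((double) a / b));
    }

    public static void main(String[] args) {
        System.out.println(closestInteger(254, 25)); // 250
        System.out.println(closestInteger(9, 5));    // 10
    }
}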
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29453720', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2964080/']}
jdg_76753
stackexchange
llm_judgeable_groundtruth_similarity
1817179
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need to examine whether the following limit exists, or not. $$\lim_{n \to +\infty} \frac{1}{n^2} \sum_{k=1}^{n} k \ln\left( \frac{k^2+n^2}{n^2}\right )$$ If it does, I need to calculate its value. How to even start this? I've got no idea. Now provide the response and nothing else.
An idea: "Riemann sums" may be a good start. Massage your current sum into something of the form$$\frac{1}{n}\sum_{k=0}^n \frac{k}{n} \ln \left( 1+\left(\frac{k}{n}\right)^2\right)$$and recognize a Riemann sum for the (continuous) function $f\colon[0,1]\to\mathbb{R}$ defined by $f(x) = x\ln(1+x^2)$. Update: Jack d'Aurizio gave a way (actually, two) to evaluate the integral $$\int_0^1 x\ln(1+x^2)dx$$ in his separate answer , which complements this one.
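For completeness, the limiting value can then be computed with the substitution $u = 1+x^2$ (a short derivation added here; it is elementary calculus rather than part of the original answer):

$$\int_0^1 x\ln(1+x^2)\,dx \;=\; \frac12\int_1^2 \ln u \, du \;=\; \frac12\Bigl[u\ln u - u\Bigr]_1^2 \;=\; \frac12\bigl(2\ln 2 - 2 + 1\bigr) \;=\; \ln 2 - \frac12 .$$

So the original limit equals $\ln 2 - \tfrac12$.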
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1817179', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/294766/']}
jdg_76754
stackexchange
llm_judgeable_groundtruth_similarity
104354
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been fiddling around with a brushed DC motor, a 2N2222 and an arduino's PWM to get different speeds out of my motor. Now, based on a video tutorial I watched on youtube, It was recommended to put a 1k Ohm resitor between the PWM pin and the base, apparently to protect the arduino in case the transistor screwed up. I did what I was told to do, following this schematic (My power supply is 1.5V and R1 is 1K, not 220) But the motor would not turn when the microcontroller pin would output 5V ( full duty). So I figured, ights, let me not mess around with my arduino in case that was the problem, so I connected the base, throught the 1k resistor, to the same 1.5V powering the motor...still nothing...put a new 1k resistor...still nothing...so I romoved the 1k Ohm resistor and plugged my base straight to the 1.5v source and the motor started turning. Can someone please explain to me why that is? From what I've been reading, isn't the voltage at the base what determines the current flowing from the collector to the emmiter? -_- <-- Confused face Now provide the response and nothing else.
The motor needs a certain amount of current in order to turn. How much current is allowed to pass through the transistor from collector to emitter, and hence through the motor, is controlled by the current passing through the transistor from base to emitter times the transistor's current gain, known as "hFE". The base resistor reduced this current to too low a value to allow the motor to turn even when amplified by the transistor. Use what you know about the motor's required current, the voltage across the transistor from base to emitter during saturation, the voltage from MCU pin to emitter, and the transistor's hFE to calculate the correct maximum value of resistor to use. E.g.:

300 mA (I_motor) / 70 A/A (hFE) = 4.2 mA
(5 V (V_MCU) - 0.7 V (V_BE(sat))) / 4.2 mA = 1.024 kohm

Note that the motor voltage supply is not involved in these calculations, but it still must be high enough after subtracting the voltage from collector to emitter as per motor specifications.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/104354', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/37632/']}
jdg_76755
stackexchange
llm_judgeable_groundtruth_similarity
3503503
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I had my webpage validated for xhtml transitional till I added this table (see below). Since then it doesn't validate and says " document type does not allow element "tfoot" here <tfoot> The element named above was found in a context where it is not allowed. This could mean that you have incorrectly nested elements -- such as a "style" element in the "body" section instead of inside "head" -- or two elements that overlap (which is not allowed). One common cause for this error is the use of XHTML syntax in HTML documents. Due to HTML's rules of implicitly closed elements, this error can create cascading effects. For instance, using XHTML's "self-closing" tags for "meta" and "link" in the "head" section of a HTML document may cause the parser to infer the end of the "head" section and the beginning of the "body" section (where "link" and "meta" are not allowed; hence the reported error)." Any ideas as what is happening? I checked for any opened and not closed tags but did not find any so I don't know what else is wrong. <table><caption>My first table, Anna</caption><thead><tr><th>June</th><th>July</th><th>August</th></tr></thead><tbody><tr><td> Data 1</td><td>Data 2</td><td> Data 3</td><td>Data 4 </td> </tr> <tr> <td> Data a </td> <td> Date b</td><td>Data c</td><td>Data d</td></tr><tfoot><tr><td>Result1</td></tr></tfoot></tbody></table> Now provide the response and nothing else.
You've got the <tfoot> at the end of the table. It should be between the <thead> and the <tbody> . It will appear at the bottom, but it's coded at the top. One of the original ideas is that as a large table loaded, the heading and footer would be visible quickly, with the rest filling in (esp. useful if the body was scrollable between them). It hasn't quite worked out like that in practice, but it does make more sense if you know that. In the DTD it lists: <!ELEMENT table (caption?, (col*|colgroup*), thead?, tfoot?, (tbody+|tr+))> That is, optional caption , then zero-or-more col or colgroup , then optional thead , then optional tfoot , then at least one tbody or tr . UPDATE: Note that HTML 5 now allows one to put the <tfoot> at the end of the table, instead of before the first <tbody> (or the first <tr> that isn't in a <thead> , <tfoot> or <tbody> and hence in a single implicit <tbody> ). As such the code in the question would now be considered valid. The older approach is also still valid, and probably advisable.
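Applied to a trimmed version of the question's table, the validating order looks like this; the body rows are reduced to three cells so they also match the three-column header, and the colspan on the footer cell is an addition for layout:

<table>
  <caption>My first table, Anna</caption>
  <thead>
    <tr><th>June</th><th>July</th><th>August</th></tr>
  </thead>
  <tfoot><!-- coded before tbody, but rendered at the bottom -->
    <tr><td colspan="3">Result1</td></tr>
  </tfoot>
  <tbody>
    <tr><td>Data 1</td><td>Data 2</td><td>Data 3</td></tr>
    <tr><td>Data a</td><td>Data b</td><td>Data c</td></tr>
  </tbody>
</table>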
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3503503', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/422954/']}
jdg_76756
stackexchange
llm_judgeable_groundtruth_similarity
16217948
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: List list = new ArrayList<String>() ;list.add(1) ;Integer hello = (Integer) list.get(0) ;System.out.println(hello); The above code has a reference of type List referring to an instance of ArrayList of type String . When the line list.add(1) is executed, isn't the 1 added to the ArrayList (of type String) ? If yes, then why is this allowed? Now provide the response and nothing else.
You have used type erasure, which means you have ignored the previously set generic checks. You can get away with this because generics are a compile-time feature which isn't checked at runtime. What you have is the same as

List list = new ArrayList();
list.add(1);
Integer hello = (Integer) list.get(0);
System.out.println(hello);

or

List<Integer> list = new ArrayList<Integer>();
list.add(1);
Integer hello = list.get(0); // generics add an implicit cast here
System.out.println(hello);

If you look at the byte code generated by the compiler, there is no way to tell the difference. Interestingly, you can do this

List<String> strings = new ArrayList<String>();
@SuppressWarnings("unchecked")
List<Integer> ints = (List) strings;
ints.add(1);
System.out.println(strings); // ok
String s = strings.get(0);   // throws a ClassCastException
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16217948', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1441883/']}
jdg_76757
stackexchange
llm_judgeable_groundtruth_similarity
423213
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This article shows that DDR4 SDRAM has approximately 8x more bandwidth than DDR1 SDRAM. But the time from setting the column address to when the data is available has only decreased by 10% (13.5ns). A quick search shows that the access time of the fastest async. SRAM (18 years old) is 7ns. Why has SDRAM access time decreased so slowly? Is the reason economic, technological, or fundamental? Now provide the response and nothing else.
It's because it's easier and cheaper to increase the bandwidth of the DRAM than to decrease the latency. To get the data from an open row of ram, a non trivial amount of work is necessary. The column address needs to be decoded, the muxes selecting which lines to access need to be driven, and the data needs to move across the chip to the output buffers. This takes a little bit of time, especially given that the SDRAM chips are manufactured on a process tailored to high ram densities and not high logic speeds. To increase the bandwidth say by using DDR(1,2,3 or 4), most of the logic can be either widened or pipelined, and can operate at the same speed as in the previous generation. The only thing that needs to be faster is the I/O driver for the DDR pins. By contrast, to decrease the latency the entire operation needs to be sped up, which is much harder. Most likely, parts of the ram would need to be made on a process similar to that for high speed CPUs, increasing the cost substantially (the high speed process is more expensive, plus each chip needs to go through 2 different processes). If you compare CPU caches with RAM and hard disk/SSD, there's an inverse relationship between storage being large, and storage being fast. An L1$ is very fast, but can only hold between 32 and 256kB of data. The reason it is so fast is because it is small: It can be placed very close to the CPU using it, meaning data has to travel a shorter distance to get to it The wires on it can be made shorter, again meaning it takes less time for data to travel across it It doesn't take up much area or many transistors, so making it on a speed optimized process and using a lot of power per bit stored isn't that expensive As you move up the hierarchy each storage option gets larger in capacity, but also larger in area and farther away from the device using it, meaning the device must get slower.
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/423213', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/213323/']}
jdg_76758
stackexchange
llm_judgeable_groundtruth_similarity
5214127
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to make a horizontal rule with some text in the middle.For example: ----------------------------------- my title here ----------------------------- Is there a way to do that in CSS? Without all the "-" dashes obviously. Now provide the response and nothing else.
This is roughly how I'd do it: the line is created by setting a border-bottom on the containing h2 then giving the h2 a smaller line-height . The text is then put in a nested span with a non-transparent background. h2 { width: 100%; text-align: center; border-bottom: 1px solid #000; line-height: 0.1em; margin: 10px 0 20px; } h2 span { background:#fff; padding:0 10px; } <h2><span>THIS IS A TEST</span></h2><p>this is some content other</p> I tested in Chrome only, but there's no reason it shouldn't work in other browsers. JSFiddle: http://jsfiddle.net/7jGHS/
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/5214127', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/311465/']}
jdg_76759
stackexchange
llm_judgeable_groundtruth_similarity
5612736
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently working on a small social networking application and right now I'm trying to create a model that represents friendships between users . This is what I came up with so far: class User < ActiveRecord::Base # ... has_many :friendships has_many :friends, :through => :friendshipsendclass Friendship < ActiveRecord::Base belongs_to :user belongs_to :friend, :class_name => 'User'end My friendship model has a field confirmed as boolean which I'd like to useto define a friendship as pending or confirmed. How can I access all pending request for a certain user? Can I somehow definethis using Rails' scope method? Something like current_user.friendships.requests # => [Friendship, Friendship, ...] would be great. How can I make this association bidirectional? Do I simply add anotherfriendship once one has confirmed a friend request such that my friendshiptable would look similar to this: | user_id | friend_id | confirmed |-----------------------------------| 1 | 2 | true || 2 | 1 | true | Now provide the response and nothing else.
To access all pending friendships you can use an association: has_many :pending_friends, :through => :friendships, :source => :friend, :conditions => "confirmed = 0" # assuming 0 means 'pending' To make the friendship bidirectional, you may want to replace your boolean confirmed column with a string status column that has one of the following three values: 'pending', 'requested' and 'accepted' (optionally 'rejected'). This will help keep track of who made the friendship request. When a friendship request is sent (say from Foo to Bar), you create two friendship records (encapsulated in a transaction): one requested and one pending to reflect resp. that Bar has a requested friendship from Foo and Foo has a pending friendship with Bar. def self.request(user, friend) unless user == friend or Friendship.exists?(user, friend) transaction do create(:user => user, :friend => friend, :status => 'pending') create(:user => friend, :friend => user, :status => 'requested') end end end When the friendship is accepted (e.g. by Bar), both friendship records are set to accepted. def self.accept(user, friend) transaction do accepted_at = Time.now accept_one_side(user, friend, accepted_at) accept_one_side(friend, user, accepted_at) end end def self.accept_one_side(user, friend, accepted_at) request = find_by_user_id_and_friend_id(user, friend) request.status = 'accepted' request.accepted_at = accepted_at request.save! end This is largely covered in chapter 14 of the Railspace book by Michael Hartl and Aurelius Prochazka. Here's the source code which should help you refine your solution.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5612736', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/114490/']}
jdg_76760
stackexchange
llm_judgeable_groundtruth_similarity
543538
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the generative model $$\begin{align}\theta &\sim \pi(\cdot \mid \phi),\\y \mid \theta &\sim f(\cdot \mid \theta).\end{align}$$ Compute the posterior distribution $p(\theta \mid y) \propto f(y \mid \theta) \pi(\theta \mid \phi)$ , and define $$Q_{y,\phi}(x) = \frac{\int_{-\infty}^x f(y \mid t) \pi(t \mid \phi)\,dt}{\int_{-\infty}^\infty f(y \mid t) \pi(t \mid \phi)\,dt},$$ as the posterior CDF.Now, for $0 < \gamma < 1$ , take $I_{\phi, \gamma}(y) = \left(a(y), b(y)\right)$ such that $Q_{y, \phi}(b(y))-Q_{y, \phi}(a(y)) = \gamma$ , i.e., $I_{\phi, \gamma}(y)$ is a $\gamma\%$ credibility interval * for $\theta$ . Question: Is it true that $$\operatorname{Pr}(\theta \in I_{\phi, \gamma}(y) \mid \theta) \geq \gamma?$$ In other words, does the $\gamma\%$ credibility interval have $\gamma\%$ coverage when the data are generated according to the model above and the posterior is computed using the same prior $\pi(\cdot \mid \phi)$ ? These related questions : q1 and q2 suggest this is true, but I could not for the life of me find a proof. It seems, for instance, that one can show that $$\operatorname{Pr}(\theta \in I_{\phi, \gamma}(y) \mid \theta) = \gamma + \epsilon_n,$$ where $|\epsilon_n| < a/n$ for some $a$ .See page 41 here . I suppose then that it remains to show that for the particular situation of interest here, $\epsilon_n=0$ for all $n$ . *- If you want, you can take the infimum over $b(y)-a(y)$ such that the condition is satisfied, just so that the interval is shortest. Now provide the response and nothing else.
For a given fixed parameter $\theta$ , the coverage of a Bayesian credible interval can be greater or lower than the nominal value. For example, coverage may be greater than nominal if the prior is informative and is centred on the 'true' value of the parameter - see this question and its answers. Conversely, the coverage can be lower than nominal. For example, if the prior for $\theta$ is uniform on the interval $[0,1]$ and there are no data ( $y$ is 'empty') then the coverage might be zero if the true value $\theta<(1-\gamma)/2$ and we always choose the equal-tailed credible interval for $\theta$ . If $\theta$ is not fixed but is instead sampled from the prior then you will get the nominal coverage, as explained in the answers you reference - but this is a somewhat different situation.
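For a concrete feel, here is a small simulation sketch (my own addition, not part of the answer above) in a conjugate Normal-Normal model: equal-tailed credible intervals hit the nominal level when $\theta$ is drawn from the prior, but can under-cover badly for a fixed $\theta$ far from the prior mean. The model choice and all numbers are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gamma = 0.90                      # nominal credibility level
prior_mu, prior_sd = 0.0, 1.0     # prior on theta
like_sd = 1.0                     # known sd of y given theta

def credible_interval(y):
    # conjugate posterior for a single observation y
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / like_sd**2)
    post_mu = post_var * (prior_mu / prior_sd**2 + y / like_sd**2)
    return stats.norm.interval(gamma, loc=post_mu, scale=np.sqrt(post_var))

def coverage(theta_sampler, n=20000):
    hits = 0
    for _ in range(n):
        theta = theta_sampler()
        y = rng.normal(theta, like_sd)
        lo, hi = credible_interval(y)
        hits += (lo <= theta <= hi)
    return hits / n

print("theta ~ prior:     ", coverage(lambda: rng.normal(prior_mu, prior_sd)))  # close to 0.90
print("theta fixed at 2.5:", coverage(lambda: 2.5))                             # well below 0.90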
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/543538', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/97431/']}
jdg_76761
stackexchange
llm_judgeable_groundtruth_similarity
1067798
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A friend of mine came up with this problem: Let $S^1$ be the unit circle in $ℂ$ and $P$ the space of polynomial functions $S^1 → ℂ$ (with complex coefficients). Is $P$ dense in $C(S^1,ℂ)$? Stone–Weierstraß is not applicable because $P$ is not closed under complex conjugation. We’re wondering if complex conjugation on $S^1$ (= inverting) is a uniform limit of polynomials. We suspect not, but don’t know how to prove it. Now provide the response and nothing else.
Assuming "polynomial" means "polynomial in $z$", i.e. of the form $\sum_{k=0}^n a_k z^k$, then no, they are not dense, and the function $\bar{z}$ is not in their uniform closure. The high-level reason is that polynomials are holomorphic on the unit disk, holomorphic functions are closed under uniform convergence, and $\bar{z}$ is not holomorphic. For a direct proof, consider the linear functional $I(f) = \int_{S^1} f(z)\,dz = \int_0^{2\pi} f(e^{i\theta}) i e^{i\theta}\,d\theta$. This is clearly continuous with respect to the uniform norm since$$|I(f)| \le \int_0^{2\pi} |f(e^{i\theta})| |ie^{i\theta}|\,d\theta \le 2 \pi \|f\|_{\infty}.$$ But for any polynomial $f$, you can check that $I(f)=0$. Hence by continuity, $I(f)=0$ for any $f$ in the closure of the polynomials. On the other hand, for $f(z)=\bar{z}$, $I(f) = 2\pi i$, so $f(z)=\bar{z}$ is not in the closure. (This proof is easy to discover when you know the Cauchy integral theorem, which says that any holomorphic function has $\int_{S^1} f(z)\,dz = 0$, or indeed the same replacing $S^1$ with any nice closed curve.)
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1067798', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/42242/']}
jdg_76762
stackexchange
llm_judgeable_groundtruth_similarity
117003
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Are there any astronomical bodies that would be dangerous to my vision to view through a telescope? Obviously the sun is dangerous, but are there other bodies at night I should avoid? Now provide the response and nothing else.
Compared to naked eye view, a telescope image never increases surface brightness. This fact is related to the concept ' etendue '. However, although the image formed on your retina is never brighter than the corresponding naked eye image, the image through a telescope is magnified. This means that looking through a telescope at the sun can expose your whole retina to the brightness that in naked eye view would be limited to only one or a few retina cells. So the difference is that with naked eye view you tend to destroy a few cells, while with a telescope you can instantly destroy your whole retina. On observing the moon: the surface brightness of the moon is comparable to that of deserts at earth exposed to bright sunlight. So watching the moon through a telescope that yields maximum brightness (a telescope and eyepiece that combine to create an exit pupil as large as your eye pupil) is like walking through a desert without sunglasses. It is therefore more comfortable to have some grey filter in the light path. Having said this, and although I am no medical specialist, I do want to offer the following consideration. I would be surprised if watching the moon through a telescope without filters can inflict permanent damage to healthy eyes: the harmful UV components won't pass thru the eyepiece glass.
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/117003', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/48935/']}
jdg_76763
stackexchange
llm_judgeable_groundtruth_similarity
233094
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: At our college, a professor told us to prove by a semi-formal demonstration (without complete induction): For every odd natural: $24\mid(u^3-u)$ He said that the example was taken from a high school book - maybe I didn't get something, but I really have no idea how to prove that without using complete induction. Any idea for a smart demonstration? Thanks for your help. (Excuse my bad English.) Now provide the response and nothing else.
First note that $$ u^3-u = (u-1)u(u+1).$$ Since $u$ is odd, $u-1$ and $u+1$ are even and one of them is divisible by 4. This follows from the basic observation that one of any two consecutive even numbers is divisible by 4. So, $(u-1)(u+1)$ is divisible by $4 \times 2 =8$. Also, one of any three consecutive natural numbers is divisible by 3. So, one of $u-1,u,u+1$ is divisible by 3. So, $(u^3-u)$ is divisible by 8 and 3, which are co-prime. So, it is divisible by $8\times 3=24$.
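A tiny brute-force check (my addition, purely as a sanity check of the claim):

odd_ok = all((u**3 - u) % 24 == 0 for u in range(1, 10001, 2))
even_counterexample = next(u for u in range(2, 100, 2) if (u**3 - u) % 24 != 0)
print(odd_ok)               # True
print(even_counterexample)  # 2, since 2**3 - 2 = 6 is not divisible by 24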
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/233094', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/48075/']}
jdg_76764
stackexchange
llm_judgeable_groundtruth_similarity
416044
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given $$y=2\sqrt{\ln(x+3)}$$ How do I determine an $(x,y)$ pair satisfying the above relation which is closest to the origin $(0,0)$? Now provide the response and nothing else.
Minimize the square of the distance from the curve to the origin (and hence, the distance from the curve to the origin), namely: $$x^2+y^2=x^2+4\ln(x+3).$$
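If you want to check the minimization numerically, here is a short sketch (my own addition; the restriction to $x \ge -2$, where $\ln(x+3) \ge 0$ so that $y$ is real, is an assumption I add to stay on the curve):

import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: x**2 + 4 * np.log(x + 3)
res = minimize_scalar(f, bounds=(-2.0, 10.0), method="bounded")
x_star = res.x
y_star = 2 * np.sqrt(np.log(x_star + 3))
print(x_star, y_star)   # x close to -1, y = 2*sqrt(ln 2), roughly 1.665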
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/416044', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/80794/']}
jdg_76765
stackexchange
llm_judgeable_groundtruth_similarity
108657
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Defining the Characteristic Function $ \quad \phi(t) := \mathbb{E} \left[ e^{itx} \right] $ for a random variable with distribution function $F(x)$ in order to show it is uniformly continuous I say $$|\phi(t+u) - \phi(t)| = \left |\int e^{itx}(e^{iux} - 1) dF(x) \right| \le \\ \int 1 \cdot|e^{iux} -1|dF(x) \to 0 \quad as \quad u\to0$$ Now my question is, does the convergence I state in the last line follow directly, or do I need to be a little careful before I conclude it is true? (i.e. can I directly use that $|e^{itu} -1| \to 0$?) Now provide the response and nothing else.
There is a typo: it should be $$|\phi(t+u)-\phi(t)|=\left|\int (e^{i(t+u)x}-e^{itx})\mathrm dF(x)\right|\leqslant\int \left|e^{iux}-1\right|\mathrm dF(x).$$Now take a sequence $\left( u_n\right) $ which converges to $0$ and put $f_n(x):=\left|e^{iu_nx}-1\right|$. Then $f_n(x)\to 0$ for each $x$ and $\left |f_n(x)\right|\leqslant 2$ which is integrable (when $\mathbb R$ is endowed with the probability measure $\mathbb P_X$) so we conclude by the dominated convergence theorem that $\lim_{n\to\infty}\int\left|e^{iu_nx}-1\right|dF(x)=0$ and so $\phi$ is uniformly continuous on $\mathbb R$. Note that if the random variable is integrable we don't need the dominated convergence theorem, since we can write $$\left|\phi(t+u)-\phi(t)\right|\leqslant\int \left|e^{iux}-1\right|dF(x)=2\int\left|\sin(ux/2)\right|\mathrm dF(x)\leqslant \left|u\right|\int \left|x\right|\mathrm dF(x).$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/108657', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23446/']}
jdg_76766
stackexchange
llm_judgeable_groundtruth_similarity
14655982
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I have some fairly gigantic .gz files - we're talking 10 to 20 gb each when decompressed. I need to loop through each line of them, so I'm using the standard: import gzipf = gzip.open(path+myFile, 'r')for line in f.readlines(): #(yadda yadda)f.close() However, both the open() and close() commands take AGES, using up 98% of the memory+CPU. So much so that the program exits and prints Killed to the terminal. Maybe it is loading the entire extracted file into memory? I'm now using something like: from subprocess import callf = open(path+'myfile.txt', 'w')call(['gunzip', '-c', path+myfile], stdout=f)#do some looping through the filef.close()#then delete extracted file This works. But is there a cleaner way? Now provide the response and nothing else.
I'm 99% sure that your problem is not in the gzip.open(), but in the readlines(). As the documentation explains: f.readlines() returns a list containing all the lines of data in the file. Obviously, that requires reading and decompressing the entire file, and building up an absolutely gigantic list. Most likely, it's actually the malloc calls to allocate all that memory that are taking forever. And then, at the end of this scope (assuming you're using CPython), it has to GC that whole gigantic list, which will also take forever. You almost never want to use readlines. Unless you're using a very old Python, just do this: for line in f: A file is an iterable full of lines, just like the list returned by readlines, except that it's not actually a list: it generates more lines on the fly by reading out of a buffer. So, at any given time, you'll only have one line and a couple of buffers on the order of 10MB each, instead of a 25GB list. And the reading and decompressing will be spread out over the lifetime of the loop, instead of done all at once. From a quick test, with a 3.5GB gzip file, gzip.open() is effectively instant, for line in f: pass takes a few seconds, gzip.close() is effectively instant. But if I do for line in f.readlines(): pass, it takes… well, I'm not sure how long, because after about a minute my system went into swap thrashing hell and I had to force-kill the interpreter to get it to respond to anything… Since this has come up a dozen more times since this answer, I wrote this blog post which explains a bit more.
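For completeness, a consolidated sketch of the memory-friendly pattern (the path is a placeholder, and text mode is just one reasonable choice):

import gzip

with gzip.open("/path/to/myfile.gz", "rt") as f:   # 'rt' yields str lines; use 'rb' for bytes
    for line in f:
        pass  # process one line at a time; only a small buffer is ever held in memory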
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14655982', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/849354/']}
jdg_76767
stackexchange
llm_judgeable_groundtruth_similarity
32766325
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to find the most frequent value within a group for several factor variables while summarizing a data frame in dplyr. I need a formula that does the following: Find the most frequently used factor level among all factors for one variable in a group (so basically "max()" for counts of factor levels). If there is a tie between several most-used-factor levels, pick any one of those factors-levels. Return the factor-level name (not number of counts). There are several formulas that work. However, those that I could think of are all slow. Those that are fast are not convenient to apply to several variables in a data frame at once. I was wondering if somebody knows a fast method that integrates nicely with dplyr. I tried the following: generating sample data (50000 groups with 100 random letters) z <- data.frame(a = rep(1:50000,100), b = sample(LETTERS, 5000000, replace = TRUE))str(z)'data.frame': 5000000 obs. of 2 variables:$ a: int 1 2 3 4 5 6 7 8 9 10 ...$ b: Factor w/ 26 levels "A","B","C","D",..: 6 4 14 12 3 19 17 19 15 20 ... "Clean"-but-slow approach 1 y <- z %>% group_by(a) %>% summarise(c = names(table(b))[which.max(table(b))])user system elapsed 26.772 2.011 29.568 "Clean"-but-slow approach 2 y <- z %>% group_by(a) %>% summarise(c = names(which(table(b) == max(table(b)))[1]))user system elapsed 29.329 2.029 32.361 "Clean"-but-slow approach 3 y <- z %>% group_by(a) %>% summarise(c = names(sort(table(b),decreasing = TRUE)[1]))user system elapsed 35.086 6.905 42.485 "Messy"-but-fast approach y <- z %>% group_by(a,b) %>% summarise(counter = n()) %>% group_by(a) %>% filter(counter == max(counter))y <- y[!duplicated(y$a),]y <- y$counter <- NULLuser system elapsed 7.061 0.330 7.664 Now provide the response and nothing else.
Here's another option with dplyr:

set.seed(123)
z <- data.frame(a = rep(1:50000, 100),
                b = sample(LETTERS, 5000000, replace = TRUE),
                stringsAsFactors = FALSE)

a <- z %>% group_by(a, b) %>% summarise(c = n()) %>% filter(row_number(desc(c)) == 1) %>% .$b
b <- z %>% group_by(a) %>% summarise(c = names(which(table(b) == max(table(b)))[1])) %>% .$c

We make sure these are equivalent approaches:

> identical(a, b)
#[1] TRUE

Update

As mentioned by @docendodiscimus, you could also do:

count(z, a, b) %>% slice(which.max(n))

Here are the results on the benchmark:

library(microbenchmark)
mbm <- microbenchmark(
  steven  = z %>% group_by(a, b) %>% summarise(c = n()) %>% filter(row_number(desc(c)) == 1),
  phil    = z %>% group_by(a) %>% summarise(c = names(which(table(b) == max(table(b)))[1])),
  docendo = count(z, a, b) %>% slice(which.max(n)),
  times = 10)

#Unit: seconds
#    expr       min        lq      mean    median        uq       max neval cld
#  steven  4.752168  4.789564  4.815986  4.813686  4.847964  4.875109    10  b
#    phil 15.356051 15.378914 15.467534 15.458844 15.533385 15.606690    10   c
# docendo  4.586096  4.611401  4.669375  4.688420  4.702352  4.753583    10 a
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32766325', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4297198/']}
jdg_76768
stackexchange
llm_judgeable_groundtruth_similarity
1081195
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am working on an exercise in Tapp's matrix groups for undergraduates. It is a proof that $SO(3)$ is path-connected. $SO(3)$ is the group $$ SO(3) = \{A \in O(n)\mid \det A = 1 \}$$ where $O(n)$ is the group of orthogonal matrices. My work so far: An element in $SO(3)$ is a matrix with columns $(p|v|p\times v)$ where $p \bot v$ and $\|p\|=\|v\|=1$. Let $A=(p_a|v_a|p_a \times v_a)$ and $B=(p_b|v_b|p_b\times v_b)$ be two elements in $SO(3)$. The goal is to find a path. Let $R$ be the rotation of $\mathbb R^3$ that takes $p_a$ to $p_b$. Let $\varphi_R$ be a parametrisation of $R$ such that $\varphi_R(0)$ is the identity map and $\varphi_R({1\over 2})$ has rotated $p_a$ to $p_b$. Similarly, if $R'$ is the rotation that rotates $v_a$ to $v_b$ around the axis $p_b = R(p_a)$ and $\varphi_{R'}:[{1\over 2}, 1]\to \mathbb R^3$ its parametrisationthen $\varphi_{R'}\circ \varphi_R$ is a path from $A$ to $B$. My problem is: this proof is missing all the details but I don't know how to write it out. Could someone please show me how to write out this proof rigourously and in detail? Now provide the response and nothing else.
I suggest using the fact that every matrix $X \in SO(3)$ is of the form $AX(\theta)A^{-1}$, where $A \in O(3)$ and $$X(\theta) = \begin{pmatrix} 1 & 0 & 0\\0 & \cos(\theta) & -\sin(\theta)\\0 &\sin(\theta) & \cos(\theta)\end{pmatrix}.$$ Then for every $X \in SO(3)$, the map $F_X : [0,1] \to SO(3)$ defined by the equation $F_X(t) = AX(t\theta)A^{-1}$ gives a path in $SO(3)$ from the identity to $X$.
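Here is a small numerical illustration of that path (my own sketch; the particular $A$ and $\theta$ are arbitrary choices): it checks that $F_X(t)=AX(t\theta)A^{-1}$ stays orthogonal with determinant $1$ as $t$ runs from $0$ to $1$.

import numpy as np

def X(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

theta = 1.2
A, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))  # some orthogonal A

for t in (0.0, 0.5, 1.0):
    F = A @ X(t * theta) @ A.T            # A^{-1} = A.T since A is orthogonal
    print(t, np.allclose(F @ F.T, np.eye(3)), round(float(np.linalg.det(F)), 6))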
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1081195', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/161650/']}
jdg_76769
stackexchange
llm_judgeable_groundtruth_similarity
1634
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I am not too certain what these two properties mean geometrically. It sounds very vaguely to me that finite type corresponds to some sort of "finite dimensionality", while finite corresponds to "ramified cover". Is there any way to make this precise? Or can anyone elaborate on the geometric meaning of it? Now provide the response and nothing else.
I definitely agree with Peter's general intuitive description. In response to some of the subsequent comments, here are some implications to keep in mind: Finite ==> finite fibres (1971 EGA I 6.11.1) and projective (EGA II 6.1.11), hence proper (EGA II 5.5.3), but not conversely , contrary to popular belief ;) Proper + locally finite presentation + finite fibres ==> finite (EGA IV (part 3) 8.11.1) When reading about these, you'll need to know that "quasi-finite" means "finite type with finite fibres." Also be warned that in EGA (II.5.5.2) projective means $X$ is a closed subscheme of a "finite type projective bundle" $\mathbb{P}_Y(\mathcal{E})$, which gives a nice description via relative Proj, whereas "Hartshorne-projective" more restrictively means that $X$ is closed subscheme of "projective n-space" $\mathbb{P}^n_Y$. When the target (or "base" scheme) is locally Noetherian, like pretty much anything that comes up in "geometry", a proper morphism is automatically of locally finite presentation, so in that case we do have finite <==> proper + finite fibres Regarding "locally finite type", its does not imply finite dimensionality of the fibres; rather, it's about finite dimensionality of small neighborhoods of the source of the map. For example, you can cover a scheme by some super-duper-uncountably-infinite disjoint union of copies of itself that is LFT but not FT, since it has gigantic fibres.
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/1634', 'https://mathoverflow.net', 'https://mathoverflow.net/users/-1/']}
jdg_76770
stackexchange
llm_judgeable_groundtruth_similarity
5589
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Does anyone know if it's possible to import a large dataset into Amazon S3 from a URL? Basically, I want to avoid downloading a huge file and then reuploading it to S3 through the web portal. I just want to supply the download URL to S3 and wait for them to download it to their filesystem. It seems like an easy thing to do, but I just can't find the documentation on it. Now provide the response and nothing else.
Since you obviously possess an AWS account I'd recommend the following:

1. Create an EC2 instance (any size)
2. Use wget (or curl) to fetch the file(s) to that EC2 instance. For example: wget http://example.com/my_large_file.csv
3. Install s3cmd
4. Use s3cmd to upload the file to S3. For example: s3cmd cp my_large_file.csv s3://my.bucket/my_large_file.csv

Since connections made between various AWS services leverage AWS's internal network, uploading from an EC2 instance to S3 is pretty fast. Much faster than uploading it from your own computer. This way allows you to avoid downloading the file to your computer, saving potentially significant time uploading it through the web interface.
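If you prefer to skip s3cmd, here is an alternative sketch (not part of the recipe above): stream the URL straight into S3 with boto3 from the EC2 instance. The bucket, key and URL are placeholders, and it assumes the instance already has working AWS credentials.

import boto3
import requests

url = "http://example.com/my_large_file.csv"
s3 = boto3.client("s3")

with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    resp.raw.decode_content = True                 # transparently handle gzip-encoded responses
    s3.upload_fileobj(resp.raw, "my.bucket", "my_large_file.csv")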
{}
{'log_upvote_score': 5, 'links': ['https://datascience.stackexchange.com/questions/5589', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/9249/']}
jdg_76771
stackexchange
llm_judgeable_groundtruth_similarity
4542751
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am asking if it's possible to find the derivatives without geometry, circular thinking, and definitions that don't make sense to introduce based on the intuitive meaning of sine and cosine (i.e that assume you already know the answer.) Note: I take the Pythagorean identity, the zeroth and first derivatives at zero, and that sine is odd and cosine is even to be used as the definitions for sine and cosine. Edit: I will also assume they are bounded by $1$ and $-1$ and never constant on a continuous interval. I tried using the Pythagorean theorem and that sine is odd and cosine is even and reached that $$\cos\theta+i\sin\theta=e^{g(\theta)}$$ where $g$ is an odd function. However, I couldn't prove that $g''=0$ (equivilant to $g''$ is even) which we can use (along with $\sin'(0)=1$ and $\cos'(0)=0$ ) to deduce $g=i\theta$ . So, I thought maybe it's much easier. If we differentiate the Pythagorean identity, we get: $$\sin\theta\sin'\theta=-\cos\theta\cos'\theta\tag{$\star$}\label{1}$$ Squaring both sides to avoid multiple cases for the signs, we get: $$(\sin\theta\sin'\theta)^2=(\cos\theta\cos'\theta)^2$$ What are the possibilities here? We certainly know $\sin^2\theta≠\cos^2\theta$ and $(\sin'\theta)^2≠(\cos\theta\cos'\theta)^2$ , so it seems that the only possibility is: $$(\sin'\theta)^2=(\cos\theta)^2\\(\cos'\theta)^2=(\sin\theta)^2$$ My question is: Is this logic true, can we prove such a thing? If so, we can proceed as follows, differentiate the equations: $$\sin'\theta\sin''\theta=\cos\theta\cos'\theta\\\cos'\theta\cos''\theta=\sin\theta\sin'\theta$$ Now, substitute from $\eqref{1}$ : $$\sin''\theta=-\sin\theta\\\cos''\theta=-\cos\theta$$ Afterwards, we can use ansatz $e^{\lambda\theta}$ and the zeroth and first derivatives at $0$ to solve for the exponential forms of sine and cosine. As a result, we can deduce the derivatives and the angle-sum formulas too. Now provide the response and nothing else.
Note: I take the Pythagorean identity, the zeroth and first derivatives at zero, and that sine is odd and cosine is even to be used as the definitions for sine and cosine. This is not enough to uniquely specify sine and cosine. These conditions are satisfied, for example, by the real and imaginary parts of $e^{i (\theta + \theta^3)}$ , namely $c(\theta) = \cos(\theta + \theta^3)$ and $s(\theta) = \sin(\theta + \theta^3)$ . However, the following conditions are similar to yours and do uniquely specify sine and cosine: they are the unique pair of differentiable functions $c(\theta), s(\theta)$ satisfying $c(0) = 1, s(0) = 0$ , $c'(0) = 0, s'(0) = 1$ , $c(\theta)^2 + s(\theta)^2 = 1$ , and $c'(\theta)^2 + s'(\theta)^2 = 1$ . These conditions say that $\theta \mapsto (c(\theta), s(\theta))$ is a unit speed parameterization of the unit circle starting at $(1, 0)$ and moving counter-clockwise. Writing $v(\theta) = (c(\theta), s(\theta))$ , differentiating the third condition $\| v \|^2 = 1$ gives $\langle v, v' \rangle = 0$ , which together with the fourth condition $\| v' \|^2 = 1$ gives that $v'$ always differs from $v$ by a $90^{\circ}$ rotation (which the initial conditions imply is counter-clockwise). This gives $c'(\theta) = -s(\theta), s'(\theta) = c(\theta)$ as desired. From here we can get to Euler's formula, etc. Of course one still has to prove that such a pair of functions exists.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4542751', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_76772
stackexchange
llm_judgeable_groundtruth_similarity
3477381
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Problem: If $a^2>b^2$ prove that $\int\limits_0^\pi \dfrac{dx}{(a+b\cos x)^3} = \dfrac{\pi (2a^2+b^2)}{2(a^2-b^2)^{5/2}}$ . My effort: If we choose $$x=\tan\frac{\theta}{2}\Longrightarrow d\theta=\frac{2}{x^2+1} \, dx\;\;,\;\;\cos\theta=\frac{1-x^2}{1+x^2}$$ then the integral becomes critical. What is the simplest way to solve? Now provide the response and nothing else.
It is well-known that $$\int_0^\pi\frac{dx}{t+\cos x}=\frac{\pi}{\sqrt{t^2-1}}$$ for $t>1$ . Differentiating gives $$-\int_0^\pi\frac{dx}{(t+\cos x)^2}=-\frac{\pi t}{(t^2-1)^{3/2}}.$$ Differentiating again gives $$2\int_0^\pi\frac{dx}{(t+\cos x)^3}=-\frac{\pi(t^2-1)}{(t^2-1)^{5/2}}+\frac{3\pi t^2}{(t^2-1)^{5/2}}=\frac{\pi(2t^2+1)}{(t^2-1)^{5/2}}.$$ Homogenising this gives your formula.
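A quick numerical check of the final formula (my addition, with arbitrary values satisfying $a^2>b^2$):

import numpy as np
from scipy.integrate import quad

a, b = 3.0, 1.5
lhs, _ = quad(lambda x: 1.0 / (a + b * np.cos(x))**3, 0, np.pi)
rhs = np.pi * (2 * a**2 + b**2) / (2 * (a**2 - b**2)**2.5)
print(lhs, rhs)   # the two values agree to numerical precision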
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3477381', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/735066/']}
jdg_76773
stackexchange
llm_judgeable_groundtruth_similarity
29775719
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So im having a rather strange problem, I have a Column (lets say Column A) in excel that has data that looks like this: 4/11/2015 10:14 I have a bunch of other columns, but anyways in my SQL Insert statement within excel, the data (when copying out) looks like this:42105.4561921296 The ="INSERT INTO TABLE VALUES ('"&A1&"', Etc....)" is in the data format of "general" and the Date column is in the format of "Custom" where there is a M/DD/YYYY MM/HH type within. The SQL Column is of the data type DATETIME, so it of course doesn't accept the weird number it gets. Any ideas? changing the format of the "SQL INSERT" column doesn't change the results. Now provide the response and nothing else.
You are right - Excel formats only change the way the numbers are displayed, not the underlying value of the cell. In this case, the value of the cell is an Excel date-time value, which is the # of days since 1/1/1900, with decimals for the time. I'd recommend using Excel's TEXT function to convert Excel's numeric date-time value to a text string that can be inserted into SQL.

Instead of:
INSERT INTO TABLE VALUES ('"&A1&"', Etc....)"

Try:
INSERT INTO TABLE VALUES ('"&TEXT(A1,"YYYY-MM-DD HH:MM")&"', Etc...)"
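If you ever need to decode such a serial number outside Excel, here is a hedged Python sketch (my addition): the 1899-12-30 epoch is the conventional offset that absorbs Excel's historical 1900 leap-year quirk, and the fractional part carries the time of day.

from datetime import datetime, timedelta

def excel_serial_to_datetime(serial):
    return datetime(1899, 12, 30) + timedelta(days=serial)

print(excel_serial_to_datetime(42105.4561921296))  # lands on 2015-04-11, plus the time encoded in the fraction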
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29775719', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_76774
stackexchange
llm_judgeable_groundtruth_similarity
257844
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am looking for a way to limit AC current. The application I have is used to remove the heat tint on stainless steel caused by welding (the blue-ish discoloration). Removing the tint is achieved by applying AC voltage through a conductive carbon bristle brush which is soaked with a phosphoric acid solution. Current flows through the brush and solution to the steel, which is clamped to 0v. The electrons flowing through the solution causes a reaction to take place which removes the heat tint stain. Electrically, the existing device I'm using at the moment is very basic and coarse in design. It uses a 115 VAC to 14 VAC 200 VA transformer as the power source. The primary is not fused, the secondary has a re-settable 6-amp circuit breaker. During use, I've observed the current vary widely, often exceeding 10 amps. The current draw rises if the brush is pressed harder because the carbon bristles to fan out, creating a larger contact area for more current to flow through. This causes the circuit breaker to trip often, which is a nuisance I'd like to eliminate. I'm looking for a way to limit the current to 6 or 8 amps, with voltage drop not being an issue or concern. Now provide the response and nothing else.
Consider adding an automotive light bulb in series. Something like a 5.75" '12 Volt' round headlight bulb (high+low beam both connected) would be about right. They are about 100W total (7A or so) and designed to operate from about 14V continuously so they should last forever in this service. As a bonus, the light will indicate when you are current limiting. I still have a beater that uses these things so I know they are still available but getting a bit rareish. The connections at the back look, as far as I can recall, like regular 1/4" (6.35mm) fastons but if not they are solderable brass. Or grab the bulb and snip the connector from the nearest automotive junkyard. The bulb is a highly non-linear resistance so during normal operation the voltage drop will be very low, and most of the voltage appears across the brush and surface you are working on. If you short it or come close to a short, then the bulb filament temperature increases and the resistance will increase by an order of magnitude or so, so the circuit is protected. Edit: SAE drawing with connections shown from this source (connect the upper and lower filament terminals together).
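A rough back-of-the-envelope sketch (my own numbers, only to illustrate the non-linear-resistance point above):

V_rated, P_rated = 14.0, 100.0      # approximate combined high+low beam rating
R_hot = V_rated**2 / P_rated        # about 2 ohms at full brightness
R_cold = R_hot / 10.0               # cold filament resistance is roughly an order of magnitude lower (assumption)
print(R_hot, R_cold)
# While the brush is working normally the bulb stays cool and drops little voltage;
# on a near-short it heats toward ~2 ohms, capping the current around 14 V / 2 ohm, roughly 7 A.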
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/257844', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/123700/']}
jdg_76775
stackexchange
llm_judgeable_groundtruth_similarity
381193
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the sequence $x_{n+1} = 2x_{n}-\frac{1}{x_n},n\geq0 $. For $x_{0} = 0,87$ we have $$\begin{aligned}X(1) &\approx 0,590574712643678 \\X(2) &\approx -0,512116436915835\\X(3) &\approx 0,928448055572567\\X(4) &\approx 0,779829931731029\\X(5) &\approx 0,277328986805082\\X(6)&\approx -3,05116776974367\\X(7)&\approx -5,77459217187698\\X(8)&\approx -11,37601194123\\X(9)&\approx-22,6641196146756\\X(10)&\approx -45,2841166242379\\X(11)&\approx -90,5461504504763\\X(12)&\approx -181,081256809127\\X(13)&\approx -362,156991235554\\X(14)&\approx -724,311221237654\\X(15)&\approx -1448,62106185332\\X(16)&\approx -2897,24143339498\\X(17)&\approx -5794,48252163406\\X(18)&\approx -11588,9648706901\\X(19)&\approx -23177,9296550913\\X(20)&\approx -46355,8592670381\\X(21)&\approx -92711,718512504\\\X(22) &\approx -185423,437014222\\X(23) &\approx -370846,874023051\\X(24) &\approx -741693,748043405\\X(25) &\approx -1483387,49608546\end{aligned}\begin{aligned}X(26) &\approx -2966774,99217025\\X(27) &\approx -5933549,98434016\\X(28) &\approx -11867099,9686802\\X(29) &\approx -23734199,9373602\\X(30) &\approx -47468399,8747204\\X(31) &\approx -94936799,7494408\\X(32) &\approx -189873599,498882\\X(33) &\approx -379747198,997763\\X(34) &\approx -759494397,995526\\X(35) &\approx -1518988795,99105\\X(36) &\approx -3037977591,98211\\X(37) &\approx -6075955183,96421\\X(38) &\approx -12151910367,9284\\X(39) &\approx -24303820735,8568\\X(40) &\approx -48607641471,7137\\X(41) &\approx -97215282943,4274\\X(42) &\approx -194430565886,855\\X(43) &\approx -388861131773,709\\X(44) &\approx -777722263547,419\\X(45) &\approx -1555444527094,84\\X(46) &\approx -3110889054189,68\\X(47) &\approx -6221778108379,35\\X(48) &\approx -12443556216758,7\\X(49) &\approx -24887112433517,4\\X(50) &\approx -49774224867034,8\end{aligned}$$ and for $x_{0} = 0,88$ we have $$\begin{aligned}X(1) &\approx 0,623636363636364\\X(2) &\approx -0,356225815001326\\X(3) &\approx 2,09475648880333\\X(4) &\approx 3,71213052034011\\X(5) &\approx 7,15487396107172\\X(6) &\approx 14,1699830562016\\X(7) &\approx 28,2693943978123\\X(8) &\approx 56,503414850717\\X(9) &\approx 112,989131656057\\X(10) &\approx 225,969412903359\\X(11) &\approx 451,934400429021\\X(12) &\approx 903,866588147526\\X(13) &\approx 1807,73206993709\\X(14) &\approx 3615,46358669485\\X(15) &\approx 7230,9268968\\X(16) &\approx 14461,8536553051\\X(17) &\approx 28923,7072414629\\X(18) &\approx 57847,414448352\\X(19) &\approx 115694,828879417\\X(20) &\approx 231389,657750191\\X(21) &\approx 462779,31549606\\X(22) &\approx 925558,630989959\\X(23) &\approx 1851117,26197884\\X(24) &\approx 3702234,52395714\\X(25) &\approx 7404469,047914\end{aligned}\begin{aligned}X(26) &\approx 14808938,0958279\\X(27) &\approx 29617876,1916557\\X(28) &\approx 59235752,3833113\\X(29) &\approx 118471504,766623\\X(30) &\approx 236943009,533245\\X(31) &\approx 473886019,06649\\X(32) &\approx 947772038,13298\\X(33) &\approx 1895544076,26596\\X(34) &\approx 3791088152,53192\\X(35) &\approx 7582176305,06384\\X(36) &\approx 15164352610,1277\\X(37) &\approx 30328705220,2554\\X(38) &\approx 60657410440,5108\\X(39) &\approx 121314820881,022\\X(40) &\approx 242629641762,043\\X(41) &\approx 485259283524,086\\X(42) &\approx 970518567048,172\\X(43) &\approx 1941037134096,34\\X(44) &\approx 3882074268192,69\\X(45) &\approx 7764148536385,38\\X(46) &\approx 15528297072770,8\\X(47) 
&\approx 31056594145541,5\\X(48) &\approx 62113188291083\\X(49) &\approx 124226376582166\\X(50) &\approx 248452753164332\end{aligned}$$ What explains this? Also, is it possible to determine the number $q$ located between $0.87$ and $0.88$ in the radical change of behavior occurs this sequence? Now provide the response and nothing else.
The answers of mercio and Mark Bennet explain how to find the antecedents of $1$, $0$, and $-1$. This answer may help to visualize the structure of the set. The horizontal axis represents $x_0$. The black horizontal line segments at height $1$ indicate intervals on which the limiting value of $x_n$ is $+\infty,$ while the black horizontal line segments at height $-1$ indicate intervals on which the limiting value of $x_n$ is $-\infty.$ The colored dots at heights $\pm1$ indicate values of $x_0$ for which the fixed points $\pm1$ are reached, while the colored dots on the horizontal axis represent values of $x_0$ for which the fixed point $\infty$ is reached (by first reaching $0$). Redder colors indicate fixed points that are reached in a small number of iterations; bluer colors indicate fixed points that are reached in a larger number of iterations. The image is animated. If the animation has stopped, it may be necessary to reload the page. Some data: $x_0$ reaching fixed points $(1,\infty,-1)$ after $1$ step:$$\left(\begin{array}{ccc} -0.5 & 0. & 0.5 \\\end{array}\right)$$ $x_0$ reaching fixed points $(1,\infty,-1)$ after $2$ steps:$$\left(\begin{array}{ccc} -0.84307 & -0.707107 & -0.59307 \\ 0.59307 & 0.707107 & 0.84307 \\\end{array}\right)$$ $x_0$ reaching fixed points $(1,\infty,-1)$ after $3$ steps:$$\left(\begin{array}{ccc} -0.948618 & -0.905646 & -0.870752 \\ -0.574217 & -0.552092 & -0.527083 \\ 0.527083 & 0.552092 & 0.574217 \\ 0.870752 & 0.905646 & 0.948618 \\\end{array}\right)$$ $x_0$ reaching fixed points $(1,\infty,-1)$ after $4$ steps:$$\left(\begin{array}{ccc} -0.982971 & -0.968882 & -0.957545 \\ -0.865086 & -0.858475 & -0.851051 \\ -0.587509 & -0.582428 & -0.577977 \\ -0.522169 & -0.516059 & -0.508662 \\ 0.508662 & 0.516059 & 0.522169 \\ 0.577977 & 0.582428 & 0.587509 \\ 0.851051 & 0.858475 & 0.865086 \\ 0.957545 & 0.968882 & 0.982971 \\\end{array}\right)$$ $x_0$ reaching fixed points $(1,\infty,-1)$ after $5$ steps:$$\left(\begin{array}{ccc} -0.994334 & -0.989663 & -0.985915 \\ -0.955713 & -0.953578 & -0.951185 \\ -0.869077 & -0.86755 & -0.866214 \\ -0.849598 & -0.847795 & -0.845616 \\ -0.591285 & -0.589765 & -0.588514 \\ -0.577225 & -0.576336 & -0.575323 \\ -0.52566 & -0.524341 & -0.52317 \\ -0.507143 & -0.505222 & -0.502849 \\ 0.502849 & 0.505222 & 0.507143 \\ 0.52317 & 0.524341 & 0.52566 \\ 0.575323 & 0.576336 & 0.577225 \\ 0.588514 & 0.589765 & 0.591285 \\ 0.845616 & 0.847795 & 0.849598 \\ 0.866214 & 0.86755 & 0.869077 \\ 0.951185 & 0.953578 & 0.955713 \\ 0.985915 & 0.989663 & 0.994334 \\\end{array}\right)$$ Explanation: Taking the first row of the last table,$$\begin{pmatrix} -0.994334 & -0.989663 & -0.985915\end{pmatrix},$$as an example, $x_0=-0.994334$ implies $x_n=1$ for $n\ge5;$ $x_0=-0.989663$ implies $x_n=\infty$ for $n\ge5$ since $x_4=0;$ $x_0=-0.985915$ implies $x_n=-1$ for $n\ge5;$ for $x_0\in(-0.994334,-0.989663),$ we have $\displaystyle\lim_{n\to\infty}x_n=+\infty;$ for $x_0\in(-0.989663,-0.985915),$ we have $\displaystyle\lim_{n\to\infty}x_n=-\infty.$ The decimal numbers in this discussion should, of course, be replaced by the algebraic numbers they approximate. Added: Here's a better answer than the one I gave in the comments (now deleted) to your question about how one knows that the sequence converges to $+\infty$ for $x_0\in(-0.994334,-0.989663).$ In addition, I will try to clarify what $+\infty$ and $-\infty$ are. I should also say that nearly everything there is to know about this problem is well-covered in mercio's answer. 
Since it took me a while to understand where all of his statements come from, I provide some of the details I was able to fill in, in case they're helpful to you or others. It is a little bit neater to work with the real line extended by the point at infinity. One can do this formally by working on the projective line whose points are $(a,b),$ with $a,$ $b$ not both zero, and with the equivalence relation $(a,b)\sim(\lambda a,\lambda b)$ for any non-zero $\lambda.$ Then the real number $x$ is identified with $(x,1)$ and $\infty$ is identified with $(0,1).$ Arithmetic works as follows:$$\begin{aligned}(a,b)\cdot(c,d)&=(ac,bd)\\\frac{1}{(a,b)}&=(b,a)\\(a,b)+(c,d)&=(ad+bc,bd)\\-(a,b)&=(-a,b).\end{aligned}$$The map $x\mapsto2x-\frac{1}{x}$ becomes $(a,b)\mapsto(2a^2-b^2,ab).$ This formulation allows us to say things like$$0\leftrightarrow(0,1)\mapsto(-1,0)\sim(1,0)\leftrightarrow\infty,$$and$$\infty\leftrightarrow(1,0)\mapsto(2,0)\sim(1,0)\leftrightarrow\infty.$$ In this system there is no distinction between the point $\infty$ and the point $-\infty.$ Nevertheless, it turns out that if the sequence $x_0,$ $x_1,$ $x_2,\ldots$ converges to $\infty,$ then it does so either through positive values only or through negative values only. In no case does a sequence that converges to $\infty$ alternate infinitely many times between positive and negative. For this reason, it makes sense to make statements like "the sequence converges to $+\infty$" or "the sequence converges to $-\infty$" as shorthand. The only other way that sequence can converge to $\infty$ is for it to map there directly. This happens either if $x_0=\infty$ or if one of $x_0,$ $x_1,$ $x_2,\ldots$ equals $0.$ From now on, I will drop the formal language of ordered pairs. I hope it will be clear what I mean when I refer to $\infty,$ $+\infty,$ and $-\infty.$ To verify the statement above that a sequence converging to $\infty$ does so either through positive values only or through negative values only, compute the first and second differences:$$\begin{aligned}&x_{n+1}-x_n=x_n-\frac{1}{x_n}\\&(x_{n+2}-x_{n+1})-(x_{n+1}-x_n)=x_{n+1}-\frac{1}{x_{n+1}}-x_n+\frac{1}{x_n}=x_n-\frac{1}{x_{n+1}}.\end{aligned}$$If $x_n>1,$ the first and second differences are both positive, so the sequence $x_n,$ $x_{n+1},$ $x_{n+2},\ldots$ is monotonically increasing towards $+\infty.$ Since the map $x\mapsto2x-\frac{1}{x}$ is an odd function, we also conclude that if $x_n<1$ then $x_n,$ $x_{n+1},$ $x_{n+2},\ldots$ is monotonically decreasing towards $-\infty.$ Regarding the projective line as a circle, the map $f:x\mapsto2x-\frac{1}{x}$ is continuous and covers the circle exactly twice. Specifically, $f$ is increasing on each of the intervals $(-\infty,0)$ and $(0,+\infty)$ and maps both of them onto the interval $(-\infty,+\infty)$. 
To understand the full set of points $x_0$ that generate sequences converging to $\infty,$ we determine the set of points $S_n$ that have not reached $(1,+\infty)\cup\{\infty\}\cup(-\infty,-1)$ after $n$ iterations of $f,$ with $n=0,1,2,\ldots.$ Equivalently, we determine the set of points $x_0$ such that $x_n\in[-1,1].$ Clearly$$S_0=[-1,1].$$Given any point $x,$ the preimages of $x$ are$$\begin{aligned}p_-(x)&=\begin{cases}\frac{1}{4}\left(x-\sqrt{x^2+8}\right) & \text{$x$ real,}\\ 0 & x=\infty,\end{cases}\\p_+(x)&=\begin{cases}\frac{1}{4}\left(x+\sqrt{x^2+8}\right) & \text{$x$ real,}\\ \infty & x=\infty.\end{cases}\end{aligned}$$Observe that $p_-$ is increasing and maps $(-\infty,\infty)$ onto $(-\infty,0),$ and that $p_+$ is increasing and maps $(-\infty,\infty)$ onto $(0,\infty).$ The image of the interval $[-1,1]$ under $p_-$ is $\left[-1,-\frac{1}{2}\right].$ (This image lies within $[-1,1],$ as it must, since, under the map $f,$ a point outside of $[-1,1]$ maps to a point outside of $[-1,1].$) Similarly, the image of the interval $[-1,1]$ under $p_+$ is $\left[\frac{1}{2},1\right].$ Therefore$$S_1=p_-(S_0)\cup p_+(S_0)=\left[-1,-\frac{1}{2}\right]\cup\left[\frac{1}{2},1\right].$$In a similar manner,$$\begin{aligned}S_2=&p_-(S_1)\cup p_+(S_1)\\=&p_-\left(\left[-1,-\frac{1}{2}\right]\right)\cup p_-\left(\left[\frac{1}{2},1\right]\right)\cup p_+\left(\left[-1,-\frac{1}{2}\right]\right)\cup p_+\left(\left[\frac{1}{2},1\right]\right)\\=&[-1,-0.84307]\cup[-0.59307,-0.5]\cup[0.5,0.59307]\cup[0.84307,1].\end{aligned}$$The approximations $0.59307$ and $0.84307$ actually stand for the algebraic numbers$$\frac{1}{8}\left(-1+\sqrt{33}\right)\approx0.59307,\qquad\frac{1}{8}\left(1+\sqrt{33}\right)\approx0.84307.$$ Continuing in the same way, applying $p_-$ and $p_+$ to each of the four intervals making up $S_2,$ we get$$\begin{aligned}S_3=&[-1,-0.94862]\cup[-0.87075,-0.84307]\cup[-0.59307,-0.57422]\cup[-0.52708,-0.5]\\&\cup[0.5,0.52708]\cup[0.57422,0.59307]\cup[0.84307,0.87075]\cup[0.94862,1].\end{aligned}$$The intervals with negative endpoints come from the application of $p_-,$ while those with positive endpoints come from the application of $p_+.$ Likewise,$$\begin{aligned}S_4=&[-1, -0.98297]\cup[-0.95754, -0.94862]\\&\cup[-0.87075, -0.86509]\cup[-0.85105, -0.84307]\\&\cup[-0.59307, -0.58751]\cup[-0.57798, -0.57422]\\&\cup[-0.52708, -0.52217]\cup[-0.50866, -0.5]\\&\cup[0.5, 0.50866]\cup[0.52217,0.52708]\\&\cup[0.57422, 0.57798]\cup[0.58751, 0.59307]\\&\cup[0.84307, 0.85105]\cup[0.86509, 0.87075]\\&\cup[0.94862, 0.95754]\cup[0.98297, 1]\end{aligned}$$and$$\begin{aligned}S_5=&[-1, -0.99433]\cup[-0.98592, -0.98297]\\&\cup[-0.95754, -0.95571]\cup[-0.95119, -0.94862]\\&\cup[-0.87075, -0.86908]\cup[-0.86621, -0.86509]\\ &\cup[-0.85105, -0.84960]\cup[-0.84562, -0.84307]\\&\cup[-0.59307, -0.59128]\cup[-0.58851, -0.58751]\\&\cup[-0.57798, -0.57722]\cup[-0.57532, -0.57422]\\ &\cup[-0.52708, -0.52566]\cup[-0.52317, -0.52217]\\&\cup[-0.50866, -0.50714]\cup[-0.50285, -0.5]\\&\cup[0.5, 0.50285]\cup[0.50714, 0.50866]\\&\cup[0.52217, 0.52317]\cup[0.52566, 0.52708]\\&\cup[0.57422, 0.57532]\cup[0.57722, 0.57798]\\ &\cup[0.58751, 0.58851]\cup[0.59128, 0.59307]\\&\cup[0.84307, 0.84562]\cup[0.84960, 0.85105]\\&\cup[0.86509, 0.86621]\cup[0.86908, 0.87075]\\&\cup[0.94862, 0.95119]\cup[0.95571, 0.95754]\\&\cup[0.98297, 0.98592]\cup[0.99433, 1].\end{aligned}$$ To make the foregoing more explicit, we parameterize intervals and their endpoints by sign sequences. 
Let $\sigma_0\sigma_1\sigma_2,\ldots$ be a sequence of signs: $\sigma_j\in\{-,+\},$ $j\in\{0,1,2,\ldots\}.$ Define$$\begin{aligned}L()&=-1\\R()&=1\\I()&=[L(),R()]=[-1,1],\end{aligned}$$where the empty argument denotes the zero length sequence; for $\ell>0,$ define$$\begin{aligned}L\left(\sigma_0\sigma_1\ldots\sigma_{\ell-1}\right)&=p_{\sigma_0}\left(L\left(\sigma_1\sigma_2\ldots\sigma_{\ell-1}\right)\right)\\R\left(\sigma_0\sigma_1\ldots\sigma_{\ell-1}\right)&=p_{\sigma_0}\left(R\left(\sigma_1\sigma_2\ldots\sigma_{\ell-1}\right)\right)\\I\left(\sigma_0\sigma_1\ldots\sigma_{\ell-1}\right)&=\left[L\left(\sigma_0\sigma_1\ldots\sigma_{\ell-1}\right),R\left(\sigma_0\sigma_1\ldots\sigma_{\ell-1}\right)\right].\end{aligned}$$Let $Q_\ell=\{\sigma=\sigma_0\sigma_1\ldots\sigma_{\ell-1}\vert \sigma_j\in\{-,+\},\ j\in\{0,1,\ldots,\ell-1\}\}$ be the set of all sign sequences of length $\ell.$Then$$S_\ell=\bigcup_{\sigma\in Q_\ell} I\left(\sigma_0\sigma_1\ldots\sigma_{\ell-1}\right).$$Observe that if sign sequences, $\sigma=\sigma_0\sigma_1\ldots\sigma_{\ell-1}$ and $\tau=\tau_0\tau_1\ldots\tau_{\ell-1}$ are compared lexicographically, then for $\sigma<\tau,$ we have $L(\sigma)<L(\tau)$ and $R(\sigma)<R(\tau).$ Furthermore, we have $L(\sigma)<R(\sigma).$ These statements are proved inductively using the property that $p_-$ and $p_+$ are increasing and $L()<R().$ Note that the interval $S_0=I$ is of length $2,$ and that the sum of the lengths of the two intervals $I_-$ and $I_+$ making up $S_1$ is $1.$ Moreover, the sum of the lengths of the intervals making up $S_n$ is $2^{1-n}.$ That is, the total length of $S_\ell$ is halved with every iteration. Turning things around, the total length of the intervals containing points in $[-1,1]$ that wind up in $(1,\infty)\cup\{\infty\}\cup(-\infty,-1)$ after $n$ applications of $f$ is $2\left(1-2^{-n}\right),$ which means that almost all points in $[-1,1]$ converge to $\infty.$ To see why this is true, observe that each set $S_n,$ $n>0,$ is the union of pairs of intervals of the form $[-b,-a]\cup[a,b]$ where $b>a>0.$ Specifically, such a pair is formed by a pair of complementary sign sequences:$$a=L(\sigma),\quad b=R(\sigma),\quad -b=L(\overline\sigma),\quad -a=R(\overline\sigma),$$with $\sigma_0=+$ and where $\overline\sigma$ denotes the sequence obtained by negating every element of $\sigma.$Under $p_-,$ this pair maps to$$\left[\frac{1}{4}\left(-b-\sqrt{b^2+8}\right),\frac{1}{4}\left(-a-\sqrt{a^2+8}\right)\right]\cup\left[\frac{1}{4}\left(a-\sqrt{a^2+8}\right),\frac{1}{4}\left(b-\sqrt{b^2+8}\right)\right],$$which has total length $\frac{1}{2}(b-a).$ Similarly, the image of the pair under $p_+$ has length $\frac{1}{2}(b-a).$ Since $S_{n+1}$ is the union of the images of $S_n$ under $p_-$ and $p_+,$ the pair $[-b,-a]\cup[a,b],$ of total length $2(b-a),$ maps to intervals of total length $b-a$ in $S_{n+1}.$ We have understood the process of producing $S_{n+1}$ from $S_n$ as one of applying the maps $p_-$ and $p_+$ to $S_n,$ and then taking the union of the results. 
It can also be understood as a process of deleting an open interval from the middle region of each of the closed intervals making up $S_n.$ Observe that, for $\sigma=\sigma_0\sigma_1\ldots\sigma_{n-1},$ we have$$\begin{aligned}L(\sigma--\ldots-)&=L(\sigma),\\R(\sigma++\ldots+)&=R(\sigma).\end{aligned}$$The first of these follows by noting that$$L(\sigma)=p_{\sigma_0}(p_{\sigma_1}(\ldots p_{\sigma_{n-1}}(-1)\ldots)),$$and then observing that $p_-(-1)=-1.$ The second follows similarly from$$R(\sigma)=p_{\sigma_0}(p_{\sigma_1}(\ldots p_{\sigma_{n-1}}(1)\ldots))$$and $p_+(1)=1.$ Therefore, the interval $I(\sigma)=[L(\sigma),R(\sigma)]$ in $S_n$ becomes two intervals$$I(\sigma-)\cup I(\sigma+)=[L(\sigma-),R(\sigma-)]\cup[L(\sigma+),R(\sigma+)]$$in $S_{n+1}.$ The outermost endpoints stay the same: $L(\sigma-)=L(\sigma),$ $R(\sigma+)=R(\sigma).$ Two new interior endpoints are created by removing the open interval $(R(\sigma-),L(\sigma+)).$ This viewpoint of the process of creating $S_{n+1}$ from $S_n$ makes it clear that $S_{n+1}\subset S_n.$ The points in the removed open interval, $(R(\sigma-),L(\sigma+)),$ are points that map outside of $[-1,1]$ under $f^{\circ(n+1)}.$ There is some point $B(\sigma-)\in(R(\sigma-),L(\sigma+))$ such that $f^{\circ(n+1)}$ maps $(R(\sigma-),B(\sigma-))$ to $(1,\infty)$ and $(B(\sigma-),L(\sigma+))$ to $(-\infty,-1).$ The point $B(\sigma)$ is the preimage of $\infty,$ defined by$$\begin{aligned}B()&=\infty,\\B\left(\sigma_0\sigma_1\ldots\sigma_{\ell-1}\right)&=p_{\sigma_0}\left(B\left(\sigma_1\sigma_2\ldots\sigma_{\ell-1}\right)\right).\end{aligned}$$We can therefore say that points in $(-0.994334,-0.989663)$ converge to $+\infty$ because$$-0.994334\approx R(-----),\quad -0.989663\approx B(-----),$$and hence $f^{\circ5}$ maps points in the interval to $(1,\infty).$ Likewise, points in $(-0.989663,-0.985915)$ converge to $-\infty$ because $-0.985915\approx L(----+)$ and hence $f^{\circ5}$ maps point in this interval to $(-\infty,-1).$ The set of real numbers that do not converge to $\infty$ is$$S=S_0\cap S_1\cap S_2\cap\ldots.$$In comments to his answer, mercio states that there is a one-to-one correspondence between points of $S$ and semi-infinite sign sequences $\sigma=\sigma_0\sigma_1\sigma_2\ldots.$ To see why this is so, observe that$$I(\sigma_0)\supset I(\sigma_0\sigma_1)\supset I(\sigma_0\sigma_1\sigma_2)\supset\ldots.$$Hence the sequence of intervals associated with finite initial subsequences of $\sigma$ is a set of nested intervals. By Cantor's Intersection Theorem , the intersection of a set of nested closed intervals whose diameter tends to $0$ contains a single point. Call this point $x(\sigma).$ The image of the interval $I(\sigma_0\sigma_1\ldots\sigma_{\ell-1})$ under $f$ is $I(\sigma_1\sigma_2\ldots\sigma_{\ell-1})$ since $f(p_{\sigma_0}(x))=x.$ It follows that$$f(x(\sigma_0\sigma_1\sigma_2\ldots))=x(\sigma_1\sigma_2\sigma_3\ldots).$$This implies that sign sequences with period $\ell$ are associated with points that have orbits of period $\ell$ under $f.$ (Proof: Let $\sigma$ have period $\ell:$ $\sigma=\sigma_0\sigma_1\ldots\sigma_{\ell-1}\sigma.$ Then$$f^{\circ\ell}(x(\sigma))=f^{\circ\ell}(x(\sigma_0\sigma_1\ldots\sigma_{\ell-1}\sigma)=x(\sigma).$$Furthermore, $f^{\circ j}(x(\sigma))\ne x(\sigma)$ for $j<\ell$ since the map $\sigma\mapsto x(\sigma)$ is one-to-one.) You ask whether there are non-periodic orbits that do not converge to $\infty.$ The comments of mercio imply yes. 
In fact, any non-periodic sign sequence $\sigma$ produces an $x(\sigma)$ that has such a non-periodic orbit.
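To make the construction concrete, here is a compact computational sketch (my own addition) that rebuilds the endpoints of the intervals making up $S_n$ from the preimage maps $p_-$ and $p_+$ exactly as defined above:

import numpy as np
from itertools import product

p_minus = lambda x: (x - np.sqrt(x**2 + 8)) / 4
p_plus  = lambda x: (x + np.sqrt(x**2 + 8)) / 4

def interval(signs):
    # I(sigma) = [L(sigma), R(sigma)]: apply p_{sigma_{n-1}} first and p_{sigma_0} last, to -1 and 1
    lo, hi = -1.0, 1.0
    for s in reversed(signs):
        p = p_minus if s == '-' else p_plus
        lo, hi = p(lo), p(hi)
    return lo, hi

n = 3
S_n = sorted(interval(sig) for sig in product('-+', repeat=n))
print(S_n)                                              # reproduces the S_3 endpoints listed above
print(sum(hi - lo for lo, hi in S_n), 2.0 ** (1 - n))   # total length 2^(1-n)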
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/381193', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/65848/']}
jdg_76776
stackexchange
llm_judgeable_groundtruth_similarity
28020637
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying laravel required validator in my code, unfortunately it fails for even empty string. I do not want it fail for empty string. $validator = \Validator::make(array("name"=>""), array("name"=>"required"));if ($validator->fails()){ var_dump($validator->messages());} else { die("no errors :)");} It gives me the following output object(Illuminate\Support\MessageBag)[602] protected 'messages' => array (size=1) 'name' => array (size=1) 0 => string 'The name field is required.' (length=27) protected 'format' => string ':message' (length=8) It is supposed to pass, since i'm giving an empty string as the name field. The above behavior happens in OSX environment (PHP Version 5.5.18), but it works fine in linux environment (PHP Version 5.5.9-1ubuntu4.5). Now provide the response and nothing else.
The required rule actually returns false if you pass an empty string. If we look at the code (Illuminate\Validation\Validator):

protected function validateRequired($attribute, $value)
{
    if (is_null($value)) {
        return false;
    } elseif (is_string($value) && trim($value) === '') {
        return false;
    }
    // [...]
    return true;
}

I think your only option here is to write your own validation rule that checks if the value is not null:

Validator::extendImplicit('attribute_exists', function($attribute, $value, $parameters)
{
    return ! is_null($value);
});

(The extendImplicit is needed because with extend, custom rules only run when the value is not an empty string.)

And then use it like this:

\Validator::make(array("name"=>""), array("name"=>"attribute_exists"));
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28020637', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/835415/']}
jdg_76777
stackexchange
llm_judgeable_groundtruth_similarity
32678253
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We are developing an internal iOS app for our company.We create a local distribution link of the form itms-services://?action=download-manifest&url=https://INTERNAL-PATH/manifest.plist It worked fine for a long time, including iOS8 and all betas of iOS9. However now for every device updated to the official iOS9, when trying to install from the above link, we are getting this dialog: I have captured the log from pressing "install" until getting this error. it is shown here. From the very beginning there is a suspicious "ios8fix" which we do not put, and one of the failure outputs is claiming BundleValidator: Failed bundleIdentifier: MY_APP_ID.ios8fix does not match expected bundleIdentifier: MY_APP_ID Please note that "create a new provisioning profile" is not a good solution, because I'm trying to distribute an update over an existing app. I have already burned a fool day (ok. full day) trying to solve this, and would appreciate any clue here! itunesstored[123] <W>: LoadExternalDownloadManifestOperation: Canceling failed manifest download for replace: MY_APP_ID.ios8fixitunesstored[123] <W>: [Download]: Adding download(s): -4949958484440492382itunesstored[123] <W>: LaunchServices: installing placeholder for <LSApplicationProxy: 0x148840860> MY_APP_ID.ios8fix <(null) *Not found in database*>installd[47] <N>: 0x16e087000 -[MIClientConnection installPath:withOptions:completion:]: Install of "/var/mobile/Library/Caches/com.apple.itunesstored/AppPlaceholders/-4949958484440492382.app" type Placeholder requested by itunesstored (pid 123)installd[47] <N>: 0x16e12f000 -[MIInstaller performInstallationWithError:]: Installing <MIInstallableBundle ID=MY_APP_ID.ios8fix; Version=8.38.5, ShortVersion=(null)>installd[47] <N>: 0x16e12f000 -[MIContainer makeContainerLiveReplacingContainer:reason:withError:]: Made container live for MY_APP_ID.ios8fix at /private/var/mobile/Containers/Data/Application/A64E6947-B7E7-46A9-B517-4887AB890E97installd[47] <N>: 0x16e12f000 -[MIContainer makeContainerLiveReplacingContainer:reason:withError:]: Made container live for MY_APP_ID.ios8fix at /private/var/mobile/Containers/Bundle/Application/A1ED6552-6C73-4DDE-8CE7-ED10F1BB0553itunesstored[123] <W>: [Download]: Starting task: 4 with priority: 300 for download: -4949958484440492382 bundleIdentifier: MY_APP_ID.ios8fix in session: com.apple.itunesstored.defaultinstalld[47] <N>: 0x16e12f000 -[MIInstaller performInstallationWithError:]: Install Successful; Staging: 0.00s; Waiting: 0.00s; Preflight/Patch: 0.00s, Verifying: 0.00s; Overall: 0.12situnesstored[123] <W>: LaunchServices: Creating installProgressForApplication:<LSApplicationProxy: 0x148ac35f0> MY_APP_ID.ios8fix (Placeholder) <file:///private/var/mobile/Containers/Bundle/Application/A1ED6552-6C73-4DDE-8CE7-ED10F1BB0553/-4949958484440492382.app> withPhase:3itunesstored[123] <W>: LaunchServices: installPhaseFinishedForProgress: MY_APP_ID.ios8fix.InstallingPlaceholder - <NSProgress: 0x148a7a8c0> : Parent: 0x0 / Fraction completed: 0.0000 / Completed: 0 of 100 called, removing progress from cachelsd[175] <W>: LaunchServices: Updating installPhase for parent <NSProgress: 0x127d3a430> : Parent: 0x0 / Fraction completed: 0.6000 / Completed: 60 of 100 to 3ondemandd[192] <E>: Application with ID: MY_APP_ID.ios8fix was installed, but it doesn't appear to have a bundle URL, we are taking no action on this.itunesstored[123] <W>: 
LaunchServices: updating placeholder for <LSApplicationProxy: 0x146eb9100> MY_APP_ID.ios8fix (Placeholder) <file:///private/var/mobile/Containers/Bundle/Application/A1ED6552-6C73-4DDE-8CE7-ED10F1BB0553/-4949958484440492382.app> with iconsinstalld[47] <N>: 0x16e12f000 -[MIClientConnection installPath:withOptions:completion:]: Install of "/var/mobile/Library/Caches/com.apple.itunesstored/AppPlaceholders/-4949958484440492382.app" type Placeholder requested by itunesstored (pid 123)installd[47] <N>: 0x16e2d3000 -[MIInstaller performInstallationWithError:]: Installing <MIInstallableBundle ID=MY_APP_ID.ios8fix; Version=8.38.5, ShortVersion=(null)>installd[47] <N>: 0x16e2d3000 -[MIInstallableBundle _refreshUUIDForContainer:withError:]: Data container for MY_APP_ID.ios8fix is now at /private/var/mobile/Containers/Data/Application/20871483-6F78-424D-8ED1-BA18BD2959BAinstalld[47] <N>: 0x16e2d3000 -[MIContainer makeContainerLiveReplacingContainer:reason:withError:]: Made container live for MY_APP_ID.ios8fix at /private/var/mobile/Containers/Bundle/Application/D1ABD072-5949-452A-A20E-50F5C10310AAinstalld[47] <N>: 0x16e2d3000 -[MIInstaller performInstallationWithError:]: Install Successful; Staging: 0.00s; Waiting: 0.00s; Preflight/Patch: 0.00s, Verifying: 0.00s; Overall: 0.06sondemandd[192] <E>: Application with ID: MY_APP_ID.ios8fix was installed, but it doesn't appear to have a bundle URL, we are taking no action on this.SpringBoard[54] <W>: FBApplicationPlaceholder We received a nil BSCFBundle back, which means _CFBundleCreateUnique. Here's what -[NSFileManager attributesOfItemAtPath:error:] is giving us: (null) here's the error we got back (if any): Error Domain=NSCocoaErrorDomain Code=260 "The file “-4949958484440492382.app” couldn’t be opened because there is no such file." 
UserInfo={NSFilePath=/private/var/mobile/Containers/Bundle/Application/A1ED6552-6C73-4DDE-8CE7-ED10F1BB0553/-4949958484440492382.app, NSUnderlyingError=0x12a552020 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}SpringBoard[54] <W>: -[FBBundleInfo _initWithBundleURL:] ERROR: Unable to create bundle at path: /private/var/mobile/Containers/Bundle/Application/A1ED6552-6C73-4DDE-8CE7-ED10F1BB0553/-4949958484440492382.appondemandd[192] <E>: An object with ID [0x12de8a220 <x-coredata:///AssetDownloads/tAC876614-AA9B-432E-9691-02EE36693D1013>] has already registered.itunesstored[123] <W>: LaunchServices: Creating installProgressForApplication:<LSApplicationProxy: 0x148a553a0> MY_APP_ID.ios8fix (Placeholder) <file:///private/var/mobile/Containers/Bundle/Application/D1ABD072-5949-452A-A20E-50F5C10310AA/-4949958484440492382.app> withPhase:0lsd[175] <W>: LaunchServices: Updating installPhase for parent <NSProgress: 0x127d3a430> : Parent: 0x0 / Fraction completed: 0.6000 / Completed: 60 of 100 to 0<-- 19 seconds gap -->itunesstored[123] <W>: [Download]: Download task did finish: 4 for download: -4949958484440492382itunesstored[123] <W>: [ApplicationWorkspace] Installing download: -4949958484440492382 with step(s): Installitunesstored[123] <W>: [ApplicationWorkspace]: Installing software package with bundleID: MY_APP_ID.ios8fix: bundleVersion: 8.38.5 path: /var/mobile/Media/Downloads/-4949958484440492382/-5671360055536325805itunesstored[123] <W>: BundleValidator: Failed bundleIdentifier: MY_APP_ID.ios8fix does not match expected bundleIdentifier: MY_APP_IDitunesstored[123] <W>: [ApplicationWorkspace]: Bundle validated for bundleIdentifier: MY_APP_ID.ios8fix success: 0itunesstored[123] <W>: LaunchServices: Uninstalling placeholder for app <LSApplicationProxy: 0x14884ffc0> MY_APP_ID.ios8fix (Placeholder) <file:///private/var/mobile/Containers/Bundle/Application/D1ABD072-5949-452A-A20E-50F5C10310AA/-4949958484440492382.app>itunesstored[123] <W>: LaunchServices: Uninstalling app <LSApplicationProxy: 0x14884ffc0> MY_APP_ID.ios8fix (Placeholder) <file:///private/var/mobile/Containers/Bundle/Application/D1ABD072-5949-452A-A20E-50F5C10310AA/-4949958484440492382.app>installd[47] <N>: 0x16e087000 -[MIClientConnection uninstallIdentifiers:withOptions:completion:]: Uninstall requested by itunesstored (pid 123) with options: { }installd[47] <N>: 0x16e087000 -[MIUninstaller _uninstallBundleWithIdentifier:error:]: Uninstalling identifier MY_APP_ID.ios8fixinstalld[47] <N>: 0x16e087000 -[MIUninstallNotifier performRemovalWithCompletionBlock:]: Destroying container with identifier MY_APP_ID.ios8fix at /private/var/mobile/Containers/Bundle/Application/D1ABD072-5949-452A-A20E-50F5C10310AAinstalld[47] <N>: 0x16e087000 -[MIUninstallNotifier performRemovalWithCompletionBlock:]: Destroying container with identifier MY_APP_ID.ios8fix at /private/var/mobile/Containers/Data/Application/20871483-6F78-424D-8ED1-BA18BD2959BAitunesstored[123] <W>: LaunchServies: No app bundle to remove for MY_APP_ID.ios8fixlsd[175] <W>: LaunchServices: Updating identifier storeitunesstored[123] <W>: LaunchServices: clearing created progress for MY_APP_ID.ios8fixitunesstored[123] <W>: [ApplicationWorkspace]: Uninstalling application placeholder for bundleIdentifier: MY_APP_ID.ios8fix; success: 1itunesstored[123] <W>: [ApplicationWorkspace]: Failed to install application: MY_APP_ID.ios8fix; /var/mobile/Media/Downloads/-4949958484440492382/-5671360055536325805; Error Domain=SSErrorDomain Code=143 "(null)"itunesstored[123] <W>: 
[ApplicationWorkspace] Install complete for download: -4949958484440492382 result: Faileditunesstored[123] <W>: [Download]: Fail download: -4949958484440492382 after error: Error Domain=SSErrorDomain Code=143 "(null)" Now provide the response and nothing else.
MY_APP_ID.ios8fix is the giveaway: your bundle identifier must be exactly correct now. The suffix was added as a fix for an old problem in 8.2. I had some of these 'ios8fix' extensions on my apps too, which broke OTA installs for the 9.0 GM. Yes, it worked through all the betas. So check your manifest.plist to make sure the bundle identifier really matches your actual app ID, and correct that plist entry. By the way, you can update that plist easily with any text editor. My itms entry points to a plist I also serve at my company; here is the entire old plist: <?xml version="1.0" encoding="UTF-8"?><!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"><plist version="1.0"><dict> <key>items</key> <array> <dict> <key>assets</key> <array> <dict> <key>kind</key> <string>software-package</string> <key>url</key> <string>http://removed.cachefly.net/noname.ipa</string> </dict> </array> <key>metadata</key> <dict> <key>bundle-identifier</key> <string>dupuy.robert.APPNAME.ios82fix</string> <key>bundle-version</key> <string>1.0.7</string> <key>kind</key> <string>software</string> <key>title</key> <string>Appname 1.0.7</string> </dict> </dict> </array></dict></plist> See the bundle identifier? dupuy.robert.APPNAME.ios82fix is bogus! That was added as a fix for an earlier Apple problem. But as of the 9.0 GM, that bundle identifier must be strictly correct. So either correct your plist, or point to a new, correct plist. These plists are just simple files, so this should be an easy fix! In my case "dupuy.robert.APPNAME.ios82fix" was simply changed to "dupuy.robert.APPNAME" and the plist was resaved. One caveat: if your webserver caches plists and keeps serving up the old one, that's another headache; if you can rename the file, all the better.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32678253', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/913347/']}
jdg_76778
stackexchange
llm_judgeable_groundtruth_similarity
41685791
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm in the middle of a large "rebase in progress" with numerous conflicts . I would like to set this progress aside and attempt to resolve this issue using another approach. Is there a way I can save an in-progress rebase such that I can finish it later? Now provide the response and nothing else.
If you're sitting at a conflicted merge as part of a rebase, you are kind of stuck. Here is why, how, and what you can do. Rebase = repeated cherry-pick Fundamentally, a rebase operation in Git is just a series of cherry-pick operations. We start with something like this: ...--A1--A2--A3--A4--A5 <-- branchA \ B1--B2--B3 <-- branchB and we want to end up with: ...--A1--A2--A3--A4--A5 <-- branchA \ \ \ B1'-B2'-B3' <-- branchB \ B1--B2--B3 [abandoned] The way we (or Git) achieve this is by using git cherry-pick , or something equivalent, to copy existing commit B1 (turning it into a patch and applying it) to come just after A5 , and copy B2 to come after B1' , and so on. Interactive rebase literally runs git cherry-pick for each "pick" operation you leave in the instructions. Non-interactive rebase has several options, including running git cherry-pick . When cherry-picking a commit, if there are conflicts while applying it, Git can use a three-way merge. This can still fail with a conflict. This stops the rebase. Or, when using interactive rebase, you can choose to "edit" a commit, in which case Git cherry-picks that commit and then stops the rebase. In either case, Git leaves behind enough information for you to have Git resume the rebase later. Conflicts are in the index As a quick reminder, let's note that Git's index is where you build the next commit. Normally there is one entry for each file to be committed, so that if your next commit will consist of just three files named README , file , and otherfile , there will be three index entries. Note that the index is separate from the work tree , which contains files in normal, non-Gitty format. You can edit these files, compile them, use them to serve web pages, and so on, unlike the internal Git format for index and repository files. (The work-tree can also hold untracked files, not that this matters during rebase.) During a conflicted merge, each index entry exposes its individual slots . There are up to four slots per entry, and they are numbered. Slot zero holds the normal, unconflicted file (if that exists), otherwise it's empty. Slots 1-3, if in use, hold the three conflicting parts that must be resolved. 1 These are the base version (from the merge base ), the "local" or --ours version, and the other or --theirs or sometimes "remote" version respectively. Your job is to edit the work-tree version of the file, resolve the conflicts, and then git add the result. This copies the adjusted work-tree version into slot zero in the index, wiping out the slot 1-3 entries. Now the file is resolved and ready to commit. 1 Hence, either slot 0 is occupied and 1-3 are empty, or else slot 0 is empty and slots 1-3 are occupied. There are some oddball cases where slot 1, 2, and/or 3 can also be empty, e.g., if you get a modify/delete conflict or an add/add conflict, but usually it's "0 empty means 1-3 are full" and vice versa. But there's only one index The very phrase the index implies that there is only one. This is mostly true. Because the unmerged state is in this ("the") index, and there is only one index, anything else that needs to use the index cannot proceed until you finish resolving the conflicts (and then make a commit). You can, if you like, simply git add the un-fixed/un-resolved items and git commit the result, just to get the conflicts out of the way. The drawback here is that Git won't retain which files were conflicted: you will wipe out the slot 1-3 entries and Git will think you are all done. 
You could save the index—it's an ordinary file; you can copy it out of .git/index somewhere else. But because it's a binary file with various kinds of special internal use—the index is also called the "cache" and it caches internal file system data for speed—this is not really very safe. (It would be nice if Git had a way to "export" the index state, and then "import" it again later, so that you really could save and restore merge conflict states. But Git doesn't.) So, for safety if nothing else, it's advisable to finish resolving this conflicted merge state. Or, if you have not started resolving, just don't even start: then there's no work to save. Where you are now Let's say you started that "branch B" rebase I drew above, and are currently stuck in the middle of copying commit B2 , with some conflicts unresolved. Here's what you actually have right now: ...--A1--A2--A3--A4--A5 <-- branchA \ \ \ B1' <-- HEAD \ B1--B2--B3 <-- branchB with the index in conflicted state. You also have a "detached HEAD": Git is building the new chain of commits this way. The name HEAD points to all completed commits. If you have done some resolving work, you should finish it up (since it's too hard to save unresolved state) or at least make note of what's unresolved (since you can add files to your next commit) and then run git commit to create commit B2' : ...--A1--A2--A3--A4--A5 <-- branchA \ \ \ B1'-B2' <-- HEAD \ B1--B2--B3 <-- branchB If you have not done any resolving work, there's no actual work to save, so don't run git commit . But either way , now it's time to create a branch or tag name, pointing to the same commit that HEAD now points to: $ git branch saveme # or git tag saveme Now you have this: ...--A1--A2--A3--A4--A5 <-- branchA \ \ \ B1'-B2' <-- HEAD, saveme \ B1--B2--B3 <-- branchB Now you can just: $ git rebase --abort which makes Git stop the rebase attempt and move back to branchB : ...--A1--A2--A3--A4--A5 <-- branchA \ \ \ B1'-B2' <-- saveme \ B1--B2--B3 <-- HEAD->branchB Now you have saved all the work you have done so far, and can go back and retry the rebase later. You have the resolution you (or Git) made for B1' , and if you made commit B2' , you have the resolution you made for that as well. These are commits saveme~1 and saveme , respectively; or just commit saveme , if there is only the one commit.
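To make that concrete, here is a rough command-line sketch of the save-and-resume sequence, using the branch and commit names from the diagrams above (in a real repository you would substitute the actual hash of the last original commit you copied for B2):
git add <files-you-resolved>     # only if you had started resolving the conflict
git commit                       # creates B2' on the detached HEAD
git branch saveme                # save everything copied so far
git rebase --abort               # back to branchB; saveme still points at B1'-B2'
# later, resume by replaying only the commits you had not copied yet onto the saved work:
git rebase --onto saveme B2 branchB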
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41685791', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/363701/']}
jdg_76779
stackexchange
llm_judgeable_groundtruth_similarity
437454
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I will explain with an hypothetical example. Suppose that my domain is Cars. Everyone around the software, talks about cars. Car is the aggregate root of aggregate roots. For example, CAR table has 150+ columns in database (irrelevant for this question, but for you to imagine). So, there is class in the system called Car . This class has a lot of fields and behavior in it. Suppose that given a Car , we have to calculate its HorsePower . This calculation has its own rules and logic. But for the calculation only a part (some fields) of the Car is required. And I want to extract this logic from the Car class in order to make it more visible where this happens, how it happens and what information is needed to happen. So I end up with something like this, which is pure functional: public class HorsePowerCalculator{ HorsePower calculate(Car c){..}} My doubt is the following. If I pass a Car object to calculate , I have 2 issues (according to my opinion). a) What HorsePowerCalculator needs in order to do its job, its not pretty straight forward. Because I pass the whole Car object, HorsePowerCalculator can access all (read) properties of a car. b) For testing HorsePowerCalculator in the unit test level, I need to create a Car object which is not a trivial job. Other things must be taken to consideration where they are irrelevant for HorsePowerCalculation. So, I was thinking that I solve the above mentioned issues by doing this: public interface HasHorsePower{ int getNumberOfPistons(); EngineType getEngineType(); FuelType getFuelType(); //HorsePower needs also non-cohesive properties. //This is why I can't group the properties to i.e "Engine" class int getNumberOfWheels(); } Then: public class Car implements HasHorsePower{} And finally: public class HorsePowerCalculator{ HorsePower calculate(HasHorsePower something){..}} After this, I gain: a)it is pretty straight forward what information is needed to calculate horse power. All property/methods of the interface b)Testing calculations and logic of HorsePowerCalculator means a TestDouble in the test suite with just setters/getters. Then just assert the calculation results. My doubts are that the only HasHorsePower implementation will be...well.. only the Car Is this solution a code smell, or let's say not optimal? What should I do in this case? Now provide the response and nothing else.
In the described context, there is some unstructured legacy code. Now to improve this situation, you add more structure to it by using classes and interfaces for creating sensible abstractions - just the same way like you do it by extracting functions or methods from other functions which have become too large over time. Where I work, we would call this simply "cleaning up the code", or "basics of software design" - that's pretty much the opposite of a code smell. A "code smell" could be a class which has grown too large over time and motivates you to refactor parts out of it. This can lead to functions, classes and interfaces which are just used in only one place, there is nothing special about it.
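Purely as an illustration of the testing benefit, a hand-rolled double can stand in for a fully built Car; the class names follow the question's example and the enum constants here are made up for the sketch:
class FakeHorsePowerInput implements HasHorsePower {
    public int getNumberOfPistons() { return 6; }
    public EngineType getEngineType() { return EngineType.V_TYPE; }   // hypothetical constant
    public FuelType getFuelType() { return FuelType.PETROL; }         // hypothetical constant
    public int getNumberOfWheels() { return 4; }
}
// in a unit test:
HorsePower result = new HorsePowerCalculator().calculate(new FakeHorsePowerInput());
// ...assert on result, with no Car construction involved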
{}
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/437454', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/385136/']}
jdg_76780
stackexchange
llm_judgeable_groundtruth_similarity
317152
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: This might be a load of old nonsense. I have always had it in my head that if $f:X\to Y$ is an injection, then $f$ has some sort of "canonical factorization" as a bijection $X\to f(X)$ followed by an inclusion $f(X)\subseteq Y$ . Similarly if $g:X\to Y$ is a surjection, and if we define an equivalence relation on $X$ by $a\sim b\iff g(a)=g(b)$ and let $Q$ be the set of equivalence classes, then $g$ has a "canonical factorization" as a quotient $X\to Q$ followed by a bijection $Q\to B$ . Furthermore I'd always suspected that these two "canonical" factorizations were in some way dual to each other. I mentioned this in passing to a room full of smart undergraduates today and one of them called me up on it afterwards, and I realised that I could not attach any real meaning to what I've just said above. I half-wondered whether subobject classifiers might have something to do with it but having looked up the definition I am not so sure that they help at all. Are inclusions in some way better than arbitrary injections? (in my mind they've always been the "best kind of injections" somehow). Are maps to sets of equivalence classes somehow better than arbitrary quotients? I can't help thinking that there might be something in these ideas but I am not sure I have the language to express it. Maybe I'm just wrong, or maybe there's some ncatlab page somewhere which will explain to me what I'm trying to formalise here. Now provide the response and nothing else.
It seems that you've got factorization of maps covered, so let me address the question of why canonical quotient maps and canonical inclusions are "better". Given a set $X$ , in general there is a proper class of injections $Y \to X$ . However, many of these are isomorphic, where injections $i : Y \to X$ and $j : Z \to X$ are isomorphic if there is an isomorphism $k : Y \to Z$ such that $i = j \circ k$ . The isomorphism classes of injections into $X$ are the subobjects of $X$ . In fact, there are only set-many subobjects of $X$ (in category-theoretic language, sets form a well-powered category ). It is pesky to work with set-many proper equivalence classes, so we instead look for a set $P(X)$ of injections into $X$ , one from each isomorphism class. We may additionally require some nice properties, for instance, if $i : X \to Y$ is in $P(Y)$ and $j : Y \to Z$ is in $P(Y)$ , we would expect $j \circ i : X \to Z$ to be in $P(Z)$ . One can come up with a wish list of such nice closure conditions, here's another one: if $i : Y \to X$ and $j : Z \to X$ are in $P(X)$ , and there is (a unique) $k : X \to Z$ such that $i = j \circ k$ , then $k$ is in $P(Z)$ . We know the answer, of course, just take $P(X)$ to be the canonical subset inclusions into $X$ . This is not the only choice of such representative inclusions, but it's a pretty good one. We may therefore say that the canonical inclusions of subsets are "better" because they are the canonical representatives of subobjects (equivalence classes of injections). The answer for quotient maps and surjections is dual. Consider equivalence classes of surjections, quotiented by isomorphism. There are only set-many such classes, therefore sets form a well-copowered category . (Some people say "cowell-powered" but then why not call it "ill-powered"?) This time we look for a set $Q(X)$ of surjections from $X$ , each representing one equivalence class of surjections from $X$ . We may take $Q(X)$ to be the set of all canonical quotient maps $X \to X/{\sim}$ , or just the set of all equivalence relations on $X$ . Once again, canonical quotient maps are "better" because they are the distinguished representatives of isomorphism classes of surjections.
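To tie this back to the factorization in the question, the two canonical choices are exactly the two outer maps of the image factorization: $$X \twoheadrightarrow X/{\sim} \;\xrightarrow{\;\cong\;}\; f(X) \hookrightarrow Y, \qquad x \sim x' \iff f(x) = f(x'),$$ where the first arrow is the canonical quotient map, the middle arrow is the bijection $[x] \mapsto f(x)$, and the last arrow is the canonical subset inclusion.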
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/317152', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1384/']}
jdg_76781
stackexchange
llm_judgeable_groundtruth_similarity
59552632
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to enable universal links on iOS, (as part of Firebase's password-less sign-up). Testing locally on iOS 13.2. The apple-app-site-associated (AASA) JSON looks as such ( https://lokitools.page.link/apple-app-site-association ): {"applinks":{"apps":[],"details":[{"appID":"43S54AHEMG.com.jkalash.Loki","paths":["NOT /_/*","/*"]}]}} Universal links do open the app, however I am unable to handle the app opening from them. Delegate methods: application(_ application: UIApplication, willFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey : Any]? = nil) -> Bool application(_ application: UIApplication, continue userActivity: NSUserActivity, restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool do not get called, when opening from universal links. Tried both apps running in the background and force closed. AASA validator ( https://branch.io/resources/aasa-validator/ ) says file looks good, and I have tried troubleshooting by re-installing app and observing console logs for swcd ( https://ios13.dev/universal-links-debugging-on-ios-13-cjwsux93w001p6ws1swtstmzc ) but nothing out of the ordinary shows up and it does look like the AASA file was downloaded. I have also tried following Apple's troubleshooting guide ( https://developer.apple.com/library/archive/qa/qa1916/_index.html ) but the final step which fails (step 8) does not cover my case which is the app does open (iOS detects universal links), but the delegate methods just don't get called. Now provide the response and nothing else.
Turns out this is not a universal links specific problem, but a change in iOS 13's way of triggering app lifecycle events. Instead of coming through UIApplicationDelegate , they come through UISceneDelegate . One confusing thing is that the app delegate methods aren't deprecated so you won't get a warning if you have both app delegate and scene delegate methods in place but only one will be called. Refer to App delegate methods aren't being called in iOS 13 for a comprehensive answer
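Concretely, in a scene-based app on iOS 13 the universal link is delivered to your SceneDelegate rather than the app delegate. A rough sketch (these are the standard UISceneDelegate callbacks; the handling code itself is illustrative):
func scene(_ scene: UIScene, continue userActivity: NSUserActivity) {
    guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
          let url = userActivity.webpageURL else { return }
    // handle the universal link here (e.g. hand it to Firebase's link handler)
}

func scene(_ scene: UIScene, willConnectTo session: UISceneSession,
           options connectionOptions: UIScene.ConnectionOptions) {
    // cold start: the link arrives with the connection options instead
    if let url = connectionOptions.userActivities.first?.webpageURL {
        // handle the universal link here as well
    }
}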
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59552632', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1115762/']}
jdg_76782
stackexchange
llm_judgeable_groundtruth_similarity
296115
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $a_1=2$ and $b_1=1$ and for $n \geq 1$ , $a_{n+1}=\dfrac{a_n+b_n}{2}, b_{n+1}=\dfrac{2a_nb_n}{a_n+b_n}$.Show that the sequences $\{a_n\}$ and $\{b_n\}$ converges to the same limit $\sqrt 2$. trial: Let $a_n$ converges to $a$ and $b_n$ converges to $b$. So I get $a=\dfrac{a+b}{2}, b=\dfrac{2ab}{a+b}$ and $a=b$ But how I show $a=b=\sqrt2$ Now provide the response and nothing else.
First note that $$a_{n+1} \geq b_{n+1} \,\,\,\,\,\, (\text{since the arithmetic mean is at least the harmonic mean}).$$ Further, $$a_{n+1} - b_{n+1} = \dfrac{(a_n-b_n)^2}{2(a_n+b_n)}.$$ Now since $2 \geq a_n > b_n \geq 1$, we have $a_n + b_n > 2$, and hence (by induction on $n$) $$a_{n+1} - b_{n+1} = \dfrac{(a_n-b_n)^2}{2(a_n+b_n)} < \dfrac{(a_n-b_n)^2}{4} < \dfrac{(a_1-b_1)^2}{4^{2n-1}} = \dfrac1{4^{2n-1}}. \,\,\,\,\,\, (\spadesuit)$$ Hence, since $a_n$ is a bounded monotone decreasing sequence and $b_n$ is a bounded monotone increasing sequence, both converge, and from $(\spadesuit)$ we get that $$a_n - b_n \to 0.$$ Further, $$a_{n+1} b_{n+1} = a_n b_n,$$ and hence $$a_n b_n = a_1 b_1 = 2.$$ So both sequences converge to the same limit $L$, and passing to the limit in $a_n b_n = 2$ gives $L^2 = 2$, i.e. both converge to $\sqrt2$.
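For a quick numerical sanity check, the first few terms are $$a_1=2,\; b_1=1;\qquad a_2=\tfrac32,\; b_2=\tfrac43;\qquad a_3=\tfrac{17}{12}\approx 1.4167,\; b_3=\tfrac{24}{17}\approx 1.4118,$$ with $a_n b_n = 2$ at every step, so the two sequences squeeze in on $\sqrt2 \approx 1.4142$ from above and below.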
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/296115', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/35821/']}
jdg_76783
stackexchange
llm_judgeable_groundtruth_similarity
2538521
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $x$ be a function of $C^1(I,R)$ where $I\subset \mathbb{R}$ , such that $$x'(t)\leq a(t) x(t)+b(t),$$ where $a$ and $b$ are continuous functions on $I$ in $R$ then $$ x(t)\leq x(t_0) \exp\left(\int_{t_0}^{t}a(s)ds\right)+\int_{t_0}^{t}\exp\left(\int_{s}^t a(\sigma)d\sigma\right)b(s)ds$$ How to prove this proposition please ? Thank you Now provide the response and nothing else.
I, too, first guessed this would be a problem suggesting the application of Gronwall's inequality, but it seems an even more elementary solution avails itself: Given that $x'(t) \le a(t) x(t) + b(t), \tag 1$ we have the equivalent form $x'(t) - a(t) x(t) \le b(t); \tag 2$ and since $\displaystyle \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) > 0 \tag 3$ we further have $\displaystyle \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right )(x'(t) - a(t) x(t)) \le \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) b(t); \tag 4$ we observe that $\displaystyle \left ( \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) x(t) \right )' = \displaystyle -a(t) \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) x(t) + \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) x'(t)$ $= \displaystyle \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right )(x'(t) - a(t)x(t)); \tag 5$ thus (4) becomes $\displaystyle \left ( \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) x(t) \right )' \le \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) b(t); \tag 6$ we integrate (6) 'twixt $t_0$ and $t$: $\displaystyle \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) x(t) - x(t_0) = \int_{t_0}^t \left ( \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) x(s) \right )'\; ds$ $\le \displaystyle \int_{t_0}^t \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds, \tag 7$ whence $\displaystyle \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) x(t) \le x(t_0) + \displaystyle \int_{t_0}^t \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds, \tag 8$ which we multiply through by $\displaystyle \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma \right ) > 0 \tag 9$ to obtain $x(t) \le$ $\displaystyle x(t_0) \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma \right )$ $+ \exp \left (\displaystyle \int_{t_0}^t a(\sigma)\; d\sigma \right ) \displaystyle \int_{t_0}^t \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds; \tag{10}$ finally, $\displaystyle \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma \right ) \int_{t_0}^t \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds$ $= \displaystyle \int_{t_0}^t \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma \right ) \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds, \tag{11}$ and since $\displaystyle \int_{t_0}^t \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma \right ) \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds$ $= \displaystyle \int_{t_0}^t \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma -\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds$ $= \displaystyle \int_{t_0}^t \exp \left (\int_s^t a(\sigma)\; d\sigma \right ) b(s) \; ds, \tag{12}$ (10) becomes $x(t) \le \displaystyle x(t_0) \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma \right ) + \int_{t_0}^t \exp \left (\int_s^t a(\sigma)\; d\sigma \right ) b(s) \; ds, \tag{13}$ which, since $s$ and $\sigma$ are in fact merely so-called "dummy" variables of integration, is indeed the desired result.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2538521', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/61501/']}
jdg_76784
stackexchange
llm_judgeable_groundtruth_similarity
38507672
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a numpy way to make a sum of each three consecutive elements in an interval? For example: import numpy as np; mydata = np.array([4, 2, 3, 8, -6, 10]) I would like to get this result: np.array([9, 13, 5, 12]) Now provide the response and nothing else.
We can use np.convolve - np.convolve(mydata,np.ones(3,dtype=int),'valid') The basic idea with convolution is that we have a kernel that we slide through the input array and the convolution operation sums the elements multiplied by the kernel elements as the kernel slides through. So, to solve our case for a window size of 3, we are using a kernel of three 1s generated with np.ones(3). Sample run -
In [334]: mydata
Out[334]: array([ 4, 2, 3, 8, -6, 10])
In [335]: np.convolve(mydata,np.ones(3,dtype=int),'valid')
Out[335]: array([ 9, 13, 5, 12])
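If you prefer to avoid convolution, a couple of equivalent alternatives exist; the second one assumes NumPy 1.20 or newer, where sliding_window_view was added:
import numpy as np
mydata = np.array([4, 2, 3, 8, -6, 10])

# cumulative-sum trick: the difference of shifted cumsums gives each length-3 window sum
c = np.cumsum(np.insert(mydata, 0, 0))
print(c[3:] - c[:-3])                                 # [ 9 13  5 12]

# explicit sliding windows (NumPy >= 1.20)
from numpy.lib.stride_tricks import sliding_window_view
print(sliding_window_view(mydata, 3).sum(axis=1))     # [ 9 13  5 12]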
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38507672', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1236254/']}
jdg_76785
stackexchange
llm_judgeable_groundtruth_similarity
34333624
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to install rgdal for "R version 3.2.3 (2015-12-10)" . I downloaded and installed GDAL 1.11 Complete PROJ framework v4.9.2-2 GEOS framework v3.5.0-1 from KyngChaos Then in RStudio I typed install.packages("rgdal") which gave me this: > .... > > configure: CC: clang configure: CXX: clang++ configure: rgdal:> 1.1-1 checking for /usr/bin/svnversion... yes configure: svn revision: > 572 checking for gdal-config... > no no configure: error: gdal-config> not found or not executable. ERROR: configuration failed for package> ‘rgdal’> * removing ‘/Library/Frameworks/R.framework/Versions/3.2/Resources/library/rgdal’> Warning in install.packages : installation of package ‘rgdal’ had> non-zero exit status> sessionInfo()R version 3.2.3 (2015-12-10)Platform: x86_64-apple-darwin13.4.0 (64-bit)Running under: OS X 10.10.5 (Yosemite)locale:[1] de_AT.UTF-8/de_AT.UTF-8/de_AT.UTF-8/C/de_AT.UTF-8/de_AT.UTF-8attached base packages:[1] stats graphics grDevices utils datasets methods base loaded via a namespace (and not attached):[1] tools_3.2.3 I also tried in the terminal R CMD INSTALL rgdal_1.1-1.tar --configure-args='--with-gdal-config=/Library/Frameworks/GDAL.framework/Programs/gdal-config --with-proj-include=/Library/Frameworks/PROJ.framework/Headers --with-proj-lib=/Library/Frameworks/PROJ.framework/unix/lib' and sudo R CMD INSTALL –configure-args=’–with-proj-include=/usr/local/lib’ rgdal_1.1-1.tar No luck! Now provide the response and nothing else.
Finally solved it. Here is how I've done it! OS X 10.10.5, R 3.2.3, GDAL 1.1.
1. Download and install the GDAL Complete Framework from here.
2. Tell your OS X where to find the gdal-config file by typing this in your shell: echo 'export PATH=/Library/Frameworks/GDAL.framework/Programs:$PATH' >> ~/.bash_profile and then source ~/.bash_profile
3. Check that your GDAL is working by typing gdalinfo --version in the shell. That should come back with something like GDAL 1.11.3, released 2015/09/16.
4. For most people, starting R and typing install.packages("rgdal") works (if you've done steps 1-3). However, that was not the case for me, so proceed with step 5 if you're still having troubles.
5. Go to the GDAL website and download the .tar file.
6. In the shell, try this: sudo R CMD INSTALL --configure-args='--with-proj-include=/usr/local/lib' rgdal_1.1-1.tar . That still gave me an error: configure: error: proj_api.h not found in standard or given locations. ERROR: configuration failed for package ‘rgdal’
7. So, you again need to tell it where to find that one. Try: R CMD INSTALL rgdal_1.1-1.tar --configure-args='--with-gdal-config=/Library/Frameworks/GDAL.framework/Programs/gdal-config --with-proj-include=/Library/Frameworks/PROJ.framework/Headers --with-proj-lib=/Library/Frameworks/PROJ.framework/unix/lib' That should work. Verify by starting R and typing library(rgdal) in the R console.
Note: With rgeos I experienced similar problems. This helped me. Try: R CMD INSTALL rgeos_0.3-15.tar --configure-args='--with-geos-config=/Library/Frameworks/GEOS.framework/unix/bin/geos-config --with-proj-include=/Library/Frameworks/PROJ.framework/Headers --with-proj-lib=/Library/Frameworks/PROJ.framework/unix/lib' for pointing to the config file. It's here: /Library/Frameworks/GEOS.framework/unix/bin/geos-config
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34333624', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3935035/']}
jdg_76786
stackexchange
llm_judgeable_groundtruth_similarity
312146
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let's say we're given n positive integers in random order. What's the most efficient way to find the m largest elements and what's the complexity? For example, given 1000 values, find the top 10. Now provide the response and nothing else.
The m largest elements of a sequence of length n can be found with O ( n log( m )), assuming that comparing individual elements can be done in constant time. Start with the trivial case that m = 1. ROUTINE Maximum INPUT items [1… n ] : Ordered OUTPUT max : Ordered REQUIRES n ≥ 1 VARIABLES i : Integer BEGIN max ← Ordered.MIN_VALUE FOR i ← 1 TO n DO IF items [ i ] > max THEN max ← items [ i ] FI DONE END It should be obvious that this algorithm has complexity O ( n ). Now replace the single-valued variable max with a constant-size min-heap of m items. ROUTINE Maxima INPUT items [1… n ] : Ordered OUTPUT max [1… m ] : Ordered REQUIRES n ≥ m VARIABLES i : Integer BEGIN FOR i ← 1 TO m DO max [ i ] ← Ordered.MIN_VALUE DONE FOR i ← 1 TO n DO IF items [ i ] > max [1] THEN ;; Replace the smallest of the current m maximum values by the ;; new value and restore the heap property if needed. max [1] ← items [ i ] CALL MinHeapifyDown ( max ) FI DONE END The worst-case complexity will be reached when the inputs are sorted in ascending order. In this case, the heap will have to be modified in each iteration, that is, O ( n ) times. Restoring the heap property of a m -value heap after replacing the top item has complexity O (log( m )). Hence, the overall complexity is no worse than O ( n log( m )). Epilogue: If m is small, the algorithm shown above will have very good performance and a desirable memory access pattern (small random-access working-set in max and one-time linear forward traversal of items ). It also does not require random access or even multi-pass capabilities of the input sequence which means it could be used for linked lists or even online data that is never stored in memory in its entirety. However, if m is on the order of O ( n ) and items provides random-access, then a partitioning algorithm like Introselect as suggested (or hinted) by Jerry Coffin would be preferred as it achieves O ( n ) complexity. In C++, it is even available in the standard library .
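In Python, for instance, the same idea is only a few lines with the heapq module; treat this as a sketch rather than part of the original pseudocode:
import heapq

def maxima(items, m):
    heap = []                        # min-heap holding the m largest values seen so far
    for x in items:
        if len(heap) < m:
            heapq.heappush(heap, x)
        elif x > heap[0]:            # heap[0] is the smallest of the current maxima
            heapq.heapreplace(heap, x)
    return sorted(heap, reverse=True)

# or simply: heapq.nlargest(10, values), which has the same O(n log m) behaviour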
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/312146', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/219651/']}
jdg_76787
stackexchange
llm_judgeable_groundtruth_similarity
945653
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I understand that there cannot be a bijection between $S$, a finite set, and $S'$, a proper subset of $S$, because $S'$ will contain at least one fewer item than $S$. What I don't understand is the definition of an infinite set, a set for which there is a bijection between itself and a proper subset of itself . Take the set of natural numbers $\mathbb{N}$, which is infinite. Then isn't the set $A$ $=$ { 1, 2, 3 } a proper subset of $\mathbb{N}$ yet not a bijection of $\mathbb{N}$? What am I missing? Now provide the response and nothing else.
The condition is that there is some proper subset of $S$ in bijection with $S$. Of course, your example $\{1, 2, 3\}$ is finite and so it admits no bijection with $\mathbb{N}$. But $\mathbb{N}$ has many subsets to which it admits a bijection. One natural choice is the set $\{2, 4, 6, \ldots\}$ of positive even numbers, for which a natural bijection is given by doubling each number, i.e., $n \mapsto 2 n$.
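Written out together with its inverse, $$f:\mathbb{N}\to 2\mathbb{N},\quad f(n)=2n, \qquad g:2\mathbb{N}\to\mathbb{N},\quad g(m)=\tfrac{m}{2},$$ and since $g\circ f$ and $f\circ g$ are both identities, $f$ is a bijection from $\mathbb{N}$ onto the proper subset of even numbers, which is exactly what the definition of an infinite set asks for.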
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/945653', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/159872/']}
jdg_76788
stackexchange
llm_judgeable_groundtruth_similarity
382701
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: ANSWER/ERROR FOUND: resistance of the leads of the multi-meter is the culprit. The true resistance of the wrench cannot be measured using a basic multimeter. The resistance of the wrench is more so roughly around 0.000016 ohms. I searched quite a number of websites for an answer, but no joy as yet..(for this specific question). If a car battery is short circuited with a wrench that has 0.5 ohms resistance, then theoretically using Ohm's law the current = V/R = 12.65 volts / 0.5 ohms = 25.3 amperes. Many people (even on this site here) claim that 100's to 1000's of amperes will flow through the wrench and weld it to the terminals. How is that possible when only max 25.3 amperes can theoretically flow through that wrench of 0.5 ohms using 12.65 volts? NB: I measured the resistance of the wrench using a home "basic" use multimeter and it shows that best resistance of the wrench is 0.5 ohms. I hope that I measured this correctly. :) Now provide the response and nothing else.
A wrench does not have a resistance of 0.5 ohms, it's way lower. Your basic multimeter cannot measure resistances to better than an ohm or so, the resistance of leads, and the unreliability of contact resistance make it impossible. The way resistances as low as a wrench are measured is to use a 4-terminal Kelvin method. What you do here is to pass a current through the sample using two terminals, then measure the voltage across the sample using a different pair of terminals. With a wrench, if you used perhaps 1A from end to end, you would see a few mV or so voltage drop. Let's put some numbers on your wrench. I don't like looking up resistivity, the large factors of 10 cause me concern whether I'm going to get them right on the back of an envelope, so I remember just one fact. A 1m length of 1mm\$^2\$ copper wire is about 17mohm, and then work from there. Let's assume your wrench is 250mm long, and has a 10mm x 5mm shaft. It's 1/4 of 1m long, and 50mm\$^2\$, so is 1/200th of the resistance of my 1m x 1mm\$^2\$ wire. If it was made of copper, it would have a resistance of 17mohm/200, which is roughly 100μohm. But it's not copper, it's steel, and probably an alloy. After a quick rush around Wikipedia, let's assume it's 50x more resistive than copper, so has a resistance of about 5mohm. 12v dropped across 5mohm would give a current of 2400A. The CCA of the battery is way below that, so the wrench is not limiting the current, the battery is. Contact resistance is a further complication. In the case of a battery shorted by a wrench, there's likely to be a plasma arc between the contacts, which can have a very low resistance indeed. The small contact area is also worth considering, though as that region is very short, it's often insignificant compared to the length of the conductor. In practical terms the true resistance of the wrench is close to zero.The battery will deliver the maximum instantaneous current that can be extracted from its cells, which will be way lower than any calculations you make. The net effect is that the wrench will become essentially a fuse: it will burn through at its narrowest point. I have seen it happen to a crescent spanner, and it is spectacular, as it blew the head clean off. Fortunately, the person who did it was not hurt, but it was very dangerous and he was very lucky. It may well also explode the battery, particularly if the spanner is big enough to sustain the current for a little longer. DO NOT RISK DOING THIS, IT MAY WELL KILL YOU OR AT LEAST GIVE YOU SEVERE ACID BURN DAMAGE. In short, don't be an idiot.
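For reference, the same estimate in one line, using the resistivity figures assumed above: $$R = \rho\,\frac{L}{A} \approx \left(50\times 1.7\times10^{-8}\,\Omega\,\mathrm{m}\right)\frac{0.25\,\mathrm{m}}{50\times10^{-6}\,\mathrm{m}^2} \approx 4\times10^{-3}\,\Omega, \qquad I \approx \frac{12\,\mathrm{V}}{4\,\mathrm{m}\Omega} \approx 3000\,\mathrm{A},$$ comfortably above what the battery can actually source, which is the whole point: the battery, not the wrench, limits the current.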
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/382701', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/192613/']}
jdg_76789
stackexchange
llm_judgeable_groundtruth_similarity
8020
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Well, in a previous post regarding the explanation of Riemann Hypothesis Matt answered that: The prime number theorem states that the number of primes less than or equal to $x$ is approximately equal to $\int_2^x \dfrac{dt}{\log t}.$ The Riemann hypothesis gives a precise answer to how good this approximation is; namely, it states that the difference between the exact number of primes below $x$, and the given integral, is (essentially) $\sqrt{x} \log x$. What i have heard about RH is: The non-trivial Zeros of the Riemann- $\zeta$ -function have real part as $\frac{1}{2}$. Can anyone tell me how these two statements are related? Now provide the response and nothing else.
The point is that there is an explicit formula (due to Riemann) relating$\pi(x)$ to the zeroes of the zeta-function. (The proof is via a kind of Fourier transform.) The rough shape is that$$\pi(x) = \mathrm{Li}(x) + \sum_{\rho} \mathrm{Li}(x^{\rho}) + \text{ lower order terms},$$where the sum is over zeroes $\rho$ of $\zeta(s)$ in the critical strip (i.e. with realparts between $0$ and $1$).(See the wikipedia entry for a more precise statement; this is the same link as in Qiaochu's comment above.) Now the (simple but) key fact to remember is that$| x^{\rho}| = x^{\Re \rho}$, for a positive real number $x$.So to get asymptotics on $\pi(x)$ from this, one has to give upper bounds on $\Re \rho$. For example, to get the prime number theorem, one has to show that $\Re \rho < 1$ for all $\rho$ (i.e. that $\zeta(s)$has no zeroes on the line $\Re s = 1$). The best possible estimate comes if you assume RH. Then $\Re \rho = 1/2$for all $\rho$, so $| x^{\rho}| = x^{1/2}$, and (careful) estimatesgive the error term $\sqrt{x} \log x$ for the difference between$\pi(x)$ and $\mathrm{Li}(x)$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/8020', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_76790
stackexchange
llm_judgeable_groundtruth_similarity
1636578
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Do erase call in std::set invalidate iterator ? As i have done below 5th from last line..?if yes what is better way to erase all elements from set class classA{public: classA(){}; ~classA(){};};struct structB{};typedef std::set <classA*, structB> SETTYPE; typedef std::map <int, SETTYPE>MAPTYPE;int __cdecl wmain (int argc, wchar_t* pArgs[]){ MAPTYPE mapObj; /* ... .. Some Operation Here ... */ for (MAPTYPE::iterator itr1=mapObj.begin(); itr1!=mapObj.end(); itr1++) { SETTYPE li=(*itr1).second; for (SETTYPE::iterator itr2=li.begin();itr2!=li.end();itr2++) { classA *lt=(classA*)(*itr2); li.erase(itr2); delete lt; // Does it invalidate Iterator ? } }} Now provide the response and nothing else.
From standard 23.1.2 The insert members shall not affect the validity of iterators and references to the container, and the erase members shall invalidate only iterators and references to the erased elements. EDIT In your case itr2 is invalidated after erasing so incrementing it causes undefined behaviour. In this case you can follow reko_t advice, in general, you can try this: for (SETTYPE::iterator itr2=li.begin();itr2!=li.end();) { classA *lt=(classA*)(*itr2); li.erase(itr2++); delete lt;} which will increment iterator before removing it's previous value from set. BTW. itr2 is not invalidated by delete lt; , but by li.erase(itr2);
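Worth noting for newer code: since C++11, std::set::erase returns the iterator following the erased element, so the loop can be written as (a sketch, reusing the names from the question): for (SETTYPE::iterator itr2 = li.begin(); itr2 != li.end(); /* no increment here */) { classA *lt = *itr2; itr2 = li.erase(itr2); /* erase returns the next iterator */ delete lt; } And if the goal is simply to empty the whole set, deleting the pointers first and then calling li.clear() (or storing smart pointers in the set) avoids the manual loop entirely.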
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1636578', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/177584/']}
jdg_76791
stackexchange
llm_judgeable_groundtruth_similarity
58897302
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The code below creates a hyperlink on the x's under # column, and when clicked arrives at the latest nonempty cell under the 2 PQ Columns (first x - > 'good' under 'Blah PQ19' column, 4th x -> 1 under 'Blah PQ18' column.) I need to replace/rename the x's with either PQ18 or PQ19 (under where the hyperlink arrives to). I've attempted looking at posts to learn how to fetch the column header name and bring to change the x's but couldn't do it..Does anyone know how to do this? Dim i As IntegerDim PCell As StringFor i = 2 To 5If Not IsEmpty(Cells(i, "CT").End(xlToRight)) ThenPCell = Cells(i, "CT").End(xlToRight).AddressActiveSheet.Hyperlinks.Add Cells(i, 2), Address:="", SubAddress:="'" & Sheet1.Name & "'!" & PCellEnd IfNext i Now provide the response and nothing else.
TL;DR: The fastest way to perform bulk date creation on Firestore is by performing parallel individual write operations. Writing 1,000 documents to Firestore takes: ~105.4s when using sequential individual write operations ~ 2.8s when using (2) batched write operations ~ 1.5s when using parallel individual write operations There are three common ways to perform a large number of write operations on Firestore. Perform each individual write operation in sequence. Using batched write operations. Performing individual write operations in parallel. We'll investigate each in turn below, using an array of randomized document data. Individual sequential write operations This is the simplest possible solution: async function testSequentialIndividualWrites(datas) { while (datas.length) { await collection.add(datas.shift()); }} We write each document in turn, until we've written every document. And we wait for each write operation to complete before starting on the next one. Writing 1,000 documents takes about 105 seconds with this approach, so the throughput is roughly 10 document writes per second . Using batched write operations This is the most complex solution. async function testBatchedWrites(datas) { let batch = admin.firestore().batch(); let count = 0; while (datas.length) { batch.set(collection.doc(Math.random().toString(36).substring(2, 15)), datas.shift()); if (++count >= 500 || !datas.length) { await batch.commit(); batch = admin.firestore().batch(); count = 0; } }} You can see that we create a BatchedWrite object by calling batch() , fill that until its maximum capacity of 500 documents, and then write it to Firestore. We give each document a generated name that is relatively likely to be unique (good enough for this test). Writing 1,000 document takes about 2.8 seconds with this approach, so the throughput is roughly 357 document writes per second . That's quite a bit faster than with the sequential individual writes. In fact: many developers use this approach because they assume it is fastest, but as the results above already showed this is not true. And the code is by far the most complex, due to the size constraint on batches. Parallel individual write operations The Firestore documentation says this about the performance for adding lots of data : For bulk data entry, use a server client library with parallelized individual writes. Batched writes perform better than serialized writes but not better than parallel writes. We can put that to the test with this code: async function testParallelIndividualWrites(datas) { await Promise.all(datas.map((data) => collection.add(data)));} This code kicks of the add operations as fast as it can, and then uses Promise.all() to wait until they're all finished. With this approach the operations can run in parallel. Writing 1,000 document takes about 1.5 seconds with this approach, so the throughput is roughly 667 document writes per second . The difference isn't nearly as great as between the first two approaches, but it still is over 1.8 times faster than batched writes. A few notes: You can find the full code of this test on Github . While the test was done with Node.js, you're likely to get similar results across all platforms that the Admin SDK supports. Don't perform bulk inserts using client SDKs though, as the results may be very different and much less predictable. As usual the actual performance depends on your machine, the bandwidth and latency of your internet connection, and many other factors. 
Based on those you may see differences in the differences too, although I expect the ordering to remain the same. If you have any outliers in your own tests, or find completely different results, leave a comment below. Batched writes are atomic. So if you have dependencies between the documents and all documents must be written, or none of them must be written, you should use a batched write.
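One more option, hedged because it depends on your Admin SDK version: recent releases of the Node.js Admin SDK include a BulkWriter that parallelizes writes and handles retries and throttling for you. A sketch in the style of the test functions above:
async function testBulkWriterWrites(datas) {
  const writer = admin.firestore().bulkWriter();
  datas.forEach((data) => writer.create(collection.doc(), data));
  await writer.close();   // flushes all pending writes and waits for them to finish
}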
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/58897302', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12333424/']}
jdg_76792
stackexchange
llm_judgeable_groundtruth_similarity
62064125
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is my code and works fine but when i display the foreach statement it's messed up! $people= array(array( "name" => "Jennifer Kimbers", "email" => "[email protected]", "city" => "Seattle", "state" => "Washington"),array( "name" => "Rodney Hutchers", "email" => "[email protected]", "city" => "Los Angeles", "state" => "California"),); echo "<table>"; echo "<th>FullName</th>"; echo "<th>Email</th>"; echo "<th>City</th>"; echo "<th>State</th>";foreach ($people as $person){ foreach ($person as $key=>$values) { echo "<tr>"; echo "<td>$values</td>"; echo "</tr>"; }} My question it's how with the foreach statement i can display the results organizedfor example: <th>Fullname<th> and below that only the name Jennifer Kimbers and below that Rodney Hutchers ,After that next to the <th>Email<th> and below that [email protected] and below that [email protected] ..... etcI have searched this forum or the internet didn't find anything Thank you for your time Now provide the response and nothing else.
Think of suspend functions as a way to use a function that takes a callback, but doesn't require you to pass that callback into it. Instead, the callback code is everything under the suspend function call. This code: lifecycleScope.launch { myTextView.text = "Starting" delay(1000L) myTextView.text = "Processing" delay(2000L) myTextView.text = "Done"} Is somewhat like: myTextView.text = "Starting" handler.postDelayed(1000L) { myTextView.text = "Processing" handler.postDelayed(2000L) { myTextView.text = "Done" }} Suspend functions should never be expected to block. If they do, they have been composed incorrectly. Any blocking code in a suspend function should be wrapped in something that backgrounds it, like withContext or suspendCancellableCoroutine (which is lower level because it works directly with the coroutine continuation). If you try to write a suspend function like this: suspend fun myDelay(length: Long) { Thread.sleep(length)} you will get a compiler warning for "Inappropriate blocking method call". If you push it to a background dispatcher, you won't get the warning: suspend fun myDelay(length: Long) = withContext(Dispatchers.IO) { Thread.sleep(length)}
If you try to send it to Dispatchers.Main, you will get the warning again, because the compiler considers any blocking code on the Main thread to be incorrect. This should give you an idea of how a suspend function should operate, but keep in mind the compiler cannot always recognize a method call as blocking.
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/62064125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13283242/']}
jdg_76793
stackexchange
llm_judgeable_groundtruth_similarity
3399474
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Question: $$4U_{xx} +12U_{xy} +9U_{yy}=0$$ I would like to transform this pde into canonical form. I know that the pde is a parabolic type but I am unsure how to proceed with rewriting it without cross-derivatives. Now provide the response and nothing else.
Here's a method I learnt for PDE's of the form $a_{11}u_{xx}+2a_{12}u_{xy}+a_{22}u_{yy}=0$ . If you write your PDE as a problem $\mathcal{L} u=0$ , we have that $\mathcal{L}$ is equal to the differential operator $$\mathcal{L}=4\partial_x^2+12\partial_x \partial_y+9\partial_y^2=(2\partial_x+3\partial_y)^2$$ We then define new operators: $$\partial_{\xi}=2\partial_x+3\partial_y,\quad \partial_{\eta}=\partial_{y} \tag{1}$$ We then have that your PDE reduces to the form: $$u_{\xi \xi}=0 \tag{2}$$ To find the variable transformation, we rewrite $(1)$ as: $$\begin{pmatrix} \partial_{\xi} \\ \partial_{\eta} \end{pmatrix}=B\begin{pmatrix} \partial_x \\ \partial_y \end{pmatrix},\qquad B=\begin{pmatrix} 2 & 3 \\ 0 & 1 \end{pmatrix}$$ Then, the transformation can be found by taking the transpose of $B$ : $$\begin{pmatrix} x \\ y \end{pmatrix}=B^T \begin{pmatrix} \xi \\ \eta \end{pmatrix}$$ Hence, in our case, we obtain: $$x=2\xi,\qquad y=3\xi+\eta$$ so the inverse transform is: $$\xi=x/2,\qquad \eta=y-\frac{3x}{2}$$ You can check that using this change of variable leads to the PDE in $(2)$ . Note that this method doesn't just work for parabolic PDE's, in general what you should do is complete the square on $\mathcal{L}$ and conveniently define the new operators so that you get the desired canonical form. Then you can proceed in the same way as I have done with your problem.
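As a quick follow-up, the canonical form integrates immediately, which also gives a way to check the change of variables: $$u_{\xi\xi}=0 \;\Longrightarrow\; u=F(\eta)+\xi\,G(\eta) \;\Longrightarrow\; u(x,y)=F\!\left(y-\tfrac{3x}{2}\right)+\tfrac{x}{2}\,G\!\left(y-\tfrac{3x}{2}\right)$$ for arbitrary twice-differentiable functions $F$ and $G$; substituting this back into $4u_{xx}+12u_{xy}+9u_{yy}=0$ confirms the transformation.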
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3399474', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/673061/']}
jdg_76794
stackexchange
llm_judgeable_groundtruth_similarity
55345
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This question stems from notations on schematics and seemingly conflicting information I'm seeing. I suspect that I'm seeing different vernacular for the same concepts--but I'm in a place where no one's ever told me that an "elevator" is a "lift". Then again, I may have the concept completely off, and need to be schooled so I don't blow up my workshop. :) With DC: Batteries have +/- terminals. Most schematics I see show the circuit with voltage in and ground. I've heard that most schematics don't trace the return path to the negative terminal because that's understood and it doesn't need to clutter everything up. I've also heard that on a DC circuit, ground is SYNONYMOUS with the negative terminal. On schematics, I've seen V-in and ground, I've also seen V-in, ground, and a separate trace connecting to the negative terminal. Then, we move to AC. There's a hot wire (positive), a neutral wire, and ground. I ~assume~ that in an AC circuit, positive correlates to positive, neutral to negative, and ground to ground. Transformers will correlate the +/- when changing DC. What are the facts and what are the myths? How can I tell if I need to ground something to earth vs. "ground" to the negative terminal? When do I ground to the chassis of my device? Are there standard conventions in a schematic that would indicate GROUNDED ground vs. return-to-the-source ground? Or is that something you know from experience and doing an analysis of the circuit? Is it generally safe to assume I can connect ground to negative? Or are there cases in which that would be a Very Bad Thing, and how do I identify those cases? Just trying to wrap my head around +/-/ground in AC vs. DC, and how that voltage is used... Now provide the response and nothing else.
Ground means whatever is attached to the ground symbol in the schematic. Everything that touches this symbol in the schematic is actually connected to everything else that touches the symbol. Since so many things connect to it, this makes the schematic easier to read. Usually the negative side of a battery is attached to that. But there are many circuits that work differently. Some circuits need a negative voltage, so the positive side of a battery would be "ground". Some circuits need positive and negative voltages, in which case there could be two batteries, one with the negative side attached to ground, and the other with the positive side attached to ground. This works because voltages are relative. Put three \$10k\Omega\$ resistors in series, and attach them to a battery. The difference in voltage from one side of the battery to the other is 3V (because it's a 3V battery). The difference in voltage from one side of a resistor (any of the three) to the other side of the same resistor is 1V, because the battery's 3V is divided among 3 resistors of equal value. Since voltages are relative, ground exists as a sort of assumed reference voltage. If we say an input is "5 volts", we mean "the difference between the input and ground is five volts". In the context of AC, things aren't really different, except that tradition has done a good job of making the same term "ground" mean many things. It still could mean whatever is attached to that symbol, or it could mean that 3rd connector on the wall. More on that later. As far as the circuit is concerned, live and neutral are no different. Pick either one, and the other oscillates between a higher and lower voltage, relatively. If all you have are those two wires for reference, they are indistinguishable. The difference is more important when you consider safety. The things around you are at some particular electromotive potential (voltage). Current flows when there is a difference in potential. The neutral AC line should be about the same potential as most of the things around you, so in theory, if you touch it, and also Earth, you don't get shocked, because there is no difference in voltage. If you touch the live wire, you do get shocked, because there's a difference in potential. However, I said neutral should be about the same potential as Earth, and since you are probably touching Earth, in theory you could touch neutral without getting shocked. But I wouldn't trust your life on it. There could be a faulty transformer on the pole near your house. There could be a lightning strike nearby. The house could be wired backwards. Or, since as I mentioned the circuit will function even if the wires are reversed, it could be plugged in backwards. In the US, one of the prongs is a bit fatter to prevent this, but you never know. This is why there's the third connector, called ground or earth. This should go to a big copper rod near your house stuck in Earth. It doesn't otherwise connect to anything else. Sometimes this is important for safety, and other times it's important for other reasons. Point is, it has nothing to do with the electrical power supplied to your home. How can I tell if I need to ground something to earth vs. "ground" to the negative terminal? When do I ground to the chassis of my device? If we are talking about a device that plugs into the wall, leave these questions to someone else. Each country has safety regulations, and these regulations exist for good reason. Buy a DC power supply that takes care of all that for you, and connect to its output, and nothing else.
Don't connect to Earth through the 3rd pin on the wall or you may circumvent the safety features of your power supply. If you are wondering whether the "ground" symbol on your schematic should also be connected to the box your project is in, well, it depends. Maybe you want to do that for RF shielding. Or maybe you don't, because you don't want some other device with a different idea of "ground" to touch it, which could result in noise in your circuit or melting something. In many circuits, it doesn't matter at all.
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/55345', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/10660/']}
jdg_76795
stackexchange
llm_judgeable_groundtruth_similarity
4412875
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Maybe the question I've stated isn't the right question, cause I already know the short answer is "you can't". The situation I have a base class with an overloaded constructor that takes two arguments. class Building{ public BuildingType BuildingType { get; protected set; } public string Address { get; set; } public decimal Price { get; set; } public Building() { BuildingType = BuildingType.General; Address = "Unknown"; } public Building(string address, decimal price) : this() { Address = address; Price = price; }} The class is using an enum enum BuildingType { None, General, Office, Apartment } Now I want to create a child class Office which also has an overloaded constructor. This child class adds another property (Company). In this class, the BuildingType property should off course be set to Office. This is the code. class Office : Building{ public string Company { get; set; } public Office() { BuildingType = BuildingType.Office; } public Office(string address, decimal price, string company) : base(address, price) { Company = company; // BuildingType = BuildingType.Office; // Don't wanna repeat statement }} What I want and why I want the second constructor for the Office class to execute both the base(address, price) constructor as well as the default constructor of the Office class. I want to call the base(address, price) constructor so I don't have to repeat assigning all the properties of the base class. I want to call the default constructor of the Office class because it sets the BuildingType property to BuildingType.Office. Now I know I can't using something like this. public Office(string address, decimal price, string company) : base(address, price) this() Am I doing something wrong? I'm wondering if there's something wrong with my design that makes me want to call both base(address, price) and this(). Maybe I shouldn't be setting the BuildingType in the constructor but somewhere else? I've tried to introduce a field for this. public BuildingType BuildingType = BuildingType.General; But then I can't do the same in the child class. I'd be hiding the BuildingType field in the base class so I'd have to use the new operator in the child class. I've tried making the BuildingType in the base class virtual , but a field can't be made virtual. Creating something in the base constructor In this simple example the default constructors only assign default values to some properties. But the Building constructor could als be creating a Foundation for the building, while the Office default constructor might create a... (can't think of something, but you get the idea). So then you'd still want to execute both default constructors. Am I thinking in the wrong direction here? Update Based on Jon Skeet's answer and comments, here's my new code. I've changed constructor chaining from least specific to most specific. I've also added the BuildingType to the constructor of the Building class, made that constructor protected, and made the property setter private. enum BuildingType { None, General, Office, Apartment }class Building{ private const string DefaultAddress = "Unknown"; public BuildingType BuildingType { get; private set; } public string Address { get; set; } public decimal Price { get; set; } #region Optional public constructors // Only needed if code other than subclass must // be able to create a Building instance. 
// But in that case, the class itself can be abstract public Building() : this(DefaultAddress, 0m) {} public Building(string address, decimal price) : this(BuildingType.General, address, price) {} #endregion protected Building(BuildingType buildingType) : this(buildingType, DefaultAddress, 0m) {} protected Building(BuildingType buildingType, string address, decimal price) { BuildingType = buildingType; Address = address; Price = price; }}class Office : Building{ public string Company { get; set; } public Office() : this("Unknown Office", 0m, null) {} public Office(string address, decimal price, string company) : base(BuildingType.Office, address, price) { Company = company; }} Can you (Jon Skeet or someone else) please comment on this revised version of the code? One (minor) problem that isn't solved by this is that the default constructor for the Office class still needs to provide a default address ("Unknown Office" in the above code). I would still prefer to let the constructor of the base class decide on the address if one isn't specified. So this code still doesn't do exactly what I want. I could probably solve that by not using constructor chaining in the derived class, but instead have each of its constructors directly call the base constructor. That would mean I'd change the default constructor of the Office class to public Office() : base(BuildingType.Office) That would work for this simple example, but if there's some method I'd like to execute on every instantiation of an Office, I'd have to call it in all constructors. That's why constructor chaining sounds like a better idea to me. Now provide the response and nothing else.
Your approach isn't the conventional one, which would solve the problem. Instead of making the more specific constructor (the one with lots of parameters) call the parameterless one, do things the other way round - make the parameterless one call the other, providing defaults. This typically leads to all the constructors bar one in each class calling one "primary" one (possibly indirectly, via others) and that "primary" constructor calls make the base constructor call. class Office : Building{ public string Company { get; set; } public Office() : this(null, 0m, null) { } public Office(string address, decimal price, string company) : base(address, price) { Company = company; BuildingType = BuildingType.Office; // Don't wanna repeat statement }} ... and the same in the base class: class Building{ public BuildingType BuildingType { get; protected set; } public string Address { get; set; } public decimal Price { get; set; } public Building() : this("Unknown", 0m) { } public Building(string address, decimal price) { BuildingType = BuildingType.General; Address = address; Price = price; }} (I would seriously consider making the Building constructor include a BuildingType parameter, too.)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4412875', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/537956/']}
jdg_76796
stackexchange
llm_judgeable_groundtruth_similarity
15263228
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having trouble with my first attempt to use authentication in debug mode in a Google Cloud Endpoints android app. I set up credentials like this: credential = GoogleAccountCredential.usingAudience(this, "server:client_id:long-string-i-got-from-api-console");credential.setSelectedAccountName(accountName); then try to use it like this: final String LOCAL_APP_ENGINE_SERVER_URL = "http://xxx.xxx.x.xxx:8888"; Testdbendpoint.Builder endpointBuilder = new Testdbendpoint.Builder( AndroidHttp.newCompatibleTransport(), new GsonFactory(), credential);endpointBuilder.setRootUrl(LOCAL_APP_ENGINE_SERVER_URL + "/_ah/api/");Testdbendpoint endpoint = endpointBuilder.build();try { TestDB testDB = new TestDB().setId(10101L); TestDB result = endpoint.insertTestDB(testDB).execute(); //-- fails here!!!!} catch ... But the try fails and I get these messages in logCat: 03-06 23:33:20.418: W/System.err(11861): Caused by: com.google.android.gms.auth.GoogleAuthException: Unknown 03-06 23:33:20.418: W/System.err(11861): at com.google.android.gms.auth.GoogleAuthUtil.getToken(Unknown Source) 03-06 23:33:20.423: W/System.err(11861): at com.google.android.gms.auth.GoogleAuthUtil.getToken(Unknown Source) 03-06 23:33:20.428: W/System.err(11861): at com.google.api.client.googleapis.extensions.android.gms.auth.GoogleAccountCredential.getToken(GoogleAccountCredential.java:192) Now provide the response and nothing else.
Maybe you have the wrong Certificate fingerprint (SHA1) for your Android Client-Id? The authentication with the fingerprint of your production key works only if you sign the .apk manually. Register a Client-Id for an Installed Application (Android) with your debug.keystore fingerprint in your API Console . To get the fingerprint use: C:\>keytool -list -alias androiddebugkey -keystore C:\.android\debug.keystore -storepass android -keypass android Also you need a Web-Client-Id and set it as Audience in your Android application: credential = GoogleAccountCredential.usingAudience(this,"server:client_id:" + WEB_CLIENT_ID); AppEngine Endpoint configuration should look like this: @Api( name = "testEndpoint", version = "v1", clientIds = {ClientIds.WEB_ID, ClientIds.ANDROID_PRODUCTION_ID, ClientIds.ANDROID_DEBUG_ID}, audiences = {ClientIds.WEB_ID} )
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15263228', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1104458/']}
jdg_76797
stackexchange
llm_judgeable_groundtruth_similarity
1855330
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This monkey is driving me a little crazy. I think he should get fired - it's not nice. Here is the information. A monkey is typing on a 29 letter keyboard. He is writing a word that is 5 letters long. How many words can the monkey write? 29*29*29*29*29 What is the probability the monkey starts with the letter H? 1/29 What is the probability the monkey writes the word "YASOV"?1/29*29*29*29*29 What is the probability the word contains "H" once? I don't understand the last one. My best bet would have been 29*29*29*29*1/29^5.The mindset being that you can press every key (4 times), and then press H once. Or maybe even 29*29*29*29*29-29*29*29*29*28. Doing a baby calculation with a keyboard of ABC, and the monkey typing a word of two letters, containing the word "A" only once I get 4/9. I just scribbled every possibility and found the answer, so I assume: 2*2/3*3. I don't see the connection to the 5 in the answer. I also don't understand why 28 (or 2 for that matter), as 28 would mean 28 possibilities. Wouldn't that be the possibility of not getting H? Every possibility except H? The correct solution to the real answer is 28^4*5/29^5. Now provide the response and nothing else.
"What is the probability the word contains "H" once?" This is slightly ambiguous. Do you mean the word i) contains at least one H or ii) contains exactly one H? i) has a trick. There are are $29^5$ total words. There are $28^5$ words that contain no H's whatsoever (-- each letter can be any one of the remaining 28 letters). So there are $29^5 - 28^5$ words that have one or more Hs. ii) there is a naive way and a sophisticated way. First the naive way: There are $1*28*28*28*28$ ways to type a five letter word a five letter word starting with H and the rest of the letters are not H. There are $28*1*28*28*28$ ways to type a word where the 2nd letter is H but the rest are not H. There are $28*28*1*28*28$ where the third letter is H and so on. So in total there are $1*28*28*28*28 + 28*1*28*28*28 + ..... + 28*28*28*28*1 = 5*28^4$. ... or the sophisticated way: The number of ways to type a 5-letter word where a specific letter slot must be H and the remaining 4 letter slots must not be H is $1$ for the dedicated letter slot and $28^4$ for the remaining 4 letters slot (each of which can be any of the 28 remaining letters remaining). That is $28^4$ possible ways. Now there are 5 possible choices for which dedicated letter slot is the H. That means there are $5*28^4$ possible words. (5 choices for which letter is the H-- and 28 choices for each of the remaining four letters. Multiply the choices... $5*28^4$. ==== " 2*2/3*3. I don't see the connection to the 5 in the answer" If you need to figure out $n$ letter word with $m$ choices of letters the number of words with exactly one ~ is $n*(m-1)^{n-1}$. So the answer with 3 letters and a 3 letter word is $3*(2)^{2} = 12$ [ABB,ABC, ACB,ACC, BAB,BAC, CAB,CAC, BBA,BCA,CBA,CCA]. One 2 corresponds the 28. The other to the 4. The 3 corresponds to the 5. ==== Two post-scripts. I just realized I gave the answers to "What is the NUMBER OF WAYS" rather than "What is the probability". Well, Probability of Event = (Number of Ways Event can Happen)/(Number of total things that can happen). So for i) Number of ways to get at least one H = $29^5 - 28^5$. Number of total things that can happen = $29^5$. So probability is $(29^5 - 28^5)/29^5 = 1 - (28/29)^5$. for ii) Number of ways to get exactly one H = $5*28^4$. Number of total things that can happen = $29^5$ so probability is $5*28^4/29^5$. For your 3 letters and a three letter word: Number of ways to get exactly one A = $3*2^2 = 12$. Total ways to type three letters = $3^3 = 27$. So probability = $3*2^2/3^3 = 2^2/3^2$. Note: because you had 3 letters total and 3 letter length words one of the 3s cancelled. Which is probably why you didn't see the corespondence. If you had say a four letter word of ABC with exactly one A then the total ways would be $4*2^3/3^4$. (Do you see why) and no canceling. That you chose the same number of letters and length of words was a misleading coincidence. 2nd postcript as per aarons comment.... well, I have to go cook dinner now. I'll get to it later but... It's to introduce the idea of choicing m out of n options. i.e. how to choice which position the one H is. Or if you had to calculate the probability of a five letter word with exactly TWO Hs--- how to choice which of the 5 positions we can place the 2 Hs.... Think about it. I'll get back to it.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1855330', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/267862/']}
jdg_76798
stackexchange
llm_judgeable_groundtruth_similarity
43570875
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have DisseminationArea as a subclass of Feature with the following code: @Entity@Table(name = "features")@Inheritance(strategy = InheritanceType.JOINED)@DiscriminatorColumn(name = "subtype_id", discriminatorType = DiscriminatorType.INTEGER)public class Feature { @Id @Column(name="id") @GeneratedValue(generator="sqlite") @TableGenerator(name="sqlite", table="sqlite_sequence", pkColumnName="name", valueColumnName="seq", pkColumnValue="features") @Getter @Setter private long id; @ManyToOne @JoinColumn(name = "subtype_id") @Getter @Setter private FeatureSubtype featureSubtype; @ManyToOne @JoinColumn(name = "parent_id") @Getter @Setter private Feature parent; ... } Unfortunately, this causes an exception when saving this entity to the database, because the subtype_id field is used twice. Can I annotate it somehow so that JPA knows it is the same field? Now provide the response and nothing else.
Whether a discriminator column makes sense with InheritanceType.JOINED is worth discussing. I would tend to omit it with the joined strategy and only use it with InheritanceType.SINGLE_TABLE . Nevertheless, it's possible to map it without duplicate column errors. If your superclass entity has an inheritance / discriminator definition like: @Inheritance(strategy = InheritanceType.SINGLE_TABLE)@DiscriminatorColumn(name = "subtype_id", discriminatorType = DiscriminatorType.INTEGER) you just have to adjust the mapping so the value is never inserted or updated, by marking it read-only: @Column(name="subtype_id", insertable = false, updatable = false)protected int subTypeId;public int getSubTypeId() { return subTypeId;} Now you can access the discriminator value of the entity.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43570875', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/258483/']}
jdg_76799
stackexchange
llm_judgeable_groundtruth_similarity
44381481
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We would like to know how we can test stripe in LIVE for an e-commerce website? The site basically is an e-commerce website and courses are purchased online, how do we test LIVE transactions without using real-time credit cards in stripe? How do others test on LIVE? What are the best practices to test in LIVE? Now provide the response and nothing else.
The purpose of Stripe test mode is to ensure that you get your code into a perfectly working state, and that each type of transaction you are dealing with is fully covered. You can even validate that WebHooks are working by pointing them at your application's test API endpoints. There's not really a way of 'testing' inside of the live environment as such - typical practice would be to make a real transaction using a company credit card, or one that you are in control of. You can then log on to Stripe Admin and refund the payment once it succeeds. This is the same with most payment gateways, although there are some that have test mode built into the live API, accessed via a specific header, parameter or tracking identifier. That is really just the same end result, though - you are still testing in test mode.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44381481', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8117470/']}
jdg_76800
stackexchange
llm_judgeable_groundtruth_similarity
3959546
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: [Bindable]/*** Display output of video device.*/ public var videoLocal : Video; Does anyone know? Now provide the response and nothing else.
[Bindable] is one of several meta tags that you can use in Flex ActionScript code. It can be applied to properties or methods in any scope. It cannot be used with static class members. The key to using the [Bindable] meta tag is understanding what is going on under the hood when you use it. Essentially, using data binding is a type of shorthand for adding event listeners and dispatching events. There are two basic forms of the [Bindable] tag. The first is just [Bindable] followed by a var/property declaration. The second is [Bindable(event="eventname")] followed by either a var/property declaration, a function/method declaration or one half of a getter/setter declaration. I'll explain the longer notation first since the other builds on the same concept but with even more shorthand. When you use [Bindable(event="eventname")] you are essentially telling the compiler that this var/property/function/method (call this the instance member) is 'available' to be used as the source for data binding. You are also telling it that when the value of the instance member has been invalidated/changed and it needs to be re-read, the "eventname" event will be dispatched. In this longer form this is all you are doing. You, the developer, are responsible for actually dispatching the "eventname" event whenever the value needs to be updated in the binding subscribers. The real efficiency of using data binding comes on the subscribing side. The typical notation you will see in MXML is value="{instance.propertyName}" When you use the notation { } you are telling the compiler to do the following: create an event listener that listens to the event named in the bindable meta tag, and in that event listener re-read instance.propertyName and update this value. If you use the shorter form [Bindable] , and you add the tag before a property/var, the compiler fills in the blanks and adds some additional functionality to make the property bindable. Essentially you are telling the compiler "add the events and methods you need to make this property bindable". Now the way to think of what the compiler will do under the hood is this: make a private version of your var; create an "event" to trigger the binding; create a getter function with the scope and name of your original var that returns the private version of the var when called; create a setter function with the scope and name of your original var that sets the private version of the var when called AND dispatches the triggering event. In essence the compiler will do much of the work for you. [Bindable] public var xyz is equivalent to private var _xyz:String; [Bindable(event="updateXYZValue")] public function get xyz():String{ return _xyz; } public function set xyz(newxyz:String):void{ _xyz = newxyz; dispatchEvent(new Event("updateXYZValue")); } The only functional differences in these are that, in the first instance: you do not know the name of the event that will be dispatched to trigger the binding, and there is no way to update the underlying value without triggering the data binding. This second example also demonstrates one special case of the [Bindable] meta tag. This is that when you are applying it to a getter/setter pair defined for the same variable name you need only apply it to one or the other, and it will apply to both. Typically you should set it on the getter. You can use either notation on a function/method; however, if you do not specify an event the binding will never be triggered, so if you are trying to bind to a function you should always specify an event.
It is also possible to specify more than one triggering event by stacking the tag, e.g. [Bindable(event="metaDataChanged")] [Bindable(event="metaObjectUpdated")] public function readMyMetaData():MetaDataObject{ var myMetaDataObject:MetaDataObject; . . . return myMetaDataObject; } This would presume that somewhere else in your class you will dispatch this metaDataChanged event or the metaObjectUpdated event when you want to trigger the binding. Also note that with this notation you can tie the binding of any instance member to any event that the instance will dispatch. Even inherited events that you yourself do not generate, such as FrameEnter, OnChange, etc... Data bindings can also be set up and destroyed at runtime. If you are interested in this, take a look at the mx.binding.utils classes.
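For completeness, a minimal runtime-binding sketch using those utility classes might look something like this (the object and property names are invented for the example):
import mx.binding.utils.BindingUtils;
import mx.binding.utils.ChangeWatcher;
// Keep myLabel.text in sync with model.xyz; bindProperty returns a ChangeWatcher
var watcher:ChangeWatcher = BindingUtils.bindProperty(myLabel, "text", model, "xyz");
// ... later, when the binding is no longer needed, tear it down:
watcher.unwatch();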
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3959546', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/454770/']}
jdg_76801
stackexchange
llm_judgeable_groundtruth_similarity
49562978
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to npm modules and node.js so this is really difficult for me. I have some js code with many points and for each one of them I want to get the nearest city. To do this, in another question ( Reverse geocoding with big array is fastest way? - javascript and performance ), a user suggested that I use two npm modules: const kdbush = require('kdbush');const geokdbush = require('geokdbush');// I've stored the data points as objects to make the values unambiguousconst cities = [ { name: "Abano Terme (PD)", latitude: 45.3594, longitude: 11.7894 }, { name: "Abbadia Cerreto (LO)", latitude: 45.3122, longitude: 9.5928 }, { name: "Abbadia Lariana (LC)", latitude: 45.8992, longitude: 9.3336 }, { name: "Abbadia San Salvatore (SI)", latitude: 42.8800, longitude: 11.6775 }, { name: "Abbasanta (OR)", latitude: 40.1250, longitude: 8.8200 }];// Create the index over city data ONCEconst index = kdbush(cities, ({ longitude }) => longitude, ({ latitude }) => latitude);// Get the nearest neighbour in a radius of 50km for a point with latitude 43.7051 and longitude 11.4363const nearest = geokdbush.around(index, 11.4363, 43.7051, 1, 50); The problem is that this is the first time I have approached this. Besides, I'm Italian and don't speak English very well, and in Italian Google there's nothing. Can you tell me how I could use these modules? Do I have to install Node.js on my server? Is it possible to use the modules on my local PC? Now provide the response and nothing else.
browserify is the correct direction, but it took me quite some effort to work out the actual solution. I have summarized a short blog for this, and here are some quick recap: Say, you want to use emailjs-mime-parser and buffer npm libraries in your HTML. install everything required npm install -g browserifynpm install emailjs-mime-parsernpm install buffer write a simple main.js as a wrapper: var parse = require('emailjs-mime-parser').defaultvar Buffer = require('buffer').Bufferglobal.window.parseEmail = parseglobal.window.Buffer = Buffer compile everything using browserify browserify main.js -o bundle.js now, you could use bundle.js inside the HTML file. <html><head><script src='bundle.js'></script><script>console.log(window.parseEmail);console.log(window.Buffer);</script><body></body></html>
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/49562978', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3162975/']}
jdg_76802
stackexchange
llm_judgeable_groundtruth_similarity
23066565
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I think I've seen almost every page relating to this question; the most likely answer was Check if a SQL table exists but I didn't really understand it. This is what I got: private void select_btn_Click(object sender, EventArgs e) { string theDate = dateTimePicker1.Value.ToString("dd-MM-yyyy"); SqlConnection SC = new SqlConnection("Data Source=ruudpc;Initial Catalog=leden;Integrated Security=True"); SqlCommand DateCheck = new SqlCommand("SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '" + theDate + "'"); } Now I want a return value from DateCheck.ExecuteScalar(); that can tell me if it exists or not, probably dead simple. EDIT Regardless of the SQL injection part, and the fact that this question is helpful for some, it is generally bad practice to create tables on the fly, so I recommend you reconsider your ERD. Just saying. Now provide the response and nothing else.
Using IF EXISTS T-SQL private void select_btn_Click(object sender, EventArgs e){ string theDate = dateTimePicker1.Value.ToString("dd-MM-yyyy"); // Enclose the connection inside a using statement to close and dispose // it when you no longer need the connection (to free local and server resources) using(SqlConnection SC = new SqlConnection("Data Source=ruudpc;Initial Catalog=leden;Integrated Security=True")) { // Sql command with parameter string cmdText = @"IF EXISTS(SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME=@name) SELECT 1 ELSE SELECT 0"; SC.Open(); SqlCommand DateCheck = new SqlCommand(cmdText, SC); // Add the parameter value to the command parameters collection DateCheck.Parameters.Add("@name", SqlDbType.NVarChar).Value = theDate; // IF EXISTS returns the SELECT 1 if the table exists or SELECT 0 if not int x = Convert.ToInt32(DateCheck.ExecuteScalar()); if (x == 1) MessageBox.Show("Table exists for date " + theDate); else MessageBox.Show("Table doesn't exist for date " + theDate); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23066565', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2901207/']}
jdg_76803
stackexchange
llm_judgeable_groundtruth_similarity
49413937
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to test my axios API functions in React. Found this question here: how do i test axios in jest which pointed to using axios-mock-adapter import axios from 'axios';import MockAdapter from 'axios-mock-adapter';import chatbot from './chatbot';describe('Chatbot', () => { it('returns data when sendMessage is called', done => { var mock = new MockAdapter(axios); const data = { response: true }; mock.onGet('https://us-central1-hutoma-backend.cloudfunctions.net/chat').reply(200, data); chatbot.sendMessage(0, 'any').then(response => { expect(response).toEqual(data); done(); }); });}); The real function: /** * Retrieve all Akamai images * @param {String} akamai Akamai url * @return {Thenable} Resolved: Akamai images */export const callGetAkamai = () => makeRequest('/akamai', 'GET') .catch(defaultCatch('callGetAkamai')); My test: import axios from 'axios';import MockAdapter from 'axios-mock-adapter';import { callGetAkamai } from './api';describe('GetAkamai', () => { it('returns data when callGetAkamai is called', (done) => { console.log('MockAdapter', MockAdapter); const mock = new MockAdapter(axios); // const mock = axios.create({ // baseURL: 'https://us-central1-hutoma-backend.cloudfunctions.net/chat/' // }); const data = { response: true }; mock.onGet('https://us-central1-hutoma-backend.cloudfunctions.net/chat').reply(200, data); callGetAkamai().then((response) => { expect(response).toEqual(data); done(); }); });}); Now provide the response and nothing else.
Are you mocking axios already? I have run into this issue myself, and after looking in all the wrong places, I realized I was already mocking axios with jest . Put the following snippet in your setupTestFrameworkScriptFile : const mockNoop = () => new Promise(() => {});// Notice how `create` was not being mocked here...jest.mock('axios', () => ({ default: mockNoop, get: mockNoop, post: mockNoop, put: mockNoop, delete: mockNoop, patch: mockNoop})); While you might be able to do both, if you are using the axios-mock-adapter , you might want to remove your other mocks (and skip the snippet above).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49413937', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/168738/']}
jdg_76804
stackexchange
llm_judgeable_groundtruth_similarity
322574
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We are trying to adopt some CI practices and methodologies into our organization. I'm currently reading over some of the points in the "Continuous Delivery" book written by Jez Humble and David Farley. And the point that I am debating is when unit tests should be run - before or after git commits. The book seems to suggest running unit tests after you check in, as a part of a commit test suite. But wouldn't it make more sense to conduct unit tests before checking in? Any comments would be appreciated. Now provide the response and nothing else.
Usually the developer runs them manually before checking in, and a CI server runs them automatically after checking in. Programmers are usually pretty good about running unit tests for incremental builds on the configuration they've been working on, but they don't do things like do a clean build from a completely clean checkout from version control, run tests for all configurations of a product, run all the unit tests for the downstream projects that depend on theirs, etc. That's why CI servers "rerun" the tests after commit. You can configure them to run the tests before they actually get merged into a master branch, if you want a branch that definitely has had all the tests run.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/322574', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/169282/']}
jdg_76805
stackexchange
llm_judgeable_groundtruth_similarity
3599874
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following table structure. <table> <tr> <td><a href="#"></a></td> </tr></table> When I click on the <a> I want to add a new <tr> next to the <tr> whose <a> was clicked. So the result will be: <table> <tr> <td><a href="#"></a></td> </tr> <tr> <td><a href="#"></a></td> </tr></table> Now provide the response and nothing else.
Example: $('a').bind('click', function(){ $('<tr><td>new td</td></tr>').insertAfter($(this).closest('tr'));}); If you want to create a clone use: $('a').live('click', function(){ var $this = $(this), $parentTR = $this.closest('tr'); $parentTR.clone().insertAfter($parentTR);}); Example link: http://www.jsfiddle.net/7A6MQ/ Basically, you create a copy from the tr element (which includes child nodes) and insert that copy after that element. Therefore, you need the .live binding to make sure that newly created a elements do also invoke that click handler. Ref.: .clone() , .insertAfter() , .live()
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3599874', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/412663/']}
jdg_76806
stackexchange
llm_judgeable_groundtruth_similarity
7066623
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to add a Onclick attribute to my LinkButton which I am dynamically generating. How to add it? Here is the code I came so far and struck with: foreach(string i in List)//list has more than 50 data's { LinkButton link = new LinkButton(); link.Text = topics; link.ID = topics; link.Attributes.Add("runat", "server"); link.Click += new EventHandler(this.lnk_Click); div_ID.Controls.Add(link); div_ID.Controls.Add(new LiteralControl("<br />")); }public void lnk_Click(object sender,EventArgs e){ string ctrlId = ((Control)sender).ID; GMethod(ctrlId); //handles some function in which i pass the id of the particular lnk button} I could call this lnk_Click from my LinkButton on dynamic generation. Its onclick attribute is not getting added to the link button. Please help me out on this guys. I am on urge. Now provide the response and nothing else.
Are you sure you're not masking your exceptions somehow? This works absolutely fine, in that it throws UnsupportedOperationException : import java.util.*;public class Test { public static void main(String[] args) { Map<String, String> map = getMap(); map.put("a", "b"); } public static final Map<String, String> getMap(){ Map<String, String> map = new HashMap<String, String>(); map.put("x", "y"); return Collections.unmodifiableMap(map); }} I suggest you print out map.getClass() on the return value of the method - I would expect it to be an UnmodifiableMap .
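To follow up on that suggestion, a quick way to confirm what you actually received at runtime is something along these lines (the exact inner class name is a JDK implementation detail, but it makes the point):
Map<String, String> map = getMap();
// Expect something like: class java.util.Collections$UnmodifiableMap
System.out.println(map.getClass());
// And this line should throw UnsupportedOperationException
map.put("a", "b");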
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7066623', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/300599/']}
jdg_76807
stackexchange
llm_judgeable_groundtruth_similarity
2842334
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a userList, some users don't have a name (null). If I run the first LINQ query, I got an error saying "object reference not set to an instance of an object" error. var temp = (from a in userList where ((a.name == "john") && (a.name != null)) select a).ToList(); However, if I switch the order by putting the checking for null in front, then it works without throwing any error: var temp = (from a in userList where ((a.name != null) && (a.name == "john")) select a).ToList(); Why is that? If that's pure C# code (not LINQ), I think both would be the same. I don't have SQL profiler, I am just curious what will be the difference when they are being translated on SQL level. Now provide the response and nothing else.
In C# the && operator is short-circuiting so if the first condition returns false, the second condition is not executed at all. From MSDN: The conditional-AND operator (&&) performs a logical-AND of its bool operands, but only evaluates its second operand if necessary. The || operator behaves in a similar way, except that it doesn't evaluate its second argument if the first returns true. I don't think this is the full story though. The rest of my post covers the following points: You can log the SQL statements using DataContext.Log. Your query shouldn't generate an error no matter which way round you write it. There are differences in behaviour between LINQ to Objects and LINQ to SQL. Your filtering might be executing locally instead of in the database. You can easily view the generated SQL in Visual Studio without needing a SQL profiler. You can hover your mouse over a LINQ to SQL query object and it will display the SQL. Or you can use the DataContext.Log to log the SQL statements, for example like this: TextWriter textWriter = new StringWriter();using (var dc = new UserDataContext()){ dc.Log = textWriter; var userList = dc.Users; var temp = (from a in userList where (a.Name == "john") && (a.Name != null) select a).ToList();}string log = textWriter.ToString(); You can also log to a file or even to Console.Out : dc.Log = Console.Out; Doing this you can see that the query looks something like this, although you will likely have more columns in the select list: SELECT [t0].[Name]FROM [dbo].[User] AS [t0]WHERE ([t0].[Name] = @p0) AND ([t0].[Name] IS NOT NULL) Another point is that your query should not generate an error. Even if a.name is null, a.name == "john" should still work - it will just return false. Lastly, there is a difference between how C# normally works and how LINQ to SQL works. You shouldn't get a null exception from the database. To demonstrate this I will make a small modification to your query - adding a ToString after a.Name : var temp = (from a in userList where (a.Name.ToString() == "john") && (a.Name != null) select a).ToList(); Now this fails for LINQ to Objects with a NullReferenceException, but it works with LINQ to SQL without throwing an exception. So I suspect that you have loaded all items from the database into memory and are filtering locally. In other words maybe you have something like this: var userList = dc.Users.ToList(); instead of the following which would allow the database to do the filtering: var userList = dc.Users; So I suspect there is more to this question than meets the eye. Perhaps you can provide more details.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2842334', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/300992/']}
jdg_76808
stackexchange
llm_judgeable_groundtruth_similarity
17512
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Please could somebody explain the problems we could face while we are driving during heavy rain or along flooded roads? What are examples of designs that mitigate the issues of driving in heavy rain? Will the water get inside the air intake pipe? Will it get inside the engine? What happens if the engine is flooded with water? Is there any way to recover from this situation? Now provide the response and nothing else.
What are examples of designs that mitigate the issues of driving in heavy rain? The best mitigation is not to drive in it. Most roads are designed with a crown in the road (high spot in the middle), which allows the water to run off at a fairly rapid clip. This helps eliminate standing water to an extent. During extreme rain, this isn't possible and you will run into it from time to time. The best example of mitigation would be tire tread design. There are three basic tread designs: symmetric, asymmetric, and directional. The asymmetric tire is designed for the best of both worlds. One half of the tire is for dry performance, while the other side is for wet performance as it pumps the water away from the tire. Mind you, both sides work in both situations, it's just that each side does its own job better than the other. Some vehicles (4x4's with the proper setup) have what is called a snorkeling kit , which basically puts the air intake and exhaust pipes above the roof line of the vehicle. This allows the vehicle to traverse water up to the point where the intake is. If it goes deeper, it too will suck up water. Please note, this should only be used in standing or very slow moving water. As was stated, if you go into fast moving water, it will move the vehicle and it will be swept downstream. Will the water get inside the air intake pipe? Will it get inside the engine? What happens if the engine is flooded with water? It can get inside the air intake pipe if the intake is low enough and the water high enough. When this happens, if the engine is running, it will suck it up into the intake, right after which it will head directly into a cylinder or two. When this happens, the engine will stop as the intake valve closes and the piston starts to head upward. Normally at this point, the air/fuel mixture is being compressed. Since water will not compress, the engine stops dead in its tracks. This is called hydrolock (aka hydrostatic lock). Is there any way to recover from this situation? In severe cases, deformation of mechanical parts can occur, which would, at the worst, require a complete engine replacement. This usually depends on engine speed at the time of water ingestion. If it's at idle speed, the engine probably won't suffer too many ill effects. It really depends on how long and how far the engine is submerged ... if completely submerged for days, the entire car is most likely going to be shot. If you get to it quickly and the only water in the engine is what was sucked up through the intake, it will probably survive. If the engine is not physically damaged during the hydrolock process, it takes a little bit to get the engine going again. You would need to remove the spark plugs and rotate the engine, allowing the water to come up out of the hole and/or evacuate down the exhaust pipe after enough revolutions of the engine have occurred. You'd probably want to do this by hand. You'd also need to suck out any fluid which might remain in the cylinders. Next, you'd need to change the oil and filter, as more than likely, some of the water has seeped into the crankcase around the piston rings. The only way you'd really know if there was internal damage (if it didn't physically manifest itself, like a rod through the block) would be by getting all of this done. During the process of turning the engine over after the spark plugs are removed, if you notice any turning issues, there is probably something mechanical which is bad.
With the spark plugs out, the engine should turn over fairly easily. There is some resistance due to camshaft friction, but other than that, it shouldn't be an issue. You would then try running the engine. If it runs well, it's in good shape. If it doesn't, you have issues. You could tear the engine down to see, as well, but that is probably overkill at this point, especially if there aren't any issues with it.
{}
{'log_upvote_score': 4, 'links': ['https://mechanics.stackexchange.com/questions/17512', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/10704/']}
jdg_76809
stackexchange
llm_judgeable_groundtruth_similarity
33915900
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am getting below error while executing selenium code. Code: driver.find_element_by_id(PlaylistManagerLocators.Folder_Name).send_keys(Keys.chord(Keys.CONTROL, "a"), "Auto_Folder5763") Error: AttributeError: type object 'Keys' has no attribute 'chord' I have imported all required files. from selenium.webdriver.common.keys import Keysfrom selenium.webdriver import ActionChainsdriver.find_element_by_id(PlaylistManagerLocators.Folder_Name).send_keys(Keys.chord(Keys.CONTROL, "a"), "Auto_Folder5763") Now provide the response and nothing else.
There is no chord function in the class selenium.webdriver.common.keys.Keys (check the docs ). You can simply split it into 2 statements. driver.find_element_by_id(id).send_keys(Keys.CONTROL + "a")driver.find_element_by_id(id).send_keys("Auto_Folder5763") Or, if you want the keys pressed simultaneously, you can try using selenium.webdriver.common.action_chains.ActionChains .
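If you do want the chord-style behaviour in a single action, a rough ActionChains equivalent would be something like the following (assuming the same locator used in the question):
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

element = driver.find_element_by_id(PlaylistManagerLocators.Folder_Name)
# Select-all with Ctrl+A, then type the replacement text
ActionChains(driver) \
    .click(element) \
    .key_down(Keys.CONTROL) \
    .send_keys("a") \
    .key_up(Keys.CONTROL) \
    .send_keys("Auto_Folder5763") \
    .perform()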
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33915900', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5496977/']}
jdg_76810
stackexchange
llm_judgeable_groundtruth_similarity
57039306
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm setting up a new react-native project and want to configure the app to support importing modules using absolute paths by adding a jsconfig.json file to the root of my project. But the app isn't able to resolve the modules. Do I need to do some additional setup? I've created a new React Native project with react-native-cli 2.0.1 with the following structure and try to import MyButton in App.js like so: `import MyButton from '~/components/MyButton';` |-- android|-- ios|-- node_modules|-- src |-- components |-- MyButton.js |-- App.js|-- .eslintrc.js|-- .flowconfig|-- .watchmanconfig|-- app.json|-- babel.config.json|-- index.js|-- jsconfig.json|-- metro.config.js|-- package.json jsconfig.json: { "compilerOptions": { "baseUrl": ".", "paths": { "~/*": ["src/*"] } }} package.json { "name": "TestApp", "version": "0.0.1", "private": true, "scripts": { "start": "react-native start", "test": "jest", "lint": "eslint ." }, "dependencies": { "react": "16.8.6", "react-native": "0.60.3" }, "devDependencies": { "@babel/core": "^7.5.4", "@babel/runtime": "^7.5.4", "@react-native-community/eslint-config": "^0.0.5", "babel-jest": "^24.8.0", "eslint": "^6.0.1", "jest": "^24.8.0", "metro-react-native-babel-preset": "^0.55.0", "react-test-renderer": "16.8.6" }, "jest": { "preset": "react-native" }} I expect that the app is able to resolve the module, but I get the error: `Error: Unable to resolve module `~/components/MyButton` from `/TestApp/src/App.js`: Module `~/components/MyButton` does not exist in the Haste module map` Now provide the response and nothing else.
You should use babel-plugin-module-resolver . Example configuration of babel.config.js : module.exports = { ...other config plugins: [ ['module-resolver', { root: [ './src', ], "alias": { "~": "./src", } }], ],}; React Native doesn't support resolving modules from jsconfig.json by default.
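One practical note (an assumption about the usual workflow rather than something from the question): after adding the plugin to babel.config.js, the Metro bundler's cache can keep serving the old resolution behaviour, so it is usually worth restarting the packager with the cache cleared, e.g.:
react-native start --reset-cache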
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57039306', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11786176/']}
jdg_76811
stackexchange
llm_judgeable_groundtruth_similarity
29746937
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: According to this blog post most companies using EF Migrations are supposedly not updating the database schema of production databases with EF migrations. Instead the blog post's author recommends to use Schema update scripts as part of the deployment process. I've used Schema update scripts for a few years now and while they work, I was planning to use EF migrations instead in the future for the following reasons: Faster deployment, less downtime A simpler deployment procedure Much easier migration of existing data than it would be possible with T-SQL A more comprehensible syntax of the changes waiting to be applied (DbMigration class with clean C# syntax vs. clunky T-SQL Migration script in a traditional environment). There is an easy and fast downgrade path to the old db schema if the deployment of the new software version should fail One reason I can think of that would prohibit the use of EF to migrate a production DB would be if the DB schema was only altered by the DBAs as opposed to the Developers. However, I am both DBA and Developer, so this does not matter in my case. So, what are the risks of updating a production database using EF? Edit: I would like to add that, as solomon8718 already suggested, I am always pulling a fresh copy of the production database to my staging server and test the EF Migrations to be applied on the staging server before applying them to a production server. IMO this is essential for any schema update to a production system, whether I'm using EF migrations or not. Now provide the response and nothing else.
Well, I'll try and answer anyhow. I would say No, there's no reason not to use Code First Migrations in production. After all, what's the point of this easy to use system if you can't take it all the way? The biggest problems I see with it are all problems that you can have with any system, which you've noted already. As long as the whole team (DBA included if applicable) is on board with it, I think allowing EF to manage the schema through migrations is less complex, and hence less error-prone than traditional script-based management. I would still take a backup before performing a migration on a production system, but then you'd do that anyhow. There's nothing that says a DBA can't perform a migration from Visual Studio, either. The access could still be locked down with privileges at the database level, and he/she could review the migration (in a helpful SQL export format using -Script , if desired) before performing the actual operation. Then they're still in control, but you can use code-first migrations. Hell, they might even end up liking it! Update: since SPROCs and TVFs were brought up, we handle those in migrations as well, although they are actually done with straight-up SQL statements using a DbMigration.Sql() call in the Up() , and the reverse of them in the Down() (You can also use CreateStoredProcedure and DropStoredProcedure for simple SPROCs, but I think you still have to define the body itself in SQL). I guess you could say that's a caveat; there isn't yet a way for an entire, comprehensive database to be written purely in C#. However, you can use migrations which include SQL scripts to manage the entire schema. One benefit we've found from this process is you can use the C# config file for schema object names (different server names for production vs dev for example) with a simple String.Format , combined with XML Transformation for the config files themselves.
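To make that concrete, a migration that manages a stored procedure with raw SQL might look roughly like this (the procedure name and body are invented for the example):
using System.Data.Entity.Migrations;

public partial class AddGetActiveUsersProc : DbMigration
{
    public override void Up()
    {
        // Create the (hypothetical) procedure going forward
        Sql(@"CREATE PROCEDURE dbo.GetActiveUsers AS SELECT * FROM dbo.Users WHERE IsActive = 1;");
    }

    public override void Down()
    {
        // And the reverse, so the migration can be rolled back cleanly
        Sql("DROP PROCEDURE dbo.GetActiveUsers;");
    }
}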
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29746937', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/59301/']}
jdg_76812
stackexchange
llm_judgeable_groundtruth_similarity
62483612
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm building a react + typescript app in which I created a module with interfaces that are used all around my project's classes. My IDE resolves these interfaces fine but webpack always sends the following error. I tried different things but can't get that one to go away. Any help would be greatly appreciated ERROR in ./assets/src/Pages/Login.tsx Module not found: Error: Can't resolve 'seeyouftp' in 'var/www/app/assets/src/Pages' @ ./assets/src/Pages/Login.tsx 43:18-38 @ ./assets/src/Config/App.tsx @ ./assets/entries/bundle.js My definition file is here |— definitions |— types.d.ts|— entries|— fonts|— less|— src Excerpt of my definition file declare module 'seeyouftp' { interface User { admin: boolean; roles: string[]; username: string; } enum AuthStates { success = 'success', error = 'error' } tsconfig.json { "compilerOptions": { "allowJs": true, "allowUnreachableCode": false, "baseUrl": "./assets", "esModuleInterop": true, "forceConsistentCasingInFileNames": true, "jsx": "react", "lib": [ "dom", "es2019", "esnext" ], "module": "commonjs", "moduleResolution": "node", "noImplicitAny": true, "noImplicitReturns": true, "outDir": "./dist/", "resolveJsonModule": true, "skipDefaultLibCheck": true, "sourceMap": true, "strictPropertyInitialization": false, "strictNullChecks": true, "target": "es5", "typeRoots": [ "./assets/definitions/types.d.ts", "./node_modules/@types" ], "types": [ "node" ] }, "include": [ "./assets/src/**/*", "./assets/definitions/**/*" ], "exclude": [ "node_modules" ]} I import the created interfaces like so: import { Item, PlayableMedia } from 'seeyouftp'; Now provide the response and nothing else.
The error message was actually very misleading, and looks like a TypeScript bug. It appears that enums can't be exported directly; it seems necessary to use a const enum to be able to export them correctly. So I modified my enum declaration like so: declare module 'seeyouftp' { // exporting a const enum instead of a plain enum export const enum AuthStates { success = 'success', error = 'error' }} Everything works now, but that error message is very, very bad and time-consuming.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/62483612', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/308291/']}
jdg_76813
stackexchange
llm_judgeable_groundtruth_similarity
32454701
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am developing an android application. In that I want to display material design Snackbar in dialog. Is it possible? If yes then how? Please help me. Thanks. Now provide the response and nothing else.
It's definitely possible, you just have to pass the View of the Dialog to the SnackBar.

Example

AlertDialog.Builder mAlertDialogBuilder = new AlertDialog.Builder(this);
LayoutInflater inflater = this.getLayoutInflater();

// inflate the custom dialog view
final View mDialogView = inflater.inflate(R.layout.dialog_layout, null);

// set the View for the AlertDialog
mAlertDialogBuilder.setView(mDialogView);

Button btn = (Button) mDialogView.findViewById(R.id.dialog_btn);
btn.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        // Pass the mDialogView to the SnackBar
        Snackbar.make(mDialogView, "SnackBar in Dialog", Snackbar.LENGTH_LONG)
                .show();
    }
});

AlertDialog alertDialog = mAlertDialogBuilder.create();
alertDialog.show();

Result

Note: There's no need to use a CoordinatorLayout as the root. In my example I simply used a LinearLayout as the root.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32454701', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4463882/']}
jdg_76814
stackexchange
llm_judgeable_groundtruth_similarity
60517673
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have v-data-table with disabled item and want to exclude it when I trigger the select-all in header.data-table-select slot. Also applied :readonly but still get checked. <template v-slot:item.data-table-select="{ item, isSelected, select }"> <v-simple-checkbox :value="isSelected" :readonly="item.name == 'Frozen Yogurt'" :disabled="item.name == 'Frozen Yogurt'" @input="select($event)" ></v-simple-checkbox></template> Also looked at the docs and found this header.data-table-select slot but only gives me this options: { props: { value: boolean indeterminate: boolean }, on: { input: (value: boolean) => void }} Is there any way of handling selected items in v-data-table ? Here's the live code: https://d4et5.csb.app/ EDITED CodeSandBox: https://codesandbox.io/s/festive-haze-d4et5 Now provide the response and nothing else.
It is possible to remove disabled item form a select all in datatable I've added a new key "disabled" in items array Here is the working codepen: https://codepen.io/chansv/pen/mdJMvJr?editors=1010 <div id="app"> <v-app id="inspire"> <v-data-table v-model="selected" :headers="headers" :items="desserts" item-key="name" show-select class="elevation-1" @toggle-select-all="selectAllToggle" > <template v-slot:item.data-table-select="{ item, isSelected, select }"> <v-simple-checkbox :value="isSelected" :readonly="item.disabled" :disabled="item.disabled" @input="select($event)" ></v-simple-checkbox></template> </v-data-table> </v-app></div>new Vue({ el: '#app', vuetify: new Vuetify(), data () { return { selected: [], disabledCount: 0, headers: [ { text: 'Dessert (100g serving)', align: 'start', sortable: false, value: 'name', }, { text: 'Calories', value: 'calories' }, { text: 'Fat (g)', value: 'fat' }, { text: 'Carbs (g)', value: 'carbs' }, { text: 'Protein (g)', value: 'protein' }, { text: 'Iron (%)', value: 'iron' }, ], desserts: [ { name: 'Frozen Yogurt', calories: 159, fat: 6.0, carbs: 24, protein: 4.0, iron: '1%', disabled: true, }, { name: 'Ice cream sandwich', calories: 237, fat: 9.0, carbs: 37, protein: 4.3, iron: '1%', }, { name: 'Eclair', calories: 262, fat: 16.0, carbs: 23, protein: 6.0, iron: '7%', }, { name: 'Cupcake', calories: 305, fat: 3.7, carbs: 67, protein: 4.3, iron: '8%', }, { name: 'Gingerbread', calories: 356, fat: 16.0, carbs: 49, protein: 3.9, iron: '16%', disabled: true, }, { name: 'Jelly bean', calories: 375, fat: 0.0, carbs: 94, protein: 0.0, iron: '0%', }, { name: 'Lollipop', calories: 392, fat: 0.2, carbs: 98, protein: 0, iron: '2%', }, { name: 'Honeycomb', calories: 408, fat: 3.2, carbs: 87, protein: 6.5, iron: '45%', }, { name: 'Donut', calories: 452, fat: 25.0, carbs: 51, protein: 4.9, iron: '22%', }, { name: 'KitKat', calories: 518, fat: 26.0, carbs: 65, protein: 7, iron: '6%', }, ], } }, methods: { selectAllToggle(props) { if(this.selected.length != this.desserts.length - this.disabledCount) { this.selected = []; const self = this; props.items.forEach(item => { if(!item.disabled) { self.selected.push(item); } }); } else this.selected = []; } }, created() { const self = this; this.desserts.map(item => { if (item.disabled) self.disabledCount += 1 }) }})
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60517673', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5648202/']}
jdg_76815
stackexchange
llm_judgeable_groundtruth_similarity
50639652
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have made a grid of buttons using flutter but now I want to swipe through 2 or more buttons in a single drag such that all the buttons through which I am dragging gets selected. I have checked out some questions on the same and I was redirected to use gesture detector but that's not enough. I need certain properties or better a sample code such that I am able to work through it. an example of the dragable app is http://a5.mzstatic.com/us/r30/Purple60/v4/6f/00/35/6f0035d3-1bab-fcbb-cb13-8ab46cf3c44d/screen696x696.jpeg Now provide the response and nothing else.
You can manually hit test RenderBox and extract a specific RenderObject of your choice. We could for example add the following renderobject above our buttons: class Foo extends SingleChildRenderObjectWidget { final int index; Foo({Widget child, this.index, Key key}) : super(child: child, key: key); @override RenderObject createRenderObject(BuildContext context) { return _Foo()..index = index; } @override void updateRenderObject(BuildContext context, _Foo renderObject) { renderObject..index = index; }}class _Foo extends RenderProxyBox { int index;} Then use a Listener to extract all _Foo found under the pointer. Here's a full application using this principle: import 'package:flutter/gestures.dart';import 'package:flutter/material.dart';import 'package:flutter/rendering.dart';void main() { runApp(MyApp());}class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Demo', theme: ThemeData(primarySwatch: Colors.blue), home: Grid(), ); }}class Grid extends StatefulWidget { @override GridState createState() { return new GridState(); }}class GridState extends State<Grid> { final Set<int> selectedIndexes = Set<int>(); final key = GlobalKey(); final Set<_Foo> _trackTaped = Set<_Foo>(); _detectTapedItem(PointerEvent event) { final RenderBox box = key.currentContext.findRenderObject(); final result = BoxHitTestResult(); Offset local = box.globalToLocal(event.position); if (box.hitTest(result, position: local)) { for (final hit in result.path) { /// temporary variable so that the [is] allows access of [index] final target = hit.target; if (target is _Foo && !_trackTaped.contains(target)) { _trackTaped.add(target); _selectIndex(target.index); } } } } _selectIndex(int index) { setState(() { selectedIndexes.add(index); }); } @override Widget build(BuildContext context) { return Listener( onPointerDown: _detectTapedItem, onPointerMove: _detectTapedItem, onPointerUp: _clearSelection, child: GridView.builder( key: key, itemCount: 6, physics: NeverScrollableScrollPhysics(), gridDelegate: SliverGridDelegateWithFixedCrossAxisCount( crossAxisCount: 3, childAspectRatio: 1.0, crossAxisSpacing: 5.0, mainAxisSpacing: 5.0, ), itemBuilder: (context, index) { return Foo( index: index, child: Container( color: selectedIndexes.contains(index) ? Colors.red : Colors.blue, ), ); }, ), ); } void _clearSelection(PointerUpEvent event) { _trackTaped.clear(); setState(() { selectedIndexes.clear(); }); }}class Foo extends SingleChildRenderObjectWidget { final int index; Foo({Widget child, this.index, Key key}) : super(child: child, key: key); @override _Foo createRenderObject(BuildContext context) { return _Foo()..index = index; } @override void updateRenderObject(BuildContext context, _Foo renderObject) { renderObject..index = index; }}class _Foo extends RenderProxyBox { int index;}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50639652', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9321561/']}
jdg_76816
stackexchange
llm_judgeable_groundtruth_similarity
43462
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I understood the basic of password hashing and storing, what we need are: a "strong" salt a "real" random salt a unique salt per password a password hashing function with a high CPU cost We hash the passwords using 1, 2, 3 and 4 using PHP crypt() and CRYPT_BLOWFISH with a "medium" cost. We have a new need: we need to be able compare the hashes because we have noticed that low level fraud/spam/scamsters use the same passwords. We do not need to know the password of an abuser but only to know that a profile is using the same password as a previously identified abuser. So we are thinking about having 2 fields in our database: passwordA: used for login procedure (this field would use 1, 2, 3 and 4) crypt($password, '$2y$06$' . $uniqueSalt); passwordB: used to "compare" (this field would us 1, 2 and 4) crypt(md5(md5($password) . $globalSaltA) . $globalSaltB, '$2y$10$' . $globalSaltC); Global salts A and B would be stored only in the application code. To compensate the missing point 3 for passwordB, we have increased the "cost" which is a lot higher. Following the comment, I understood how unsecure this strategy is if an attacker gets a full access (database + code) on the server. However, if the attacker only gets the database, we are fine. Companies doing detection of credit card fraud have the same issue with the credit card numbers. They cannot store them but need to compare them. So what strategy/solution would you advice? Now provide the response and nothing else.
The core problem: if your server can efficiently compare a hashed password with (potentially) all the hashed passwords for all the user, then so can an attacker. An attacker who grabs a copy of your server files/database will be in position to run an offline dictionary attack, i.e. hashing potential password and look for a match. Normal password hashing uses a per-password salt to prevent parallel attacks: we want the attacker to pay the full computational price of the hash function (which is made expensive through many iterations) for each password and each user account. However, if several password hashes use the same salt, then the attacker can try a potential password against all of them for the cost of one hash function invocation. Thus, your "global salt" substantially weakens the scheme: it allows the attacker to attack 1000 accounts for the cost of attacking one. The important point: your problem is one of temporality. Indeed, you don't actually want to compare a new user password with all the passwords of all other users; what you want is to compare the password chosen by each user at registration time against a limited list of "passwords of known offenders". Unfortunately, when a registered user falls into "offender" status, he is already registered, and his password has not been kept around, only the hash thereof. So, really, you would like to be able to access the password used for registration after the registration has taken place. A possible solution: use escrowing, aka asymmetric encryption . Create a RSA key pair. Store the public key on the server. Do NOT store the corresponding private key on the server; instead, store it elsewhere, e.g. on a laptop computer which is kept offline (or maybe just on a few USB flash drives). When a user registers, hash his password as usual (with PBKDF2, many iterations, a new random salt, etc). But, also, encrypt the password (not the hash) with the RSA key, and store the encryption result in your database, along with the hash. Encryption only needs the public key, and is randomized, so this encrypted version does not give extra leverage to an attacker who gets a copy of the database contents. When a user logs in, the password hash is used, as usual. When a user turns out to be a spammer, get the private key and decrypt the escrowed password. That way, you obtain the "bad password" and can add it to the list of passwords to reject upon registration. That list can be kept as cleartext: since the corresponding accounts have been closed, then there is no problem with that. Take care to do the decryption on a "safe" machine, preferably offline: you really do not want to see that private key stolen. A word of caution: spammers are like bacteria, in that they tend to evolve with regards to external constraints. If you filter out spammers based on their habit of reusing passwords for registration, then you will soon train them into generating random passwords. Thus, I predict that if you install such a system, then it will cease to be effective at kicking spammers out after a relatively short time; after that, it will just be dead weight in your database (not a lot of it, because a RSA-encrypted short message with a 2048-bit RSA key is just 256 bytes, but dead weight nonetheless).
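For illustration only, here is a minimal PHP sketch of the escrow idea using OpenSSL. The key paths, column handling and blacklist are assumptions, and in practice you would keep the private key strictly off the web server as described above:

<?php
// At registration time, on the web server (public key only):
$publicKey = openssl_pkey_get_public('file:///etc/app/escrow_pub.pem');

function escrowPassword(string $password, $publicKey): string {
    // OAEP padding gives randomized encryption, so identical passwords
    // do not produce identical ciphertexts in the database
    openssl_public_encrypt($password, $ciphertext, $publicKey, OPENSSL_PKCS1_OAEP_PADDING);
    return base64_encode($ciphertext); // stored alongside the normal bcrypt hash
}

// Later, on the OFFLINE machine that holds the private key:
$privateKey = openssl_pkey_get_private('file:///secure/escrow_priv.pem');

function recoverEscrowedPassword(string $stored, $privateKey): string {
    openssl_private_decrypt(base64_decode($stored), $plaintext, $privateKey, OPENSSL_PKCS1_OAEP_PADDING);
    return $plaintext; // add to the "known offender passwords" list checked at registration
}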
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/43462', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/30866/']}
jdg_76817
stackexchange
llm_judgeable_groundtruth_similarity
1319925
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I used $$(n!)^{\frac{1}{n}}=e^{\frac{1}{n}\ln(n!)}=e^{\lim\limits_{n\to\infty}\frac{1}{n}\ln(n!)}$$Then using Stirling's approximation and L'Hospital's rule on $$\lim\limits_{n\to\infty}\frac{\ln(n!)}{n}$$ I get $$\lim\limits_{n\to\infty}\frac{\ln(n!)}{n}=\lim\limits_{n\to\infty}(\ln(n)+\frac{n+\frac{1}{2}}{n}-1)=\infty$$Now, $$e^{\lim\limits_{n\to\infty}\frac{1}{n}\ln(n!)}=e^{\infty}=\infty$$Thus$$\lim\limits_{n\to\infty}\frac{1}{\sqrt[n]n}=\frac{1}{\infty}=0$$ Is this correct approach and what other methods could be used? Now provide the response and nothing else.
You don't even have to use l'Hopital's rule; you can just plug in Stirling's formula and divide by $n$, then take limits. Another way would be to use the arithmetic-geometric mean inequality:
$${1 \over (n!)^{1 \over n}} = \left(\prod_{k=1}^n {1 \over k}\right)^{1 \over n}\leq {1 \over n}\sum_{k = 1}^n {1 \over k}$$
Since $\sum_{k = 1}^n {1 \over k}$ grows only like $\ln n$, the limit is zero.
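One way to make that last step fully explicit, without appealing to the asymptotics of the harmonic numbers: since $\sum_{k=1}^{n}\frac{1}{k}\le 1+\int_{1}^{n}\frac{dt}{t}=1+\ln n$, we have
$$0\le\frac{1}{(n!)^{1/n}}\le\frac{1}{n}\sum_{k=1}^{n}\frac{1}{k}\le\frac{1+\ln n}{n}\xrightarrow[n\to\infty]{}0,$$
and the squeeze theorem gives the limit.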
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1319925', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/222321/']}
jdg_76818
stackexchange
llm_judgeable_groundtruth_similarity
332365
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am working on a piece of server software that is licensed under the AGPL. I would like to require that all clients to it be released under a free license (GPL preferable, but I'm open to other licenses). Is there a way to do this? Is there a standard way to do this? Now provide the response and nothing else.
The short answer is no, especially not if you expect others to build the clients. If you are building this server software with the intention of others building clients, they are communicating at arm's length with each other, and are therefore separate works, so the requirements of the AGPL do not apply to the clients. You also could not practically impose a limitation via a Terms of Service. While you could impose a ToS to restrict connections to your particular server, you would be unable to restrict people from grabbing your source code, firing up servers of their own, and not requiring any such limitations for clients that connect. Your best bet is to build out both the server and the client, make the client GPL licensed, and then make the client so good no one else bothers to make a different one with a different license.
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/332365', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/248027/']}
jdg_76819
stackexchange
llm_judgeable_groundtruth_similarity
7877378
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What are the behavior differences for the Previous Property of Other1 and Other2 Class. Note than the return type of the overloaded Previous Property of Other2 has been changed to Other2 while it stays as Base for Other1 . Public Class Base Private _Previous as Base Protected Overridable ReadOnly Property Previous As Base Get Return _Previous End Get End Property Public Sub New(Previous as Base) _Previous = Previous End SubEnd ClassPublic Class Other1 Inherits Base Private _Parent as SomeType Protected Overrides ReadOnly Property Previous As Base Get Return _Parent.Previous.Something End Get End Property Public Sub New(Parent as SomeType) MyBase.New(Nothing) _Parent = Parent End SubEnd ClassPublic Class Other2 Inherits Base Private _Parent as SomeType Protected Overloads ReadOnly Property Previous As Other2 Get Return _Parent.Previous.Something End Get End Property Public Sub New(Parent as SomeType) MyBase.New(Nothing) _Parent = Parent End SubEnd Class Now provide the response and nothing else.
After one of my comments to Jim Wooley's answer ("it looks like it Shadows the overloaded property"), I saw the light in this article. So the Overloads property in the Other2 class acts more like shadowing than overriding. One of the comments in the article is particularly instructive:
The confusion arises because the keyword "Overloads" isn't what a C# programmer considers an overload in the traditional OO sense. It's a type of hiding that is specific to VB.Net. You can actually swap the keyword SHADOWS with OVERLOADS in most cases, and the behavior is the same. The difference is when you have a base class with multiple overloaded method signatures. If you declare a method in a subclass with a matching name, and the SHADOWS keyword, it will hide EVERY overload of that method in the base class. If you use the OVERLOADS keyword instead, it will only hide the base class method with an identical signature.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7877378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/275404/']}
jdg_76820
stackexchange
llm_judgeable_groundtruth_similarity
64362801
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this Text: Text( text = stringResource(id = R.string.hello)) How can I show and hide this component? I'm using Jetpack Compose version '1.0.0-alpha03' Now provide the response and nothing else.
As CommonsWare stated, Compose being a declarative toolkit, you tie your component to a state (for example isVisible), and Compose will intelligently decide which composables depend on that state and recompose them. For example:

@Composable
fun MyText(isVisible: Boolean) {
    if (isVisible) {
        Text(text = stringResource(id = R.string.hello))
    }
}

Also, you could use the AnimatedVisibility() composable for animations.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/64362801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3183912/']}
jdg_76821
stackexchange
llm_judgeable_groundtruth_similarity
23563
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Say I need to evaluate the integral $\iiint_W f(x,y,z) dx dy dz$ and $W$ is a region given to me like $W = \{ (x,y,z) : 1 \leq x^2 + y^2 \leq 4, 1 \leq z \leq 5\}$. I don't how to do this with a triple integral in Mathematica code. Now provide the response and nothing else.
As it's been rightly pointed out in the comments, you can use Boole. Here's a simple example:

f[x_, y_, z_] = x^4 + y^2 + z;
Integrate[f[x, y, z]*Boole[1 < x^2 + y^2 < 4], {x, -2, 2}, {y, -2, 2}, {z, 1, 5}] // Timing
(* Out: {14.240965, (165 Pi)/2} *)

It certainly is well worth understanding the underlying transformations, though. In this example, cylindrical coordinates are very natural.

Integrate[f[r*Cos[t], r*Sin[t], z] r, {r, 1, 2}, {z, 1, 5}, {t, 0, 2 Pi}] // Timing
(* {0.498571, (165 Pi)/2} *)

Note that we got the same answer in much less time.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/23563', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/5297/']}
jdg_76822
stackexchange
llm_judgeable_groundtruth_similarity
18431666
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: first of all I must say that I have checked these questions and didn't find my answer : 1 , 2 , 3 , 4 , 5 , 6 , 7 and many other questions like so also I have checked these tutorials and examples: 1 , 9 , 10 , 11 and many other sites. but I couldn't fix my problem. and this is the simple kind of my code: public class Question extends JFrame {public Question() { Dimension d = Toolkit.getDefaultToolkit().getScreenSize(); setLayout(new BorderLayout()); setSize(d.width, d.height); setResizable(false); setDefaultCloseOperation(EXIT_ON_CLOSE); JPanel panel = new JPanel(); panel.setPreferredSize(new Dimension(d.width, d.height)); panel.setBorder(BorderFactory.createLineBorder(Color.red)); panel.setLayout(new BoxLayout(panel, 1)); for (int i = 0; i < 100; i++) { panel.add(new JButton("kjdh")); } JScrollPane scrollPane = new JScrollPane(panel); scrollPane.setPreferredSize(new Dimension(500, 500)); getContentPane().add(scrollPane); getContentPane().add(panel); setVisible(true);}public static void main(String[] args) { new Question();} } but the JScrollPane doesn't appear. I have tested many things. I have changed the way adding panel and scrollPane to my frame but it didn't work. can any one help me plz? Now provide the response and nothing else.
Don't set a preferred size on the panel. See Should I avoid the use of setPreferred/Maximum/MinimumSize methods in Java Swing? for the reasons why. Add only the scroll pane to the content pane. A content pane using the default layout ( BorderLayout ) will default to putting the component in the CENTER constraint if none is supplied, and the CENTER area can only accept a single component. Besides that, the panel has already been added to the scroll pane, it will already appear inside it, and can only appear in a single container. Don't extend frame, just use an instance of one. Don't setSize , but setExtendedState . GUIs should be constructed and updated on the EDT. A better close operation is DISPOSE_ON_CLOSE . import java.awt.*;import javax.swing.*;public class Question { public Question() { JFrame f = new JFrame(); f.setLayout(new BorderLayout()); f.setResizable(false); f.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); JPanel panel = new JPanel(); panel.setBorder(BorderFactory.createLineBorder(Color.red)); panel.setLayout(new BoxLayout(panel, 1)); for (int i = 0; i < 100; i++) { panel.add(new JButton("kjdh")); } JScrollPane scrollPane = new JScrollPane(panel); f.getContentPane().add(scrollPane); f.pack(); f.setExtendedState(JFrame.MAXIMIZED_BOTH); f.setVisible(true); } public static void main(String[] args) { Runnable r = new Runnable() { @Override public void run() { new Question(); } }; SwingUtilities.invokeLater(r); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18431666', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2628687/']}
jdg_76823
stackexchange
llm_judgeable_groundtruth_similarity
593394
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Say I'm looking for some power connectors for a design, the current requirements are quite high and although I could get a connector with contacts rated for the full current, they're so bulky that they're unattractive for my design. Smaller power connectors are available, which could meet the current requirement if I bond a few circuits together to spread the load and would result in a much smaller overall solution with regards to board space used. Is doing this bad practice? Is there anything I should watch out for when trying to do this? Now provide the response and nothing else.
Using parallel pins of a connector or parallel wires in a cable to increase the current capability is standard practice. It has been done for a very long time, probably since the start of electrical wiring. It's often done so that a multiway connector can use smaller pin/wire thicknesses suitable for the majority of the wires and just use a few pins/wires in parallel for the high-current power/ground connections. That happens in cars, for example. Another is to allow the different current requirements of different power connections to be met by using different numbers of parallel pins/wires for each. That happens on a PC motherboard power connection, for example. It is usually important and favoured to ensure that the low/zero voltage return path gets given as high or higher current capability than the power supply paths. This is to ensure that high supplied current cannot cause excessive voltage drops in the return path, causing excessively different ground/0V voltages at connected items that require a common ground/0V. They'll be slightly different anyway but normally tolerably so. Some systems perform a hot connection: a connection/disconnection while the connector wires are active. Some don't, only doing so when power is off. For the former, care needs to be taken about how pins gradually meet as only one connection of many may momentarily be made. But that's part of hot swap design which is a subject in itself.
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/593394', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/45276/']}
jdg_76824
stackexchange
llm_judgeable_groundtruth_similarity
298203
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am part of a small team which develops several internal apps for our company. We are in a process of becoming more Agile, this especially includes lots of automated testing. Now we are in a situation where, for the one or two apps which we have worked on most recently, we generally make small, reversible changes, run lots of tests, and deploy quite fast with not much human intervention. I would consider us to be still a long way off from doing 'real' CI. For the one or two apps mentioned above we could probably get very close to it pretty soon. However, I am having a hard time imagining what our setup is going to look like, sometime in the future where we have started using CI for most of our legacy code. Suppose we have several standalone apps, which all can be deployed independently to different servers. We also have some shared code which is used by many of them, and which we want to use consistently between them. This includes utility functions, code which enforces things like a consistent interface over some parts of all apps, and the ORM definitions for our database (shared across all apps). I see two alternatives, neither of which looks very easy or elegant: All code gets merged into one huge repo. Deployment means running all the tests for all the code, integration tests for everything and acceptance tests for all the apps, before rolling out everything in one go. This seems to make deployment a much Bigger Deal than it was before, contrary to the philosophy of CI, which suggests that it should be fast and easy. It also means we don't have any separation between different parts of our codebase, with things which never work together in the same repo forever, just because they both depend on some third thing. We keep each app and each component of the shared code separate. Deployment means testing the latest version of one component extensively, before 'dropping it in' to a working system consisting of all other components. This seems like a cleaner design. However, it seems to imply that we have to manage dependencies and versioning for all these things. Every integration/acceptance test has to have some foreknowledge of what versions of the other components it is going to be used with and can tolerate. In other words, although every component becomes a lot more reliable, we have to worry loads more about how the pieces fit together, and integration bugs. When the pieces which everything else depends on change, there is potential for breakage eveywhere. The way out of this dilemma is probably to do a limp version of CI, where the individual components get tested extensively and deployed quickly, but we stick to having big 'flag day' releases of the utilities and database schema. But it really seems like there are lots of benefits of proper CI/CD which we would be missing out on. It isn't an aesthetically pleasing solution, and we aren't looking to satisfy our bosses that we have ticked a box, but we want to improve our working practices. How should we organize the code for proper CI, and what are the key lessons to learn about planning and designing the architecture, both for legacy and future newly written code? Now provide the response and nothing else.
Imagine your code not as a monolithic system, but rather as a series of packages. Some packages depend on others. Some, such as jQuery, are external to your company; others are developed by your company and made public; others, developed by your company, are eventually made private. For instance, if you develop a web application in Python, you may have a dependency to Flask—a popular Python web framework—and to a bunch of other packages, both external and internal. What happens to your CI when developers of Flask release a new version? Right, nothing. It's up to you to go and edit your project file which says that from now on, you are not using the version x of Flask, but rather the version y . Once you do that, CI considers that your project changed, and launches all the tests which ensure that the application still works with the new version of Flask. If it doesn't, well, you'll probably fix your application, except in very rare cases where you actually reveal a bug in Flask itself. The same logic can be applied to any code produced within your company. A shared library becomes a Python package , either shared through pypi —the repository of public packages for Python, or stored on a private pip server which can be used within your company only. It then makes it particularly clear what broke the build. When you publish the new version of a package A, CI runs the corresponding tests and indicates whether the version contains regressions or not. Then, if a problem is encountered at the stage when you ask a package B to use the new version of the package A, it's the package B which broke the build by being incompatible with the new version of the package A: in the same way, your app may not be compatible with a newer version of Flask or jQuery. Notice that you don't have to really manage dependencies yourself: the packaging system does it for you. The only problem which requires your intervention is the updating of the references, that is the action of telling that a given package will use a different version of another package: if you modified a package which is used a lot in your code base, it may take a while to track and modify all the projects which use it. As for the version control, it really doesn't matter. You may have a team working with Git, and another one working with SVN. As soon as the two teams agree on using pypi and choosing specific pip server(s)¹ for internal packages, everything will be all right. Similarly, you shouldn't care whether developers of Flask or jQuery use Git, SVN, SourceSafe or TFS. Note: I've used an example from Python world, but the same logic can be applied to other languages as well. There is npm for Node.js, NuGet for .NET Framework languages, etc. You can setup private package managers as well. Further reading: Packages, dependencies and interaction between teams , my article based on this question. ¹ Nothing forces you to have a single package server in your company. Different teams can deploy their own servers; the only constraint is that other teams should know the location of the server in order to use the packages.
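As a small illustration (the package names, versions and index URL here are invented), "updating the reference" is typically a one-line change in the dependent project's own packaging metadata, and that change is what triggers CI for that project:

# setup.py of a hypothetical "billing" application
from setuptools import setup, find_packages

setup(
    name="billing",
    version="2.4.0",
    packages=find_packages(),
    install_requires=[
        "flask>=1.0,<2.0",   # external dependency from public PyPI
        "acme-orm==3.2.1",   # internal shared package: bump this pin to adopt a new version
    ],
)

Installing against a private index is then just a matter of pointing pip at it, for example pip install --index-url https://pypi.internal.example.com/simple billing.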
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/298203', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/85471/']}
jdg_76825
stackexchange
llm_judgeable_groundtruth_similarity
86326
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: We're building ethernet cable or a non-profit community computer lab. We test them with a cheapie cable tester, which I believes simple checks the electrical connection of each wire. We're having some problems with the networking ( pings are variably iffy ) , and we'd like to test the cables more thoroughly -- but of course, we can't afford a better tester. Is there were some program that could use network cards on two computers to do more sophisticated testing of a network cable, on the physical layer? Now provide the response and nothing else.
No, generally speaking, you're not going to find applications that are worthwhile for testing a cable using two PCs. You really need a dedicated Cat5/5e/6 cable tester. Decent cable testers can easily be in the $2k+ range depending on what type of testing you require, i.e. certification, crosstalk, near end, far end, alien, etc.

Either buy off-the-shelf premade cables, or double check how you are making your cables. So many people make the mistake, when making ethernet cables, of ignoring the COLORS and stripes on each wire. This matters. You need to get both PINNING and PAIRING correct. Pinning means 1-1, 2-2, 3-3, etc., or maybe 1-3, 2-6, 3-1, 6-2 for 10/100M crossover. Gigabit copper is even more demanding. PAIRING, on the other hand, means making sure that pins 1 & 2 are a pair (meaning if pin 1 is orange, pin 2 is orange/white stripe), along with 3 & 6, 4 & 5, and 7 & 8. The most common mistake, by far, is to pair the wires sequentially: 1 & 2, then 3 & 4, and so on. YOU WILL DEFINITELY HAVE PROBLEMS if you do this. Just look at one of the cables and read off your colors. If you have the same color wire on 3 and 4 (say green and green/white), you know you've screwed up.

Another tip: when making cables, make sure that when you look through the front of the connector you can see the shiny copper tips of the cable. This will tell you the individual wires are fully seated and ready for crimping. HTH
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/86326', 'https://serverfault.com', 'https://serverfault.com/users/26664/']}
jdg_76826
stackexchange
llm_judgeable_groundtruth_similarity
2005625
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a List<string> and I have a DataTable. One of the columns in a DataRow is ID. The List holds instances of this ID. The DataTable gets populated on a Timer. I want to return items from the List that are not in the DataTable into another list. Now provide the response and nothing else.
You will want to do something like this:

var tableIds = table.Rows.Cast<DataRow>().Select(row => row["ID"].ToString());
var listIds = new List<string> {"1", "2", "3"};
return listIds.Except(tableIds).ToList();

You can cast the rows in the data table to be an IEnumerable collection and then select the "ID" column value from each of them. You can then use the Enumerable.Except extension method to get all of the values from the List that are not in the collection you just made. If you need to get the values that are in the table but not the list, just reverse listIds and tableIds.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2005625', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/84539/']}
jdg_76827
stackexchange
llm_judgeable_groundtruth_similarity
82561
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a binary file that I can send with netcat : $ nc -l localhost 8181 < my.dat The file contains this: $ xxd my.dat0000000: 0006 3030 3030 4e43 ..0000NC What I really want to do is send the hex string directly. I've tried this: $ echo '0006303030304e43' | nc -l localhost 8181 However, the above command just sends the ascii string directly to nc . Now provide the response and nothing else.
I used the -r and -p switches for xxd:

$ echo '0006303030304e43' | xxd -r -p | nc -l localhost 8181

Thanks to inspiration from @Gilles' answer, here's a Perl version:

$ echo '0006303030304e43' | perl -e 'print pack "H*", <STDIN>' | nc -l localhost 8181
{}
{'log_upvote_score': 8, 'links': ['https://unix.stackexchange.com/questions/82561', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/24554/']}
jdg_76828
stackexchange
llm_judgeable_groundtruth_similarity
115896
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm confused about how sshd drops privs for incoming connections. I found this page which was very informative on a high level: http://www.citi.umich.edu/u/provos/ssh/privsep.html But I don't understand how the child of privileged sshd is unprivileged. In my debug below privileged sshd is 28389 and calls clone() to create 29266, the unprivileged child. When I use ps to display the UID/EUID of the pids 29268 is UID 1002 as expected, but 29266, the unprivileged child of sshd, is zero, which seems to contradict the link above which says 'This is achieved by changing its uid/gid to an unused user'. woOt@host:~$ sudo strace -p 28389 -e trace=cloneProcess 28389 attached - interrupt to quitclone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fb7076d5a90) = 29266^CProcess 28389 detachedwoOt@host:~$woOt@host:~$woOt@host:~$ ps auxf | grep sshdroot 28389 0.0 0.0 49944 1244 ? Ss 18:15 0:00 /usr/sbin/sshdroot 29266 0.0 0.2 54576 3308 ? Ss 18:37 0:00 \_ sshd: test [priv]test 29268 0.0 0.0 54576 1440 ? S 18:37 0:00 \_ sshd: test@pts/2woOt@host:~$woOt@host:~$woOt@host:~$ ps -eo pid,uid,euid | egrep 'PID|28389|29266|29268' PID UID EUID28389 0 029266 0 029268 1002 1002woOt@host:~$woOt@host:~$woOt@host:~$ grep 1002 /etc/passwdtest:x:1002:1003:,,,:/home/test:/bin/bashwoOt@host:~$ edited question to correctly illustrate: unprivileged child (2406) of monitor process (2405) root 2370 0.0 1.1 55592 5524 ? Ss 00:12 0:00 /usr/sbin/sshd -Droot 2405 0.0 1.1 59008 5720 ? Ss 00:12 0:00 \_ sshd: test [priv]sshd 2406 0.0 0.6 56936 3072 ? S 00:12 0:00 \_ sshd: test [net] and user privileged child (2419) of monitor process post-auth: root 2370 0.0 1.1 55592 5524 ? Ss 00:12 0:00 /usr/sbin/sshd -Droot 2405 0.0 1.1 60224 5756 ? Ss 00:12 0:00 \_ sshd: test [priv]test 2419 0.0 0.7 60224 3888 ? S 00:12 0:00 \_ sshd: test@pts/1 appreciate the answers, thanks Now provide the response and nothing else.
"29266, the unprivileged child": this one is actually the privileged one, as the process title sshd: test [priv] shows; 29268 is the unprivileged post-auth child (the user is already test instead of root). But the main place where privsep is used is before authentication:

root     11759  0.0  0.0  69928  6056 ?  Ss  Feb07  0:00 /usr/sbin/sshd -D
root     10071  0.2  0.0 119280  7096 ?  Ss  10:56  0:00  \_ sshd: unknown [priv]
sshd     10072  0.2  0.0  71272  3016 ?  S   10:56  0:00      \_ sshd: unknown [net]

There you see the privileged monitor process running under root privileges (10071) and the net child (sshd: unknown [net] in the listing above) running under the sshd user, usually inside some sandbox that would prevent most privilege-escalation attacks if there were a hole in the code. So, as already mentioned by tylerl, setuid is used to drop/change privileges from root to sshd or to your user. But note that you need to point at the correct process when trying to understand this constellation. An image is usually worth more than a thousand words:

Where, in the image corresponding to your example:

- the first privileged process is the sshd daemon (your pid 28389)
- the second privileged one is the monitor (your pid 29266)
- the unprivileged network child is missing from your ps output (mind pid 10072)
- the user-privileged child is the last one (your pid 29268)
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/115896', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/102740/']}
jdg_76829
stackexchange
llm_judgeable_groundtruth_similarity
7798388
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wanted to create a reminder app for my android. Actually I wanted to save reminders and the dates in the database and show the reimnder when the scheduled time is reached. Can anyone please suggest the code for this. Can alarms be given specific id at the time of saving and used to retrieve the specific reminder from database. I am new to android. Please help me. Now provide the response and nothing else.
You can't have an array of different types, exactly. But you can achieve a similar effect (for some purposes at least) in a number of different ways. If you just want a few values of different types packaged together, but the number and types of values don't change, you just need a struct and can access them by name: struct s_item { int number; char str[100];} item;item.number = 5;strcpy(item.str,"String less than 100 chars"); If you know what types you might use, you can create a union, or a struct containing a union so you can tag it with the type. You can then create an array of those. The type member lets you check to see what you stored in each array element later. enum ElementType { et_str, et_int, et_dbl };struct Element { ElementType type; union { char *str; int i; double d; }};struct Element *arr = malloc(sizeof(struct Element) * 3);arr[0].type = et_str;arr[0].str = strdup("String value"); /* remember to free arr[0].str */arr[1].type = et_int;arr[1].i = 5;arr[2].type = et_dbl;arr[2].d = 27.3;/* access the values.. */for (int i = 0; i < 3; i++) { switch(arr[i].type) { case et_str: printf("String: %s\n",arr[i].str); break; case et_int: printf("Integer: %d\n",arr[i].i); break; case et_dbl: printf("Double: %f\n",arr[i].d); break; }}/* The strings are dynamically allocated, so free the strings */for (int i = 0; i < 3; i++) if (arr[0].type == et_str) free(arr[0].str);/* free the malloc'ed array */free(arr);/* etc., etc. */ This approach may waste space because: Each element has an extra value to keep track of the type of data it holds The struct may have extra padding between its members The types in the union may be different sizes, in which case the union will be as large as the largest type If you have another way of knowing what type you've stored in each element, you can use just the bare union without the struct wrapping it. This is a little more compact, but each element will still be at least as large as the largest type in the union. You can also create an array of void * values. If you do this, you'll have to allocate the items somehow and assign their addresses to the array elements. Then you'll need to cast them to the appropriate pointer type to access the items. C doesn't provide any runtime type information, so there's no way to find out what type of data each element points at from the pointer itself -- you must keep track of that on your own. This approach is a lot more compact than the others when the types you're storing are large and their sizes vary a lot, since each is allocated separately from the array and can be given only the space needed for that type. For simple types, you don't really gain anything over using a union. void **arr = malloc(3 * sizeof(void *));arr[0] = strdup("Some string"); /* is a pointer already */arr[1] = malloc(sizeof(int));*((int *)(arr[1])) = 5;arr[2] = malloc(sizeof(double));*((double *)(arr[2])) = 27.3;/* access the values.. */printf( "String: %s\n", (char *)(arr[0]) );printf( "Integer: %d\n", *((int *)(arr[1])) );printf( "Double: %f\n", *((double *)(arr[2])) );/* ALL values were dynamically allocated, so we free every one */for (int i = 0; i < 3; i++) free(arr[i]);/* free the malloc'ed array */free(arr); If you need to keep track of the type in the array, you can also use a struct to store the type along with the pointer, similar to the earlier example with the union. This, again, is only really useful when the types being stored are large and vary a lot in size. 
enum ElementType { et_str, et_int, et_dbl };struct Element { ElementType type; void *data;};struct Element *arr = malloc(sizeof(struct Element) * 3);arr[0].type = et_str;arr[0].data = strdup("String value");arr[1].type = et_int;arr[1].data = malloc(sizeof(int));*((int *)(arr[1].data)) = 5;arr[2].type = et_dbl;arr[2].data = malloc(sizeof(double));*((double *)(arr[2].data)) = 27.3;/* access the values.. */for (int i = 0; i < 3; i++) { switch(arr[i].type) { case et_str: printf( "String: %s\n", (char *)(arr[0].data) ); break; case et_int: printf( "Integer: %d\n", *((int *)(arr[1].data)) ); break; case et_dbl: printf( "Double: %f\n", *((double *)(arr[2].data)) ); break; }}/* again, ALL data was dynamically allocated, so free each item's data */for (int i = 0; i < 3; i++) free(arr[i].data);/* then free the malloc'ed array */free(arr);
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7798388', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/917885/']}
jdg_76830
stackexchange
llm_judgeable_groundtruth_similarity
4617446
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: http://en.wikipedia.org/wiki/Typeid This seems to be a mystery to me: how does a compiler stores information about the type of an object ? Basically an empty class, once instantiated, has not a zero size in memory. Now provide the response and nothing else.
How it is stored is implementation-defined. There are many completely different ways to do it. However, for non-polymorphic types nothing needs to be stored. For non-polymorphic types typeid returns information about the static type of the expression, i.e. its compile-time type. The type is always known at compile-time, so there's no need to associate any additional information with specific objects (just like for sizeof to work you don't really need to store the object size anywhere). "An empty object" that you mention in your question would be an object of non-polymorphic type, so there's no need to store anything in it and there's no problem with it having zero size. (Meanwhile, polymorphic objects are never really "empty" and never have "zero size in memory".) For polymorphic types typeid does indeed return the information about the dynamic type of the expression, i.e. about its run-time type. To implement this something has to be stored inside the actual object at run-time. As I said above, different compilers implement it differently. In MSVC++, for one example, the VMT pointer stored in each polymorphic object points to a data structure that contains the so called RTTI - run-time type information about the object - in addition to the actual VMT. The fact that you mention zero size objects in your question probably indicates that you have some misconceptions about what typeid can and cannot do. Remember, again, typeid is capable of determining the actual (i.e. dynamic) type of the object for polymorphic types only . For non-polymorphic types typeid cannot determine the actual type of the object and reverts to primitive compile-time functionality.
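A small self-contained C++ illustration of that difference (the exact strings printed by name() are implementation-defined and usually mangled):

#include <iostream>
#include <typeinfo>

struct Plain { int x; };                     // non-polymorphic: no virtual functions
struct Base { virtual ~Base() = default; };  // polymorphic
struct Derived : Base {};

int main() {
    Plain p;
    Plain* pp = &p;
    // Static (compile-time) type: nothing stored inside the object is consulted
    std::cout << typeid(*pp).name() << '\n';

    Derived d;
    Base* bp = &d;
    // Dynamic (run-time) type: the RTTI reached through the object reports Derived, not Base
    std::cout << typeid(*bp).name() << '\n';
}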
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4617446', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/414063/']}
jdg_76831
stackexchange
llm_judgeable_groundtruth_similarity
1390891
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Note: Over the course of this summer, I have taken both Geometry and Precalculus, and I am very excited to be taking Calculus 1 next year (Sophomore for me). In this question, I will use things that I know from Calculus, but I emphasize that I have not taken the course, so please bear with me. This will be long. Among other Geometric formulas that I have learned recently, I am currently trying to prove that the surface area of a sphere is $4\pi r^2$. Intuitively, this seems fairly straightforward for me, since I have already proven the volume of a sphere using integration to be $\frac{4}{3} \pi r^3$, and that the integral of the circumference of a circle is its area. Using these two facts together, it makes sense that the integral of the surface area of a sphere should be its volume, leading me to believe the formula for surface area stated above. However, this is a very weak argument, since I make the connection between circumference to area and area to volume without much proof. As an alternative, I made a proof involving infinite sums of the lateral areas of cylinders squeezed into a sphere. One can use this method of infinite cylinders to prove the volume of a sphere, but when I tried almost the exact same thing for surface area, I found that the value of pi is 4 . I have only heard of one other strange proof that leads to this result, and it can be found here: Is value of $\pi = 4$ . It was promptly disproved in numerous ways, some of them here: How to convince a layman that the π=4 proof is wrong? As far as I know, there's nothing wrong with my math -- I most likely simply set up the problem incorrectly. Here's what I did: Consider a sphere of radius $r$, aligned with the coordinate plane in a way that its center is the origin. To approximate the surface area (or volume) of this sphere, we can imagine fitting $n$ stacked cylinders into it, each with a center on the line $x = 0$. Each cylinder has a height $h$, or $\Delta y$. Since the height of the sphere is $2r$, $h = \frac{2r}{n}$ Looking at a cross-section of this sphere, the radius of each cylinder must satisfy the equation: $r_i^2 + y^2 = r^2$ where $r_i$ is the radius of the cylinder with index $i$, and the radius of the sphere is $r$. Now, we sum the lateral areas of the cylinders to get an approximation for the surface area of the sphere. As $n \rightarrow \infty$, our approximation gets better, so $A = \lim_{n\rightarrow\infty}{\sum_{i=1}^n{2\pi r_i (\frac{2r}{n})}} = \lim_{n\rightarrow\infty}{\sum_{i=1}^n{2\pi r_i \Delta y}}$ In other words: $\int_{-r}^r{2\pi r_i dy}$ And as we already know, $r_i = \sqrt{r^2-y_i^2}$ So we substitute and take out constants: $2\pi \int_{-r}^r{\sqrt{r^2-y^2} dy}$ From here, we have two options: 1) Take the integral and find something very messy; 2) Recognize that the expression inside the integral is just a semicircle, and that a semicircle has half the area of a circle. Choosing option 2), I end up with: $2\pi \frac{\pi r^2}{2} = \pi^2 r^2$ Which is clearly not the surface area of a sphere, but I can't figure out why. Strangely, another "proof" I did also led to this result. Take the semicircle $y = \sqrt{r^2 - x^2}$ with arclength $\pi r$ and rotate it about the x axis $2\pi$ radians. We now have a sphere, with surface area $2\pi^2 r^2$. Something is clearly wrong here, but it gets stranger. 
If we simultaneously accept Archimedes proof of the surface area of a sphere, we find: $\pi^2 r^2 = 4\pi r^2$ And by "solving for $\pi$," we find that: $\pi = 4$. I'm not looking for a better proof or someone to convince me that Archimedes is right, as I fully accept the textbook formula and have used other proofs to show it. I have a feeling that I may have approached the sphere in a "non-smooth" way, since the zig-zag shape that the cylinders make is eerily like the method used in the classic $\pi = 4$ proof, and when I used polygonal approximations I got a valid answer. Thanks for reading all the way through this, and does anyone know how I messed up? Now provide the response and nothing else.
One thing that's true about volume but not surface area (or, down a dimension, true about area but not perimeter) is that if shape $A$ contains shape $B$, then $A$ has a larger volume than $B$. In symbols:$$A\supseteq B\implies\operatorname{volume}(A)\ge\operatorname{volume}(B)$$This means that it's fairly easy to approximate volumes — just find the volume of something that contains $A$ to get an upper bound, and find the volume of something that's contained in $A$ to get a lower bound. Now, to get the volume exactly: if we can get better and better approximations, then we can just take the limit as the two approximations get increasingly better. Hopefully, they approach the same limit (call it $a$) — and, except for weird cases, they usually do approach the same limit — so we have that the volume of $A$ is bounded above and below by $a$. That is, $a\le\operatorname{volume}(A)\le a$. Thus, we have that the volume is exactly $a$. This all goes out the window with surface area. Let's say $A$ is a smooth shape, and $B$ is some really spiky thing contained in $A$. Now, it's entirely possible that $B$ has a larger surface area than $A$. So we get no upper or lower bounds. This means that we have no reason to expect our approximations to be accurate whatsoever. And, as you've found out, they aren't always accurate. There is a way to save this. If $A$ and $B$ are both convex, then we do have that $A$ has a larger surface area than $B$, and our approximations do work. And we can use limits to get the volume exactly. However, I'm pretty sure your union of cylinders isn't convex.
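For contrast, here is a sketch of the computation the stacked cylinders are missing: each thin band of the sphere is a slanted frustum rather than a cylinder, so its width is the arc length element $ds$, not $dy$. With $x=\sqrt{r^2-y^2}$ one gets $\sqrt{1+(dx/dy)^2}=r/\sqrt{r^2-y^2}$, hence
$$A=\int_{-r}^{r}2\pi x\,ds=\int_{-r}^{r}2\pi\sqrt{r^{2}-y^{2}}\cdot\frac{r}{\sqrt{r^{2}-y^{2}}}\,dy=\int_{-r}^{r}2\pi r\,dy=4\pi r^{2}.$$
That slant factor is exactly what the lateral areas of the cylinders ignore, which is why their total comes out as $\pi^2 r^2$ instead of $4\pi r^2$.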
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/1390891', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/260755/']}
jdg_76832
stackexchange
llm_judgeable_groundtruth_similarity
150008
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Throughout this question, the following notation holds: Let $q$ be a power of a prime $p$, and let $d>4$ be a positive integer. Let $G$ be a finite group with a normal subgroup $E$ which is an elementary-abelian $p$-group, and such that $G/E\cong\mathrm{SL}_d(q)$. I am interested in the situation where this extension is non-split , i.e. when there is no subgroup $H<G$ such that $H\cong \mathrm{SL}_d(q)$ and $G=EH$. Q1 Where can I read about such non-split extensions? What are the main results about such groups? Given that this is rather general, let me propose a more specific version: Q2 What can one say if one assumes that $|E|=q^d$? Note : There are a number of results in the literature for a situation similar to this more specific situation. The difference is that people usually consider what happens when the quotient $G/E$ is isomorphic to $\mathrm{GL}_d(q)$, rather than $\mathrm{SL}_d(q)$. Some relevant results are listed in this MO question - note especially that the answer of @ndkrempel suggests that the situation I am considering allows for significantly different behaviour. Finally, a question about how such groups might embed into larger linear groups. Q3. Suppose that $G$ is isomorphic to a subgroup of $\mathrm{SL}_n(q)$. Is it true that if $d>\frac34n$, then the extension is split? Note : I chose the number $\frac34$ out of thin air. I'd be happy to hear of a `yes' answer, with that $\frac34$ replaced by your favourite positive real number. Now provide the response and nothing else.
I will try and answer Question 2, although I only have a superficial knowledge of the representation theory involved. I claim that, when $|E|=q^d$, the induced module action of ${\rm SL}_d(q)$ on $E$ must either be trivial, or it must be quasi-equivalent to the action on the natural module. (Quasi-equivalent means equivalent after applying a group automorphism.) By results of Steinberg, the irreducible modules of groups $X(q)$ of Lie type (such as ${\rm SL}_d(q)$) with $q=p^e$, over fields of characteristic $p$, are tensor products of certain basic modules, where each tensor factor is twisted by a different field automorphism. The smallest field over which the basic modules can be written is ${\mathbb F}_q$. If $q=r^f$, $\phi$ is a field automorphism of order $f$, and $M$ is a basic module, then the module $M \otimes M^\phi \otimes M^{\phi^2} \otimes \cdots \otimes M^{\phi^{f-1}}$ can be written over ${\mathbb F}_r$, and this is the only way in which the irreducible modules can be written over smaller fields. (This is for the untwisted groups. The theory is a little more complicated for the twisted groups such as ${\rm SU}_d(q)$.) For ${\rm SL}_d(q)$, the smallest nontrivial basic modules are the natural module and its dual, with dimension $d$ over ${\mathbb F}_q$. (The next smallest if the exterior square of the natural module, with dimension $d(d-1)/2$.) It seems clear that there is no way of taking tensor products of modules twisted by field automorphisms as described above that will result in an irreducible module with dimension at most $ed$ over ${\mathbb F}_p$. For example, if $q=p^2$ and $M$ is the natural module, then $M \otimes M^\phi$ can be written over ${\mathbb F}_p$, but it has dimension $d^2$, which is greater than $2d$ for $d>2$. For $d>4$, the Schur multiplier of ${\rm SL}_d(q)$ is not divisible by $p$, so no non-split extensions can arise when the action on $E$ is trivial. (But there are some exceptions when $d \le 4$.) The second cohomology groups of ${\rm SL}_d(q)$ on the natural module (and also on its exterior powers) were computed in the two papers: G.W. Bell, On the cohomology of the finite special linear groups I and II , J. Algebra 54, 216-238, 239-259. For $d>4$, the only nonsplit extension (the Dempwolff group) arises for ${\rm SL}_5(2)$ with $|E|=2^5$. I think Qn 1 is too general to answer sensibly, but note that, for any ${\rm SL}_d(q)$, there must be some irreducible module over ${\mathbb F}_p$ with nontrivial second cohomology, so there must exist an $E$ with a nonsplit extension. As for Qn 3, I would certainly bet on the answer being yes, because it is hard to see how you could fit even a split extension in ${\rm SL}_n(q)$ with $d > 3n/4$ if the module $E$ was not trivial or natural, but it might need a bit of hard work to prove it.
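To put a number on the dimension count above (an illustrative check, not part of the original answer): take $d=5$ and $q=p^{2}$, so $e=2$ and $$\dim_{\mathbb{F}_p} E = ed = 10, \qquad \dim_{\mathbb{F}_p}\bigl(M\otimes M^{\phi}\bigr) = d^{2} = 25 > 10,$$ so even the smallest nontrivial twisted tensor product is already far too large to act on $E$, which is the point of the claim that the induced action must be trivial or quasi-equivalent to the action on the natural module.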
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/150008', 'https://mathoverflow.net', 'https://mathoverflow.net/users/801/']}
jdg_76833
stackexchange
llm_judgeable_groundtruth_similarity
19118059
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When serializing a Java object which has other object references, I need to serialize only one attribute of the nested object(usual case of foreign key, so serialize the "id" attribute of the object reference). Ingore everything else. For example, I have two classes which I need to serialize to JSON & XML(removed JPA annotations for clarity): Relationship: User ->(one-to-many) AddressInformation;Also: AddressInformation ->(one-to-one) User @XmlRootElementpublic class User { private String id; private String firstName; private String lastName; private String email; private AddressInformation defaultAddress; private Set<AddressInformation> addressInformation; public User() { } @JsonProperty(value = "id") @XmlAttribute(name = "id") public String getId() { return id; } public void setId(String id) { this.id = id; } @JsonProperty(value = "firstname") @XmlAttribute(name = "firstname") public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } @JsonProperty(value = "lastname") @XmlAttribute(name = "lastname") public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } @JsonProperty(value = "email") @XmlAttribute(name = "email") public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } @JsonIgnore public Set<AddressInformation> getAddressInformation() { return addressInformation; } public void setAddressInformation(Set<AddressInformation> addressInformation) { this.addressInformation = addressInformation; } @JsonProperty(value = "defaultaddress") @JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id") public AddressInformation getDefaultAddress() { return defaultAddress; } public void setDefaultAddress(AddressInformation defaultAddress) { this.defaultAddress = defaultAddress; }} AddressInformation: @XmlRootElementpublic class AddressInformation { private String id; private String address; private String details; private User user; @JsonProperty(value = "id") @XmlAttribute(name = "id") public String getId() { return id; } public void setId(String id) { this.id = id; } @JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id") public User getUser() { return user; } public void setUser(User user) { this.user = user; } @JsonProperty(value = "details") @XmlAttribute(name = "details") public String getDetails() { return details; } public void setDetails(String details) { this.details = details; } @JsonProperty(value = "address") @XmlAttribute(name = "address") public String getAddress() { return address; } public void setAddress(String address) { this.address = address; } public AddressInformation() { super(); }}enter code here When serializing User for example, I need: { "id" : "idofuser01", "email" : "[email protected]", "status" : "OK", "firstname" : "Filan", "lastname" : "Ovni", "defaultaddressid" : "idofaddress01",}enter code here When serializing AddressInformation: { "id" : "idofaddress01", "address" : "R.8. adn", "details" : "blah blah", "userid" : "idofuser01",} I have tried @JsonManageReference & @JsonBackReference with no success. As you can see I also tried @JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id") Now provide the response and nothing else.
Just found a way using Jackson 2.1+. Annotate the object reference as follows (this will serialize only the id attribute of AddressInformation):

    @JsonProperty(value = "defaultaddressid")
    @JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id")
    @JsonIdentityReference(alwaysAsId = true)
    public AddressInformation getDefaultAddress() {
        return defaultAddress;
    }

Serialization works very well.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19118059', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1154065/']}
jdg_76834
stackexchange
llm_judgeable_groundtruth_similarity
2397116
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a website which works with PHP on the server side. Users access pages and PHP does some calculations, writes data to a MySQL database etc. Imagine a user accesses a page where PHP creates an account for the user. The creation consists of two parts: inserting the registration data into the table "users" and inserting settings for this account into the table "settings". These are two SQL queries which have to be executed one after another. If the user quits the page after the first query, there aren't any values inserted into "settings". How could I avoid this issue? I think simply by using ignore_user_abort(true), right? So isn't it useful to call ignore_user_abort(true) on the top of every PHP script? I don't know any cases where it should cause problems. Now provide the response and nothing else.
For your specific example, using database transactions (as mentioned by Ignacio) would be the more suitable approach. There are other cases where you might want to make sure the user can't abort early, though, not relating to databases. For example, if you update the database and then send out a mail, you don't want a user to be able to stop the process before the mail goes out. In this sort of case, ignore_user_abort would be appropriate. However, note that a broken pipe due to client aborting the connection doesn't stop execution right away , only at the point you next try to write to the script output. This can be through calling echo or print , or even just by closing the PHP tag and inserting some whitespace before opening a new one ( ... ?> <?php ... ). So if you have all the ‘action’ part of your script at the top of the page, before you try to write any page content, you don't have to worry about interruptions from broken pipes affecting your app logic. And of course you should be separating action logic from page content in that way anyway. From http://php.net/manual/en/function.ignore-user-abort.php#refsect1-function.ignore-user-abort-notes PHP will not detect that the user has aborted the connection until an attempt is made to send information to the client. Simply using an echo statement does not guarantee that information is sent
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2397116', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/89818/']}
jdg_76835
stackexchange
llm_judgeable_groundtruth_similarity
22239803
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What does the hexadecimal that represents a CSS color mean? How can I tell what color it is without memorizing the exact code? Does it have any relationships with RGB (and CMYK) system? I thought for a moment that FF means 255 in RGB, but then I realized that 15^2 isn't 255, but 225. Now provide the response and nothing else.
Hexadecimal uses sixteen distinct symbols; in the case of CSS colors, the symbols 0–9 represent the values zero to nine and A, B, C, D, E, F represent the values ten to fifteen. So with one hexadecimal character you can represent 16 values, and with two hexadecimal characters you can represent 256 (16*16) values: a pair is worth "first digit times 16, plus second digit", so FF = 15*16 + 15 = 255 (not 15^2 = 225). In RGB you have colours represented by Red, Green, Blue (R=0-255, G=0-255, B=0-255), so we use 3 pairs of hexadecimal symbols! So when you see an RGB color, you can make the calculation below. Example: Hex #4C8ED5 is RGB 76, 142, 213, because 4C = 4*16 + 12 = 76 (Red), 8E = 8*16 + 14 = 142 (Green), and D5 = 13*16 + 5 = 213 (Blue)! Hope it helps your understanding! More to read: Hexadecimal on Wikipedia and a nice RGB to Hexadecimal Converter
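If you want to do the conversion programmatically rather than by hand, here is a small illustrative C snippet (my own addition; the input string is arbitrary) that splits a #RRGGBB value into its three 0-255 channels:

    #include <stdio.h>

    int main(void) {
        const char *css = "#4C8ED5";      /* hypothetical CSS colour */
        unsigned int r, g, b;

        /* Each pair of hex digits is one 0-255 channel value. */
        if (sscanf(css, "#%2x%2x%2x", &r, &g, &b) == 3) {
            printf("R=%u G=%u B=%u\n", r, g, b);   /* prints R=76 G=142 B=213 */
        }
        return 0;
    }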
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22239803', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_76836
stackexchange
llm_judgeable_groundtruth_similarity
99
Below is a question asked on the forum robotics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a number of interrupt service routines on an AVR. These include interrupts for USART serial communication, timers, and SPI communication. For all of these, I use circular queues (using a start and end pointer, without boundary checking). Some problems start to occur if the AVR starts to get overloaded. The circular queues will lose chunks of data. To solve this, I can reduce the load on the AVR (for example, by skipping some cycles in the timer). However, this is a manual process: the load is only reduced once the AVR already appears to be having problems. This is partly because I do want relatively consistent timer periods. However, even at 70% average processor load, the queues can fill up randomly by chance. In case of spurious overloading at times, how can I make this more adaptive to avoid queue overflows? Now provide the response and nothing else.
It sounds like you're observing the symptom of "lost bytes". There are several different things that can cause an AVR to lose bytes. It sounds like you are guessing that it's losing bytes when new bytes are coming in while the buffer is full. While there are several remedies to fix or at least ameliorate that particular problem, those approaches are useless and counter-productive if the real cause of the "lost bytes" is something else.

The first thing I would do in your shoes is to set up some sort of "status debug" port that gives me a clue of why exactly bytes have been lost -- at least a status LED. Once you know why bytes are being lost, you can apply the appropriate remedy.

Most modern protocols have some sort of check value at the end of each packet. It's nice if your status debug system can report on the packet goodput rate and the packet error rate -- at least blink a green LED for each validated packet and a red LED for each failed packet check. Since (especially with radio connections) the occasional corrupted byte is pretty much inevitable, most modern protocols are carefully designed such that if any byte is corrupted or lost, the system will eventually discard that packet, and eventually re-sync and correctly handle future packets.

Bytes lost because the interrupt handler somehow never put them into the buffer

Often bytes are lost because the interrupt handler somehow never put them into the buffer. There are several causes, each with a different remedy: external problems, and internal problems where interrupts are turned off too long.

External problems:

- Line noise causing errors
- Physical wired connections accidentally getting temporarily unplugged
- Loss of signal on radio connections

Typically we clip an oscilloscope to the input pin and -- if we're lucky -- we can see the problem and try various techniques to see if that cleans up the signal. Even when the signal at the input pin looks perfect, we can still have data loss issues.

Interrupts turned off too long:

Immediately after the last bit of a byte comes in on a USART or SPI port, normally the interrupt handler for that port is triggered, and that interrupt handler pulls that byte and sticks it into a circular buffer. However, if interrupts are turned off too long, the next byte to come in on that port will inevitably overwrite and lose the first byte -- the interrupt handler for that port never sees that first byte. The four "ways an interrupt handler can be turned off too long" are listed at "What can be the cause of an exceptionally large latency for the UART receive interrupt?". To fix this problem, you need to get the longest time interrupts are ever turned off to be less than the time to transfer one character. So you must either

- reduce the amount of time interrupts are turned off; or
- slow down the communication bit rate to increase the time to transfer one character;

or both.

It's very tempting to write the interrupt routine such that, immediately after it puts a byte into the circular buffer, the same interrupt routine then checks to see if it's a complete packet and, if so, completely parses and handles it. Alas, parsing usually takes so long that any further bytes coming in on the same or any other port are lost. We typically fix this by reducing each interrupt handler (and therefore the time interrupts are disabled while processing this handler) to the minimum possible: grab a byte, stick it in the circular buffer, and return from interrupt. All the packet-parsing and packet-handling stuff executes with interrupts enabled (a minimal sketch of this split follows).
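To make that "minimal ISR, parse everything else in the background" split concrete, here is an illustrative C sketch (my own addition, not from the original answer; the vector and register names are the usual avr-libc ones for an ATmega328P-style part, and the buffer size is arbitrary):

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <stdint.h>

    #define RX_BUF_SIZE 64                 /* power of two, chosen arbitrarily */
    #define RX_BUF_MASK (RX_BUF_SIZE - 1)

    static volatile uint8_t rx_buf[RX_BUF_SIZE];
    static volatile uint8_t rx_head;       /* written only by the ISR       */
    static volatile uint8_t rx_tail;       /* written only by the main loop */

    /* Keep the ISR tiny: grab the byte, store it, return. No parsing here. */
    ISR(USART_RX_vect)                     /* USART0_RX_vect on some devices */
    {
        uint8_t byte = UDR0;               /* reading UDR0 clears the RX flag */
        uint8_t next = (rx_head + 1) & RX_BUF_MASK;
        if (next != rx_tail) {             /* on overflow, drop the byte (and ideally count it) */
            rx_buf[rx_head] = byte;
            rx_head = next;
        }
    }

    /* Called from the main loop, with interrupts enabled; returns -1 when empty. */
    static int16_t rx_get_byte(void)
    {
        if (rx_tail == rx_head)
            return -1;
        uint8_t byte = rx_buf[rx_tail];
        rx_tail = (rx_tail + 1) & RX_BUF_MASK;
        return byte;
    }

rx_get_byte() is what the packet-assembly code, running outside the ISR, would call; all checksum verification and parsing stays out of the interrupt context.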
The simplest way is for the main loop (AKA the "background task") to periodically call a function that checks if there is a complete packet in the circular buffer, and if so, parse and handle it. Other more complex approaches involve second-level interrupt handlers, nested interrupts, etc. However, even when the interrupt handler perfectly receives every byte and correctly puts it into the buffer, sometimes a system can still lose bytes from buffer overflow:

Bytes lost from buffer overflow

Many people write packet handlers that don't do anything until the handler sees a complete packet in the buffer -- then the handler processes the entire packet as a whole. That approach overflows the buffer, and data is lost, if any incoming packet is larger than you are expecting (too big to fit in the buffer) -- single-packet overflow. Even if your buffer is plenty big enough to hold the largest possible packet, sometimes during the time you're processing that packet, the next packet is so large that it overflows the circular queue before your packet-handler gets around to removing the first packet from the queue to make room for future packets -- two-packet overflow.

Is there some way your status debug system could detect and signal if a byte comes in and has to be thrown away because there's no more room in the queue? The simple solution to both of these problems is to increase the size of the buffer to hold at least two maximum-size packets, or somehow change the thing that's sending the packets to send smaller packets -- so two packets will fit in the space you have now.

Sometimes incoming data fills the buffers faster than the packet handler can pull data out of the buffer. Increasing the size of the buffer only briefly delays the problem, and sending smaller packets (but more of them) will probably only make things worse. A real-time system must process incoming data at least as fast as that data can come in; otherwise the overload will only make the processor get further and further behind. Is there some way your status debug system could detect and signal this sort of overflow?

If this overflow only happens rarely, perhaps the simplest "solution" (arguably merely a hack) is to handle it in more-or-less the same way you would handle a (hopefully rare) power glitch or loss-of-signal on a radio connection: when an overflow is detected, have the AVR erase the entire buffer and pretend it never received those bytes. Most modern protocols are carefully designed such that, if any packet is lost, the system will eventually re-sync and correctly handle future packets.

To really fix this problem requires somehow making the "time to process a packet" less than the "time from the end of one packet to the end of the next packet", so you must either

- reduce the bit rate;
- modify the sender to give the AVR more time to process a packet -- perhaps unconditionally send 50 additional "dummy bytes" in the packet preamble, or however many are needed to give the AVR more than enough time to completely process the last packet and get ready for the next packet;
- decrease the time to process a packet;
- or some combination of these.

The wall-clock time to process a packet involves both the time the AVR spends actually processing the packet, and also the time the AVR spends doing "other stuff" such as dealing with all the other I/O ports and interrupt handlers.
Some methods of decreasing the time to actually process a packet are:

- Sometimes it's faster to copy the packet out of the queue into some other buffer for further processing, removing it from the circular queue. It makes the packet handler simpler if the packet starts at the beginning of that other buffer, so key parts of the packet are at a fixed constant offset from the beginning of that buffer. (This has the advantage of making it impossible for the serial interrupt handler, which only writes into the circular queue, to accidentally corrupt that packet after it's been copied to that other buffer.) (This approach also lets you use tested, "optimized", known-good functions that handle numbers represented as ASCII strings of hex digits or decimal digits in consecutive order, which may run faster operating on that linear buffer than "equivalent" functions that also have to deal with the wrap-around split of a circular buffer.) This requires both the queue and the other buffer to each be at least the size of the maximum possible packet.
- Sometimes it's faster to leave the packet in the queue while parsing it and remove it from the queue only after the packet handler is completely done with it.
- Sometimes a pair of "ping-pong" buffers is faster than a circular queue.
- Many systems use only a single buffer large enough for the largest possible valid packet, and completely disable interrupts from that port until the interrupt handler has finished with the last packet and is ready for the next packet.
- Somehow actually do less work per packet.

More general approaches to dealing with situations where "other stuff" is eating so much time that there's not enough time to deal with the packet in the buffer (and which may also help reduce the time to actually process the packet):

- If you're lucky, you can find some algorithmic tweaks to effectively do the same work in fewer cycles.
- Load-shedding: do less important stuff less often, or perhaps don't do it at all in times of heavy load. (As implemented in the Apollo 11 AGC.)
- yield() more often: if your main loop does fair round-robin cycling between "if we have a complete packet from port 1, handle it" and "if we have a complete packet from port 2, handle it", and the packet parser for either one takes so long that the buffer for the other port overflows, it may help to break the packet parser up into shorter pieces and do only a little processing each time through the main loop, giving the other packet parser a chance to deal with packets before its buffer overflows. Perhaps even consider switching to a pre-emptive task scheduler or a full-up RTOS.
- yield() less often: sometimes a processor spends more time in "task switching" or "multitasking overhead" than actually doing anything productive.
- Reduce the time spent processing interrupt handlers (see "Preventing Interrupt Overload"). High-frequency pulses on one interrupt line can pause main-loop tasks indefinitely. It may be necessary to program each low-priority interrupt handler to recognize high-load situations and completely disable its own particular interrupt before re-enabling global interrupts at its return-from-interrupt instruction, and have the main loop recognize low-load situations and re-enable those interrupts.
- Increase the clock rate of the processor.
- Switch to a more powerful processor. A "more powerful" processor is not the same as "a processor with a faster clock rate".
You might be able to find a processor that does the same work in fewer cycles than your particular 8-bit AVR -- perhaps some other AVR with specialized peripherals that do in hardware the work you're currently doing in software, freeing up software cycles for other things; or perhaps a 32-bit processor that can do certain tasks in software in fewer cycles than any 8-bit processor. Occasionally in robotics "one big fast processor" is better than lots of little processors: the amount of code and the communication latency needed to copy a message from one task to another task on the same processor are always much less than for copying that same message from one task to another task on some other processor. Often in robotics it makes things simpler and more "provably correct" to add more processors, perhaps one processor per leg or even one processor per servo. If you have one task per processor, then many of the above potential problems can't ever happen.
{}
{'log_upvote_score': 4, 'links': ['https://robotics.stackexchange.com/questions/99', 'https://robotics.stackexchange.com', 'https://robotics.stackexchange.com/users/145/']}
jdg_76837
stackexchange
llm_judgeable_groundtruth_similarity
935863
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following assembly program which displays the letter 'z' and then exits:

    mov dl, 'z'
    mov ah, 2h
    int 21h
    mov ah, 4Ch
    int 21h

I assembled it with NASM and the resulting file only contains those instructions. (10 bytes) I put 1000 calls to this program in a batch file, and then 1000 calls to echo z, and the echos run about 10x faster. Does anyone know what would be causing this program to run so slowly? Thanks in advance. Now provide the response and nothing else.
"echo" is a command that's built into the command interpreter; no code needs to be loaded to execute the command. Your program, small as it is, needs to be read into memory and initialized every time it is called. Before it even gets to that point, the command interpreter will search the PATH to find the program, which takes a significant amount of time.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/935863', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/115563/']}
jdg_76838
stackexchange
llm_judgeable_groundtruth_similarity
3741989
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What does the question mark in Erlang syntax mean? For example: Json = ?record_to_json(artist, Artist). The full context of the source can be found here . Now provide the response and nothing else.
Erlang uses a question mark to identify macros. For example, consider the code below:

    -ifdef(debug).
    -define(DEBUG(Format, Args), io:format(Format, Args)).
    -else.
    -define(DEBUG(Format, Args), void).
    -endif.

As the documentation says, macros are expanded during compilation. A simple macro ?Const will be replaced with Replacement. This snippet defines a macro called DEBUG that is replaced with a call to print a string if debug is set at compile time. The macro is then used in the following code:

    ?DEBUG("Creating ~p for N = ~p~n", [First, N]),

This statement is expanded and replaced with the appropriate contents if debug is set. Therefore you get to see debug messages only if debug is set.

Update

Thanks to @rvirding: A question mark means to try and expand what follows as a macro call. There is nothing prohibiting using the macro name (atom or variable) as a normal atom or variable. So in the above example you could use DEBUG as a normal variable just as long as you don't prefix it with ?. Confusing, most definitely, but not illegal.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3741989', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/340247/']}
jdg_76839
stackexchange
llm_judgeable_groundtruth_similarity
275677
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A basic limitation of null hypothesis significance testing is that it does not allow a researcher to gather evidence in favor of the null ( Source ) I see this claim repeated in multiple places, but I can't find justification for it. If we perform a large study and we don't find statistically significant evidence against the null hypothesis , isn't that evidence for the null hypothesis? Now provide the response and nothing else.
Failing to reject a null hypothesis is evidence that the null hypothesis is true, but it might not be particularly good evidence, and it certainly doesn't prove the null hypothesis. Let's take a short detour. Consider for a moment the old cliché: Absence of evidence is not evidence of absence. Notwithstanding its popularity, this statement is nonsense. If you look for something and fail to find it, that is absolutely evidence that it isn't there. How good that evidence is depends on how thorough your search was. A cursory search provides weak evidence; an exhaustive search provides strong evidence. Now, back to hypothesis testing. When you run a hypothesis test, you are looking for evidence that the null hypothesis is not true. If you don't find it, then that is certainly evidence that the null hypothesis is true, but how strong is that evidence? To know that, you have to know how likely it is that evidence that would have made you reject the null hypothesis could have eluded your search. That is, what is the probability of a false negative on your test? In the usual notation, that false-negative (Type II error) rate is $\beta$, and its complement, the power of the test, is $1-\beta$. Now, the power of the test, and therefore the false negative rate, usually depends on the size of the effect you are looking for. Large effects are easier to detect than small ones. Therefore, there is no single $\beta$ for an experiment, and therefore no definitive answer to the question of how strong the evidence for the null hypothesis is. Put another way, there is always some effect size small enough that it's not ruled out by the experiment. From here, there are two ways to proceed. Sometimes you know you don't care about an effect size smaller than some threshold. In that case, you probably should reframe your experiment such that the null hypothesis is that the effect is above that threshold, and then test the alternative hypothesis that the effect is below the threshold. Alternatively, you could use your results to set bounds on the believable size of the effect. Your conclusion would be that the size of the effect lies in some interval, with some probability. That approach is just a small step away from a Bayesian treatment, which you might want to learn more about if you frequently find yourself in this sort of situation. There's a nice answer to a related question that touches on evidence of absence testing, which you might find useful.
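In symbols (standard notation, added here only to make the dependence on effect size explicit): $$\operatorname{power}(\delta)\;=\;1-\beta(\delta)\;=\;P\bigl(\text{reject } H_0 \,\big|\, \text{true effect} = \delta\bigr).$$ A non-significant result is strong evidence for $H_0$ only against alternatives $\delta$ at which $\operatorname{power}(\delta)$ is high; as $\delta \to 0$ the power falls toward the significance level $\alpha$, so the experiment says essentially nothing about very small effects.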
{}
{'log_upvote_score': 7, 'links': ['https://stats.stackexchange.com/questions/275677', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/101702/']}
jdg_76840